Compare commits


11 Commits
ix ... jkl-dev

Author SHA1 Message Date
498a948b34 gofmt some files 2023-10-04 01:23:03 -05:00
2679751896 Fix handling of hardlinks and special files
Also, don't attempt xattr ops on special files on BSD-like systems.

TODO: Should have a way to allow restore without special files,
otherwise very cumbersome for a regular user.
2023-10-04 01:12:13 -05:00
66a938af67 Support for hardlinks to symlinks
This is a specialized use case, but it is indeed possible and is useful
when xattrs are attached to the symlink.
2023-10-04 01:12:11 -05:00
015d2200da Fix handling of xattrs with symlinks
Fix Linux, Darwin, and other BSDs (untested) to properly handle
xattrs on symlinks. On Linux we cannot use the f* syscalls
for symlinks because symlinks cannot be opened.
File flags must be handled differently on Darwin and other BSDs because
Darwin lacks an lchflags syscall and instead emulates it in libc.
However, we do have O_SYMLINK on Darwin.
2023-10-04 01:11:20 -05:00
96e7c93a2c Support backup and restore of special files on POSIX style systems
Special files are device nodes and named pipes. The necessity of the
former is clear, the latter is debatable.
In order to preserve backward compatibility, the device number is
encoded in the StartChunk/StartOffset fields of the entry.
2023-10-03 16:26:19 -05:00
f06779659e Don't overwrite symlinks if file already exists 2023-10-03 15:08:47 -05:00
16885eaa61 Support backup and restore of hardlinks
This tracks inode/device from the stat info and creates backward
compatible snapshots that allow preserving hardlinks. Backwards
compatibility is preserved by saving a virtual inode number index in the
Link field of the file entry. Since this field was previously only used
for symlinks, this won't break old versions. Additionally, the entry
data is cloned so restoration with an old version works.

Current limitations are primarily with restore. They include:
- no command line option to prevent hard link restore
- if a file has the immutable or append-only flag, it will be set before
hardlinks are restored, so hardlinking will fail.
- if a partial restore includes a hardlink but not the parent
directories, the hardlink will fail.

These will be solved by grouping the restore of hardlinks together
with their files, prior to applying final metadata.

- if a file has changed and is being rewritten by a restore, hardlinks are
not preserved.
2023-10-03 12:21:46 -05:00
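The inode/device tracking this commit describes can be sketched roughly as below. The type and method names are illustrative only, not the project's actual API: entries sharing a (device, inode) pair are grouped, the first sighting becomes the "root", and later sightings receive the root's virtual index.

```go
package main

import "fmt"

// devIno identifies a hard link group by its stat(2) device and inode numbers.
type devIno struct {
	dev uint64
	ino uint64
}

// linkTracker assigns each hard link group a small virtual index, in the
// spirit of the virtual inode number index described in the commit message.
type linkTracker struct {
	roots map[devIno]int
	count int
}

func newLinkTracker() *linkTracker {
	return &linkTracker{roots: map[devIno]int{}}
}

// observe returns (index, isRoot). The first entry seen for a given
// (dev, ino) pair becomes the root of its group; later entries are
// children that reference the root's index.
func (t *linkTracker) observe(dev, ino uint64) (int, bool) {
	key := devIno{dev, ino}
	if idx, ok := t.roots[key]; ok {
		return idx, false
	}
	idx := t.count
	t.roots[key] = idx
	t.count++
	return idx, true
}

func main() {
	t := newLinkTracker()
	i0, root0 := t.observe(1, 100) // first sighting: becomes a root
	i1, root1 := t.observe(1, 100) // same (dev, ino): child of root 0
	fmt.Println(i0, root0, i1, root1)
}
```

In practice a backup would only consult the tracker for entries with `st_nlink > 1`, since singly-linked files need no group bookkeeping.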
bf2565b5c3 Initial implementation of file/inode flags (Linux, BSD, darwin)
Basic support for BSD and Darwin style chflags (stat flags). Applies
these flags at the end of file restore.
Supports Linux-style ioctl_iflags(2) in a two-step process: flags that
must be applied prior to writes, such as compress and especially no-COW,
are applied immediately upon file open.

The flags format is backwards compatible. An attribute whose name starts
with a null byte stores the flags in the entry attributes table. With
an old version of duplicacy, restore of this attribute should silently
fail (it is effectively ignored).

Fixes xattr restore to use O_NOFOLLOW so attributes are applied to the symlink itself.

TODO: Tests; possibly an option to switch off immutable/append-only flags
prior to restoring an existing file, similar to rsync. Does not apply
attributes or flags to the top-most directory.
2023-10-03 12:15:54 -05:00
c07eef5063 Increase b2 client max file listing count to 10000
Considerable speed improvement when listing large storage.
2023-10-02 12:46:02 -05:00
2fdedcb9dd Fix exclude_by_attribute feature on POSIX
The exclude-by-attribute function is broken on non-Darwin POSIX (Linux and
FreeBSD) because xattrs there must be prefixed with a legal namespace. The old
xattr library implicitly prepended the user namespace to the xattr name, but
the current official Go package does not (which is just as well).

Also fix the test to remove the discordant old xattr dependency and provide
test cases for both darwin and non-darwin POSIX.
2023-10-02 12:41:50 -05:00
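The namespace fix described in this commit amounts to normalizing attribute names before calling into the xattr library. A minimal sketch, assuming the standard Linux namespaces; the function and variable names here are illustrative, not the project's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// The namespaces recognized by Linux/FreeBSD extended attributes.
var xattrNamespaces = []string{"user.", "system.", "trusted.", "security."}

// normalizeXattrName returns an attribute name legal for the given GOOS.
// Darwin accepts free-form names; on other POSIX systems a name without a
// recognized namespace defaults to the user namespace, which is what the
// old xattr library did implicitly.
func normalizeXattrName(goos, name string) string {
	if goos == "darwin" {
		return name
	}
	for _, ns := range xattrNamespaces {
		if strings.HasPrefix(name, ns) {
			return name
		}
	}
	return "user." + name
}

func main() {
	fmt.Println(normalizeXattrName("linux", "duplicacy_exclude"))
	fmt.Println(normalizeXattrName("linux", "trusted.overlay.opaque"))
	fmt.Println(normalizeXattrName("darwin", "com.apple.metadata"))
}
```

With this in place, an unqualified exclusion attribute set by a user lands in the `user.` namespace on Linux/FreeBSD instead of failing with an invalid-name error.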
7bdd1cabd3 Use S3 ListObjectsV2 for listing files
ListObjects has been deprecated since 2016, and ListObjectsV2, with its
explicit pagination tokens, is also more performant for large listings.

This also mitigates an issue with iDrive E2, where the StartAfter/Marker
entry is included in the output, leading to duplicate entries. Currently this
causes an exhaustive prune to delete chunks erroneously flagged as
duplicates, destroying the storage.
2023-10-02 12:41:50 -05:00
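The token-based pagination loop that ListObjectsV2 enables can be sketched abstractly as below. This is not the aws-sdk or duplicacy API — the interface and stub backend are illustrative — but it shows the shape of the loop: the server hands back an opaque continuation token, so the client never re-sends a boundary key the way a buggy StartAfter/Marker implementation can echo back.

```go
package main

import "fmt"

// listPage is one page of results plus the token for the next page.
type listPage struct {
	keys      []string
	nextToken string // empty when the listing is complete
}

type objectLister interface {
	listV2(continuationToken string, maxKeys int) listPage
}

// listAll drains a listing by following continuation tokens until the
// backend reports no further pages.
func listAll(l objectLister, maxKeys int) []string {
	var all []string
	token := ""
	for {
		page := l.listV2(token, maxKeys)
		all = append(all, page.keys...)
		if page.nextToken == "" {
			return all
		}
		token = page.nextToken
	}
}

// sliceLister serves pages out of an in-memory key slice; tokens are
// simply stringified indices into the slice.
type sliceLister struct{ keys []string }

func (s sliceLister) listV2(token string, maxKeys int) listPage {
	start := 0
	if token != "" {
		fmt.Sscanf(token, "%d", &start)
	}
	end := start + maxKeys
	if end >= len(s.keys) {
		return listPage{keys: s.keys[start:]}
	}
	return listPage{keys: s.keys[start:end], nextToken: fmt.Sprint(end)}
}

func main() {
	l := sliceLister{keys: []string{"chunks/a", "chunks/b", "chunks/c", "chunks/d", "chunks/e"}}
	fmt.Println(listAll(l, 2))
}
```

Because the token addresses a position rather than repeating the last key, each object appears exactly once, which is the property the prune logic depends on.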
28 changed files with 877 additions and 3840 deletions


@@ -1,45 +1,3 @@
# Dupluxy
An experimental Duplicacy derivative with improved support for preserving state on UNIX-like systems. Produces snapshots compatible with Duplicacy.
NOTE: This project/repository is neither affiliated with nor endorsed by Duplicacy, Acrosync, or their associated rights holders. This project is open source but is not free/libre software. It is developed and distributed in accordance with the associated LICENSE. Commercial use may require purchasing a license from Acrosync; please contact them if you have any doubts.
## Added Features
* Support for hard links. Hard links are tracked during local file listing. All linked entries reuse the same chunk data, which can save time and space, since hard-linked files only need to be packed once. Hard links are supported to everything (regular files, symlinks, special files) except directories.
* Optional file flags, that is, chflags(1) on BSD/Darwin and ioctl_iflags(2) on Linux. The primary use case is to preserve iflags used by btrfs for no-COW and compression.
* Optional special files: character/block devices, FIFOs, and sockets are preserved along with their associated metadata.
## Assorted Changes
* The S3 backend uses the newer ListObjectsV2 interface, originally because some providers had a bug with the old, obsolete interface, but now also because this API is considerably faster on a number of the providers tested.
* B2 client max listing per request increased to 10,000
* A fix for the exclude_by_attribute feature on non-Darwin POSIX systems (Linux and FreeBSD), which has long been broken upstream.
## Snapshot Format
The generated snapshots are backward compatible with vanilla versions of duplicacy and do not significantly increase the encoded size of metadata. Unfortunately, duplicacy does not have a formal forward-compatible snapshot versioning system, which is not too surprising; this does mean the data encoding is somewhat abusive of the existing format.
### Hard links
The storage differs for regular files vs. every other target. Entry records contain a `Link` string field, which plain duplicacy uses only for symlink targets. When a likely hard-linked file is encountered (`st_nlink > 1`), that entry is marked as a hard link root with the string `"/"` in its `Link` field and appended to an array; the index into this array serves as a link address. Files that hard link to this root encode that index as a base-16 integer in their `Link` field. These entries are written to the snapshot with valid start/end chunk and offset values, and all metadata is cloned, so official Duplicacy restores them as independent regular files with full metadata; it just never recreates the hard links.
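The `Link`-field convention for regular files can be sketched as follows. The helper names are illustrative, not Dupluxy's actual functions; distinguishing a hex index from a genuine symlink target is assumed to be done elsewhere from the entry's type information:

```go
package main

import (
	"fmt"
	"strconv"
)

// linkRootMarker marks the first entry of a hard link group; plain
// duplicacy never produces "/" as a symlink target, so it is unambiguous.
const linkRootMarker = "/"

func encodeLinkRoot() string { return linkRootMarker }

// encodeLinkChild stores the root's array index as a base-16 integer.
func encodeLinkChild(index int) string {
	return strconv.FormatInt(int64(index), 16)
}

// decodeLink reports whether the entry is a hard link root and, if it is
// a child, the index of its root in the root-entry array.
func decodeLink(link string) (isRoot bool, index int, err error) {
	if link == linkRootMarker {
		return true, 0, nil
	}
	n, err := strconv.ParseInt(link, 16, 64)
	return false, int(n), err
}

func main() {
	fmt.Println(encodeLinkRoot(), encodeLinkChild(30))
	isRoot, idx, _ := decodeLink("1e")
	fmt.Println(isRoot, idx)
}
```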
For hard links to symlinks and special files, the `Link` field isn't used. Instead, since these files never have content, the `EndChunk`/`EndOffset` fields are used: a magic number is encoded in `EndChunk` (-9 for root entries, -10 for clone/child entries), and `EndOffset` contains the index into the root entry array.
### Special Files
Duplicacy simply skips special files; Dupluxy does not. The `st_rdev` (device number) for character and block devices is stored with its lower 32 bits in `StartChunk` and its upper 32 bits in `StartOffset`, though no actually supported system uses anything bigger than 32 bits. Both the packing of this quantity and the major/minor number layout are OS specific.
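The device-number packing described above is a straightforward 64-bit split; a minimal sketch (function names are illustrative):

```go
package main

import "fmt"

// packRdev splits st_rdev across the two fields that are otherwise
// meaningless for contentless entries: the lower 32 bits go into
// StartChunk and the upper 32 bits into StartOffset.
func packRdev(rdev uint64) (startChunk, startOffset uint32) {
	return uint32(rdev), uint32(rdev >> 32)
}

// unpackRdev reassembles the original device number on restore.
func unpackRdev(startChunk, startOffset uint32) uint64 {
	return uint64(startChunk) | uint64(startOffset)<<32
}

func main() {
	// On today's systems the upper half is effectively always zero, but the
	// split keeps the full 64-bit quantity round-trippable regardless.
	chunk, offset := packRdev(0x0000000100000802)
	fmt.Printf("StartChunk=%#x StartOffset=%#x round-trip=%#x\n",
		chunk, offset, unpackRdev(chunk, offset))
}
```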
### File flags
File flags are stored in the extended attributes table under a short (2-character) OS-specific key prefixed with a null byte. Duplicacy will try to set these xattrs, but they are effectively ignored because the name appears empty due to the leading null byte.
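The flag-storage trick can be sketched as below. The 2-character OS tag (`"lx"`) and the little-endian encoding are assumptions for illustration, not the exact on-disk format; only the null-byte key prefix is taken from the description above:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// flagsKey builds the attribute name: a null byte followed by a short
// OS-specific tag. Old duplicacy versions see an apparently empty name
// and silently fail to restore the attribute, which is the point.
func flagsKey(osTag string) string {
	return "\x00" + osTag
}

// encodeFlags serializes the 32-bit flag word (little-endian here is an
// assumption, not the documented format).
func encodeFlags(flags uint32) []byte {
	buf := make([]byte, 4)
	binary.LittleEndian.PutUint32(buf, flags)
	return buf
}

func decodeFlags(buf []byte) uint32 {
	return binary.LittleEndian.Uint32(buf)
}

func main() {
	attrs := map[string][]byte{}
	attrs[flagsKey("lx")] = encodeFlags(0x00080000) // hypothetical iflag bits
	for k, v := range attrs {
		fmt.Printf("key=%q flags=%#x\n", k, decodeFlags(v))
	}
}
```

The key never collides with a real xattr because legal xattr names cannot begin with a null byte, so the flags ride along in the existing attribute table without any format change.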
## Motivation
Arguably, system root directories are better preserved in a filesystem image format, but the line blurs for home and data directories, which tend to become magnets for all kinds of data layouts. This gives the option of a convenient, randomly addressable cloud backup with easy partial restore, while also being able to back up a nearly exact replica for use in disaster recovery. Nearly exact: the only metadata not preserved are timestamps other than mtime, and ACLs on BSD-like systems.
Hard links are a pain and might be better off not existing, but in actual use things like git repos and SDKs have a tendency to use them. Often one has no choice but to deal with them, and forgoing their preservation is painful.
File flags are primarily for the use case of btrfs snapshot backups, specifically with regards to compression and no-COW. The implementation applies certain flags immediately on open so that these flags apply to written blocks.
Special files serve a couple of purposes. Backup of FIFOs and sockets primarily preserves metadata, since these files have no useful content and can always be created on the fly. The other purpose is support for backing up overlay2 filesystems: overlay2 uses character-mode dev-nodes for whiteouts, in addition to trusted-namespace xattrs. Dupluxy should be able to faithfully reproduce overlay2 fs layers.
## Caveats/TODO
* Improve handling of preferences. There are preferences to enable and disable most features (though not hardlinks) with reasonable defaults, but little of this is documented. Take a look at the generated `.duplicacy/preferences` file.
* File flags for immutability aren't handled smartly. Specifically, immutable and append-only files break badly with hardlinks, since hardlink creation is deferred until after flags are applied.
* Some corner cases of replacing existing files with hard links might end up breaking links when not doing a full restore. Again, not a pressing use case; for the primary use of disaster recovery of large portions of a volume (or all of it), it works fine.
* Possibly encode ACLs on macOS/FreeBSD. On Linux, the crappy POSIX.1e ACLs that no one likes to use are picked up in the xattrs for free.
# Duplicacy: A lock-free deduplication cloud backup tool
Duplicacy is a new generation cross-platform cloud backup tool based on the idea of [Lock-Free Deduplication](https://github.com/gilbertchen/duplicacy/wiki/Lock-Free-Deduplication).


@@ -22,7 +22,9 @@ import (
 	"github.com/gilbertchen/cli"
-	duplicacy "github.com/gilbertchen/duplicacy/src"
+	"io/ioutil"
+
+	"github.com/gilbertchen/duplicacy/src"
 )

 const (
@@ -314,7 +316,7 @@ func configRepository(context *cli.Context, init bool) {
 	// write real path into .duplicacy file inside repository
 	duplicacyFileName := path.Join(repository, duplicacy.DUPLICACY_FILE)
 	d1 := []byte(preferencePath)
-	err = os.WriteFile(duplicacyFileName, d1, 0644)
+	err = ioutil.WriteFile(duplicacyFileName, d1, 0644)
 	if err != nil {
 		duplicacy.LOG_ERROR("REPOSITORY_PATH", "Failed to write %s file inside repository %v", duplicacyFileName, err)
 		return
@@ -703,7 +705,7 @@ func changePassword(context *cli.Context) {
 	}
 	configPath := path.Join(duplicacy.GetDuplicacyPreferencePath(), "config")
-	err = os.WriteFile(configPath, description, 0600)
+	err = ioutil.WriteFile(configPath, description, 0600)
 	if err != nil {
 		duplicacy.LOG_ERROR("CONFIG_SAVE", "Failed to save the old config to %s: %v", configPath, err)
 		return
@@ -787,17 +789,7 @@ func backupRepository(context *cli.Context) {
 	uploadRateLimit := context.Int("limit-rate")
 	enumOnly := context.Bool("enum-only")
 	storage.SetRateLimits(0, uploadRateLimit)
-	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password,
-		&duplicacy.BackupManagerOptions{
-			NobackupFile:       preference.NobackupFile,
-			FiltersFile:        preference.FiltersFile,
-			ExcludeByAttribute: preference.ExcludeByAttribute,
-			ExcludeXattrs:      preference.ExcludeXattrs,
-			NormalizeXattrs:    preference.NormalizeXattrs,
-			IncludeFileFlags:   preference.IncludeFileFlags,
-			IncludeSpecials:    preference.IncludeSpecials,
-			FileFlagsMask:      uint32(preference.FileFlagsMask),
-		})
+	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, preference.NobackupFile, preference.FiltersFile, preference.ExcludeByAttribute)
 	duplicacy.SavePassword(*preference, "password", password)
 	backupManager.SetupSnapshotCache(preference.Name)
@@ -858,6 +850,14 @@ func restoreRepository(context *cli.Context) {
 		password = duplicacy.GetPassword(*preference, "password", "Enter storage password:", false, false)
 	}

+	quickMode := !context.Bool("hash")
+	overwrite := context.Bool("overwrite")
+	deleteMode := context.Bool("delete")
+	setOwner := !context.Bool("ignore-owner")
+	showStatistics := context.Bool("stats")
+	persist := context.Bool("persist")
+
 	var patterns []string
 	for _, pattern := range context.Args() {
@@ -881,38 +881,13 @@ func restoreRepository(context *cli.Context) {
 	duplicacy.LOG_INFO("SNAPSHOT_FILTER", "Loaded %d include/exclude pattern(s)", len(patterns))
 	storage.SetRateLimits(context.Int("limit-rate"), 0)
-	excludeOwner := preference.ExcludeOwner
-	// TODO: for backward compat, eventually make them all overridable?
-	if context.IsSet("ignore-owner") {
-		excludeOwner = context.Bool("ignore-owner")
-	}
-	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password,
-		&duplicacy.BackupManagerOptions{
-			NobackupFile:       preference.NobackupFile,
-			FiltersFile:        preference.FiltersFile,
-			ExcludeByAttribute: preference.ExcludeByAttribute,
-			SetOwner:           excludeOwner,
-			ExcludeXattrs:      preference.ExcludeXattrs,
-			NormalizeXattrs:    preference.NormalizeXattrs,
-			IncludeSpecials:    preference.IncludeSpecials,
-			FileFlagsMask:      uint32(preference.FileFlagsMask),
-		})
+	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, preference.NobackupFile, preference.FiltersFile, preference.ExcludeByAttribute)
 	duplicacy.SavePassword(*preference, "password", password)
 	loadRSAPrivateKey(context.String("key"), context.String("key-passphrase"), preference, backupManager, false)
 	backupManager.SetupSnapshotCache(preference.Name)
-	failed := backupManager.Restore(repository, revision, &duplicacy.RestoreOptions{
-		InPlace:        true,
-		QuickMode:      !context.Bool("hash"),
-		Overwrite:      context.Bool("overwrite"),
-		DeleteMode:     context.Bool("delete"),
-		ShowStatistics: context.Bool("stats"),
-		AllowFailures:  context.Bool("persist"),
-	})
+	failed := backupManager.Restore(repository, revision, true, quickMode, threads, overwrite, deleteMode, setOwner, showStatistics, patterns, persist)
 	if failed > 0 {
 		duplicacy.LOG_ERROR("RESTORE_FAIL", "%d file(s) were not restored correctly", failed)
 		return
@@ -952,8 +927,7 @@ func listSnapshots(context *cli.Context) {
 	tag := context.String("t")
 	revisions := getRevisions(context)
-	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password,
-		&duplicacy.BackupManagerOptions{ExcludeByAttribute: preference.ExcludeByAttribute})
+	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, "", "", preference.ExcludeByAttribute)
 	duplicacy.SavePassword(*preference, "password", password)
 	id := preference.SnapshotID
@@ -1009,7 +983,7 @@ func checkSnapshots(context *cli.Context) {
 	tag := context.String("t")
 	revisions := getRevisions(context)
-	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, nil)
+	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, "", "", false)
 	duplicacy.SavePassword(*preference, "password", password)
 	loadRSAPrivateKey(context.String("key"), context.String("key-passphrase"), preference, backupManager, false)
@@ -1069,7 +1043,8 @@ func printFile(context *cli.Context) {
 		snapshotID = context.String("id")
 	}
-	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, nil)
+
+	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, "", "", false)
 	duplicacy.SavePassword(*preference, "password", password)
 	loadRSAPrivateKey(context.String("key"), context.String("key-passphrase"), preference, backupManager, false)
@@ -1127,14 +1102,13 @@ func diff(context *cli.Context) {
 	}
 	compareByHash := context.Bool("hash")
-	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, nil)
+	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, "", "", false)
 	duplicacy.SavePassword(*preference, "password", password)
 	loadRSAPrivateKey(context.String("key"), context.String("key-passphrase"), preference, backupManager, false)
 	backupManager.SetupSnapshotCache(preference.Name)
-	backupManager.SnapshotManager.Diff(repository, snapshotID, revisions, path, compareByHash,
-		duplicacy.NewListFilesOptions(preference))
+	backupManager.SnapshotManager.Diff(repository, snapshotID, revisions, path, compareByHash, preference.NobackupFile, preference.FiltersFile, preference.ExcludeByAttribute)
 	runScript(context, preference.Name, "post")
 }
@@ -1173,7 +1147,7 @@ func showHistory(context *cli.Context) {
 	revisions := getRevisions(context)
 	showLocalHash := context.Bool("hash")
-	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, nil)
+	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, "", "", false)
 	duplicacy.SavePassword(*preference, "password", password)
 	backupManager.SetupSnapshotCache(preference.Name)
@@ -1236,7 +1210,7 @@ func pruneSnapshots(context *cli.Context) {
 		os.Exit(ArgumentExitCode)
 	}
-	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, nil)
+	backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, "", "", false)
 	duplicacy.SavePassword(*preference, "password", password)
 	backupManager.SetupSnapshotCache(preference.Name)
@@ -1281,7 +1255,7 @@ func copySnapshots(context *cli.Context) {
 		sourcePassword = duplicacy.GetPassword(*source, "password", "Enter source storage password:", false, false)
 	}
-	sourceManager := duplicacy.CreateBackupManager(source.SnapshotID, sourceStorage, repository, sourcePassword, nil)
+	sourceManager := duplicacy.CreateBackupManager(source.SnapshotID, sourceStorage, repository, sourcePassword, "", "", false)
 	sourceManager.SetupSnapshotCache(source.Name)
 	duplicacy.SavePassword(*source, "password", sourcePassword)
@@ -1316,7 +1290,7 @@ func copySnapshots(context *cli.Context) {
 	destinationStorage.SetRateLimits(0, context.Int("upload-limit-rate"))
 	destinationManager := duplicacy.CreateBackupManager(destination.SnapshotID, destinationStorage, repository,
-		destinationPassword, nil)
+		destinationPassword, "", "", false)
 	duplicacy.SavePassword(*destination, "password", destinationPassword)
 	destinationManager.SetupSnapshotCache(destination.Name)
@@ -1441,7 +1415,7 @@ func benchmark(context *cli.Context) {
 	if storage == nil {
 		return
 	}
-	duplicacy.Benchmark(repository, storage, int64(fileSize)*1024*1024, chunkSize*1024*1024, chunkCount, uploadThreads, downloadThreads)
+	duplicacy.Benchmark(repository, storage, int64(fileSize) * 1024 * 1024, chunkSize * 1024 * 1024, chunkCount, uploadThreads, downloadThreads)
 }

 func main() {
@@ -1480,8 +1454,8 @@ func main() {
 			Argument: "<level>",
 		},
 		cli.BoolFlag{
 			Name: "zstd",
 			Usage: "short for -zstd default",
 		},
 		cli.IntFlag{
 			Name: "iterations",
@@ -1556,8 +1530,8 @@ func main() {
 			Argument: "<level>",
 		},
 		cli.BoolFlag{
 			Name: "zstd",
 			Usage: "short for -zstd default",
 		},
 		cli.BoolFlag{
 			Name: "vss",
@@ -1590,6 +1564,7 @@ func main() {
 			Usage: "the maximum number of entries kept in memory (defaults to 1M)",
 			Argument: "<number>",
 		},
 	},
 	Usage: "Save a snapshot of the repository to the storage",
 	ArgsUsage: " ",
@@ -1649,7 +1624,7 @@ func main() {
 		cli.BoolFlag{
 			Name: "persist",
 			Usage: "continue processing despite chunk errors or existing files (without -overwrite), reporting any affected files",
 		},
 		cli.StringFlag{
 			Name: "key-passphrase",
 			Usage: "the passphrase to decrypt the RSA private key",
@@ -2007,8 +1982,8 @@ func main() {
 			Argument: "<level>",
 		},
 		cli.BoolFlag{
 			Name: "zstd",
 			Usage: "short for -zstd default",
 		},
 		cli.IntFlag{
 			Name: "iterations",
@@ -2273,8 +2248,8 @@ func main() {
 			Usage: "add a comment to identify the process",
 		},
 		cli.StringSliceFlag{
 			Name: "suppress, s",
 			Usage: "suppress logs with the specified id",
 			Argument: "<id>",
 		},
 		cli.BoolFlag{
@@ -2287,7 +2262,7 @@ func main() {
 	app.Name = "duplicacy"
 	app.HelpName = "duplicacy"
 	app.Usage = "A new generation cloud backup tool based on lock-free deduplication"
-	app.Version = "3.2.3" + " (" + GitCommit + ")"
+	app.Version = "3.2.1" + " (" + GitCommit + ")"
 	// Exit with code 2 if an invalid command is provided
 	app.CommandNotFound = func(context *cli.Context, command string) {

File diff suppressed because it is too large

go.mod (14 changed lines)

@@ -1,6 +1,6 @@
-module github.com/dupluxy/dupluxy
+module github.com/gilbertchen/duplicacy

-go 1.20
+go 1.19

 require (
 	cloud.google.com/go v0.38.0
@@ -22,14 +22,13 @@ require (
 	github.com/minio/highwayhash v1.0.2
 	github.com/ncw/swift/v2 v2.0.1
 	github.com/pkg/sftp v1.11.0
-	github.com/pkg/xattr v0.4.9
+	github.com/pkg/xattr v0.4.1
 	github.com/vmihailenco/msgpack v4.0.4+incompatible
 	golang.org/x/crypto v0.12.0
 	golang.org/x/net v0.10.0
 	golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d
-	golang.org/x/sys v0.11.0
 	google.golang.org/api v0.21.0
-	storj.io/uplink v1.12.1
+	storj.io/uplink v1.12.0
 )
 require (
@@ -56,7 +55,7 @@ require (
 	github.com/pkg/errors v0.9.1 // indirect
 	github.com/satori/go.uuid v1.2.0 // indirect
 	github.com/segmentio/go-env v1.1.0 // indirect
-	github.com/spacemonkeygo/monkit/v3 v3.0.22 // indirect
+	github.com/spacemonkeygo/monkit/v3 v3.0.20-0.20230227152157-d00b379de191 // indirect
 	github.com/vaughan0/go-ini v0.0.0-20130923145212-a98ad7ee00ec // indirect
 	github.com/vivint/infectious v0.0.0-20200605153912-25a574ae18a3 // indirect
 	github.com/zeebo/blake3 v0.2.3 // indirect
@@ -64,6 +63,7 @@ require (
 	go.opencensus.io v0.22.3 // indirect
 	golang.org/x/mod v0.10.0 // indirect
 	golang.org/x/sync v0.3.0 // indirect
+	golang.org/x/sys v0.11.0 // indirect
 	golang.org/x/term v0.11.0 // indirect
 	golang.org/x/text v0.12.0 // indirect
 	golang.org/x/tools v0.9.1 // indirect
@@ -71,7 +71,7 @@ require (
 	google.golang.org/genproto v0.0.0-20200409111301-baae70f3302d // indirect
 	google.golang.org/grpc v1.28.1 // indirect
 	google.golang.org/protobuf v1.28.1 // indirect
-	storj.io/common v0.0.0-20230920095429-0ce0a575e6f8 // indirect
+	storj.io/common v0.0.0-20230907123639-5fd0608fd947 // indirect
 	storj.io/drpc v0.0.33 // indirect
 	storj.io/picobuf v0.0.2-0.20230906122608-c4ba17033c6c // indirect
 )

go.sum (16 changed lines)

@@ -128,8 +128,6 @@ github.com/pkg/sftp v1.11.0 h1:4Zv0OGbpkg4yNuUtH0s8rvoYxRCNyT29NVUo6pgPmxI=
 github.com/pkg/sftp v1.11.0/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
 github.com/pkg/xattr v0.4.1 h1:dhclzL6EqOXNaPDWqoeb9tIxATfBSmjqL0b4DpSjwRw=
 github.com/pkg/xattr v0.4.1/go.mod h1:W2cGD0TBEus7MkUgv0tNZ9JutLtVO3cXu+IBRuHqnFs=
-github.com/pkg/xattr v0.4.9 h1:5883YPCtkSd8LFbs13nXplj9g9tlrwoJRjgpgMu1/fE=
-github.com/pkg/xattr v0.4.9/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -140,8 +138,8 @@ github.com/satori/go.uuid v1.2.0 h1:0uYX9dsZ2yD7q2RtLRtPSdGDWzjeM3TbMJP9utgA0ww=
 github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
 github.com/segmentio/go-env v1.1.0 h1:AGJ7OnCx9M5NWpkYPGYELS6III/pFSnAs1GvKWStiEo=
 github.com/segmentio/go-env v1.1.0/go.mod h1:pEKO2ieHe8zF098OMaAHw21SajMuONlnI/vJNB3pB7I=
-github.com/spacemonkeygo/monkit/v3 v3.0.22 h1:4/g8IVItBDKLdVnqrdHZrCVPpIrwDBzl1jrV0IHQHDU=
-github.com/spacemonkeygo/monkit/v3 v3.0.22/go.mod h1:XkZYGzknZwkD0AKUnZaSXhRiVTLCkq7CWVa3IsE72gA=
+github.com/spacemonkeygo/monkit/v3 v3.0.20-0.20230227152157-d00b379de191 h1:QVUfVxilbPp8fBJ7701LL/WEUjBSiSxbs9LUaCIe5qM=
+github.com/spacemonkeygo/monkit/v3 v3.0.20-0.20230227152157-d00b379de191/go.mod h1:kj1ViJhlyADa7DiA4xVnTuPA46lFKbM7mxQTrXCuJP4=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
 github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
@@ -195,6 +193,7 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn
 golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
 golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
 golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
@@ -227,7 +226,6 @@ golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0 h1:eG7RXZHdqOJ1i+0lgLgCpSXAp6M3LYlAo6osgSi0xOM= golang.org/x/sys v0.11.0 h1:eG7RXZHdqOJ1i+0lgLgCpSXAp6M3LYlAo6osgSi0xOM=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
@@ -293,11 +291,11 @@ honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWh
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4= rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
storj.io/common v0.0.0-20230920095429-0ce0a575e6f8 h1:i+bWPhVnNL6z/TLW3vDZytB6/0bsvJM0a1GhLCxrlxQ= storj.io/common v0.0.0-20230907123639-5fd0608fd947 h1:X75A5hX1nFjQH8GIvei4T1LNQTLa++bsDKMxXxfPHE8=
storj.io/common v0.0.0-20230920095429-0ce0a575e6f8/go.mod h1:ZmeGPzRb2sm705Nwt/WwuH3e6mliShfvvoUNy1bb9v4= storj.io/common v0.0.0-20230907123639-5fd0608fd947/go.mod h1:FMVOxf2+SgsmfjxwFCM1MZCKwXis4U7l22M/6nIhIas=
storj.io/drpc v0.0.33 h1:yCGZ26r66ZdMP0IcTYsj7WDAUIIjzXk6DJhbhvt9FHI= storj.io/drpc v0.0.33 h1:yCGZ26r66ZdMP0IcTYsj7WDAUIIjzXk6DJhbhvt9FHI=
storj.io/drpc v0.0.33/go.mod h1:vR804UNzhBa49NOJ6HeLjd2H3MakC1j5Gv8bsOQT6N4= storj.io/drpc v0.0.33/go.mod h1:vR804UNzhBa49NOJ6HeLjd2H3MakC1j5Gv8bsOQT6N4=
storj.io/picobuf v0.0.2-0.20230906122608-c4ba17033c6c h1:or/DtG5uaZpzimL61ahlgAA+MTYn/U3txz4fe+XBFUg= storj.io/picobuf v0.0.2-0.20230906122608-c4ba17033c6c h1:or/DtG5uaZpzimL61ahlgAA+MTYn/U3txz4fe+XBFUg=
storj.io/picobuf v0.0.2-0.20230906122608-c4ba17033c6c/go.mod h1:JCuc3C0gzCJHQ4J6SOx/Yjg+QTpX0D+Fvs5H46FETCk= storj.io/picobuf v0.0.2-0.20230906122608-c4ba17033c6c/go.mod h1:JCuc3C0gzCJHQ4J6SOx/Yjg+QTpX0D+Fvs5H46FETCk=
storj.io/uplink v1.12.1 h1:bDc2dI6Q7EXcvPJLZuH9jIOTIf2oKxvW3xKEA+Y5EI0= storj.io/uplink v1.12.0 h1:rTODjbKRo/lzz5Hp0isjoRfqDcH7kJg6aujD2M9v9Ro=
storj.io/uplink v1.12.1/go.mod h1:1+czctHG25pMzcUp4Mds6QnoJ7LvbgYA5d1qlpFFexg= storj.io/uplink v1.12.0/go.mod h1:nMAuoWi5AHio+8NQa33VRzCiRg0B0UhYKuT0a0CdXOg=


@@ -7,9 +7,9 @@ package duplicacy
 import (
     "bytes"
     "encoding/hex"
+    "encoding/json"
     "fmt"
     "io"
-    "math"
     "os"
     "path"
     "path/filepath"
@@ -26,6 +26,7 @@ import (
 // BackupManager performs the two major operations, backup and restore, and passes other operations, mostly related to
 // snapshot management, to the snapshot manager.
 type BackupManager struct {
+
     snapshotID string // Unique id for each repository
     storage Storage // the storage for storing backups
@@ -33,35 +34,15 @@ type BackupManager struct {
     SnapshotManager *SnapshotManager // the snapshot manager
     snapshotCache *FileStorage // for copies of chunks needed by snapshots
     config *Config // contains a number of options
-    options BackupManagerOptions
+    nobackupFile string // don't backup directory when this file name is found
+    filtersFile string // the path to the filters file
+    excludeByAttribute bool // don't backup file based on file attribute
     cachePath string
 }
 
-type BackupManagerOptions struct {
-    NobackupFile string // don't backup directory when this file name is found
-    FiltersFile string // the path to the filters file
-    ExcludeByAttribute bool // don't backup file based on file attribute
-    SetOwner bool
-    ExcludeXattrs bool
-    NormalizeXattrs bool
-    IncludeFileFlags bool
-    IncludeSpecials bool
-    FileFlagsMask uint32
-}
-
-type RestoreOptions struct {
-    Threads int
-    Patterns []string
-    InPlace bool
-    QuickMode bool
-    Overwrite bool
-    DeleteMode bool
-    ShowStatistics bool
-    AllowFailures bool
-}
-
 func (manager *BackupManager) SetDryRun(dryRun bool) {
     manager.config.dryRun = dryRun
 }
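The hunk above trades a nilable pointer-to-options struct for explicit fields and positional parameters. The two configuration styles can be sketched in isolation as follows; every name here is illustrative, not an identifier from this repository:

```go
package main

import "fmt"

// managerOptions mimics the removed BackupManagerOptions pattern: a struct
// whose zero value supplies all defaults, passed by (possibly nil) pointer.
type managerOptions struct {
	NobackupFile       string
	FiltersFile        string
	ExcludeByAttribute bool
}

// newWithOptions accepts a nilable options struct, as the removed code did.
func newWithOptions(opts *managerOptions) string {
	if opts == nil {
		opts = &managerOptions{} // nil means "all defaults"
	}
	return fmt.Sprintf("%s|%s|%v", opts.NobackupFile, opts.FiltersFile, opts.ExcludeByAttribute)
}

// newWithParams spells every setting out positionally, as the restored code does.
func newWithParams(nobackupFile, filtersFile string, excludeByAttribute bool) string {
	return fmt.Sprintf("%s|%s|%v", nobackupFile, filtersFile, excludeByAttribute)
}

func main() {
	fmt.Println(newWithOptions(&managerOptions{NobackupFile: ".nobackup"}) ==
		newWithParams(".nobackup", "", false))
}
```

The struct form scales better as flags accumulate (the removed version had grown to nine fields); the positional form keeps call sites explicit and matches upstream's style.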
@@ -70,19 +51,10 @@ func (manager *BackupManager) SetCompressionLevel(level int) {
     manager.config.CompressionLevel = level
 }
 
-func (manager *BackupManager) Config() *Config {
-    return manager.config
-}
-
-func (manager *BackupManager) SnapshotCache() *FileStorage {
-    return manager.snapshotCache
-}
-
 // CreateBackupManager creates a backup manager using the specified 'storage'. 'snapshotID' is a unique id to
 // identify snapshots created for this repository. 'top' is the top directory of the repository. 'password' is the
 // master key which can be nil if encryption is not enabled.
-func CreateBackupManager(snapshotID string, storage Storage, top string, password string,
-    options *BackupManagerOptions) *BackupManager {
+func CreateBackupManager(snapshotID string, storage Storage, top string, password string, nobackupFile string, filtersFile string, excludeByAttribute bool) *BackupManager {
 
     config, _, err := DownloadConfig(storage, password)
     if err != nil {
@@ -102,10 +74,13 @@ func CreateBackupManager(snapshotID string, storage Storage, top string, passwor
         SnapshotManager: snapshotManager,
         config: config,
-    }
-    if options != nil {
-        backupManager.options = *options
+
+        nobackupFile: nobackupFile,
+        filtersFile: filtersFile,
+        excludeByAttribute: excludeByAttribute,
     }
 
     if IsDebugging() {
@@ -156,7 +131,8 @@ func (manager *BackupManager) SetupSnapshotCache(storageName string) bool {
 func (manager *BackupManager) Backup(top string, quickMode bool, threads int, tag string,
     showStatistics bool, shadowCopy bool, shadowCopyTimeout int, enumOnly bool, metadataChunkSize int, maximumInMemoryEntries int) bool {
 
-    top, err := filepath.Abs(top)
+    var err error
+    top, err = filepath.Abs(top)
     if err != nil {
         LOG_ERROR("REPOSITORY_ERR", "Failed to obtain the absolute path of the repository: %v", err)
         return false
@@ -177,7 +153,7 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
         LOG_INFO("BACKUP_KEY", "RSA encryption is enabled")
     }
 
-    if manager.options.ExcludeByAttribute {
+    if manager.excludeByAttribute {
         LOG_INFO("BACKUP_EXCLUDE", "Exclude files with no-backup attributes")
     }
@@ -262,16 +238,7 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
     go func() {
         // List local files
         defer CatchLogException()
-        localSnapshot.ListLocalFiles(shadowTop, localListingChannel, &skippedDirectories, &skippedFiles,
-            &ListFilesOptions{
-                NoBackupFile:       manager.options.NobackupFile,
-                FiltersFile:        manager.options.FiltersFile,
-                ExcludeByAttribute: manager.options.ExcludeByAttribute,
-                ExcludeXattrs:      manager.options.ExcludeXattrs,
-                NormalizeXattr:     manager.options.NormalizeXattrs,
-                IncludeFileFlags:   manager.options.IncludeFileFlags,
-                IncludeSpecials:    manager.options.IncludeSpecials,
-            })
+        localSnapshot.ListLocalFiles(shadowTop, manager.nobackupFile, manager.filtersFile, manager.excludeByAttribute, localListingChannel, &skippedDirectories, &skippedFiles)
     }()
 
     go func() {
@@ -656,34 +623,21 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
 }
 
 // Restore downloads the specified snapshot, compares it with what's on the repository, and then downloads
-// files that are different.'QuickMode' will bypass files with unchanged sizes and timestamps. 'DeleteMode' will
-// remove local files that don't exist in the snapshot. 'Patterns' is used to include/exclude certain files.
-func (manager *BackupManager) Restore(top string, revision int, options *RestoreOptions) int {
-    if options.Threads < 1 {
-        options.Threads = 1
-    }
-    patterns := options.Patterns
-    overwrite := options.Overwrite
-    allowFailures := options.AllowFailures
-    metadataOptions := RestoreMetadataOptions{
-        SetOwner:         manager.options.SetOwner,
-        ExcludeXattrs:    manager.options.ExcludeXattrs,
-        NormalizeXattrs:  manager.options.NormalizeXattrs,
-        IncludeFileFlags: manager.options.IncludeFileFlags,
-        FileFlagsMask:    manager.options.FileFlagsMask,
-    }
+// files that are different. 'base' is a directory that contains files at a different revision which can
+// serve as a local cache to avoid download chunks available locally. It is perfectly ok for 'base' to be
+// the same as 'top'. 'quickMode' will bypass files with unchanged sizes and timestamps. 'deleteMode' will
+// remove local files that don't exist in the snapshot. 'patterns' is used to include/exclude certain files.
+func (manager *BackupManager) Restore(top string, revision int, inPlace bool, quickMode bool, threads int, overwrite bool,
+    deleteMode bool, setOwner bool, showStatistics bool, patterns []string, allowFailures bool) int {
 
     startTime := time.Now().Unix()
 
     LOG_DEBUG("RESTORE_PARAMETERS", "top: %s, revision: %d, in-place: %t, quick: %t, delete: %t",
-        top, revision, options.InPlace, options.QuickMode, options.DeleteMode)
+        top, revision, inPlace, quickMode, deleteMode)
 
     if !strings.HasPrefix(GetDuplicacyPreferencePath(), top) {
         LOG_INFO("RESTORE_INPLACE", "Forcing in-place mode with a non-default preference path")
-        options.InPlace = true
+        inPlace = true
     }
 
     if len(patterns) > 0 {
@@ -725,23 +679,13 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
     localListingChannel := make(chan *Entry)
     remoteListingChannel := make(chan *Entry)
 
-    chunkOperator := CreateChunkOperator(manager.config, manager.storage, manager.snapshotCache, options.ShowStatistics,
-        false, options.Threads, allowFailures)
+    chunkOperator := CreateChunkOperator(manager.config, manager.storage, manager.snapshotCache, showStatistics, false, threads, allowFailures)
 
     LOG_INFO("RESTORE_INDEXING", "Indexing %s", top)
     go func() {
         // List local files
         defer CatchLogException()
-        localSnapshot.ListLocalFiles(top, localListingChannel, nil, nil,
-            &ListFilesOptions{
-                NoBackupFile:       manager.options.NobackupFile,
-                FiltersFile:        manager.options.FiltersFile,
-                ExcludeByAttribute: manager.options.ExcludeByAttribute,
-                ExcludeXattrs:      manager.options.ExcludeXattrs,
-                NormalizeXattr:     manager.options.NormalizeXattrs,
-                IncludeFileFlags:   manager.options.IncludeFileFlags,
-                IncludeSpecials:    manager.options.IncludeSpecials,
-            })
+        localSnapshot.ListLocalFiles(top, manager.nobackupFile, manager.filtersFile, manager.excludeByAttribute, localListingChannel, nil, nil)
     }()
 
     remoteSnapshot := manager.SnapshotManager.DownloadSnapshot(manager.snapshotID, revision)
@@ -766,23 +710,23 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
     var hardLinkTable []hardLinkEntry
     var hardLinks []*Entry
 
-    restoreHardLink := func(entry *Entry, fullPath string) bool {
-        if entry.IsHardLinkRoot() {
+    restoreHardlink := func(entry *Entry, fullPath string) bool {
+        if entry.IsHardlinkRoot() {
             hardLinkTable[len(hardLinkTable)-1].willExist = true
-        } else if entry.IsHardLinkChild() {
-            i, err := entry.GetHardLinkId()
+        } else if entry.IsHardlinkedFrom() {
+            i, err := entry.GetHardlinkId()
             if err != nil {
-                LOG_ERROR("RESTORE_HARDLINK", "Decode error for hard link entry %s: %v", entry.Path, err)
+                LOG_ERROR("RESTORE_HARDLINK", "Decode error for hardlinked entry %s, %v", entry.Path, err)
                 return false
             }
             if !hardLinkTable[i].willExist {
                 hardLinkTable[i] = hardLinkEntry{entry, true}
             } else {
                 sourcePath := joinPath(top, hardLinkTable[i].entry.Path)
+                LOG_INFO("RESTORE_HARDLINK", "Hard linking %s to %s", fullPath, sourcePath)
                 if err := MakeHardlink(sourcePath, fullPath); err != nil {
-                    LOG_ERROR("RESTORE_HARDLINK", "Failed to create hard link %s to %s: %v", fullPath, sourcePath, err)
+                    LOG_ERROR("RESTORE_HARDLINK", "Failed to create hard link %s to %s %v", fullPath, sourcePath, err)
                 }
+                LOG_TRACE("DOWNLOAD_DONE", "Hard linked %s to %s", entry.Path, hardLinkTable[i].entry.Path)
                 return true
             }
         }
@@ -791,7 +735,7 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
     for remoteEntry := range remoteListingChannel {
-        if remoteEntry.IsHardLinkRoot() {
+        if remoteEntry.IsHardlinkRoot() {
             hardLinkTable = append(hardLinkTable, hardLinkEntry{remoteEntry, false})
         }
@@ -820,7 +764,7 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
         }
 
         if compareResult == 0 {
-            if options.QuickMode && localEntry.IsFile() && localEntry.IsSameAs(remoteEntry) {
+            if quickMode && localEntry.IsFile() && localEntry.IsSameAs(remoteEntry) {
                 LOG_TRACE("RESTORE_SKIP", "File %s unchanged (by size and timestamp)", localEntry.Path)
                 skippedFileSize += localEntry.Size
                 skippedFileCount++
@@ -837,8 +781,8 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
             if stat.Mode()&os.ModeSymlink != 0 {
                 isRegular, link, err := Readlink(fullPath)
                 if err == nil && link == remoteEntry.Link && !isRegular {
-                    remoteEntry.RestoreMetadata(fullPath, stat, metadataOptions)
-                    if remoteEntry.IsHardLinkRoot() {
+                    remoteEntry.RestoreMetadata(fullPath, nil, setOwner)
+                    if remoteEntry.IsHardlinkRoot() {
                         hardLinkTable[len(hardLinkTable)-1].willExist = true
                     }
                     continue
@@ -854,7 +798,7 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
                 os.Remove(fullPath)
             }
 
-            if restoreHardLink(remoteEntry, fullPath) {
+            if restoreHardlink(remoteEntry, fullPath) {
                 continue
             }
@@ -862,9 +806,9 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
                 LOG_ERROR("RESTORE_SYMLINK", "Can't create symlink %s: %v", remoteEntry.Path, err)
                 return 0
             }
-            remoteEntry.RestoreMetadata(fullPath, nil, metadataOptions)
+            remoteEntry.RestoreMetadata(fullPath, nil, setOwner)
             LOG_TRACE("DOWNLOAD_DONE", "Symlink %s updated", remoteEntry.Path)
         } else if remoteEntry.IsDir() {
             stat, err := os.Stat(fullPath)
@@ -883,21 +827,15 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
                     return 0
                 }
             }
-            if metadataOptions.IncludeFileFlags {
-                err = remoteEntry.RestoreEarlyDirFlags(fullPath, manager.options.FileFlagsMask)
-                if err != nil {
-                    LOG_WARN("DOWNLOAD_FLAGS", "Failed to set early file flags on %s: %v", fullPath, err)
-                }
-            }
+            remoteEntry.RestoreEarlyDirFlags(fullPath)
             directoryEntries = append(directoryEntries, remoteEntry)
-        } else if remoteEntry.IsSpecial() && manager.options.IncludeSpecials {
+        } else if remoteEntry.IsSpecial() {
             if stat, _ := os.Lstat(fullPath); stat != nil {
                 if remoteEntry.IsSameSpecial(stat) {
-                    remoteEntry.RestoreMetadata(fullPath, nil, metadataOptions)
-                    if remoteEntry.IsHardLinkRoot() {
+                    remoteEntry.RestoreMetadata(fullPath, nil, setOwner)
+                    if remoteEntry.IsHardlinkRoot() {
                         hardLinkTable[len(hardLinkTable)-1].willExist = true
                     }
-                    continue
                 }
 
                 if !overwrite {
                     LOG_WERROR(allowFailures, "DOWNLOAD_OVERWRITE",
@@ -907,24 +845,22 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
                 os.Remove(fullPath)
             }
 
-            if restoreHardLink(remoteEntry, fullPath) {
+            if restoreHardlink(remoteEntry, fullPath) {
                 continue
             }
 
             if err := remoteEntry.RestoreSpecial(fullPath); err != nil {
-                LOG_ERROR("RESTORE_SPECIAL", "Failed to restore special file %s: %v", fullPath, err)
+                LOG_ERROR("RESTORE_SPECIAL", "Unable to restore special file %s: %v", remoteEntry.Path, err)
                 return 0
             }
-            remoteEntry.RestoreMetadata(fullPath, nil, metadataOptions)
+            remoteEntry.RestoreMetadata(fullPath, nil, setOwner)
+            LOG_TRACE("DOWNLOAD_DONE", "Special %s %s restored", remoteEntry.Path, remoteEntry.FmtSpecial())
         } else {
-            if remoteEntry.IsHardLinkRoot() {
+            if remoteEntry.IsHardlinkRoot() {
                 hardLinkTable[len(hardLinkTable)-1].willExist = true
-            } else if remoteEntry.IsHardLinkChild() {
-                i, err := remoteEntry.GetHardLinkId()
+            } else if remoteEntry.IsHardlinkedFrom() {
+                i, err := remoteEntry.GetHardlinkId()
                 if err != nil {
-                    LOG_ERROR("RESTORE_HARDLINK", "Decode error for hard link entry %s: %v", remoteEntry.Path, err)
+                    LOG_ERROR("RESTORE_HARDLINK", "Decode error for hardlinked entry %s, %v", remoteEntry.Path, err)
                     return 0
                 }
                 if !hardLinkTable[i].willExist {
@@ -989,7 +925,7 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
         fullPath := joinPath(top, file.Path)
         stat, _ := os.Stat(fullPath)
         if stat != nil {
-            if options.QuickMode {
+            if quickMode {
                 if file.IsSameAsFileInfo(stat) {
                     LOG_TRACE("RESTORE_SKIP", "File %s unchanged (by size and timestamp)", file.Path)
                     skippedFileSize += file.Size
@@ -1021,8 +957,8 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
             }
             newFile.Close()
 
-            file.RestoreMetadata(fullPath, nil, metadataOptions)
-            if !options.ShowStatistics {
+            file.RestoreMetadata(fullPath, nil, setOwner)
+            if !showStatistics {
                 LOG_INFO("DOWNLOAD_DONE", "Downloaded %s (0)", file.Path)
                 downloadedFileSize += file.Size
                 downloadedFiles = append(downloadedFiles, file)
@@ -1031,13 +967,8 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
                 continue
             }
 
-            fileFlagsMask := metadataOptions.FileFlagsMask
-            if !metadataOptions.IncludeFileFlags {
-                fileFlagsMask = math.MaxUint32
-            }
-            downloaded, err := manager.RestoreFile(chunkDownloader, chunkMaker, file, top, options.InPlace, overwrite,
-                options.ShowStatistics, totalFileSize, downloadedFileSize, startDownloadingTime, allowFailures,
-                fileFlagsMask)
+            downloaded, err := manager.RestoreFile(chunkDownloader, chunkMaker, file, top, inPlace, overwrite, showStatistics,
+                totalFileSize, downloadedFileSize, startDownloadingTime, allowFailures)
             if err != nil {
                 // RestoreFile returned an error; if allowFailures is false RestoerFile would error out and not return so here
                 // we just need to show a warning
@@ -1056,12 +987,12 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
                 skippedFileSize += file.Size
                 skippedFileCount++
             }
-            file.RestoreMetadata(fullPath, nil, metadataOptions)
+            file.RestoreMetadata(fullPath, nil, setOwner)
         }
     }
 
     for _, linkEntry := range hardLinks {
-        i, _ := linkEntry.GetHardLinkId()
+        i, _ := linkEntry.GetHardlinkId()
         sourcePath := joinPath(top, hardLinkTable[i].entry.Path)
         fullPath := joinPath(top, linkEntry.Path)
@@ -1073,7 +1004,7 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
         if sourceStat == nil {
             LOG_WERROR(allowFailures, "RESTORE_HARDLINK",
-                "Target %s for hard link %s is missing", sourcePath, linkEntry.Path)
+                "Target %s for hardlink %s is missing", sourcePath, linkEntry.Path)
             continue
         }
@@ -1084,14 +1015,14 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
             os.Remove(fullPath)
         }
 
+        LOG_DEBUG("RESTORE_HARDLINK", "Hard linking %s to %s", fullPath, sourcePath)
         if err := MakeHardlink(sourcePath, fullPath); err != nil {
-            LOG_ERROR("RESTORE_HARDLINK", "Failed to create hard link %s to %s: %v", fullPath, sourcePath, err)
+            LOG_ERROR("RESTORE_HARDLINK", "Failed to create hard link %s to %s", fullPath, sourcePath)
             return 0
         }
+        LOG_TRACE("RESTORE_HARDLINK", "Hard linked %s to %s", linkEntry.Path, hardLinkTable[i].entry.Path)
     }
 
-    if options.DeleteMode && len(patterns) == 0 {
+    if deleteMode && len(patterns) == 0 {
         // Reverse the order to make sure directories are empty before being deleted
         for i := range extraFiles {
             file := extraFiles[len(extraFiles)-1-i]
@@ -1103,10 +1034,10 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
     for _, entry := range directoryEntries {
         dir := joinPath(top, entry.Path)
-        entry.RestoreMetadata(dir, nil, metadataOptions)
+        entry.RestoreMetadata(dir, nil, setOwner)
     }
 
-    if options.ShowStatistics {
+    if showStatistics {
         for _, file := range downloadedFiles {
             LOG_INFO("DOWNLOAD_DONE", "Downloaded %s (%d)", file.Path, file.Size)
         }
@@ -1117,7 +1048,7 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
     }
 
     LOG_INFO("RESTORE_END", "Restored %s to revision %d", top, revision)
-    if options.ShowStatistics {
+    if showStatistics {
         LOG_INFO("RESTORE_STATS", "Files: %d total, %s bytes", len(fileEntries), PrettySize(totalFileSize))
         LOG_INFO("RESTORE_STATS", "Downloaded %d file, %s bytes, %d chunks",
             len(downloadedFiles), PrettySize(downloadedFileSize), chunkDownloader.numberOfDownloadedChunks)
@@ -1136,6 +1067,55 @@ func (manager *BackupManager) Restore(top string, revision int, options *Restore
     return 0
 }
 
+// fileEncoder encodes one file at a time to avoid loading the full json description of the entire file tree
+// in the memory
+type fileEncoder struct {
+    top            string
+    readAttributes bool
+    files          []*Entry
+    currentIndex   int
+    buffer         *bytes.Buffer
+}
+
+// Read reads data from the embedded buffer
+func (encoder fileEncoder) Read(data []byte) (n int, err error) {
+    return encoder.buffer.Read(data)
+}
+
+// NextFile switches to the next file and generates its json description in the buffer. It also takes care of
+// the ending ']' and the commas between files.
+func (encoder *fileEncoder) NextFile() (io.Reader, bool) {
+    if encoder.currentIndex == len(encoder.files) {
+        return nil, false
+    }
+    if encoder.currentIndex == len(encoder.files)-1 {
+        encoder.buffer.Write([]byte("]"))
+        encoder.currentIndex++
+        return encoder, true
+    }
+    encoder.currentIndex++
+    entry := encoder.files[encoder.currentIndex]
+    if encoder.readAttributes {
+        entry.ReadAttributes(encoder.top)
+    }
+    description, err := json.Marshal(entry)
+    if err != nil {
+        LOG_FATAL("SNAPSHOT_ENCODE", "Failed to encode file %s: %v", encoder.files[encoder.currentIndex].Path, err)
+        return nil, false
+    }
+    if encoder.readAttributes {
+        entry.Attributes = nil
+    }
+    if encoder.currentIndex != 0 {
+        encoder.buffer.Write([]byte(","))
+    }
+    encoder.buffer.Write(description)
+    return encoder, true
+}
+
 // UploadSnapshot uploads the specified snapshot to the storage. It turns Files, ChunkHashes, and ChunkLengths into
 // sequences of chunks, and uploads these chunks, and finally the snapshot file.
 func (manager *BackupManager) UploadSnapshot(chunkOperator *ChunkOperator, top string, snapshot *Snapshot,
@@ -1217,7 +1197,8 @@ func (manager *BackupManager) UploadSnapshot(chunkOperator *ChunkOperator, top s
             entry.StartChunk -= delta
             entry.EndChunk -= delta
 
-            if entry.IsHardLinkRoot() {
+            if entry.IsHardlinkRoot() {
+                LOG_DEBUG("SNAPSHOT_UPLOAD", "Hard link root %s %v %v", entry.Path, entry.StartChunk, entry.EndChunk)
                 hardLinkTable = append(hardLinkTable, hardLinkEntry{entry, entry.StartChunk})
             }
@@ -1225,24 +1206,28 @@ func (manager *BackupManager) UploadSnapshot(chunkOperator *ChunkOperator, top s
             entry.StartChunk -= lastEndChunk
             lastEndChunk = entry.EndChunk
             entry.EndChunk = delta
-        } else if entry.IsHardLinkChild() {
-            i, err := entry.GetHardLinkId()
+        } else if entry.IsHardlinkedFrom() && !entry.IsLink() {
+            i, err := entry.GetHardlinkId()
             if err != nil {
-                LOG_ERROR("SNAPSHOT_UPLOAD", "Decode error for hard link entry %s: %v", entry.Link, err)
+                LOG_ERROR("SNAPSHOT_UPLOAD", "Decode error for hardlinked entry %s, %v", entry.Link, err)
                 return err
             }
             targetEntry := hardLinkTable[i].entry
             var startChunk, endChunk int
-            if targetEntry.IsFile() && targetEntry.Size > 0 {
+            if targetEntry.Size > 0 {
                 startChunk = hardLinkTable[i].startChunk - lastEndChunk
                 endChunk = targetEntry.EndChunk
-                lastEndChunk = hardLinkTable[i].startChunk + endChunk
             }
             entry = entry.HardLinkTo(targetEntry, startChunk, endChunk)
-        } else if entry.IsHardLinkRoot() {
+            if targetEntry.Size > 0 {
+                lastEndChunk = hardLinkTable[i].startChunk + endChunk
+            }
+            LOG_DEBUG("SNAPSHOT_UPLOAD", "Uploading cloned hardlink for %s to %s (%v %v)", entry.Path, targetEntry.Path, startChunk, endChunk)
+        } else if entry.IsHardlinkRoot() {
             hardLinkTable = append(hardLinkTable, hardLinkEntry{entry, 0})
         }
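The arithmetic in this hunk (`entry.StartChunk -= lastEndChunk`, and a cloned hardlink entry pointing back into its target's range) is a delta encoding of chunk spans: each entry stores its start relative to the previous entry's end, which stays backward compatible while letting clones reference earlier ranges. A toy round trip, with invented names, illustrates the encode/decode pair:

```go
package main

import "fmt"

// span is an absolute chunk range [start, end). In the encoded form, start is
// stored relative to the previous span's end, and end holds the length.
type span struct{ start, end int }

// encode rewrites each span as {start - previousEnd, length}.
func encode(spans []span) []span {
	out := make([]span, len(spans))
	last := 0
	for i, s := range spans {
		out[i] = span{s.start - last, s.end - s.start}
		last = s.end
	}
	return out
}

// decode reverses encode, recovering absolute ranges.
func decode(rel []span) []span {
	out := make([]span, len(rel))
	last := 0
	for i, r := range rel {
		start := last + r.start
		end := start + r.end
		out[i] = span{start, end}
		last = end
	}
	return out
}

func main() {
	orig := []span{{0, 3}, {3, 7}, {7, 7}}
	fmt.Println(decode(encode(orig)))
}
```

A hardlink clone simply encodes a *negative* relative start so that it lands inside the target's already-emitted range, which is why the snippet above keeps `lastEndChunk` pointing past the target's span after cloning.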
@@ -1317,8 +1302,7 @@ func (manager *BackupManager) UploadSnapshot(chunkOperator *ChunkOperator, top s
 // false, nil: Skipped file;
 // false, error: Failure to restore file (only if allowFailures == true)
 func (manager *BackupManager) RestoreFile(chunkDownloader *ChunkDownloader, chunkMaker *ChunkMaker, entry *Entry, top string, inPlace bool, overwrite bool,
-    showStatistics bool, totalFileSize int64, downloadedFileSize int64, startTime int64, allowFailures bool,
-    fileFlagsMask uint32) (bool, error) {
+    showStatistics bool, totalFileSize int64, downloadedFileSize int64, startTime int64, allowFailures bool) (bool, error) {
 
     LOG_TRACE("DOWNLOAD_START", "Downloading %s", entry.Path)
@@ -1365,10 +1349,7 @@ func (manager *BackupManager) RestoreFile(chunkDownloader *ChunkDownloader, chun
         LOG_ERROR("DOWNLOAD_CREATE", "Failed to create the file %s for in-place writing: %v", fullPath, err)
         return false, nil
     }
-    err = entry.RestoreEarlyFileFlags(existingFile, fileFlagsMask)
-    if err != nil {
-        LOG_WARN("DOWNLOAD_FLAGS", "Failed to set early file flags on %s: %v", fullPath, err)
-    }
+    entry.RestoreEarlyFileFlags(existingFile)
     n := int64(1)
     // There is a go bug on Windows (https://github.com/golang/go/issues/21681) that causes Seek to fail
@@ -1552,10 +1533,7 @@ func (manager *BackupManager) RestoreFile(chunkDownloader *ChunkDownloader, chun
             return false, nil
         }
     }
-    err = entry.RestoreEarlyFileFlags(existingFile, fileFlagsMask)
-    if err != nil {
-        LOG_WARN("DOWNLOAD_FLAGS", "Failed to set early file flags on %s: %v", fullPath, err)
-    }
+    entry.RestoreEarlyFileFlags(existingFile)
     existingFile.Seek(0, 0)
@@ -1638,10 +1616,7 @@ func (manager *BackupManager) RestoreFile(chunkDownloader *ChunkDownloader, chun
         LOG_ERROR("DOWNLOAD_OPEN", "Failed to open file for writing: %v", err)
         return false, nil
     }
-    err = entry.RestoreEarlyFileFlags(newFile, fileFlagsMask)
-    if err != nil {
-        LOG_WARN("DOWNLOAD_FLAGS", "Failed to set early file flags on %s: %v", fullPath, err)
-    }
+    entry.RestoreEarlyFileFlags(newFile)
     hasher := manager.config.NewFileHasher()


@@ -176,6 +176,8 @@ func assertRestoreFailures(t *testing.T, failedFiles int, expectedFailedFiles in
 }
 func TestBackupManager(t *testing.T) {
+    rand.Seed(time.Now().UnixNano())
     setTestingT(t)
     SetLoggingLevel(INFO)
@@ -251,23 +253,15 @@ func TestBackupManager(t *testing.T) {
     time.Sleep(time.Duration(delay) * time.Second)
     SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
-    backupManager := CreateBackupManager("host1", storage, testDir, password, nil)
+    backupManager := CreateBackupManager("host1", storage, testDir, password, "", "", false)
     backupManager.SetupSnapshotCache("default")
     SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
     backupManager.Backup(testDir+"/repository1" /*quickMode=*/, true, threads, "first", false, false, 0, false, 1024, 1024)
     time.Sleep(time.Duration(delay) * time.Second)
     SetDuplicacyPreferencePath(testDir + "/repository2/.duplicacy")
-    failedFiles := backupManager.Restore(testDir+"/repository2", 1, &RestoreOptions{
-        Threads:        threads,
-        Patterns:       nil,
-        InPlace:        false,
-        QuickMode:      false,
-        Overwrite:      true,
-        DeleteMode:     false,
-        ShowStatistics: false,
-        AllowFailures:  false,
-    })
+    failedFiles := backupManager.Restore(testDir+"/repository2", threads /*inPlace=*/, false /*quickMode=*/, false, threads /*overwrite=*/, true,
+        /*deleteMode=*/ false /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, nil /*allowFailures=*/, false)
     assertRestoreFailures(t, failedFiles, 0)
     for _, f := range []string{"file1", "file2", "dir1/file3"} {
@@ -291,16 +285,8 @@ func TestBackupManager(t *testing.T) {
     backupManager.Backup(testDir+"/repository1" /*quickMode=*/, true, threads, "second", false, false, 0, false, 1024, 1024)
     time.Sleep(time.Duration(delay) * time.Second)
     SetDuplicacyPreferencePath(testDir + "/repository2/.duplicacy")
-    failedFiles = backupManager.Restore(testDir+"/repository2", 2, &RestoreOptions{
-        Threads:        threads,
-        Patterns:       nil,
-        InPlace:        true,
-        QuickMode:      true,
-        Overwrite:      true,
-        DeleteMode:     false,
-        ShowStatistics: false,
-        AllowFailures:  false,
-    })
+    failedFiles = backupManager.Restore(testDir+"/repository2", 2 /*inPlace=*/, true /*quickMode=*/, true, threads /*overwrite=*/, true,
+        /*deleteMode=*/ false /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, nil /*allowFailures=*/, false)
     assertRestoreFailures(t, failedFiles, 0)
     for _, f := range []string{"file1", "file2", "dir1/file3"} {
@@ -328,16 +314,8 @@ func TestBackupManager(t *testing.T) {
     createRandomFile(testDir+"/repository2/dir5/file5", 100)
     SetDuplicacyPreferencePath(testDir + "/repository2/.duplicacy")
-    failedFiles = backupManager.Restore(testDir+"/repository2", 3, &RestoreOptions{
-        Threads:        threads,
-        Patterns:       nil,
-        InPlace:        true,
-        QuickMode:      false,
-        Overwrite:      true,
-        DeleteMode:     true,
-        ShowStatistics: false,
-        AllowFailures:  false,
-    })
+    failedFiles = backupManager.Restore(testDir+"/repository2", 3 /*inPlace=*/, true /*quickMode=*/, false, threads /*overwrite=*/, true,
+        /*deleteMode=*/ true /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, nil /*allowFailures=*/, false)
     assertRestoreFailures(t, failedFiles, 0)
     for _, f := range []string{"file1", "file2", "dir1/file3"} {
@@ -364,16 +342,8 @@ func TestBackupManager(t *testing.T) {
     os.Remove(testDir + "/repository1/file2")
     os.Remove(testDir + "/repository1/dir1/file3")
     SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
-    failedFiles = backupManager.Restore(testDir+"/repository1", 3, &RestoreOptions{
-        Threads:        threads,
-        Patterns:       []string{"+file2", "+dir1/file3", "-*"},
-        InPlace:        true,
-        QuickMode:      false,
-        Overwrite:      true,
-        DeleteMode:     false,
-        ShowStatistics: false,
-        AllowFailures:  false,
-    })
+    failedFiles = backupManager.Restore(testDir+"/repository1", 3 /*inPlace=*/, true /*quickMode=*/, false, threads /*overwrite=*/, true,
+        /*deleteMode=*/ false /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, []string{"+file2", "+dir1/file3", "-*"} /*allowFailures=*/, false)
     assertRestoreFailures(t, failedFiles, 0)
     for _, f := range []string{"file1", "file2", "dir1/file3"} {
@@ -388,17 +358,17 @@ func TestBackupManager(t *testing.T) {
     if numberOfSnapshots != 3 {
         t.Errorf("Expected 3 snapshots but got %d", numberOfSnapshots)
     }
-    backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{1, 2, 3} /*tag*/, "" /*showStatistics*/, false,
-        /*showTabular*/ false /*checkFiles*/, false /*checkChunks*/, false /*searchFossils*/, false /*resurrect*/, false /*rewiret*/, false, 1 /*allowFailures*/, false)
+    backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1", /*revisions*/ []int{1, 2, 3}, /*tag*/ "", /*showStatistics*/ false,
+        /*showTabular*/ false, /*checkFiles*/ false, /*checkChunks*/ false, /*searchFossils*/ false, /*resurrect*/ false, /*rewiret*/ false, 1, /*allowFailures*/ false)
     backupManager.SnapshotManager.PruneSnapshots("host1", "host1" /*revisions*/, []int{1} /*tags*/, nil /*retentions*/, nil,
         /*exhaustive*/ false /*exclusive=*/, false /*ignoredIDs*/, nil /*dryRun*/, false /*deleteOnly*/, false /*collectOnly*/, false, 1)
     numberOfSnapshots = backupManager.SnapshotManager.ListSnapshots( /*snapshotID*/ "host1" /*revisionsToList*/, nil /*tag*/, "" /*showFiles*/, false /*showChunks*/, false)
     if numberOfSnapshots != 2 {
         t.Errorf("Expected 2 snapshots but got %d", numberOfSnapshots)
     }
-    backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{2, 3} /*tag*/, "" /*showStatistics*/, false,
-        /*showTabular*/ false /*checkFiles*/, false /*checkChunks*/, false /*searchFossils*/, false /*resurrect*/, false /*rewiret*/, false, 1 /*allowFailures*/, false)
+    backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1", /*revisions*/ []int{2, 3}, /*tag*/ "", /*showStatistics*/ false,
+        /*showTabular*/ false, /*checkFiles*/ false, /*checkChunks*/ false, /*searchFossils*/ false, /*resurrect*/ false, /*rewiret*/ false, 1, /*allowFailures*/ false)
     backupManager.Backup(testDir+"/repository1" /*quickMode=*/, false, threads, "fourth", false, false, 0, false, 1024, 1024)
     backupManager.SnapshotManager.PruneSnapshots("host1", "host1" /*revisions*/, nil /*tags*/, nil /*retentions*/, nil,
         /*exhaustive*/ false /*exclusive=*/, true /*ignoredIDs*/, nil /*dryRun*/, false /*deleteOnly*/, false /*collectOnly*/, false, 1)
@@ -406,8 +376,8 @@ func TestBackupManager(t *testing.T) {
     if numberOfSnapshots != 3 {
         t.Errorf("Expected 3 snapshots but got %d", numberOfSnapshots)
     }
-    backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{2, 3, 4} /*tag*/, "" /*showStatistics*/, false,
-        /*showTabular*/ false /*checkFiles*/, false /*checkChunks*/, false /*searchFossils*/, false /*resurrect*/, false /*rewiret*/, false, 1 /*allowFailures*/, false)
+    backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1", /*revisions*/ []int{2, 3, 4}, /*tag*/ "", /*showStatistics*/ false,
+        /*showTabular*/ false, /*checkFiles*/ false, /*checkChunks*/ false, /*searchFossils*/ false, /*resurrect*/ false, /*rewiret*/ false, 1, /*allowFailures*/ false)
     /*buf := make([]byte, 1<<16)
     runtime.Stack(buf, true)
@@ -416,7 +386,7 @@ func TestBackupManager(t *testing.T) {
 // Create file with random file with certain seed
 func createRandomFileSeeded(path string, maxSize int, seed int64) {
-    r := rand.New(rand.NewSource(seed))
+    rand.Seed(seed)
     file, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
     if err != nil {
         LOG_ERROR("RANDOM_FILE", "Can't open %s for writing: %v", path, err)
@@ -433,7 +403,7 @@ func createRandomFileSeeded(path string, maxSize int, seed int64) {
         if bytes > cap(buffer) {
             bytes = cap(buffer)
         }
-        r.Read(buffer[:bytes])
+        rand.Read(buffer[:bytes])
         bytes, err = file.Write(buffer[:bytes])
         if err != nil {
             LOG_ERROR("RANDOM_FILE", "Failed to write to %s: %v", path, err)
@@ -444,7 +414,7 @@ func createRandomFileSeeded(path string, maxSize int, seed int64) {
 }
 func corruptFile(path string, start int, length int, seed int64) {
-    r := rand.New(rand.NewSource(seed))
+    rand.Seed(seed)
     file, err := os.OpenFile(path, os.O_WRONLY, 0644)
     if err != nil {
@@ -465,7 +435,7 @@ func corruptFile(path string, start int, length int, seed int64) {
     }
     buffer := make([]byte, length)
-    r.Read(buffer)
+    rand.Read(buffer)
     _, err = file.Write(buffer)
     if err != nil {
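The two helpers above diverge in how they seed math/rand: the old side uses a private `rand.New(rand.NewSource(seed))`, the new side the package-global `rand.Seed(seed)`. The difference matters for reproducibility, because the global source is shared by the whole process, so any other caller of math/rand can perturb the sequence between reads. A small illustration of the private-source approach (assumes pre-Go-1.20 math/rand semantics, where `rand.Seed` still exists but is already discouraged):

```go
package main

import (
	"fmt"
	"math/rand"
)

// seededBytes returns n deterministic pseudo-random bytes for a given seed
// using a private source, so the sequence cannot be perturbed by other
// goroutines touching the global math/rand state (which rand.Seed mutates).
func seededBytes(seed int64, n int) []byte {
	r := rand.New(rand.NewSource(seed))
	buf := make([]byte, n)
	r.Read(buf)
	return buf
}

func main() {
	a := seededBytes(4, 8)
	b := seededBytes(4, 8)
	fmt.Println(string(a) == string(b)) // same seed yields the same bytes
}
```

For tests like `corruptFile` that rely on a seed to reproduce the same corruption, the private source is the safer pattern; the global-seed version only behaves identically when nothing else draws from math/rand in between.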
@@ -508,9 +478,9 @@ func TestPersistRestore(t *testing.T) {
     maxFileSize := 1000000
     //maxFileSize := 200000
-    createRandomFileSeeded(testDir+"/repository1/file1", maxFileSize, 1)
-    createRandomFileSeeded(testDir+"/repository1/file2", maxFileSize, 2)
-    createRandomFileSeeded(testDir+"/repository1/dir1/file3", maxFileSize, 3)
+    createRandomFileSeeded(testDir+"/repository1/file1", maxFileSize,1)
+    createRandomFileSeeded(testDir+"/repository1/file2", maxFileSize,2)
+    createRandomFileSeeded(testDir+"/repository1/dir1/file3", maxFileSize,3)
     threads := 1
@@ -560,83 +530,85 @@ func TestPersistRestore(t *testing.T) {
     // do unencrypted backup
     SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
-    unencBackupManager := CreateBackupManager("host1", unencStorage, testDir, "", nil)
+    unencBackupManager := CreateBackupManager("host1", unencStorage, testDir, "", "", "", false)
     unencBackupManager.SetupSnapshotCache("default")
     SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
     unencBackupManager.Backup(testDir+"/repository1" /*quickMode=*/, true, threads, "first", false, false, 0, false, 1024, 1024)
     time.Sleep(time.Duration(delay) * time.Second)
     // do encrypted backup
     SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
-    encBackupManager := CreateBackupManager("host1", storage, testDir, password, nil)
+    encBackupManager := CreateBackupManager("host1", storage, testDir, password, "", "", false)
     encBackupManager.SetupSnapshotCache("default")
     SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
     encBackupManager.Backup(testDir+"/repository1" /*quickMode=*/, true, threads, "first", false, false, 0, false, 1024, 1024)
     time.Sleep(time.Duration(delay) * time.Second)
     // check snapshots
-    unencBackupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{1} /*tag*/, "",
-        /*showStatistics*/ true /*showTabular*/, false /*checkFiles*/, true /*checkChunks*/, false,
-        /*searchFossils*/ false /*resurrect*/, false /*rewiret*/, false, 1 /*allowFailures*/, false)
-    encBackupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{1} /*tag*/, "",
-        /*showStatistics*/ true /*showTabular*/, false /*checkFiles*/, true /*checkChunks*/, false,
-        /*searchFossils*/ false /*resurrect*/, false /*rewiret*/, false, 1 /*allowFailures*/, false)
+    unencBackupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1", /*revisions*/ []int{1}, /*tag*/ "",
+        /*showStatistics*/ true, /*showTabular*/ false, /*checkFiles*/ true, /*checkChunks*/ false,
+        /*searchFossils*/ false, /*resurrect*/ false, /*rewiret*/ false, 1, /*allowFailures*/ false)
+    encBackupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1", /*revisions*/ []int{1}, /*tag*/ "",
+        /*showStatistics*/ true, /*showTabular*/ false, /*checkFiles*/ true, /*checkChunks*/ false,
+        /*searchFossils*/ false, /*resurrect*/ false, /*rewiret*/ false, 1, /*allowFailures*/ false)
     // check functions
     checkAllUncorrupted := func(cmpRepository string) {
         for _, f := range []string{"file1", "file2", "dir1/file3"} {
             if _, err := os.Stat(testDir + cmpRepository + "/" + f); os.IsNotExist(err) {
                 t.Errorf("File %s does not exist", f)
                 continue
             }
             hash1 := getFileHash(testDir + "/repository1/" + f)
             hash2 := getFileHash(testDir + cmpRepository + "/" + f)
             if hash1 != hash2 {
                 t.Errorf("File %s has different hashes: %s vs %s", f, hash1, hash2)
             }
         }
     }
     checkMissingFile := func(cmpRepository string, expectMissing string) {
         for _, f := range []string{"file1", "file2", "dir1/file3"} {
             _, err := os.Stat(testDir + cmpRepository + "/" + f)
-            if err == nil {
-                if f == expectMissing {
+            if err==nil {
+                if f==expectMissing {
                     t.Errorf("File %s exists, expected to be missing", f)
                 }
                 continue
             }
             if os.IsNotExist(err) {
-                if f != expectMissing {
+                if f!=expectMissing {
                     t.Errorf("File %s does not exist", f)
                 }
                 continue
             }
             hash1 := getFileHash(testDir + "/repository1/" + f)
             hash2 := getFileHash(testDir + cmpRepository + "/" + f)
             if hash1 != hash2 {
                 t.Errorf("File %s has different hashes: %s vs %s", f, hash1, hash2)
             }
         }
     }
     checkCorruptedFile := func(cmpRepository string, expectCorrupted string) {
         for _, f := range []string{"file1", "file2", "dir1/file3"} {
             if _, err := os.Stat(testDir + cmpRepository + "/" + f); os.IsNotExist(err) {
                 t.Errorf("File %s does not exist", f)
                 continue
             }
             hash1 := getFileHash(testDir + "/repository1/" + f)
             hash2 := getFileHash(testDir + cmpRepository + "/" + f)
-            if f == expectCorrupted {
+            if (f==expectCorrupted) {
                 if hash1 == hash2 {
                     t.Errorf("File %s has same hashes, expected to be corrupted: %s vs %s", f, hash1, hash2)
                 }
             } else {
                 if hash1 != hash2 {
                     t.Errorf("File %s has different hashes: %s vs %s", f, hash1, hash2)
@@ -647,35 +619,27 @@ func TestPersistRestore(t *testing.T) {
     // test restore all uncorrupted to repository3
     SetDuplicacyPreferencePath(testDir + "/repository3/.duplicacy")
-    failedFiles := unencBackupManager.Restore(testDir+"/repository3", 1, &RestoreOptions{
-        Threads:        threads,
-        Patterns:       nil,
-        InPlace:        true,
-        QuickMode:      false,
-        Overwrite:      false,
-        DeleteMode:     false,
-        ShowStatistics: false,
-        AllowFailures:  false,
-    })
+    failedFiles := unencBackupManager.Restore(testDir+"/repository3", threads /*inPlace=*/, true /*quickMode=*/, false, threads /*overwrite=*/, false,
+        /*deleteMode=*/ false /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, nil /*allowFailures=*/, false)
     assertRestoreFailures(t, failedFiles, 0)
     checkAllUncorrupted("/repository3")
     // test for corrupt files and -persist
     // corrupt a chunk
     chunkToCorrupt1 := "/4d/538e5dfd2b08e782bfeb56d1360fb5d7eb9d8c4b2531cc2fca79efbaec910c"
     // this should affect file1
     chunkToCorrupt2 := "/2b/f953a766d0196ce026ae259e76e3c186a0e4bcd3ce10f1571d17f86f0a5497"
     // this should affect dir1/file3
     for i := 0; i < 2; i++ {
-        if i == 0 {
+        if i==0 {
             // test corrupt chunks
             corruptFile(testDir+"/unenc_storage"+"/chunks"+chunkToCorrupt1, 128, 128, 4)
             corruptFile(testDir+"/enc_storage"+"/chunks"+chunkToCorrupt2, 128, 128, 4)
         } else {
             // test missing chunks
-            os.Remove(testDir + "/unenc_storage" + "/chunks" + chunkToCorrupt1)
-            os.Remove(testDir + "/enc_storage" + "/chunks" + chunkToCorrupt2)
+            os.Remove(testDir+"/unenc_storage"+"/chunks"+chunkToCorrupt1)
+            os.Remove(testDir+"/enc_storage"+"/chunks"+chunkToCorrupt2)
         }
         // This is to make sure that allowFailures is set to true. Note that this is not needed
@@ -690,44 +654,30 @@ func TestPersistRestore(t *testing.T) {
         // check snapshots with --persist (allowFailures == true)
         // this would cause a panic and os.Exit from duplicacy_log if allowFailures == false
-        unencBackupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{1} /*tag*/, "",
-            /*showStatistics*/ true /*showTabular*/, false /*checkFiles*/, true /*checkChunks*/, false,
-            /*searchFossils*/ false /*resurrect*/, false /*rewrite*/, false, 1 /*allowFailures*/, true)
-        encBackupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{1} /*tag*/, "",
-            /*showStatistics*/ true /*showTabular*/, false /*checkFiles*/, true /*checkChunks*/, false,
-            /*searchFossils*/ false /*resurrect*/, false /*rewrite*/, false, 1 /*allowFailures*/, true)
+        unencBackupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1", /*revisions*/ []int{1}, /*tag*/ "",
+            /*showStatistics*/ true, /*showTabular*/ false, /*checkFiles*/ true, /*checkChunks*/ false,
+            /*searchFossils*/ false, /*resurrect*/ false, /*rewrite*/ false, 1, /*allowFailures*/ true)
+        encBackupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1", /*revisions*/ []int{1}, /*tag*/ "",
+            /*showStatistics*/ true, /*showTabular*/ false, /*checkFiles*/ true, /*checkChunks*/ false,
+            /*searchFossils*/ false, /*resurrect*/ false, /*rewrite*/ false, 1, /*allowFailures*/ true)
         // test restore corrupted, inPlace = true, corrupted files will have hash failures
-        os.RemoveAll(testDir + "/repository2")
+        os.RemoveAll(testDir+"/repository2")
         SetDuplicacyPreferencePath(testDir + "/repository2/.duplicacy")
-        failedFiles = unencBackupManager.Restore(testDir+"/repository2", 1, &RestoreOptions{
-            Threads:        threads,
-            Patterns:       nil,
-            InPlace:        true,
-            QuickMode:      false,
-            Overwrite:      false,
-            DeleteMode:     false,
-            ShowStatistics: false,
-            AllowFailures:  true,
-        })
+        failedFiles = unencBackupManager.Restore(testDir+"/repository2", threads /*inPlace=*/, true /*quickMode=*/, false, threads /*overwrite=*/, false,
+            /*deleteMode=*/ false /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, nil /*allowFailures=*/, true)
         assertRestoreFailures(t, failedFiles, 1)
         // check restore, expect file1 to be corrupted
         checkCorruptedFile("/repository2", "file1")
-        os.RemoveAll(testDir + "/repository2")
+        os.RemoveAll(testDir+"/repository2")
         SetDuplicacyPreferencePath(testDir + "/repository2/.duplicacy")
-        failedFiles = encBackupManager.Restore(testDir+"/repository2", 1, &RestoreOptions{
-            Threads:        threads,
-            Patterns:       nil,
-            InPlace:        true,
-            QuickMode:      false,
-            Overwrite:      false,
-            DeleteMode:     false,
-            ShowStatistics: false,
-            AllowFailures:  true,
-        })
+        failedFiles = encBackupManager.Restore(testDir+"/repository2", threads /*inPlace=*/, true /*quickMode=*/, false, threads /*overwrite=*/, false,
+            /*deleteMode=*/ false /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, nil /*allowFailures=*/, true)
         assertRestoreFailures(t, failedFiles, 1)
         // check restore, expect file3 to be corrupted
@@ -735,35 +685,20 @@ func TestPersistRestore(t *testing.T) {
         //SetLoggingLevel(DEBUG)
         // test restore corrupted, inPlace = false, corrupted files will be missing
-        os.RemoveAll(testDir + "/repository2")
+        os.RemoveAll(testDir+"/repository2")
         SetDuplicacyPreferencePath(testDir + "/repository2/.duplicacy")
-        failedFiles = unencBackupManager.Restore(testDir+"/repository2", 1, &RestoreOptions{
-            Threads:        threads,
-            Patterns:       nil,
-            InPlace:        false,
-            QuickMode:      false,
-            Overwrite:      false,
-            DeleteMode:     false,
-            ShowStatistics: false,
-            AllowFailures:  true,
-        })
+        failedFiles = unencBackupManager.Restore(testDir+"/repository2", threads /*inPlace=*/, false /*quickMode=*/, false, threads /*overwrite=*/, false,
+            /*deleteMode=*/ false /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, nil /*allowFailures=*/, true)
         assertRestoreFailures(t, failedFiles, 1)
         // check restore, expect file1 to be corrupted
         checkMissingFile("/repository2", "file1")
-        os.RemoveAll(testDir + "/repository2")
+        os.RemoveAll(testDir+"/repository2")
         SetDuplicacyPreferencePath(testDir + "/repository2/.duplicacy")
-        failedFiles = encBackupManager.Restore(testDir+"/repository2", 1, &RestoreOptions{
-            Threads:        threads,
-            Patterns:       nil,
-            InPlace:        false,
-            QuickMode:      false,
-            Overwrite:      false,
-            DeleteMode:     false,
-            ShowStatistics: false,
-            AllowFailures:  true,
-        })
+        failedFiles = encBackupManager.Restore(testDir+"/repository2", threads /*inPlace=*/, false /*quickMode=*/, false, threads /*overwrite=*/, false,
+            /*deleteMode=*/ false /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, nil /*allowFailures=*/, true)
         assertRestoreFailures(t, failedFiles, 1)
         // check restore, expect file3 to be corrupted
@@ -772,60 +707,28 @@ func TestPersistRestore(t *testing.T) {
         // test restore corrupted files from different backups, inPlace = true
         // with overwrite=true, corrupted file1 from unenc will be restored correctly from enc
         // the latter will not touch the existing file3 with correct hash
-        os.RemoveAll(testDir + "/repository2")
-        failedFiles = unencBackupManager.Restore(testDir+"/repository2", 1, &RestoreOptions{
-            Threads:        threads,
-            Patterns:       nil,
-            InPlace:        true,
-            QuickMode:      false,
-            Overwrite:      false,
-            DeleteMode:     false,
-            ShowStatistics: false,
-            AllowFailures:  true,
-        })
+        os.RemoveAll(testDir+"/repository2")
+        failedFiles = unencBackupManager.Restore(testDir+"/repository2", threads /*inPlace=*/, true /*quickMode=*/, false, threads /*overwrite=*/, false,
+            /*deleteMode=*/ false /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, nil /*allowFailures=*/, true)
         assertRestoreFailures(t, failedFiles, 1)
-        failedFiles = encBackupManager.Restore(testDir+"/repository2", 1, &RestoreOptions{
-            Threads:        threads,
-            Patterns:       nil,
-            InPlace:        true,
-            QuickMode:      false,
-            Overwrite:      true,
-            DeleteMode:     false,
-            ShowStatistics: false,
-            AllowFailures:  true,
-        })
+        failedFiles = encBackupManager.Restore(testDir+"/repository2", threads /*inPlace=*/, true /*quickMode=*/, false, threads /*overwrite=*/, true,
+            /*deleteMode=*/ false /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, nil /*allowFailures=*/, true)
         assertRestoreFailures(t, failedFiles, 0)
         checkAllUncorrupted("/repository2")
         // restore to repository3, with overwrite and allowFailures (true/false), quickMode = false (use hashes)
         // should always succeed as uncorrupted files already exist with correct hash, so these will be ignored
         SetDuplicacyPreferencePath(testDir + "/repository3/.duplicacy")
-        failedFiles = unencBackupManager.Restore(testDir+"/repository3", 1, &RestoreOptions{
-            Threads:        threads,
-            Patterns:       nil,
-            InPlace:        true,
-            QuickMode:      false,
-            Overwrite:      true,
-            DeleteMode:     false,
-            ShowStatistics: false,
-            AllowFailures:  false,
-        })
+        failedFiles = unencBackupManager.Restore(testDir+"/repository3", threads /*inPlace=*/, true /*quickMode=*/, false, threads /*overwrite=*/, true,
+            /*deleteMode=*/ false /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, nil /*allowFailures=*/, false)
         assertRestoreFailures(t, failedFiles, 0)
         checkAllUncorrupted("/repository3")
-        failedFiles = unencBackupManager.Restore(testDir+"/repository3", 1, &RestoreOptions{
-            Threads:        threads,
-            Patterns:       nil,
-            InPlace:        true,
-            QuickMode:      false,
-            Overwrite:      true,
-            DeleteMode:     false,
-            ShowStatistics: false,
-            AllowFailures:  true,
-        })
+        failedFiles = unencBackupManager.Restore(testDir+"/repository3", threads /*inPlace=*/, true /*quickMode=*/, false, threads /*overwrite=*/, true,
+            /*deleteMode=*/ false /*setowner=*/, false /*showStatistics=*/, false /*patterns=*/, nil /*allowFailures=*/, true)
         assertRestoreFailures(t, failedFiles, 0)
         checkAllUncorrupted("/repository3")
     }
 }

View File

@@ -85,8 +85,8 @@ type Config struct {
 	FileKey []byte `json:"-"`

 	// for erasure coding
-	DataShards   int `json:"data-shards"`
-	ParityShards int `json:"parity-shards"`
+	DataShards   int `json:'data-shards'`
+	ParityShards int `json:'parity-shards'`

 	// for RSA encryption
 	rsaPrivateKey *rsa.PrivateKey

View File

@@ -18,14 +18,15 @@ import (
 	"sort"
 	"strconv"
 	"strings"
+	"syscall"
 	"time"

 	"github.com/vmihailenco/msgpack"
 )

 const (
-	entryHardLinkRootChunkMarker   = -9
-	entryHardLinkTargetChunkMarker = -10
+	entrySymHardLinkRootChunkMarker   = -72
+	entrySymHardLinkTargetChunkMarker = -73
 )

 // This is the hidden directory in the repository for storing various files.
@@ -98,7 +99,7 @@ func CreateEntryFromFileInfo(fileInfo os.FileInfo, directory string) *Entry {
 		Mode: uint32(mode),
 	}

-	GetOwner(entry, fileInfo)
+	GetOwner(entry, &fileInfo)

 	return entry
 }
@@ -125,21 +126,12 @@ func (entry *Entry) Copy() *Entry {
 }

 func (entry *Entry) HardLinkTo(target *Entry, startChunk int, endChunk int) *Entry {
-	endOffset := target.EndOffset
-	link := entry.Link
-	if !target.IsFile() {
-		startChunk = target.StartChunk
-		endChunk = entry.EndChunk
-		endOffset = entry.EndOffset
-		link = target.Link
-	}
 	return &Entry{
 		Path: entry.Path,
 		Size: target.Size,
 		Time: target.Time,
 		Mode: target.Mode,
-		Link: link,
+		Link: entry.Link,
 		Hash: target.Hash,
 		UID:  target.UID,
@@ -148,7 +140,7 @@ func (entry *Entry) HardLinkTo(target *Entry, startChunk int, endChunk int) *Ent
 		StartChunk:  startChunk,
 		StartOffset: target.StartOffset,
 		EndChunk:    endChunk,
-		EndOffset:   endOffset,
+		EndOffset:   target.EndOffset,
 		Attributes:  target.Attributes,
 	}
@@ -524,30 +516,34 @@ func (entry *Entry) IsLink() bool {
 }

 func (entry *Entry) IsSpecial() bool {
-	return entry.Mode&uint32(os.ModeNamedPipe|os.ModeDevice|os.ModeCharDevice|os.ModeSocket) != 0
+	return entry.Mode&uint32(os.ModeNamedPipe|os.ModeDevice|os.ModeCharDevice) != 0
+}
+
+func (entry *Entry) IsFileOrSpecial() bool {
+	return entry.Mode&uint32(os.ModeDir|os.ModeSymlink|os.ModeIrregular) == 0
 }

 func (entry *Entry) IsComplete() bool {
 	return entry.Size >= 0
 }

-func (entry *Entry) IsHardLinkChild() bool {
-	return (entry.IsFile() && len(entry.Link) > 0 && entry.Link != "/") || (!entry.IsDir() && entry.EndChunk == entryHardLinkTargetChunkMarker)
+func (entry *Entry) IsHardlinkedFrom() bool {
+	return (entry.IsFileOrSpecial() && len(entry.Link) > 0 && entry.Link != "/") || (entry.IsLink() && entry.StartChunk == entrySymHardLinkTargetChunkMarker)
 }

-func (entry *Entry) IsHardLinkRoot() bool {
-	return (entry.IsFile() && entry.Link == "/") || (!entry.IsDir() && entry.EndChunk == entryHardLinkRootChunkMarker)
+func (entry *Entry) IsHardlinkRoot() bool {
+	return (entry.IsFileOrSpecial() && entry.Link == "/") || (entry.IsLink() && entry.StartChunk == entrySymHardLinkRootChunkMarker)
 }

-func (entry *Entry) GetHardLinkId() (int, error) {
-	if entry.IsFile() {
-		i, err := strconv.ParseUint(entry.Link, 16, 64)
-		return int(i), err
-	} else {
-		if entry.EndChunk != entryHardLinkTargetChunkMarker {
-			return 0, errors.New("Entry not marked as hard link child")
-		}
-		return entry.EndOffset, nil
-	}
-}
+func (entry *Entry) GetHardlinkId() (int, error) {
+	if entry.IsLink() {
+		if entry.StartChunk != entrySymHardLinkTargetChunkMarker {
+			return 0, errors.New("Symlink entry not marked as hardlinked")
+		}
+		return entry.StartOffset, nil
+	} else {
+		i, err := strconv.ParseUint(entry.Link, 16, 64)
+		return int(i), err
+	}
+}
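Both versions above store the virtual inode index for regular-file hard links as a lowercase hex string in the entry's `Link` field (`strconv.FormatInt` on the way in, `strconv.ParseUint(entry.Link, 16, 64)` on the way out). A minimal round-trip sketch, with helper names of my own choosing:

```go
package main

import (
	"fmt"
	"strconv"
)

// encodeLinkIndex renders a virtual inode index the way the diff does:
// as a lowercase base-16 string placed in the entry's Link field.
func encodeLinkIndex(index int) string {
	return strconv.FormatInt(int64(index), 16)
}

// decodeLinkIndex reverses the encoding, mirroring GetHardlinkId's
// strconv.ParseUint(entry.Link, 16, 64) call.
func decodeLinkIndex(link string) (int, error) {
	i, err := strconv.ParseUint(link, 16, 64)
	return int(i), err
}

func main() {
	for _, idx := range []int{0, 9, 10, 255} {
		link := encodeLinkIndex(idx)
		back, err := decodeLinkIndex(link)
		fmt.Printf("%d -> %q -> %d (err=%v)\n", idx, link, back, err)
	}
}
```

Because the `Link` field previously held only symlink targets (never a bare hex token for a regular file), old clients simply ignore the value, which is how the snapshot format stays backward compatible.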
@@ -582,66 +578,47 @@ func (entry *Entry) String(maxSizeDigits int) string {
 	return fmt.Sprintf("%*d %s %64s %s", maxSizeDigits, entry.Size, modifiedTime, entry.Hash, entry.Path)
 }

-type RestoreMetadataOptions struct {
-	SetOwner         bool
-	ExcludeXattrs    bool
-	NormalizeXattrs  bool
-	IncludeFileFlags bool
-	FileFlagsMask    uint32
-}
-
-func (entry *Entry) RestoreMetadata(fullPath string, fileInfo os.FileInfo,
-	options RestoreMetadataOptions) bool {
+func (entry *Entry) RestoreMetadata(fullPath string, fileInfo *os.FileInfo, setOwner bool) bool {
 	if fileInfo == nil {
-		var err error
-		fileInfo, err = os.Lstat(fullPath)
+		stat, err := os.Lstat(fullPath)
+		fileInfo = &stat
 		if err != nil {
-			LOG_ERROR("RESTORE_STAT", "Failed to retrieve the file info on %s: %v", entry.Path, err)
+			LOG_ERROR("RESTORE_STAT", "Failed to retrieve the file info: %v", err)
 			return false
 		}
 	}

-	if !options.ExcludeXattrs {
-		err := entry.SetAttributesToFile(fullPath, options.NormalizeXattrs)
-		if err != nil {
-			LOG_WARN("RESTORE_ATTR", "Failed to set extended attributes on %s: %v", entry.Path, err)
-		}
-	}
-
 	// Note that chown can remove setuid/setgid bits so should be called before chmod
-	if options.SetOwner {
+	if setOwner {
 		if !SetOwner(fullPath, entry, fileInfo) {
 			return false
 		}
 	}

 	// Only set the permission if the file is not a symlink
-	if !entry.IsLink() && fileInfo.Mode()&fileModeMask != entry.GetPermissions() {
+	if !entry.IsLink() && (*fileInfo).Mode()&fileModeMask != entry.GetPermissions() {
 		err := os.Chmod(fullPath, entry.GetPermissions())
 		if err != nil {
-			LOG_ERROR("RESTORE_CHMOD", "Failed to set the file permissions on %s: %v", entry.Path, err)
+			LOG_ERROR("RESTORE_CHMOD", "Failed to set the file permissions: %v", err)
 			return false
 		}
 	}

+	if entry.Attributes != nil && len(*entry.Attributes) > 0 {
+		entry.SetAttributesToFile(fullPath)
+	}
+
 	// Only set the time if the file is not a symlink
-	if !entry.IsLink() && fileInfo.ModTime().Unix() != entry.Time {
+	if !entry.IsLink() && (*fileInfo).ModTime().Unix() != entry.Time {
 		modifiedTime := time.Unix(entry.Time, 0)
 		err := os.Chtimes(fullPath, modifiedTime, modifiedTime)
 		if err != nil {
-			LOG_ERROR("RESTORE_CHTIME", "Failed to set the modification time on %s: %v", entry.Path, err)
+			LOG_ERROR("RESTORE_CHTIME", "Failed to set the modification time: %v", err)
 			return false
 		}
 	}

-	if options.IncludeFileFlags {
-		err := entry.RestoreLateFileFlags(fullPath, fileInfo, options.FileFlagsMask)
-		if err != nil {
-			LOG_WARN("RESTORE_FLAGS", "Failed to set file flags on %s: %v", entry.Path, err)
-		}
-	}
-
 	return true
 }
@@ -772,39 +749,27 @@ func (files FileInfoCompare) Less(i, j int) bool {
 	}
 }

-type EntryListerOptions struct {
-	Patterns           []string
-	NoBackupFile       string
-	ExcludeByAttribute bool
-	ExcludeXattrs      bool
-	NormalizeXattr     bool
-	IncludeFileFlags   bool
-	IncludeSpecials    bool
-}
-
-type EntryLister interface {
-	ListDir(top string, path string, listingChannel chan *Entry, options *EntryListerOptions) (directoryList []*Entry, skippedFiles []string, err error)
-}
-
-type LocalDirectoryLister struct {
+type listEntryLinkKey struct {
+	dev uint64
+	ino uint64
+}
+
+type ListingState struct {
 	linkIndex int
 	linkTable map[listEntryLinkKey]int // map unique inode details to initially found path
 }

-func NewLocalDirectoryLister() *LocalDirectoryLister {
-	return &LocalDirectoryLister{
+func NewListingState() *ListingState {
+	return &ListingState{
 		linkTable: make(map[listEntryLinkKey]int),
 	}
 }

-// ListDir returns a list of entries representing file and subdirectories under the directory 'path'.
-// Entry paths are normalized as relative to 'top'.
-func (dl *LocalDirectoryLister) ListDir(top string, path string, listingChannel chan *Entry,
-	options *EntryListerOptions) (directoryList []*Entry, skippedFiles []string, err error) {
-
-	if options == nil {
-		options = &EntryListerOptions{}
-	}
+// ListEntries returns a list of entries representing file and subdirectories under the directory 'path'. Entry paths
+// are normalized as relative to 'top'. 'patterns' are used to exclude or include certain files.
+func ListEntries(top string, path string, patterns []string, nobackupFile string, excludeByAttribute bool,
+	listingState *ListingState,
+	listingChannel chan *Entry) (directoryList []*Entry, skippedFiles []string, err error) {

 	LOG_DEBUG("LIST_ENTRIES", "Listing %s", path)
@@ -817,12 +782,10 @@ func (dl *LocalDirectoryLister) ListDir(top string, path string, listingChannel
 		return directoryList, nil, err
 	}

-	patterns := options.Patterns
-
 	// This binary search works because ioutil.ReadDir returns files sorted by Name() by default
-	if options.NoBackupFile != "" {
-		ii := sort.Search(len(files), func(ii int) bool { return strings.Compare(files[ii].Name(), options.NoBackupFile) >= 0 })
-		if ii < len(files) && files[ii].Name() == options.NoBackupFile {
+	if nobackupFile != "" {
+		ii := sort.Search(len(files), func(ii int) bool { return strings.Compare(files[ii].Name(), nobackupFile) >= 0 })
+		if ii < len(files) && files[ii].Name() == nobackupFile {
 			LOG_DEBUG("LIST_NOBACKUP", "%s is excluded due to nobackup file", path)
 			return directoryList, skippedFiles, nil
 		}
 	}
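The comment in this hunk explains why a binary search is safe here: `ioutil.ReadDir` returns entries sorted by name, so `sort.Search` can locate the nobackup marker in O(log n) instead of scanning the directory. The lookup in isolation (the helper name is mine):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// containsName reproduces the nobackup-file lookup: names must already
// be sorted (as ioutil.ReadDir guarantees), so sort.Search returns the
// index of the first name >= target, and a final equality check decides
// whether the target is actually present.
func containsName(names []string, target string) bool {
	ii := sort.Search(len(names), func(ii int) bool {
		return strings.Compare(names[ii], target) >= 0
	})
	return ii < len(names) && names[ii] == target
}

func main() {
	names := []string{".duplicacy", ".nobackup", "a.txt", "b.txt"}
	fmt.Println(containsName(names, ".nobackup")) // true
	fmt.Println(containsName(names, "c.txt"))     // false
}
```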
@@ -844,44 +807,47 @@ func (dl *LocalDirectoryLister) ListDir(top string, path string, listingChannel
 		if f.Name() == DUPLICACY_DIRECTORY {
 			continue
 		}

+		if f.Mode()&os.ModeSocket != 0 {
+			continue
+		}
+
 		entry := CreateEntryFromFileInfo(f, normalizedPath)
 		if len(patterns) > 0 && !MatchPath(entry.Path, patterns) {
 			continue
 		}

-		linkKey, isHardLinked := entry.getHardLinkKey(f)
-		if isHardLinked {
-			if linkIndex, seen := dl.linkTable[linkKey]; seen {
-				if linkIndex == -1 {
-					LOG_DEBUG("LIST_EXCLUDE", "%s was excluded or skipped (hard link)", entry.Path)
-					continue
-				}
-				entry.Size = 0
-				if entry.IsFile() {
-					entry.Link = strconv.FormatInt(int64(linkIndex), 16)
-				} else {
-					entry.EndChunk = entryHardLinkTargetChunkMarker
-					entry.EndOffset = linkIndex
-				}
-				listingChannel <- entry
-				continue
-			} else {
-				if entry.IsFile() {
-					entry.Link = "/"
-				} else {
-					entry.EndChunk = entryHardLinkRootChunkMarker
-				}
-				dl.linkTable[linkKey] = -1
-			}
-		}
+		var linkKey *listEntryLinkKey
+
+		if runtime.GOOS != "windows" && !entry.IsDir() {
+			if stat := f.Sys().(*syscall.Stat_t); stat != nil && stat.Nlink > 1 {
+				k := listEntryLinkKey{dev: uint64(stat.Dev), ino: uint64(stat.Ino)}
+				if linkIndex, seen := listingState.linkTable[k]; seen {
+					if linkIndex == -1 {
+						LOG_DEBUG("LIST_EXCLUDE", "%s is excluded by attribute (hardlink)", entry.Path)
+						continue
+					}
+					entry.Size = 0
+					if entry.IsLink() {
+						entry.StartChunk = entrySymHardLinkTargetChunkMarker
+						entry.StartOffset = linkIndex
+					} else {
+						entry.Link = strconv.FormatInt(int64(linkIndex), 16)
+					}
+				} else {
+					if entry.IsLink() {
+						entry.StartChunk = entrySymHardLinkRootChunkMarker
+					} else {
+						entry.Link = "/"
+					}
+					listingState.linkTable[k] = -1
+					linkKey = &k
+				}
+			}
+		}

-		fullPath := joinPath(top, entry.Path)
 		if entry.IsLink() {
 			isRegular := false
-			isRegular, entry.Link, err = Readlink(fullPath)
+			isRegular, entry.Link, err = Readlink(joinPath(top, entry.Path))
 			if err != nil {
 				LOG_WARN("LIST_LINK", "Failed to read the symlink %s: %v", entry.Path, err)
 				skippedFiles = append(skippedFiles, entry.Path)
@@ -891,7 +857,7 @@ func (dl *LocalDirectoryLister) ListDir(top string, path string, listingChannel
 			if isRegular {
 				entry.Mode ^= uint32(os.ModeSymlink)
 			} else if path == "" && (filepath.IsAbs(entry.Link) || filepath.HasPrefix(entry.Link, `\\`)) && !strings.HasPrefix(entry.Link, normalizedTop) {
-				stat, err := os.Stat(fullPath)
+				stat, err := os.Stat(joinPath(top, entry.Path))
 				if err != nil {
 					LOG_WARN("LIST_LINK", "Failed to read the symlink: %v", err)
 					skippedFiles = append(skippedFiles, entry.Path)
@@ -909,35 +875,24 @@ func (dl *LocalDirectoryLister) ListDir(top string, path string, listingChannel
 				}
 				entry = newEntry
 			}
-		} else if options.IncludeSpecials && entry.IsSpecial() {
-			if err := entry.ReadSpecial(fullPath, f); err != nil {
-				LOG_WARN("LIST_DEV", "Failed to save device node %s: %v", entry.Path, err)
+		} else if entry.IsSpecial() {
+			if !entry.ReadSpecial(f) {
+				LOG_WARN("LIST_DEV", "Failed to save device node %s", entry.Path)
 				skippedFiles = append(skippedFiles, entry.Path)
 				continue
 			}
 		}

-		if !options.ExcludeXattrs {
-			if err := entry.ReadAttributes(f, fullPath, false); err != nil {
-				LOG_WARN("LIST_ATTR", "Failed to read xattrs on %s: %v", entry.Path, err)
-			}
-		}
-
-		// if the flags are already in the FileInfo we can keep them
-		if !entry.GetFileFlags(f) && options.IncludeFileFlags {
-			if err := entry.ReadFileFlags(f, fullPath); err != nil {
-				LOG_WARN("LIST_ATTR", "Failed to read file flags on %s: %v", entry.Path, err)
-			}
-		}
-
-		if options.ExcludeByAttribute && entry.Attributes != nil && excludedByAttribute(*entry.Attributes) {
+		entry.ReadAttributes(top)
+
+		if excludeByAttribute && entry.Attributes != nil && excludedByAttribute(*entry.Attributes) {
 			LOG_DEBUG("LIST_EXCLUDE", "%s is excluded by attribute", entry.Path)
 			continue
 		}

-		if isHardLinked {
-			dl.linkTable[linkKey] = dl.linkIndex
-			dl.linkIndex++
+		if linkKey != nil {
+			listingState.linkTable[*linkKey] = listingState.linkIndex
+			listingState.linkIndex++
 		}

 		if entry.IsDir() {
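Both sides of this hunk key hard links by the (device, inode) pair from the stat info: the first path seen for a pair becomes the hard-link root, and every later path is emitted as a child carrying the root's virtual index. A simplified, self-contained sketch of that bookkeeping (the diff additionally parks a `-1` sentinel in the table until exclusion checks pass; that step and all the entry mutation are omitted here, and the type/method names are mine):

```go
package main

import "fmt"

// linkKey identifies an inode uniquely across filesystems,
// like the diff's listEntryLinkKey.
type linkKey struct {
	dev uint64
	ino uint64
}

// linkTracker mirrors the linkTable/linkIndex bookkeeping: the first
// path seen for an inode becomes the hard-link root and is assigned the
// next virtual index; later paths resolve to that same index.
type linkTracker struct {
	nextIndex int
	table     map[linkKey]int
}

func newLinkTracker() *linkTracker {
	return &linkTracker{table: make(map[linkKey]int)}
}

// observe returns (index, isChild): isChild is false for the first
// occurrence of a key (the root) and true afterwards.
func (t *linkTracker) observe(k linkKey) (int, bool) {
	if idx, seen := t.table[k]; seen {
		return idx, true
	}
	idx := t.nextIndex
	t.nextIndex++
	t.table[k] = idx
	return idx, false
}

func main() {
	tr := newLinkTracker()
	keys := []linkKey{{1, 100}, {1, 101}, {1, 100}}
	for _, k := range keys {
		idx, child := tr.observe(k)
		fmt.Println(idx, child)
	}
}
```

The (dev, ino) pair is required because inode numbers are only unique within a single filesystem, which is why the real code reads both `stat.Dev` and `stat.Ino` from `syscall.Stat_t`.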

View File

@@ -7,6 +7,7 @@ package duplicacy
 import (
 	"bytes"
 	"encoding/json"
+	"io/ioutil"
 	"math/rand"
 	"os"
 	"path/filepath"
@@ -165,14 +166,12 @@ func TestEntryOrder(t *testing.T) {
 			continue
 		}

-		err := os.WriteFile(fullPath, []byte(file), 0700)
+		err := ioutil.WriteFile(fullPath, []byte(file), 0700)
 		if err != nil {
 			t.Errorf("WriteFile(%s) returned an error: %s", fullPath, err)
 		}
 	}

-	lister := NewLocalDirectoryLister()
-
 	directories := make([]*Entry, 0, 4)
 	directories = append(directories, CreateEntry("", 0, 0, 0))

@@ -183,7 +182,7 @@ func TestEntryOrder(t *testing.T) {
 	for len(directories) > 0 {
 		directory := directories[len(directories)-1]
 		directories = directories[:len(directories)-1]
-		subdirectories, _, err := lister.ListDir(testDir, directory.Path, entryChannel, nil)
+		subdirectories, _, err := ListEntries(testDir, directory.Path, nil, "", false, entryChannel)
 		if err != nil {
 			t.Errorf("ListEntries(%s, %s) returned an error: %s", testDir, directory.Path, err)
 		}
@@ -241,24 +240,16 @@ func TestEntryExcludeByAttribute(t *testing.T) {
 	if runtime.GOOS == "darwin" {
 		excludeAttrName = "com.apple.metadata:com_apple_backup_excludeItem"
 		excludeAttrValue = []byte("com.apple.backupd")
-	} else if runtime.GOOS == "linux" {
+	} else if runtime.GOOS == "linux" || runtime.GOOS == "freebsd" || runtime.GOOS == "netbsd" || runtime.GOOS == "solaris" {
 		excludeAttrName = "user.duplicacy_exclude"
-	} else if runtime.GOOS == "freebsd" || runtime.GOOS == "netbsd" {
-		excludeAttrName = "duplicacy_exclude"
 	} else {
-		t.Skip("skipping test, not darwin, linux, freebsd, or netbsd")
+		t.Skip("skipping test, not darwin, linux, freebsd, netbsd, or solaris")
 	}

-	tmpDir := ""
-	// on linux TempDir is usually a tmpfs which does not support xattrs
-	if runtime.GOOS == "linux" {
-		tmpDir = "."
-	}
-	testDir, err := os.MkdirTemp(tmpDir, "duplicacy_test")
-	if err != nil {
-		t.Errorf("Mkdirtmp() failed: %v", err)
-		return
-	}
+	testDir := filepath.Join(os.TempDir(), "duplicacy_test")
+	os.RemoveAll(testDir)
+	os.MkdirAll(testDir, 0700)

 	// Files or folders named with "exclude" below will have the exclusion attribute set on them
 	// When ListEntries is called with excludeByAttribute true, they should be excluded.
@@ -282,7 +273,7 @@ func TestEntryExcludeByAttribute(t *testing.T) {
 			continue
 		}

-		err := os.WriteFile(fullPath, []byte(file), 0700)
+		err := ioutil.WriteFile(fullPath, []byte(file), 0700)
 		if err != nil {
 			t.Errorf("WriteFile(%s) returned an error: %s", fullPath, err)
 		}
@@ -297,8 +288,6 @@ func TestEntryExcludeByAttribute(t *testing.T) {
 	for _, excludeByAttribute := range [2]bool{true, false} {
 		t.Logf("testing excludeByAttribute: %t", excludeByAttribute)

-		lister := NewLocalDirectoryLister()
-
 		directories := make([]*Entry, 0, 4)
 		directories = append(directories, CreateEntry("", 0, 0, 0))
@@ -309,11 +298,7 @@ func TestEntryExcludeByAttribute(t *testing.T) {
 		for len(directories) > 0 {
 			directory := directories[len(directories)-1]
 			directories = directories[:len(directories)-1]
-			subdirectories, _, err := lister.ListDir(testDir, directory.Path, entryChannel,
-				&EntryListerOptions{
-					ExcludeByAttribute: excludeByAttribute,
-				})
+			subdirectories, _, err := ListEntries(testDir, directory.Path, nil, "", excludeByAttribute, entryChannel)
 			if err != nil {
 				t.Errorf("ListEntries(%s, %s) returned an error: %s", testDir, directory.Path, err)
 			}
@@ -362,9 +347,10 @@ func TestEntryExcludeByAttribute(t *testing.T) {
 	}

-	if tmpDir != "" || !t.Failed() {
+	if !t.Failed() {
 		os.RemoveAll(testDir)
 	}
 }

 func TestEntryEncoding(t *testing.T) {
func TestEntryEncoding(t *testing.T) { func TestEntryEncoding(t *testing.T) {

View File

@@ -6,7 +6,7 @@ package duplicacy
 import (
 	"encoding/json"
-	"os"
+	"io/ioutil"
 	"syscall"
 	"unsafe"
 )
@@ -86,7 +86,7 @@ func keyringGet(key string) (value string) {
 		return ""
 	}

-	description, err := os.ReadFile(keyringFile)
+	description, err := ioutil.ReadFile(keyringFile)
 	if err != nil {
 		LOG_DEBUG("KEYRING_READ", "Keyring file not read: %v", err)
 		return ""
@@ -125,7 +125,7 @@ func keyringSet(key string, value string) bool {
 	keyring := make(map[string][]byte)

-	description, err := os.ReadFile(keyringFile)
+	description, err := ioutil.ReadFile(keyringFile)
 	if err == nil {
 		err = json.Unmarshal(description, &keyring)
 		if err != nil {
@@ -160,7 +160,7 @@ func keyringSet(key string, value string) bool {
 		return false
 	}

-	err = os.WriteFile(keyringFile, description, 0600)
+	err = ioutil.WriteFile(keyringFile, description, 0600)
 	if err != nil {
 		LOG_DEBUG("KEYRING_WRITE", "Failed to save the keyring storage to file %s: %v", keyringFile, err)
 		return false

View File

@@ -6,57 +6,27 @@ package duplicacy
 import (
 	"encoding/json"
-	"fmt"
+	"io/ioutil"
 	"os"
 	"path"
 	"reflect"
-	"strconv"
 	"strings"
 )

-type flagsMask uint32
-
-func (f flagsMask) MarshalJSON() ([]byte, error) {
-	return json.Marshal(fmt.Sprintf("0x%.8x", f))
-}
-
-func (f *flagsMask) UnmarshalJSON(data []byte) error {
-	var str string
-	if err := json.Unmarshal(data, &str); err != nil {
-		return err
-	}
-	if str[0] == '0' && (str[1] == 'x' || str[1] == 'X') {
-		str = str[2:]
-	}
-	v, err := strconv.ParseUint(string(str), 16, 32)
-	if err != nil {
-		return err
-	}
-	*f = flagsMask(v)
-	return nil
-}
 // Preference stores options for each storage.
 type Preference struct {
 	Name               string            `json:"name"`
 	SnapshotID         string            `json:"id"`
 	RepositoryPath     string            `json:"repository"`
 	StorageURL         string            `json:"storage"`
 	Encrypted          bool              `json:"encrypted"`
 	BackupProhibited   bool              `json:"no_backup"`
 	RestoreProhibited  bool              `json:"no_restore"`
 	DoNotSavePassword  bool              `json:"no_save_password"`
 	NobackupFile       string            `json:"nobackup_file"`
 	Keys               map[string]string `json:"keys"`
 	FiltersFile        string            `json:"filters"`
-	ExcludeOwner       bool              `json:"exclude_owner"`
 	ExcludeByAttribute bool              `json:"exclude_by_attribute"`
-	ExcludeXattrs      bool              `json:"exclude_xattrs"`
-	NormalizeXattrs    bool              `json:"normalize_xattrs"`
-	IncludeFileFlags   bool              `json:"include_file_flags"`
-	IncludeSpecials    bool              `json:"include_specials"`
-	FileFlagsMask      flagsMask         `json:"file_flags_mask"`
 }
 var preferencePath string

@@ -73,7 +43,7 @@ func LoadPreferences(repository string) bool {
 	}

 	if !stat.IsDir() {
-		content, err := os.ReadFile(preferencePath)
+		content, err := ioutil.ReadFile(preferencePath)
 		if err != nil {
 			LOG_ERROR("DOT_DUPLICACY_PATH", "Failed to locate the preference path: %v", err)
 			return false
@@ -91,7 +61,7 @@ func LoadPreferences(repository string) bool {
 		preferencePath = realPreferencePath
 	}

-	description, err := os.ReadFile(path.Join(preferencePath, "preferences"))
+	description, err := ioutil.ReadFile(path.Join(preferencePath, "preferences"))
 	if err != nil {
 		LOG_ERROR("PREFERENCE_OPEN", "Failed to read the preference file from repository %s: %v", repository, err)
 		return false
@@ -140,7 +110,7 @@ func SavePreferences() bool {
 	}

 	preferenceFile := path.Join(GetDuplicacyPreferencePath(), "preferences")
-	err = os.WriteFile(preferenceFile, description, 0600)
+	err = ioutil.WriteFile(preferenceFile, description, 0600)
 	if err != nil {
 		LOG_ERROR("PREFERENCE_WRITE", "Failed to save the preference file %s: %v", preferenceFile, err)
 		return false

View File

@@ -10,6 +10,7 @@ package duplicacy
 import (
 	"context"
 	"errors"
+	"io/ioutil"
 	"os"
 	"os/exec"
 	"regexp"
@@ -135,7 +136,7 @@ func CreateShadowCopy(top string, shadowCopy bool, timeoutInSeconds int) (shadow
 	}

 	// Create mount point
-	snapshotPath, err = os.MkdirTemp("/tmp/", "snp_")
+	snapshotPath, err = ioutil.TempDir("/tmp/", "snp_")
 	if err != nil {
 		LOG_ERROR("VSS_CREATE", "Failed to create temporary mount directory")
 		return top

View File

@@ -9,13 +9,15 @@ import (
 	"encoding/json"
 	"fmt"
 	"io"
+	"io/ioutil"
 	"os"
 	"path/filepath"
-	"sort"
 	"strings"
 	"time"
+	"sort"

 	"github.com/vmihailenco/msgpack"
 )
@@ -58,41 +60,20 @@ func CreateEmptySnapshot(id string) (snapshto *Snapshot) {
 type DirectoryListing struct {
 	directory string
 	files     *[]Entry
 }

-type ListFilesOptions struct {
-	NoBackupFile       string
-	FiltersFile        string
-	ExcludeByAttribute bool
-	ExcludeXattrs      bool
-	NormalizeXattr     bool
-	IncludeFileFlags   bool
-	IncludeSpecials    bool
-}
-
-func NewListFilesOptions(p *Preference) *ListFilesOptions {
-	return &ListFilesOptions{
-		NoBackupFile:       p.NobackupFile,
-		FiltersFile:        p.FiltersFile,
-		ExcludeByAttribute: p.ExcludeByAttribute,
-		ExcludeXattrs:      p.ExcludeXattrs,
-		NormalizeXattr:     p.NormalizeXattrs,
-		IncludeFileFlags:   p.IncludeFileFlags,
-		IncludeSpecials:    p.IncludeSpecials,
-	}
-}
-
-func (snapshot *Snapshot) ListLocalFiles(top string,
-	listingChannel chan *Entry, skippedDirectories *[]string, skippedFiles *[]string,
-	options *ListFilesOptions) {
-
-	if options.FiltersFile == "" {
-		options.FiltersFile = joinPath(GetDuplicacyPreferencePath(), "filters")
-	}
-
-	patterns := ProcessFilters(options.FiltersFile)
-
-	lister := NewLocalDirectoryLister()
+func (snapshot *Snapshot) ListLocalFiles(top string, nobackupFile string,
+	filtersFile string, excludeByAttribute bool, listingChannel chan *Entry,
+	skippedDirectories *[]string, skippedFiles *[]string) {
+
+	var patterns []string
+	listingState := NewListingState()
+
+	if filtersFile == "" {
+		filtersFile = joinPath(GetDuplicacyPreferencePath(), "filters")
+	}
+
+	patterns = ProcessFilters(filtersFile)

 	directories := make([]*Entry, 0, 256)
 	directories = append(directories, CreateEntry("", 0, 0, 0))

@@ -101,16 +82,7 @@ func (snapshot *Snapshot) ListLocalFiles(top string,
 		directory := directories[len(directories)-1]
 		directories = directories[:len(directories)-1]

-		subdirectories, skipped, err := lister.ListDir(top, directory.Path, listingChannel,
-			&EntryListerOptions{
-				Patterns:           patterns,
-				NoBackupFile:       options.NoBackupFile,
-				ExcludeByAttribute: options.ExcludeByAttribute,
-				ExcludeXattrs:      options.ExcludeXattrs,
-				NormalizeXattr:     options.NormalizeXattr,
-				IncludeFileFlags:   options.IncludeFileFlags,
-				IncludeSpecials:    options.IncludeSpecials,
-			})
+		subdirectories, skipped, err := ListEntries(top, directory.Path, patterns, nobackupFile, excludeByAttribute, listingState, listingChannel)
 		if err != nil {
 			if directory.Path == "" {
 				LOG_ERROR("LIST_FAILURE", "Failed to list the repository root: %v", err)
@@ -133,7 +105,7 @@ func (snapshot *Snapshot) ListLocalFiles(top string,
 	close(listingChannel)
 }

-func (snapshot *Snapshot) ListRemoteFiles(config *Config, chunkOperator *ChunkOperator, entryOut func(*Entry) bool) {
+func (snapshot *Snapshot)ListRemoteFiles(config *Config, chunkOperator *ChunkOperator, entryOut func(*Entry) bool) {

 	var chunks []string
 	for _, chunkHash := range snapshot.FileSequence {
@@ -153,12 +125,12 @@ func (snapshot *Snapshot) ListRemoteFiles(config *Config, chunkOperator *ChunkOp
 		if chunk != nil {
 			config.PutChunk(chunk)
 		}
-	}()
+	} ()

 	// Normally if Version is 0 then the snapshot is created by CLI v2 but unfortunately CLI 3.0.1 does not set the
 	// version bit correctly when copying old backups. So we need to check the first byte -- if it is '[' then it is
 	// the old format. The new format starts with a string encoded in msgpack and the first byte can't be '['.
-	if snapshot.Version == 0 || reader.GetFirstByte() == '[' {
+	if snapshot.Version == 0 || reader.GetFirstByte() == '['{
 		LOG_INFO("SNAPSHOT_VERSION", "snapshot %s at revision %d is encoded in an old version format", snapshot.ID, snapshot.Revision)
 		files := make([]*Entry, 0)
 		decoder := json.NewDecoder(reader)
@@ -229,7 +201,7 @@ func (snapshot *Snapshot) ListRemoteFiles(config *Config, chunkOperator *ChunkOp
 	} else {
 		LOG_ERROR("SNAPSHOT_VERSION", "snapshot %s at revision %d is encoded in unsupported version %d format",
 			snapshot.ID, snapshot.Revision, snapshot.Version)
 		return
 	}
@@ -272,7 +244,7 @@ func ProcessFilterFile(patternFile string, includedFiles []string) (patterns []s
 	}
 	includedFiles = append(includedFiles, patternFile)

 	LOG_INFO("SNAPSHOT_FILTER", "Parsing filter file %s", patternFile)
-	patternFileContent, err := os.ReadFile(patternFile)
+	patternFileContent, err := ioutil.ReadFile(patternFile)
 	if err == nil {
 		patternFileLines := strings.Split(string(patternFileContent), "\n")
 		patterns = ProcessFilterLines(patternFileLines, includedFiles)
@@ -292,7 +264,7 @@ func ProcessFilterLines(patternFileLines []string, includedFiles []string) (patt
 		if patternIncludeFile == "" {
 			continue
 		}
-		if !filepath.IsAbs(patternIncludeFile) {
+		if ! filepath.IsAbs(patternIncludeFile) {
 			basePath := ""
 			if len(includedFiles) == 0 {
 				basePath, _ = os.Getwd()
@@ -519,3 +491,4 @@ func encodeSequence(sequence []string) []string {

 	return sequenceInHex
 }


@@ -18,10 +18,10 @@ import (
"sort"
"strconv"
"strings"
"sync"
"sync/atomic"
"text/tabwriter"
"time"
"github.com/aryann/difflib"
)
@@ -191,7 +191,7 @@ type SnapshotManager struct {
fileChunk *Chunk
snapshotCache *FileStorage
chunkOperator *ChunkOperator
}
// CreateSnapshotManager creates a snapshot manager
@@ -738,7 +738,7 @@ func (manager *SnapshotManager) ListSnapshots(snapshotID string, revisionsToList
totalFileSize := int64(0)
lastChunk := 0
snapshot.ListRemoteFiles(manager.config, manager.chunkOperator, func(file *Entry) bool {
if file.IsFile() {
totalFiles++
totalFileSize += file.Size
@@ -753,7 +753,7 @@ func (manager *SnapshotManager) ListSnapshots(snapshotID string, revisionsToList
return true
})
snapshot.ListRemoteFiles(manager.config, manager.chunkOperator, func(file *Entry) bool {
if file.IsFile() {
LOG_INFO("SNAPSHOT_FILE", "%s", file.String(maxSizeDigits))
}
@@ -908,7 +908,7 @@ func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToChe
_, exist, _, err := manager.storage.FindChunk(0, chunkID, false)
if err != nil {
LOG_WARN("SNAPSHOT_VALIDATE", "Failed to check the existence of chunk %s: %v",
chunkID, err)
} else if exist {
LOG_INFO("SNAPSHOT_VALIDATE", "Chunk %s is confirmed to exist", chunkID)
continue
@@ -1031,7 +1031,7 @@ func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToChe
if err != nil {
LOG_WARN("SNAPSHOT_VERIFY", "Failed to save the verified chunks file: %v", err)
} else {
LOG_INFO("SNAPSHOT_VERIFY", "Added %d chunks to the list of verified chunks", len(verifiedChunks)-numberOfVerifiedChunks)
}
}
}
@@ -1073,7 +1073,7 @@ func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToChe
defer CatchLogException()
for {
chunkIndex, ok := <-chunkChannel
if !ok {
wg.Done()
return
@@ -1093,14 +1093,14 @@ func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToChe
elapsedTime := time.Now().Sub(startTime).Seconds()
speed := int64(float64(downloadedChunkSize) / elapsedTime)
remainingTime := int64(float64(totalChunks-downloadedChunks) / float64(downloadedChunks) * elapsedTime)
percentage := float64(downloadedChunks) / float64(totalChunks) * 100.0
LOG_INFO("VERIFY_PROGRESS", "Verified chunk %s (%d/%d), %sB/s %s %.1f%%",
chunkID, downloadedChunks, totalChunks, PrettySize(speed), PrettyTime(remainingTime), percentage)
manager.config.PutChunk(chunk)
}
}()
}
for chunkIndex := range chunkHashes {
@@ -1289,10 +1289,10 @@ func (manager *SnapshotManager) PrintSnapshot(snapshot *Snapshot) bool {
}
// Don't print the ending bracket
fmt.Printf("%s", string(description[:len(description)-2]))
fmt.Printf(",\n \"files\": [\n")
isFirstFile := true
snapshot.ListRemoteFiles(manager.config, manager.chunkOperator, func(file *Entry) bool {
fileDescription, _ := json.MarshalIndent(file.convertToObject(false), "", " ")
@@ -1322,7 +1322,7 @@ func (manager *SnapshotManager) VerifySnapshot(snapshot *Snapshot) bool {
}
files := make([]*Entry, 0)
snapshot.ListRemoteFiles(manager.config, manager.chunkOperator, func(file *Entry) bool {
if file.IsFile() && file.Size != 0 {
file.Attributes = nil
files = append(files, file)
@@ -1426,7 +1426,7 @@ func (manager *SnapshotManager) RetrieveFile(snapshot *Snapshot, file *Entry, la
func (manager *SnapshotManager) FindFile(snapshot *Snapshot, filePath string, suppressError bool) *Entry {
var found *Entry
snapshot.ListRemoteFiles(manager.config, manager.chunkOperator, func(entry *Entry) bool {
if entry.Path == filePath {
found = entry
return false
@@ -1479,8 +1479,8 @@ func (manager *SnapshotManager) PrintFile(snapshotID string, revision int, path
file := manager.FindFile(snapshot, path, false)
if !manager.RetrieveFile(snapshot, file, nil, func(chunk []byte) {
fmt.Printf("%s", chunk)
}) {
LOG_ERROR("SNAPSHOT_RETRIEVE", "File %s is corrupted in snapshot %s at revision %d",
path, snapshot.ID, snapshot.Revision)
return false
@@ -1491,8 +1491,7 @@ func (manager *SnapshotManager) PrintFile(snapshotID string, revision int, path
// Diff compares two snapshots, or two revisions of a file if the file argument is given.
func (manager *SnapshotManager) Diff(top string, snapshotID string, revisions []int,
filePath string, compareByHash bool,
options *ListFilesOptions) bool {
LOG_DEBUG("DIFF_PARAMETERS", "top: %s, id: %s, revision: %v, path: %s, compareByHash: %t",
top, snapshotID, revisions, filePath, compareByHash)
@@ -1501,7 +1500,7 @@ func (manager *SnapshotManager) Diff(top string, snapshotID string, revisions []
defer func() {
manager.chunkOperator.Stop()
manager.chunkOperator = nil
}()
var leftSnapshot *Snapshot
var rightSnapshot *Snapshot
@@ -1517,11 +1516,11 @@ func (manager *SnapshotManager) Diff(top string, snapshotID string, revisions []
localListingChannel := make(chan *Entry)
go func() {
defer CatchLogException()
rightSnapshot.ListLocalFiles(top, localListingChannel, nil, nil, options)
}()
for entry := range localListingChannel {
entry.Attributes = nil // attributes are not compared
rightSnapshotFiles = append(rightSnapshotFiles, entry)
}
@@ -1726,7 +1725,7 @@ func (manager *SnapshotManager) ShowHistory(top string, snapshotID string, revis
defer func() {
manager.chunkOperator.Stop()
manager.chunkOperator = nil
}()
var err error
@@ -1822,16 +1821,15 @@ func (manager *SnapshotManager) resurrectChunk(fossilPath string, chunkID string
// PruneSnapshots deletes snapshots by revisions, tags, or a retention policy. The main idea is two-step
// fossil collection.
// 1. Delete snapshots specified by revision, retention policy, with a tag. Find any resulting unreferenced
// chunks, and mark them as fossils (by renaming). After that, create a fossil collection file containing
// fossils collected during the current run, and temporary files encountered. Also in the file is the latest
// revision for each snapshot id. Save this file to a local directory.
//
// 2. On the next run, check if there is any new revision for each snapshot, or if the latest revision is too
// old, for instance, more than 7 days. This step is to identify snapshots that were being created while
// step 1 was in progress. For each fossil referenced by any of these snapshots, move it back to the
// normal chunk directory.
//
// Note that a snapshot being created when step 2 is in progress may reference a fossil. To avoid this
// problem, never remove the latest revision (unless exclusive is true), and only cache chunks referenced
@@ -1855,7 +1853,7 @@ func (manager *SnapshotManager) PruneSnapshots(selfID string, snapshotID string,
defer func() {
manager.chunkOperator.Stop()
manager.chunkOperator = nil
}()
prefPath := GetDuplicacyPreferencePath()
logDir := path.Join(prefPath, "logs")
@@ -2546,7 +2544,7 @@ func (manager *SnapshotManager) CheckSnapshot(snapshot *Snapshot) (err error) {
numberOfChunks, len(snapshot.ChunkLengths))
}
snapshot.ListRemoteFiles(manager.config, manager.chunkOperator, func(entry *Entry) bool {
if lastEntry != nil && lastEntry.Compare(entry) >= 0 && !strings.Contains(lastEntry.Path, "\ufffd") {
err = fmt.Errorf("The entry %s appears before the entry %s", lastEntry.Path, entry.Path)
@@ -2570,7 +2568,7 @@ func (manager *SnapshotManager) CheckSnapshot(snapshot *Snapshot) (err error) {
}
if entry.EndChunk < entry.StartChunk {
err = fmt.Errorf("The file %s starts at chunk %d and ends at chunk %d",
entry.Path, entry.StartChunk, entry.EndChunk)
return false
}
@@ -2600,7 +2598,7 @@ func (manager *SnapshotManager) CheckSnapshot(snapshot *Snapshot) (err error) {
if entry.Size != fileSize {
err = fmt.Errorf("The file %s has a size of %d but the total size of chunks is %d",
entry.Path, entry.Size, fileSize)
return false
}
return true
@@ -2649,7 +2647,7 @@ func (manager *SnapshotManager) DownloadFile(path string, derivationKey string)
err = manager.storage.UploadFile(0, path, newChunk.GetBytes())
if err != nil {
LOG_WARN("DOWNLOAD_REWRITE", "Failed to re-upload the file %s: %v", path, err)
} else {
LOG_INFO("DOWNLOAD_REWRITE", "The file %s has been re-uploaded", path)
}
}


@@ -756,8 +756,6 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
LOG_ERROR("STORAGE_CREATE", "Failed to load the Storj storage at %s: %v", storageURL, err)
return nil
}
SavePassword(preference, "storj_key", apiKey)
SavePassword(preference, "storj_passphrase", passphrase)
return storjStorage
} else if matched[1] == "smb" {
server := matched[3]


@@ -0,0 +1,94 @@
// Copyright (c) Acrosync LLC. All rights reserved.
// Free for personal use and commercial trial
// Commercial use requires per-user licenses available from https://duplicacy.com
//go:build freebsd || netbsd || darwin
// +build freebsd netbsd darwin
package duplicacy
import (
"bytes"
"encoding/binary"
"os"
"path/filepath"
"syscall"
"github.com/pkg/xattr"
)
const bsdFileFlagsKey = "\x00bf"
func (entry *Entry) ReadAttributes(top string) {
fullPath := filepath.Join(top, entry.Path)
fileInfo, err := os.Lstat(fullPath)
if err != nil {
return
}
if !entry.IsSpecial() {
attributes, _ := xattr.LList(fullPath)
if len(attributes) > 0 {
entry.Attributes = &map[string][]byte{}
for _, name := range attributes {
attribute, err := xattr.LGet(fullPath, name)
if err == nil {
(*entry.Attributes)[name] = attribute
}
}
}
}
if err := entry.readFileFlags(fileInfo); err != nil {
LOG_INFO("ATTR_BACKUP", "Could not backup flags for file %s: %v", fullPath, err)
}
}
func (entry *Entry) SetAttributesToFile(fullPath string) {
if !entry.IsSpecial() {
names, _ := xattr.LList(fullPath)
for _, name := range names {
newAttribute, found := (*entry.Attributes)[name]
if found {
oldAttribute, _ := xattr.LGet(fullPath, name)
if !bytes.Equal(oldAttribute, newAttribute) {
xattr.LSet(fullPath, name, newAttribute)
}
delete(*entry.Attributes, name)
} else {
xattr.LRemove(fullPath, name)
}
}
for name, attribute := range *entry.Attributes {
if len(name) > 0 && name[0] == '\x00' {
continue
}
xattr.LSet(fullPath, name, attribute)
}
}
if err := entry.restoreLateFileFlags(fullPath); err != nil {
LOG_DEBUG("ATTR_RESTORE", "Could not restore flags for file %s: %v", fullPath, err)
}
}
func (entry *Entry) readFileFlags(fileInfo os.FileInfo) error {
stat, ok := fileInfo.Sys().(*syscall.Stat_t)
if ok && stat.Flags != 0 {
if entry.Attributes == nil {
entry.Attributes = &map[string][]byte{}
}
v := make([]byte, 4)
binary.LittleEndian.PutUint32(v, stat.Flags)
(*entry.Attributes)[bsdFileFlagsKey] = v
LOG_DEBUG("ATTR_READ", "Read flags 0x%x for %s", stat.Flags, entry.Path)
}
return nil
}
func (entry *Entry) RestoreEarlyDirFlags(path string) error {
return nil
}
func (entry *Entry) RestoreEarlyFileFlags(f *os.File) error {
return nil
}


@@ -8,31 +8,26 @@ import (
"encoding/binary"
"os"
"strings"
"syscall"
"golang.org/x/sys/unix"
)
func excludedByAttribute(attributes map[string][]byte) bool {
value, ok := attributes["com.apple.metadata:com_apple_backup_excludeItem"]
excluded := ok && strings.Contains(string(value), "com.apple.backupd")
if !excluded {
flags, ok := attributes[darwinFileFlagsKey]
excluded = ok && (binary.LittleEndian.Uint32(flags)&unix.UF_NODUMP) != 0
}
return excluded
}
func (entry *Entry) RestoreSpecial(fullPath string) error {
mode := entry.Mode & uint32(fileModeMask)
if entry.Mode&uint32(os.ModeNamedPipe) != 0 {
mode |= unix.S_IFIFO
} else if entry.Mode&uint32(os.ModeCharDevice) != 0 {
mode |= unix.S_IFCHR
} else if entry.Mode&uint32(os.ModeDevice) != 0 {
mode |= unix.S_IFBLK
} else {
return nil
}
return unix.Mknod(fullPath, mode, int(entry.GetRdev()))
}
func (entry *Entry) restoreLateFileFlags(path string) error {
if entry.Attributes == nil {
return nil
}
if v, have := (*entry.Attributes)[bsdFileFlagsKey]; have {
f, err := os.OpenFile(path, os.O_RDONLY|syscall.O_SYMLINK, 0)
if err != nil {
return err
}
err = syscall.Fchflags(int(f.Fd()), int(binary.LittleEndian.Uint32(v)))
f.Close()
return err
}
return nil
}


@@ -5,34 +5,221 @@
package duplicacy
import (
"bytes"
"encoding/binary"
"os"
"path/filepath"
"syscall"
"unsafe"
"golang.org/x/sys/unix"
"github.com/pkg/xattr"
)
func excludedByAttribute(attributes map[string][]byte) bool {
_, excluded := attributes["user.duplicacy_exclude"]
if !excluded {
flags, ok := attributes[linuxFileFlagsKey]
excluded = ok && (binary.LittleEndian.Uint32(flags)&linux_FS_NODUMP_FL) != 0
}
return excluded
}
func (entry *Entry) RestoreSpecial(fullPath string) error {
mode := entry.Mode & uint32(fileModeMask)
if entry.Mode&uint32(os.ModeNamedPipe) != 0 {
mode |= unix.S_IFIFO
} else if entry.Mode&uint32(os.ModeCharDevice) != 0 {
mode |= unix.S_IFCHR
} else if entry.Mode&uint32(os.ModeDevice) != 0 {
mode |= unix.S_IFBLK
} else if entry.Mode&uint32(os.ModeSocket) != 0 {
mode |= unix.S_IFSOCK
} else {
return nil
}
return unix.Mknod(fullPath, mode, int(entry.GetRdev()))
}
const (
linux_FS_SECRM_FL = 0x00000001 /* Secure deletion */
linux_FS_UNRM_FL = 0x00000002 /* Undelete */
linux_FS_COMPR_FL = 0x00000004 /* Compress file */
linux_FS_SYNC_FL = 0x00000008 /* Synchronous updates */
linux_FS_IMMUTABLE_FL = 0x00000010 /* Immutable file */
linux_FS_APPEND_FL = 0x00000020 /* writes to file may only append */
linux_FS_NODUMP_FL = 0x00000040 /* do not dump file */
linux_FS_NOATIME_FL = 0x00000080 /* do not update atime */
linux_FS_NOCOMP_FL = 0x00000400 /* Don't compress */
linux_FS_JOURNAL_DATA_FL = 0x00004000 /* Reserved for ext3 */
linux_FS_NOTAIL_FL = 0x00008000 /* file tail should not be merged */
linux_FS_DIRSYNC_FL = 0x00010000 /* dirsync behaviour (directories only) */
linux_FS_TOPDIR_FL = 0x00020000 /* Top of directory hierarchies */
linux_FS_NOCOW_FL = 0x00800000 /* Do not cow file */
linux_FS_PROJINHERIT_FL = 0x20000000 /* Create with parents projid */
linux_FS_IOC_GETFLAGS uintptr = 0x80086601
linux_FS_IOC_SETFLAGS uintptr = 0x40086602
linuxIocFlagsFileEarly = linux_FS_SECRM_FL | linux_FS_UNRM_FL | linux_FS_COMPR_FL | linux_FS_NODUMP_FL | linux_FS_NOATIME_FL | linux_FS_NOCOMP_FL | linux_FS_JOURNAL_DATA_FL | linux_FS_NOTAIL_FL | linux_FS_NOCOW_FL
linuxIocFlagsDirEarly = linux_FS_TOPDIR_FL | linux_FS_PROJINHERIT_FL
linuxIocFlagsLate = linux_FS_SYNC_FL | linux_FS_IMMUTABLE_FL | linux_FS_APPEND_FL | linux_FS_DIRSYNC_FL
linuxFileFlagsKey = "\x00lf"
)
func ioctl(f *os.File, request uintptr, attrp *uint32) error {
argp := uintptr(unsafe.Pointer(attrp))
if _, _, errno := syscall.Syscall(syscall.SYS_IOCTL, f.Fd(), request, argp); errno != 0 {
return os.NewSyscallError("ioctl", errno)
}
return nil
}
type xattrHandle struct {
f *os.File
fullPath string
}
func (x xattrHandle) list() ([]string, error) {
if x.f != nil {
return xattr.FList(x.f)
} else {
return xattr.LList(x.fullPath)
}
}
func (x xattrHandle) get(name string) ([]byte, error) {
if x.f != nil {
return xattr.FGet(x.f, name)
} else {
return xattr.LGet(x.fullPath, name)
}
}
func (x xattrHandle) set(name string, value []byte) error {
if x.f != nil {
return xattr.FSet(x.f, name, value)
} else {
return xattr.LSet(x.fullPath, name, value)
}
}
func (x xattrHandle) remove(name string) error {
if x.f != nil {
return xattr.FRemove(x.f, name)
} else {
return xattr.LRemove(x.fullPath, name)
}
}
func (entry *Entry) ReadAttributes(top string) {
fullPath := filepath.Join(top, entry.Path)
x := xattrHandle{nil, fullPath}
if !entry.IsLink() {
var err error
x.f, err = os.OpenFile(fullPath, os.O_RDONLY|syscall.O_NOFOLLOW|syscall.O_NONBLOCK, 0)
if err != nil {
// FIXME: We really should return errors for failure to read
return
}
}
attributes, _ := x.list()
if len(attributes) > 0 {
entry.Attributes = &map[string][]byte{}
}
for _, name := range attributes {
attribute, err := x.get(name)
if err == nil {
(*entry.Attributes)[name] = attribute
}
}
if entry.IsFile() || entry.IsDir() {
if err := entry.readFileFlags(x.f); err != nil {
LOG_INFO("ATTR_BACKUP", "Could not backup flags for file %s: %v", fullPath, err)
}
}
x.f.Close()
}
func (entry *Entry) SetAttributesToFile(fullPath string) {
x := xattrHandle{nil, fullPath}
if !entry.IsLink() {
var err error
x.f, err = os.OpenFile(fullPath, os.O_RDONLY|syscall.O_NOFOLLOW, 0)
if err != nil {
return
}
}
names, _ := x.list()
for _, name := range names {
newAttribute, found := (*entry.Attributes)[name]
if found {
oldAttribute, _ := x.get(name)
if !bytes.Equal(oldAttribute, newAttribute) {
x.set(name, newAttribute)
}
delete(*entry.Attributes, name)
} else {
x.remove(name)
}
}
for name, attribute := range *entry.Attributes {
if len(name) > 0 && name[0] == '\x00' {
continue
}
x.set(name, attribute)
}
if entry.IsFile() || entry.IsDir() {
if err := entry.restoreLateFileFlags(x.f); err != nil {
LOG_DEBUG("ATTR_RESTORE", "Could not restore flags for file %s: %v", fullPath, err)
}
}
x.f.Close()
}
func (entry *Entry) readFileFlags(f *os.File) error {
var flags uint32
if err := ioctl(f, linux_FS_IOC_GETFLAGS, &flags); err != nil {
return err
}
if flags != 0 {
if entry.Attributes == nil {
entry.Attributes = &map[string][]byte{}
}
v := make([]byte, 4)
binary.LittleEndian.PutUint32(v, flags)
(*entry.Attributes)[linuxFileFlagsKey] = v
LOG_DEBUG("ATTR_READ", "Read flags 0x%x for %s", flags, entry.Path)
}
return nil
}
func (entry *Entry) RestoreEarlyDirFlags(path string) error {
if entry.Attributes == nil {
return nil
}
if v, have := (*entry.Attributes)[linuxFileFlagsKey]; have {
flags := binary.LittleEndian.Uint32(v) & linuxIocFlagsDirEarly
f, err := os.OpenFile(path, os.O_RDONLY|syscall.O_DIRECTORY, 0)
if err != nil {
return err
}
LOG_DEBUG("ATTR_RESTORE", "Restore dir flags (early) 0x%x for %s", flags, entry.Path)
err = ioctl(f, linux_FS_IOC_SETFLAGS, &flags)
f.Close()
return err
}
return nil
}
func (entry *Entry) RestoreEarlyFileFlags(f *os.File) error {
if entry.Attributes == nil {
return nil
}
if v, have := (*entry.Attributes)[linuxFileFlagsKey]; have {
flags := binary.LittleEndian.Uint32(v) & linuxIocFlagsFileEarly
LOG_DEBUG("ATTR_RESTORE", "Restore flags (early) 0x%x for %s", flags, entry.Path)
return ioctl(f, linux_FS_IOC_SETFLAGS, &flags)
}
return nil
}
func (entry *Entry) restoreLateFileFlags(f *os.File) error {
if entry.Attributes == nil {
return nil
}
if v, have := (*entry.Attributes)[linuxFileFlagsKey]; have {
flags := binary.LittleEndian.Uint32(v) & (linuxIocFlagsFileEarly | linuxIocFlagsDirEarly | linuxIocFlagsLate)
LOG_DEBUG("ATTR_RESTORE", "Restore flags (late) 0x%x for %s", flags, entry.Path)
return ioctl(f, linux_FS_IOC_SETFLAGS, &flags)
}
return nil
}
func excludedByAttribute(attributes map[string][]byte) bool {
_, ok := attributes["user.duplicacy_exclude"]
return ok
}


@@ -8,7 +8,6 @@
package duplicacy
import (
"fmt"
"os"
"path"
"syscall"
@@ -21,19 +20,24 @@ func Readlink(path string) (isRegular bool, s string, err error) {
return false, s, err
}
func GetOwner(entry *Entry, fileInfo os.FileInfo) {
stat, ok := fileInfo.Sys().(*syscall.Stat_t)
if ok && stat != nil {
entry.UID = int(stat.Uid)
entry.GID = int(stat.Gid)
} else {
entry.UID = -1
entry.GID = -1
}
}
func SetOwner(fullPath string, entry *Entry, fileInfo os.FileInfo) bool {
stat, ok := fileInfo.Sys().(*syscall.Stat_t)
if ok && stat != nil && (int(stat.Uid) != entry.UID || int(stat.Gid) != entry.GID) {
if entry.UID != -1 && entry.GID != -1 {
err := os.Lchown(fullPath, entry.UID, entry.GID)
if err != nil {
LOG_ERROR("RESTORE_CHOWN", "Failed to change uid or gid on %s: %v", entry.Path, err)
return false
}
}
@@ -42,63 +46,46 @@ func SetOwner(fullPath string, entry *Entry, fileInfo os.FileInfo) bool {
return true
}
type listEntryLinkKey struct {
dev uint64
ino uint64
}
func (entry *Entry) getHardLinkKey(f os.FileInfo) (key listEntryLinkKey, linked bool) {
if entry.IsDir() {
return
}
stat := f.Sys().(*syscall.Stat_t)
if stat.Nlink <= 1 {
return
}
key.dev = uint64(stat.Dev)
key.ino = uint64(stat.Ino)
linked = true
return
}
func (entry *Entry) ReadSpecial(fileInfo os.FileInfo) bool {
if fileInfo.Mode()&(os.ModeDevice|os.ModeCharDevice) == 0 {
return true
}
stat := fileInfo.Sys().(*syscall.Stat_t)
if stat == nil {
return false
}
entry.Size = 0
rdev := uint64(stat.Rdev)
entry.StartChunk = int(rdev & 0xFFFFFFFF)
entry.StartOffset = int(rdev >> 32)
return true
}
func (entry *Entry) GetRdev() uint64 {
return uint64(entry.StartChunk) | uint64(entry.StartOffset)<<32
}
func (entry *Entry) RestoreSpecial(fullPath string) error {
mode := entry.Mode & uint32(fileModeMask)
if entry.Mode&uint32(os.ModeNamedPipe) != 0 {
mode |= syscall.S_IFIFO
} else if entry.Mode&uint32(os.ModeCharDevice) != 0 {
mode |= syscall.S_IFCHR
} else if entry.Mode&uint32(os.ModeDevice) != 0 {
mode |= syscall.S_IFBLK
} else {
return nil
}
return syscall.Mknod(fullPath, mode, int(entry.GetRdev()))
}
func (entry *Entry) IsSameSpecial(fileInfo os.FileInfo) bool {
stat := fileInfo.Sys().(*syscall.Stat_t)
if stat == nil {
return false
}
return (uint32(fileInfo.Mode()) == entry.Mode) && (uint64(stat.Rdev) == entry.GetRdev())
}
func (entry *Entry) FmtSpecial() string {
var c string
mode := entry.Mode & uint32(os.ModeType)
if mode&uint32(os.ModeNamedPipe) != 0 {
c = "p"
} else if mode&uint32(os.ModeCharDevice) != 0 {
c = "c"
} else if mode&uint32(os.ModeDevice) != 0 {
c = "b"
} else if mode&uint32(os.ModeSocket) != 0 {
c = "s"
} else {
return ""
}
rdev := entry.GetRdev()
return fmt.Sprintf("%s (%d, %d)", c, unix.Major(rdev), unix.Minor(rdev))
}
func MakeHardlink(source string, target string) error { func MakeHardlink(source string, target string) error {


@@ -0,0 +1,13 @@
// Copyright (c) Acrosync LLC. All rights reserved.
// Free for personal use and commercial trial
// Commercial use requires per-user licenses available from https://duplicacy.com
//go:build freebsd || netbsd || solaris
// +build freebsd netbsd solaris
package duplicacy
func excludedByAttribute(attributes map[string][]byte) bool {
_, ok := attributes["user.duplicacy_exclude"]
return ok
}


@@ -56,11 +56,7 @@ const (
// Readlink returns the destination of the named symbolic link.
func Readlink(path string) (isRegular bool, s string, err error) {
pPath, err := syscall.UTF16PtrFromString(path)
if err != nil {
return false, "", err
}
fd, err := syscall.CreateFile(pPath, FILE_READ_ATTRIBUTES,
syscall.FILE_SHARE_READ, nil, syscall.OPEN_EXISTING,
syscall.FILE_FLAG_OPEN_REPARSE_POINT|syscall.FILE_FLAG_BACKUP_SEMANTICS, 0)
if err != nil {
@@ -105,15 +101,30 @@ func Readlink(path string) (isRegular bool, s string, err error) {
return false, s, nil
}
func GetOwner(entry *Entry, fileInfo os.FileInfo) {
entry.UID = -1
entry.GID = -1
}
func SetOwner(fullPath string, entry *Entry, fileInfo os.FileInfo) bool {
return true
}
func (entry *Entry) ReadAttributes(top string) {
}
func (entry *Entry) SetAttributesToFile(fullPath string) {
}
func (entry *Entry) ReadDeviceNode(fileInfo os.FileInfo) bool {
return true
}
func (entry *Entry) RestoreSpecial(fullPath string) error {
return nil
}
func MakeHardlink(source string, target string) error {
return os.Link(source, target)
}
@@ -133,28 +144,18 @@ func SplitDir(fullPath string) (dir string, file string) {
return fullPath[:i+1], fullPath[i+1:]
}
func excludedByAttribute(attributes map[string][]byte) bool {
return false
}
type listEntryLinkKey struct{}
func (entry *Entry) getHardLinkKey(f os.FileInfo) (key listEntryLinkKey, linked bool) {
return
}
func (entry *Entry) ReadSpecial(fullPath string, fileInfo os.FileInfo) error {
return nil
}
func (entry *Entry) IsSameSpecial(fileInfo os.FileInfo) bool {
return false
}
func (entry *Entry) RestoreSpecial(fullPath string) error {
return nil
}
func (entry *Entry) FmtSpecial() string {
return ""
}
func (entry *Entry) ReadFileFlags(f *os.File) error {
return nil
}
func (entry *Entry) RestoreEarlyDirFlags(path string) error {
return nil
}
func (entry *Entry) RestoreEarlyFileFlags(f *os.File) error {
return nil
}
func (entry *Entry) RestoreLateFileFlags(f *os.File) error {
return nil
}


@@ -2,37 +2,29 @@
// Free for personal use and commercial trial
// Commercial use requires per-user licenses available from https://duplicacy.com
-//go:build freebsd
-// +build freebsd
+//go:build freebsd || netbsd
+// +build freebsd netbsd
package duplicacy
import (
-    "bytes"
    "encoding/binary"
    "os"
-    "path/filepath"
    "syscall"
+    "unsafe"
-    "github.com/pkg/xattr"
)
-func excludedByAttribute(attributes map[string][]byte) bool {
-    _, excluded := attributes["duplicacy_exclude"]
-    if !excluded {
-        flags, ok := attributes[bsdFileFlagsKey]
-        excluded = ok && (binary.LittleEndian.Uint32(flags)&bsd_UF_NODUMP) != 0
-    }
-    return excluded
-}
-func (entry *Entry) RestoreSpecial(fullPath string) error {
-    mode := entry.Mode & uint32(fileModeMask)
-    if entry.Mode&uint32(os.ModeNamedPipe) != 0 {
-        mode |= syscall.S_IFIFO
-    } else if entry.Mode&uint32(os.ModeCharDevice) != 0 {
-        mode |= syscall.S_IFCHR
-    } else if entry.Mode&uint32(os.ModeDevice) != 0 {
-        mode |= syscall.S_IFBLK
-    } else {
-        return nil
-    }
-    return syscall.Mknod(fullPath, mode, entry.GetRdev())
-}
+func (entry *Entry) restoreLateFileFlags(path string) error {
+    if entry.Attributes == nil {
+        return nil
+    }
+    if v, have := (*entry.Attributes)[bsdFileFlagsKey]; have {
+        pPath, err := syscall.BytePtrFromString(path)
+        if err != nil {
+            return err
+        }
+        // decode the little-endian flag bytes saved at backup time
+        flags := binary.LittleEndian.Uint32(v)
+        if _, _, errno := syscall.Syscall(syscall.SYS_LCHFLAGS,
+            uintptr(unsafe.Pointer(pPath)),
+            uintptr(flags), 0); errno != 0 {
+            return os.NewSyscallError("lchflags", errno)
+        }
+    }
+    return nil
+}


@@ -1,35 +0,0 @@
// Copyright (c) Acrosync LLC. All rights reserved.
// Free for personal use and commercial trial
// Commercial use requires per-user licenses available from https://duplicacy.com
package duplicacy
import "os"
func (entry *Entry) ReadAttributes(fi os.FileInfo, fullPath string, normalize bool) error {
return entry.readAttributes(fi, fullPath, normalize)
}
func (entry *Entry) GetFileFlags(fileInfo os.FileInfo) bool {
return entry.getFileFlags(fileInfo)
}
func (entry *Entry) ReadFileFlags(fileInfo os.FileInfo, fullPath string) error {
return entry.readFileFlags(fileInfo, fullPath)
}
func (entry *Entry) RestoreEarlyDirFlags(fullPath string, mask uint32) error {
return entry.restoreEarlyDirFlags(fullPath, mask)
}
func (entry *Entry) RestoreEarlyFileFlags(f *os.File, mask uint32) error {
return entry.restoreEarlyFileFlags(f, mask)
}
func (entry *Entry) RestoreLateFileFlags(fullPath string, fileInfo os.FileInfo, mask uint32) error {
return entry.restoreLateFileFlags(fullPath, fileInfo, mask)
}
func (entry *Entry) SetAttributesToFile(fullPath string, normalize bool) error {
return entry.setAttributesToFile(fullPath, normalize)
}


@@ -1,149 +0,0 @@
// Copyright (c) Acrosync LLC. All rights reserved.
// Free for personal use and commercial trial
// Commercial use requires per-user licenses available from https://duplicacy.com
package duplicacy
import (
"bytes"
"encoding/binary"
"errors"
"math"
"os"
"syscall"
"github.com/pkg/xattr"
"golang.org/x/sys/unix"
)
const (
darwinFileFlagsKey = "\x00bf"
)
var darwinIsSuperUser bool
func init() {
darwinIsSuperUser = unix.Geteuid() == 0
}
func (entry *Entry) readAttributes(fi os.FileInfo, fullPath string, normalize bool) error {
if entry.IsSpecial() {
return nil
}
attributes, err := xattr.LList(fullPath)
if err != nil {
return err
}
if len(attributes) > 0 {
entry.Attributes = &map[string][]byte{}
}
var allErrors error
for _, name := range attributes {
value, err := xattr.LGet(fullPath, name)
if err != nil {
allErrors = errors.Join(allErrors, err)
} else {
(*entry.Attributes)[name] = value
}
}
return allErrors
}
func (entry *Entry) getFileFlags(fileInfo os.FileInfo) bool {
stat := fileInfo.Sys().(*syscall.Stat_t)
if stat.Flags != 0 {
if entry.Attributes == nil {
entry.Attributes = &map[string][]byte{}
}
v := make([]byte, 4)
binary.LittleEndian.PutUint32(v, stat.Flags)
(*entry.Attributes)[darwinFileFlagsKey] = v
}
return true
}
func (entry *Entry) readFileFlags(fileInfo os.FileInfo, fullPath string) error {
return nil
}
func (entry *Entry) setAttributesToFile(fullPath string, normalize bool) error {
if entry.Attributes == nil || len(*entry.Attributes) == 0 || entry.IsSpecial() {
return nil
}
attributes := *entry.Attributes
if _, haveFlags := attributes[darwinFileFlagsKey]; haveFlags && len(attributes) <= 1 {
return nil
}
names, err := xattr.LList(fullPath)
if err != nil {
return err
}
for _, name := range names {
newAttribute, found := attributes[name]
if found {
oldAttribute, _ := xattr.LGet(fullPath, name)
if !bytes.Equal(oldAttribute, newAttribute) {
err = errors.Join(err, xattr.LSet(fullPath, name, newAttribute))
}
delete(attributes, name)
} else {
err = errors.Join(err, xattr.LRemove(fullPath, name))
}
}
for name, attribute := range attributes {
if len(name) > 0 && name[0] == '\x00' {
continue
}
err = errors.Join(err, xattr.LSet(fullPath, name, attribute))
}
return err
}
func (entry *Entry) restoreEarlyDirFlags(fullPath string, mask uint32) error {
return nil
}
func (entry *Entry) restoreEarlyFileFlags(f *os.File, mask uint32) error {
return nil
}
func (entry *Entry) restoreLateFileFlags(fullPath string, fileInfo os.FileInfo, mask uint32) error {
if mask == math.MaxUint32 {
return nil
}
if darwinIsSuperUser {
mask |= ^uint32(unix.UF_SETTABLE | unix.SF_SETTABLE)
} else {
mask |= ^uint32(unix.UF_SETTABLE)
}
var flags uint32
if entry.Attributes != nil {
if v, have := (*entry.Attributes)[darwinFileFlagsKey]; have {
flags = binary.LittleEndian.Uint32(v)
}
}
stat := fileInfo.Sys().(*syscall.Stat_t)
flags = (flags & ^mask) | (stat.Flags & mask)
if flags != stat.Flags {
f, err := os.OpenFile(fullPath, os.O_RDONLY|unix.O_SYMLINK, 0)
if err != nil {
return err
}
err = unix.Fchflags(int(f.Fd()), int(flags))
f.Close()
return err
}
return nil
}


@@ -1,234 +0,0 @@
// Copyright (c) Acrosync LLC. All rights reserved.
// Free for personal use and commercial trial
// Commercial use requires per-user licenses available from https://duplicacy.com
package duplicacy
import (
"bytes"
"encoding/binary"
"errors"
"fmt"
"math"
"os"
"unsafe"
"github.com/pkg/xattr"
"golang.org/x/sys/unix"
)
const (
linux_FS_SECRM_FL = 0x00000001 /* Secure deletion */
linux_FS_UNRM_FL = 0x00000002 /* Undelete */
linux_FS_COMPR_FL = 0x00000004 /* Compress file */
linux_FS_SYNC_FL = 0x00000008 /* Synchronous updates */
linux_FS_IMMUTABLE_FL = 0x00000010 /* Immutable file */
linux_FS_APPEND_FL = 0x00000020 /* writes to file may only append */
linux_FS_NODUMP_FL = 0x00000040 /* do not dump file */
linux_FS_NOATIME_FL = 0x00000080 /* do not update atime */
linux_FS_NOCOMP_FL = 0x00000400 /* Don't compress */
linux_FS_JOURNAL_DATA_FL = 0x00004000 /* Reserved for ext3 */
linux_FS_NOTAIL_FL = 0x00008000 /* file tail should not be merged */
linux_FS_DIRSYNC_FL = 0x00010000 /* dirsync behaviour (directories only) */
linux_FS_TOPDIR_FL = 0x00020000 /* Top of directory hierarchies*/
linux_FS_NOCOW_FL = 0x00800000 /* Do not cow file */
linux_FS_PROJINHERIT_FL = 0x20000000 /* Create with parents projid */
linuxIocFlagsFileEarly = linux_FS_SECRM_FL | linux_FS_UNRM_FL | linux_FS_COMPR_FL | linux_FS_NODUMP_FL | linux_FS_NOATIME_FL | linux_FS_NOCOMP_FL | linux_FS_JOURNAL_DATA_FL | linux_FS_NOTAIL_FL | linux_FS_NOCOW_FL
linuxIocFlagsDirEarly = linux_FS_TOPDIR_FL | linux_FS_PROJINHERIT_FL
linuxIocFlagsLate = linux_FS_SYNC_FL | linux_FS_IMMUTABLE_FL | linux_FS_APPEND_FL | linux_FS_DIRSYNC_FL
linuxFileFlagsKey = "\x00lf"
)
var (
errENOTTY error = unix.ENOTTY
)
func ignoringEINTR(fn func() error) (err error) {
for {
err = fn()
if err != unix.EINTR {
break
}
}
return err
}
func ioctl(f *os.File, request uintptr, attrp *uint32) error {
return ignoringEINTR(func() error {
argp := uintptr(unsafe.Pointer(attrp))
_, _, errno := unix.Syscall(unix.SYS_IOCTL, f.Fd(), request, argp)
if errno == 0 {
return nil
} else if errno == unix.ENOTTY {
return errENOTTY
}
return errno
})
}
func (entry *Entry) readAttributes(fi os.FileInfo, fullPath string, normalize bool) error {
attributes, err := xattr.LList(fullPath)
if err != nil {
return err
}
if len(attributes) > 0 {
entry.Attributes = &map[string][]byte{}
}
var allErrors error
for _, name := range attributes {
value, err := xattr.LGet(fullPath, name)
if err != nil {
allErrors = errors.Join(allErrors, err)
} else {
(*entry.Attributes)[name] = value
}
}
return allErrors
}
func (entry *Entry) getFileFlags(fileInfo os.FileInfo) bool {
return false
}
func (entry *Entry) readFileFlags(fileInfo os.FileInfo, fullPath string) error {
// the linux file flags interface is quite depressing. The half assed attempt at statx
// doesn't even cover the flags we're usually interested in for btrfs
if !(entry.IsFile() || entry.IsDir()) {
return nil
}
f, err := os.OpenFile(fullPath, os.O_RDONLY|unix.O_NONBLOCK|unix.O_NOFOLLOW|unix.O_NOATIME, 0)
if err != nil {
return err
}
var flags uint32
err = ioctl(f, unix.FS_IOC_GETFLAGS, &flags)
f.Close()
if err != nil {
// inappropriate ioctl for device means flags aren't a thing on that FS
if err == unix.ENOTTY {
return nil
}
return err
}
if flags != 0 {
if entry.Attributes == nil {
entry.Attributes = &map[string][]byte{}
}
v := make([]byte, 4)
binary.LittleEndian.PutUint32(v, flags)
(*entry.Attributes)[linuxFileFlagsKey] = v
}
return nil
}
func (entry *Entry) setAttributesToFile(fullPath string, normalize bool) error {
if entry.Attributes == nil || len(*entry.Attributes) == 0 {
return nil
}
attributes := *entry.Attributes
if _, haveFlags := attributes[linuxFileFlagsKey]; haveFlags && len(attributes) <= 1 {
return nil
}
names, err := xattr.LList(fullPath)
if err != nil {
return err
}
for _, name := range names {
newAttribute, found := (*entry.Attributes)[name]
if found {
oldAttribute, _ := xattr.LGet(fullPath, name)
if !bytes.Equal(oldAttribute, newAttribute) {
err = errors.Join(err, xattr.LSet(fullPath, name, newAttribute))
}
delete(*entry.Attributes, name)
} else {
err = errors.Join(err, xattr.LRemove(fullPath, name))
}
}
for name, attribute := range *entry.Attributes {
if len(name) > 0 && name[0] == '\x00' {
continue
}
err = errors.Join(err, xattr.LSet(fullPath, name, attribute))
}
return err
}
func (entry *Entry) restoreEarlyDirFlags(fullPath string, mask uint32) error {
if entry.Attributes == nil || mask == math.MaxUint32 {
return nil
}
var flags uint32
if v, have := (*entry.Attributes)[linuxFileFlagsKey]; have {
flags = binary.LittleEndian.Uint32(v) & linuxIocFlagsDirEarly & ^mask
}
if flags != 0 {
f, err := os.OpenFile(fullPath, os.O_RDONLY|unix.O_DIRECTORY, 0)
if err != nil {
return err
}
err = ioctl(f, unix.FS_IOC_SETFLAGS, &flags)
f.Close()
if err != nil {
return fmt.Errorf("Set flags 0x%.8x failed: %w", flags, err)
}
}
return nil
}
func (entry *Entry) restoreEarlyFileFlags(f *os.File, mask uint32) error {
if entry.Attributes == nil || mask == math.MaxUint32 {
return nil
}
var flags uint32
if v, have := (*entry.Attributes)[linuxFileFlagsKey]; have {
flags = binary.LittleEndian.Uint32(v) & linuxIocFlagsFileEarly & ^mask
}
if flags != 0 {
err := ioctl(f, unix.FS_IOC_SETFLAGS, &flags)
if err != nil {
return fmt.Errorf("Set flags 0x%.8x failed: %w", flags, err)
}
}
return nil
}
func (entry *Entry) restoreLateFileFlags(fullPath string, fileInfo os.FileInfo, mask uint32) error {
if entry.IsLink() || entry.Attributes == nil || mask == math.MaxUint32 {
return nil
}
var flags uint32
if v, have := (*entry.Attributes)[linuxFileFlagsKey]; have {
flags = binary.LittleEndian.Uint32(v) & (linuxIocFlagsFileEarly | linuxIocFlagsDirEarly | linuxIocFlagsLate) & ^mask
}
if flags != 0 {
f, err := os.OpenFile(fullPath, os.O_RDONLY|unix.O_NOFOLLOW, 0)
if err != nil {
return err
}
err = ioctl(f, unix.FS_IOC_SETFLAGS, &flags)
f.Close()
if err != nil {
return fmt.Errorf("Set flags 0x%.8x failed: %w", flags, err)
}
}
return nil
}


@@ -1,35 +0,0 @@
// Copyright (c) Acrosync LLC. All rights reserved.
// Free for personal use and commercial trial
// Commercial use requires per-user licenses available from https://duplicacy.com
package duplicacy
import "os"
func (entry *Entry) readAttributes(fi os.FileInfo, fullPath string, normalize bool) error {
return nil
}
func (entry *Entry) getFileFlags(fileInfo os.FileInfo) bool {
return true
}
func (entry *Entry) readFileFlags(fileInfo os.FileInfo, fullPath string) error {
return nil
}
func (entry *Entry) setAttributesToFile(fullPath string, normalize bool) error {
return nil
}
func (entry *Entry) restoreEarlyDirFlags(fullPath string, mask uint32) error {
return nil
}
func (entry *Entry) restoreEarlyFileFlags(f *os.File, mask uint32) error {
return nil
}
func (entry *Entry) restoreLateFileFlags(fullPath string, fileInfo os.FileInfo, mask uint32) error {
return nil
}


@@ -1,155 +0,0 @@
// Copyright (c) Acrosync LLC. All rights reserved.
// Free for personal use and commercial trial
// Commercial use requires per-user licenses available from https://duplicacy.com
//go:build freebsd || netbsd
// +build freebsd netbsd
package duplicacy
import (
"bytes"
"encoding/binary"
"errors"
"math"
"os"
"syscall"
"unsafe"
"github.com/pkg/xattr"
)
const (
bsd_UF_NODUMP = 0x1
bsd_SF_SETTABLE = 0xffff0000
bsd_UF_SETTABLE = 0x0000ffff
bsdFileFlagsKey = "\x00bf"
)
var bsdIsSuperUser bool
func init() {
bsdIsSuperUser = syscall.Geteuid() == 0
}
func (entry *Entry) readAttributes(fi os.FileInfo, fullPath string, normalize bool) error {
if entry.IsSpecial() {
return nil
}
attributes, err := xattr.LList(fullPath)
if err != nil {
return err
}
if len(attributes) > 0 {
entry.Attributes = &map[string][]byte{}
}
var allErrors error
for _, name := range attributes {
value, err := xattr.LGet(fullPath, name)
if err != nil {
allErrors = errors.Join(allErrors, err)
} else {
(*entry.Attributes)[name] = value
}
}
return allErrors
}
func (entry *Entry) getFileFlags(fileInfo os.FileInfo) bool {
stat := fileInfo.Sys().(*syscall.Stat_t)
if stat.Flags != 0 {
if entry.Attributes == nil {
entry.Attributes = &map[string][]byte{}
}
v := make([]byte, 4)
binary.LittleEndian.PutUint32(v, stat.Flags)
(*entry.Attributes)[bsdFileFlagsKey] = v
}
return true
}
func (entry *Entry) readFileFlags(fileInfo os.FileInfo, fullPath string) error {
return nil
}
func (entry *Entry) setAttributesToFile(fullPath string, normalize bool) error {
if entry.Attributes == nil || len(*entry.Attributes) == 0 || entry.IsSpecial() {
return nil
}
attributes := *entry.Attributes
if _, haveFlags := attributes[bsdFileFlagsKey]; haveFlags && len(attributes) <= 1 {
return nil
}
names, err := xattr.LList(fullPath)
if err != nil {
return err
}
for _, name := range names {
newAttribute, found := attributes[name]
if found {
oldAttribute, _ := xattr.LGet(fullPath, name)
if !bytes.Equal(oldAttribute, newAttribute) {
err = errors.Join(err, xattr.LSet(fullPath, name, newAttribute))
}
delete(attributes, name)
} else {
err = errors.Join(err, xattr.LRemove(fullPath, name))
}
}
for name, attribute := range attributes {
if len(name) > 0 && name[0] == '\x00' {
continue
}
err = errors.Join(err, xattr.LSet(fullPath, name, attribute))
}
return err
}
func (entry *Entry) restoreEarlyDirFlags(fullPath string, mask uint32) error {
return nil
}
func (entry *Entry) restoreEarlyFileFlags(f *os.File, mask uint32) error {
return nil
}
func (entry *Entry) restoreLateFileFlags(fullPath string, fileInfo os.FileInfo, mask uint32) error {
if mask == math.MaxUint32 {
return nil
}
if bsdIsSuperUser {
mask |= ^uint32(bsd_UF_SETTABLE | bsd_SF_SETTABLE)
} else {
mask |= ^uint32(bsd_UF_SETTABLE)
}
var flags uint32
if entry.Attributes != nil {
if v, have := (*entry.Attributes)[bsdFileFlagsKey]; have {
flags = binary.LittleEndian.Uint32(v)
}
}
stat := fileInfo.Sys().(*syscall.Stat_t)
flags = (flags & ^mask) | (stat.Flags & mask)
if flags != stat.Flags {
pPath, _ := syscall.BytePtrFromString(fullPath)
if _, _, errno := syscall.Syscall(syscall.SYS_LCHFLAGS,
uintptr(unsafe.Pointer(pPath)),
uintptr(flags), 0); errno != 0 {
return os.NewSyscallError("lchflags", errno)
}
}
return nil
}