rclone works for both public and private buckets after configuration.
rclone is an immensely powerful tool for copying and synchronising data from one location to another - here we only consider object storage, but it is capable of far more. See the rclone website & documentation linked at the bottom of this page.
1. Run rclone config and follow the interactive configuration for a new remote (basically the location of the bucket on the internet, accessible from a certain endpoint and, if private, with specific credentials). Enter n to create a new remote, then answer the prompts as follows:
name: s3
storage option: 5
s3 provider: 22
env_auth: leave default
access_key_id: your access key
secret_access_key: your secret key
region: leave default
endpoint: the S3 bucket endpoint URL, here https://s3.waw3-1.cloudferro.com
location_constraint: leave default
acl: select the one that best fits your use case
Finalize the configuration and save it.
2. Alternatively, edit ~/.config/rclone/rclone.conf with your editor of choice and insert a block like this:
[EWC-objectstorage]
type = s3
provider = Other
env_auth = false
endpoint = https://s3.waw3-1.cloudferro.com
access_key_id = ACCESS
secret_access_key = HUNTER2
Put your own access and secret keys in the appropriate fields.
You should then be able to perform basic commands as below, e.g. rclone ls EWC-objectstorage:bucketname/path/
For public buckets, no credentials are needed and the key fields can be left empty:
[EWC-objectstorage-public]
type = s3
provider = Other
endpoint = https://s3.waw3-1.cloudferro.com
access_key_id =
secret_access_key =
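For example, to list the contents of a public bucket with this remote (the bucket name below is just a placeholder):
rclone ls EWC-objectstorage-public:public-bucket-name/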
The following sections describe some commands that you can perform with rclone once you have configured your remote (see above). In particular:
REMOTE_NAME is the name you gave to a remote created with the rclone config command (see above). You can check your configured remotes using the following command:
rclone listremotes
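With the two remotes from the configuration above, the output would look something like this (the names will match whatever you chose):
EWC-objectstorage:
EWC-objectstorage-public: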
Note: Rclone configuration should be available at the following path: ~/.config/rclone/rclone.conf
rclone ls lists the objects under the given path, showing the size and path of each object. Use the command below:
rclone ls <REMOTE_NAME>:<PATH_DIRECTORY>
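As an illustration, assuming a bucket called my-bucket (a placeholder) containing a couple of objects, the output lists the size and key of each object:
rclone ls EWC-objectstorage:my-bucket/
     1234 data/file1.txt
   567890 data/file2.nc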
rclone mkdir is used to create a directory on the path you want in your bucket. Note that the concept of a directory isn't entirely real on object storage - see https://medium.com/datamindedbe/the-mystery-of-folders-on-aws-s3-78d3428803cb for a good explanation.
rclone mkdir <REMOTE_NAME>:<PATH_DIRECTORY>
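For instance, to create a directory called results inside a hypothetical bucket my-bucket:
rclone mkdir EWC-objectstorage:my-bucket/results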
rclone lsd is used to list directories in the remote. This is also useful for seeing which buckets are on the remote (only the ones you created with these credentials - there may be others you didn't create but were granted access to).
rclone lsd <REMOTE_NAME>:
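Run against the remote itself, it lists your buckets; the output looks roughly like this (bucket names and dates are placeholders):
rclone lsd EWC-objectstorage:
          -1 2024-01-01 12:00:00        -1 my-bucket
          -1 2024-01-01 12:00:00        -1 another-bucket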
rclone copy is used to copy files or whole directory structures from your local machine to the remote bucket. It is very similar to rsync -av: it mirrors the local structure up to the remote and only copies when there are differences. For example, to copy a single file to the remote:
rclone copy file.txt <REMOTE_NAME>:<PATH_DIRECTORY>
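Copying a whole directory works the same way; here the local directory results and the bucket my-bucket are just placeholders:
rclone copy ./results EWC-objectstorage:my-bucket/results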
rclone sync <SOURCE_DIRECTORY>/ <REMOTE_NAME>:<PATH_DIRECTORY>
where SOURCE_DIRECTORY is a local directory with files you want to sync to the remote. This is similar to copy, but it also deletes files on the remote that aren't present in the local directory (like rsync -av --delete).
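Because sync deletes files on the destination, it can be worth previewing the changes first with rclone's --dry-run flag, which prints the actions without performing them:
rclone sync --dry-run <SOURCE_DIRECTORY>/ <REMOTE_NAME>:<PATH_DIRECTORY>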
rclone mount --daemon <REMOTE_NAME>:<PATH_DIRECTORY> /path/on/my/filesystem
This will give you a directory structure that you can use normal filesystem tools on, letting you move around your object storage as if it were a filesystem. Note that this is not especially performant: object storage isn't a file system and does not behave well when treated as one, but it is fine for small-scale activities.
This form of the command only allows the user running it to see the files - the option --allow-other
(potentially in combination with others listed on https://rclone.org/commands/rclone_mount/) will allow others access.
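As a sketch of the full cycle, assuming my-bucket and the mount point /mnt/objectstorage are placeholders you replace with your own (and noting that --allow-other typically requires user_allow_other to be enabled in /etc/fuse.conf), the mount can later be released on Linux with fusermount:
rclone mount --daemon --allow-other EWC-objectstorage:my-bucket /mnt/objectstorage
fusermount -u /mnt/objectstorage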
rclone sync -P --s3-chunk-size=256M --transfers=24 <SOURCE_DIRECTORY>/ <REMOTE_NAME>:<PATH_DIRECTORY>
where:
-P shows progress during the transfer,
--s3-chunk-size sets the chunk size used for multipart uploads to S3 (256M here), and
--transfers sets the number of files transferred in parallel (24 here).
You may need to tune these values for your own data and network.