I previously created a post that walked through how to configure rclone to easily access objects within the Oracle Cloud Object Store.
Object Store access with rclone
This post is going to go a little deeper into how to quickly download objects from the OCI object store onto your host.
In my example, I needed to download RMAN disk backup files that were copied to the Object Store in OCI.
I have over 10 TB of RMAN backup pieces, so I am going to create an ACFS mount point to store them on.
1) Create ACFS mount point
Creating the mount point is made up of multiple small steps, which are documented here. This is a link to the 19c documentation, so note that it is subject to change over time.
- Use ASMCMD to create a volume on the data disk group. NOTE: the example below creates a 20 GB volume for demonstration; with over 10 TB of backup pieces to hold, the volume would need to be sized accordingly.
- Start ASMCMD connected to the Oracle ASM instance. You must be a user in the OSASM operating system group.
- Create the volume "volume1" on the "data" disk group
ASMCMD [+] > volcreate -G data -s 20G volume1
- Use ASMCMD to list the volume information. NOTE: my volume device is /dev/asm/volume1-123
ASMCMD [+] > volinfo -G data volume1
Diskgroup Name: DATA
Volume Name: VOLUME1
Volume Device: /dev/asm/volume1-123
State: ENABLED
...
- Optionally, verify the volume device from the Oracle ASM instance with SQL
SQL> SELECT volume_name, volume_device FROM V$ASM_VOLUME
     WHERE volume_name = 'VOLUME1';

VOLUME_NAME       VOLUME_DEVICE
----------------- --------------------------------------
VOLUME1           /dev/asm/volume1-123
- Create the file system with mkfs from the volume "/dev/asm/volume1-123"
$ /sbin/mkfs -t acfs /dev/asm/volume1-123
mkfs.acfs: version = 19.0.0.0.0
mkfs.acfs: on-disk version = 46.0
mkfs.acfs: volume = /dev/asm/volume1-123
mkfs.acfs: volume size = 21474836480 ( 20.00 GB )
mkfs.acfs: Format complete.
- Register the file system with srvctl
# srvctl add filesystem -device /dev/asm/volume1-123 -path /acfsmounts/acfs2
-user oracle -mountowner oracle -mountgroup dba -mountperm 755
NOTE: This will mount the filesystem on /acfsmounts/acfs2
- Start the filesystem with srvctl
$ srvctl start filesystem -device /dev/asm/volume1-123
- Change the ownership to oracle
chown -R oracle:dba /acfsmounts/acfs2
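Before loading 10+ TB onto it, it is worth a quick sanity check that the file system is mounted with the expected size. Assuming the mount point and volume device from the steps above, something like this should do it:
$ df -h /acfsmounts/acfs2
$ srvctl status filesystem -device /dev/asm/volume1-123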
2) Use rclone to view objects
The next step is to look at the objects I want to copy to my new ACFS file system. The general format for accessing the object store in the commands is:
rclone {command} {connection name}:{bucket/partial object name - optional}
NOTE: For all examples my connection name is oci_s3
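Putting those pieces together, a command aimed at a specific path inside a bucket would look like this (the path here is just illustrative):
./rclone ls oci_s3:bsgbucket/sbt_catalog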
I am going to start with the simplest command, listing buckets (lsd).
NOTE: We are using the S3 interface to view the objects in the namespace. There is a single namespace for the entire tenancy. OCI has the concept of "compartments," which can be used to separate applications and users. The S3 interface does not have this concept, which means that all buckets are visible.
- rclone lsd - This is the simplest command to list the buckets, and as I noted previously, it lists all buckets, not just my bucket.
./rclone lsd oci_s3:
-1 2021-02-22 15:33:06 -1 Backups
-1 2021-02-16 21:31:05 -1 MyCloudBucket
-1 2020-09-23 22:21:36 -1 Test-20200923-1719
-1 2021-07-20 20:03:27 -1 ZDM_bucket
-1 2020-11-23 23:47:03 -1 archive
-1 2021-01-21 13:03:33 -1 bsgbucket
-1 2021-02-02 15:35:18 -1 bsgbuckets3
-1 2021-03-03 11:42:13 -1 osctransfer
-1 2021-03-19 19:57:16 -1 repo
-1 2021-01-21 19:35:24 -1 short_retention
-1 2020-11-12 13:41:48 -1 jsmithPublicBucket
-1 2020-11-04 14:10:33 -1 jsmith_top_bucket
-1 2020-11-04 11:43:55 -1 zfsrepl
-1 2020-09-25 16:56:01 -1 zs-oci-bucket
If I want to list what is within my bucket (bsgbucket), I can list that bucket. In this case rclone treats the flat object names as if they were a file system, and lists only the top-level "directories" within my bucket.
./rclone lsd oci_s3:bsgbucket
0 2021-08-14 23:58:02 -1 file_chunk
0 2021-08-14 23:58:02 -1 sbt_catalog
- rclone tree - This command lists what is within my bucket as a tree structure.
[opc@rlcone-test rclone]$ ./rclone tree oci_s3:bsgbucket
/
├── expdat.dmp
├── file_chunk
│ └── 2985366474
│ └── MYDB
│ └── backuppiece
│ └── 2021-06-14
│ ├── DTA_BACKUP_MYDB_4601d1ph_134_1_1
│ │ └── yHqtjSE51L3B
│ │ ├── 0000000001
│ │ └── metadata.xml
│ └── DTA_BACKUP_MYDB_4d01d1uq_141_1_1
│ └── lS9Sdnka2nD0
│ ├── 0000000001
│ └── metadata.xml
└── sbt_catalog
├── DTA_BACKUP_MYDB_4601d1ph_134_1_1
│ └── metadata.xml
└── DTA_BACKUP_MYDB_4d01d1uq_141_1_1
└── metadata.xml
- rclone lsl - This command lists what is within my bucket as a long listing with more detail.
[opc@rlcone-test rclone]$ ./rclone lsl oci_s3:bsgbucket
311296 2021-01-21 13:04:05.000000000 expdat.dmp
337379328 2021-06-14 19:48:45.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001
1841 2021-06-14 19:48:45.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml
36175872 2021-06-14 19:49:10.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/0000000001
1840 2021-06-14 19:49:10.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml
1841 2021-06-14 19:48:46.000000000 sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml
1840 2021-06-14 19:49:10.000000000 sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml
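Before kicking off a multi-terabyte download, it can be worth totaling up what is about to be transferred. rclone's size command reports the object count and total size for a path:
./rclone size oci_s3:bsgbucket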
3) Use rclone to copy the objects to my local file system.
There are two commands you can use to copy the files from the object store to the local file system.
- copy - This works as you would expect: it copies the files to the local file system, overwriting any existing local copy (a sketch is shown just after this list)
- sync - This synchronizes the local file system with the objects in the object store, and will not re-download an object that already has a local copy. Be aware that sync also deletes local files that are not present in the object store, so the destination ends up matching the source exactly.
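As a sketch, a plain copy of a single "directory" from the bucket down to the local file system would look something like this (the source path and destination directory are just illustrative):
./rclone copy oci_s3:bsgbucket/sbt_catalog /home/opc/acfs/sbt_catalog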
In my case I am going to use the sync command. This will allow me to restart the copy, and it will ignore any objects that were previously copied successfully.
Below is the command I am using to copy (synchronize) the objects from my bucket in the object store (oci_s3:bsgbucket) to the local filesystem (/home/opc/acfs).
./rclone -vv sync -P --multi-thread-streams 12 --transfers 64 oci_s3:bsgbucket /home/opc/acfs
To break down the command:
- -vv This option to rclone gives me "verbose" output so I can see more of what is being copied as the command is executed.
- -P This option to rclone gives me feedback on how much of the object has downloaded so far to help me monitor it.
- --multi-thread-streams 12 This option to rclone breaks larger objects into chunks to increase the concurrency.
- --transfers 64 This option to rclone allows for 64 concurrent transfers to occur. This increases the download throughput.
- oci_s3:bsgbucket - This is the source to copy/sync from.
- /home/opc/acfs - This is the destination to copy/sync to.
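Because sync can delete destination files, a dry run first is a cheap safety check. rclone's --dry-run flag reports what would be transferred or deleted without actually doing it:
./rclone sync --dry-run oci_s3:bsgbucket /home/opc/acfs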
Finally, this is what the command looks like when it is executing.
[opc@rlcone-test rclone]$ ./rclone -vv sync -P --multi-thread-streams 12 --transfers 64 oci_s3:bsgbucket /home/opc/acfs
2021/08/15 00:15:32 DEBUG : rclone: Version "v1.56.0" starting with parameters ["./rclone" "-vv" "sync" "-P" "--multi-thread-streams" "12" "--transfers" "64" "oci_s3:bsgbucket" "/home/opc/acfs"]
2021/08/15 00:15:32 DEBUG : Creating backend with remote "oci_s3:bsgbucket"
2021/08/15 00:15:32 DEBUG : Using config file from "/home/opc/.config/rclone/rclone.conf"
2021/08/15 00:15:32 DEBUG : Creating backend with remote "/home/opc/acfs"
2021-08-15 00:15:33 DEBUG : sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml: md5 = 505fc1fdce141612c262c4181a9122fc OK
2021-08-15 00:15:33 INFO : sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : expdat.dmp: md5 = f97060f5cebcbcea3ad6fadbda136f4e OK
2021-08-15 00:15:33 INFO : expdat.dmp: Copied (new)
2021-08-15 00:15:33 DEBUG : Local file system at /home/opc/acfs: Waiting for checks to finish
2021-08-15 00:15:33 DEBUG : Local file system at /home/opc/acfs: Waiting for transfers to finish
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: Starting multi-thread copy with 2 parts of size 160.875Mi
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 2/2 (168689664-337379328) size 160.875Mi starting
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 1/2 (0-168689664) size 160.875Mi starting
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml: md5 = 0a8eccc1410e1995e36fa2bfa0bf7a70 OK
2021-08-15 00:15:33 INFO : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml: md5 = 505fc1fdce141612c262c4181a9122fc OK
2021-08-15 00:15:33 INFO : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml: Copied (new)
2021-08-15 00:15:33 DEBUG : sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml: md5 = 0a8eccc1410e1995e36fa2bfa0bf7a70 OK
2021-08-15 00:15:33 INFO : sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml: Copied (new)
2021-08-15 00:15:33 INFO : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/0000000001: Copied (new)
2021-08-15 00:15:34 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 1/2 (0-168689664) size 160.875Mi finished
Transferred: 333.398Mi / 356.554 MiByte, 94%, 194.424 MiByte/s, ETA 0s
Transferred: 6 / 7, 86%
Elapsed time: 2.0s
Transferring:
NOTE: it broke up the larger object into chunks, and you can see that it downloaded 2 chunks simultaneously. At the end you can see the file that it was in the middle of transferring.
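Once the sync completes, rclone can also verify the download against the object store. The check command compares the files in the source and destination (by size and hash where available) and reports any differences:
./rclone check oci_s3:bsgbucket /home/opc/acfs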
Conclusion.
rclone is a great alternative to the OCI CLI for managing and downloading your objects. It has more intuitive commands (like "rclone ls"), and the best part is that it doesn't require Python or special privileges to install.
NOTE: Newer releases of rclone also include a native Oracle Object Storage backend, which can access the object store directly with OCI credentials rather than through the S3-compatible API: https://rclone.org/oracleobjectstorage/