It’s quick and easy to get started with your ObjectiveFS file system. It takes just a few steps to get your new file system up and running (see Get Started).
ObjectiveFS runs on your Linux and macOS machines, and implements a log-structured file system with a cloud object store backend. Your data is encrypted before leaving your machine and stays encrypted until it returns to your machine.
This user guide covers the commands and options supported by ObjectiveFS. For an overview of the commands, refer to the Command Overview section. For detailed description and usage of each command, refer to the Reference section.
What you need
sudo mount.objectivefs config [-i] <object store> [<directory>]
sudo mount.objectivefs create [-l <region>] <filesystem>
sudo mount.objectivefs list [-asvz] [<filesystem>[@<time>]]
Run in background:
sudo mount.objectivefs [-o <opt>[,<opt>]..] <filesystem> <dir>
Run in foreground:
sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <filesystem> <dir>
sudo umount <dir>
sudo mount.objectivefs destroy <filesystem>
This section covers the detailed description and usage for each ObjectiveFS command.
SUMMARY: Sets up the required environment variables to run ObjectiveFS
USAGE:
sudo mount.objectivefs config [-i] <object store> [<directory>]
DESCRIPTION:
Config is a one-time operation that sets up the required credentials, such as object store keys and your license, as environment variables in a directory.
You can also optionally set your default region. See the Object Store Setup section for the detail setup steps for each object store.
-i
Use an IAM role instead of access keys (AWS).
<object store>
Your object store from the first column in this table:
Object Store | Description |
---|---|
Public Cloud | |
az:// | Azure |
az-cn:// | Azure China |
do:// | DigitalOcean |
gs:// | Google Cloud |
ibm:// | IBM Cloud |
ocs:// | Oracle Cloud |
s3:// | AWS |
s3-cn:// | AWS China |
scw:// | Scaleway |
wasabi:// | Wasabi |
Gov Cloud | |
az-gov:// | Azure GovCloud |
ocs-gov:// | Oracle Cloud GovCloud |
ocs-ukgov:// | Oracle UK GovCloud |
s3:// | AWS GovCloud |
On Premise | |
ceph:// | Ceph |
cos:// | IBM Cloud Object Store |
minio:// | Minio |
objectstore:// | Other S3-compatible Object Store |
<directory>
The directory to store the environment variables. (Default: /etc/objectivefs.env)
WHAT YOU NEED:
DETAILS:
Config sets up the object store specific environment variables in /etc/objectivefs.env
(if no directory is specified) or in the directory specified. Here are some commonly created environment variables.
AWS_ACCESS_KEY_ID (AWS) or ACCESS_KEY (others)
AWS_SECRET_ACCESS_KEY (AWS) or SECRET_KEY (others)
AWS_DEFAULT_REGION (AWS) or REGION (others)
OBJECTSTORE
OBJECTIVEFS_LICENSE
EXAMPLES:
A. Configuration for Microsoft Azure in the default directory /etc/objectivefs.env
$ sudo mount.objectivefs config az://
Creating config for Microsoft in /etc/objectivefs.env
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Storage Account Name: <your storage account name>
Enter Secret Key: <Access key from your Azure storage account>
B. Configuration with IAM role for S3 in the default directory /etc/objectivefs.env
$ sudo mount.objectivefs config -i s3://
Creating config for Amazon in /etc/objectivefs.env
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Default Region (optional): <your S3 region>
C. Configuration for Google Cloud Storage in the user-specified directory /etc/gs.env
$ sudo mount.objectivefs config gs:// /etc/gs.env
Creating config for Google in /etc/gs.env
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Access Key: <your access key>
Enter Secret Key: <your secret key>
Enter Region: <your region, e.g. us>
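After running config, you can verify the environment directory that was created. A minimal check (illustrative; the file names vary by object store, e.g. AWS uses AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY):
$ ls /etc/objectivefs.env/
ACCESS_KEY  OBJECTIVEFS_LICENSE  REGION  SECRET_KEY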
TIPS:
Each environment variable is stored as a file in the environment directory, e.g. the access key is stored in /etc/objectivefs.env/AWS_ACCESS_KEY_ID (see environment variables).
Instead of using /etc/objectivefs.env, you can also set the environment variables directly on the command line. See environment variables on command line.
You can copy /etc/objectivefs.env to another server to replicate the setup on that server.
If you use an IAM role (the -i option), you don’t need the AWS_SECRET_ACCESS_KEY or AWS_ACCESS_KEY_ID environment variables.
SUMMARY: Creates a new file system
USAGE:
sudo mount.objectivefs create [-l <region>] <filesystem>
DESCRIPTION:
This command creates a new file system in your S3, GCS or on-premise object store. You need to provide a passphrase for the new file system. Please choose a strong passphrase, write it down and store it somewhere safe.
IMPORTANT: Without the passphrase, there is no way to recover any files.
<filesystem>
Your file system name, e.g. s3://myfs for AWS S3, gs://myfs for GCS, or http://s3.example.com/foo for an on-premise object store.
-l <region>
The region to create your file system in. If not specified, it defaults to the AWS_DEFAULT_REGION or REGION environment variable (if set). Otherwise, S3’s default is us-east-1 and GCS’s default is based on your server’s location (us, eu or asia).
WHAT YOU NEED:
DETAILS:
This command creates a new filesystem in your object store. You can specify the region to create your filesystem by using the -l <region>
option or by setting the AWS_DEFAULT_REGION
or REGION
environment variable.
ObjectiveFS also supports creating multiple file systems per bucket. Please refer to the Filesystem Pool section for details.
EXAMPLES:
A. Create a file system in the default region
$ sudo mount.objectivefs create myfs
Passphrase (for s3://myfs): <your passphrase>
Verify passphrase (for s3://myfs): <your passphrase>
B. Create an S3 file system in a user-specified region (e.g. eu-central-1)
$ sudo mount.objectivefs create -l eu-central-1 s3://myfs
Passphrase (for s3://myfs): <your passphrase>
Verify passphrase (for s3://myfs): <your passphrase>
C. Create a GCS file system in a user-specified region (e.g. us)
$ sudo mount.objectivefs create -l us gs://myfs
Passphrase (for gs://myfs): <your passphrase>
Verify passphrase (for gs://myfs): <your passphrase>
TIPS:
You can store your passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE to avoid the interactive passphrase prompt. Please verify this file’s permission is restricted to root only.
To use a different environment directory, e.g. /home/ubuntu/.ofs.env: $ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs create myfs
SUMMARY: Lists your file systems, snapshots and buckets
USAGE:
sudo mount.objectivefs list [-asvz] [<filesystem>[@<time>]]
DESCRIPTION:
This command lists your file systems, snapshots or buckets in your object store. The output includes the file system name, filesystem kind (regular filesystem or pool), snapshot (automatic, checkpoint, enabled status), region and location.
-a
List all buckets, including non-ObjectiveFS buckets.
-s
List snapshots.
-v
Verbose listing.
-z
Show timestamps in UTC.
<filesystem>
List only this file system or file system pool.
<filesystem>@<time>
List snapshots matching the time prefix, in UTC (with -z) or local time, in the ISO8601 format (e.g. 2016-12-31T15:40:00).
Default: lists all your ObjectiveFS file systems and pools.
WHAT YOU NEED:
DETAILS:
The list command has several options to list your filesystems, snapshots and buckets in your object store.
By default, it lists all your ObjectiveFS filesystems and pools. It can also list all buckets, including non-ObjectiveFS buckets, with the -a
option. To list only a specific filesystem or filesystem pool, you can provide the filesystem name.
For a description of snapshot listing, see Snapshots section.
The output of the list command shows the filesystem name, filesystem kind, snapshot type, region and location.
Example filesystem list output:
NAME KIND SNAP REGION LOCATION
s3://myfs-1 ofs - eu-central-1 EU (Frankfurt)
s3://myfs-2 ofs - us-west-2 US West (Oregon)
s3://myfs-pool pool - us-east-1 US East (N. Virginia)
s3://myfs-pool/fsa ofs - us-east-1 US East (N. Virginia)
Example snapshot list output:
NAME KIND SNAP REGION LOCATION
s3://myfs@2017-01-10T11:10:00 ofs auto eu-west-2 EU (London)
s3://myfs@2017-01-10T11:17:00 ofs manual eu-west-2 EU (London)
s3://myfs@2017-01-10T11:20:00 ofs auto eu-west-2 EU (London)
s3://myfs ofs on eu-west-2 EU (London)
Filesystem Kind | Description |
---|---|
ofs | ObjectiveFS filesystem |
pool | ObjectiveFS filesystem pool |
- | Non-ObjectiveFS bucket |
? | Error while querying the bucket |
access | No permission to access the bucket |
Snapshot type | Applicable for | Description |
---|---|---|
auto | snapshot | Automatic snapshot |
manual | snapshot | Checkpoint (or manual) snapshot |
on | filesystem | Snapshots are activated on this filesystem |
- | filesystem | Snapshots are not activated |
EXAMPLES:
A. List all ObjectiveFS file systems
$ sudo mount.objectivefs list
NAME KIND SNAP REGION LOCATION
s3://myfs-1 ofs - eu-central-1 EU (Frankfurt)
s3://myfs-2 ofs on us-west-2 US West (Oregon)
s3://myfs-3 ofs on eu-west-2 EU (London)
s3://myfs-pool pool - us-east-1 US East (N. Virginia)
s3://myfs-pool/fsa ofs - us-east-1 US East (N. Virginia)
s3://myfs-pool/fsb ofs - us-east-1 US East (N. Virginia)
s3://myfs-pool/fsc ofs - us-east-1 US East (N. Virginia)
B. List a specific file system, e.g. s3://myfs-3
$ sudo mount.objectivefs list s3://myfs-3
NAME KIND SNAP REGION LOCATION
s3://myfs-3 ofs - eu-west-2 EU (London)
C. List everything, including non-ObjectiveFS buckets. In this example, my-bucket
is a non-ObjectiveFS bucket.
$ sudo mount.objectivefs list -a
NAME KIND SNAP REGION LOCATION
gs://my-bucket - - EU European Union
gs://myfs-a ofs - US United States
gs://myfs-b ofs on EU European Union
gs://myfs-c ofs - ASIA Asia Pacific
D. List snapshots for myfs
that match 2017-01-10T12
in UTC
$ sudo mount.objectivefs list -sz myfs@2017-01-10T12
NAME KIND SNAP REGION LOCATION
s3://myfs@2017-01-10T12:10:00Z ofs auto eu-west-2 EU (London)
s3://myfs@2017-01-10T12:17:00Z ofs manual eu-west-2 EU (London)
s3://myfs@2017-01-10T12:20:00Z ofs auto eu-west-2 EU (London)
s3://myfs@2017-01-10T12:30:00Z ofs auto eu-west-2 EU (London)
TIPS:
To list snapshots matching a time prefix, use <filesystem>@<time prefix>, e.g. myfs@2017-01-10T12.
To use a different environment directory, e.g. /home/ubuntu/.ofs.env: $ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs list
SUMMARY: Mounts your file system on your Linux or macOS machines.
USAGE:
Run in background:
sudo mount.objectivefs [-o <opt>[,<opt>]..] <filesystem> <dir>
Run in foreground:
sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <filesystem> <dir>
DESCRIPTION:
This command mounts your file system on a directory on your Linux or macOS machine. After the file system is mounted, you can read and write to it just like a local disk.
You can mount the same file system on as many Linux or macOS machines as you need. Your license will always scale if you need more mounts, and it is not limited to the number of included licenses on your plan.
NOTE: The mount command needs to run as root. It runs in the foreground if “mount” is provided, and runs in the background otherwise.
<filesystem>
Your file system name, e.g. s3://myfs, gs://myfs, or http://s3.example.com/foo. Append @<timestamp> to mount a snapshot.
<dir>
The directory to mount your file system on. It should be an existing empty directory.
General Mount Options
-o env=<dir>
Mount Point Options
-o acl | noacl
-o dev | nodev
-o diratime | nodiratime
-o exec | noexec
-o export | noexport
-o filehole=<size>
-o fsavail=<size>
-o nonempty
-o rdirplus | nordirplus
-o ro | rw
-o strictatime | relatime | noatime
-o suid | nosuid
-o writedelay | nowritedelay
File System Mount Options
-o bulkdata | nobulkdata
-o clean[=1|=2] | noclean
-o compact[=<level>] | nocompact
-o fuse_conn=<NUM>
-o freebw | nofreebw | autofreebw
Use freebw when the network bandwidth is free (e.g. on-premise object store or EC2 instance connecting directly to an S3 bucket in the same region). Use nofreebw when the network bandwidth is charged. autofreebw will enable freebw when it detects that it is on an EC2 instance in the same region as the S3 bucket. With freebw and autofreebw, verify that there is no network routing through a paid network route such as a NAT gateway to avoid incurring bandwidth charges. Enabling freebw will incur extra AWS data transfer charges when running outside of the S3 bucket region. (Default: nofreebw) [6.8 release and newer]
-o hpc | nohpc
-o mboost[=<minutes>] | nomboost
-o mkdir[=<mode>]
-o mt | mtplus | nomt | cputhreads=<N> | iothreads=<N>
-o nomem=<spin|stop>
If spin is set, the mount will wait until memory is available. If stop is set, the mount will exit and accesses to the mount will return an error. In both cases, a log message will be sent to syslog. Spin is useful in situations where memory can be freed (e.g. OOM killer) or added (e.g. swap). Stop may be useful for generic nodes behind a load balancer. (Default: spin)
-o ocache | noocache
-o oob | nooob
-o oom | nooom
oom
twice. (Default: Enable)
-o ratelimit | noratelimit
-o retry=<seconds>
-o snapshots | nosnapshots
WHAT YOU NEED:
EXAMPLES:
Assumptions:
1. /ofs
is an existing empty directory
2. Your passphrase is in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE
A. Mount an S3 file system in the foreground
$ sudo mount.objectivefs mount myfs /ofs
B. Mount a GCS file system with a different env directory, e.g. /home/ubuntu/.ofs.env
$ sudo mount.objectivefs mount -o env=/home/ubuntu/.ofs.env gs://myfs /ofs
C. Mount an S3 file system in the background
$ sudo mount.objectivefs s3://myfs /ofs
D. Mount a GCS file system with non-default options
Assumption: /etc/objectivefs.env
contains GCS keys.
$ sudo mount.objectivefs -o nosuid,nodev,noexec,noatime gs://myfs /ofs
TIPS:
To mount without the interactive passphrase prompt, store your passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE. Please verify that your passphrase file’s permission is the same as other files in the environment directory (e.g. OBJECTIVEFS_LICENSE).
To use a different environment directory, e.g. /home/ubuntu/.ofs.env: $ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs mount myfs /ofs
ObjectiveFS supports Mount on Boot, where your directory is mounted automatically upon reboot.
WHAT YOU NEED:
A. Linux
Check that /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE exists so you can mount your filesystem without needing to enter the passphrase. If it doesn’t exist, create the file with your passphrase as the content. (see details)
Add a line to /etc/fstab with:
<filesystem> <mount dir> objectivefs auto,_netdev 0 0
_netdev is used by many Linux distributions to mark the file system as a network file system.
To use mount options in /etc/fstab, add the options separated by commas after _netdev, e.g.:
<filesystem> <mount dir> objectivefs auto,_netdev,mt,ocache 0 0
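To check the fstab entry without rebooting, you can mount by directory (assuming /ofs is the mount dir in your entry):
$ sudo mount /ofs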
B. macOS
macOS can use launchd
to mount on boot. See Mount on Boot Setup Guide for macOS for details.
SUMMARY: Unmounts your file system on your Linux or macOS machines
USAGE:
sudo umount <dir>
DESCRIPTION:
To unmount your filesystem, run umount
with the mount directory.
Typing Control-C
in the window where ObjectiveFS is running in the foreground will stop the filesystem, but may not unmount it. Please run umount
to properly unmount the filesystem.
WHAT YOU NEED:
EXAMPLE:
$ sudo umount /ofs
IMPORTANT: After a destroy, there is no way to recover any files or data.
SUMMARY: Destroys your file system. This is an irreversible operation.
USAGE:
sudo mount.objectivefs destroy <filesystem>
DESCRIPTION:
This command deletes your file system from your object store. Please make sure that you really don’t need your data anymore because this operation cannot be undone.
You will be prompted for the authorization code available on your user profile page. This code changes periodically. Please refresh your user profile page to get the latest code.
<filesystem>
NOTE:
Your file system should be unmounted from all of your machines before running destroy
.
WHAT YOU NEED:
EXAMPLE:
$ sudo mount.objectivefs destroy s3://myfs
*** WARNING ***
The filesystem 's3://myfs' will be destroyed. All data (550 MB) will be lost permanently!
Continue [y/n]? y
Authorization code: <your authorization code>
This section covers the options you can run ObjectiveFS with.
ObjectiveFS supports all regions and endpoints of supported object stores. You can specify the region when creating your filesystem.
RELATED COMMANDS:
REFERENCE:
AWS S3 regions and endpoints, GCS regions
ObjectiveFS uses environment variables for configuration. You can set them using any standard method (e.g. on the command line, in your shell). We also support reading environment variables from a directory.
The filesystem settings specified by the environment variables are set at start up. To update the settings (e.g. change the memory cache size, enable disk cache), please unmount your filesystem and remount it with the new settings (exception: manual rekeying).
A. Environment Variables from Directory
ObjectiveFS supports reading environment variables from files in a directory, similar to the envdir
tool from the daemontools package.
Your environment variables are stored in a directory. Each file in the directory corresponds to an environment variable, where the file name is the environment variable name and the first line of the file content is the value.
SETUP:
The Config command sets up your environment directory with three common environment variables:
AWS_ACCESS_KEY_ID
or ACCESS_KEY
AWS_SECRET_ACCESS_KEY
or SECRET_KEY
OBJECTIVEFS_LICENSE
You can also add additional environment variables in the same directory using the same format: where the file name is the environment variable and the first line of the file content is the value.
EXAMPLE:
$ ls /etc/objectivefs.env/
AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY OBJECTIVEFS_LICENSE OBJECTIVEFS_PASSPHRASE
$ cat /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE
your_objectivefs_passphrase
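To add another variable, create a file named after the variable with the value as its first line. For example (illustrative variable and value):
$ echo '20G' | sudo tee /etc/objectivefs.env/DISKCACHE_SIZE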
B. Environment Variables on Command Line
You can also set the environment variables on the command line. The user-provided environment variables will override the environment directory’s variables.
USAGE:
sudo [<ENV VAR>='<value>'] mount.objectivefs
EXAMPLE:
$ sudo CACHESIZE=30% mount.objectivefs myfs /ofs
SUPPORTED ENVIRONMENT VARIABLES
To enable a feature, set the corresponding environment variable and remount your filesystem (exception: manual rekeying).
AWS_DEFAULT_REGION (AWS) or REGION (others)
Your default object store region.
AWS_METADATA_HOST
When set (e.g. to 169.254.169.254 to use an EC2 IAM role), you don’t need the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SECURITY_TOKEN environment variables (see Live Rekeying).
CACHESIZE
The memory cache size, as a percentage of memory (e.g. 30%) or an absolute value (e.g. 500M or 1G). (Default: 20%) (see Memory Cache section)
http_proxy
The proxy server address, in the form http[s]://example.com[:port] [7.0 release and newer]
OBJECTIVEFS_PASSPHRASE
Your file system passphrase. It can also be set to #!<full-path-to-program> to run a program that returns the passphrase (see details). (Default: will prompt)
Request signature version: v2 (or s3v2), v4 (or s3v4), azure; azure will also use the Azure API. (Default: based on object store) [7.0 release and newer]
TLS certificate directories, separated by ":". If set to empty, do not include OS standard certificate directories. [7.0 release and newer]
TLS ciphers: secure (or default), compat, legacy, insecure (or all). Can also use the openssl cipher list format with cipher strings to specify a more detailed cipher list. [7.0 release and newer]
TLS versions: tlsv1.0, tls1.1, tlsv1.2, tls1.3, all (or legacy), secure (or default). Using "!" in front of an entry removes instead of adds. [7.0 release and newer]
DESCRIPTION:
ObjectiveFS uses memory to cache data and metadata locally to improve performance and to reduce the number of S3 operations.
USAGE:
Set the CACHESIZE environment variable to one of the following:
A percentage of memory (e.g. 30%)
An absolute memory size (e.g. 500M or 2G)
The actual memory cache size can be specified using SI/IEC prefixes and can be specified as a decimal value (5.5 release or newer):
SI prefixes: M, MB, G, GB, T, TB
IEC prefixes: Mi, MiB, Gi, GiB, Ti, TiB
A minimum of 64MiB will be used for CACHESIZE.
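For example, an illustrative mount using a decimal cache size:
$ sudo CACHESIZE=1.5G mount.objectivefs myfs /ofs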
DEFAULT VALUE:
If CACHESIZE
is not specified, the default is 20% of memory for machines with 3GB+ memory or 5%-20% (with a minimum of 64MB) for machines with less than 3GB memory.
DETAILS:
The cache size is applicable per mount. If you have multiple ObjectiveFS file systems on the same machine, the total cache memory used by ObjectiveFS will be the sum of the CACHESIZE
values for all mounts.
The memory cache is one component of the ObjectiveFS memory usage. The total memory used by ObjectiveFS is the sum of:
a. memory cache usage (set by CACHESIZE
)
b. index memory usage (based on the number of S3 objects/filesystem size), and
c. kernel memory usage
The caching statistics, such as cache hit rate and kernel memory usage, are sent to the log. The memory cache setting is also sent to the log upon file system mount.
EXAMPLES:
A. Set memory cache size to 30%
$ sudo CACHESIZE=30% mount.objectivefs myfs /ofs
B. Set memory cache size to 2GB
$ sudo CACHESIZE=2G mount.objectivefs myfs /ofs
RELATED INFO:
DESCRIPTION:
ObjectiveFS can use local disks to cache data and metadata locally to improve performance and to reduce the number of S3 operations. Once the disk cache is enabled, ObjectiveFS handles the operations automatically, with no additional maintenance required from the user.
The disk cache is compressed, encrypted and has strong integrity checks. It is robust and can be copied between machines and even manually deleted, when in active use. So, you can rsync the disk cache between machines to warm the cache or to update the content.
Since the disk cache’s content persists when your file system is unmounted, you get the benefit of fast restart and fast access when you remount your file system.
ObjectiveFS will always keep some free space on the disk, by periodically checking the free disk space. If your other applications use more disk space, ObjectiveFS will adjust and use less by shrinking its cache.
Multiple file systems on the same machine can share the same disk cache without crosstalk, and they will collaborate to keep the most used data in the disk cache.
A file called CACHEDIR.TAG
is created by default in the DISKCACHE_PATH directory upon filesystem mount. Standard backup software will exclude the disk cache directory when this file is present. If you wish to back up the disk cache directory, you can overwrite the content of the CACHEDIR.TAG
file.
RECOMMENDATION:
We recommend enabling disk cache when local SSD or harddrive is available. For EC2 instances, we recommend using the local SSD instance store instead of EBS because EBS volumes may run into ops limit depending on the volume size. (See how to mount an instance store on EC2 for disk cache).
USAGE:
The disk cache uses DISKCACHE_SIZE
and DISKCACHE_PATH
environment variables (see environment variables section for how to set environment variables). To enable disk cache, set DISKCACHE_SIZE
.
DISKCACHE_SIZE
Set to <DISK CACHE SIZE>[:<FREE SPACE>].
<DISK CACHE SIZE>:
The size of the disk cache (e.g. 20G or 1T). In 6.9 release and newer, it can also be specified as a percentage. If set to a very large value (e.g. 1P), ObjectiveFS will try to use as much space as possible on the disk while preserving the free space.
<FREE SPACE> (optional):
The amount of space to keep free on the disk (e.g. 5G). In 6.9 release and newer, it can also be specified as a percentage. (Default: 3G) If set to 0G, ObjectiveFS will try to use as much space as possible (useful for a dedicated disk cache partition).
The free space value has precedence over disk cache size. The actual disk cache size is the smaller of the DISKCACHE_SIZE or (total disk space - FREE_SPACE).
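For example, with DISKCACHE_SIZE=100G:10G on an 80GB volume, the free space value wins: the actual disk cache size will be at most 70GB (80GB total - 10GB free space).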
Both disk cache size and free space values can be specified using SI/IEC prefixes and as decimal values:
SI prefixes: M, MB, G, GB, T, TB, P, PB
IEC prefixes: Mi, MiB, Gi, GiB, Ti, TiB, Pi, PiB
DISKCACHE_PATH
The directory where the disk cache is stored. (Default: /var/cache/objectivefs)
DEFAULT VALUE:
Disk cache is disabled when DISKCACHE_SIZE
is not specified.
EXAMPLES:
A. Set disk cache size to 20GB and use default free space (3GB)
$ sudo DISKCACHE_SIZE=20G mount.objectivefs myfs /ofs
B. Use as much space as possible for disk cache and keep 10GB free space
$ sudo DISKCACHE_SIZE=1P:10G mount.objectivefs myfs /ofs
C. Set disk cache size to 20GB and free space to 10GB and specify the disk cache path
$ sudo DISKCACHE_SIZE=20G:10G DISKCACHE_PATH=/var/cache/mydiskcache mount.objectivefs myfs /ofs
D. Use the entire space for disk cache (when using dedicated volume for disk cache)
$ sudo DISKCACHE_SIZE=1P:0G mount.objectivefs myfs /ofs
WHAT YOU NEED:
Disk space for the disk cache at DISKCACHE_PATH (default: /var/cache/objectivefs)
TIPS:
Multiple file systems on the same machine can share the disk cache by pointing to the same DISKCACHE_PATH.
If mounts specify different DISKCACHE_SIZE values and point to the same DISKCACHE_PATH, ObjectiveFS will use the minimum disk cache size and the maximum free space value.
RELATED INFO:
ObjectiveFS supports two types of snapshots: automatic snapshots and checkpoint snapshots. Automatic snapshots are managed by ObjectiveFS and are taken based on the snapshot schedule while your filesystem is mounted. Checkpoint snapshots can be taken at anytime and the filesystem does not need to be mounted. Snapshots can be mounted as a read-only filesystem to access your filesystem as it was at that point in time. This is useful to recover accidentally deleted or modified files or as the source for creating a consistent point-in-time backup.
Snapshots are not backups since they are part of the same filesystem and are stored in the same bucket. A backup is an independent copy of your data stored in a different location.
Snapshots can be useful for backup. To create a consistent point-in-time backup, you can mount a recent snapshot and use it as the source for backup, instead of running backup from your live filesystem. This way, your data will not change while the backup is in progress.
Snapshots are fast to take. The storage used for each snapshot is incremental, only the differences between two snapshots are stored. If there are multiple mounts of the same filesystem, only one automatic snapshot will be generated at a given scheduled time.
Snapshots are automatically activated upon initial mount of the filesystem, unless the nosnapshots
mount option is used.
Legacy filesystems note: For older filesystems created before ObjectiveFS 5.0, snapshots are activated when the filesystem is mounted with ObjectiveFS 5.0 or newer. Upon activation, ObjectiveFS will perform a one-time identification of existing snapshots and activate them if available.
You can use the mount.objectivefs list command to identify filesystems with activated snapshots by checking the SNAP field of the output. If snapshots have been activated on this filesystem, the SNAP column will show on. Otherwise, it will show -.
A. Create Automatic Snapshots
Automatic snapshots are managed by ObjectiveFS and are taken based on the snapshot schedule while your filesystem is mounted. If your filesystem is not mounted, automatic snapshots will not be taken since there are no changes to the filesystem.
Automatic snapshots are taken and retained based on the snapshot schedule. Older automatic snapshots are automatically removed to maintain the number of snapshots per interval.
A.1. Snapshot Schedule Format
The snapshot schedule can be specified as zero or more snapshot intervals, each specifying the number of snapshots at a specific interval, with the following format:
<number of snapshots>@<interval length><interval unit>
Multiple snapshot intervals can be specified separated by space, with the intervals in increasing order. The number of snapshots and intervals should be positive integers.
The supported snapshot units include minutes, hours, days and weeks. For months, quarters and years, both ‘Simple’ units (4-week based) and ‘Average’ units (365.25-day based) are supported. If a calendar-based schedule is needed (e.g. last day of each month), please see the Snapshots FAQ.
Snapshot Interval Unit | Description |
---|---|
m | Minute |
h | Hour |
d | Day |
w | Week |
n | Simple month (4 weeks) |
q | Simple quarter (12 weeks) |
y | Simple year (48 weeks) |
M | Average month (365.25 days / 12) |
Q | Average quarter (365.25 days / 4) |
Y | Average year (365.25 days) |
See the Snapshots Reference Guide for snapshot schedule examples.
A.2. Default Snapshot Schedule
The table below shows the default automatic snapshot schedule. This schedule would be specified as 72@10m 72@1h 32@1d 16@1w
.
Snapshot Interval | Number of Snapshots |
---|---|
10-minute | 72 |
hourly | 72 |
daily | 32 |
weekly | 16 |
A.3. Custom Snapshot Schedule
Team, Corporate and Enterprise Plans Feature
You can specify a custom automatic snapshot schedule with ObjectiveFS 7.2 and newer. The custom schedule can be set using the following command:
mount.objectivefs snapshot -s "<schedule>" <filesystem>
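For example, an illustrative schedule that keeps 24 hourly, 14 daily and 8 weekly snapshots on s3://myfs:
$ sudo mount.objectivefs snapshot -s "24@1h 14@1d 8@1w" s3://myfs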
A new custom schedule will remove any existing automatic snapshots that do not match the new schedule. If you wish to save some of the automatic snapshots, you can convert them into checkpoint snapshots by mounting the relevant snapshots.
Custom snapshots are compatible with older ObjectiveFS versions and the snapshot schedule can be updated without restarting any mounts. ObjectiveFS 7.2 is only needed to set the custom schedule. Existing mounts will pick up the new custom schedule within a few hours.
Custom snapshots can use any combination of intervals in increasing order, as long as the total number of snapshots does not exceed 900.
See the Snapshots Reference Guide for snapshot schedule examples.
A.4. Useful Snapshot Schedule Commands
See the Snapshots Reference Guide for more details and examples.
a. Show current snapshot schedule of the filesystem
mount.objectivefs snapshot -l <filesystem>
b. Validate that a snapshot schedule is valid (useful when developing a new snapshot schedule before deployment)
mount.objectivefs snapshot -s <schedule>
c. Generate a sample snapshot schedule
mount.objectivefs snapshot -vs <schedule>
A.5. Disabling Automatic Snapshots
To disable automatic snapshots, you can set the snapshot schedule to an empty string.
mount.objectivefs snapshot -s "" <filesystem>
B. Create Checkpoint Snapshots
Team, Corporate and Enterprise Plans Feature
sudo mount.objectivefs snapshot <filesystem>
Checkpoint snapshots are manual point-in-time snapshots that can be taken at anytime, even if your filesystem is not mounted. Checkpoint snapshots are useful for creating a snapshot right before making large changes to the filesystem. They can also be useful if you need snapshots at a specific time of the day or for compliance reasons.
There is no limit to the number of checkpoint snapshots you can take. The checkpoint snapshots are kept until they are explicitly removed by the destroy command.
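For example, to take a checkpoint snapshot of an illustrative filesystem before making large changes:
$ sudo mount.objectivefs snapshot s3://myfs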
DESCRIPTION:
You can list the snapshots for your filesystem by running the list -sz
(snapshot timestamp in UTC format, recommended) or list -s
command (snapshot timestamp in local time). The snapshot listing command shows both automatic (auto
in the SNAP
column) and checkpoint (manual
in the SNAP
column) snapshots.
Snapshots have the format <filesystem>@<time>
. You can list all snapshots for a filesystem or only snapshots matching a specific time prefix (see examples below).
USAGE:
mount.objectivefs list -sz [filesystem[@<time>]]
EXAMPLES:
a. List all snapshots for myfs
# mount.objectivefs list -sz myfs@
b. List all snapshots for myfs
that match 2024-02-21
in UTC
# mount.objectivefs list -sz myfs@2024-02-21
c. List all snapshots for myfs
that match 2024-02-21T12
in UTC
# mount.objectivefs list -sz myfs@2024-02-21T12
DESCRIPTION:
Snapshots can be mounted to access the filesystem as it was at that point in time. When a snapshot is mounted, it is accessible as a read-only filesystem.
You can mount both automatic and checkpoint snapshots. When an automatic snapshot is mounted, a checkpoint snapshot for the same timestamp will be created to prevent the snapshot from being automatically removed in case its retention schedule expires while it is mounted. These checkpoint snapshots, when created for data recovery purpose only, are also included in all plans.
Snapshots can be mounted using the same keys used for mounting the filesystem. If you choose to use a different key, you only need read permission for mounting checkpoint snapshots, but need read and write permissions for mounting automatic snapshots.
A snapshot mount is a regular mount and will be counted as an ObjectiveFS instance while it is mounted.
USAGE:
sudo mount.objectivefs [-o <options>] <filesystem>@<time> <dir>
<filesystem>@<time>
The snapshot to mount. The time can be local time or UTC (ends with Z) in the ISO8601 format. You can use the list snapshots command to get the list of available snapshots for your filesystem.
<dir>
The directory to mount the snapshot on.
<options>
Regular mount options (see Mount).
EXAMPLES:
A. Mount a snapshot without additional mount options on /ofssnap
$ sudo mount.objectivefs myfs@2024-02-08T00:30:00Z /ofssnap
B. Mount a snapshot with multithreading enabled
$ sudo mount.objectivefs -omt myfs@2024-02-08T00:30:00Z /ofssnap
Same as the regular unmount command.
To destroy a snapshot, use the regular destroy command with <filesystem>@<time>
. Time should be specified in ISO8601 format (e.g. 2024-02-10T10:10:00
) and can either be local time or UTC (ends with Z
). Both automatic and checkpoint snapshots matching the timestamp will be removed.
sudo mount.objectivefs destroy <filesystem>@<time>
EXAMPLE:
$ sudo mount.objectivefs destroy s3://myfs@2024-02-20T02:27:54Z
*** WARNING ***
The snapshot 's3://myfs@2024-02-20T02:27:54Z' will be destroyed. No other changes will be done to the filesystem.
Continue [y/n]? y
Snapshot 's3://myfs@2024-02-20T02:27:54Z' destroyed.
RELATED INFO:
ObjectiveFS supports live rekeying which lets you update your AWS keys while keeping your filesystem mounted. With live rekeying, you don’t need to unmount and remount your filesystem to change your AWS keys. ObjectiveFS supports both automatic and manual live rekeying.
If you have attached an AWS EC2 IAM role to your EC2 instance, you can set AWS_METADATA_HOST
to 169.254.169.254
to automatically rekey. With this setting, you don’t need to use the AWS_SECRET_ACCESS_KEY
and AWS_ACCESS_KEY_ID
environment variables.
You can also manually rekey by updating AWS_SECRET_ACCESS_KEY
(or SECRET_KEY
) and AWS_ACCESS_KEY_ID
(or ACCESS_KEY
) (and also AWS_SECURITY_TOKEN
if used) in your config directory and sending SIGHUP to mount.objectivefs. The running objectivefs program (i.e. mount.objectivefs) will automatically reload the updated files and start using the updated keys.
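A minimal sketch of a manual rekey, assuming the key files in /etc/objectivefs.env have already been updated:
$ ps -ef | grep mount.objectivefs
$ sudo kill -HUP <PID>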
RELATED INFO:
DESCRIPTION:
ObjectiveFS is a log-structured filesystem that uses an object store for storage. Compaction combines multiple small objects into a larger object and brings related data close together. It improves performance and reduces the number of object store operations needed for each filesystem access. Compaction is a background process and adjusts dynamically depending on your workload and your filesystem’s characteristics.
USAGE:
You can specify the compaction rate by setting the mount option when mounting your filesystem.
Faster compaction increases bandwidth usage.
For more about mount options, see this section.
nocompact - disable compaction
compact - enable compaction level 1
compact,compact - enable compaction level 2 (uses more bandwidth than level 1)
compact,compact,compact - enable compaction level 3 (uses more bandwidth than level 2)
compact=<level> - set the compaction level (see below)
COMPACTION LEVELS:
Levels 4 and 5 require the -o noratelimit option to be enabled. For instructions on how to use levels 4 and 5, see this document for details.
DEFAULT:
Compaction is enabled by default. If the filesystem is mounted on an EC2 machine accessing an S3 bucket, compaction level 2
is the default. Otherwise, compaction level 1
is the default.
EXAMPLES:
A. To enable compaction level 3
$ sudo mount.objectivefs -o compact=3 myfs /ofs
B. To disable compaction
$ sudo mount.objectivefs -o nocompact myfs /ofs
TIPS:
The number of objects used by your filesystem is shown in the IUsed column of df -i.
MONITORING PROGRESS (6.8 and newer):
The compaction and cleaning statistics for a mount can be printed in the objectivefs log by sending the USR1
signal to the ObjectiveFS process id. The reported statistics are an estimate based on the information currently available to this mount and can vary between mounts. If there are writes to the filesystem, the estimated numbers might vary.
Generating the statistics can slow down the mount from the time the signal is received until the time the statistics are logged. Therefore it is recommended only for dedicated cleaner mounts and not for production mounts.
Important: Do not send the USR1 signal to versions before 6.8.
Commands to print the current compaction statistics
$ ps -ef | grep mount.objectivefs
$ kill -USR1 <PID>
Command to print the compaction statistics every 10 minutes
$ while sleep 600; do kill -USR1 <PID>; done
For the compaction statistics output format, see Compaction Statistics in the logging section.
DESCRIPTION:
Multithreading is a performance feature that can lower latency and improve throughput for your workload.
ObjectiveFS will spawn dedicated CPU and IO threads to handle operations such as data decompression, data integrity check, compaction, cleaning, disk cache accesses and updates.
USAGE:
Multithreading can be enabled using the mt
or mtplus
mount options. mt
sets the CPU threads to 4 and IO threads to 16. mtplus
sets the CPU threads to between 6 and 32 based on the number of cpus on the server and the IO threads to 64.
You can also directly specify the number of dedicated CPU threads and IO threads using the cputhreads
and iothreads
mount options. To disable multithreading, use the nomt
mount option.
The number of threads in the thread pools will scale up and down dynamically depending on the workload.
Mount options | Description |
---|---|
-o mt | sets cputhreads to 4 and iothreads to 16 |
-o mtplus | sets cputhreads to between 6 and 32 based on the number of cpus and iothreads to 64 |
-o cputhreads=<N> | sets the number of dedicated CPU threads to N (min:0, max:128) |
-o iothreads=<N> | sets the number of dedicated IO threads to N (min:0, max:128) |
-o nomt | sets cputhreads and iothreads to 0 |
DEFAULT VALUE:
By default, there are two IO threads, one compaction thread and no extra CPU threads.
EXAMPLE:
A. Enable default multithreading option (4 cputhreads, 16 iothreads)
$ sudo mount.objectivefs -o mt <filesystem> <dir>
B. Set CPU threads to 8 and IO threads to 16
$ sudo mount.objectivefs -o cputhreads=8,iothreads=16 <filesystem> <dir>
C. Example fstab entry to enable multithreading plus
s3://<filesystem> <dir> objectivefs auto,_netdev,mtplus 0 0
DESCRIPTION:
Kernel cache is a performance feature that can improve re-read performance by reducing the FUSE overhead.
It is available in version 6.0 and newer and can be activated together with the multithreading feature.
In version 7.0 and newer, kcache+ is available for storing directories and symlinks in kernel cache. It is activated together with the multithreading feature for FUSE version 7.28 or newer.
USAGE:
To activate the kernel cache, enable the multithreading feature either by setting the mt
mount option or by specifying dedicated cpu and io threads (see multithreading section).
If your Linux kernel and FUSE driver supports it, the kernel cache will be enabled and kcache
will be printed on the starting line of the objectivefs log.
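A quick way to confirm, assuming an illustrative filesystem and syslog at /var/log/messages:
$ sudo mount.objectivefs -o mt myfs /ofs
$ grep 'objectivefs' /var/log/messages | tail -1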
Filesystem pool lets you have multiple file systems per bucket. Since AWS S3 has a limit of 100 buckets per account, you can use pools if you need lots of file systems.
A filesystem pool is a collection of regular filesystems to simplify the management of lots of filesystems. You can also use pools to organize your company’s file systems by teams or departments.
A file system in a pool is a regular file system. It has the same capabilities as a regular file system.
A pool is a top-level structure. This means that a pool can only contain file systems, and not other pools. Since a pool is not a filesystem, but a collection of filesystems, it cannot be mounted directly.
Reference: Managing Per-User Filesystems Using Filesystem Pool and IAM Policy
An example organization structure is:
|
|- myfs1 // one file system per bucket
|- myfs2 // one file system per bucket
|- mypool1 -+- /myfs1 // multiple file systems per bucket
| |- /myfs2 // multiple file systems per bucket
| |- /myfs3 // multiple file systems per bucket
|
|- mypool2 -+- /myfs1 // multiple file systems per bucket
| |- /myfs2 // multiple file systems per bucket
| |- /myfs3 // multiple file systems per bucket
To create a file system in a pool, use the regular create command with
<pool name>/<file system>
as the filesystem
argument.
sudo mount.objectivefs create [-l <region>] <pool>/<filesystem>
NOTE:
The pool will be created if it doesn’t already exist. If the pool already exists, the file system will be created in the pool’s region, regardless of the -l <region> specification.
EXAMPLE:
A. Create an S3 file system in the default region (us-west-2)
# Assumption: your /etc/objectivefs.env contains S3 keys
$ sudo mount.objectivefs create s3://mypool/myfs
B. Create a GCS file system in the default region
# Assumption: your /etc/objectivefs.env contains GCS keys
$ sudo mount.objectivefs create -l EU gs://mypool/myfs
When you list your file system, you can distinguish a pool in the KIND
column. A file system inside of a pool is listed with the pool prefix.
You can also list the file systems in a pool by specifying the pool name.
sudo mount.objectivefs list [<pool name>]
EXAMPLE:
A. In this example, there are two pools myfs-pool
and myfs-poolb
. The file systems in each pools are listed with the pool prefix.
$ sudo mount.objectivefs list
NAME KIND REGION
s3://myfs-1 ofs us-west-2
s3://myfs-2 ofs eu-central-1
s3://myfs-pool/ pool us-west-2
s3://myfs-pool/myfs-a ofs us-west-2
s3://myfs-pool/myfs-b ofs us-west-2
s3://myfs-pool/myfs-c ofs us-west-2
s3://myfs-poolb/ pool us-west-1
s3://myfs-poolb/foo ofs us-west-1
B. List all file systems under a pool, e.g. myfs-pool
$ sudo mount.objectivefs list myfs-pool
NAME KIND REGION
s3://myfs-pool/ pool us-west-2
s3://myfs-pool/myfs-a ofs us-west-2
s3://myfs-pool/myfs-b ofs us-west-2
s3://myfs-pool/myfs-c ofs us-west-2
To mount a file system in a pool, use the regular mount command with
<pool name>/<file system>
as the filesystem
argument.
Run in background:
sudo mount.objectivefs [-o <opt>[,<opt>]..] <pool>/<filesystem> <dir>
Run in foreground:
sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <pool>/<filesystem> <dir>
EXAMPLES:
A. Mount an S3 file system and run the process in background
$ sudo mount.objectivefs s3://myfs-pool/myfs-a /ofs
B. Mount a GCS file system with a different env directory, e.g. /home/tom/.ofs_gcs.env
and run the process in foreground
$ sudo mount.objectivefs mount -o env=/home/tom/.ofs_gcs.env gs://mypool/myfs /ofs
Same as the regular unmount command
To destroy a file system in a pool, use the regular destroy command with
<pool name>/<file system>
as the filesystem
argument.
sudo mount.objectivefs destroy <pool>/<filesystem>
NOTE:
EXAMPLE:
Destroying an S3 file system in a pool
$ sudo mount.objectivefs destroy s3://myfs-pool/myfs-a
*** WARNING ***
The filesystem 's3://myfs-pool/myfs-a' will be destroyed. All data (550 MB) will be lost permanently!
Continue [y/n]? y
Authorization code: <your authorization code>
DESCRIPTION:
Standard unix permission and basic ACL are enabled by default. To enable the full ACL/extended ACL, mount your filesystem with the acl
mount option.
USAGE:
# mount.objectivefs -oacl <filesystem> <directory>
EXAMPLE:
A. Set extended ACL for file foo
$ setfacl -R -m u:ubuntu:rx foo
B. Get extended ACL for file foo
$ getfacl foo
Corporate and Enterprise Plans Feature
DESCRIPTION:
This feature lets you map local user ids and group ids to different ids in the remote filesystem.
The id mappings should be 1-to-1, i.e. a single local id should only be mapped to a single remote id, and vice versa. If multiple ids are mapped to the same id, the behavior is undetermined.
When a uid is remapped and U*
is not specified, all other unspecified uids will be mapped to the default uid: 65534 (aka nobody/nfsnobody
).
Similarly, all unspecified gids will be mapped to the default gid (65534) if a gid is remapped and G*
is not specified.
USAGE:
IDMAP="<Mapping>[:<Mapping>]"
where Mapping is:
U<local id or name> <remote id>
G<local id or name> <remote id>
U* <default id>
G* <default id>
Mapping Format
A. Single User Mapping: U<local id or name> <remote id>
Maps a local user id or local user name to a remote user id.
B. Single Group Mapping: G<local id or name> <remote id>
Maps a local group id or local group name to a remote group id.
C. Default User Mapping: U* <default id>
Maps all unspecified local and remote users ids to the default id. If this mapping is not specified, all unspecified user ids will be mapped to uid 65534 (aka nobody/nfsnobody
).
D. Default Group Mapping: G* <default id>
Maps all unspecified local and remote group ids to the default id. If this mapping is not specified, all unspecified group ids will be mapped to gid 65534 (aka nobody/nfsnobody
).
EXAMPLES:
A. UID mapping only
IDMAP="U600 350:Uec2-user 400:U* 800"
Local uid 600 is mapped to remote uid 350, and vice versa.
Local user ec2-user is mapped to remote uid 400, and vice versa.
All other uids are mapped to the default uid 800.
Since no gid mapping is specified, gids are not affected.
B. GID mapping only
IDMAP="G800 225:Gstaff 400"
Local gid 800 is mapped to remote gid 225, and vice versa.
Local group staff is mapped to remote gid 400, and vice versa.
All other gids are mapped to the default gid 65534 (aka nobody/nfsnobody).
Since no uid mapping is specified, uids are not affected.
C. UID and GID mapping
IDMAP="U600 350:G800 225"
Local uid 600 is mapped to remote uid 350, and vice versa; all other uids are mapped to the default uid 65534 (aka nobody/nfsnobody).
Local gid 800 is mapped to remote gid 225, and vice versa; all other gids are mapped to the default gid 65534 (aka nobody/nfsnobody).
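Since IDMAP is an environment variable, it can be set like any other when mounting. A sketch using example C's mapping and an illustrative filesystem:
$ sudo IDMAP="U600 350:G800 225" mount.objectivefs myfs /ofs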
Corporate and Enterprise Plans Feature
DESCRIPTION:
You can run ObjectiveFS with an http proxy to connect to your object store. A common use case is to connect ObjectiveFS to the object store via a squid caching proxy.
USAGE:
Set the http_proxy
environment variable to the proxy server’s address (see environment variables section for how to set environment variables).
DEFAULT VALUE:
If the http_proxy
environment is not set, this feature is disabled by default.
EXAMPLE:
Mount a filesystem (e.g. s3://myfs) with an http proxy running locally on port 3128:
$ sudo http_proxy=http://localhost:3128 mount.objectivefs mount myfs /ofs
Alternatively, you can set the http_proxy in your /etc/objectivefs.env
directory
$ ls /etc/objectivefs.env
AWS_ACCESS_KEY_ID OBJECTIVEFS_PASSPHRASE
AWS_SECRET_ACCESS_KEY http_proxy
OBJECTIVEFS_LICENSE
$ cat /etc/objectivefs.env/http_proxy
http://localhost:3128
DESCRIPTION:
The admin mode provides an easy way to manage many filesystems in a programmatic way. You can use the admin mode to easily script the creation of many filesystems.
The admin mode lets admins create filesystems without the interactive passphrase confirmations. To destroy a filesystem, admins only need to provide a ‘y’ confirmation and don’t need an authorization code. Admins can list the filesystems, similar to a regular user. However, admins are not permitted to mount a filesystem, to separate the admin functionality and user functionality.
Operation | User Mode | Admin Mode |
---|---|---|
Create | Needs passphrase confirmation | No passphrase confirmation needed |
List | Allowed | Allowed |
Mount | Allowed | Not allowed |
Destroy | Needs authorization code and confirmation | Only confirmation needed |
USAGE:
For plans with the Admin Key feature, your account has an admin license key, in addition to the regular license key. Please contact support@objectivefs.com for this key.
To use admin mode, we recommend creating an admin-specific objectivefs environment directory, e.g. /etc/objectivefs.admin.env
. Please use your admin license key for OBJECTIVEFS_LICENSE
.
$ ls /etc/objectivefs.admin.env/
AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
OBJECTIVEFS_LICENSE OBJECTIVEFS_PASSPHRASE
$ cat /etc/objectivefs.admin.env/OBJECTIVEFS_LICENSE
your_admin_license_key
You can have a separate user objectivefs environment directory, e.g. /etc/objectivefs.<user>.env
, for each user to mount their individual filesystems.
EXAMPLES:
A. Create a filesystem in admin mode with credentials in /etc/objectivefs.admin.env
$ sudo OBJECTIVEFS_ENV=/etc/objectivefs.admin.env mount.objectivefs create myfs
B. Mount the filesystem as user tom
in the background
$ sudo OBJECTIVEFS_ENV=/etc/objectivefs.tom.env mount.objectivefs myfs /ofs
Enterprise Plan Feature
DESCRIPTION:
While our regular license check is very robust and can handle multi-day outages, some companies prefer to minimize external dependencies. For these cases, we offer a local license check feature that lets you run your infrastructure independent of any license server.
USAGE:
Please talk with your enterprise support contact for instructions on how to enable the local license check on your account.
Enterprise Plan Feature
DESCRIPTION:
ObjectiveFS supports AWS S3 Transfer Acceleration that enables fast transfers of files over long distances between your server and S3 bucket.
USAGE:
Set the AWS_TRANSFER_ACCELERATION
environment variable to 1 to enable S3 transfer acceleration (see environment variables section for how to set environment variables).
REQUIREMENT:
Your S3 bucket needs to be configured to enable Transfer Acceleration. This can be done from the AWS Console.
EXAMPLES:
Mount a filesystem called myfs with S3 Transfer Acceleration enabled
$ sudo AWS_TRANSFER_ACCELERATION=1 mount.objectivefs myfs /ofs
Enterprise Plan Feature
DESCRIPTION:
ObjectiveFS supports AWS Server-Side encryption using Amazon S3-Managed Keys (SSE-S3) and AWS KMS-Managed Keys (SSE-KMS).
USAGE:
Use the AWS_SERVER_SIDE_ENCRYPTION
environment variable (see environment variables section for how to set environment variables).
The AWS_SERVER_SIDE_ENCRYPTION
environment variable can be set to:
AES256
(for Amazon S3-Managed Keys (SSE-S3))aws:kms
(for AWS KMS-Managed Keys (SSE-KMS) with default key)<your kms key>
(for AWS KMS-Managed Keys (SSE-KMS) with the keys you create and manage)REFERENCE:
EXAMPLES:
A. Create a filesystem called myfs with Amazon S3-Managed Keys (SSE-S3)
$ sudo AWS_SERVER_SIDE_ENCRYPTION=AES256 mount.objectivefs create myfs
B. Create a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS)
$ sudo AWS_SERVER_SIDE_ENCRYPTION=aws:kms mount.objectivefs create myfs
C. Mount a filesystem called myfs with Amazon KMS-Managed Keys (SSE-KMS) using the default key
$ sudo AWS_SERVER_SIDE_ENCRYPTION=aws:kms mount.objectivefs myfs /ofs
D. Mount a filesystem called myfs with Amazon KMS-Managed Keys (SSE-KMS) using a specific key
$ sudo AWS_SERVER_SIDE_ENCRYPTION=<your aws kms key> mount.objectivefs myfs /ofs
DESCRIPTION:
ObjectiveFS has built-in TLS / SSL that connects to the object store via https.
TLS adds an extra layer of encryption to the object store connection.
ObjectiveFS always encrypts your data using the client-side encryption, both in transit and at rest. All requests are also signed using the object store’s signing policy.
TLS is useful for cases where the object store only supports TLS, when corporate policy requires https or when https is preferred.
TLS is enabled by default for some object stores. The starting line of the objectivefs log will show the endpoint connection used for your filesystem: http
or https
.
The TLS implementation detects at runtime processor-specific instructions such as AES-NI opcodes and will use those instructions when possible. To check the cipher used for your connection, run with -v
(the verbose option) and check the TLS connection log messages.
USAGE:
Set the TLS
environment variable to 1
to explicitly enable TLS. Set the TLS
environment variable to 0
to disable TLS and use http connections. Leaving TLS unset will use the default for your object store
(see environment variables section for how to set environment variables).
RELATED VARIABLES:
See environment variables section for description of each variable
REQUIREMENT:
TLS is available in ObjectiveFS version 7.0 and newer.
EXAMPLE:
Mount a filesystem called myfs with TLS enabled
$ sudo TLS=1 mount.objectivefs myfs /ofs
Log information is printed to the terminal when running in the foreground, and is sent to syslog when running in the background. On Linux, the log is typically at /var/log/messages
or /var/log/syslog
. On macOS, the log is typically at /var/log/system.log
.
Below is a list of common log messages. For error messages, please see troubleshooting section.
A. Initial mount message
SUMMARY: The message logged every time an ObjectiveFS filesystem is mounted and contains the settings information of this mount.
FORMAT:
objectivefs <version> starting [<settings>]
DESCRIPTION:
Setting | Description |
---|---|
acl | Extended ACL is enabled |
arm64 | ARM64 cpu architecture |
bulkdata | The bulkdata mount option is enabled |
cachesize <size> | The memory cache size set by CACHESIZE |
clean | The regular cleaner is enabled. Set by the clean mount option |
clean+ | The cleaner+ is enabled. Set by the clean=2 mount option |
compact <off | level> | Compaction status: off or compaction level |
cputhreads <N>/<M> | Number of CPU threads/detected CPUs (see multithreading) |
diskcache <path | off> | The disk cache path (if enabled) or off (if disabled) |
ec2 <region>/<type> | An EC2 instance type in region |
endpoint | The endpoint used to access your object store bucket |
export | The export mount option is enabled |
freebw | The freebw mount option is enabled |
fuse version <ver> | The FUSE protocol version that the kernel uses |
kcache | The kernel cache is enabled |
hpc | The hpc mount option is enabled |
iothreads <num> | Number of I/O threads (see multithreading) |
local license | Running with local license checking |
mboost | The mboost mount option is enabled |
nooom | The nooom mount option is set |
noratelimit | The noratelimit mount option is set |
nordirplus | The nordirplus mount option is set |
nosnapshots | The nosnapshots mount option is set |
ocache | The ocache mount option is enabled |
oob | The oob mount option is enabled |
oom+ | The oom mount option is specified twice |
project <id> | Send Google Project ID header |
region | The region of your filesystem |
shani | SHA native instructions enabled |
sig <v2 | v4> | The object store endpoint signature version |
x86-64-v[1-4] | x86-64 cpu architecture version |
EXAMPLE:
A. A filesystem in us-west-2 mounted with no additional mount options
objectivefs 6.6 starting [fuse version 7.26, region us-west-2, endpoint http://s3-us-west-2.amazonaws.com, cachesize 753MiB, diskcache off, clean, compact 1, cputhreads 0, iothreads 2]
B. A filesystem in eu-central-1 mounted on an EC2 instance with disk cache enabled, compaction level 3 and multithreading
objectivefs 6.6 starting [fuse version 7.26, region eu-central-1, endpoint http://s3-eu-central-1.amazonaws.com, cachesize 2457MiB, diskcache on, ec2, clean, compact 3, cputhreads 4, iothreads 8]
B. Regular log message
SUMMARY: The message logged while your filesystem is active. It shows the cumulative number of S3 operations and bandwidth usage since the initial mount message.
FORMAT:
<put> <list> <get> <delete> <bandwidth in> <bandwidth out> <clean>
DESCRIPTION:
<put> <list> <get> <delete>
<bandwidth in> <bandwidth out>
<clean> (when the storage cleaner is enabled)
EXAMPLE:
1403 PUT, 571 LIST, 76574 GET, 810 DELETE, 5.505 GB IN, 5.309 GB OUT
C. Caching Statistics
SUMMARY: Caching statistics is part of the regular log message starting in ObjectiveFS v.4.2. This data can be useful for tuning memory and disk cache sizes for your workload.
FORMAT:
CACHE [<cache hit> <metadata> <data> <os>], DISK[<hit>]
DESCRIPTION:
<cache hit>
<metadata>
<data>
<OS>
Disk [<hit>]
EXAMPLE:
CACHE [74.9% HIT, 94.1% META, 68.1% DATA, 1.781 GB OS], DISK [99.0% HIT]
D. Compaction Statistics
SUMMARY: Compaction progress estimate. See compaction section for how to generate this output.
FORMAT:
COMPACT <progress> <layout> <objects> <active> <cleaned>
DESCRIPTION:
progress=<metadata:data>
layout=<metadata:data>
objects=<multiples>
active=<ratio>
cleaned=<value>
EXAMPLE:
COMPACT progress=95.3%:88.5% layout=94.5%:73.1% objects=1.3x active=12.0:1.2 cleaned=55.38GB
E. Error messages
SUMMARY: Error response from S3 or GCS
FORMAT:
retrying <operation> due to <endpoint> response: <S3/GCS response> [x-amz-request-id:<amz-id>, x-amz-id-2:<amz-id2>]
DESCRIPTION:
<operation>
PUT, GET, LIST, DELETE
operations that encountered the error message<endpoint>
<S3/GCS response>
<amz-id>
<amz-id2>
EXAMPLE:
retrying GET due to s3-us-west-2.amazonaws.com response: 500 Internal Server Error, InternalError, x-amz-request-id:E854A4F04A83C125, x-amz-id-2:Zad39pZ2mkPGyT/axl8gMX32nsVn
DNSCACHEIP environment variable is set.
Initial Setup
chmod +x mount.objectivefs
During Operation
/usr/sbin/ntpdate pool.ntp.org
./usr/sbin/ntpdate pool.ntp.org
noratelimit mount option.
Unmount
ObjectiveFS is forward and backward compatible. Upgrading or downgrading to a different release is straightforward. You can also do rolling upgrades for multiple servers. To upgrade: install the new version, unmount and then remount your filesystem.
RELATED INFO:
Don’t hesitate to give us a call at +1-415-997-9967, or send us an email at support@objectivefs.com.