User Guide

Overview

It’s quick and easy to get started with your ObjectiveFS file system; it takes only a few steps to get your new file system up and running (see Get Started).

ObjectiveFS runs on your Linux and macOS machines, and implements a log-structured file system with a cloud object store backend. Your data is encrypted before leaving your machine and stays encrypted until it returns to your machine.

This user guide covers the commands and options supported by ObjectiveFS. For an overview of the commands, refer to the Command Overview section. For detailed description and usage of each command, refer to the Reference section.

What you need

  • A Linux or macOS machine
  • Your object store access keys (or an IAM role attached to your server)
  • Your ObjectiveFS license (from your profile page)

Commands

Config
Sets up the required environment variables to run ObjectiveFS (details)
sudo mount.objectivefs config [-i] <object store> [<directory>]
Create
Creates a new file system (details)
sudo mount.objectivefs create [-l <region>] <filesystem>
List
Lists your file systems in S3 or GCS (details)
sudo mount.objectivefs list [-asvz] [<filesystem>[@<time>]]
Mount
Mounts your file system on your Linux or macOS machines (details)
Run in background:
sudo mount.objectivefs [-o <opt>[,<opt>]..] <filesystem> <dir>
Run in foreground:
sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <filesystem> <dir>
Unmount
Unmounts your file system on your Linux or macOS machines (details)
sudo umount <dir>
Destroy
Destroys your file system (details)
sudo mount.objectivefs destroy <filesystem>

Reference

This section covers the detailed description and usage for each ObjectiveFS command.

Config

SUMMARY: Sets up the required environment variables to run ObjectiveFS

USAGE:

sudo mount.objectivefs config [-i] <object store> [<directory>]

DESCRIPTION:
Config is a one-time operation that sets up the required credentials, such as object store keys and your license, as environment variables in a directory. You can also optionally set your default region. See the Object Store Setup section for the detailed setup steps for each object store.

-i
Use IAM role instead of keys (Amazon S3 only)
<object store>

Your object store from the first column in this table:

Object Store    Description
Public Cloud
az://           Azure
az-cn://        Azure China
do://           DigitalOcean
gs://           Google Cloud
ibm://          IBM Cloud
ocs://          Oracle Cloud
s3://           AWS
s3-cn://        AWS China
scw://          Scaleway
wasabi://       Wasabi
Gov Cloud
az-gov://       Azure GovCloud
ocs-gov://      Oracle Cloud GovCloud
ocs-ukgov://    Oracle UK GovCloud
s3://           AWS GovCloud
On Premise
ceph://         Ceph
cos://          IBM Cloud Object Store
minio://        Minio
objectstore://  Other S3-compatible Object Store
<directory>
Directory to store your environment variables. This should be a new (non-existing) directory.
Default: /etc/objectivefs.env

WHAT YOU NEED:

  • Your object store access and secret keys or IAM role attached to your server
  • Your ObjectiveFS license (from your profile page)

DETAILS:

Config sets up the object store specific environment variables in /etc/objectivefs.env (if no directory is specified) or in the directory specified. Here are some commonly created environment variables.

  • AWS_ACCESS_KEY_ID (AWS) or ACCESS_KEY (others)
    Your object store access key
  • AWS_SECRET_ACCESS_KEY (AWS) or SECRET_KEY (others)
    Your object store secret key
  • AWS_DEFAULT_REGION (AWS) or REGION (others)
    The default region for your filesystems
  • OBJECTSTORE
    Your object store backend
  • OBJECTIVEFS_LICENSE
    Your ObjectiveFS license key

EXAMPLES:
A. Configuration for Microsoft Azure in the default directory /etc/objectivefs.env

$ sudo mount.objectivefs config az://
Creating config for Microsoft in /etc/objectivefs.env
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Storage Account Name: <your storage account name>
Enter Secret Key: <Access key from your Azure storage account>

B. Configuration with IAM role for S3 in the default directory /etc/objectivefs.env

$ sudo mount.objectivefs config -i s3://
Creating config for Amazon in /etc/objectivefs.env
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Default Region (optional): <your S3 region>

C. Configuration for Google Cloud Storage in the user-specified directory /etc/gs.env

$ sudo mount.objectivefs config gs:// /etc/gs.env
Creating config for Google in /etc/gs.env
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Access Key: <your access key>
Enter Secret Key: <your secret key>
Enter Region: <your region, e.g. us>

TIPS:

  • To make changes to your keys or default region, edit the files in the environment directory directly (e.g. /etc/objectivefs.env/AWS_ACCESS_KEY_ID).
  • If you don’t want to use the variables in /etc/objectivefs.env, you can also set the environment variables directly on the command line. See environment variables on command line.
  • You can also create the environment variable directory manually, without using the config command, or copy an existing config directory such as /etc/objectivefs.env to another server to replicate the setup there.
  • If your EC2 instance has an AWS IAM role attached, you can automatically rekey with IAM roles (see live rekeying) and don’t need the AWS_SECRET_ACCESS_KEY or AWS_ACCESS_KEY_ID environment variables.

Create

SUMMARY: Creates a new file system

USAGE:

sudo mount.objectivefs create [-l <region>] <filesystem>

DESCRIPTION:
This command creates a new file system in your S3, GCS or on-premise object store. You need to provide a passphrase for the new file system. Please choose a strong passphrase, write it down and store it somewhere safe.
IMPORTANT: Without the passphrase, there is no way to recover any files.

<filesystem>
A globally unique, non-secret file system name. (Required)
The filesystem name maps to a new object store bucket, and S3/GCS require a globally unique namespace for buckets.
For S3, you can optionally add the “s3://” prefix, e.g. s3://myfs.
For GCS, you can optionally add the “gs://” prefix, e.g. gs://myfs.
For on-premise object store, you can also specify an endpoint directly with the “http://” prefix, e.g. http://s3.example.com/foo
-l <region>
The region to store your file system in. (see region list)
Default: The region specified by your AWS_DEFAULT_REGION or REGION environment variable (if set). Otherwise, S3’s default is us-east-1 and GCS’s default is based on your server’s location (us, eu or asia).

WHAT YOU NEED:

  • Your ObjectiveFS environment directory is set up (see Config section).

DETAILS:
This command creates a new filesystem in your object store. You can specify the region to create your filesystem by using the -l <region> option or by setting the AWS_DEFAULT_REGION or REGION environment variable.

ObjectiveFS also supports creating multiple file systems per bucket. Please refer to the Filesystem Pool section for details.

EXAMPLES:
A. Create a file system in the default region

$ sudo mount.objectivefs create myfs
Passphrase (for s3://myfs): <your passphrase>
Verify passphrase (for s3://myfs): <your passphrase>

B. Create an S3 file system in a user-specified region (e.g. eu-central-1)

$ sudo mount.objectivefs create -l eu-central-1 s3://myfs
Passphrase (for s3://myfs): <your passphrase>
Verify passphrase (for s3://myfs): <your passphrase>

C. Create a GCS file system in a user-specified region (e.g. us)

$ sudo mount.objectivefs create -l us gs://myfs
Passphrase (for gs://myfs): <your passphrase>
Verify passphrase (for gs://myfs): <your passphrase>

TIPS:

  • You can store your filesystem passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE. Please verify this file’s permission is restricted to root only.
  • To run with a different ObjectiveFS environment directory, e.g. /home/ubuntu/.ofs.env:
    $ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs create myfs
  • To create a filesystem without manually entering your passphrase (e.g. for scripting filesystem creation), you can use the admin mode and store your filesystem passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE.

List

SUMMARY: Lists your file systems, snapshots and buckets

USAGE:

sudo mount.objectivefs list [-asvz] [<filesystem>[@<time>]]

DESCRIPTION:
This command lists your file systems, snapshots or buckets in your object store. The output includes the file system name, filesystem kind (regular filesystem or pool), snapshot type (automatic, checkpoint, or enabled status), region and location.

-a
List all buckets in your object store, including non-ObjectiveFS buckets.
-s
Enable listing of snapshots.
-v
Enable verbose mode.
-z
Use UTC for snapshot timestamps.
<filesystem>
The filesystem name to list. If the filesystem doesn’t exist, nothing will be returned.
<filesystem>@<time>
The snapshot to list. The time can be given in UTC (requires -z) or local time, in ISO 8601 format (e.g. 2016-12-31T15:40:00).
If a prefix of a time is given, all snapshots matching that prefix are listed.
default
All ObjectiveFS file systems are listed.
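The time-prefix matching used by <filesystem>@<time> behaves like plain string prefix matching on the snapshot timestamp. A shell illustration with invented snapshot names (this only demonstrates the matching rule, not the list command itself):

```shell
# Hypothetical snapshot names, one per line.
snapshots='myfs@2017-01-10T11:17:00
myfs@2017-01-10T12:10:00
myfs@2017-01-10T12:20:00'

# A time prefix such as 2017-01-10T12 selects every snapshot it prefixes.
printf '%s\n' "$snapshots" | grep '^myfs@2017-01-10T12'
```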

WHAT YOU NEED:

  • Your ObjectiveFS environment directory is set up (see Config section).

DETAILS:

The list command has several options to list your filesystems, snapshots and buckets in your object store. By default, it lists all your ObjectiveFS filesystems and pools. It can also list all buckets, including non-ObjectiveFS buckets, with the -a option. To list only a specific filesystem or filesystem pool, you can provide the filesystem name. For a description of snapshot listing, see Snapshots section.

The output of the list command shows the filesystem name, filesystem kind, snapshot type, region and location.

Example filesystem list output:

NAME                KIND  SNAP REGION        LOCATION
s3://myfs-1         ofs   -    eu-central-1  EU (Frankfurt)
s3://myfs-2         ofs   -    us-west-2     US West (Oregon)
s3://myfs-pool      pool  -    us-east-1     US East (N. Virginia)
s3://myfs-pool/fsa  ofs   -    us-east-1     US East (N. Virginia)

Example snapshot list output:

NAME                           KIND  SNAP    REGION     LOCATION
s3://myfs@2017-01-10T11:10:00  ofs   auto    eu-west-2  EU (London)
s3://myfs@2017-01-10T11:17:00  ofs   manual  eu-west-2  EU (London)
s3://myfs@2017-01-10T11:20:00  ofs   auto    eu-west-2  EU (London)
s3://myfs                      ofs   on      eu-west-2  EU (London)

Filesystem Kind  Description
ofs              ObjectiveFS filesystem
pool             ObjectiveFS filesystem pool
-                Non-ObjectiveFS bucket
?                Error while querying the bucket
access           No permission to access the bucket

Snapshot Type  Applicable for  Description
auto           snapshot        Automatic snapshot
manual         snapshot        Checkpoint (or manual) snapshot
on             filesystem      Snapshots are activated on this filesystem
-              filesystem      Snapshots are not activated

EXAMPLES:
A. List all ObjectiveFS file systems.

$ sudo mount.objectivefs list
NAME                KIND  SNAP REGION        LOCATION
s3://myfs-1         ofs   -    eu-central-1  EU (Frankfurt)
s3://myfs-2         ofs   on   us-west-2     US West (Oregon)
s3://myfs-3         ofs   on   eu-west-2     EU (London)
s3://myfs-pool      pool  -    us-east-1     US East (N. Virginia)
s3://myfs-pool/fsa  ofs   -    us-east-1     US East (N. Virginia)
s3://myfs-pool/fsb  ofs   -    us-east-1     US East (N. Virginia)
s3://myfs-pool/fsc  ofs   -    us-east-1     US East (N. Virginia)

B. List a specific file system, e.g. s3://myfs-3

$ sudo mount.objectivefs list s3://myfs-3
NAME                KIND  SNAP REGION        LOCATION
s3://myfs-3         ofs   on   eu-west-2     EU (London)

C. List everything, including non-ObjectiveFS buckets. In this example, my-bucket is a non-ObjectiveFS bucket.

$ sudo mount.objectivefs list -a
NAME                KIND  SNAP REGION  LOCATION
gs://my-bucket      -     -    EU      European Union
gs://myfs-a         ofs   -    US      United States
gs://myfs-b         ofs   on   EU      European Union
gs://myfs-c         ofs   -    ASIA    Asia Pacific

D. List snapshots for myfs that match 2017-01-10T12 in UTC

$ sudo mount.objectivefs list -sz myfs@2017-01-10T12
NAME                            KIND SNAP    REGION     LOCATION
s3://myfs@2017-01-10T12:10:00Z  ofs  auto    eu-west-2  EU (London)
s3://myfs@2017-01-10T12:17:00Z  ofs  manual  eu-west-2  EU (London)
s3://myfs@2017-01-10T12:20:00Z  ofs  auto    eu-west-2  EU (London)
s3://myfs@2017-01-10T12:30:00Z  ofs  auto    eu-west-2  EU (London)

TIPS:

  • You can list partial snapshots by providing <filesystem>@<time prefix>, e.g. myfs@2017-01-10T12.
  • To run with a different ObjectiveFS environment directory, e.g. /home/ubuntu/.ofs.env:
    $ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs list

Mount

SUMMARY: Mounts your file system on your Linux or macOS machines.

USAGE:
Run in background:

sudo mount.objectivefs [-o <opt>[,<opt>]..] <filesystem> <dir>

Run in foreground:

sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <filesystem> <dir>

DESCRIPTION:
This command mounts your file system on a directory on your Linux or macOS machine. After the file system is mounted, you can read and write to it just like a local disk.

You can mount the same file system on as many Linux or macOS machines as you need. Your license always scales to the number of mounts you need; you are not limited to the number of mounts included with your plan.

NOTE: The mount command needs to run as root. It runs in the foreground if “mount” is provided, and runs in the background otherwise.

<filesystem>
A globally unique, non-secret file system name. (Required)
For S3, you can optionally add the “s3://” prefix, e.g. s3://myfs.
For GCS, you can optionally add the “gs://” prefix, e.g. gs://myfs.
For on-premise object store, you can also specify an endpoint directly with the “http://” prefix, e.g. http://s3.example.com/foo.
The filesystem can end with @<timestamp> for mounting snapshots.
<dir>
Directory (full path name) on your machine to mount your file system. (Required)
This directory should be an existing empty directory.

General Mount Options

-o env=<dir>
Load environment variables from directory <dir>. See environment variable section.

Mount Point Options

-o acl | noacl
Enable/disable Access Control Lists. (Default: Disable, see ACL section)
-o dev | nodev
Allow block and character devices. (Default: Allow)
-o diratime | nodiratime
Update / Don’t update directory access time. (Default: Update)
-o exec | noexec
Allow binary execution. (Default: Allow)
-o export | noexport
Enable / disable restart support for NFS or Samba exports. (Default: Disable)
-o filehole=<size>
Set the maximum allowed file hole size (mainly used for SMB export). Value can be specified using SI/IEC prefixes, e.g. 64GiB, 100GB, 1T etc. (Default: 64GiB)
-o fsavail=<size>
Set the reported available filesystem space. Value can be specified using SI/IEC prefixes, e.g. 100TB, 10.5TiB, 1.5PB, 3PiB, etc.
-o nonempty
Allow mounting on non-empty directory. (Default: Disable)
-o rdirplus | nordirplus
Enable / disable enhanced readdir where stat info is returned with readdir calls. (Default: Enable)
-o ro | rw
Read-only / Read-write file system. (Default: Read-write)
-o strictatime | relatime | noatime
Update / Smart update / Don’t update access time. (Default: Smart update)
-o suid | nosuid
Allow / Disallow suid bits. (Default: Allow)
-o writedelay | nowritedelay
Enable/disable delayed writes. Allow the kernel to delay writes to the filesystem. Mainly applicable for some single-mount backup workloads that correctly use fsyncs, such as RMAN. Not recommended for multi-mount setups or most workloads since it can cause slower write performance. (Default: Disable)

File System Mount Options

-o bulkdata | nobulkdata
Enable / disable bulk data mode. The bulk data mode improves the storage layout during high write activity to improve filesystem performance. To improve performance, the bulk data mode will use more PUT requests. The bulk data mode is useful during initial data upload and when there are lots of writes to the filesystem. (Default: Enable)
-o clean[=1|=2] | noclean
Set level for / disable storage cleaner. The storage cleaner reclaims storage from deleted snapshots together with the compaction process. 1 is for the standard cleaner and 2 is for the cleaner+. (Default: standard cleaner) (For cleaner+ details, see this doc)
-o compact[=<level>] | nocompact
Set level for / disable background index compaction. (Default: Enable) (details in Compaction section)
-o fuse_conn=<NUM>
Set the max background FUSE connections. Range is 1 to 1024. (Default: 96)
-o freebw | nofreebw | autofreebw
Regulates the network bandwidth usage for compaction. Use freebw when the network bandwidth is free (e.g. an on-premise object store, or an EC2 instance connecting directly to an S3 bucket in the same region). Use nofreebw when the network bandwidth is charged. autofreebw enables freebw when it detects that it is on an EC2 instance in the same region as the S3 bucket.
Important: when using freebw and autofreebw, verify that there is no routing through a paid network path such as a NAT gateway, to avoid incurring bandwidth charges. Enabling freebw will incur extra AWS data transfer charges when running outside of the S3 bucket region. (Default: nofreebw) [6.8 release and newer]
-o hpc | nohpc
Enable / disable high performance computing mode. Enabling hpc prefers throughput over latency when possible, e.g. by sending larger writes to the object store and doing larger read ahead steps. (Default: Disable)
-o mboost[=<minutes>] | nomboost
Enable / disable memory index reduction. This feature will trade off performance for lower memory usage for larger filesystems. The mboost setting specifies how long certain data is allowed to stay in memory before being considered for memory reduction. The range is 10 to 10080 minutes (60 if not specified). (Default: Disable)
-o mkdir[=<mode>]
Create the mount directory if it does not exist. Optionally specify the directory permission in octal format, subject to umask. Default permission: 755. Enabled by default on macOS when mounting on /Volumes. [7.0 release and newer]
-o mt | mtplus | nomt | cputhreads=<N> | iothreads=<N>
Specify multithreading options. (details in Multithreading section)
-o nomem=<spin|stop>
Set mount behavior when unable to allocate memory from the system. If spin is set, the mount will wait until memory is available. If stop is set, the mount will exit and accesses to the mount will return error. In both cases, a log message will be sent to syslog. Spin is useful in situations where memory can be freed (e.g. OOM killer) or added (e.g. swap). Stop may be useful for generic nodes behind a load balancer. (Default: spin)
-o ocache | noocache
Enable / disable caching in object store. When enabled, the object store cache (if generated) will shorten the mount time for medium to large filesystems. This cache will only be generated or updated on read-write mounts that have been mounted for at least 10 minutes. (Default: Enable)
-o oob | nooob
Enable / disable the out of band flag. Useful if your nodes communicate information about newly created files out of band and not through the filesystem.
-o oom | nooom
Linux only. Enable / disable oom protection to reduce the likelihood of being selected by the oom killer. To be exempt from the oom killer (use with care), you can specify oom twice. (Default: Enable)
-o ratelimit | noratelimit
Enable / disable the built-in request rate-limiter. The built-in request rate-limiter is designed to prevent runaway programs from running up the S3 bill. (Default: Enable)
-o retry=<seconds>
Duration in seconds to retry connection to the object store if the connection could not be established upon start up. Range is 0 (no retry) to 31536000 (Default: 60 for background mount, 3 for foreground mount)
-o snapshots | nosnapshots
Enable / disable generation of automatic snapshots from this mount point. (Default: Enable)

WHAT YOU NEED:

  • Your ObjectiveFS environment directory is set up (see Config section).
  • The file system name (see Create section for creating a file system)
  • An empty directory to mount your file system

EXAMPLES:
Assumptions:
1. /ofs is an existing empty directory
2. Your passphrase is in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE

A. Mount an S3 file system in the foreground

$ sudo mount.objectivefs mount myfs /ofs

B. Mount a GCS file system with a different env directory, e.g. /home/ubuntu/.ofs.env

$ sudo mount.objectivefs mount -o env=/home/ubuntu/.ofs.env gs://myfs /ofs

C. Mount an S3 file system in the background

$ sudo mount.objectivefs s3://myfs /ofs

D. Mount a GCS file system with non-default options
Assumption: /etc/objectivefs.env contains GCS keys.

$ sudo mount.objectivefs -o nosuid,nodev,noexec,noatime gs://myfs /ofs

TIPS:

  • Control-C on a foreground mount will stop the file system and try to unmount it. To properly unmount the filesystem, use umount.
  • You can mount your filesystem without needing to manually enter your passphrase by storing your passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE. Please verify that your passphrase file’s permission is the same as other files in the environment directory (e.g. OBJECTIVEFS_LICENSE).
  • If your machine sleeps while the file system is mounted, the file system will remain mounted and working after it wakes up, even if your network has changed. This is useful for a laptop that is used in multiple locations.
  • To run with another ObjectiveFS environment directory, e.g. /home/ubuntu/.ofs.env:
$ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs mount myfs /ofs

Mount on Boot

ObjectiveFS supports Mount on Boot, where your directory is mounted automatically upon reboot.

WHAT YOU NEED:

A. Linux

  1. Check that /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE exists so you can mount your filesystem without needing to enter the passphrase.
    If it doesn’t exist, create the file with your passphrase as the content. (see details)

  2. Add a line to /etc/fstab with:

    <filesystem> <mount dir> objectivefs auto,_netdev  0 0 
    _netdev is used by many Linux distributions to mark the file system as a network file system.

  3. To specify mount options in /etc/fstab, add the mount options separated by commas after _netdev.

    <filesystem> <mount dir> objectivefs auto,_netdev,mt,ocache 0 0 

  4. For more details, see Mount on Boot Setup Guide for Linux.
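Putting steps 2 and 3 together, a complete /etc/fstab entry for a hypothetical filesystem s3://myfs mounted at /ofs (both names illustrative) could look like:

```
s3://myfs  /ofs  objectivefs  auto,_netdev  0 0
```

After adding the entry, running `sudo mount /ofs` should mount the filesystem without a reboot, assuming the passphrase file from step 1 is in place.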

B. macOS

macOS can use launchd to mount on boot. See Mount on Boot Setup Guide for macOS for details.


Unmount

SUMMARY: Unmounts your file system on your Linux or macOS machines

USAGE:

sudo umount <dir>

DESCRIPTION:
To unmount your filesystem, run umount with the mount directory. Typing Control-C in the window where ObjectiveFS is running in the foreground will stop the filesystem, but may not unmount it. Please run umount to properly unmount the filesystem.

WHAT YOU NEED:

  • No application accessing any file or directory in the file system

EXAMPLE:

$ sudo umount /ofs

Destroy

IMPORTANT: After a destroy, there is no way to recover any files or data.

SUMMARY: Destroys your file system. This is an irreversible operation.

USAGE:

sudo mount.objectivefs destroy <filesystem>

DESCRIPTION:
This command deletes your file system from your object store. Please make sure that you really don’t need your data anymore because this operation cannot be undone.

You will be prompted for the authorization code available on your user profile page. This code changes periodically. Please refresh your user profile page to get the latest code.

<filesystem>
The file system that you want to destroy. (Required)

NOTE:
Your file system should be unmounted from all of your machines before running destroy.

WHAT YOU NEED:

  • Your ObjectiveFS environment directory is set up (see Config section).
  • Your authorization code from your user profile page. (Note: the authorization code changes periodically. To get the latest code, please refresh your user profile page.)

EXAMPLE:

$ sudo mount.objectivefs destroy s3://myfs  
*** WARNING ***
The filesystem 's3://myfs' will be destroyed. All data (550 MB) will be lost permanently!
Continue [y/n]? y
Authorization code: <your authorization code>

Settings

This section covers the options you can run ObjectiveFS with.

Regions and Endpoints

ObjectiveFS supports all regions and endpoints of supported object stores. You can specify the region when creating your filesystem.

REFERENCE:
AWS S3 regions and endpoints, GCS regions


Environment Variables

ObjectiveFS uses environment variables for configuration. You can set them using any standard method (e.g. on the command line, in your shell). We also support reading environment variables from a directory.

The filesystem settings specified by the environment variables are set at start up. To update the settings (e.g. change the memory cache size, enable disk cache), please unmount your filesystem and remount it with the new settings (exception: manual rekeying).

A. Environment Variables from Directory
ObjectiveFS supports reading environment variables from files in a directory, similar to the envdir tool from the daemontools package.

Your environment variables are stored in a directory. Each file in the directory corresponds to an environment variable, where the file name is the environment variable name and the first line of the file content is the value.

SETUP:
The Config command sets up your environment directory with three common environment variables:

  • AWS_ACCESS_KEY_ID or ACCESS_KEY
  • AWS_SECRET_ACCESS_KEY or SECRET_KEY
  • OBJECTIVEFS_LICENSE

You can also add additional environment variables in the same directory using the same format: the file name is the environment variable name and the first line of the file content is the value.

EXAMPLE:

$ ls /etc/objectivefs.env/
AWS_ACCESS_KEY_ID  AWS_SECRET_ACCESS_KEY  OBJECTIVEFS_LICENSE  OBJECTIVEFS_PASSPHRASE
$ cat /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE
your_objectivefs_passphrase
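Note that only the first line of each file is used as the value; any later lines are ignored. A quick illustration of this convention using a throwaway directory (the directory and the extra comment line are invented for this example):

```shell
ENVDIR="$(mktemp -d)"   # throwaway stand-in for /etc/objectivefs.env

# Only the first line counts as the value.
printf 'us-west-2\n# preferred region for this host\n' > "$ENVDIR/AWS_DEFAULT_REGION"

# What would be picked up as the value:
head -1 "$ENVDIR/AWS_DEFAULT_REGION"
```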

B. Environment Variables on Command Line
You can also set the environment variables on the command line. The user-provided environment variables will override the environment directory’s variables.

USAGE:

sudo [<ENV VAR>='<value>'] mount.objectivefs 

EXAMPLE:

$ sudo CACHESIZE=30% mount.objectivefs myfs /ofs

SUPPORTED ENVIRONMENT VARIABLES

To enable a feature, set the corresponding environment variable and remount your filesystem (exception: manual rekeying).

ACCESS_KEY
Synonym for AWS_ACCESS_KEY_ID [7.0 release and newer]
ACCOUNT
Username of the user to run as. Root privileges will be dropped after startup.
AWS_ACCESS_KEY_ID
Your object store access key. Synonym for ACCESS_KEY. (Required or use AWS_METADATA_HOST)
AWS_DEFAULT_REGION
The default object store region to connect to. Synonym for REGION.
AWS_METADATA_HOST
AWS STS host publishing session keys (for EC2 set to “169.254.169.254”). Sets and rekeys AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SECURITY_TOKEN.
AWS_SECRET_ACCESS_KEY
Your secret object store key. (Required or use AWS_METADATA_HOST)
AWS_SECURITY_TOKEN
Session security token when using AWS STS.
AWS_SERVER_SIDE_ENCRYPTION
Server-side encryption with AWS KMS support. (Enterprise plan feature) (see Server-side Encryption section)
AWS_TRANSFER_ACCELERATION
Set to 1 to use the AWS S3 acceleration endpoint. (Enterprise plan feature) (see S3 Transfer Acceleration section)
CACHESIZE
Set cache size as a percentage of memory (e.g. 30%) or an absolute value (e.g. 500M or 1G). (Default: 20%) (see Memory Cache section)
DISKCACHE_SIZE
Enable and set disk cache size and optional free disk size. (see Disk Cache section)
DISKCACHE_PATH
Location of disk cache when disk cache is enabled. (see Disk Cache section)
DNSCACHEIP
IP address of recursive name resolver. (Default: use /etc/resolv.conf)
ENDPOINT
Specify the object store endpoint directly. Format: http[s]://example.com[:port] [7.0 release and newer]
http_proxy
HTTP proxy server address. (see HTTP Proxy section)
NAMESPACE
Specify namespace for object stores that use namespace [7.0 release and newer]
IDMAP
User ID and Group ID mapping. (see UID/GID Mapping section)
OBJECTIVEFS_ENV
Directory to read environment variables from. (Default: /etc/objectivefs.env)
OBJECTIVEFS_LICENSE
Your ObjectiveFS license key. (Required)
OBJECTIVEFS_PASSPHRASE
Passphrase for filesystem. You can also use a secrets management program to retrieve your passphrase by specifying #!<full-path-to-program> (see details) (Default: will prompt)
OBJECTSTORE
Specify the default object store (see table in the config section) [7.0 release and newer]
PATHSTYLE
Select path style addressing for object stores that do not use domain style addressing. 1 to enable, 0 to disable. (Default: based on object store) [7.0 release and newer]
REGION
Synonym for AWS_DEFAULT_REGION [7.0 release and newer]
SECRET_KEY
Synonym for AWS_SECRET_ACCESS_KEY [7.0 release and newer]
SIGNATURE
Specify the signature scheme to use with the object store. Selecting azure will also use Azure API. Options: v2 (or s3v2), v4 (or s3v4), azure. (Default: based on object store) [7.0 release and newer]
SSL_CERT_DIR
Specify directories to locate SSL certificates. Multiple directories can be specified separated by ":". If set to empty, do not include OS standard certificate directories. [7.0 release and newer]
SSL_CERT_FILE
Load SSL certificate from the specified file. If set to empty, do not include OS standard certificate files. [7.0 release and newer]
STUNNEL
Tunnel proxy support for object stores that do not work with proxied requests. Set this variable to 1 if your proxy has a fixed single destination.
TLS
Enable/Disable using https connection to object store. 1 to Enable, 0 to Disable. (Default: based on object store) [7.0 release and newer]
TLS_CIPHERS
Select which ciphers can be used. Set to secure (or default), compat, legacy, insecure (or all). Can also use the openssl cipher list format with cipher strings to specify a more detailed cipher list. [7.0 release and newer]
TLS_NORESUME
When set, disables TLS session resumption. (Default: resume TLS session if object store supports) [7.0 release and newer]
TLS_NOSTANDARD_CERT
When set, does not use standard object store root certificates. [7.0 release and newer]
TLS_NOVERIFY_CERT
When set, disables certificate verification. [7.0 release and newer]
TLS_NOVERIFY_NAME
When set, disables server name verification. [7.0 release and newer]
TLS_NOVERIFY_TIME
When set, disables certificate valid expiration verification. [7.0 release and newer]
TLS_PROTOCOL
Select which TLS protocol versions can be used. Set using a comma- or colon-separated list of tlsv1.0, tlsv1.1, tlsv1.2, tlsv1.3, all (or legacy), secure (or default). Using "!" in front of an entry removes it instead of adding it. [7.0 release and newer]
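The OBJECTIVEFS_PASSPHRASE entry above mentions retrieving the passphrase through a program specified as #!<full-path-to-program>. A minimal sketch of that layout: the helper script name, its location, and the placeholder passphrase below are all hypothetical, and a real helper would query your secrets manager instead of echoing a constant:

```shell
ENVDIR="$(mktemp -d)"   # stand-in for /etc/objectivefs.env

# Hypothetical helper that would fetch the passphrase from a secrets manager.
cat > "$ENVDIR/get-passphrase" <<'EOF'
#!/bin/sh
# In practice: query your secrets manager here.
echo placeholder-passphrase
EOF
chmod 700 "$ENVDIR/get-passphrase"

# Point OBJECTIVEFS_PASSPHRASE at the program with the #! prefix.
printf '#!%s\n' "$ENVDIR/get-passphrase" > "$ENVDIR/OBJECTIVEFS_PASSPHRASE"

cat "$ENVDIR/OBJECTIVEFS_PASSPHRASE"
"$ENVDIR/get-passphrase"
```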

Features

Memory Cache

DESCRIPTION:
ObjectiveFS uses memory to cache data and metadata locally to improve performance and to reduce the number of S3 operations.

USAGE:
Set the CACHESIZE environment variable to one of the following:

  • as a percentage of memory (e.g. 30%)
  • the actual memory size (e.g. 500M or 2G)

The memory cache size can be specified using SI/IEC prefixes, and decimal values are supported (5.5 release or newer):

  • base-1000: M, MB, G, GB, T, TB
  • base-1024: Mi, MiB, Gi, GiB, Ti, TiB

A minimum of 64MiB will be used for CACHESIZE.

DEFAULT VALUE:
If CACHESIZE is not specified, the default is 20% of memory for machines with 3GB+ memory or 5%-20% (with a minimum of 64MB) for machines with less than 3GB memory.
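As a quick sanity check of the default: on a machine with 8 GiB (8192 MiB) of memory, the 20% default works out to roughly 1638 MiB of cache. The figures below are illustrative arithmetic only:

```shell
total_mib=8192                      # illustrative machine memory in MiB
cache_mib=$((total_mib * 20 / 100)) # 20% default for machines with 3GB+ memory
echo "${cache_mib} MiB"
```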

DETAILS:

The cache size is applicable per mount. If you have multiple ObjectiveFS file systems on the same machine, the total cache memory used by ObjectiveFS will be the sum of the CACHESIZE values for all mounts.

The memory cache is one component of the ObjectiveFS memory usage. The total memory used by ObjectiveFS is the sum of:
a. memory cache usage (set by CACHESIZE)
b. index memory usage (based on the number of S3 objects/filesystem size), and
c. kernel memory usage

Caching statistics, such as the cache hit rate and kernel memory usage, are sent to the log. The memory cache setting is also sent to the log when the file system is mounted.

EXAMPLES:
A. Set memory cache size to 30%

$ sudo CACHESIZE=30% mount.objectivefs myfs /ofs

B. Set memory cache size to 2GB

$ sudo CACHESIZE=2G mount.objectivefs myfs /ofs


Disk Cache

DESCRIPTION:
ObjectiveFS can use local disks to cache data and metadata locally to improve performance and to reduce the number of S3 operations. Once the disk cache is enabled, ObjectiveFS handles the operations automatically, with no additional maintenance required from the user.

The disk cache is compressed, encrypted and has strong integrity checks. It is robust: it can be copied between machines and even manually deleted while in active use. So, you can rsync the disk cache between machines to warm the cache or to update the content.

Since the disk cache’s content persists when your file system is unmounted, you get the benefit of fast restart and fast access when you remount your file system.

ObjectiveFS will always keep some free space on the disk, by periodically checking the free disk space. If your other applications use more disk space, ObjectiveFS will adjust and use less by shrinking its cache.

Multiple file systems on the same machine can share the same disk cache without crosstalk, and they will collaborate to keep the most used data in the disk cache.

A file called CACHEDIR.TAG is created by default in the DISKCACHE_PATH directory upon filesystem mount. Standard backup software will exclude the disk cache directory when this file is present. If you wish to back up the disk cache directory, you can overwrite the content of the CACHEDIR.TAG file.

RECOMMENDATION:
We recommend enabling the disk cache when a local SSD or hard drive is available. For EC2 instances, we recommend using the local SSD instance store instead of EBS, because EBS volumes may run into an operations limit depending on the volume size. (See how to mount an instance store on EC2 for disk cache).

USAGE:
The disk cache uses DISKCACHE_SIZE and DISKCACHE_PATH environment variables (see environment variables section for how to set environment variables). To enable disk cache, set DISKCACHE_SIZE.

DISKCACHE_SIZE

  • Accepts values in the form <DISK CACHE SIZE>[:<FREE SPACE>].
  • <DISK CACHE SIZE>:

    • Set to the actual space you want ObjectiveFS to use (e.g. 20G or 1T). In 6.9 release and newer, it can also be specified as a percentage.
    • If this value is larger than your actual disk (e.g. 1P), ObjectiveFS will try to use as much space as possible on the disk while preserving the free space.
  • <FREE SPACE> (optional):

    • Set to the amount of free space you want to keep on the volume (e.g. 5G). In 6.9 release and newer, it can also be specified as a percentage.
    • The default value is 3G.
    • When it is set to 0G, ObjectiveFS will try to use as much space as possible (useful for dedicated disk cache partition).
  • The free space value takes precedence over the disk cache size. The actual disk cache size is the smaller of <DISK CACHE SIZE> and (total disk space - <FREE SPACE>).

  • Both disk cache size and free space values can be specified using SI/IEC prefixes and as decimal values:

    • base-1000: M, MB, G, GB, T, TB, P, PB
    • base-1024: Mi, MiB, Gi, GiB, Ti, TiB, Pi, PiB
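
The precedence rule above can be sketched as follows (illustrative, not ObjectiveFS's implementation): the effective cache size is the smaller of the requested size and what the disk can hold after reserving the free space.

```python
# Effective disk cache size under the free-space precedence rule.
def effective_disk_cache(requested: int, free_space: int, disk_total: int) -> int:
    # free space wins: never grow the cache into the reserved space
    return min(requested, disk_total - free_space)

GB = 10**9
# DISKCACHE_SIZE=20G:10G on a 25 GB volume: only 15 GB can be used
print(effective_disk_cache(20 * GB, 10 * GB, 25 * GB))  # 15000000000
# DISKCACHE_SIZE=1P:0G on the same volume: use the whole disk
print(effective_disk_cache(10**15, 0, 25 * GB))         # 25000000000
```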

DISKCACHE_PATH

  • Specifies the location of the disk cache.
  • Default location:
    macOS: /Library/Caches/ObjectiveFS
    Linux: /var/cache/objectivefs

DEFAULT VALUE:
Disk cache is disabled when DISKCACHE_SIZE is not specified.

EXAMPLES:
A. Set disk cache size to 20GB and use default free space (3GB)

$ sudo DISKCACHE_SIZE=20G mount.objectivefs myfs /ofs

B. Use as much space as possible for disk cache and keep 10GB free space

$ sudo DISKCACHE_SIZE=1P:10G mount.objectivefs myfs /ofs

C. Set disk cache size to 20GB and free space to 10GB and specify the disk cache path

$ sudo DISKCACHE_SIZE=20G:10G DISKCACHE_PATH=/var/cache/mydiskcache mount.objectivefs myfs /ofs

D. Use the entire space for disk cache (when using dedicated volume for disk cache)

$ sudo DISKCACHE_SIZE=1P:0G mount.objectivefs myfs /ofs

WHAT YOU NEED:

  • Local instance store or SSD mounted on your DISKCACHE_PATH (default: /var/cache/objectivefs)

TIPS:

  • Different file systems on the same machine can point to the same disk cache. They can also point to different locations by setting different DISKCACHE_PATH.
  • The DISKCACHE_SIZE is per disk cache directory. If multiple file systems are mounted concurrently using different DISKCACHE_SIZE values and point to the same DISKCACHE_PATH, ObjectiveFS will use the minimum disk cache size and the maximum free space value.
  • A background disk cache clean-up will keep the disk cache size within the specified limits by removing the oldest data.
  • The disk cache warming guide describes several ways to warm the disk cache.
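
The shared-directory rule in the second tip can be sketched with a hypothetical helper: concurrent mounts pointing at the same DISKCACHE_PATH resolve to the minimum disk cache size and the maximum free space.

```python
# Resolve DISKCACHE settings for mounts sharing one cache directory.
# Each mount contributes a (size, free_space) pair in bytes.
def shared_cache_settings(mounts: list[tuple[int, int]]) -> tuple[int, int]:
    sizes = [size for size, _ in mounts]
    frees = [free for _, free in mounts]
    return min(sizes), max(frees)  # smallest size, largest reserve

GB = 10**9
# Mounts using 20G:3G and 50G:10G share one path ->
# effective settings are 20G cache size with 10G free space kept
print(shared_cache_settings([(20 * GB, 3 * GB), (50 * GB, 10 * GB)]))
```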


Snapshots

ObjectiveFS supports two types of snapshots: automatic snapshots and checkpoint snapshots. Automatic snapshots are managed by ObjectiveFS and are taken based on the snapshot schedule while your filesystem is mounted. Checkpoint snapshots can be taken at any time, and the filesystem does not need to be mounted. Snapshots can be mounted as a read-only filesystem to access your filesystem as it was at that point in time. This is useful for recovering accidentally deleted or modified files, or as the source for creating a consistent point-in-time backup.

Snapshots are not backups since they are part of the same filesystem and are stored in the same bucket. A backup is an independent copy of your data stored in a different location.

Snapshots can be useful for backup. To create a consistent point-in-time backup, you can mount a recent snapshot and use it as the source for backup, instead of running backup from your live filesystem. This way, your data will not change while the backup is in progress.

Snapshots are fast to take. The storage used for each snapshot is incremental, only the differences between two snapshots are stored. If there are multiple mounts of the same filesystem, only one automatic snapshot will be generated at a given scheduled time.

Activate Snapshots

Snapshots are automatically activated upon initial mount of the filesystem, unless the nosnapshots mount option is used.

Legacy filesystems note: For older filesystems created before ObjectiveFS 5.0, snapshots are activated when the filesystem is mounted with ObjectiveFS 5.0 or newer. Upon activation, ObjectiveFS will perform a one-time identification of existing snapshots and activate them if available.

You can use the mount.objectivefs list command to identify filesystems with activated snapshots by checking the SNAP field of the output. If snapshots have been activated on this filesystem, the SNAP column will show on. Otherwise, it will show -.

Create Snapshots

A. Create Automatic Snapshots
Automatic snapshots are managed by ObjectiveFS and are taken based on the snapshot schedule while your filesystem is mounted. If your filesystem is not mounted, automatic snapshots will not be taken since there are no changes to the filesystem.

Automatic snapshots are taken and retained based on the snapshot schedule. Older automatic snapshots are automatically removed to maintain the number of snapshots per interval.

A.1. Snapshot Schedule Format
The snapshot schedule can be specified as zero or more snapshot intervals, each specifying the number of snapshots at a specific interval, with the following format:

<number of snapshots>@<interval length><interval unit> 

Multiple snapshot intervals can be specified separated by space, with the intervals in increasing order. The number of snapshots and intervals should be positive integers.

The supported snapshot units include minutes, hours, days and weeks. For months, quarters and years, both ‘Simple’ units (4-week based) and ‘Average’ units (365.25-day based) are supported. If a calendar-based schedule is needed (e.g. last day of each month), please see the Snapshots FAQ.

Snapshot Interval Unit Description
  m Minute
  h Hour
  d Day
  w Week
  n Simple month (4 weeks)
  q Simple quarter (12 weeks)
  y Simple year (48 weeks)
  M Average month (365.25 days / 12)
  Q Average quarter (365.25 days / 4)
  Y Average year (365.25 days)

See the Snapshots Reference Guide for snapshot schedule examples.
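
As an illustration, the sketch below (hypothetical, not the ObjectiveFS parser) validates a schedule string against the rules above: positive integer counts and lengths, intervals in increasing order, and at most 900 snapshots in total (the limit for custom schedules).

```python
import re

# Interval units in minutes; n/q/y are the "Simple" 4-week-based units,
# M/Q/Y are the "Average" 365.25-day-based units.
MIN_PER = {"m": 1, "h": 60, "d": 1440, "w": 10080,
           "n": 4 * 10080, "q": 12 * 10080, "y": 48 * 10080,
           "M": 365.25 * 1440 / 12, "Q": 365.25 * 1440 / 4,
           "Y": 365.25 * 1440}

def validate_schedule(schedule: str) -> bool:
    last = 0.0
    total = 0
    for part in schedule.split():
        m = re.fullmatch(r"(\d+)@(\d+)([mhdwnqyMQY])", part)
        if not m:
            return False
        count, length, unit = int(m[1]), int(m[2]), m[3]
        if count < 1 or length < 1:
            return False
        interval = length * MIN_PER[unit]
        if interval <= last:          # intervals must be increasing
            return False
        last = interval
        total += count
    return total <= 900               # custom-schedule snapshot limit

print(validate_schedule("72@10m 72@1h 32@1d 16@1w"))  # the default schedule
```

Note that an empty string passes validation, matching the "disable automatic snapshots" usage described below.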

A.2. Default Snapshot Schedule
The table below shows the default automatic snapshot schedule. This schedule would be specified as 72@10m 72@1h 32@1d 16@1w.

Snapshot Interval Number of Snapshots
  10-minute 72  
  hourly 72  
  daily 32  
  weekly 16  

A.3. Custom Snapshot Schedule
Team, Corporate and Enterprise Plans Feature
You can specify a custom automatic snapshot schedule with ObjectiveFS 7.2 and newer. The custom schedule can be set using the following command:

mount.objectivefs snapshot -s "<schedule>" <filesystem>

A new custom schedule will remove any existing automatic snapshots that do not match the new schedule. If you wish to keep some of the automatic snapshots, you can convert them into checkpoint snapshots by mounting the relevant snapshots.

Custom snapshots are compatible with older ObjectiveFS versions and the snapshot schedule can be updated without restarting any mounts. ObjectiveFS 7.2 is only needed to set the custom schedule. Existing mounts will pick up the new custom schedule within a few hours.

Custom snapshots can use any combination of intervals in increasing order, as long as the total number of snapshots does not exceed 900.

See the Snapshots Reference Guide for snapshot schedule examples.

A.4. Useful Snapshot Schedule Commands
See the Snapshots Reference Guide for more details and examples.

a. Show current snapshot schedule of the filesystem

mount.objectivefs snapshot -l <filesystem>

b. Validate that a snapshot schedule is valid (useful when developing a new snapshot schedule before deployment)

mount.objectivefs snapshot -s <schedule>

c. Generate a sample snapshot schedule

mount.objectivefs snapshot -vs <schedule>

A.5. Disabling Automatic Snapshots
To disable automatic snapshots, you can set the snapshot schedule to an empty string.

mount.objectivefs snapshot -s "" <filesystem>

B. Create Checkpoint Snapshots
Team, Corporate and Enterprise Plans Feature

sudo mount.objectivefs snapshot <filesystem>

Checkpoint snapshots are manual point-in-time snapshots that can be taken at any time, even if your filesystem is not mounted. Checkpoint snapshots are useful for creating a snapshot right before making large changes to the filesystem. They can also be useful if you need snapshots at a specific time of the day or for compliance reasons.

There is no limit to the number of checkpoint snapshots you can take. The checkpoint snapshots are kept until they are explicitly removed by the destroy command.

List Snapshots

DESCRIPTION:

You can list the snapshots for your filesystem by running the list -sz command (snapshot timestamps in UTC, recommended) or the list -s command (snapshot timestamps in local time). The snapshot listing shows both automatic (auto in the SNAP column) and checkpoint (manual in the SNAP column) snapshots.

Snapshots have the format <filesystem>@<time>. You can list all snapshots for a filesystem or only snapshots matching a specific time prefix (see examples below).

USAGE:

mount.objectivefs list -sz [<filesystem>[@<time>]]

EXAMPLES:

a. List all snapshots for myfs

# mount.objectivefs list -sz myfs@

b. List all snapshots for myfs that match 2024-02-21 in UTC

# mount.objectivefs list -sz myfs@2024-02-21

c. List all snapshots for myfs that match 2024-02-21T12 in UTC

# mount.objectivefs list -sz myfs@2024-02-21T12


Mount Snapshots

DESCRIPTION:

Snapshots can be mounted to access the filesystem as it was at that point in time. When a snapshot is mounted, it is accessible as a read-only filesystem.

You can mount both automatic and checkpoint snapshots. When an automatic snapshot is mounted, a checkpoint snapshot with the same timestamp is created to prevent the snapshot from being automatically removed in case its retention schedule expires while it is mounted. These checkpoint snapshots, when created for data recovery purposes only, are also included in all plans.

Snapshots can be mounted using the same keys used for mounting the filesystem. If you choose to use a different key, you only need read permission for mounting checkpoint snapshots, but need read and write permissions for mounting automatic snapshots.

A snapshot mount is a regular mount and will be counted as an ObjectiveFS instance while it is mounted.

USAGE:

sudo mount.objectivefs [-o <options>] <filesystem>@<time> <dir>
<filesystem>@<time>
The snapshot for the filesystem at a particular time. The time can be specified as local time or UTC (ends with Z) in the ISO8601 format. You can use the list snapshots command to get the list of available snapshots for your filesystem.
<dir>
Directory (full path name) to mount your file system snapshot.
This directory should be an existing empty directory.
<options>
You can also use the same mount options as mounting your filesystem (some of them will have no effect since it is a read-only filesystem).

EXAMPLES:

A. Mount a snapshot without additional mount options on /ofssnap

$ sudo mount.objectivefs myfs@2024-02-08T00:30:00Z /ofssnap

B. Mount a snapshot with multithreading enabled

$ sudo mount.objectivefs -omt myfs@2024-02-08T00:30:00Z /ofssnap


Unmount Snapshots

Same as the regular unmount command.

Destroy Snapshots

To destroy a snapshot, use the regular destroy command with <filesystem>@<time>. Time should be specified in ISO8601 format (e.g. 2024-02-10T10:10:00) and can either be local time or UTC (ends with Z). Both automatic and checkpoint snapshots matching the timestamp will be removed.

sudo mount.objectivefs destroy <filesystem>@<time>

EXAMPLE:

$ sudo mount.objectivefs destroy s3://myfs@2024-02-20T02:27:54Z  
*** WARNING ***
The snapshot 's3://myfs@2024-02-20T02:27:54Z' will be destroyed. No other changes will be done to the filesystem.
Continue [y/n]? y
Snapshot 's3://myfs@2024-02-20T02:27:54Z' destroyed.



EC2 IAM Roles

ObjectiveFS supports live rekeying which lets you update your AWS keys while keeping your filesystem mounted. With live rekeying, you don’t need to unmount and remount your filesystem to change your AWS keys. ObjectiveFS supports both automatic and manual live rekeying.

Automatic Rekeying with IAM roles

If you have attached an AWS EC2 IAM role to your EC2 instance, you can set AWS_METADATA_HOST to 169.254.169.254 to automatically rekey. With this setting, you don’t need to use the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID environment variables.

Manual Rekeying

You can also manually rekey by updating AWS_SECRET_ACCESS_KEY (or SECRET_KEY) and AWS_ACCESS_KEY_ID (or ACCESS_KEY) (and also AWS_SECURITY_TOKEN if used) in your config directory and sending SIGHUP to mount.objectivefs. The running objectivefs program (i.e. mount.objectivefs) will automatically reload the updated files and start using the updated keys.


Compaction

DESCRIPTION:
ObjectiveFS is a log-structured filesystem that uses an object store for storage. Compaction combines multiple small objects into a larger object and brings related data close together. It improves performance and reduces the number of object store operations needed for each filesystem access. Compaction is a background process and adjusts dynamically depending on your workload and your filesystem’s characteristics.

USAGE:
You can specify the compaction rate by setting the mount option when mounting your filesystem. Faster compaction increases bandwidth usage. For more about mount options, see this section.

  • nocompact - disable compaction
  • compact - enable compaction level 1
  • compact,compact - enable compaction level 2 (uses more bandwidth than level 1)
  • compact,compact,compact - enable compaction level 3 (uses more bandwidth than level 2)
  • compact=<level> - set the compaction level (see below)

COMPACTION LEVELS:

  • Higher compaction levels will use more bandwidth. We recommend using an EC2 server in the same region as your S3 bucket when using compaction level 3 or higher to save on bandwidth cost.
  • Levels 1 (low) to 3 (high) are the normal compaction levels and are recommended for regular use.
  • Levels 4 and 5 are intended for quickly updating the storage layout of an existing filesystem and can use a lot of bandwidth (up to TBs for large filesystems). These levels should be run only on a server in the same region as the S3 bucket, where data transfer is free, and require the -o noratelimit option to be enabled. For instructions on how to use levels 4 and 5, see this document.

DEFAULT:
Compaction is enabled by default. If the filesystem is mounted on an EC2 machine accessing an S3 bucket, compaction level 2 is the default. Otherwise, compaction level 1 is the default.

EXAMPLES:
A. To enable compaction level 3

$ sudo mount.objectivefs -o compact=3 myfs /ofs

B. To disable compaction

$ sudo mount.objectivefs -o nocompact myfs /ofs

TIPS:

  • You can find out the number of S3 objects for your ObjectiveFS filesystem in the IUsed column of df -i.
  • To increase the compaction rate of a filesystem, you can enable compaction on all mounts of that filesystem.
  • You can also set up a temporary extra mount with the fastest compaction level to increase the compaction rate. See the Storage Layout Performance Optimization doc.

MONITORING PROGRESS (6.8 and newer):
The compaction and cleaning statistics for a mount can be printed to the objectivefs log by sending the USR1 signal to the ObjectiveFS process id. The reported statistics are an estimate based on the information currently available to this mount and can vary between mounts. If there are writes to the filesystem, the estimated numbers might vary.

Generating the statistics can slow down the mount from the time the signal is received until the statistics are logged. Therefore, it is recommended only for dedicated cleaner mounts and not for production mounts.

Important: Do not send the USR1 signal to versions before 6.8.

Commands to print the current compaction statistics

$ ps -ef | grep mount.objectivefs
$ kill -USR1 <PID>

Command to print the compaction statistics every 10 minutes

$ while sleep 600; do kill -USR1 <PID>; done

For the compaction statistics output format, see Compaction Statistics in the logging section.


Multithreading

DESCRIPTION:
Multithreading is a performance feature that can lower latency and improve throughput for your workload. ObjectiveFS will spawn dedicated CPU and IO threads to handle operations such as data decompression, data integrity check, compaction, cleaning, disk cache accesses and updates.

USAGE:
Multithreading can be enabled using the mt or mtplus mount options. mt sets the CPU threads to 4 and the IO threads to 16. mtplus sets the CPU threads to between 6 and 32, based on the number of CPUs on the server, and the IO threads to 64.

You can also directly specify the number of dedicated CPU threads and IO threads using the cputhreads and iothreads mount options. To disable multithreading, use the nomt mount option.

The number of threads in the thread pools will scale up and down dynamically depending on the workload.

Mount options Description
-o mt sets cputhreads to 4 and iothreads to 16
-o mtplus sets cputhreads to between 6 and 32 based on the number of CPUs and iothreads to 64
-o cputhreads=<N> sets the number of dedicated CPU threads to N (min:0, max:128)
-o iothreads=<N> sets the number of dedicated IO threads to N (min:0, max:128)
-o nomt sets cputhreads and iothreads to 0

DEFAULT VALUE:
By default, there are two IO threads, one compaction thread and no extra CPU threads.

EXAMPLE:

A. Enable default multithreading option (4 cputhreads, 16 iothreads)

$ sudo mount.objectivefs -o mt <filesystem> <dir> 

B. Set CPU threads to 8 and IO threads to 16

$ sudo mount.objectivefs -o cputhreads=8,iothreads=16 <filesystem> <dir> 

C. Example fstab entry to enable multithreading plus

s3://<filesystem> <dir> objectivefs auto,_netdev,mtplus 0 0

Kernel Cache

DESCRIPTION:
Kernel cache is a performance feature that can improve re-read performance by reducing the FUSE overhead. It is available in version 6.0 and newer and can be activated together with the multithreading feature. In version 7.0 and newer, kcache+ is available for storing directories and symlinks in kernel cache. It is activated together with the multithreading feature for FUSE version 7.28 or newer.

USAGE:
To activate the kernel cache, enable the multithreading feature either by setting the mt mount option or by specifying dedicated cpu and io threads (see multithreading section). If your Linux kernel and FUSE driver support it, the kernel cache will be enabled and kcache will be printed on the starting line of the objectivefs log.


Filesystem Pool

Filesystem pool lets you have multiple file systems per bucket. Since AWS S3 has a limit of 100 buckets per account, you can use pools if you need lots of file systems.

A filesystem pool is a collection of regular filesystems to simplify the management of lots of filesystems. You can also use pools to organize your company’s file systems by teams or departments.

A file system in a pool is a regular file system. It has the same capabilities as a regular file system.

A pool is a top-level structure. This means that a pool can only contain file systems, and not other pools. Since a pool is not a filesystem, but a collection of filesystems, it cannot be mounted directly.

Reference: Managing Per-User Filesystems Using Filesystem Pool and IAM Policy

An example organization structure is:

 |
 |- myfs1                // one file system per bucket
 |- myfs2                // one file system per bucket
 |- mypool1 -+- /myfs1   // multiple file systems per bucket
 |           |- /myfs2   // multiple file systems per bucket
 |           |- /myfs3   // multiple file systems per bucket
 |
 |- mypool2 -+- /myfs1   // multiple file systems per bucket
 |           |- /myfs2   // multiple file systems per bucket
 |           |- /myfs3   // multiple file systems per bucket
 :

Create

To create a file system in a pool, use the regular create command with
<pool name>/<file system> as the filesystem argument.

sudo mount.objectivefs create [-l <region>] <pool>/<filesystem>

NOTE:

  • You don’t need to create a pool explicitly. A pool is automatically created when you create the first file system in this pool.
  • The file system will reside in the same region as the pool. Therefore, any subsequent file systems created in a pool will be in the same region, regardless of the -l <region> specification.

EXAMPLE:
A. Create an S3 file system in the default region (us-west-2)

# Assumption: your /etc/objectivefs.env contains S3 keys
$ sudo mount.objectivefs create s3://mypool/myfs

B. Create a GCS file system in the EU region

# Assumption: your /etc/objectivefs.env contains GCS keys
$ sudo mount.objectivefs create -l EU gs://mypool/myfs

List

When you list your file systems, you can distinguish a pool in the KIND column. A file system inside a pool is listed with the pool prefix.

You can also list the file systems in a pool by specifying the pool name.

sudo mount.objectivefs list [<pool name>]

EXAMPLE:
A. In this example, there are two pools, myfs-pool and myfs-poolb. The file systems in each pool are listed with the pool prefix.

$ sudo mount.objectivefs list
NAME                        KIND      REGION
s3://myfs-1                 ofs       us-west-2
s3://myfs-2                 ofs       eu-central-1
s3://myfs-pool/             pool      us-west-2
s3://myfs-pool/myfs-a       ofs       us-west-2
s3://myfs-pool/myfs-b       ofs       us-west-2
s3://myfs-pool/myfs-c       ofs       us-west-2
s3://myfs-poolb/            pool      us-west-1
s3://myfs-poolb/foo         ofs       us-west-1

B. List all file systems under a pool, e.g. myfs-pool

$ sudo mount.objectivefs list myfs-pool
NAME                        KIND      REGION
s3://myfs-pool/             pool      us-west-2
s3://myfs-pool/myfs-a       ofs       us-west-2
s3://myfs-pool/myfs-b       ofs       us-west-2
s3://myfs-pool/myfs-c       ofs       us-west-2

Mount

To mount a file system in a pool, use the regular mount command with
<pool name>/<file system> as the filesystem argument.

Run in background:

sudo mount.objectivefs [-o <opt>[,<opt>]..] <pool>/<filesystem> <dir>

Run in foreground:

sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <pool>/<filesystem> <dir>

EXAMPLES:
A. Mount an S3 file system and run the process in background

$ sudo mount.objectivefs s3://myfs-pool/myfs-a /ofs

B. Mount a GCS file system with a different env directory, e.g. /home/tom/.ofs_gcs.env, and run the process in foreground

$ sudo mount.objectivefs mount -o env=/home/tom/.ofs_gcs.env gs://mypool/myfs /ofs

Unmount

Same as the regular unmount command

Destroy

To destroy a file system in a pool, use the regular destroy command with
<pool name>/<file system> as the filesystem argument.

sudo mount.objectivefs destroy <pool>/<filesystem>

NOTE:

  1. You can destroy a file system in a pool, and other file systems within the pool will not be affected.
  2. A pool can only be destroyed if it is empty.

EXAMPLE:
Destroying an S3 file system in a pool

$ sudo mount.objectivefs destroy s3://myfs-pool/myfs-a  
*** WARNING ***
The filesystem 's3://myfs-pool/myfs-a' will be destroyed. All data (550 MB) will be lost permanently!
Continue [y/n]? y
Authorization code: <your authorization code>

ACL

DESCRIPTION:
Standard Unix permissions and basic ACLs are enabled by default. To enable full/extended ACLs, mount your filesystem with the acl mount option.

USAGE:

# mount.objectivefs -oacl <filesystem> <directory>

EXAMPLE:
A. Set extended ACL for file foo

 $ setfacl -R -m u:ubuntu:rx foo

B. Get extended ACL for file foo

 $ getfacl foo

UID/GID Mapping

Corporate and Enterprise Plans Feature

DESCRIPTION:
This feature lets you map local user ids and group ids to different ids in the remote filesystem. The id mappings should be 1-to-1, i.e. a single local id should only be mapped to a single remote id, and vice versa. If multiple ids are mapped to the same id, the behavior is undetermined.

When a uid is remapped and U* is not specified, all other unspecified uids will be mapped to the default uid: 65534 (aka nobody/nfsnobody). Similarly, all unspecified gids will be mapped to the default gid (65534) if a gid is remapped and G* is not specified.

USAGE:

IDMAP="<Mapping>[:<Mapping>]"
  where Mapping is:
    U<local id or name> <remote id>
    G<local id or name> <remote id>
    U* <default id>
    G* <default id>

Mapping Format
A. Single User Mapping:    U<local id or name> <remote id>
Maps a local user id or local user name to a remote user id.

B. Single Group Mapping:    G<local id or name> <remote id>
Maps a local group id or local group name to a remote group id.

C. Default User Mapping:    U* <default id>
Maps all unspecified local and remote users ids to the default id. If this mapping is not specified, all unspecified user ids will be mapped to uid 65534 (aka nobody/nfsnobody).

D. Default Group Mapping:    G* <default id>
Maps all unspecified local and remote group ids to the default id. If this mapping is not specified, all unspecified group ids will be mapped to gid 65534 (aka nobody/nfsnobody).

EXAMPLES:
A. UID mapping only

IDMAP="U600 350:Uec2-user 400:U* 800"
  • Local uid 600 is mapped to remote uid 350, and vice versa
  • Local ec2-user is mapped to remote uid 400, and vice versa
  • All other local uids are mapped to remote uid 800
  • All other remote uids are mapped to local uid 800
  • Group IDs are not remapped

B. GID mapping only

IDMAP="G800 225:Gstaff 400"
  • Local gid 800 is mapped to remote gid 225, and vice versa
  • Local group staff is mapped to remote gid 400, and vice versa
  • All other local gids are mapped to remote gid 65534 (aka nobody/nfsnobody)
  • All other remote gids are mapped to local gid 65534 (aka nobody/nfsnobody)
  • User IDs are not remapped

C. UID and GID mapping

IDMAP="U600 350:G800 225"
  • Local uid 600 is mapped to remote uid 350, and vice versa
  • Local gid 800 is mapped to remote gid 225, and vice versa
  • All other local uids and gids are mapped to remote 65534 (aka nobody/nfsnobody)
  • All other remote uids and gids are mapped to local 65534 (aka nobody/nfsnobody)

HTTP Proxy

Corporate and Enterprise Plans Feature

DESCRIPTION:
You can run ObjectiveFS with an http proxy to connect to your object store. A common use case is to connect ObjectiveFS to the object store via a squid caching proxy.

USAGE:
Set the http_proxy environment variable to the proxy server’s address (see environment variables section for how to set environment variables).

DEFAULT VALUE:
If the http_proxy environment is not set, this feature is disabled by default.

EXAMPLE:

Mount a filesystem (e.g. s3://myfs) with an http proxy running locally on port 3128:

$ sudo http_proxy=http://localhost:3128 mount.objectivefs mount myfs /ofs

Alternatively, you can set the http_proxy in your /etc/objectivefs.env directory

$ ls /etc/objectivefs.env
AWS_ACCESS_KEY_ID          OBJECTIVEFS_PASSPHRASE
AWS_SECRET_ACCESS_KEY      http_proxy
OBJECTIVEFS_LICENSE 

$ cat /etc/objectivefs.env/http_proxy
http://localhost:3128

Admin Mode

DESCRIPTION:
The admin mode provides an easy way to manage many filesystems in a programmatic way. You can use the admin mode to easily script the creation of many filesystems.

The admin mode lets admins create filesystems without the interactive passphrase confirmations. To destroy a filesystem, admins only need to provide a ‘y’ confirmation and don’t need an authorization code. Admins can list the filesystems, similar to a regular user. However, admins are not permitted to mount a filesystem, to separate the admin functionality and user functionality.

Operation   User Mode                                  Admin Mode
  Create    Needs passphrase confirmation              No passphrase confirmation needed
  List      Allowed                                    Allowed
  Mount     Allowed                                    Not allowed
  Destroy   Needs authorization code and confirmation  Only confirmation needed

USAGE:
For plans with the Admin Key feature, your account has an admin license key, in addition to the regular license key. Please contact support@objectivefs.com for this key.

To use admin mode, we recommend creating an admin-specific objectivefs environment directory, e.g. /etc/objectivefs.admin.env. Please use your admin license key for OBJECTIVEFS_LICENSE.

$ ls /etc/objectivefs.admin.env/
AWS_ACCESS_KEY_ID      AWS_SECRET_ACCESS_KEY  
OBJECTIVEFS_LICENSE    OBJECTIVEFS_PASSPHRASE
$ cat /etc/objectivefs.admin.env/OBJECTIVEFS_LICENSE
your_admin_license_key

You can have a separate user objectivefs environment directory, e.g. /etc/objectivefs.<user>.env, for each user to mount their individual filesystems.

EXAMPLES:

A. Create a filesystem in admin mode with credentials in /etc/objectivefs.admin.env

$ sudo OBJECTIVEFS_ENV=/etc/objectivefs.admin.env mount.objectivefs create myfs

B. Mount the filesystem as user tom in the background

$ sudo OBJECTIVEFS_ENV=/etc/objectivefs.tom.env mount.objectivefs myfs /ofs

Local License Check

Enterprise Plan Feature

DESCRIPTION:
While our regular license check is very robust and can handle multi-day outages, some companies prefer to minimize external dependencies. For these cases, we offer a local license check feature that lets you run your infrastructure independent of any license server.

USAGE:
Please talk with your enterprise support contact for instructions on how to enable the local license check on your account.


S3 Transfer Acceleration

Enterprise Plan Feature

DESCRIPTION:
ObjectiveFS supports AWS S3 Transfer Acceleration that enables fast transfers of files over long distances between your server and S3 bucket.

USAGE:
Set the AWS_TRANSFER_ACCELERATION environment variable to 1 to enable S3 transfer acceleration (see environment variables section for how to set environment variables).

REQUIREMENT:
Your S3 bucket needs to be configured to enable Transfer Acceleration. This can be done from the AWS Console.

EXAMPLES:

Mount a filesystem called myfs with S3 Transfer Acceleration enabled

$ sudo AWS_TRANSFER_ACCELERATION=1 mount.objectivefs myfs /ofs

AWS KMS Encryption

Enterprise Plan Feature

DESCRIPTION:
ObjectiveFS supports AWS Server-Side encryption using Amazon S3-Managed Keys (SSE-S3) and AWS KMS-Managed Keys (SSE-KMS).

USAGE:
Use the AWS_SERVER_SIDE_ENCRYPTION environment variable (see environment variables section for how to set environment variables).

The AWS_SERVER_SIDE_ENCRYPTION environment variable can be set to:

  • AES256  (for Amazon S3-Managed Keys (SSE-S3))
  • aws:kms  (for AWS KMS-Managed Keys (SSE-KMS) with default key)
  • <your kms key>  (for AWS KMS-Managed Keys (SSE-KMS) with the keys you create and manage)

EXAMPLES:

A. Create a filesystem called myfs with Amazon S3-Managed Keys (SSE-S3)

$ sudo AWS_SERVER_SIDE_ENCRYPTION=AES256 mount.objectivefs create myfs

B. Create a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS)

$ sudo AWS_SERVER_SIDE_ENCRYPTION=aws:kms mount.objectivefs create myfs

C. Mount a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS) using the default key

$ sudo AWS_SERVER_SIDE_ENCRYPTION=aws:kms mount.objectivefs myfs /ofs

D. Mount a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS) using a specific key

$ sudo AWS_SERVER_SIDE_ENCRYPTION=<your aws kms key> mount.objectivefs myfs /ofs

TLS / SSL

DESCRIPTION:
ObjectiveFS has built-in TLS / SSL and connects to the object store via https. TLS adds an extra layer of encryption to the object store connection; ObjectiveFS always encrypts your data with client-side encryption, both in transit and at rest. All requests are also signed using the object store’s signing policy. TLS is useful when the object store only supports TLS, when corporate policy requires https, or when https is preferred.

TLS is enabled by default for some object stores. The starting line of the objectivefs log will show the endpoint connection used for your filesystem: http or https.

The TLS implementation detects processor-specific instructions, such as the AES-NI opcodes, at runtime and will use them when available. To check the cipher used for your connection, run with -v (the verbose option) and check the TLS connection log messages.

USAGE:
Set the TLS environment variable to 1 to explicitly enable TLS. Set the TLS environment variable to 0 to disable TLS and use http connections. Leaving TLS unset will use the default for your object store (see environment variables section for how to set environment variables).

RELATED VARIABLES:
See environment variables section for description of each variable

  • SSL_CERT_DIR
  • SSL_CERT_FILE
  • TLS
  • TLS_CIPHERS
  • TLS_NORESUME
  • TLS_NOSTANDARD_CERT
  • TLS_NOVERIFY_CERT
  • TLS_NOVERIFY_NAME
  • TLS_NOVERIFY_TIME
  • TLS_PROTOCOL

REQUIREMENT:
TLS is available in ObjectiveFS version 7.0 and newer.

EXAMPLE:
Mount a filesystem called myfs with TLS enabled

$ sudo TLS=1 mount.objectivefs myfs /ofs

Logging

Log information is printed to the terminal when running in the foreground, and is sent to syslog when running in the background. On Linux, the log is typically at /var/log/messages or /var/log/syslog. On macOS, the log is typically at /var/log/system.log.
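As a sketch, ObjectiveFS messages can be filtered out of a syslog-style file like this (the temporary file stands in for the real log path on your platform, and the log lines are illustrative):

```shell
# Sketch: filtering ObjectiveFS messages out of a syslog-style log file.
# $sample stands in for /var/log/messages, /var/log/syslog or
# /var/log/system.log, depending on your platform.
sample=$(mktemp)
cat > "$sample" <<'EOF'
Jan 10 12:00:01 host kernel: unrelated message
Jan 10 12:00:02 host objectivefs[123]: objectivefs 6.6 starting [...]
Jan 10 12:05:02 host objectivefs[123]: 1403 PUT, 571 LIST, 76574 GET, 810 DELETE, 5.505 GB IN, 5.309 GB OUT
EOF
grep objectivefs "$sample"
```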

Below is a list of common log messages. For error messages, please see troubleshooting section.
A. Initial mount message
SUMMARY: This message is logged every time an ObjectiveFS filesystem is mounted and contains the settings for this mount.

FORMAT:

objectivefs <version> starting [<settings>]

DESCRIPTION:

Setting Description
acl Extended ACL is enabled
arm64 ARM64 cpu architecture
bulkdata The bulkdata mount option is enabled
cachesize <size> The memory cache size set by CACHESIZE
clean The regular cleaner is enabled. Set by the clean mount option
clean+ The cleaner+ is enabled. Set by the clean=2 mount option
compact <off | level> Compaction status: off or compaction level
cputhreads <N>/<M> Number of CPU threads/detected CPUs (see multithreading)
diskcache <path | off> The disk cache path (if enabled) or off (if disabled)
ec2 <region>/<type> An EC2 instance type in region
endpoint The endpoint used to access your object store bucket
export The export mount option is enabled
freebw The freebw mount option is enabled
fuse version <ver>  The FUSE protocol version that the kernel uses
hpc The hpc mount option is enabled
iothreads <num> Number of I/O threads (see multithreading)
kcache The kernel cache is enabled
local license Running with local license checking
mboost The mboost mount option is enabled
nooom The nooom mount option is set
noratelimit The noratelimit mount option is set
nordirplus The nordirplus mount option is set
nosnapshots The nosnapshots mount option is set
ocache The ocache mount option is enabled
oob The oob mount option is enabled
oom+ The oom+ mount option is set
project <id> Send Google Project ID header
region The region of your filesystem
shani SHA native instructions enabled
sig <v2 | v4> The object store endpoint signature version
x86-64-v[1-4] x86-64 cpu architecture version

EXAMPLE:

A. A filesystem in us-west-2 mounted with no additional mount options

objectivefs 6.6 starting [fuse version 7.26, region us-west-2, endpoint http://s3-us-west-2.amazonaws.com, cachesize 753MiB, diskcache off, clean, compact 1, cputhreads 0, iothreads 2]

B. A filesystem in eu-central-1 mounted on an EC2 instance with disk cache enabled, compaction level 3 and multithreading

objectivefs 6.6 starting [fuse version 7.26, region eu-central-1, endpoint http://s3-eu-central-1.amazonaws.com, cachesize 2457MiB, diskcache on, ec2, clean, compact 3, cputhreads 4, iothreads 8]

B. Regular log message
SUMMARY: This message is logged while your filesystem is active. It shows the cumulative number of S3 operations and bandwidth usage since the initial mount message.

FORMAT:

<put> <list> <get> <delete> <bandwidth in> <bandwidth out> <clean>

DESCRIPTION:

<put> <list> <get> <delete>
For this mount on this machine, the total number of put, list, get and delete operations to S3 or GCS since the filesystem was mounted. These numbers start at zero every time the filesystem is mounted.
<bandwidth in> <bandwidth out>
For this mount on this machine, the total incoming and outgoing bandwidth used since the filesystem was mounted. These numbers start at zero every time the filesystem is mounted.
<clean> (when the storage cleaner is enabled)
For this mount on this machine, the total compressed S3 storage reclaimed by the storage cleaner.

EXAMPLE:
1403 PUT, 571 LIST, 76574 GET, 810 DELETE, 5.505 GB IN, 5.309 GB OUT
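As an illustration, the counters can be pulled out of a regular log line with awk, e.g. to feed a monitoring script (the line below is the example above, and the field positions follow the format shown):

```shell
# Sketch: extracting per-mount counters from a regular log line.
line='1403 PUT, 571 LIST, 76574 GET, 810 DELETE, 5.505 GB IN, 5.309 GB OUT'
puts=$(echo "$line" | awk '{print $1}')   # field 1: PUT count
gets=$(echo "$line" | awk '{print $5}')   # field 5: GET count
echo "puts=$puts gets=$gets"
```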

C. Caching Statistics
SUMMARY: Caching statistics are part of the regular log message starting in ObjectiveFS v4.2. This data can be useful for tuning memory and disk cache sizes for your workload.

FORMAT:

CACHE [<cache hit> <metadata> <data> <os>], DISK [<hit>]

DESCRIPTION:

<cache hit>
Percentage of total requests that hit the memory cache (cumulative)
<metadata>
Percentage of metadata requests that hit the memory cache (cumulative)
<data>
Percentage of data requests that hit the memory cache (cumulative)
<os>
Amount of cached data referenced by the OS at the current time
DISK [<hit>]
Percentage of disk cache requests that hit the disk cache (cumulative)

EXAMPLE:
CACHE [74.9% HIT, 94.1% META, 68.1% DATA, 1.781 GB OS], DISK [99.0% HIT]
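For example, the overall memory and disk cache hit rates can be extracted from a statistics line like this (a sketch using the example line above):

```shell
# Sketch: pulling the memory and disk cache hit rates out of a
# caching-statistics line, e.g. when tuning cache sizes.
stats='CACHE [74.9% HIT, 94.1% META, 68.1% DATA, 1.781 GB OS], DISK [99.0% HIT]'
mem_hit=$(echo "$stats"  | sed -n 's/.*CACHE \[\([0-9.]*\)%.*/\1/p')
disk_hit=$(echo "$stats" | sed -n 's/.*DISK \[\([0-9.]*\)%.*/\1/p')
echo "memory=${mem_hit}% disk=${disk_hit}%"
```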

D. Compaction Statistics
SUMMARY: Compaction progress estimate. See compaction section for how to generate this output.

FORMAT:

COMPACT <progress> <layout> <objects> <active> <cleaned>

DESCRIPTION:

progress=<metadata:data>
The percentage of the filesystem’s metadata/data that has been processed by the compactor (higher is better)
layout=<metadata:data>
How close to optimal the filesystem’s metadata/data layout is (higher is better)
objects=<multiples>
The current object count compared to the target object count (lower is better, target is 1.0x)
active=<ratio>
Estimate of potential reclaimable data detected but not yet processed
cleaned=<value>
The amount of data cleaned by this mount

EXAMPLE:
COMPACT progress=95.3%:88.5% layout=94.5%:73.1% objects=1.3x active=12.0:1.2 cleaned=55.38GB

E. Error messages
SUMMARY: Error response from S3 or GCS

FORMAT:

retrying <operation> due to <endpoint> response: <S3/GCS response> [x-amz-request-id:<amz-id>, x-amz-id-2:<amz-id2>]

DESCRIPTION:

<operation>
The operation (PUT, GET, LIST or DELETE) that encountered the error
<endpoint>
The endpoint used to access your S3 or GCS bucket, typically determined by the region
<S3/GCS response>
The error response from S3 or GCS
<amz-id>
S3 only: the unique ID from Amazon S3 for the request that encountered the error. This unique ID can help Amazon troubleshoot the problem.
<amz-id2>
S3 only: the corresponding token from Amazon S3 for the request that encountered the error. Used for troubleshooting.

EXAMPLE:
retrying GET due to s3-us-west-2.amazonaws.com response: 500 Internal Server Error, InternalError, x-amz-request-id:E854A4F04A83C125, x-amz-id-2:Zad39pZ2mkPGyT/axl8gMX32nsVn
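When filing a support ticket with AWS, the request ID can be extracted from such an error line, for example (a sketch using the example line above):

```shell
# Sketch: extracting the S3 request ID from an error log line; this is the
# value to include when asking AWS to investigate a failing request.
err='retrying GET due to s3-us-west-2.amazonaws.com response: 500 Internal Server Error, InternalError, x-amz-request-id:E854A4F04A83C125, x-amz-id-2:Zad39pZ2mkPGyT/axl8gMX32nsVn'
req_id=$(echo "$err" | sed -n 's/.*x-amz-request-id:\([^,]*\).*/\1/p')
echo "x-amz-request-id=$req_id"
```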


Relevant Files

/etc/objectivefs.env
Default ObjectiveFS environment variable directory
/etc/resolv.conf
Recursive name resolvers from this file are used unless the DNSCACHEIP environment variable is set.
/var/log/messages
ObjectiveFS output log location on certain Linux distributions (e.g. RedHat) when running in the background.
/var/log/syslog
ObjectiveFS output log location on certain Linux distributions (e.g. Ubuntu) when running in the background.
/var/log/system.log
Default ObjectiveFS output log location on macOS when running in the background.

Troubleshooting

Initial Setup

403 Permission denied
Your S3 keys do not have permissions to access S3. Check that your user keys are added to a group with all S3 permissions.
./mount.objectivefs: Permission denied
mount.objectivefs needs executable permissions set. Run chmod +x mount.objectivefs

During Operation

Transport endpoint is not connected
ObjectiveFS process was killed. The most common reason is related to memory usage and the oom killer. Please see the Memory Optimization Guide for how to optimize memory usage.
Large delay for writes from one machine to appear at other machines
  1. Check that the time on these machines is synchronized. Verify that the NTP offset is small (<1 sec).
    To adjust the clock:
    On Linux: run /usr/sbin/ntpdate pool.ntp.org.
    On macOS: System Preferences → Date & Time → Set date and time automatically
  2. Check for any S3/GCS error responses in the log file
RequestTimeTooSkewed: The difference between the request time and the current time is too large.
The clock on your machine is too fast or too slow. To adjust the clock:
On Linux: run /usr/sbin/ntpdate pool.ntp.org.
On macOS: System Preferences → Date & Time → Set date and time automatically
Checksum error, bad cryptobox
The checksum error occurs when our end-to-end data integrity checker detects that the data you stored on S3 differs from the data received when it is read back. Two common causes are:
1. Your S3/GCS bucket contains non-ObjectiveFS objects. Since ObjectiveFS is a log-structured filesystem that uses the object store for storage, it expects to fully manage the content of the bucket. Copying non-ObjectiveFS files directly into the S3/GCS bucket will cause the end-to-end data integrity check to fail with this error.
To fix this, move the non-ObjectiveFS objects from this bucket.
2. You may be running behind a firewall/proxy that modifies the data in transit. Please contact support@objectivefs.com for the workaround.
Ratelimit delay
ObjectiveFS has a built-in request rate-limiter to prevent runaway programs from running up your S3 bill. The limiter is implemented as a leaky bucket that starts at 25 million GET requests and 1 million each for PUT and LIST requests, refills at a rate of 10 million GET requests and 1 million PUT/LIST requests per day, and is reset upon mount.
To explicitly disable the rate-limiter, you can use the noratelimit mount option.
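As a back-of-the-envelope illustration of the fill rate (the consumed request count below is hypothetical): at 10 million GET requests refilled per day, recovering N consumed requests takes N/10,000,000 days:

```shell
# Sketch: estimate how long the leaky bucket takes to refill N GET requests.
consumed=5000000     # hypothetical number of GET requests consumed
fill_rate=10000000   # GET requests refilled per day (from the limits above)
hours=$(awk -v n="$consumed" -v r="$fill_rate" 'BEGIN { printf "%.1f", n/r*24 }')
echo "time to refill: $hours hours"
```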
Filesystem format is too new
One likely cause of this error is when your S3/GCS bucket contains non-ObjectiveFS objects. Since ObjectiveFS is a log-structured filesystem that uses the object store for storage, it expects to fully manage the content of the bucket. Copying non-ObjectiveFS files directly into the S3/GCS bucket will cause this error to occur.
To fix this error, move the non-ObjectiveFS objects from this bucket.

Unmount

Resource busy during unmount
A directory or a file in the file system is still being accessed. Verify that no process (e.g. a shell whose working directory is inside the mount) is still accessing the file system.

Upgrades

ObjectiveFS is forward and backward compatible. Upgrading or downgrading to a different release is straightforward. You can also do rolling upgrades for multiple servers. To upgrade: install the new version, unmount and then remount your filesystem.


Questions

Don’t hesitate to give us a call at +1-415-997-9967, or send us an email at support@objectivefs.com.