User Guide

This guide is for release 5.0 and newer. For older releases, please refer to the 4.3.4 User Guide.

Overview

It’s quick and easy to get started with your ObjectiveFS file system. It takes just three steps to get your new file system up and running (see Get Started).

ObjectiveFS runs on your Linux and macOS machines, and implements a log-structured file system with a cloud object store backend, such as AWS S3, Google Cloud Storage (GCS) or your on-premise object store. Your data is encrypted before leaving your machine, and stays encrypted until it returns to your machine.

This user guide covers the commands and options supported by ObjectiveFS. For an overview of the commands, refer to the Command Overview section. For detailed description and usage of each command, refer to the Reference section.

Commands

Config
Sets up the required environment variables to run ObjectiveFS (details)
sudo mount.objectivefs config [<directory>]
Create
Creates a new file system (details)
sudo mount.objectivefs create [-l <region>] <filesystem>
List
Lists your file systems, snapshots and buckets (details)
sudo mount.objectivefs list [-asz] [<filesystem>[@<time>]]
Mount
Mounts your file system on your Linux or macOS machines (details)
Run in background:
sudo mount.objectivefs [-o <opt>[,<opt>]..] <filesystem> <dir>
Run in foreground:
sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <filesystem> <dir>
Unmount
Unmounts your file system on your Linux or macOS machines (details)
sudo umount <dir>
Destroy
Destroys your file system (details)
sudo mount.objectivefs destroy <filesystem>

Reference

This section covers the detailed description and usage for each ObjectiveFS command.

Config

SUMMARY: Sets up the required environment variables to run ObjectiveFS

USAGE:

sudo mount.objectivefs config [<directory>]

DESCRIPTION:
Config is a one-time operation that sets up the required credentials, such as object store keys and your license, as environment variables in a directory. You can also optionally set your default region.

<directory>
Directory in which to store your environment variables. This directory should not already exist; it will be created.
Default: /etc/objectivefs.env

WHAT YOU NEED:

  • Your ObjectiveFS license key and your object store (AWS or GCS) access keys.

DETAILS:

Config sets up the following environment variables in /etc/objectivefs.env (if no argument is provided) or in the directory specified.

  • AWS_ACCESS_KEY_ID
    Your object store access key
  • AWS_SECRET_ACCESS_KEY
    Your object store secret key
  • OBJECTIVEFS_LICENSE
    Your ObjectiveFS license key
  • AWS_DEFAULT_REGION (optional)
    This optional field can be an AWS or GCS region or the S3-compatible endpoint (e.g. http://<object store>) for your on-premise object store

EXAMPLES:
A. Default setup: your environment variables will be created in /etc/objectivefs.env

$ sudo mount.objectivefs config
Creating config in /etc/objectivefs.env
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Access Key Id: <your AWS or GCS access key>
Enter Secret Access Key: <your AWS or GCS secret key>
Enter Default Region (optional): <your preferred region>

B. User-specified destination directory

$ sudo mount.objectivefs config /home/ubuntu/.ofs.env
Creating config in /home/ubuntu/.ofs.env
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Access Key Id: <your AWS or GCS access key>
Enter Secret Access Key: <your AWS or GCS secret key>
Enter Default Region (optional): <your preferred region>

TIPS:

  • To make changes to your keys or default region, edit the files in the environment directory directly (e.g. /etc/objectivefs.env/AWS_ACCESS_KEY_ID).
  • If you don’t want to use the variables in /etc/objectivefs.env, you can also set the environment variables directly on the command line. See environment variables on command line.
  • You can also manually create the environment variables directory without using the config command. You will need to set up AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and OBJECTIVEFS_LICENSE in the environment directory. See environment variables section for details.
  • If you have an attached AWS EC2 IAM role to your EC2 instance, you can automatically rekey with IAM roles (see live rekeying) and don’t need to have the AWS_SECRET_ACCESS_KEY or AWS_ACCESS_KEY_ID environment variables.
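The manual setup described in the tips above can be sketched as follows; the key values are placeholders, and a demo directory stands in for /etc/objectivefs.env:

```shell
# Create an environment directory by hand: one file per variable,
# file name = variable name, first line = value. Values are placeholders.
ENVDIR="${ENVDIR:-./objectivefs.env.demo}"
mkdir -p "$ENVDIR"
chmod 700 "$ENVDIR"
printf '%s\n' 'your-access-key-id'       > "$ENVDIR/AWS_ACCESS_KEY_ID"
printf '%s\n' 'your-secret-access-key'   > "$ENVDIR/AWS_SECRET_ACCESS_KEY"
printf '%s\n' 'your-objectivefs-license' > "$ENVDIR/OBJECTIVEFS_LICENSE"
chmod 600 "$ENVDIR"/*
ls "$ENVDIR"
```

On a real system, run the same steps as root with ENVDIR set to /etc/objectivefs.env, and keep the directory permissions restricted to root.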

Create

SUMMARY: Creates a new file system

USAGE:

sudo mount.objectivefs create [-l <region>] <filesystem>

DESCRIPTION:
This command creates a new file system in your S3, GCS or on-premise object store. You need to provide a passphrase for the new file system. Please choose a strong passphrase, write it down and store it somewhere safe.
IMPORTANT: Without the passphrase, there is no way to recover any files.

<filesystem>
A globally unique, non-secret file system name. (Required)
The filesystem name maps to a new object store bucket, and S3/GCS require bucket names to be globally unique.
For S3, you can optionally add the “s3://” prefix, e.g. s3://myfs.
For GCS, you can optionally add the “gs://” prefix, e.g. gs://myfs.
For on-premise object store, you can also specify an endpoint directly with the “http://” prefix, e.g. http://s3.example.com/foo
-l <region>
The region to store your file system in. (see region list)
Default: The region specified by your AWS_DEFAULT_REGION environment variable (if set). Otherwise, S3’s default is us-west-2 and GCS’s default is based on your server’s location (us, eu or asia).

WHAT YOU NEED:

  • Your ObjectiveFS environment directory is set up (see Config section).

DETAILS:
This command creates a new filesystem in your object store. You can specify the region to create your filesystem by using the -l <region> option or by setting the AWS_DEFAULT_REGION environment variable.

ObjectiveFS also supports creating multiple file systems per bucket. Please refer to the Filesystem Pool section for details.

EXAMPLES:
A. Create a file system in the default region

$ sudo mount.objectivefs create myfs
Passphrase (for s3://myfs): <your passphrase>
Verify passphrase (for s3://myfs): <your passphrase>

B. Create an S3 file system in a user-specified region (e.g. eu-central-1)

$ sudo mount.objectivefs create -l eu-central-1 s3://myfs
Passphrase (for s3://myfs): <your passphrase>
Verify passphrase (for s3://myfs): <your passphrase>

C. Create a GCS file system in a user-specified region (e.g. us)

$ sudo mount.objectivefs create -l us gs://myfs
Passphrase (for gs://myfs): <your passphrase>
Verify passphrase (for gs://myfs): <your passphrase>

TIPS:

  • You can store your filesystem passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE. Please verify this file’s permission is restricted to root only.
  • To run with a different ObjectiveFS environment directory, e.g. /home/ubuntu/.ofs.env:
    $ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs create myfs
  • To create a filesystem without manually entering your passphrase (e.g. for scripting filesystem creation), you can use the admin mode and store your filesystem passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE.

List

SUMMARY: Lists your file systems, snapshots and buckets

USAGE:

sudo mount.objectivefs list [-asz] [<filesystem>[@<time>]]

DESCRIPTION:
This command lists your file systems, snapshots or buckets in your object store. The output includes the file system name, filesystem kind (regular filesystem or pool), snapshot type or status (automatic, checkpoint, or enabled), region and location.

-a
List all buckets in your object store, including non-ObjectiveFS buckets.
-s
Enable listing of snapshots.
-z
Use UTC for snapshot timestamps.
<filesystem>
The filesystem name to list. If the filesystem doesn’t exist, nothing will be returned.
<filesystem>@<time>
The snapshot to list. The time specified can be in UTC (needs -z) or local time, in the ISO8601 format (e.g. 2016-12-31T15:40:00).
If a time prefix is given, all snapshots matching that prefix are listed.
default
All ObjectiveFS file systems are listed.
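The time-prefix matching behaves like a simple string-prefix filter; a rough illustration over example snapshot names:

```shell
# Snapshots whose name starts with the given prefix are listed
# (names are examples; prints the two T11 snapshots only).
printf '%s\n' \
  'myfs@2017-01-10T11:10:00' \
  'myfs@2017-01-10T11:17:00' \
  'myfs@2017-01-10T12:20:00' | grep '^myfs@2017-01-10T11'
```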

WHAT YOU NEED:

  • Your ObjectiveFS environment directory is set up (see Config section).

DETAILS:

The list command has several options to list your filesystems, snapshots and buckets in your object store. By default, it lists all your ObjectiveFS filesystems and pools. It can also list all buckets, including non-ObjectiveFS buckets, with the -a option. To list only a specific filesystem or filesystem pool, you can provide the filesystem name. For a description of snapshot listing, see Snapshots section.

The output of the list command shows the filesystem name, filesystem kind, snapshot type, region and location.

Example filesystem list output:

NAME                KIND  SNAP REGION        LOCATION
s3://myfs-1         ofs   -    eu-central-1  EU (Frankfurt)
s3://myfs-2         ofs   -    us-west-2     US West (Oregon)
s3://myfs-pool      pool  -    us-east-1     US East (N. Virginia)
s3://myfs-pool/fsa  ofs   -    us-east-1     US East (N. Virginia)

Example snapshot list output:

NAME                           KIND  SNAP    REGION     LOCATION
s3://myfs@2017-01-10T11:10:00  ofs   auto    eu-west-2  EU (London)
s3://myfs@2017-01-10T11:17:00  ofs   manual  eu-west-2  EU (London)
s3://myfs@2017-01-10T11:20:00  ofs   auto    eu-west-2  EU (London)
s3://myfs                      ofs   on      eu-west-2  EU (London)

FILESYSTEM KIND  DESCRIPTION
ofs              ObjectiveFS filesystem
pool             ObjectiveFS filesystem pool
-                Non-ObjectiveFS bucket
?                Error while querying the bucket
access           No permission to access the bucket

SNAPSHOT TYPE  APPLICABLE FOR  DESCRIPTION
auto           snapshot        Automatic snapshot
manual         snapshot        Checkpoint (or manual) snapshot
on             filesystem      Snapshots are activated on this filesystem
-              filesystem      Snapshots are not activated

EXAMPLES:
A. List all ObjectiveFS file systems.

$ sudo mount.objectivefs list
NAME                KIND  SNAP REGION        LOCATION
s3://myfs-1         ofs   -    eu-central-1  EU (Frankfurt)
s3://myfs-2         ofs   on   us-west-2     US West (Oregon)
s3://myfs-3         ofs   on   eu-west-2     EU (London)
s3://myfs-pool      pool  -    us-east-1     US East (N. Virginia)
s3://myfs-pool/fsa  ofs   -    us-east-1     US East (N. Virginia)
s3://myfs-pool/fsb  ofs   -    us-east-1     US East (N. Virginia)
s3://myfs-pool/fsc  ofs   -    us-east-1     US East (N. Virginia)

B. List a specific file system, e.g. s3://myfs-3

$ sudo mount.objectivefs list s3://myfs-3
NAME                KIND  SNAP REGION        LOCATION
s3://myfs-3         ofs   on   eu-west-2     EU (London)

C. List everything, including non-ObjectiveFS buckets. In this example, my-bucket is a non-ObjectiveFS bucket.

$ sudo mount.objectivefs list -a
NAME                KIND  SNAP REGION  LOCATION
gs://my-bucket      -     -    EU      European Union
gs://myfs-a         ofs   -    US      United States
gs://myfs-b         ofs   on   EU      European Union
gs://myfs-c         ofs   -    ASIA    Asia Pacific

D. List snapshots for myfs that match 2017-01-10T11

$ sudo mount.objectivefs list -s myfs@2017-01-10T11
NAME                           KIND  SNAP    REGION     LOCATION
s3://myfs@2017-01-10T11:10:00  ofs   auto    eu-west-2  EU (London)
s3://myfs@2017-01-10T11:17:00  ofs   manual  eu-west-2  EU (London)
s3://myfs@2017-01-10T11:20:00  ofs   auto    eu-west-2  EU (London)
s3://myfs@2017-01-10T11:30:00  ofs   auto    eu-west-2  EU (London)

E. List snapshots for myfs that match 2017-01-10T12 in UTC

$ sudo mount.objectivefs list -sz myfs@2017-01-10T12
NAME                            KIND SNAP    REGION     LOCATION
s3://myfs@2017-01-10T12:10:00Z  ofs  auto    eu-west-2  EU (London)
s3://myfs@2017-01-10T12:17:00Z  ofs  manual  eu-west-2  EU (London)
s3://myfs@2017-01-10T12:20:00Z  ofs  auto    eu-west-2  EU (London)
s3://myfs@2017-01-10T12:30:00Z  ofs  auto    eu-west-2  EU (London)

TIPS:

  • You can list partial snapshots by providing <filesystem>@<time prefix>, e.g. myfs@2017-01-10T12.
  • To run with a different ObjectiveFS environment directory, e.g. /home/ubuntu/.ofs.env:
    $ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs list

Mount

SUMMARY: Mounts your file system on your Linux or macOS machines.

USAGE:
Run in background:

sudo mount.objectivefs [-o <opt>[,<opt>]..] <filesystem> <dir>

Run in foreground:

sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <filesystem> <dir>

DESCRIPTION:
This command mounts your file system on a directory on your Linux or macOS machine. After the file system is mounted, you can read and write to it just like a local disk.

You can mount the same file system on as many Linux or macOS machines as you need. Your license automatically scales if you need more mounts; you are not limited to the number of licenses included in your plan.

NOTE: The mount command needs to run as root. It runs in the foreground if “mount” is provided, and runs in the background otherwise.

<filesystem>
A globally unique, non-secret file system name. (Required)
For S3, you can optionally add the “s3://” prefix, e.g. s3://myfs.
For GCS, you can optionally add the “gs://” prefix, e.g. gs://myfs.
For on-premise object store, you can also specify an endpoint directly with the “http://” prefix, e.g. http://s3.example.com/foo.
The filesystem can end with @<timestamp> for mounting snapshots.
<dir>
Directory (full path name) on your machine to mount your file system. (Required)
This directory should be an existing empty directory.
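Preparing the mount point can be sketched as below; ./ofs-demo is a stand-in for a real mount directory such as /ofs:

```shell
# Create the mount directory if needed and check that it is empty.
DIR="${DIR:-./ofs-demo}"
mkdir -p "$DIR"
if [ -n "$(ls -A "$DIR")" ]; then
    echo "$DIR is not empty; use another directory or the -o nonempty option"
else
    echo "$DIR is ready"
fi
```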

General Mount Options

-o env=<dir>
Load environment variables from directory <dir>. See environment variable section.

Mount Point Options

-o dev | nodev
Allow block and character devices. (Default: Allow)
-o diratime | nodiratime
Update / Don’t update directory access time. (Default: Update)
-o exec | noexec
Allow binary execution. (Default: Allow)
-o export | noexport
Enable / disable restart support for NFS or Samba exports. (Default: Disable)
-o nonempty
Allow mounting on non-empty directory (Default: Disabled)
-o ro | rw
Read-only / Read-write file system. (Default: Read-write)
-o strictatime | relatime | noatime
Update / Smart update / Don’t update access time. (Default: Smart update)
-o suid | nosuid
Allow / Disallow suid bits. (Default: Allow)

File System Mount Options

-o clean | noclean
Enable / disable storage cleaner. (Default: Disable)
-o compact[=<level>] | nocompact
Set level for / disable background index compaction. (Default: Enable) (details in Compaction section)
-o hpc | nohpc
Enable / disable high performance computing mode. Enabling hpc assumes the server is running in the same data center as the object store. (Default: detect automatically)
-o mt | nomt | cputhreads=<N> | iothreads=<N>
Enable / disable / configure multithreading options. (Default: 2 dedicated IO threads) (details in Multithreading section)
-o oom | nooom
Linux only. Enable / disable oom protection to reduce the likelihood of being selected by the oom killer. To be exempt from the oom killer (use with care), you can specify oom twice. (Default: Enable)
-o ratelimit | noratelimit
Enable / disable the built-in request rate-limiter, which is designed to prevent runaway programs from running up your S3 bill. (Default: Enable)
-o snapshots | nosnapshots
Enable / disable generation of automatic snapshots from this mount point. (Default: Enable)

WHAT YOU NEED:

  • Your ObjectiveFS environment directory is set up (see Config section).
  • The file system name (see Create section for creating a file system)
  • An empty directory to mount your file system

EXAMPLES:
Assumptions:
1. /ofs is an existing empty directory
2. Your passphrase is in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE

A. Mount an S3 file system in the foreground

$ sudo mount.objectivefs mount myfs /ofs

B. Mount a GCS file system with a different env directory, e.g. /home/ubuntu/.ofs.env

$ sudo mount.objectivefs mount -o env=/home/ubuntu/.ofs.env gs://myfs /ofs

C. Mount an S3 file system in the background

$ sudo mount.objectivefs s3://myfs /ofs

D. Mount a GCS file system with non-default options
Assumption: /etc/objectivefs.env contains GCS keys.

$ sudo mount.objectivefs -o nosuid,nodev,noexec,noatime gs://myfs /ofs

TIPS:

  • Control-C on a foreground mount will stop the file system and try to unmount it. To properly unmount the filesystem, use umount.
  • You can mount your filesystem without needing to manually enter your passphrase by storing your passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE. Please verify that your passphrase file’s permission is the same as other files in the environment directory (e.g. OBJECTIVEFS_LICENSE).
  • If your machine sleeps while the file system is mounted, the file system will remain mounted and working after it wakes up, even if your network has changed. This is useful for a laptop that is used in multiple locations.
  • To run with another ObjectiveFS environment directory, e.g. /home/ubuntu/.ofs.env:
$ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs mount myfs /ofs

Mount on Boot

ObjectiveFS supports Mount on Boot, where your directory is mounted automatically upon reboot.

WHAT YOU NEED:

A. Linux

  1. Check that /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE exists so you can mount your filesystem without needing to enter the passphrase.
    If it doesn’t exist, create the file with your passphrase as the content. (see details)

  2. Add a line to /etc/fstab with:

    <filesystem> <mount dir> objectivefs auto,_netdev[,<opts>]  0  0 
    _netdev is used by many Linux distributions to mark the file system as a network file system.

  3. For more details, see Mount on Boot Setup Guide for Linux.
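Step 2 can be sketched as follows; s3://myfs and /ofs are example values, and a demo file stands in for /etc/fstab:

```shell
# Append a mount-on-boot entry to fstab.
# FSTAB defaults to a demo file here; on a real system it would be /etc/fstab.
FSTAB="${FSTAB:-./fstab.demo}"
echo 's3://myfs /ofs objectivefs auto,_netdev 0 0' >> "$FSTAB"
grep objectivefs "$FSTAB"
```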

B. macOS

macOS can use launchd to mount on boot. See Mount on Boot Setup Guide for macOS for details.


Unmount

SUMMARY: Unmounts your files system on your Linux or macOS machines

USAGE:

sudo umount <dir>

DESCRIPTION:
To unmount your filesystem, run umount with the mount directory. Typing Control-C in the window where ObjectiveFS is running in the foreground will stop the filesystem, but may not unmount it. Please run umount to properly unmount the filesystem.

WHAT YOU NEED:

  • No process is accessing any file or directory in the file system

EXAMPLE:

$ sudo umount /ofs

Destroy

IMPORTANT: After a destroy, there is no way to recover any files or data.

SUMMARY: Destroys your file system. This is an irreversible operation.

USAGE:

sudo mount.objectivefs destroy <filesystem>

DESCRIPTION:
This command deletes your file system from your object store. Please make sure that you really don’t need your data anymore because this operation cannot be undone.

You will be prompted for the authorization code available on your user profile page. This code changes periodically. Please refresh your user profile page to get the latest code.

<filesystem>
The file system that you want to destroy. (Required)

NOTE:
Your file system should be unmounted from all of your machines before running destroy.

WHAT YOU NEED:

  • Your ObjectiveFS environment directory is set up (see Config section).
  • Your authorization code from your user profile page. (Note: the authorization code changes periodically. To get the latest code, please refresh your user profile page.)

EXAMPLE:

$ sudo mount.objectivefs destroy s3://myfs  
*** WARNING ***
The filesystem 's3://myfs' will be destroyed. All data (550 MB) will be lost permanently!
Continue [y/n]? y
Authorization code: <your authorization code>

Settings

This section covers the options you can run ObjectiveFS with.

Regions

ObjectiveFS supports all S3 and GCS regions. The currently available regions are listed below.

AWS S3

  • us-east-1
  • us-east-2
  • us-west-1
  • us-west-2 [default for S3 if no default region is specified]
  • ca-central-1
  • eu-central-1
  • eu-west-1
  • eu-west-2
  • ap-south-1
  • ap-southeast-1
  • ap-southeast-2
  • ap-northeast-1
  • ap-northeast-2
  • sa-east-1
  • us-gov-west-1 [requires AWS GovCloud account]

GCS

Multi-regions:

  • us
  • eu
  • asia

Sub-regions:

  • us-central1
  • us-east1
  • us-west1
  • europe-west1
  • asia-east1
  • asia-northeast1

REFERENCE:
AWS S3 regions, GCS regions


Endpoints

The table below lists the corresponding endpoints for the supported regions.

AWS S3

REGION          ENDPOINT
us-east-1       s3-external-1.amazonaws.com
us-east-2       s3-us-east-2.amazonaws.com
us-west-1       s3-us-west-1.amazonaws.com
us-west-2       s3-us-west-2.amazonaws.com
ca-central-1    s3-ca-central-1.amazonaws.com
eu-central-1    s3-eu-central-1.amazonaws.com
eu-west-1       s3-eu-west-1.amazonaws.com
eu-west-2       s3-eu-west-2.amazonaws.com
ap-south-1      s3-ap-south-1.amazonaws.com
ap-southeast-1  s3-ap-southeast-1.amazonaws.com
ap-southeast-2  s3-ap-southeast-2.amazonaws.com
ap-northeast-1  s3-ap-northeast-1.amazonaws.com
ap-northeast-2  s3-ap-northeast-2.amazonaws.com
sa-east-1       s3-sa-east-1.amazonaws.com
us-gov-west-1   s3-us-gov-west-1.amazonaws.com
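The endpoints above follow a simple naming pattern, with us-east-1 as the one exception; a small sketch:

```shell
# Endpoint pattern: s3-<region>.amazonaws.com, except us-east-1,
# which uses s3-external-1.amazonaws.com. Region value is an example.
region="eu-west-2"
if [ "$region" = "us-east-1" ]; then
    echo "s3-external-1.amazonaws.com"
else
    echo "s3-${region}.amazonaws.com"
fi
```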

GCS
All GCS regions have the endpoint storage.googleapis.com


Environment Variables

ObjectiveFS uses environment variables for configuration. You can set them using any standard method (e.g. on the command line, in your shell). We also support reading environment variables from a directory.

The filesystem settings specified by the environment variables are set at start up. To update the settings (e.g. change the memory cache size, enable disk cache), please unmount your filesystem and remount it with the new settings (exception: manual rekeying).

A. Environment Variables from Directory
ObjectiveFS supports reading environment variables from files in a directory, similar to the envdir tool from the daemontools package.

Your environment variables are stored in a directory. Each file in the directory corresponds to an environment variable, where the file name is the environment variable name and the first line of the file content is the value.

SETUP:
The Config command sets up your environment directory with three main environment variables:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • OBJECTIVEFS_LICENSE

You can also add additional environment variables in the same directory using the same format: the file name is the environment variable name, and the first line of the file content is the value.

EXAMPLE:

$ ls /etc/objectivefs.env/
AWS_ACCESS_KEY_ID  AWS_SECRET_ACCESS_KEY  OBJECTIVEFS_LICENSE  OBJECTIVEFS_PASSPHRASE
$ cat /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE
your_objectivefs_passphrase
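A minimal sketch of the first-line convention; MY_VAR and its value are made up:

```shell
# Only the first line of each file is used as the variable's value.
dir="$(mktemp -d)"
printf 'secret-value\nthis second line is ignored\n' > "$dir/MY_VAR"
MY_VAR="$(head -n 1 "$dir/MY_VAR")"
echo "$MY_VAR"   # secret-value
```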

B. Environment Variables on Command Line
You can also set the environment variables on the command line. The user-provided environment variables will override the environment directory’s variables.

USAGE:

sudo [<ENV VAR>='<value>'] mount.objectivefs 

EXAMPLE:

$ sudo CACHESIZE=30% mount.objectivefs myfs /ofs

SUPPORTED ENVIRONMENT VARIABLES

To enable a feature, set the corresponding environment variable and remount your filesystem (exception: manual rekeying).

ACCOUNT
Username of the user to run as. Root privileges are dropped after startup.
AWS_ACCESS_KEY_ID
Your object store access key. (Required or use AWS_METADATA_HOST)
AWS_DEFAULT_REGION
The default object store region to connect to.
AWS_METADATA_HOST
AWS STS host publishing session keys (for EC2 set to “169.254.169.254”). Sets and rekeys AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SECURITY_TOKEN.
AWS_SECRET_ACCESS_KEY
Your secret object store key. (Required or use AWS_METADATA_HOST)
AWS_SECURITY_TOKEN
Session security token when using AWS STS.
AWS_SERVER_SIDE_ENCRYPTION
Server-side encryption with AWS KMS support. (Enterprise plan feature) (see Server-side Encryption section)
AWS_TRANSFER_ACCELERATION
Set to 1 to use the AWS S3 acceleration endpoint. (Enterprise plan feature) (see S3 Transfer Acceleration section)
CACHESIZE
Set cache size as a percentage of memory (e.g. 30%) or an absolute value (e.g. 500M or 1G). (Default: 20%) (see Memory Cache section)
DISKCACHE_SIZE
Enable and set disk cache size and optional free disk size. (see Disk Cache section)
DISKCACHE_PATH
Location of disk cache when disk cache is enabled. (see Disk Cache section)
DNSCACHEIP
IP address of recursive name resolver. (Default: use /etc/resolv.conf)
http_proxy
HTTP proxy server address. (see HTTP Proxy section)
IDMAP
User ID and Group ID mapping. (see UID/GID Mapping section)
OBJECTIVEFS_ENV
Directory to read environment variables from. (Default: /etc/objectivefs.env)
OBJECTIVEFS_LICENSE
Your ObjectiveFS license key. (Required)
OBJECTIVEFS_PASSPHRASE
Passphrase for your filesystem. (Default: will prompt)
STUNNEL
Tunnel proxy support for object stores that do not work with proxied requests. Set this variable to 1 if your proxy has a fixed single destination. (v. 5.1.1 or newer)

Features

Memory Cache

DESCRIPTION:
ObjectiveFS uses memory to cache data and metadata locally to improve performance and to reduce the number of S3 operations.

USAGE:
Set the CACHESIZE environment variable to one of the following:

  • the actual memory size (e.g. 500M or 2G)
  • as a percentage of memory (e.g. 30%)

A minimum of 64MB will be used for CACHESIZE.

DEFAULT VALUE:
If CACHESIZE is not specified, the default is 20% of memory for machines with 3GB+ memory or 5%-20% (with a minimum of 64MB) for machines with less than 3GB memory.

DETAILS:

The cache size is applicable per mount. If you have multiple ObjectiveFS file systems on the same machine, the total cache memory used by ObjectiveFS will be the sum of the CACHESIZE values for all mounts.

The memory cache is one component of the ObjectiveFS memory usage. The total memory used by ObjectiveFS is the sum of:
a. memory cache usage (set by CACHESIZE)
b. index memory usage (based on the number of S3 objects/filesystem size), and
c. kernel memory usage

Caching statistics, such as the cache hit rate and kernel memory usage, are sent to the log. The memory cache setting is also sent to the log when the file system is mounted.

EXAMPLES:
A. Set memory cache size to 30%

$ sudo CACHESIZE=30% mount.objectivefs myfs /ofs

B. Set memory cache size to 2GB

$ sudo CACHESIZE=2G mount.objectivefs myfs /ofs


Disk Cache

DESCRIPTION:
ObjectiveFS can use local disks to cache data and metadata locally to improve performance and to reduce the number of S3 operations. Once the disk cache is enabled, ObjectiveFS handles the operations automatically, with no additional maintenance required from the user.

The disk cache is compressed, encrypted and has strong integrity checks. It is robust: it can be copied between machines, and even manually deleted, while in active use. So, you can rsync the disk cache between machines to warm the cache or to update the content.

Since the disk cache’s content persists when your file system is unmounted, you get the benefit of fast restart and fast access when you remount your file system.

ObjectiveFS always keeps some free space on the disk by periodically checking the free disk space. If your other applications use more disk space, ObjectiveFS adjusts by shrinking its cache.

Multiple file systems on the same machine can share the same disk cache without crosstalk, and they will collaborate to keep the most used data in the disk cache.

RECOMMENDATION:
We recommend enabling the disk cache when a local SSD or hard drive is available. For EC2 instances, we recommend using the local SSD instance store instead of EBS, because EBS volumes may run into operation limits depending on the volume size. (See how to mount an instance store on EC2 for disk cache).

USAGE:
The disk cache uses DISKCACHE_SIZE and DISKCACHE_PATH environment variables (see environment variables section for how to set environment variables). To enable disk cache, set DISKCACHE_SIZE.

DISKCACHE_SIZE:

  • Accepts values in the form <DISK CACHE SIZE>[:<FREE SPACE>].
  • <DISK CACHE SIZE>:

    • Set to the actual space you want ObjectiveFS to use (e.g. 20G or 1T).
    • If this value is larger than your actual disk (e.g. 1P), ObjectiveFS will try to use as much space as possible on the disk while preserving the free space.
  • <FREE SPACE> (optional):

    • Set to the amount of free space you want to keep on the volume (e.g. 5G).
    • The default value is 3G.
    • When it is set to 0G, ObjectiveFS will try to use as much space as possible (useful for dedicated disk cache partition).
  • The free space value takes precedence over the disk cache size. The actual disk cache size is the smaller of DISKCACHE_SIZE and (total disk space - FREE_SPACE).
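The precedence rule can be sketched with illustrative numbers (all sizes in GB):

```shell
# Effective cache = min(DISKCACHE_SIZE, total disk space - FREE_SPACE).
disk_total=100      # total disk space, illustrative
cache_size=80       # requested DISKCACHE_SIZE
free_space=30       # requested free space
avail=$((disk_total - free_space))
effective=$(( cache_size < avail ? cache_size : avail ))
echo "effective cache: ${effective}G"   # effective cache: 70G
```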

DISKCACHE_PATH

  • Specifies the location of the disk cache.
  • Default location:
    macOS: /Library/Caches/ObjectiveFS
    Linux: /var/cache/objectivefs

DEFAULT VALUE:
Disk cache is disabled when DISKCACHE_SIZE is not specified.

EXAMPLES:
A. Set disk cache size to 20GB and use default free space (3GB)

$ sudo DISKCACHE_SIZE=20G mount.objectivefs myfs /ofs

B. Use as much space as possible for disk cache and keep 10GB free space

$ sudo DISKCACHE_SIZE=1P:10G mount.objectivefs myfs /ofs

C. Set disk cache size to 20GB and free space to 10GB and specify the disk cache path

$ sudo DISKCACHE_SIZE=20G:10G DISKCACHE_PATH=/var/cache/mydiskcache mount.objectivefs myfs /ofs

D. Use the entire space for disk cache (when using dedicated volume for disk cache)

$ sudo DISKCACHE_SIZE=1P:0G mount.objectivefs myfs /ofs

WHAT YOU NEED:

  • Local instance store or SSD mounted on your DISKCACHE_PATH (default: /var/cache/objectivefs)

TIPS:

  • Different file systems on the same machine can point to the same disk cache. They can also point to different locations by setting different DISKCACHE_PATH.
  • The DISKCACHE_SIZE is per disk cache directory. If multiple file systems are mounted concurrently using different DISKCACHE_SIZE values and point to the same DISKCACHE_PATH, ObjectiveFS will use the minimum disk cache size and the maximum free space value.
  • A background disk cache clean-up will keep the disk cache size within the specified limits by removing the oldest data.
  • One way to warm a new disk cache is to rsync the content from an existing disk cache.


Snapshots

ObjectiveFS supports automatic built-in snapshots and checkpoint snapshots. Automatic snapshots are managed by ObjectiveFS and are taken on a snapshot schedule while your filesystem is mounted. Checkpoint snapshots can be taken at any time, and the filesystem doesn’t need to be mounted. Snapshots can be mounted as a read-only filesystem to access your filesystem data as it was at that point in time. There is no limit to the number of filesystem snapshots that you can create and use.

Snapshots are not backups since they are part of the same filesystem and are stored in the same bucket. A backup is an independent copy of your data stored in a different location.

Snapshots can be useful for backups. To create a consistent point-in-time backup, you can mount a recent snapshot and use it as the source for backup, instead of running backup from your live filesystem. This way, your data will not change while the backup is in progress.

Activate Snapshots

Snapshots are activated on a filesystem upon the first mount with an ObjectiveFS version with snapshots (v.5.0 or newer) and no special mount options are needed. Upon activation, ObjectiveFS will perform a one-time identification of existing snapshots and activate them if available. A log message will be generated when snapshots have been activated on your filesystem. You can also use the list command to identify the filesystems with activated snapshots.

NOTE: Even though many snapshots are generated, the storage used for each snapshot is incremental. If two snapshots contain the same data, no additional storage is used for the second snapshot. If two snapshots differ, only the difference is stored, not a full new copy.

Create Snapshots

A. Create Automatic Snapshots
After initial activation, snapshots are automatically taken only when your filesystem is mounted. When your filesystem is not mounted, automatic snapshots will not be taken since there are no changes to the filesystem. Automatic snapshots are taken and retained based on the snapshot schedule in the table below. Older automatic snapshots are automatically removed to maintain the number of snapshots per interval.

Snapshot Interval    Number of Snapshots
  10-minute          72
  hourly             72
  daily              32
  weekly             16

RELEVANT INFO:

  • If there are multiple mounts of the same filesystem, only one snapshot will be generated at a given scheduled time.
  • Automatic snapshot generation is on by default and can be disabled with the nosnapshots mount option.

B. Create Checkpoint Snapshots (Business and Enterprise Plan Feature)

sudo mount.objectivefs snapshot <filesystem>

After initial activation, checkpoint (i.e. manual point-in-time) snapshots can be taken at any time, even if your filesystem is not mounted. There is no limit to the number of checkpoint snapshots you can take. Checkpoint snapshots are kept until they are explicitly removed by the destroy command.

Checkpoint snapshots are useful for creating a snapshot right before making large changes to the filesystem. They can also be useful if you need snapshots at a specific time of the day or for compliance reasons.
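For instance, to take a checkpoint snapshot of a hypothetical filesystem myfs right before a bulk change:

```shell
# Take a point-in-time checkpoint before making large changes
$ sudo mount.objectivefs snapshot myfs
# ...perform the bulk change on the mounted filesystem...
# The checkpoint remains available until destroyed explicitly
$ sudo mount.objectivefs list -s myfs
```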

List Snapshots

DESCRIPTION:

You can list one or more snapshots for your filesystem using the list command. Snapshots have the format <filesystem>@<time>, and are by default listed in local time, e.g. s3://myfs@2016-12-31T15:40:00. They can also be listed in UTC with the -z option.

The list command shows both automatic and checkpoint snapshots in your object stores. You can use it to list all snapshots available on your filesystem, or to list the snapshots matching a specific time prefix.

USAGE:

sudo mount.objectivefs list -s[z] [<filesystem>[@<time>]]

This command uses the -s option to enable listing of snapshots. You can restrict the listing to a single filesystem by providing the filesystem name, and filter snapshots by a time prefix, e.g. myfs@2016-11 lists all snapshots from November 2016.

For more details on these options and the output format, see the list command.

EXAMPLES:

A. List all snapshots for myfs

$ sudo mount.objectivefs list -s myfs

B. List all snapshots for myfs that match 2017-01-10T11

$ sudo mount.objectivefs list -s myfs@2017-01-10T11

C. List all the snapshots for myfs that match 2017-01-10T12:30 in UTC

$ sudo mount.objectivefs list -sz myfs@2017-01-10T12:30


Mount Snapshots

DESCRIPTION:

Snapshots can be mounted to access the filesystem as it was at that point in time. When a snapshot is mounted, it is accessible as a read-only filesystem.

You can mount both automatic and checkpoint snapshots. When an automatic snapshot is mounted, a checkpoint snapshot with the same timestamp is created so that the snapshot is not automatically removed if its retention schedule expires while it is mounted. These checkpoint snapshots, when created for data recovery purposes only, are also included in the Startup Plan.

Snapshots can be mounted using the same AWS keys used for mounting the filesystem. If you choose to use a different key, you only need read permission to mount a checkpoint snapshot, but you need read and write permissions to mount an automatic snapshot.

A snapshot mount is a regular mount and will be counted as an ObjectiveFS instance while it is mounted.

USAGE:

sudo mount.objectivefs [-o <options>] <filesystem>@<time> <dir>
<filesystem>@<time>
The snapshot for the filesystem at a particular time. The time can be specified as local time or UTC in the ISO8601 format. You can use the list snapshots command to get the list of available snapshots for your filesystem.
<dir>
Directory (full path name) to mount your file system snapshot. (Required)
This directory should be an existing empty directory.
<options>
You can also use the same mount options as mounting your filesystem (some of them will have no effect since it is a read-only filesystem).

EXAMPLES:

A. Mount a snapshot specified in local time

$ sudo mount.objectivefs mount myfs@2017-01-10T11:10:00 /ofs

B. Mount a snapshot specified in UTC

$ sudo mount.objectivefs mount myfs@2017-01-10T12:30:00Z /ofs

C. Mount a snapshot with multithreading enabled

$ sudo mount.objectivefs mount -o mt myfs@2017-01-10T11:10:00 /ofs

TIPS:

  • Snapshots and Disk Cache: mounting snapshots with the same disk cache as your filesystem is safe. It can also improve performance by getting common data from the disk cache.
  • Backups: to create a consistent point-in-time backup, mount a recent snapshot and use it as the source for backup. Unlike using a live filesystem for backup, your data will not change while the backup is in progress.

Unmount Snapshots

Same as the regular unmount command.

Destroy Snapshots

To destroy a snapshot, use the regular destroy command with <filesystem>@<time>. Time should be specified in ISO8601 format (e.g. 2017-01-10T10:10:00) and can be either local time or UTC. Both automatic and checkpoint snapshots matching the timestamp will be removed.

sudo mount.objectivefs destroy <filesystem>@<time>

EXAMPLE:
Destroy a snapshot specified in local time

$ sudo mount.objectivefs destroy s3://myfs@2017-01-10T11:10:00  
*** WARNING ***
The snapshot 's3://myfs@2017-01-10T11:10:00' will be destroyed. No other changes will be done to the filesystem.
Continue [y/n]? y


Recovering Files from Snapshots

If you need to recover a file from a snapshot, you can use the following steps:

  1. Identify the snapshot you want to recover from using list snapshots.

    $ sudo mount.objectivefs list -s myfs@2017-01-10

  2. Mount the snapshot on an empty directory, e.g. /ofs-snap.

    $ sudo mount.objectivefs myfs@2017-01-10T11:10:00 /ofs-snap

  3. Mount your filesystem on another empty directory, e.g. /ofs.

    $ sudo mount.objectivefs mount myfs /ofs

  4. Verify it is the right file to restore, then copy the file from the snapshot to your filesystem.

    $ cp /ofs-snap/<path to file> /ofs/<path to file>


Live Rekeying

ObjectiveFS supports live rekeying which lets you update your AWS keys while keeping your filesystem mounted. With live rekeying, you don’t need to unmount and remount your filesystem to change your AWS keys. ObjectiveFS supports both automatic and manual live rekeying.

Automatic Rekeying with IAM roles

If you have attached an AWS EC2 IAM role to your EC2 instance, you can set AWS_METADATA_HOST to 169.254.169.254 to automatically rekey. With this setting, you don’t need to use the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID environment variables.

Manual Rekeying

You can also manually rekey by updating the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID environment variables (and also AWS_SECURITY_TOKEN if used) and sending SIGHUP to mount.objectivefs. The running objectivefs program (i.e. mount.objectivefs) will automatically reload and pick up the updated keys.
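A manual rekey might look like the following sketch, assuming your credentials live one-file-per-variable in the default /etc/objectivefs.env directory (the key values are placeholders):

```shell
# Write the new keys into the environment directory
$ echo '<new access key id>' | sudo tee /etc/objectivefs.env/AWS_ACCESS_KEY_ID
$ echo '<new secret access key>' | sudo tee /etc/objectivefs.env/AWS_SECRET_ACCESS_KEY
# Signal the running mounts to reload the updated credentials
$ sudo killall -HUP mount.objectivefs
```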


Compaction

DESCRIPTION:
ObjectiveFS is a log-structured filesystem that uses an object store for storage. Compaction combines multiple small objects into a larger object and brings related data close together. It improves performance and reduces the number of object store operations needed for each filesystem access. Compaction is a background process and adjusts dynamically depending on your workload and your filesystem’s characteristics.

USAGE:
You can specify the compaction rate by setting the mount option when mounting your filesystem. Faster compaction increases bandwidth usage. For more about mount options, see this section.

  • nocompact - disable compaction
  • compact - enable compaction level 1
  • compact,compact - enable compaction level 2 (uses more bandwidth than level 1)
  • compact,compact,compact - enable compaction level 3 (uses more bandwidth than level 2)
  • compact=<level> - (5.2 release and newer) set the compaction level (see below)

COMPACTION LEVELS:

  • Higher compaction levels will use more bandwidth. We recommend using an EC2 server in the same region as your S3 bucket when using compaction level 3 or higher to save on bandwidth cost.
  • Levels 1 (low) to 3 (high) are the normal compaction levels and are recommended for regular use.
  • Levels 4 and 5 are intended for quickly updating the storage layout of an existing filesystem, and can use a lot of bandwidth (up to TBs for large filesystems). These levels should be run only on a server in the same region as the S3 bucket, where data transfer is free, and they require the -o noratelimit mount option. For instructions on how to use levels 4 and 5, see this document.

DEFAULT:
Compaction is enabled by default. If the filesystem is mounted on an EC2 machine accessing an S3 bucket, compaction level 2 is the default. Otherwise, compaction level 1 is the default.

EXAMPLES:
A. To enable compaction level 3

$ sudo mount.objectivefs -o compact=3 myfs /ofs

B. To disable compaction

$ sudo mount.objectivefs -o nocompact myfs /ofs

TIPS:

  • You can find out the number of S3 objects for your ObjectiveFS filesystem in the IUsed column of df -i.
  • To increase the compaction rate of a filesystem, you can enable compaction on all mounts of that filesystem.
  • You can also set up a temporary extra mount with the fastest compaction level to increase the compaction rate. See the Storage Layout Performance Optimization doc.
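As a quick check of compaction progress, the object count from df -i can also be read in a script. This sketch assumes a Linux df; MOUNT defaults to / here only so the snippet runs anywhere, and would normally point at an ObjectiveFS mount such as /ofs:

```shell
#!/bin/sh
# Print the IUsed column of df -i, which for an ObjectiveFS mount
# approximates the number of S3 objects backing the filesystem.
MOUNT="${MOUNT:-/}"
objects=$(df -Pi "$MOUNT" | awk 'NR==2 {print $3}')
echo "objects (IUsed) for $MOUNT: $objects"
```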

Multithreading

Business and Enterprise Plan Feature

DESCRIPTION:
Multithreading is a performance feature that can lower latency and improve throughput for your workload. ObjectiveFS will spawn dedicated CPU and IO threads to handle operations such as data decompression, data integrity check, disk cache accesses and updates.

USAGE:
Multithreading mode can be enabled using the mt mount option, which sets the number of dedicated CPU threads to 4 and dedicated IO threads to 8. One read thread and one write thread will be spawned for each specified CPU and IO thread. You can also explicitly specify the number of dedicated CPU threads and IO threads using the cputhreads and iothreads mount options, or disable multithreading with the nomt mount option.

Mount option         Description
-o mt                sets cputhreads to 4 and iothreads to 8
-o cputhreads=<N>    sets the number of dedicated CPU threads to N (min: 0, max: 128)
-o iothreads=<N>     sets the number of dedicated IO threads to N (min: 0, max: 128)
-o nomt              sets cputhreads and iothreads to 0

DEFAULT VALUE:
By default, there are 2 dedicated IO threads and no dedicated CPU threads.

EXAMPLE:

A. Enable default multithreading option (4 cputhreads, 8 iothreads)

$ sudo mount.objectivefs -o mt <filesystem> <dir> 

B. Set CPU threads to 8 and IO threads to 16

$ sudo mount.objectivefs -o cputhreads=8,iothreads=16 <filesystem> <dir> 

C. Example fstab entry to enable multithreading

s3://<filesystem> <dir> objectivefs auto,_netdev,mt 0 0

Filesystem Pool

Business and Enterprise Plan Feature

Filesystem pool lets you have multiple file systems per bucket. Since AWS S3 has a default limit of 100 buckets per account, you can use pools if you need many file systems.

A filesystem pool is a collection of regular filesystems that simplifies the management of many filesystems. You can also use pools to organize your company’s file systems by team or department.

A file system in a pool is a regular file system. It has the same capabilities as a regular file system.

A pool is a top-level structure. This means that a pool can only contain file systems, and not other pools. Since a pool is not a filesystem, but a collection of filesystems, it cannot be mounted directly.

Reference: Managing Per-User Filesystems Using Filesystem Pool and IAM Policy

An example organization structure is:

 |
 |- myfs1                // one file system per bucket
 |- myfs2                // one file system per bucket
 |- mypool1 -+- /myfs1   // multiple file systems per bucket
 |           |- /myfs2   // multiple file systems per bucket
 |           |- /myfs3   // multiple file systems per bucket
 |
 |- mypool2 -+- /myfs1   // multiple file systems per bucket
 |           |- /myfs2   // multiple file systems per bucket
 |           |- /myfs3   // multiple file systems per bucket
 :

Create

To create a file system in a pool, use the regular create command with
<pool name>/<file system> as the filesystem argument.

sudo mount.objectivefs create [-l <region>] <pool>/<filesystem>

NOTE:

  • You don’t need to create a pool explicitly. A pool is automatically created when you create the first file system in this pool.
  • The file system will reside in the same region as the pool. Therefore, any subsequent file systems created in a pool will be in the same region, regardless of the -l <region> specification.

EXAMPLE:
A. Create an S3 file system in the default region (us-west-2)

# Assumption: your /etc/objectivefs.env contains S3 keys
$ sudo mount.objectivefs create s3://mypool/myfs

B. Create a GCS file system in the EU region

# Assumption: your /etc/objectivefs.env contains GCS keys
$ sudo mount.objectivefs create -l EU gs://mypool/myfs

List

When you list your file systems, a pool is distinguishable in the KIND column. A file system inside a pool is listed with the pool prefix.

You can also list the file systems in a pool by specifying the pool name.

sudo mount.objectivefs list [<pool name>]

EXAMPLE:
A. In this example, there are two pools, myfs-pool and myfs-poolb. The file systems in each pool are listed with the pool prefix.

$ sudo mount.objectivefs list
NAME                        KIND      REGION
s3://myfs-1                 ofs       us-west-2
s3://myfs-2                 ofs       eu-central-1
s3://myfs-pool/             pool      us-west-2
s3://myfs-pool/myfs-a       ofs       us-west-2
s3://myfs-pool/myfs-b       ofs       us-west-2
s3://myfs-pool/myfs-c       ofs       us-west-2
s3://myfs-poolb/            pool      us-west-1
s3://myfs-poolb/foo         ofs       us-west-1

B. List all file systems under a pool, e.g. myfs-pool

$ sudo mount.objectivefs list myfs-pool
NAME                        KIND      REGION
s3://myfs-pool/             pool      us-west-2
s3://myfs-pool/myfs-a       ofs       us-west-2
s3://myfs-pool/myfs-b       ofs       us-west-2
s3://myfs-pool/myfs-c       ofs       us-west-2

Mount

To mount a file system in a pool, use the regular mount command with
<pool name>/<file system> as the filesystem argument.

Run in background:

sudo mount.objectivefs [-o <opt>[,<opt>]..] <pool>/<filesystem> <dir>

Run in foreground:

sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <pool>/<filesystem> <dir>

EXAMPLES:
A. Mount an S3 file system and run the process in background

$ sudo mount.objectivefs s3://myfs-pool/myfs-a /ofs

B. Mount a GCS file system with a different env directory, e.g. /home/tom/.ofs_gcs.env, and run the process in foreground

$ sudo mount.objectivefs mount -o env=/home/tom/.ofs_gcs.env gs://mypool/myfs /ofs

Unmount

Same as the regular unmount command

Destroy

To destroy a file system in a pool, use the regular destroy command with
<pool name>/<file system> as the filesystem argument.

sudo mount.objectivefs destroy <pool>/<filesystem>

NOTE:

  1. You can destroy a file system in a pool without affecting the other file systems in the pool.
  2. A pool can only be destroyed if it is empty.

EXAMPLE:
Destroying an S3 file system in a pool

$ sudo mount.objectivefs destroy s3://myfs-pool/myfs-a  
*** WARNING ***
The filesystem 's3://myfs-pool/myfs-a' will be destroyed. All data (550 MB) will be lost permanently!
Continue [y/n]? y
Authorization code: <your authorization code>

UID/GID Mapping

Business and Enterprise Plan Feature

DESCRIPTION:
This feature lets you map local user ids and group ids to different ids in the remote filesystem. The id mappings should be 1-to-1, i.e. a single local id should map to only a single remote id, and vice versa. If multiple ids are mapped to the same id, the behavior is undefined.

When a uid is remapped and U* is not specified, all other unspecified uids will be mapped to the default uid: 65534 (aka nobody/nfsnobody). Similarly, all unspecified gids will be mapped to the default gid (65534) if a gid is remapped and G* is not specified.

USAGE:

IDMAP="<Mapping>[:<Mapping>]"
  where Mapping is:
    U<local id or name> <remote id>
    G<local id or name> <remote id>
    U* <default id>
    G* <default id>

Mapping Format
A. Single User Mapping:    U<local id or name> <remote id>
Maps a local user id or local user name to a remote user id.

B. Single Group Mapping:    G<local id or name> <remote id>
Maps a local group id or local group name to a remote group id.

C. Default User Mapping:    U* <default id>
Maps all unspecified local and remote user ids to the default id. If this mapping is not specified, all unspecified user ids will be mapped to uid 65534 (aka nobody/nfsnobody).

D. Default Group Mapping:    G* <default id>
Maps all unspecified local and remote group ids to the default id. If this mapping is not specified, all unspecified group ids will be mapped to gid 65534 (aka nobody/nfsnobody).

EXAMPLES:
A. UID mapping only

IDMAP="U600 350:Uec2-user 400:U* 800"
  • Local uid 600 is mapped to remote uid 350, and vice versa
  • Local ec2-user is mapped to remote uid 400, and vice versa
  • All other local uids are mapped to remote uid 800
  • All other remote uids are mapped to local uid 800
  • Group IDs are not remapped

B. GID mapping only

IDMAP="G800 225:Gstaff 400"
  • Local gid 800 is mapped to remote gid 225, and vice versa
  • Local group staff is mapped to remote gid 400, and vice versa
  • All other local gids are mapped to remote gid 65534 (aka nobody/nfsnobody)
  • All other remote gids are mapped to local gid 65534 (aka nobody/nfsnobody)
  • User IDs are not remapped

C. UID and GID mapping

IDMAP="U600 350:G800 225"
  • Local uid 600 is mapped to remote uid 350, and vice versa
  • Local gid 800 is mapped to remote gid 225, and vice versa
  • All other local uids and gids are mapped to remote 65534 (aka nobody/nfsnobody)
  • All other remote uids and gids are mapped to local 65534 (aka nobody/nfsnobody)

HTTP Proxy

Business and Enterprise Plan Feature

DESCRIPTION:
You can run ObjectiveFS with an http proxy to connect to your object store. A common use case is to connect ObjectiveFS to the object store via a squid caching proxy.

USAGE:
Set the http_proxy environment variable to the proxy server’s address (see environment variables section for how to set environment variables).

DEFAULT VALUE:
If the http_proxy environment variable is not set, this feature is disabled by default.

EXAMPLE:

Mount a filesystem (e.g. s3://myfs) with an http proxy running locally on port 3128:

$ sudo http_proxy=http://localhost:3128 mount.objectivefs mount myfs /ofs

Alternatively, you can set http_proxy in your /etc/objectivefs.env directory:

$ ls /etc/objectivefs.env
AWS_ACCESS_KEY_ID          OBJECTIVEFS_PASSPHRASE
AWS_SECRET_ACCESS_KEY      http_proxy
OBJECTIVEFS_LICENSE 

$ cat /etc/objectivefs.env/http_proxy
http://localhost:3128

Admin Mode

Business and Enterprise Plan Feature

DESCRIPTION:
The admin mode provides an easy way to manage many filesystems in a programmatic way. You can use the admin mode to easily script the creation of many filesystems.

The admin mode lets admins create filesystems without the interactive passphrase confirmations. To destroy a filesystem, admins only need to provide a ‘y’ confirmation and don’t need an authorization code. Admins can list the filesystems, similar to a regular user. However, admins are not permitted to mount a filesystem, to separate the admin functionality and user functionality.

Operation   User Mode                                   Admin Mode
Create      Needs passphrase confirmation               No passphrase confirmation needed
List        Allowed                                     Allowed
Mount       Allowed                                     Not allowed
Destroy     Needs authorization code and confirmation   Only confirmation needed

USAGE:
Enterprise plan users have an admin license key, in addition to their regular license key. Please contact support@objectivefs.com for this key.

To use admin mode, we recommend creating an admin-specific objectivefs environment directory, e.g. /etc/objectivefs.admin.env. Please use your admin license key for OBJECTIVEFS_LICENSE.

$ ls /etc/objectivefs.admin.env/
AWS_ACCESS_KEY_ID      AWS_SECRET_ACCESS_KEY  
OBJECTIVEFS_LICENSE    OBJECTIVEFS_PASSPHRASE
$ cat /etc/objectivefs.admin.env/OBJECTIVEFS_LICENSE
your_admin_license_key

You can have a separate user objectivefs environment directory, e.g. /etc/objectivefs.<user>.env, for each user to mount their individual filesystems.

EXAMPLES:

A. Create a filesystem in admin mode with credentials in /etc/objectivefs.admin.env

$ sudo OBJECTIVEFS_ENV=/etc/objectivefs.admin.env mount.objectivefs create myfs

B. Mount the filesystem as user tom in the background

$ sudo OBJECTIVEFS_ENV=/etc/objectivefs.tom.env mount.objectivefs myfs /ofs

Local License Check

Enterprise Plan Feature

DESCRIPTION:
While our regular license check is very robust and can handle multi-day outages, some companies prefer to minimize external dependencies. For these cases, we offer a local license check feature that lets you run your infrastructure independent of any license server.

USAGE:
Please talk with your enterprise support contact for instructions on how to enable the local license check on your account.


S3 Transfer Acceleration

Enterprise Plan Feature

DESCRIPTION:
ObjectiveFS supports AWS S3 Transfer Acceleration that enables fast transfers of files over long distances between your server and S3 bucket.

USAGE:
Set the AWS_TRANSFER_ACCELERATION environment variable to 1 to enable S3 transfer acceleration (see environment variables section for how to set environment variables).

REQUIREMENT:
Your S3 bucket needs to be configured to enable Transfer Acceleration. This can be done from the AWS Console.

EXAMPLES:

Mount a filesystem called myfs with S3 Transfer Acceleration enabled

$ sudo AWS_TRANSFER_ACCELERATION=1 mount.objectivefs myfs /ofs

AWS KMS Encryption

Enterprise Plan Feature

DESCRIPTION:
ObjectiveFS supports AWS Server-Side encryption using Amazon S3-Managed Keys (SSE-S3) and AWS KMS-Managed Keys (SSE-KMS).

USAGE:
Use the AWS_SERVER_SIDE_ENCRYPTION environment variable (see environment variables section for how to set environment variables).

The AWS_SERVER_SIDE_ENCRYPTION environment variable can be set to:

  • AES256  (for Amazon S3-Managed Keys (SSE-S3))
  • aws:kms  (for AWS KMS-Managed Keys (SSE-KMS) with default key)
  • <your kms key>  (for AWS KMS-Managed Keys (SSE-KMS) with the keys you create and manage)

REQUIREMENT:
To run SSE-KMS, stunnel is required. See the following guide for setup instructions.

EXAMPLES:

A. Create a filesystem called myfs with Amazon S3-Managed Keys (SSE-S3)

$ sudo AWS_SERVER_SIDE_ENCRYPTION=AES256 mount.objectivefs create myfs

B. Create a filesystem called myfs with Amazon S3-Managed Keys (SSE-KMS)
Note: make sure stunnel is running. See setup instructions.

$ sudo http_proxy=http://localhost:8086 AWS_SERVER_SIDE_ENCRYPTION=aws:kms mount.objectivefs create myfs

C. Mount a filesystem called myfs with Amazon KMS-Managed Keys (SSE-KMS) using the default key
Note: make sure stunnel is running. See setup instructions.

$ sudo http_proxy=http://localhost:8086 AWS_SERVER_SIDE_ENCRYPTION=aws:kms mount.objectivefs myfs /ofs

D. Mount a filesystem called myfs with Amazon KMS-Managed Keys (SSE-KMS) using a specific key
Note: make sure stunnel is running. See setup instructions.

$ sudo http_proxy=http://localhost:8086 AWS_SERVER_SIDE_ENCRYPTION=<your aws kms key> mount.objectivefs myfs /ofs

Logging

Log information is printed to the terminal when running in the foreground, and is sent to syslog when running in the background. On Linux, the log is typically at /var/log/messages or /var/log/syslog. On macOS, the log is typically at /var/log/system.log.
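For example, to check recent ObjectiveFS log activity when running in the background (the log path varies by distribution, as noted above):

```shell
# Show the five most recent ObjectiveFS log lines
$ grep objectivefs /var/log/messages | tail -n 5   # RedHat-family
$ grep objectivefs /var/log/syslog | tail -n 5     # Debian/Ubuntu
```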

Below is a list of common log messages. For error messages, please see troubleshooting section.
A. Initial mount message
SUMMARY: The message logged every time an ObjectiveFS filesystem is mounted; it contains the settings for this mount.

FORMAT:

objectivefs <version> starting [<settings>]

DESCRIPTION:

Setting              Description
cachesize            The memory cache size, set by CACHESIZE
clean                The storage cleaner is enabled. Set by the clean mount option
compact <off | level>  Compaction status: off or the compaction level
cputhreads <num>     Number of dedicated CPU threads (see multithreading)
diskcache <on | off>  Indicates whether the disk cache is on or off
endpoint             The endpoint used to access your object store bucket
export               The export mount option is enabled
fuse version <ver>   The fuse protocol version that the kernel uses
hpc                  The hpc mount option is enabled
iothreads <num>      Number of dedicated I/O threads (see multithreading)
noratelimit          The noratelimit mount option is set
region               The region of your S3 or GCS bucket

EXAMPLE:
A. A filesystem in us-west-2 mounted with no additional mount options

objectivefs 5.2 starting [fuse version 7.22, region us-west-2, endpoint http://s3-us-west-2.amazonaws.com, cachesize 753MB, diskcache off, compact 1, cputhreads 0, iothreads 2]

B. A filesystem in eu-central-1 mounted with disk cache enabled, hpc, storage cleaner, compaction level 3 and multithreading

objectivefs 5.2 starting [fuse version 7.22, region eu-central-1, endpoint http://s3-eu-central-1.amazonaws.com, cachesize 2457MB, diskcache on, hpc, clean, compact 3, cputhreads 4, iothreads 8]

B. Regular log message
SUMMARY: The message logged while your filesystem is active. It shows the cumulative number of S3 operations and bandwidth usage since the initial mount message.

FORMAT:

<put> <list> <get> <delete> <bandwidth in> <bandwidth out> <clean>

DESCRIPTION:

<put> <list> <get> <delete>
For this mount on this machine, the total number of put, list, get and delete operations to S3 or GCS since the filesystem was mounted. These numbers start at zero every time the filesystem is mounted.
<bandwidth in> <bandwidth out>
For this mount on this machine, the total amount of incoming and outgoing bandwidth since the filesystem was mounted. These numbers start at zero every time the filesystem is mounted.
<clean> (when the storage cleaner is enabled)
For this mount on this machine, the total compressed S3 storage reclaimed by the storage cleaner.

EXAMPLE:
1403 PUT, 571 LIST, 76574 GET, 810 DELETE, 5.505 GB IN, 5.309 GB OUT

C. Caching Statistics
SUMMARY: Caching statistics is part of the regular log message starting in ObjectiveFS v.4.2. This data can be useful for tuning memory and disk cache sizes for your workload.

FORMAT:

CACHE [<cache hit> <metadata> <data> <os>], DISK [<hit>]

DESCRIPTION:

<cache hit>
Percentage of total requests that hit in the memory cache (cumulative)
<metadata>
Percentage of metadata requests that hit in the memory cache (cumulative)
<data>
Percentage of data requests that hit in the memory cache (cumulative)
<os>
Amount of cached data referenced by the OS at the current time
DISK [<hit>]
Percentage of requests that hit in the disk cache (cumulative)

EXAMPLE:
CACHE [74.9% HIT, 94.1% META, 68.1% DATA, 1.781 GB OS], DISK [99.0% HIT]

D. Error messages
SUMMARY: Error response from S3 or GCS

FORMAT:

retrying <operation> due to <endpoint> response: <S3/GCS response> [x-amz-request-id:<amz-id>, x-amz-id-2:<amz-id2>]

DESCRIPTION:

<operation>
The PUT, GET, LIST or DELETE operation that encountered the error
<endpoint>
The endpoint used to access your S3 or GCS bucket, typically determined by the region
<S3/GCS response>
The error response from S3 or GCS
<amz-id>
S3 only: the unique request ID from Amazon S3 for the request that encountered the error. This ID can help Amazon troubleshoot the problem.
<amz-id2>
S3 only: the corresponding token from Amazon S3 for the request that encountered the error. Used for troubleshooting.

EXAMPLE:
retrying GET due to s3-us-west-2.amazonaws.com response: 500 Internal Server Error, InternalError, x-amz-request-id:E854A4F04A83C125, x-amz-id-2:Zad39pZ2mkPGyT/axl8gMX32nsVn


Relevant Files

/etc/objectivefs.env
Default ObjectiveFS environment variable directory
/etc/resolv.conf
Recursive name resolvers from this file are used unless the DNSCACHEIP environment variable is set.
/var/log/messages
ObjectiveFS output log location on certain Linux distributions (e.g. RedHat) when running in the background.
/var/log/syslog
ObjectiveFS output log location on certain Linux distributions (e.g. Ubuntu) when running in the background.
/var/log/system.log
Default ObjectiveFS output log location on macOS when running in the background.

Troubleshooting

Initial Setup

403 Permission denied
Your S3 keys do not have permissions to access S3. Check that your user keys are added to a group with all S3 permissions.
./mount.objectivefs: Permission denied
mount.objectivefs needs executable permissions set. Run chmod +x mount.objectivefs

During Operation

Transport endpoint is not connected
ObjectiveFS process was killed. The most common reason is related to memory usage and the oom killer. Please see the Memory Optimization Guide for how to optimize memory usage.
Large delay for writes from one machine to appear at other machines
  1. Check that the time on these machines is synchronized. Please verify NTP has a small offset (<1 sec).
    To adjust the clock:
    On Linux: run /usr/sbin/ntpdate pool.ntp.org.
    On macOS: System Preferences → Date & Time → Set date and time automatically
  2. Check for any S3/GCS error responses in the log file
RequestTimeTooSkewed: The difference between the request time and the current time is too large.
The clock on your machine is too fast or too slow. To adjust the clock:
On Linux: run /usr/sbin/ntpdate pool.ntp.org.
On macOS: System Preferences → Date & Time → Set date and time automatically
Checksum error, bad cryptobox
The checksum error occurs when our end-to-end data integrity checker detects the data you stored on S3 differs from the data received when it is read again. Two common causes are:
1. Your S3/GCS bucket contains non-ObjectiveFS objects. Since ObjectiveFS is a log-structured filesystem that uses the object store for storage, it expects to fully manage the content of the bucket. Copying non-ObjectiveFS files directly into the S3/GCS bucket will cause the end-to-end data integrity check to fail with this error.
To fix this, move the non-ObjectiveFS objects out of this bucket.
2. You may be running behind a firewall/proxy that modifies the data in transit. Please contact support@objectivefs.com for the workaround.
Ratelimit delay
ObjectiveFS has a built-in request rate-limiter to prevent runaway programs from running up your S3 bill. The built-in limit starts at 25 million GET requests and 1 million each for PUT and LIST requests. It is implemented as a leaky bucket with a fill rate of 10 million GET requests and 1 million PUT/LIST requests per day, and is reset upon mount.
To explicitly disable the rate-limiter, you can use the noratelimit mount option.
Filesystem format is too new
One likely cause of this error is when your S3/GCS bucket contains non-ObjectiveFS objects. Since ObjectiveFS is a log-structured filesystem that uses the object store for storage, it expects to fully manage the content of the bucket. Copying non-ObjectiveFS files directly into the S3/GCS bucket will cause this error to occur.
To fix this error, move the non-ObjectiveFS objects out of this bucket.

Unmount

Resource busy during unmount
Either a directory or a file in the file system is being accessed. Please verify that you are not accessing the file system anymore.

Upgrades

ObjectiveFS is forward and backward compatible. Upgrading or downgrading to a different release is straightforward. You can also do rolling upgrades for multiple servers. To upgrade: install the new version, unmount and then remount your filesystem.
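A rolling upgrade of one server might look like this (the package installation step is illustrative and depends on how ObjectiveFS was installed on your distribution):

```shell
# Install the new release (example for an RPM-based system; adjust for your setup)
$ sudo yum update objectivefs
# Unmount and remount to start running the new version
$ sudo umount /ofs
$ sudo mount.objectivefs myfs /ofs
```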


Questions

Don’t hesitate to give us a call at +1-415-997-9967, or send us an email at support@objectivefs.com.
