User Guide (4.3.4 and older)

For release 5.0 and newer, please refer to this guide.

Overview

It’s quick and easy to get started with your ObjectiveFS file system: just three steps to get your new file system up and running. (see Get Started)

ObjectiveFS runs on your Linux and OSX machines, and implements a log-structured file system with a cloud object store backend, such as AWS S3, Google Cloud Storage (GCS) or your on-premises object store. Your data is encrypted before leaving your machine, and stays encrypted until it returns to your machine.

This user guide covers the commands and options supported by ObjectiveFS. For an overview of the commands, refer to the Command Overview section. For detailed description and usage of each command, refer to the Reference section.

What you need


Commands

Config
Sets up the required variables to run ObjectiveFS (details)
sudo mount.objectivefs config [<directory>]
Create
Creates a new file system (details)
sudo mount.objectivefs create [-l <region>] <filesystem>
List
Lists your file systems in S3 or GCS (details)
sudo mount.objectivefs list [-a] [<filesystem>]
Mount
Mounts your file system on your Linux or OSX machines (details)
Run in background:
sudo mount.objectivefs [-o <opt>[,<opt>]..] <filesystem> <dir>
Run in foreground: sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <filesystem> <dir>
Unmount
Unmounts your file system on your Linux or OSX machines (details)
sudo umount <dir>
Destroy
Destroys your file system (details)
sudo mount.objectivefs destroy <filesystem>

Reference

This section covers the detailed description and usage for each ObjectiveFS command.

Config

SUMMARY: Sets up the required variables to run ObjectiveFS

USAGE:

sudo mount.objectivefs config [<directory>]

DESCRIPTION:
Config is a one-time operation that sets up the required credentials, such as object store keys and your license, as environment variables in a directory.

<directory>
Directory to store your environment variables. This directory should not already exist.
Default: /etc/objectivefs.env

WHAT YOU NEED:

EXAMPLES:
A. Default setup: your credentials will be created at /etc/objectivefs.env

$ sudo mount.objectivefs config
Creating config in /etc/objectivefs.env
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Access Key Id: <your AWS or GCS access key>
Enter Secret Access Key: <your AWS or GCS secret key>

B. User-specified destination directory

$ sudo mount.objectivefs config /home/tom/.ofs.env
Creating config in /home/tom/.ofs.env
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Access Key Id: <your AWS or GCS access key>
Enter Secret Access Key: <your AWS or GCS secret key>

TIPS:

  • If you need to make changes to your credentials, edit the files in the environment directory directly (e.g. /etc/objectivefs.env/AWS_ACCESS_KEY_ID)
  • If you don’t want to use your environment directory’s variables, you can also set the environment variables directly on the command line. See environment variables on command line.
  • You can also create the environment variables directory manually without using the config command. You will need to set up AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and OBJECTIVEFS_LICENSE in the environment directory. See environment variables section for details.
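The manual setup described in the last tip can be sketched as follows. The directory path and values below are placeholders for illustration, not real credentials; in practice you would use /etc/objectivefs.env (the default location) and your actual keys:

```shell
# Manually create an ObjectiveFS environment directory (sketch).
# One file per environment variable; the file content is the value.
mkdir -p ofs.env
chmod 700 ofs.env
printf 'your_access_key_id\n' > ofs.env/AWS_ACCESS_KEY_ID
printf 'your_secret_key\n' > ofs.env/AWS_SECRET_ACCESS_KEY
printf 'your_license_key\n' > ofs.env/OBJECTIVEFS_LICENSE
chmod 600 ofs.env/AWS_ACCESS_KEY_ID ofs.env/AWS_SECRET_ACCESS_KEY ofs.env/OBJECTIVEFS_LICENSE
```

Restricting the file permissions keeps the credentials readable only by root, matching what the config command produces.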

Create

SUMMARY: Creates a new file system

USAGE:

sudo mount.objectivefs create [-l <region>] <filesystem>

DESCRIPTION:
This command creates a new file system in your S3 or GCS object store. You need to provide a passphrase for the new file system. Please choose a strong passphrase, write it down and store it somewhere safe.
IMPORTANT: Without the passphrase, there is no way to recover any files.

<filesystem>
A globally unique, non-secret file system name. (Required)
The filesystem name maps to an S3/GCS bucket, and Amazon/Google require a globally unique namespace.
For S3, you can add the “s3://” prefix or use no prefix, e.g. s3://myfs or myfs.
For GCS, you can add the “gs://” prefix or use no prefix, e.g. gs://myfs or myfs.
-l <region>
The S3 or GCS region to store your file system. (see region list)
Default: S3’s default is us-west-2. GCS’s default is based on your physical location: US, EU or Asia.

ObjectiveFS also supports creating multiple file systems per bucket. Please refer to the filesystem pool section for details.

WHAT YOU NEED:

EXAMPLES:
A. Create an S3 file system in the default region (us-west-2)

# Assumption: your /etc/objectivefs.env has S3 keys
$ sudo mount.objectivefs create myfs
Passphrase (for s3://myfs): <your passphrase>
Verify passphrase (for s3://myfs):  <your passphrase>

B. Create an S3 file system in a user-specified region (e.g. eu-central-1)

# Assumption: your /etc/objectivefs.env has S3 keys
$ sudo mount.objectivefs create -l eu-central-1 s3://myfs
Passphrase (for s3://myfs): <your passphrase>
Verify passphrase (for s3://myfs):  <your passphrase>

C. Create a GCS file system in the default region

# Assumption: your /etc/objectivefs.env has GCS keys
$ sudo mount.objectivefs create gs://myfs
Passphrase (for gs://myfs): <your passphrase>
Verify passphrase (for gs://myfs):  <your passphrase>

D. Create a GCS file system in a user-specified region (e.g. US)

# Assumption: your /etc/objectivefs.env has GCS keys
$ sudo mount.objectivefs create -l US myfs
Passphrase (for gs://myfs): <your passphrase>
Verify passphrase (for gs://myfs):  <your passphrase>

TIPS:

  • You can store your passphrase in your environment directory by creating a file OBJECTIVEFS_PASSPHRASE (e.g. /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE). This lets you run mount without needing to manually enter your passphrase every time.
    Please verify that its permissions match the other files in the environment directory (e.g. OBJECTIVEFS_LICENSE).
  • To run with another ObjectiveFS environment directory, e.g. /home/tom/.ofs.env:
    $ sudo OBJECTIVEFS_ENV=/home/tom/.ofs.env mount.objectivefs create myfs

List

SUMMARY: Lists your file systems in S3 or GCS

USAGE:

sudo mount.objectivefs list [-a] [<filesystem>]

DESCRIPTION:
This command lists the file systems in your object store backend (S3 or GCS, depending on your keys in /etc/objectivefs.env). The output includes the file system name, kind (i.e. ofs for a regular file system or pool, see filesystem pool) and region (see region list).

-a
List all file systems, including non-ObjectiveFS buckets in S3 or GCS.
Default: only ObjectiveFS file systems and pools are listed.
<filesystem>
A specific file system name to list. You can add “s3://” prefix for S3 or “gs://” prefix for GCS, e.g. myfs or s3://myfs or gs://myfs.
If the file system doesn’t exist, nothing will be returned.
Default: all ObjectiveFS file systems are listed.

WHAT YOU NEED:

EXAMPLES:
A. Default case with S3 keys: all of your ObjectiveFS file systems are listed

$ sudo mount.objectivefs list
NAME                        KIND      REGION
s3://myfs-1                 ofs       us-west-2
s3://myfs-2                 ofs       eu-central-1
s3://myfs-3                 ofs       ap-southeast-1
s3://myfs-pool/             pool      us-west-2
s3://myfs-pool/myfs-a       ofs       us-west-2
s3://myfs-pool/myfs-b       ofs       us-west-2
s3://myfs-pool/myfs-c       ofs       us-west-2

B. Default case with GCS keys: all of your ObjectiveFS file systems are listed

$ sudo mount.objectivefs list
NAME                        KIND      REGION
gs://myfs-1                 ofs       US
gs://myfs-2                 ofs       EU
gs://myfs-3                 ofs       ASIA
gs://myfs-pool/             pool      US
gs://myfs-pool/myfs-a       ofs       US
gs://myfs-pool/myfs-b       ofs       US
gs://myfs-pool/myfs-c       ofs       US

C. List a specific file system, e.g. s3://myfs-2

$ sudo mount.objectivefs list s3://myfs-2
NAME                        KIND      REGION
s3://myfs-2                 ofs       eu-central-1

D. List everything, including non-ObjectiveFS buckets. In the example below, s3://my-bucket is a non-ObjectiveFS bucket.

$ sudo mount.objectivefs list -a
NAME                        KIND      REGION
s3://myfs-1                 ofs       us-west-2
s3://myfs-2                 ofs       eu-central-1
s3://myfs-3                 ofs       ap-southeast-1
s3://my-bucket              -         us-west-1

TIPS:

  • To run with another ObjectiveFS environment directory, e.g. /home/tom/.ofs.env:
$ sudo OBJECTIVEFS_ENV=/home/tom/.ofs.env mount.objectivefs list

Mount

SUMMARY: Mounts your file system on your Linux or OSX machines.

USAGE:
Run in background:

sudo mount.objectivefs [-o <opt>[,<opt>]..] <filesystem> <dir>

Run in foreground:

sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <filesystem> <dir>

DESCRIPTION:
This command mounts your file system on a directory on your Linux or OSX machine. After the file system is mounted, you can read and write to it just like a local disk.

If your machine sleeps while the file system is mounted, the file system will remain mounted and working after it wakes up, even if your network has changed. This is useful for a laptop that is used in multiple locations.

You can mount the same file system on as many Linux or OSX machines as you’d like to share your file system with.

NOTE: The mount command requires sudo. It runs in the foreground if “mount” is provided, and runs in the background otherwise.

<filesystem>
Your globally unique, non-secret file system name. (Required)
It can have “s3://” prefix for S3, “gs://” prefix for GCS or no prefix (determined by your key).
<dir>
Directory (full path name) on your machine to mount your file system. (Required)
This directory should be an existing empty directory.

General Mount Options

-o env=<dir>
Load environment variables from directory <dir>. See environment variable section.

Mount Point Options

-o dev | nodev
Allow block and character devices. (Default: Allow)
-o diratime | nodiratime
Update / Don’t update directory access time. (Default: Update)
-o exec | noexec
Allow binary execution. (Default: Allow)
-o nonempty
Allow mounting on a non-empty directory. (Default: Disabled)
-o ro | rw
Read-only / Read-write file system. (Default: Read-write)
-o strictatime | relatime | noatime
Update / Smart update / Don’t update access time. (Default: Smart update)
-o suid | nosuid
Allow / Disallow suid bits. (Default: Allow)

File System Mount Options

-o compact | nocompact
Enable / disable background index compaction. (Default: Enable)
-o hpc | nohpc
Enable / disable high performance computing mode. Enabling hpc assumes the server is running in the same data center as the object store. (Default: detected automatically)
-o mt | nomt | cputhreads=<N> | iothreads=<N>
Enable / disable / configure multithreading options. (Default: 2 dedicated IO threads) (details in Multithreading section)
-o ratelimit | noratelimit
Enable / disable the built-in request rate-limiter, which is designed to prevent runaway programs from running up your S3 bill. (Default: Enable)

WHAT YOU NEED:

  • Your environment directory (see Config section)
  • The file system name (see Create section for creating a file system)
  • An empty directory to mount your file system

EXAMPLES:
Assumptions:
1. /ofs is an existing empty directory
2. Your passphrase is in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE

A. Mount an S3 file system

$ sudo mount.objectivefs mount myfs /ofs

B. Mount a GCS file system with a different env directory, e.g. /home/tom/.ofs_gs.env

$ sudo mount.objectivefs mount -o env=/home/tom/.ofs_gs.env gs://myfs /ofs

C. Mount an S3 file system in the background

$ sudo mount.objectivefs s3://myfs /ofs

D. Mount a GCS file system with non-default options
Assumption: /etc/objectivefs.env contains GCS keys.

$ sudo mount.objectivefs mount  -o nosuid,nodev,noexec,noatime gs://myfs /ofs

TIPS:

  • Control-C stops the file system and attempts to unmount it. To properly unmount the file system, use umount (see the Unmount section).
  • You can store your passphrase in a file called OBJECTIVEFS_PASSPHRASE in the environment directory (e.g. /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE). This lets you run the mount command without needing to manually enter your passphrase every time. Please verify that its permissions match the other files in the environment directory (e.g. OBJECTIVEFS_LICENSE).
  • To run with another ObjectiveFS environment directory, e.g. /home/tom/.ofs.env:
$ sudo OBJECTIVEFS_ENV=/home/tom/.ofs.env mount.objectivefs mount myfs /ofs

Mount on Boot

ObjectiveFS supports Mount on Boot, where your directory is mounted automatically upon reboot.

WHAT YOU NEED:

A. Linux

  1. Check that /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE exists so you can mount your filesystem without needing to enter the passphrase.
    If it doesn’t exist, create the file with your passphrase as the content. (see details)

  2. Add a line to /etc/fstab with:

    <filesystem> <mount dir> objectivefs auto,_netdev[,<opts>]  0  0

    Note: _netdev is used by many Linux distributions to mark the file system as a network file system.

  3. For more details, see Mount on Boot Setup Guide for Linux.
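For example, a complete /etc/fstab line for a hypothetical filesystem s3://myfs mounted on /ofs, with no extra options, would look like:

```
s3://myfs /ofs objectivefs auto,_netdev 0 0
```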

B. OSX

OSX can use launchd to mount on boot. See Mount on Boot Setup Guide for OSX for details.


Unmount

SUMMARY: Unmounts your file system on your Linux or OSX machines

USAGE:

sudo umount <dir>

DESCRIPTION:
To unmount your filesystem, run umount with the mount directory. Typing Control-C in the window where ObjectiveFS is running in the foreground will stop the filesystem, but may not unmount it. Please run umount to properly unmount the filesystem.

WHAT YOU NEED:

  • Not accessing any file or directory in the file system

EXAMPLE:

$ sudo umount /ofs

Destroy

IMPORTANT: After a destroy, there is no way to recover any files or data.

SUMMARY: Destroys your file system. This is an irreversible operation.

USAGE:

sudo mount.objectivefs destroy <filesystem>

DESCRIPTION:
This command deletes your file system from your S3 or GCS object store. Please make sure that you really don’t need your data anymore because this operation cannot be undone.

You will be prompted for the authorization code available on your user profile page. This code changes periodically. Please refresh your user profile page to get the latest code.

<filesystem>
The file system that you want to destroy. (Required)
It can have “s3://” prefix for S3, “gs://” prefix for GCS or no prefix (determined by your key).

NOTE:
Your file system should be unmounted from all of your machines before running destroy.

WHAT YOU NEED:

  • Your environment directory (see Config section)
  • Your authorization code from your user profile page
    (Note: the authorization code changes periodically. To get the latest code, please refresh your user profile page.)

EXAMPLE:

$ sudo mount.objectivefs destroy s3://myfs  
*** WARNING ***
The filesystem 's3://myfs' will be destroyed. All data (550 MB) will be lost permanently!
Continue [y/n]? y
Authorization code: <your authorization code>

Settings

This section covers the options you can run ObjectiveFS with.

Regions

This is a list of supported S3 and GCS regions. ObjectiveFS supports all regions for S3 and GCS.

AWS S3

  • us-west-2 [default for S3 if no region is specified]
  • us-west-1
  • us-east-1
  • eu-west-1
  • eu-central-1
  • ap-southeast-1
  • ap-southeast-2
  • ap-northeast-1
  • ap-northeast-2
  • sa-east-1
  • us-gov-west-1 [requires AWS GovCloud account]

GCS

  • ASIA
  • US
  • EU

RELATED COMMANDS:

REFERENCE:
AWS S3 regions, GCS regions


Endpoints

The table below lists the corresponding endpoint for each supported region.

AWS S3

Region          Endpoint
us-east-1       s3-external-1.amazonaws.com
us-west-2       s3-us-west-2.amazonaws.com
us-west-1       s3-us-west-1.amazonaws.com
eu-west-1       s3-eu-west-1.amazonaws.com
eu-central-1    s3.eu-central-1.amazonaws.com or s3-eu-central-1.amazonaws.com
ap-southeast-1  s3-ap-southeast-1.amazonaws.com
ap-southeast-2  s3-ap-southeast-2.amazonaws.com
ap-northeast-1  s3-ap-northeast-1.amazonaws.com
ap-northeast-2  s3.ap-northeast-2.amazonaws.com or s3-ap-northeast-2.amazonaws.com
sa-east-1       s3-sa-east-1.amazonaws.com
us-gov-west-1   s3-us-gov-west-1.amazonaws.com

GCS
All GCS regions have the endpoint storage.googleapis.com


Environment Variables

ObjectiveFS uses environment variables for configuration. You can set them using any standard method (e.g. on the command line, in your shell). We also support reading environment variables from a directory.

The filesystem settings specified by the environment variables are set at start up. To update the settings (e.g. change the memory cache size, enable disk cache), please unmount your filesystem and remount it with the new settings (exception: live rekey).

A. Environment Variables from Directory
ObjectiveFS supports reading environment variables from files in a directory, similar to the envdir tool from the daemontools package.

Your environment variables are stored in a directory. Each file in the directory corresponds to an environment variable, where the file name is the environment variable name and the first line of the file content is the value.

SETUP:
The Config command sets up your environment directory with 3 main environment variables:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • OBJECTIVEFS_LICENSE

You can also add additional environment variables in the same directory using the same format: the file name is the environment variable name, and the first line of the file content is the value.

EXAMPLE:

$ ls /etc/objectivefs.env/
AWS_ACCESS_KEY_ID  AWS_SECRET_ACCESS_KEY  OBJECTIVEFS_LICENSE  OBJECTIVEFS_PASSPHRASE
$ cat /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE
your_objectivefs_passphrase

B. Environment Variables on Command Line
You can also set the environment variables on the command line. The user-provided environment variables will override the environment directory’s variables.

USAGE:

sudo [<ENV VAR>='<value>'] mount.objectivefs 

EXAMPLE:

$ sudo CACHESIZE=30% mount.objectivefs myfs /ofs

SUPPORTED ENVIRONMENT VARIABLES

To enable a feature, set the corresponding environment variable.

ACCOUNT
Username of the user to run as. Root privileges will be dropped after startup.
AWS_ACCESS_KEY_ID
Your S3 access key. (Required or use AWS_METADATA_HOST)
AWS_DEFAULT_REGION
Your preferred S3 region for creating a file system. (Default: us-west-2)
NOTE: The command line option: -l <region> will override this variable, if present.
AWS_METADATA_HOST
AWS STS host publishing session keys (for EC2 set to “169.254.169.254”). Sets and rekeys AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SECURITY_TOKEN. (added in 2.1)
AWS_SECRET_ACCESS_KEY
Your secret S3 key. (Required or use AWS_METADATA_HOST)
AWS_SECURITY_TOKEN
Session security token when using AWS STS.
AWS_SERVER_SIDE_ENCRYPTION
Server-side encryption with AWS KMS support. (Business and Enterprise plan feature) (see Server-side Encryption section)
CACHESIZE
Set cache size as a percentage of memory (e.g. 30%) or absolute value (e.g. 500M or 1G). (Default: 20%) (see Memory Cache section)
DISKCACHE_SIZE
Enable and set disk cache size and optional free disk size. (see Disk Cache section)
DISKCACHE_PATH
Location of disk cache when disk cache is enabled. (see Disk Cache section)
DNSCACHEIP
IP address of recursive name resolver. (Default: use /etc/resolv.conf)
http_proxy
HTTP proxy server address. (see HTTP Proxy section)
IDMAP
User ID and Group ID mapping. (see UID/GID Mapping section)
OBJECTIVEFS_ENV
Directory to read environment variables from. (Default: /etc/objectivefs.env)
OBJECTIVEFS_LICENSE
Your ObjectiveFS license key. (Required)
OBJECTIVEFS_PASSPHRASE
Passphrase for your filesystem. (Default: will prompt)

Features

Memory Cache

DESCRIPTION:
ObjectiveFS uses memory to cache data and metadata locally to improve performance and to reduce the number of S3 operations.

USAGE:
Use the CACHESIZE environment variable to set the memory cache size (see environment variables section for how to set environment variables).

You can set CACHESIZE to either the actual memory size (e.g. 500M or 2G) or as a percentage of memory (e.g. 30%). A minimum of 64MB will be used.

DEFAULT VALUE:
If CACHESIZE is not specified, the default is 20% of memory for machines with 3GB+ memory or 5%-20% (with a minimum of 64MB) for machines with less than 3GB memory.
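As a rough illustration of the 20% default, this Linux-only sketch computes what CACHESIZE=20% resolves to on the current machine (it reads MemTotal from /proc/meminfo; the 64MB floor mirrors the documented minimum):

```shell
# Compute 20% of total memory in MB, with ObjectiveFS's 64MB minimum (sketch)
memkb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
cachemb=$(( memkb / 5 / 1024 ))        # 20% of memory, converted from kB to MB
[ "$cachemb" -lt 64 ] && cachemb=64    # the minimum cache size is 64MB
echo "${cachemb}M"
```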

EXAMPLES:

A. Set memory cache size to 30%

$ sudo CACHESIZE=30% mount.objectivefs myfs /ofs

B. Set memory cache size to 2GB

$ sudo CACHESIZE=2G mount.objectivefs myfs /ofs

RELEVANT INFO:

  • The cache size is applicable per mount. If you are mounting multiple ObjectiveFS file systems on the same machine, the total cache memory used by ObjectiveFS will be the sum of all CACHESIZE values for all mounts.
  • The cache size used will be printed by ObjectiveFS upon file system mount, either on the screen (when running in foreground) or in the log file (when running in background).
  • The minimum value for CACHESIZE is 64M.
  • Memory optimization guide

Disk Cache

DESCRIPTION:
ObjectiveFS can use local disks to cache data and metadata locally to improve performance and to reduce the number of S3 operations. Once the disk cache is enabled, ObjectiveFS handles the operations automatically, with no additional maintenance required from the user.

The disk cache is compressed, encrypted and has strong integrity checks. It is robust: it can be copied between machines, and even manually deleted, while in active use. So, you can rsync the disk cache between machines to warm the cache or to update the content.

Since the disk cache’s content persists when your file system is unmounted, you get the benefit of fast restart and fast access when you remount your file system.

ObjectiveFS will always keep some free space on the disk, by periodically checking the free disk space. If your other applications use more disk space, ObjectiveFS will adjust and use less by shrinking its cache.

Multiple file systems on the same machine can share the same disk cache without crosstalk, and they will collaborate to keep the most used data in the disk cache.

RECOMMENDATION:
We recommend enabling the disk cache when a local SSD or hard drive is available. (See how to mount an instance store on EC2 for disk cache.)

USAGE:
The disk cache uses DISKCACHE_SIZE and DISKCACHE_PATH environment variables (see environment variables section for how to set environment variables). To enable disk cache, set DISKCACHE_SIZE.

DISKCACHE_SIZE:

  • Accepts values in the form <DISK CACHE SIZE>[:<FREE SPACE>].
  • You can set <DISK CACHE SIZE> to the actual space you want ObjectiveFS to use (e.g. 20G or 1T). If the value specified is larger than the actual disk (e.g. 1P), ObjectiveFS will try to use as much space as possible on the disk while preserving the free space.
  • You can also set the optional <FREE SPACE> to the amount of free space you want to keep on the volume (e.g. 5G). The default value is 3G (if free space is not specified) and the minimum is 0G.
  • When <FREE SPACE> is set to 0G, ObjectiveFS will try to use as much space as possible (useful for dedicated disk cache partition).
  • The free space value has precedence over disk cache size. The actual disk cache size is the smaller of the specified disk cache size or (total disk space - free space).
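The precedence rule in the last bullet can be illustrated with shell arithmetic. The sizes below are hypothetical values in GB; the actual cache size is the smaller of the requested size and (total disk space - free space):

```shell
# Illustrate DISKCACHE_SIZE precedence: free space wins over cache size (sketch)
requested=20   # <DISK CACHE SIZE> in GB (hypothetical)
total=25       # total disk space in GB (hypothetical)
free=10        # <FREE SPACE> to preserve in GB (hypothetical)
avail=$(( total - free ))
actual=$(( requested < avail ? requested : avail ))
echo "${actual}G"   # with these numbers: 15G, since only 15GB remain after free space
```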

DISKCACHE_PATH

  • Specifies the location of the disk cache.
  • Default location:
    OSX: /Library/Caches/ObjectiveFS
    Linux: /var/cache/objectivefs

DEFAULT VALUE:
Disk cache is disabled when DISKCACHE_SIZE is not specified.

EXAMPLES:

A. Set disk cache size to 20GB and use default free space (3GB)

$ sudo DISKCACHE_SIZE=20G mount.objectivefs myfs /ofs

B. Use as much space as possible for disk cache and keep 10GB free space

$ sudo DISKCACHE_SIZE=1P:10G mount.objectivefs myfs /ofs

C. Set disk cache size to 20GB and free space to 10GB and specify the disk cache path

$ sudo DISKCACHE_SIZE=20G:10G DISKCACHE_PATH=/var/cache/mydiskcache mount.objectivefs myfs /ofs

D. Use the entire space for disk cache (when using dedicated volume for disk cache)

$ sudo DISKCACHE_SIZE=1P:0G mount.objectivefs myfs /ofs

WHAT YOU NEED:

  • Free space on your local SSD, OR
  • Mount your local instance store or SSD on /var/cache/objectivefs or on your DISKCACHE_PATH

RELEVANT INFO:

  • Different file systems can point to the same disk cache. They can also point to different locations by setting different DISKCACHE_PATH.
  • The DISKCACHE_SIZE is per disk cache directory. If multiple file systems are mounted concurrently using different DISKCACHE_SIZE values, and are pointing to the same DISKCACHE_PATH, ObjectiveFS will use the minimum disk cache size and the maximum free space value.
  • A background clean-up cycle will happen periodically and the oldest cached data will be deleted.
  • If you send SIGHUP to the mount.objectivefs program, it will start the disk cache clean-up cycle.
  • One way to warm a new disk cache is to rsync the content from an existing disk cache.

Live Rekeying

ObjectiveFS supports live rekeying which lets you update your AWS keys while keeping your filesystem mounted. With live rekeying, you don’t need to unmount and remount your filesystem to change your AWS keys. ObjectiveFS supports both automatic and manual live rekeying.

Automatic Rekeying with IAM roles

If you have attached an AWS EC2 IAM role to your EC2 instance, you can set AWS_METADATA_HOST to 169.254.169.254 to automatically rekey. With this setting, you don’t need to use the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID environment variables.

Manual Rekeying

You can also manually rekey by updating the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID environment variables (and also AWS_SECURITY_TOKEN if used) and sending SIGHUP to mount.objectivefs. The running objectivefs program (i.e. mount.objectivefs) will automatically reload and pick up the updated keys.
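A minimal sketch of this manual sequence, assuming an environment directory at the placeholder path ofs.env with placeholder key values (real setups typically use /etc/objectivefs.env):

```shell
# Manual rekeying sketch: write the new keys, then signal the mount to reload.
# The path and key values are placeholders.
env=ofs.env
mkdir -p "$env"
printf 'NEWACCESSKEY\n' > "$env/AWS_ACCESS_KEY_ID"
printf 'NEWSECRETKEY\n' > "$env/AWS_SECRET_ACCESS_KEY"
# Ask the running mount.objectivefs process to pick up the updated keys:
pkill -HUP -x mount.objectivefs || true   # no-op if nothing is currently mounted
```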


Compaction

ObjectiveFS is a log-structured filesystem that uses an object store for storage. Compaction combines multiple small objects into a larger object and brings related data close together. It improves performance and reduces the number of object store operations needed for each filesystem access. Compaction is a background process and adjusts dynamically depending on your workload and your filesystem’s characteristics.

USAGE:
You can specify the compaction rate by setting the mount option when mounting your filesystem. Faster compaction increases bandwidth usage. For more about mount options, see this section.

  • nocompact - disable compaction
  • compact - enable regular compaction
  • compact,compact - enable faster compaction (uses more bandwidth)
  • compact,compact,compact - enable fastest compaction (uses most bandwidth)

DEFAULT:
Compaction is enabled by default. If the filesystem is mounted on an EC2 machine in the same region as your S3 bucket, compact,compact is the default. Otherwise, compact is the default.

EXAMPLES:
A. To enable faster compaction

$ sudo mount.objectivefs -o compact,compact myfs /ofs

B. To disable compaction

$ sudo mount.objectivefs -o nocompact myfs /ofs

TIPS:

  • You can find out the number of S3 objects for your ObjectiveFS filesystem in the IUsed column of df -i.
  • To increase the compaction rate of a filesystem, you can enable compaction on all mounts of that filesystem.
  • You can also set up temporary extra mounts with the fastest compaction option to increase the compaction rate.

Filesystem Pool

Business and Enterprise Plan Feature

Filesystem pool lets you have multiple file systems per bucket. Since AWS S3 has a limit of 100 buckets per account, you can use pools if you need lots of file systems.

A filesystem pool is a collection of regular filesystems to simplify the management of lots of filesystems. You can also use pools to organize your company’s file systems by teams or departments.

A file system in a pool is a regular file system. It has the same capabilities as a regular file system.

A pool is a top-level structure. This means that a pool can only contain file systems, and not other pools. Since a pool is not a filesystem, but a collection of filesystems, it cannot be mounted directly.

Reference: Managing Per-User Filesystems Using Filesystem Pool and IAM Policy

An example organization structure is:

 |
 |- myfs1                // one file system per bucket
 |- myfs2                // one file system per bucket
 |- mypool1 -+- /myfs1   // multiple file systems per bucket
 |           |- /myfs2   // multiple file systems per bucket
 |           |- /myfs3   // multiple file systems per bucket
 |
 |- mypool2 -+- /myfs1   // multiple file systems per bucket
 |           |- /myfs2   // multiple file systems per bucket
 |           |- /myfs3   // multiple file systems per bucket
 :

Create

To create a file system in a pool, use the regular create command with
<pool name>/<file system> as the filesystem argument.

sudo mount.objectivefs create [-l <region>] <pool>/<filesystem>

NOTE:

  • You don’t need to create a pool explicitly. A pool is automatically created when you create the first file system in this pool.
  • The file system will reside in the same region as the pool. Therefore, any subsequent file systems created in a pool will be in the same region, regardless of the -l <region> specification.

EXAMPLE:
A. Create an S3 file system in the default region (us-west-2)

# Assumption: your /etc/objectivefs.env contains S3 keys
$ sudo mount.objectivefs create s3://mypool/myfs

B. Create a GCS file system in the default region

# Assumption: your /etc/objectivefs.env contains GCS keys
$ sudo mount.objectivefs create -l EU gs://mypool/myfs

List

When you list your file systems, you can distinguish a pool in the KIND column. A file system inside a pool is listed with the pool prefix.

You can also list the file systems in a pool by specifying the pool name.

sudo mount.objectivefs list [<pool name>]

EXAMPLE:
A. In this example, there are two pools, myfs-pool and myfs-poolb. The file systems in each pool are listed with the pool prefix.

$ sudo mount.objectivefs list
NAME                        KIND      REGION
s3://myfs-1                 ofs       us-west-2
s3://myfs-2                 ofs       eu-central-1
s3://myfs-pool/             pool      us-west-2
s3://myfs-pool/myfs-a       ofs       us-west-2
s3://myfs-pool/myfs-b       ofs       us-west-2
s3://myfs-pool/myfs-c       ofs       us-west-2
s3://myfs-poolb/            pool      us-west-1
s3://myfs-poolb/foo         ofs       us-west-1

B. List all file systems under a pool, e.g. myfs-pool

$ sudo mount.objectivefs list myfs-pool
NAME                        KIND      REGION
s3://myfs-pool/             pool      us-west-2
s3://myfs-pool/myfs-a       ofs       us-west-2
s3://myfs-pool/myfs-b       ofs       us-west-2
s3://myfs-pool/myfs-c       ofs       us-west-2

Mount

To mount a file system in a pool, use the regular mount command with
<pool name>/<file system> as the filesystem argument.

Run in background:

sudo mount.objectivefs [-o <opt>[,<opt>]..] <pool>/<filesystem> <dir>

Run in foreground:

sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <pool>/<filesystem> <dir>

EXAMPLES:
A. Mount an S3 file system and run the process in background

$ sudo mount.objectivefs s3://myfs-pool/myfs-a /ofs

B. Mount a GCS file system with a different env directory, e.g. /home/tom/.ofs_gs.env and run the process in foreground

$ sudo mount.objectivefs mount -o env=/home/tom/.ofs_gs.env gs://mypool/myfs /ofs

Unmount

Same as the regular unmount command (see Unmount).

Destroy

To destroy a file system in a pool, use the regular destroy command with
<pool name>/<file system> as the filesystem argument.

sudo mount.objectivefs destroy <pool>/<filesystem>

NOTE:

  1. You can destroy a file system in a pool, and other file systems within the pool will not be affected.
  2. A pool can only be destroyed if it is empty.

EXAMPLE:
Destroying an S3 file system in a pool

$ sudo mount.objectivefs destroy s3://myfs-pool/myfs-a  
*** WARNING ***
The filesystem 's3://myfs-pool/myfs-a' will be destroyed. All data (550 MB) will be lost permanently!
Continue [y/n]? y
Authorization code: <your authorization code>

Multithreading

Business and Enterprise Plan Feature (Linux only)

DESCRIPTION:
Multithreading is a performance feature that can lower latency and improve throughput for your workload. ObjectiveFS will spawn dedicated CPU and IO threads to handle operations such as data decompression, data integrity check, disk cache accesses and updates.

USAGE:
Multithreading mode can be enabled using the mt mount option, which sets the dedicated CPU threads to 4 and the dedicated IO threads to 8. You can also explicitly specify the number of dedicated CPU and IO threads using the cputhreads and iothreads mount options, or disable multithreading with the nomt mount option.

Mount option         Description
-o mt                sets cputhreads to 4 and iothreads to 8
-o cputhreads=<N>    sets the number of dedicated CPU threads to N (min: 0, max: 128)
-o iothreads=<N>     sets the number of dedicated IO threads to N (min: 0, max: 128)
-o nomt              sets cputhreads and iothreads to 0

DEFAULT VALUE:
By default, there are 2 dedicated IO threads and no dedicated CPU threads.

EXAMPLE:

A. Enable default multithreading option (4 cputhreads, 8 iothreads)

$ sudo mount.objectivefs -o mt <filesystem> <dir> 

B. Set CPU threads to 8 and IO threads to 16

$ sudo mount.objectivefs -o cputhreads=8,iothreads=16 <filesystem> <dir> 

C. Example fstab entry to enable multithreading

s3://<filesystem> <dir> objectivefs auto,_netdev,mt 0 0

HTTP Proxy

Business and Enterprise Plan Feature

DESCRIPTION:
You can run ObjectiveFS with an http proxy to connect to your object store. A common use case is to connect ObjectiveFS to the object store via a squid caching proxy.

USAGE:
Set the http_proxy environment variable to the proxy server’s address (see environment variables section for how to set environment variables).

DEFAULT VALUE:
If the http_proxy environment variable is not set, this feature is disabled by default.

EXAMPLE:

Mount a filesystem (e.g. s3://myfs) with an http proxy running locally on port 3128:

$ sudo http_proxy=http://localhost:3128 mount.objectivefs mount myfs /ofs

Alternatively, you can set http_proxy in your /etc/objectivefs.env directory:

$ ls /etc/objectivefs.env
AWS_ACCESS_KEY_ID          OBJECTIVEFS_PASSPHRASE
AWS_SECRET_ACCESS_KEY      http_proxy
OBJECTIVEFS_LICENSE 

$ cat /etc/objectivefs.env/http_proxy
http://localhost:3128
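Each setting in the environment directory is just a file whose name is the variable and whose content is the value, so it can be created with a couple of shell commands. A minimal sketch (a scratch directory is used here so it runs anywhere; on a real host you would write into /etc/objectivefs.env and restrict its permissions):

```shell
# Write the http_proxy setting as a one-line file in the env directory.
# Scratch directory here; substitute /etc/objectivefs.env on a real host.
envdir=$(mktemp -d)
printf 'http://localhost:3128\n' > "$envdir/http_proxy"
cat "$envdir/http_proxy"   # prints: http://localhost:3128
```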

Local License Check

Enterprise Plan Feature

DESCRIPTION:
While our regular license check is very robust and can handle multi-day outages, some companies prefer to minimize external dependencies. For these cases, we offer a local license check feature that lets you run your infrastructure independent of any license server.

USAGE:
Please talk with your enterprise support contact for instructions on how to enable the local license check on your account.


Admin Mode

Business and Enterprise Plan Feature

DESCRIPTION:
The admin mode provides an easy way to manage many filesystems programmatically, e.g. to script the creation of many filesystems.

The admin mode lets admins create filesystems without the interactive passphrase confirmations. To destroy a filesystem, admins only need to provide a ‘y’ confirmation and don’t need an authorization code. Admins can list filesystems, just like a regular user. However, admins are not permitted to mount a filesystem, keeping the admin and user functionality separate.

Operation   User Mode                                   Admin Mode
Create      Needs passphrase confirmation               No passphrase confirmation needed
List        Allowed                                     Allowed
Mount       Allowed                                     Not allowed
Destroy     Needs authorization code and confirmation   Only confirmation needed

USAGE:
Business and enterprise account users have an admin license key, in addition to their regular license key. Please contact support@objectivefs.com for this key.

To use admin mode, we recommend creating an admin-specific objectivefs environment directory, e.g. /etc/objectivefs.admin.env. Please use your admin license key for OBJECTIVEFS_LICENSE.

$ ls /etc/objectivefs.admin.env/
AWS_ACCESS_KEY_ID      AWS_SECRET_ACCESS_KEY  
OBJECTIVEFS_LICENSE    OBJECTIVEFS_PASSPHRASE
$ cat /etc/objectivefs.admin.env/OBJECTIVEFS_LICENSE
your_admin_license_key

You can have a separate user objectivefs environment directory, e.g. /etc/objectivefs.<user>.env, for each user to mount their individual filesystems.

EXAMPLES:

A. Create a filesystem in admin mode with credentials in /etc/objectivefs.admin.env

$ sudo OBJECTIVEFS_ENV=/etc/objectivefs.admin.env mount.objectivefs create myfs

B. Mount the filesystem as user tom in the background

$ sudo OBJECTIVEFS_ENV=/etc/objectivefs.tom.env mount.objectivefs myfs /ofs
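Because admin mode skips the interactive passphrase confirmation, creating a batch of filesystems can be scripted with a simple loop. A sketch (the filesystem names are hypothetical; the leading echo makes the loop only print the commands, so it can be previewed safely — remove it to execute):

```shell
# Preview the admin-mode create commands for a batch of filesystems.
# Remove the leading "echo" to actually create them.
for fs in myfs-a myfs-b myfs-c; do
    echo sudo OBJECTIVEFS_ENV=/etc/objectivefs.admin.env mount.objectivefs create "$fs"
done
```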

UID/GID Mapping

Business and Enterprise Plan Feature

DESCRIPTION:
This feature lets you map local user ids and group ids to different ids in the remote filesystem. The id mappings should be 1-to-1, i.e. a single local id should only be mapped to a single remote id, and vice versa. If multiple ids are mapped to the same id, the behavior is undefined.

When a uid is remapped and U* is not specified, all other unspecified uids will be mapped to the default uid: 65534 (aka nobody/nfsnobody). Similarly, all unspecified gids will be mapped to the default gid (65534) if a gid is remapped and G* is not specified.

USAGE:

IDMAP="<Mapping>[:<Mapping>]"
  where Mapping is:
    U<local id or name> <remote id>
    G<local id or name> <remote id>
    U* <default id>
    G* <default id>

Mapping Format
A. Single User Mapping:    U<local id or name> <remote id>
Maps a local user id or local user name to a remote user id.

B. Single Group Mapping:    G<local id or name> <remote id>
Maps a local group id or local group name to a remote group id.

C. Default User Mapping:    U* <default id>
Maps all unspecified local and remote user ids to the default id. If this mapping is not specified, all unspecified user ids will be mapped to uid 65534 (aka nobody/nfsnobody).

D. Default Group Mapping:    G* <default id>
Maps all unspecified local and remote group ids to the default id. If this mapping is not specified, all unspecified group ids will be mapped to gid 65534 (aka nobody/nfsnobody).

EXAMPLES:
A. UID mapping only

IDMAP="U600 350:Uec2-user 400:U* 800"
  • Local uid 600 is mapped to remote uid 350, and vice versa
  • Local ec2-user is mapped to remote uid 400, and vice versa
  • All other local uids are mapped to remote uid 800
  • All other remote uids are mapped to local uid 800
  • Group IDs are not remapped

B. GID mapping only

IDMAP="G800 225:Gstaff 400"
  • Local gid 800 is mapped to remote gid 225, and vice versa
  • Local group staff is mapped to remote gid 400, and vice versa
  • All other local gids are mapped to remote gid 65534 (aka nobody/nfsnobody)
  • All other remote gids are mapped to local gid 65534 (aka nobody/nfsnobody)
  • User IDs are not remapped

C. UID and GID mapping

IDMAP="U600 350:G800 225"
  • Local uid 600 is mapped to remote uid 350, and vice versa
  • Local gid 800 is mapped to remote gid 225, and vice versa
  • All other local uids and gids are mapped to remote id 65534 (aka nobody/nfsnobody)
  • All other remote uids and gids are mapped to local id 65534 (aka nobody/nfsnobody)
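Like the other settings, the mapping can be stored as a file in the environment variable directory so every mount picks it up (this assumes IDMAP is configured the same way as the other environment variables). A sketch using the mapping from example C (scratch directory here; on a real host the file would live in your env directory, e.g. /etc/objectivefs.env):

```shell
# Store the UID/GID mapping from example C as a one-line env-directory file.
# Scratch directory here; substitute your real env directory.
envdir=$(mktemp -d)
printf 'U600 350:G800 225\n' > "$envdir/IDMAP"
cat "$envdir/IDMAP"   # prints: U600 350:G800 225
```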

Server-Side Encryption

Enterprise Plan Feature

DESCRIPTION:
ObjectiveFS supports AWS server-side encryption using Amazon S3-Managed Keys (SSE-S3) and AWS KMS-Managed Keys (SSE-KMS).

USAGE:
Use the AWS_SERVER_SIDE_ENCRYPTION environment variable (see environment variables section for how to set environment variables).

The AWS_SERVER_SIDE_ENCRYPTION environment variable can be set to:

  • AES256  (for Amazon S3-Managed Keys (SSE-S3))
  • aws:kms  (for AWS KMS-Managed Keys (SSE-KMS) with default key)
  • <your kms key>  (for AWS KMS-Managed Keys (SSE-KMS) with the keys you create and manage)

REQUIREMENT:
To run SSE-KMS, stunnel is required. See the following guide for setup instructions.

EXAMPLES:

A. Create a filesystem called myfs with Amazon S3-Managed Keys (SSE-S3)

$ sudo AWS_SERVER_SIDE_ENCRYPTION=AES256 mount.objectivefs create myfs

B. Create a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS)
Note: make sure stunnel is running. See setup instructions.

$ sudo http_proxy=http://localhost:8086 AWS_SERVER_SIDE_ENCRYPTION=aws:kms mount.objectivefs create myfs

C. Mount a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS) using the default key
Note: make sure stunnel is running. See setup instructions.

$ sudo http_proxy=http://localhost:8086 AWS_SERVER_SIDE_ENCRYPTION=aws:kms mount.objectivefs myfs /ofs

D. Mount a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS) using a specific key
Note: make sure stunnel is running. See setup instructions.

$ sudo http_proxy=http://localhost:8086 AWS_SERVER_SIDE_ENCRYPTION=<your aws kms key> mount.objectivefs myfs /ofs

Logging

Log information is printed to the terminal when running in the foreground, and is sent to syslog when running in the background. On OSX, the log is typically at /var/log/system.log. On Linux, the log is typically at /var/log/messages or /var/log/syslog.

Below is a list of common log messages. For error messages, please see the troubleshooting section.
A. Initial mount message
SUMMARY: The message logged every time an ObjectiveFS filesystem is mounted.

FORMAT:

objectivefs starting [<fuse version>, <region>, <endpoint>, <cachesize>, <disk cache setting>]

DESCRIPTION:

<fuse version>
The fuse protocol version that the kernel uses
<region>
The region where your S3 or GCS bucket resides
<endpoint>
The endpoint used to access your S3 or GCS bucket, typically determined by the region
<cachesize>
The maximum size used for memory cache
<disk cache setting>
Indicates whether disk cache is on or off

EXAMPLE:
objectivefs starting [fuse version 7.22, region us-west-2, endpoint http://s3-us-west-2.amazonaws.com, cachesize 753MB, diskcache off]

B. Regular log message
SUMMARY: The message logged while your filesystem is active. It shows the cumulative number of S3 operations and bandwidth usage since the initial mount message.

FORMAT:

<put> <list> <get> <delete> <bandwidth in> <bandwidth out>

DESCRIPTION:

<put> <list> <get> <delete>
For this mount on this machine, the total number of put, list, get and delete operations to S3 or GCS since the filesystem was mounted. These counters start at zero each time the filesystem is mounted.
<bandwidth in> <bandwidth out>
For this mount on this machine, the total incoming and outgoing bandwidth since the filesystem was mounted. These counters start at zero each time the filesystem is mounted.

EXAMPLE:
1403 PUT, 571 LIST, 76574 GET, 810 DELETE, 5.505 GB IN, 5.309 GB OUT
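Because the regular log line is comma-separated, the counters are easy to pull out for monitoring. A small sketch against the sample line above (the field positions are taken from the format shown here, not from a published schema):

```shell
# Extract the cumulative GET counter from a regular log line.
line='1403 PUT, 571 LIST, 76574 GET, 810 DELETE, 5.505 GB IN, 5.309 GB OUT'
echo "$line" | awk -F', ' '{print $3}'   # prints: 76574 GET
```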

C. Caching Statistics
SUMMARY: Caching statistics are part of the regular log message starting in ObjectiveFS v4.2. These statistics can be useful for tuning memory and disk cache sizes for your workload.

FORMAT:

CACHE [<cache hit> <metadata> <data> <os>], DISK [<hit>]

DESCRIPTION:

<cache hit>
Percentage of total requests that hit in the memory cache (cumulative)
<metadata>
Percentage of metadata requests that hit in the memory cache (cumulative)
<data>
Percentage of data requests that hit in the memory cache (cumulative)
<os>
Amount of cached data referenced by the OS at the current time
DISK [<hit>]
Percentage of disk cache requests that hit in the disk cache (cumulative)

EXAMPLE:
CACHE [74.9% HIT, 94.1% META, 68.1% DATA, 1.781 GB OS], DISK [99.0% HIT]

D. Error messages
SUMMARY: Error response from S3 or GCS

FORMAT:

retrying <operation> due to <endpoint> response: <S3/GCS response> [x-amz-request-id:<amz-id>, x-amz-id-2:<amz-id2>]

DESCRIPTION:

<operation>
The PUT, GET, LIST or DELETE operation that encountered the error
<endpoint>
The endpoint used to access your S3 or GCS bucket, typically determined by the region
<S3/GCS response>
The error response from S3 or GCS
<amz-id>
S3 only: the unique request ID from Amazon S3 for the request that encountered the error. This ID can help Amazon troubleshoot the problem.
<amz-id2>
S3 only: the corresponding token from Amazon S3 for the request that encountered the error. Used for troubleshooting.

EXAMPLE:
retrying GET due to s3-us-west-2.amazonaws.com response: 500 Internal Server Error, InternalError, x-amz-request-id:E854A4F04A83C125, x-amz-id-2:Zad39pZ2mkPGyT/axl8gMX32nsVn


Relevant Files

/etc/objectivefs.env
Default ObjectiveFS environment variable directory
/etc/resolv.conf
Recursive name resolvers from this file are used unless the DNSCACHEIP environment variable is set.
/var/log/messages
ObjectiveFS output log location on certain Linux distributions (e.g. RedHat) when running in the background.
/var/log/syslog
ObjectiveFS output log location on certain Linux distributions (e.g. Ubuntu) when running in the background.
/var/log/system.log
Default ObjectiveFS output log location on OSX when running in the background.

Troubleshooting

Initial Setup

403 Permission denied
Your S3 keys do not have permissions to access S3. Check that your user keys are added to a group with all S3 permissions.
./mount.objectivefs: Permission denied
mount.objectivefs needs executable permissions set. Run chmod +x mount.objectivefs

During Operation

Transport endpoint is not connected
The ObjectiveFS process was killed. The most common cause is memory usage and the OOM killer. Please see the Memory Optimization Guide for how to optimize memory usage.
Large delay for writes from one machine to appear at other machines
  1. Check that the time on these machines is synchronized. Please verify NTP has a small offset (<1 sec).
    To adjust the clock:
    On Linux: run /usr/sbin/ntpdate pool.ntp.org.
    On OSX: System Preferences → Date & Time → Set date and time automatically
  2. Check for any S3/GCS error responses in the log file
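Error responses can be filtered out of the log with grep. A demo on sample lines (on a real host you would grep the syslog path for your distro, e.g. /var/log/messages or /var/log/syslog):

```shell
# Keep only the S3/GCS retry/error lines from a stream of log messages.
printf '%s\n' \
  'objectivefs: 1403 PUT, 571 LIST, 76574 GET, 810 DELETE' \
  'objectivefs: retrying GET due to s3-us-west-2.amazonaws.com response: 500 Internal Server Error' \
  | grep 'retrying'
```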
RequestTimeTooSkewed: The difference between the request time and the current time is too large.
The clock on your machine is too fast or too slow. To adjust the clock:
On Linux: run /usr/sbin/ntpdate pool.ntp.org.
On OSX: System Preferences → Date & Time → Set date and time automatically
Checksum error, bad cryptobox
The checksum error occurs when our end-to-end data integrity checker detects that the data you stored on S3 differs from the data received when it is read back. Two common causes are:
1. Your S3/GCS bucket contains non-ObjectiveFS objects. Since ObjectiveFS is a log-structured filesystem that uses the object store for storage, it expects to fully manage the content of the bucket. Copying non-ObjectiveFS files directly into the S3/GCS bucket will cause the end-to-end data integrity check to fail with this error.
To fix this, move the non-ObjectiveFS objects out of this bucket.
2. You may be running behind a firewall/proxy that modifies the data in transit. Please contact support@objectivefs.com for the workaround.
Ratelimit delay
ObjectiveFS has a built-in request rate limiter to prevent runaway programs from running up your S3 bill. The limit starts at 25 million get requests and 1 million each for put and list requests. It is implemented as a leaky bucket with a fill rate of 10 million get requests and 1 million put/list requests per day, and is reset upon mount.
To explicitly disable the rate-limiter, you can use the noratelimit mount option.
Filesystem format is too new
One likely cause of this error is when your S3/GCS bucket contains non-ObjectiveFS objects. Since ObjectiveFS is a log-structured filesystem that uses the object store for storage, it expects to fully manage the content of the bucket. Copying non-ObjectiveFS files directly into the S3/GCS bucket will cause this error to occur.
To fix this error, move the non-ObjectiveFS objects out of this bucket.

Unmount

Resource busy during unmount
Either a directory or a file in the file system is still being accessed. Verify that nothing is using the file system anymore, e.g. a shell whose working directory is inside the mount point or a process holding a file open.

Best Practices

Backups

As with all filesystems, please keep backups of everything important.

Passphrase

Pick a strong passphrase (e.g. five or six random dictionary words), write it down and store it somewhere safe.
IMPORTANT: Without the passphrase, there is no way to recover any files.

Upgrading To A New Release

ObjectiveFS is forward and backward compatible. Upgrading or downgrading to a different release is straightforward. You can also do rolling upgrades for multiple servers. To upgrade: install the new version, unmount and then remount your filesystem. (reference: how to upgrade to a new ObjectiveFS version)


Questions

Don’t hesitate to give us a call at +1-415-997-9967, or send us an email at support@objectivefs.com.