Local pool plugin
=================

The *localpool* plugin can automatically create RADOS pools that are
localized to a subset of the overall cluster. For example, by default it
will create a pool for each distinct rack in the cluster. This can be
useful for deployments that want to distribute some data locally as well
as globally across the cluster.

Enabling
--------

The *localpool* module is enabled with::

  ceph mgr module enable localpool

Configuring
-----------

The *localpool* module understands the following options:

* **subtree** (default: `rack`): which CRUSH subtree type the module
  should create a pool for.
* **failure_domain** (default: `host`): what failure domain we should
  separate data replicas across.
* **pg_num** (default: `128`): number of PGs to create for each pool.
* **num_rep** (default: `3`): number of replicas for each pool.
  (Currently, pools are always replicated.)
* **min_size** (default: none): value to set min_size to (unchanged from
  Ceph's default if this option is not set).
* **prefix** (default: `by-$subtreetype-`): prefix for the pool name.

These options are set via the config-key interface. For example, to
change the replication level to 2x with only 64 PGs::

  ceph config-key set mgr/localpool/num_rep 2
  ceph config-key set mgr/localpool/pg_num 64
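
The pool-naming scheme implied by the ``prefix`` option can be sketched in
shell (an illustration only, not the module's code; the rack names here are
hypothetical)::

  # The default prefix is "by-$subtreetype-", so with subtree=rack each
  # generated pool is expected to be named after its rack.
  subtree_type="rack"
  prefix="by-${subtree_type}-"
  for rack in rack1 rack2; do   # hypothetical rack names
      echo "${prefix}${rack}"   # pool name the module would create
  done
  # prints:
  #   by-rack-rack1
  #   by-rack-rack2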
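
Similarly, combining the options above (an illustrative fragment — it needs a
running cluster, and the chosen values are examples, not recommendations),
pools could be localized per host rather than per rack, with data replicas
separated across OSDs::

  ceph config-key set mgr/localpool/subtree host
  ceph config-key set mgr/localpool/failure_domain osd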