
Configuring Sharding (MongoDB Sharding)


 


 

 

http://www.mongodb.org/display/DOCS/A+Sample+Configuration+Session

Theory: http://www.mongodb.org/display/DOCS/Configuring+Sharding

 

A Sample Configuration Session

The following example uses two shards (each with a single mongod process), one config db, and one mongos process, all running on a single test server. In addition to the script below, a Python script for starting and configuring shard components on a single machine is available.

Creating the Shards

First, start up a couple of mongods to be your shards.

$ mkdir /data/db/a /data/db/b
$ ./mongod --shardsvr --dbpath /data/db/a --port 10000 > /tmp/sharda.log &
$ cat /tmp/sharda.log
$ ./mongod --shardsvr --dbpath /data/db/b --port 10001 > /tmp/shardb.log &
$ cat /tmp/shardb.log

Now you need a configuration server and mongos:

$ mkdir /data/db/config
$ ./mongod --configsvr --dbpath /data/db/config --port 20000 > /tmp/configdb.log &
$ cat /tmp/configdb.log
$ ./mongos --configdb localhost:20000 > /tmp/mongos.log &
$ cat /tmp/mongos.log

mongos does not require a data directory; it gets its information from the config server.

In a real production setup, the mongod, mongos, and config server processes would run on different machines, and the use of hostnames or IP addresses is mandatory in that case. The use of 'localhost' here is merely illustrative, though fully functional, and should be confined to single-machine test scenarios only.

You can experiment with sharding by using a small --chunkSize, e.g. 1 MB. This is more satisfying when you're playing around, as you won't have to insert 64 MB of documents before you start seeing chunks move around. It should not be used in production.

$ ./mongos --configdb localhost:20000 --chunkSize 1 > /tmp/mongos.log &

Setting up the Cluster

We need to run a few commands on the shell to hook everything up. Start the shell, connecting to the mongos process (at localhost:27017 if you followed the steps above).

To set up our cluster, we'll add the two shards (a and b).

$ ./mongo
MongoDB shell version: 1.6.0
connecting to: test
> use admin
switched to db admin
> db.runCommand( { addshard : "localhost:10000" } )
{ "shardAdded" : "shard0000", "ok" : 1 }
> db.runCommand( { addshard : "localhost:10001" } )
{ "shardAdded" : "shard0001", "ok" : 1 }

Now you need to tell the database that you want to spread out your data at a database and collection level. You have to give the collection a key (or keys) to partition by.
This is similar to creating an index on a collection.

> db.runCommand( { enablesharding : "test" } )
{"ok" : 1}
> db.runCommand( { shardcollection : "test.people", key : {name : 1} } )
{"ok" : 1}

Administration

To see what's going on in the cluster, use the config database.

> use config
switched to db config
> show collections
chunks
databases
lockpings
locks
mongos
settings
shards
system.indexes
version

These collections contain all of the sharding configuration information.
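To make the role of these collections concrete, here is an illustrative sketch (not MongoDB's actual routing code) of how a mongos-style router could use chunk ranges, as stored in the chunks collection, to decide which shard owns a given shard-key value. The chunk table below is hypothetical; ranges are min-inclusive and max-exclusive, with null standing in for MinKey/MaxKey.

```javascript
// Illustrative sketch only: route a shard-key value to a shard by
// scanning min/max chunk ranges (min-inclusive, max-exclusive).
function findChunk(chunks, keyValue) {
  for (var i = 0; i < chunks.length; i++) {
    var c = chunks[i];
    var aboveMin = (c.min === null) || (keyValue >= c.min);
    var belowMax = (c.max === null) || (keyValue < c.max);
    if (aboveMin && belowMax) return c.shard;
  }
  return null; // no chunk covers this key
}

// Hypothetical chunk table for test.people, sharded on { name : 1 }.
var chunks = [
  { min: null, max: "m",  shard: "shard0000" },
  { min: "m",  max: null, shard: "shard0001" }
];

findChunk(chunks, "alice");  // "shard0000"
findChunk(chunks, "zoe");    // "shard0001"
```

In a real cluster, mongos caches this metadata from the config servers and refreshes it when chunks move.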

Theory: http://www.mongodb.org/display/DOCS/Configuring+Sharding

 

 

Configuring Sharding


This document describes the steps involved in setting up a basic sharding cluster. A sharding cluster has three components:

1. One to 1000 shards. Shards are partitions of data. Each shard consists of one or more mongod processes which store the data for that shard. When multiple mongods are in a single shard, they are each storing the same data, that is, they are replicating to each other.
2. Either one or three config server processes. For production systems use three.
3. One or more mongos routing processes.

For testing purposes, it's possible to start all the required processes on a single server, whereas in a production situation a number of server configurations are possible.

Once the shards (mongods), config servers, and mongos processes are running, configuration is simply a matter of issuing a series of commands to establish the various shards as being part of the cluster. Once the cluster has been established, you can begin sharding individual collections.

This document is fairly detailed; for a terse, code-only explanation, see the sample shard configuration. If you'd like a quick script to set up a test cluster on a single machine, we have a Python sharding script that can do the trick.

Sharding Components

First, start the individual shards (mongods), config servers, and mongos processes.

Shard Servers

A shard server consists of a mongod process or a replica set of mongod processes. For production, use a replica set for each shard for data safety and automatic failover. To get started with a simple test, we can run a single mongod process per shard, as a test configuration doesn't demand automated failover.

Config Servers

Run a mongod --configsvr process for each config server. If you're only testing, you can use only one config server. For production, use three.

Note: Replication of data to each config server is managed by the router (mongos); the config servers use a synchronous replication protocol optimized for three machines, which is why three is the recommended number. Do not run any of the config servers with --replSet; replication between them is automatic.

Note: As the metadata of a MongoDB cluster is fairly small, it is possible to run the config server processes on boxes also used for other purposes.

mongos Router

Run mongos on the servers of your choice. Specify the --configdb parameter to indicate the location of the config database(s). Note: use DNS names, not IP addresses, for the --configdb parameter's value; otherwise moving config servers later is difficult.

Note that each mongos will read from the first config server in the list provided. If you're running config servers across more than one data center, you should put the closest config servers early in the list.

Configuring the Shard Cluster

Once the shard components are running, issue the sharding commands. You may want to automate or record your steps below in a .js file for replay in the shell when needed.

Start by connecting to one of the mongos processes, and then switch to the admin database before issuing any commands.

The mongos will route commands to the right machine(s) in the cluster and, if commands change metadata, the mongos will update that on the config servers. So, regardless of the number of mongos processes you've launched, you only need to run these commands on one of those processes.

You can connect to the admin database via mongos like so:

./mongo <mongos-hostname>:<mongos-port>/admin
> db
admin
Adding shards

Each shard can consist of more than one server (see replica sets); however, for testing, only a single server with one mongod instance need be used.

You must explicitly add each shard to the cluster's configuration using the addshard command:

> db.runCommand( { addshard : "<serverhostname>[:<port>]" } );
{"ok" : 1 , "added" : ...}

Run this command once for each shard in the cluster.

If the individual shards consist of replica sets, they can be added by specifying replicaSetName/<serverhostname>[:port][,serverhostname2[:port],...], where at least one server in the replica set is given.

> db.runCommand( { addshard : "foo/<serverhostname>[:<port>]" } );
{"ok" : 1 , "added" : "foo"}

Any databases and collections that already existed in the mongod/replica set will be incorporated into the cluster. Those databases will have that mongod/replica set as their "primary" host, and their collections will not be sharded (but you can shard them later by issuing a shardCollection command).

Optional Parameters

name
Each shard has a name, which can be specified using the name option. If no name is given, one will be assigned automatically.
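For example, a sketch of the name option in use; the host and shard name here are hypothetical, and the call assumes a running cluster:

```javascript
> db.runCommand( { addshard : "localhost:10000", name : "shard_a" } )
{ "shardAdded" : "shard_a", "ok" : 1 }
```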

maxSize
The addshard command accepts an optional maxSize parameter. This parameter lets you tell the system the maximum amount of disk space, in megabytes, to use on the specified shard. If unspecified, the system will use the entire disk. maxSize is useful when you have machines with different disk capacities or when you want to prevent storage of too much data on a particular shard.

As an example:

> db.runCommand( { addshard : "sf103", maxSize:100000/*MB*/ } );
Listing shards

To see the current set of configured shards, run the listshards command:

> db.runCommand( { listshards : 1 } );

This way, you can verify that all the shards have been committed to the system.

Removing a shard

See the removeshard command.

Enabling Sharding on a Database

In versions prior to v2.0, dropping a sharded database causes issues; see SERVER-2253 for a workaround.

Once you've added one or more shards, you can enable sharding on a database. Unless sharding is enabled, all data in the database will be stored on the same shard. After enabling it, you then need to run shardCollection on the relevant collections (i.e., the big ones).

> db.runCommand( { enablesharding : "<dbname>" } );

Once enabled, mongos will place new collections on the primary shard for that database. Existing collections within the database will stay on the original shard. To enable partitioning of data, we have to shard an individual collection.

Sharding a Collection

When sharding a collection, "pre-splitting", that is, setting a seed set of key ranges, is recommended. Without a seed set of ranges, sharding still works, but the system must learn the key distribution, and performance is not as high during that time. The pre-splits do not have to be particularly accurate; the system will adapt to the actual key distribution of the data regardless.
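As a minimal pre-split sketch, assuming an empty test.people collection sharded on name, one could seed ranges with the split command; the boundary values here are hypothetical and should reflect your expected key distribution (run against the admin database, with a running cluster):

```javascript
> db.runCommand( { shardcollection : "test.people", key : { name : 1 } } )
> db.runCommand( { split : "test.people", middle : { name : "h" } } )
> db.runCommand( { split : "test.people", middle : { name : "p" } } )
```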

Use the shardcollection command to shard a collection. When you shard a collection, you must specify the shard key. If there is data in the collection, MongoDB will require an index on the shard key to be created upfront (it speeds up the chunking process); otherwise, an index will be automatically created for you.

 
> db.runCommand( { shardcollection : "<namespace>",
                   key : <shardkeypatternobject> });
Running the "shardcollection" command will mark the collection as sharded with a specific key. Once called, there is currently no way to disable sharding or change the shard key, even if all the data is still contained within the same shard. It is assumed that the data may already be spread around the shards. If you need to "unshard" a collection, drop it (of course making a backup of data if needed), and recreate the collection (loading the backup data).

For example, let's assume we want to shard a GridFS chunks collection stored in the test database. We'd want to shard on the files_id key, so we'd invoke the shardcollection command like so:

 > db.runCommand( { shardcollection : "test.fs.chunks", key : { files_id : 1 } } )
{ "collectionsharded" : "test.fs.chunks", "ok" : 1 }

You can use the {unique: true} option to ensure that the underlying index enforces uniqueness, so long as the unique index is a prefix of the shard key. (Note: prior to version 2.0 this worked only if the collection was empty.)

db.runCommand( { shardcollection : "test.users" , key : { email : 1 } , unique : true } );

If the "unique: true" option isnotused, the shard key does not have to be unique.

db.runCommand( { shardcollection : "test.products" , key : { category : 1, _id : 1 } } );

You can shard on multiple fields if you are using a compound index.

In the end, picking the right shard key for your needs is extremely important for successful sharding. See Choosing a Shard Key.


Procedure

 

Complete this procedure by connecting to any mongos in the cluster using the mongo shell.

You can only remove a shard by its shard name. To discover or confirm the name of a shard, use the listshards or printShardingStatus commands or the sh.status() shell helper.

The following example will remove the shard named mongodb0.

Note

To successfully migrate data from a shard, the balancer process must be active. Check the balancer state using the sh.getBalancerState() helper in the mongo shell. See the section on balancer operations for more information.

Remove Chunks from the Shard

Start by running the removeShard command. This begins "draining" chunks from the shard you are removing.

db.runCommand( { removeshard: "mongodb0" } )

This operation will return a response immediately. For example:

{ msg : "draining started successfully" , state: "started" , shard : "mongodb0" , ok : 1 }

Depending on your network capacity and the amount of data in your cluster, this operation can take anywhere from a few minutes to several days to complete.

Check the Status of the Migration

You can run removeShard again at any stage of the process to check the progress of the migration, as follows:

db.runCommand( { removeshard: "mongodb0" } )

The output will resemble the following document:

{ msg: "draining ongoing" , state: "ongoing" , remaining: { chunks: 42, dbs : 1 }, ok: 1 }

In the remaining subdocument, a counter displays the remaining number of chunks that MongoDB must migrate to other shards, and the number of MongoDB databases that have "primary" status on this shard.

Continue checking the status of the removeshard command until the number of chunks remaining is 0; then you can proceed to the next step.
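The "is draining done?" check can be sketched as a small helper. This function is illustrative only (not a MongoDB API); it inspects a removeshard response shaped like the ones shown above and reports whether all chunks have been drained:

```javascript
// Illustrative helper (not a MongoDB API): given a removeshard response,
// decide whether chunk draining is finished (remaining.chunks === 0,
// or the removal has already completed).
function drainingFinished(res) {
  if (res.state === "completed") return true;
  return res.remaining !== undefined && res.remaining.chunks === 0;
}

// Sample responses shaped like the ones shown above:
drainingFinished({ msg: "draining ongoing", state: "ongoing",
                   remaining: { chunks: 42, dbs: 1 }, ok: 1 });  // false
drainingFinished({ msg: "draining ongoing", state: "ongoing",
                   remaining: { chunks: 0, dbs: 1 }, ok: 1 });   // true
```

In an actual mongo shell session you would call db.runCommand({ removeshard: "..." }) in a loop, with sleep() between polls, until this condition holds; note that remaining.dbs > 0 still requires the movePrimary step below before finalizing.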

Move Unsharded Databases

Databases with non-sharded collections store these collections on a single shard, known as the “primary” shard for that database. The following step is necessary only when the shard you want to remove is also the “primary” shard for one or more databases.

Issue the following command in the mongo shell:

db.runCommand( { movePrimary: "myapp", to: "mongodb1" })

This command will migrate all remaining non-sharded data in the database named myapp to the shard named mongodb1.

Warning

Do not run movePrimary until you have finished draining the shard.

This command will not return until MongoDB completes moving all data, which may take a long time. The response from this command will resemble the following:

{ "primary" : "mongodb1", "ok" : 1 }

Finalize the Migration

Run removeShard again to clean up all metadata information and finalize the removal, as follows:

db.runCommand( { removeshard: "mongodb0" } )

When successful, the response will be the following:

{ msg: "remove shard completed successfully" , state: "completed", host: "mongodb0", ok : 1 }

When the value of "state" is "completed", you may safely stop the mongodb0 shard.

 
