==================================================================================
 rbd-replay-many -- replay a rados block device (RBD) workload on several clients
==================================================================================
.. program:: rbd-replay-many

Synopsis
========

| **rbd-replay-many** [ *options* ] --original-image *name* *host1* [ *host2* [ ... ] ] -- *rbd_replay_args*
Description
===========

**rbd-replay-many** is a utility for replaying a rados block device (RBD) workload on several clients.
Although all clients use the same workload, they replay against separate images.
This matches normal use of librbd, where each original client is a VM with its own image.

Configuration and replay files are not automatically copied to clients.
Replay images must already exist.
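One way to pre-create the per-client replay images is to clone a snapshot of the
original image once per client. A sketch, assuming two clients and illustrative
image and snapshot names (``image``, ``snap``)::

    rbd snap create image@snap
    rbd snap protect image@snap
    rbd clone image@snap image-0
    rbd clone image@snap image-1

The clone names must match the names the clients will replay against (see
--image-prefix below).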
Options
=======

.. option:: --original-image name

   Specifies the name (and snap) of the originally traced image.
   Necessary for correct name mapping.
.. option:: --image-prefix prefix

   Prefix of image names to replay against.
   Specifying --image-prefix=foo results in clients replaying against foo-0, foo-1, etc.
   Defaults to the original image name.
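   For example, with two clients and --image-prefix=foo (host names are
   illustrative), each client is mapped to its own image::

      rbd-replay-many host-0 host-1 --original-image=image --image-prefix=foo -- -c ceph.conf replay.bin

   which executes::

      ssh host-0 'rbd-replay' --map-image 'image=foo-0' -c ceph.conf replay.bin
      ssh host-1 'rbd-replay' --map-image 'image=foo-1' -c ceph.conf replay.bin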
.. option:: --exec program

   Path to the rbd-replay executable.
.. option:: --delay seconds

   Delay between starting each client. Defaults to 0.
Examples
========

Typical usage::

    rbd-replay-many host-0 host-1 --original-image=image -- -c ceph.conf replay.bin

This results in the following commands being executed::

    ssh host-0 'rbd-replay' --map-image 'image=image-0' -c ceph.conf replay.bin
    ssh host-1 'rbd-replay' --map-image 'image=image-1' -c ceph.conf replay.bin
Availability
============

**rbd-replay-many** is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at http://ceph.com/docs for more information.
See also
========

:doc:`rbd-replay <rbd-replay>`\(8),