A single tool to manage your storage.
System Storage Manager provides an easy-to-use command-line interface to manage your storage using various technologies such as lvm, btrfs, encrypted volumes and more.
In more sophisticated enterprise storage environments, management with Device Mapper (dm), Logical Volume Manager (LVM), or Multiple Devices (md) is becoming increasingly difficult. With file systems added to the mix, the number of tools needed to configure and manage storage has grown so large that it is simply not user friendly. With so many options for a system administrator to consider, the opportunity for errors and problems is large.
The btrfs administration tools have shown us that storage management can be simplified, and we are working to bring that ease of use to Linux filesystems in general.
You can get System Storage Manager from the git repository on the SourceForge project page http://sourceforge.net/p/storagemanager/code/. There are two branches: master, which contains the stable release, and devel, where development happens. Once in a while the devel branch is merged into master, releasing a new version of System Storage Manager. Obviously the devel branch is more up-to-date, but it might not be as stable as the master branch. System Storage Manager has its own regression test suite, so we try hard not to break things that already work.
You can check out the master branch of the git repository:
git clone git://git.code.sf.net/p/storagemanager/code ssm
(C)2011 Red Hat, Inc., Lukas Czerner <lczerner@redhat.com>
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
ssm [-h] [--version] [-v] [-f] [-b BACKEND] {check,resize,create,list,add,remove,snapshot,mount} ...
ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL] [-I STRIPESIZE] [-i STRIPES] [-p POOL] [device [device ...]] [mount]
ssm list [-h] [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]
ssm remove [-h] [-a] [items [items ...]]
ssm resize [-h] [-s SIZE] volume [device [device ...]]
ssm check [-h] device [device ...]
ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume
ssm add [-h] [-p POOL] device [device ...]
ssm mount [-h] [-o OPTIONS] volume directory
-h, --help | show this help message and exit |
--version | show program’s version number and exit |
-v, --verbose | Show additional information while executing. |
-f, --force | Force execution in the case where ssm has some doubts or questions. |
-b BACKEND, --backend BACKEND | Choose backend to use. Currently you can choose from (lvm, btrfs). |
System Storage Manager has several commands that you can specify on the command line as the first argument to ssm. Each has its specific use and its own arguments, but global ssm arguments are propagated to all commands.
ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL] [-I STRIPESIZE] [-i STRIPES] [-p POOL] [device [device ...]] [mount]
This command creates a new volume with the defined parameters. If a device is provided, it will be used to create the volume and will therefore be added into the pool prior to volume creation (see the Add command section). More than one device can be used to create a volume.
If the device is already used in a different pool, ssm will ask you whether you want to remove it from the original pool. If you decline, or the removal fails, then the volume creation fails if SIZE was not provided. On the other hand, if SIZE is provided and some devices cannot be added to the pool, the volume creation might still succeed if there is enough space in the pool.
A POOL name can be specified as well. If the pool exists, the new volume will be created from that pool (optionally adding the device into the pool). However, if the POOL does not exist, ssm will attempt to create a new pool with the provided device and then create a new volume from this pool. If the --backend argument is omitted, the default ssm backend will be used. The default backend is lvm.
ssm also supports creating RAID configurations; however, some back-ends might not support all RAID levels, or might not support RAID at all. In that case, volume creation will fail.
If a mount point is provided, ssm will attempt to mount the volume after it is created. However, this will fail if a mountable file system is not present on the volume.
-h, --help | show this help message and exit |
-s SIZE, --size SIZE | Gives the size to allocate for the new logical volume. A size suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to define ‘power of two’ units. If no unit is provided, it defaults to kilobytes. This is optional; if not given, the maximum possible size will be used. |
-n NAME, --name NAME | The name for the new logical volume. This is optional; if omitted, the name will be generated by the corresponding backend. |
--fstype FSTYPE | Gives the file system type to create on the new logical volume. Supported file systems are (ext3, ext4, xfs, btrfs). This is optional; if not given, a file system will not be created. |
-r LEVEL, --raid LEVEL | Specify a RAID level you want to use when creating a new volume. Note that some backends might not implement all supported RAID levels. This is optional; if not specified, a linear volume will be created. You can choose from the following list of supported levels (0,1,10). |
-I STRIPESIZE, --stripesize STRIPESIZE | Gives the number of kilobytes for the granularity of stripes. This is optional and if not given, backend default will be used. Note that you have to specify RAID level as well. |
-i STRIPES, --stripes STRIPES | Gives the number of stripes. This is equal to the number of physical volumes over which to scatter the logical volume. This is optional; if the stripe size is set and multiple devices are provided, the number of stripes is determined automatically from the number of devices. Note that you have to specify a RAID level as well. |
-p POOL, --pool POOL | Pool to use to create the new volume. |
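For illustration only (the device names, pool name and mount point here are placeholders, not part of any real configuration), a 10 GB ext4 volume could be created from two devices and mounted in a single step:
# ssm create -s 10G --fstype ext4 -p my_pool /dev/sdc /dev/sdd /mnt/data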
ssm list [-h] [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]
List information about all detected devices, pools, volumes and snapshots found in the system. The list command can be used either alone to list all of the information, or you can request a specific section only.
The following sections can be specified: volumes (or vol), devices (or dev), pools (or pool), filesystems (or fs), and snapshots (or snap).
-h, --help | show this help message and exit |
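For example, to show only the pools section:
# ssm list pools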
ssm remove [-h] [-a] [items [items ...]]
This command removes an item from the system. Multiple items can be specified. If an item cannot be removed for some reason, it will be skipped.
An item can represent a device, a pool, or a volume.
-h, --help | show this help message and exit |
-a, --all | Remove all pools in the system. |
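For example, to remove a single volume and then a whole pool (the names here are illustrative):
# ssm remove /dev/lvm_pool/lvol001
# ssm remove lvm_pool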
ssm resize [-h] [-s SIZE] volume [device [device ...]]
Change the size of the volume and its file system. If there is no file system, only the volume itself will be resized. You can specify a device to add into the volume pool prior to the resize. Note that the device will only be added into the pool if the volume size is going to grow.
If the device is already used in a different pool, ssm will ask you whether you want to remove it from the original pool.
In some cases the file system has to be mounted in order to be resized. ssm handles this automatically by mounting the volume temporarily.
-h, --help | show this help message and exit |
-s SIZE, --size SIZE | New size of the volume. With the + or - sign the value is added to or subtracted from the actual size of the volume and without it, the value will be set as the new volume size. A size suffix of [k|K] for kilobytes, [m|M] for megabytes, [g|G] for gigabytes, [t|T] for terabytes or [p|P] for petabytes is optional. If no unit is provided the default is kilobytes. |
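For example, to grow a volume by 5 GB while adding a device to its pool in case more space is needed (device and volume names are illustrative):
# ssm resize -s+5G /dev/lvm_pool/lvol001 /dev/sdc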
ssm check [-h] device [device ...]
Check the file system consistency on the volume. You can specify multiple volumes to check. If there is no file system on a volume, that volume will be skipped.
In some cases the file system has to be mounted in order to be checked. ssm handles this automatically by mounting the volume temporarily.
-h, --help | show this help message and exit |
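For example (the volume name is illustrative):
# ssm check /dev/lvm_pool/lvol001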
ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume
Take a snapshot of an existing volume. This operation will fail if the back-end the volume belongs to does not support snapshotting. Note that you cannot specify both NAME and DEST, since those options are mutually exclusive.
In some cases the file system has to be mounted in order to take a snapshot of the volume. ssm handles this automatically by mounting the volume temporarily.
-h, --help | show this help message and exit |
-s SIZE, --size SIZE | Gives the size to allocate for the new snapshot volume. A size suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to define ‘power of two’ units. If no unit is provided, it defaults to kilobytes. This is optional; if not given, the size will be determined automatically. |
-d DEST, --dest DEST | Destination of the snapshot specified with absolute path to be used for the new snapshot. This is optional and if not specified default backend policy will be performed. |
-n NAME, --name NAME | Name of the new snapshot. This is optional and if not specified default backend policy will be performed. |
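For example, to take a snapshot of a volume with an explicit name (the volume and snapshot names are illustrative):
# ssm snapshot -n lvol001_snap /dev/lvm_pool/lvol001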
ssm add [-h] [-p POOL] device [device ...]
This command adds a device into the pool. By default, the device will not be added if it is already part of a different pool, but the user will be asked whether to remove the device from its pool. When multiple devices are provided, all of them are added into the pool. If one of the devices cannot be added into the pool for any reason, the add command will fail. If no pool is specified, the default pool will be chosen. If the pool does not exist, it will be created using the provided devices.
-h, --help | show this help message and exit |
-p POOL, --pool POOL | Pool to add device into. If not specified the default pool is used. |
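For example, to add two devices into a pool (the device names are illustrative):
# ssm add -p lvm_pool /dev/sdc /dev/sdd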
ssm mount [-h] [-o OPTIONS] volume directory
This command mounts the volume at the specified directory. The volume can be specified in the same way as with mount(8); in addition, one can also specify a volume in the format in which it appears in the ssm list table.
For example, instead of finding out what the device and subvolume id of the btrfs subvolume “btrfs_pool:vol001” are in order to mount it, one can simply call ssm mount btrfs_pool:vol001 /mnt/test.
One can also specify OPTIONS in the same way as with mount(8).
-h, --help | show this help message and exit |
-o OPTIONS, --options OPTIONS | Options are specified with a -o flag followed by a comma separated string of options. This option is equivalent to the same mount(8) option. |
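For example, to mount the btrfs subvolume from the example above read-only (the volume name and mount point are illustrative):
# ssm mount -o ro btrfs_pool:vol001 /mnt/test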
Ssm aims to create a unified user interface for various technologies such as Device Mapper (dm), the Btrfs file system, Multiple Devices (md) and possibly more. In order to do so we have a core abstraction layer in ssmlib/main.py. This abstraction layer should ideally know nothing about the underlying technology, but rather work with the device, pool and volume abstractions.
Various backends can be registered in ssmlib/main.py in order to handle a specific storage technology, implementing methods such as create, snapshot, or remove for volumes and pools. The core then calls these methods to manage the storage without needing to know what lies underneath it. There are already several backends registered in ssm.
Btrfs is a file system with many advanced features, including volume management. This is the reason why btrfs is handled differently from other conventional file systems in ssm. It is used as a volume management back-end.
Pools, volumes and snapshots can be created with the btrfs backend, and here is what that means from the btrfs point of view:
A pool is actually a btrfs file system itself, because it can be extended by adding more devices, or shrunk by removing devices from it. Subvolumes and snapshots can also be created. When a new btrfs pool is to be created, ssm simply creates a btrfs file system, which means that every new btrfs pool has one volume of the same name as the pool itself, which cannot be removed without removing the entire pool. The default btrfs pool name is btrfs_pool.
When creating a new btrfs pool, the name of the pool is used as the file system label. If there is an already existing btrfs file system in the system without a label, a btrfs pool name will be generated for internal use in the following format: “btrfs_{device base name}”.
A btrfs pool is created when the create or add command is used with devices specified and a non-existing pool name.
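For instance, a new btrfs pool could be created from two spare devices (the device names and pool name here are illustrative):
# ssm -b btrfs create -p my_btrfs_pool /dev/sdc /dev/sdd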
A volume in the btrfs back-end is actually just a btrfs subvolume, with the exception of the first volume created on btrfs pool creation, which is the file system itself. Subvolumes can only be created on a btrfs file system when it is mounted, but the user does not have to worry about that, since ssm will automatically mount the file system temporarily in order to create a new subvolume.
The volume name is used as the subvolume path in the btrfs file system, and every object in this path must exist in order to create a volume. The volume name for internal tracking and for presentation to the user is generated in the format “{pool_name}:{volume name}”, but volumes can also be referenced by their mount point.
Btrfs volumes are only shown in the list output when the file system is mounted, with the exception of the main btrfs volume, the file system itself.
A new btrfs volume can be created with the create command.
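For example, to create a new subvolume in the default btrfs pool (the subvolume name is illustrative):
# ssm create -p btrfs_pool -n my_subvolume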
The btrfs file system supports subvolume snapshotting, so you can take a snapshot of any btrfs volume in the system with ssm. However, btrfs does not distinguish between subvolumes and snapshots, because a snapshot actually is just a subvolume with some blocks shared with a different subvolume. This means that ssm is not able to recognize a btrfs snapshot directly; instead, it tries to recognize a special name format of the btrfs volume. However, if a NAME that does not match the special pattern is specified when creating the snapshot, the snapshot will not be recognized by ssm and will be listed as a regular btrfs volume.
A new btrfs snapshot can be created with the snapshot command.
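For example, to snapshot the main volume of the default btrfs pool, letting ssm pick the snapshot name:
# ssm snapshot btrfs_pool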
Pools, volumes and snapshots can be created with the lvm backend, which pretty much matches the lvm abstraction.
An lvm pool is just a volume group in lvm terminology. This means that it groups devices, and new logical volumes can be created out of the lvm pool. The default lvm pool name is lvm_pool.
An lvm pool is created when the create or add command is used with devices specified and a non-existing pool name.
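For example, a new lvm pool is created implicitly when a volume is created from devices that are not yet part of any pool (the device names and pool name here are illustrative):
# ssm create -s 5G -p my_pool /dev/sdc /dev/sdd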
The crypt backend in ssm is currently limited to gathering information about encrypted volumes in the system. You cannot create or manage encrypted volumes or pools, but this backend will be extended in the future.
The MD backend in ssm is currently limited to gathering information about MD volumes in the system. You cannot create or manage MD volumes or pools, but this backend will be extended in the future.
System Storage Manager is available from http://storagemanager.sourceforge.net. You can subscribe to storagemanager-devel@lists.sourceforge.net to follow the current development.
To install System Storage Manager into your system simply run:
python setup.py install
as root in the System Storage Manager directory. Make sure that your system configuration meets the requirements in order for ssm to work correctly.
Note that you can run ssm even without installation by using the local sources with:
bin/ssm.local
Python 2.6 or higher is required to run this tool. System Storage Manager can only be run as root, since most of the commands require root privileges.
There are other requirements listed below, but note that you do not necessarily need all dependencies for all backends; however, if some of the tools required by a backend are missing, that backend will not work.
List system storage:
# ssm list
----------------------------------
Device Total Mount point
----------------------------------
/dev/loop0 5.00 GB
/dev/loop1 5.00 GB
/dev/loop2 5.00 GB
/dev/loop3 5.00 GB
/dev/loop4 5.00 GB
/dev/sda 149.05 GB PARTITIONED
/dev/sda1 19.53 GB /
/dev/sda2 78.12 GB
/dev/sda3 1.95 GB SWAP
/dev/sda4 1.00 KB
/dev/sda5 49.44 GB /mnt/test
----------------------------------
------------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
------------------------------------------------------------------------------
/dev/dm-0 dm-crypt 78.12 GB ext4 78.12 GB 45.01 GB crypt /home
/dev/sda1 19.53 GB ext4 19.53 GB 12.67 GB part /
/dev/sda5 49.44 GB ext4 49.44 GB 29.77 GB part /mnt/test
------------------------------------------------------------------------------
Create a volume of a defined size with a defined file system. The default back-end is set to lvm, and the default lvm pool name is lvm_pool:
# ssm create --fs ext4 -s 15G /dev/loop0 /dev/loop1
The name of the new volume is ‘/dev/lvm_pool/lvol001’. Resize the volume to 10GB:
# ssm resize -s-5G /dev/lvm_pool/lvol001
Resize the volume to 25GB, which requires adding more devices into the pool:
# ssm resize -s 25G /dev/lvm_pool/lvol001 /dev/loop2
Now we can try to create a new lvm volume named ‘myvolume’ from the remaining pool space with an xfs file system and mount it at /mnt/test1:
# ssm create --fs xfs --name myvolume /mnt/test1
List all volumes with file system:
# ssm list filesystems
-----------------------------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
-----------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001 lvm_pool 25.00 GB ext4 25.00 GB 23.19 GB linear
/dev/lvm_pool/myvolume lvm_pool 4.99 GB xfs 4.98 GB 4.98 GB linear /mnt/test1
/dev/dm-0 dm-crypt 78.12 GB ext4 78.12 GB 45.33 GB crypt /home
/dev/sda1 19.53 GB ext4 19.53 GB 12.67 GB part /
/dev/sda5 49.44 GB ext4 49.44 GB 29.77 GB part /mnt/test
-----------------------------------------------------------------------------------------------
You can then easily remove the old volume by running:
# ssm remove /dev/lvm_pool/lvol001
Now let’s try to create a btrfs volume. Btrfs is a separate backend, not just a file system, because btrfs itself has an integrated volume manager. The default btrfs pool name is btrfs_pool:
# ssm -b btrfs create /dev/loop3 /dev/loop4
Now let’s create some btrfs subvolumes. Note that the btrfs file system has to be mounted in order to create subvolumes; however, ssm will handle that for you:
# ssm create -p btrfs_pool
# ssm create -n new_subvolume -p btrfs_pool
# ssm list filesystems
-----------------------------------------------------------------
Device Free Used Total Pool Mount point
-----------------------------------------------------------------
/dev/loop0 0.00 KB 10.00 GB 10.00 GB lvm_pool
/dev/loop1 0.00 KB 10.00 GB 10.00 GB lvm_pool
/dev/loop2 0.00 KB 10.00 GB 10.00 GB lvm_pool
/dev/loop3 8.05 GB 1.95 GB 10.00 GB btrfs_pool
/dev/loop4 6.54 GB 1.93 GB 8.47 GB btrfs_pool
/dev/sda 149.05 GB PARTITIONED
/dev/sda1 19.53 GB /
/dev/sda2 78.12 GB
/dev/sda3 1.95 GB SWAP
/dev/sda4 1.00 KB
/dev/sda5 49.44 GB /mnt/test
-----------------------------------------------------------------
-------------------------------------------------------
Pool Type Devices Free Used Total
-------------------------------------------------------
lvm_pool lvm 3 0.00 KB 29.99 GB 29.99 GB
btrfs_pool btrfs 2 3.84 MB 18.47 GB 18.47 GB
-------------------------------------------------------
-----------------------------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
-----------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001 lvm_pool 25.00 GB ext4 25.00 GB 23.19 GB linear
/dev/lvm_pool/myvolume lvm_pool 4.99 GB xfs 4.98 GB 4.98 GB linear /mnt/test1
/dev/dm-0 dm-crypt 78.12 GB ext4 78.12 GB 45.33 GB crypt /home
btrfs_pool btrfs_pool 18.47 GB btrfs 18.47 GB 18.47 GB btrfs
/dev/sda1 19.53 GB ext4 19.53 GB 12.67 GB part /
/dev/sda5 49.44 GB ext4 49.44 GB 29.77 GB part /mnt/test
-----------------------------------------------------------------------------------------------
Now let’s free up some of the loop devices so we can try to add them into the btrfs_pool. We’ll simply remove the lvm volume ‘myvolume’ and resize lvol001 so we can remove /dev/loop2. Note that myvolume is mounted, so we have to unmount it first:
# umount /mnt/test1
# ssm remove /dev/lvm_pool/myvolume
# ssm resize -s-10G /dev/lvm_pool/lvol001
# ssm remove /dev/loop2
Add device to the btrfs file system:
# ssm add /dev/loop2 -p btrfs_pool
Let’s see what happened. Note that to actually see btrfs subvolumes you have to mount the file system first:
# mount -L btrfs_pool /mnt/test1/
# ssm list volumes
------------------------------------------------------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
------------------------------------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001 lvm_pool 15.00 GB ext4 15.00 GB 13.85 GB linear
/dev/dm-0 dm-crypt 78.12 GB ext4 78.12 GB 45.33 GB crypt /home
btrfs_pool btrfs_pool 28.47 GB btrfs 28.47 GB 28.47 GB btrfs /mnt/test1
btrfs_pool:2012-05-09-T113426 btrfs_pool 28.47 GB btrfs 28.47 GB 28.47 GB btrfs /mnt/test1/2012-05-09-T113426
btrfs_pool:new_subvolume btrfs_pool 28.47 GB btrfs 28.47 GB 28.47 GB btrfs /mnt/test1/new_subvolume
/dev/sda1 19.53 GB ext4 19.53 GB 12.67 GB part /
/dev/sda5 49.44 GB ext4 49.44 GB 29.77 GB part /mnt/test
------------------------------------------------------------------------------------------------------------------------
Remove the whole lvm pool, one of the btrfs subvolumes, and one unused device (/dev/loop2) from the btrfs pool. Note that with btrfs, the pool has the same name as its volume:
# ssm remove lvm_pool /dev/loop2 /mnt/test1/new_subvolume/
Snapshots can also be done with ssm:
# ssm snapshot btrfs_pool
# ssm snapshot -n btrfs_snapshot btrfs_pool
With lvm, you can also create snapshots:
# ssm create -s 10G /dev/loop[01]
# ssm snapshot /dev/lvm_pool/lvol001
Now list all snapshots. Note that btrfs snapshots are actually just subvolumes with some blocks shared with the original subvolume, so there is currently no way to distinguish between them. ssm uses a little trick, searching for name patterns to recognize snapshots, so if you specify your own name for the snapshot, ssm will not recognize it as a snapshot but rather as a regular volume (subvolume). This problem does not exist with lvm:
# ssm list snapshots
-------------------------------------------------------------------------------------------------------------
Snapshot Origin Volume size Size Type Mount point
-------------------------------------------------------------------------------------------------------------
/dev/lvm_pool/snap20120509T121611 lvol001 2.00 GB 0.00 KB linear
btrfs_pool:snap-2012-05-09-T121313 18.47 GB btrfs /mnt/test1/snap-2012-05-09-T121313
-------------------------------------------------------------------------------------------------------------
We are accepting patches! If you’re interested in contributing to the System Storage Manager code, just check out the git repository located on SourceForge. Please base all of your work on the devel branch since it is more up-to-date and it will save us some work when merging your patches:
git clone --branch devel git://git.code.sf.net/p/storagemanager/code storagemanager-code
Any form of contribution (patches, documentation, reviews or rants) is appreciated. See the Mailing list section.
System Storage Manager contains a regression test suite to make sure that we do not break things that should already work. We recommend that every developer run the tests before sending patches:
python test.py
Tests in System Storage Manager are divided into four levels.
First the doctest is executed.
Then we have unittests in tests/unittests/test_ssm.py, which test the core of ssm, ssmlib/main.py. They check for basic things like required backend methods and variables, flag propagation, proper class initialization, and finally whether commands actually result in the proper backend callbacks. They do not require root permissions and do not touch your system configuration in any way. They should not actually invoke any shell command, and if they do, it’s a bug.
The second part of the unittests is backend testing. Here we mainly test whether ssm commands result in the proper backend operations. These tests do not require root permissions and do not touch your system configuration in any way. They should not actually invoke any shell command, and if they do, it’s a bug.
Finally, there are real bash tests located in tests/bashtests. The bash tests are divided into files; each file tests one command for one backend and contains a series of test cases followed by checks on whether the command created the expected result. In order to test real system commands, we have to create system devices to test on without touching any of the existing system configuration.
Before each test, a number of devices are created using dmsetup in the test directory. These devices are used in the test cases instead of real devices. Real operations are performed on those devices just as they would be on real system devices. This implies that this phase requires root privileges, and it will not be run otherwise. In order to make sure that ssm does not touch any existing system configuration, each device, pool and volume name includes a special prefix, and the SSM_PREFIX_FILTER environment variable is set to make ssm exclude all items which do not match this prefix.
Even though we have tried hard to make sure that the bash tests do not change any of your system configuration, the recommendation is not to run the tests with root privileges on your work or production system, but rather to run them on your testing machine.
If you change or create new functionality, please make sure that it is covered by the System Storage Manager regression test suite, so that we do not break it unintentionally.
Important
Please, make sure to run full tests before you send a patch to the mailing list. To do so, simply run python test.py as root on your test machine.
System Storage Manager documentation is stored in the doc/ directory. The documentation is built using Sphinx, which helps us avoid duplicating text for different types of documentation (man page, html pages, readme). If you are going to modify the documentation, please make sure not to modify the manual page, html pages or README directly, but rather modify the doc/*.rst and doc/src/*.rst files accordingly, so the change is propagated to all documents.
Moreover, parts of the documentation, such as the synopsis or the ssm command options, are parsed directly from the ssm help output. This means that when you add or change an ssm argument, the only thing you have to do is add or change it in the ssmlib/main.py source code and then run make dist in the doc/ directory; all the documents should then be updated automatically.
Important
Please make sure you update the documentation when you add or change ssm functionality if the format of the change requires it. Then regenerate all the documents using make dist and include changes in the patch.
System Storage Manager developers communicate via the mailing list. Address of our mailing list is storagemanager-devel@lists.sourceforge.net and you can subscribe on the SourceForge project page https://lists.sourceforge.net/lists/listinfo/storagemanager-devel. Mailing list archives can be found here http://sourceforge.net/mailarchive/forum.php?forum_name=storagemanager-devel.
This is also the list to which patches should be sent and where the review process happens. We do not have a separate user mailing list, so feel free to drop your questions there as well.
As already mentioned, we are accepting patches! And we are very happy for every contribution. If you’re going to send a patch in, please make sure to follow some simple rules:
Hint
You can use git to do all the work for you. git format-patch and git send-email will help you with creating and sending the patch.