Cisco Video Surveillance Storage System Administration Guide
Quick Start

Table Of Contents

Quick Start

Basic Quick Start

Expert Quick Start

Check List

Integrating External Storage Volumes Into Cisco VSM

Understanding the Integration Script

Requirements

Integration Procedure

Example Integration Script with Restore Option


Quick Start


When you click the Quick Start button in the navigation pane, you are taken to the Basic Configure RAID System page. The navigation bar across the top contains links to this section's subpages.

Basic links to Basic Quick Start

Expert links to Expert Quick Start

Check List links to Check List

This chapter also contains the Integrating External Storage Volumes Into Cisco VSM section that describes how to integrate the Cisco Video Surveillance Storage Systems (CPS-SS-4RU and CPS-SS-4RU-EX) into a Cisco VSM Release 7.2 and higher deployment using an integration script.

Basic Quick Start

Clicking Quick Start takes you to the Basic Configure RAID System page, which lets you quickly and easily configure RAID arrays and volumes for your system. This is an excellent tool for getting started with a new storage system (see Set Up the System).


Caution If arrays or volumes have already been configured on the unit, this tool will erase all existing data. It is recommended that this tool ONLY be used when first setting up the unit.

Arrays are limited to the disks physically contained in a single Cisco Video Surveillance Storage System component.


Note If the system you are setting up is a storage unit/expansion unit pair, you are first asked to select the unit that you wish to configure. Select the unit you wish to configure, then click Next. When you are finished, you can configure the second enclosure by repeating this procedure.


The Basic Quick Start configuration page is displayed.


Note Only SATA disk drives can be used in the RAID array. SAS and SSD are not supported. If your Cisco Video Surveillance Storage System component contains a mixture of disk drive types, the Basic Quick Start configuration page will have two or three Quick Start Options sections, one for each drive type. Choose only the SATA option.



Step 1 Using the drop-down lists, set the following parameters:

Number of arrays: Choose the number of RAID sets that you wish to create. The maximum number depends on the number of disks detected in the unit.

Select RAID level: Choose the RAID level that all RAID sets will be configured for. You can choose from the following:

RAID 0 (striped)
RAID 1 (mirrored)
RAID 4 (parity)
RAID 5 (rotating parity)
RAID 6 (rotating dual parity)


Note For more information on RAID levels, see "RAID Levels".


Number of pool spares: Choose the number of spare disks that will be available to use as backups in case a RAID disk fails. The maximum number of pool spares depends on the number of disks detected in the unit.

Number of volumes per array: This setting controls whether or not each RAID array will be further divided into two or more smaller volumes. The default setting is 1. The number of volumes per array can be anywhere from 1 to 10.

Limit volume size to less than 2TB: This option is unchecked by default. If your hosts do not support volumes of more than 2TB in size, check this option.

Step 2 Click Next.

The New Configuration Preview page is displayed.

Step 3 Ensure that the settings for Arrays, Volumes, Pool Spares, and Volume Access are correct.

Step 4 If all settings are acceptable, select the confirmation check box, then click the Quickstart button.


Caution If any arrays or volumes have already been configured on the unit, the Management Console displays a warning dialog. If you wish to continue, click the check box and select Confirm Quickstart Configure. If you do not wish to continue, click CANCEL Quickstart.


Note Although your volumes are available immediately, Quickstart continues to run in the background. The Quickstart operation may take as much as several hours to complete, depending on the size and number of the disk drives in the unit. You can check the progress of the operation by going to RAID Information > Progress.



Expert Quick Start

Arrays are limited to the disks physically contained in a single Cisco Video Surveillance Storage System component.


Note If the system you are setting up is a storage unit/expansion unit pair, you are first asked to select the unit that you wish to configure. Select the unit you wish to configure, then click Next. When you are finished, you can configure the second enclosure by repeating this procedure.


The Expert Quick Start configuration page is displayed.


Note Only SATA disk drives can be used in the RAID array. SAS and SSD are not supported. If your Cisco Video Surveillance Storage System component contains a mixture of disk drive types, the Expert Quick Start configuration page will have two or three Quick Start Options sections, one for each drive type. Choose only the SATA option.



Step 1 Using the drop-down lists, set the following parameters:

Number of arrays: Choose the number of RAID sets that you wish to create. The maximum number depends on the number of disks detected in the unit.

Select RAID level: Choose the RAID level that all RAID sets will be configured for. You can choose from the following:

RAID 0 (striped)
RAID 1 (mirrored)
RAID 4 (parity)
RAID 5 (rotating parity)
RAID 6 (rotating dual parity)


Note For more information on RAID levels, see "RAID Levels".


Number of pool spares: Choose the number of spare disks that will be available to use as backups in case a RAID disk fails. The maximum number of pool spares depends on the number of disks detected in the unit.

Number of volumes per array: This setting controls whether or not each RAID array will be further divided into two or more smaller volumes. The default setting is 1. The number of volumes per array can be anywhere from 1 to 10.

Limit volume size to less than 2TB: This option is unchecked by default. If your hosts do not support volumes of more than 2TB in size, check this option.

Select stripe size: The default stripe size is 128Kbytes. You can choose to use smaller stripes by selecting 64Kbytes, 32Kbytes, or 16Kbytes.

Select host connection type: By default, this setting is set to Fibre/SAS/10Ge iSCSI (multi-path), which maps all logical unit numbers (LUNs) to all available Fibre Channel/SAS/10GbE iSCSI ports. If you wish to change the mapping, select one of the following:


Note SAS drives are not supported with the Cisco Video Surveillance Storage System. iSCSI is supported on Cisco Video Surveillance Systems (VSM) deployed as a Virtual Machine for VSM releases 7.2 or higher.


None (leave unmapped): The LUNs will not be associated with any ports on the unit and will not be available to the host. You can later manually assign each LUN to one or more ports using Configure Volumes > Map Volume (see Map Logical Volumes).

Fibre/SAS/10Ge iSCSI (non-redundant): Assigns each LUN to a single available Fibre Channel/SAS/10Gb iSCSI port.

Fibre/SAS/10Ge iSCSI (multi-path): Assigns LUNs to all available Fibre Channel/SAS/10Gb iSCSI ports (requires multipathing software).

iSCSI (non-redundant): Assigns each LUN to a single available iSCSI port.

iSCSI (multi-path): Assigns LUNs to all available iSCSI ports (requires multipathing software).

Select default host access: This setting defaults to Read/Write. This will allow all attached hosts to access all volumes on this unit. If you wish to restrict host access to this unit, change this setting to Deny, then use the procedure under Manage Hosts to assign Read or R/W access to specific hosts.

To ensure integrity and security of data, it is recommended that you change this setting to Deny.

Online Create: When this box is checked, volumes on this unit will be available immediately, with RAID creation continuing in the background. This does, however, slow down the RAID creation process. You can speed up the creation process by unchecking this box, in which case volumes will be unavailable until RAID creation is complete.

Leave free space on each array for future volumes/expansion: By default, the volumes take up all of the space in the RAID arrays. This setting lets you keep a percentage of the RAID array space free for additional volumes or expansion of current volumes. Select 0%, 10%, 25%, 50%, or 75%.

Step 2 Click Next.

The New Configuration Preview page is displayed.

Step 3 Ensure that the settings for Arrays, Volumes, Pool Spares, and Volume Access are correct.

Step 4 If all settings are acceptable, select Check this checkbox to confirm, then click the Quickstart button.


Caution If any arrays or volumes have already been configured on the unit, the Management Console displays a warning dialog. If you wish to continue, click the check box and select Confirm Quickstart Configure. If you do not wish to continue, click CANCEL Quickstart.


Note The Quickstart operation may take as much as several hours to complete, depending on the size and number of the disk drives in the unit. You can check the progress of the operation by going to RAID Information > Progress.



Check List

Clicking Quick Start > Check List takes you to the Configuration Checklist page, which contains links to pages in the Management Console that should be configured when the unit is first installed.

The items on the Quick Start Configuration Checklist are:

Security (see Security Settings)

System Name (see Configure Enclosures)

Network Settings (see Configure Network Settings)

Array Configuration (see Create a New RAID Array)

Volume Configuration and Access (see Create a Logical Volume)

Each item in the list displays its status on the Quick Start Configuration Checklist. If an item has a green check mark next to it, that item has been completed with a recommended setting. If an item has a red exclamation point next to it, that item has either not been completed or has an unrecommended setting. For more information see Set Up the System.

Integrating External Storage Volumes Into Cisco VSM

The CPS-SS-4RU and CPS-SS-4RU-EX systems provide external storage volumes to the Cisco Video Surveillance servers. This external storage is in addition to the internal storage available in the Cisco VSM server.

To use these external storage systems with a Cisco VSM, you must integrate the external system by running a script on the Cisco VSM server. See the following topics for more information:

Understanding the Integration Script

Requirements

Integration Procedure

Example Integration Script with Restore Option


Note See the Release Notes for Cisco Video Surveillance Manager for information on supported servers and platforms, such as the Cisco Connected Safety and Security UCS Platform Series servers.


Understanding the Integration Script

The CPS-SS system is configured to provide the full capacity of a given RAID array (with 2TB or 3TB drives) to the Cisco VSM server as a single volume. For example, if you have a RAID-5 set of 10 drives with 3TB, then the entire ~25TB is provided as a single volume; the single volume appears to the Cisco VSM server as a single hard drive (e.g. sdc, sdd, sde).

The setup_external_storage.sh script splits the single storage volume into two partitions of equal size, formats the partitions, mounts them, and integrates them into Cisco VSM.
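The two-way split can be sketched numerically. The sector values below mirror the example script output later in this chapter (a start sector of 36864 and a stripe size of 18432 sectors); the total-size value and the variable names are illustrative assumptions, not taken from the script source.

```shell
# Sketch of the two-way partition split, aligned to the RAID stripe size.
# All values are in 512-byte sectors; TOTAL_S is an assumed example size.
STRIPE_S=18432                         # stripe size in sectors (assumed)
TOTAL_S=55305768960                    # example usable size of the volume
START_S=$(( STRIPE_S * 2 ))            # first partition starts on a stripe boundary
MID_S=$(( TOTAL_S / 2 ))
MID_S=$(( MID_S - MID_S % STRIPE_S ))  # align the split point to the stripe size
echo "partition 1: ${START_S}s to $(( MID_S - 1 ))s"
echo "partition 2: ${MID_S}s to 100%"
```

Keeping both partition boundaries on stripe boundaries avoids misaligned writes across RAID stripes, which is why the script reports the stripe size before each mkpart command.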

The setup_external_storage.sh script offers the following options:

Table 2-1 Script Options 

Script
Purpose

No parameters

Run the script with no parameters (for example, setup_external_storage.sh) to discover any connected fibre channel devices and create the new media partitions for use by Cisco VSM.

See the "Integration Procedure" section for more information.

Restore

Include the restore option (for example, setup_external_storage.sh restore) to retrieve and restore any media partitions that were previously configured on the disk so they can be used again. No new partitions are created using this restore option.

Use this option only if the following previously occurred:

The script was previously run and the external storage partitions were successfully configured.

The Cisco VSM system software recovery procedure was executed (which removes the partitions from the Cisco VSM configuration).

See the "Example Integration Script with Restore Option" section for more information.

Help

Include the help option (for example, setup_external_storage.sh help) to view more information about the script options and version.

See the "Integration Procedure" section for more information.


Requirements

The setup_external_storage.sh script requires the following:

Table 2-2 Script Requirements 

Requirements
Complete? (✓)

A Cisco Connected Safety and Security UCS Platform Series server running Cisco Video Surveillance release 7.2 or higher.

The Cisco Video Surveillance Storage System must be configured with one or more RAID arrays to provide storage for video recording by a Cisco Video Surveillance server.

Each RAID array's volumes must be accessed exclusively by a single Cisco VSM server or virtual machine (although one VSM server can access the volumes of multiple RAID arrays). To support multiple Cisco VSM servers, the Storage System must be configured with multiple RAID arrays.

The RAID array should be configured with a single RAID volume. The setup_external_storage.sh script will create partitions on the RAID volume as video repositories for VSM.

A Cisco Video Surveillance Storage System must be connected to the Cisco VSM server.

Note If the Fibre Channel (FC) connection is not present when the script is run, the external storage will not be detected or integrated into Cisco VSM. The script can be run again after the FC cable connection is established. The FC port LEDs indicate the connection status.

Note Disconnecting the FC cable during normal operation removes Cisco VSM's access to the external storage volumes. The /media mount points remain intact, however, and are not deleted from the server or the Cisco VSM configuration. The script does not include a delete option for the external storage volumes.

The setup_external_storage.sh script file.

To download the script, go to the Cisco Video Surveillance software download page and select Standalone Tools.

http://software.cisco.com/download/type.html?mdfid=282976740&i=rm


Integration Procedure

Complete the following procedure to view the server filesystem and partitions, display the script help, run the integration script, and verify the results.


Note This example executes the integration script without options. If partitions were previously created, and the Cisco VSM system software was recovered (which deletes any partitions) use the recovery option as described in the "Example Integration Script with Restore Option" section.



Tip See the "Understanding the Integration Script" section for more information.


Procedure


Step 1 Prepare for the external storage integration:

a. Verify the requirements are complete.

See the "Requirements" section.

b. Copy the integration script to the Cisco VSM server as the "localadmin" user.

c. Run the following command to copy the files to the /usr/BWhttpd/sbin/ directory.

[localadmin@vsm-server ~]$ sudo cp setup_external_storage.sh /usr/BWhttpd/sbin/

d. Change the user to root.

[localadmin@vsm-server ~]$ sudo su -
[root@vsm-server ~]#

e. Verify that the Fibre Channel controller module (lpfc) is installed in the system:

[root@vsm-server ~]# modprobe -l -a lpfc

For example:

[root@vsm-server ~]# modprobe -l  -a lpfc
kernel/drivers/scsi/lpfc/lpfc.ko 

Note The lpfc module is included in Media Servers that were factory installed with Cisco VSM.


f. Connect the Fibre Channel cable from the external storage array to the Cisco VSM server.

g. Reboot the server so it boots with the storage attached.
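After the cable is connected and the server rebooted, the FC link state can also be checked from the operating system. The sysfs path used below is the standard Linux Fibre Channel transport-class location populated by drivers such as lpfc; it is a general Linux convention, not specific to this guide, and the helper function is an illustrative convenience.

```shell
# show_fc_port_states: print the link state of each Fibre Channel host
# adapter via the standard Linux FC transport-class sysfs layout
# (/sys/class/fc_host/host*/port_state). The optional argument lets you
# point at a different base directory, e.g. for testing.
show_fc_port_states() {
    base="${1:-/sys/class/fc_host}"
    for h in "$base"/host*; do
        [ -r "$h/port_state" ] || continue
        printf '%s: %s\n' "$(basename "$h")" "$(cat "$h/port_state")"
    done
}

show_fc_port_states   # typically reports "Online" when the cable is connected
```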

Step 2 (Optional) Display the Cisco VSM release details (to verify support per the "Requirements") and the current filesystem disk space usage:

a. Display the Cisco VSM build details to verify the release is supported:

[root@vsm-server ~]# cat /etc/Cisco-release
PRODUCT="VSM"
RELEASE="7.2.0"
OSVER=""
GOLD_DISK="VSM 7.2.0-cd15"
BUILDDATE="Sun Aug 25 10:37:12 PDT 2013"

b. Display the filesystem disk space usage (in human readable format):

[root@vsm-server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             7.9G  2.2G  5.4G  29% /
/dev/sdb7              50G  570M   47G   2% /mysql/data
/dev/sdb5             7.9G  2.8G  4.7G  38% /usr/BWhttpd
/dev/sdb3              32G  173M   30G   1% /var
/dev/sda1             146M   17M  122M  12% /boot
tmpfs                 4.0G  4.0K  4.0G   1% /dev/shm
/dev/sdc1             5.4T  8.2M  5.4T   1% /media1

Step 3 (Optional) Display the help output for command options and other information:

[root@vsm-server ~]# /usr/BWhttpd/sbin/setup_external_storage.sh help
setup_external_storage  will configure external storage volumes for
    use by  VSM 7.x.   It is currently optimized for RAID volumes
    configure in 10 drive, RAID 5 arrays (9+1).
    All other configurations are not supported and would cause
    performance impacts.
usage:
   setup_external_storage [noprompt|restore|help|]
     where
        noprompt    will destroy all partitioning and data on external
                    volumes without any prompting
            without argument it will look for existing partition
            and prompt the user if and only if partitioning info
            exists.
            version: 1.0     date: 11/06/2013

Step 4 Execute the setup_external_storage.sh integration script from the /usr/BWhttpd/sbin/ directory to discover any connected fibre channel devices and create the new media partitions for use by Cisco VSM.

The command syntax is:

[root@vsm-server ~]# /usr/BWhttpd/sbin/setup_external_storage.sh


Note After running the script, the newly created /media partitions are available for recording in Cisco VSM, without needing to reboot the server.



Note In the following example, the script is run without options, which creates new partitions. See the "Example Integration Script with Restore Option" section if previously configured partitions need to be restored.


For example:

[root@vsm-server ~]# /usr/BWhttpd/sbin/setup_external_storage.sh
user friendly !!!
get_external_storage_devices
using the next  MEDIA_PART_NUMBER = 1
WARNING:  /dev/disk/by-id/scsi-36000402006d812907fbf9a1d00000000 has partitioning and 
or data
WARNING:  It appears the external storage has existing partitioning and
          possibly video data.  Continuing will erase any data on external
          partitions.
Are you sure you want to proceed? [yes/no]
yes
====== Creating Partition Tables =====================
DEVICE /dev/disk/by-id/scsi-36000402006d812907fbf9a1d00000000
create_partition_table /dev/disk/by-id/scsi-36000402006d812907fbf9a1d00000000
parted /dev/disk/by-id/scsi-36000402006d812907fbf9a1d00000000 mklabel gpt
Warning: The existing disk label on /dev/sdd will be destroyed and all data on this 
disk will be
lost. Do you want to continue?
parted: invalid token: gpt
Yes/No? Yes
New disk label type?  [gpt]?
Information: Don't forget to update /etc/fstab, if necessary.
======= Creating Partitions =========================
create_partitions_on_device /dev/disk/by-id/scsi-36000402006d812907fbf9a1d00000000
stripe size = 18432
START_S=34  SIZE_S=10
number of partitions: 2
stripe size = 18432
START_S=36864  SIZE_S=13502366MB
parted /dev/disk/by-id/scsi-36000402006d812907fbf9a1d00000000 mkpart primary xfs 
36864s 27652884479s
Information: Don't forget to update /etc/fstab, if necessary.
stripe size = 18432
START_S=27652884480  SIZE_S=13502366MB
parted /dev/disk/by-id/scsi-36000402006d812907fbf9a1d00000000 mkpart primary xfs 
27652884480s 100%
Information: Don't forget to update /etc/fstab, if necessary.
======= Formating Partitions and ====================
======= Creating fstab entries, mount pts =======
format_partitions_on_device /dev/disk/by-id/scsi-36000402006d812907fbf9a1d00000000
format partition: /dev/sdd1
log stripe unit specified, using v2 logs
format partition: /dev/sdd2
log stripe unit specified, using v2 logs
update_fstab_device_mount_log UUID=d7844df6-eda7-4acc-b317-36c0412b90fe /media2
update_device_name /dev/sdd 1 /media2
parted /dev/sdd name 1 /media2
Information: Don't forget to update /etc/fstab, if necessary.
update_fstab_device_mount_log UUID=6e68759b-8b6f-4036-a859-36571460b753 /media3
update_device_name /dev/sdd 2 /media3
parted /dev/sdd name 2 /media3
Information: Don't forget to update /etc/fstab, if necessary.
Configuring VSMS
cisco           0:off   1:off   2:on    3:on    4:on    5:on    6:off
cisco_kernelTweaks      0:off   1:off   2:on    3:on    4:on    5:on    6:off
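Because the new /media partitions are mounted immediately, one quick confirmation is to filter the mount table for /media mount points. The helper below is a hypothetical convenience for that check, not part of setup_external_storage.sh.

```shell
# list_media_mounts: read "mount"-style lines on stdin and print any
# mount points of the form /mediaN (hypothetical helper, not part of
# setup_external_storage.sh). Field 3 of mount output is the mount point.
list_media_mounts() {
    awk '$3 ~ /^\/media[0-9]+$/ { print $3 }'
}

mount | list_media_mounts
```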

Step 5 Verify that the filesystem disk space usage and external storage partitions are correct.

a. Display the filesystem disk space usage (the -h option displays the results in human readable format):

[root@vsm-server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             7.9G  2.2G  5.4G  29% /
/dev/sdb7              50G  570M   47G   2% /mysql/data
/dev/sdb5             7.9G  2.8G  4.7G  38% /usr/BWhttpd
/dev/sdb3              32G  171M   30G   1% /var
/dev/sda1             146M   17M  122M  12% /boot
tmpfs                 4.0G  4.0K  4.0G   1% /dev/shm
/dev/sdc1             5.4T  8.2M  5.4T   1% /media1
/dev/sdd1              13T  8.2M   13T   1% /media2
/dev/sdd2              12T  8.2M   12T   1% /media3

b. Compare /etc/fstab with the original version to confirm the new entries. Each new entry lists the filesystem's unique ID (UUID) and its mount point:

[root@vsm-server ~]# diff /etc/fstab /etc/fstab.orig
12,13d11
< UUID=d7844df6-eda7-4acc-b317-36c0412b90fe /media2         xfs        
rw,noatime,nodiratime,logbufs=2  1 2
< UUID=6e68759b-8b6f-4036-a859-36571460b753 /media3         xfs        
rw,noatime,nodiratime,logbufs=2  1 2
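The same entries can be extracted directly from /etc/fstab. The helper below is a hypothetical convenience (not part of the script) that prints the UUID and mount point of each /media entry, the two fields the diff above highlights.

```shell
# list_media_fstab_uuids: read fstab-format lines on stdin and print the
# UUID and mount point of each /media entry (hypothetical helper).
# In fstab, field 1 is the filesystem spec and field 2 is the mount point.
list_media_fstab_uuids() {
    awk '$1 ~ /^UUID=/ && $2 ~ /^\/media/ { print $1, $2 }'
}

list_media_fstab_uuids < /etc/fstab
```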

Example Integration Script with Restore Option

The restore option retrieves and restores any media partitions that were previously configured on the disk so they can be used again.

This option is used after the Cisco VSM system software is recovered, since the recovery process deletes any Cisco VSM storage partitions from the Cisco VSM configuration.


Tip See the "Understanding the Integration Script" section for more information.


Procedure


Step 1 Recover the Cisco VSM system software.

See the Cisco Video Surveillance Manager Recovery Guide (UCS Platform) for more information.

Step 2 Complete the procedure in the "Integration Procedure" section, except include the restore option with the integration script.


Tip See the "Understanding the Integration Script" section for more information.


For example:

[root@vsm-server /]# /usr/BWhttpd/sbin/setup_external_storage.sh restore
restore
get_external_storage_devices
======= Creating fstab entries, mount pts =======
update_fstab_device_mount_log UUID=d7844df6-eda7-4acc-b317-36c0412b90fe /media2
update_device_name /dev/sdd 1 /media2
parted /dev/sdd name 1 /media2
Information: Don't forget to update /etc/fstab, if necessary.
update_fstab_device_mount_log UUID=6e68759b-8b6f-4036-a859-36571460b753 /media3
update_device_name /dev/sdd 2 /media3
parted /dev/sdd name 2 /media3
Information: Don't forget to update /etc/fstab, if necessary.
Configuring VSMS
cisco           0:off   1:off   2:on    3:on    4:on    5:on    6:off
cisco_kernelTweaks      0:off   1:off   2:on    3:on    4:on    5:on    6:off

Step 3 Verify the results by listing the contents of each partition.

The following example uses the -al option to list all results in long format:

[root@vsm-server /]# ls -al /media2
total 8
drwxr-xr-x  6 root   root    103 Nov 26 17:29 .
drwxr-xr-x 27 nobody nobody 4096 Nov 26 18:00 ..
drwxrwxr-x  3 root   root     21 Nov 26 17:29 10000
drwxrwxr-x  3 root   root     21 Nov 26 17:29 10001
drwxrwxr-x  3 root   root     21 Nov 26 17:29 10004
drwxrwxr-x  3 root   root     21 Nov 26 17:29 10008
-rw-rw-rw-  1 root   root      0 Nov 26 17:14 getstoragestatus
-rw-rw-rw-  1 root   root      0 Nov 26 12:25 systemstoragestatus
[root@vsm-server /]# ls -al /media3
total 8
drwxr-xr-x  6 root   root    103 Nov 26 17:29 .
drwxr-xr-x 27 nobody nobody 4096 Nov 26 18:00 ..
drwxrwxr-x  3 root   root     21 Nov 26 17:29 10002
drwxrwxr-x  3 root   root     21 Nov 26 17:29 10003
drwxrwxr-x  3 root   root     21 Nov 26 17:29 10005
drwxrwxr-x  3 root   root     21 Nov 26 17:29 10009
-rw-rw-rw-  1 root   root      0 Nov 26 17:14 getstoragestatus
-rw-rw-rw-  1 root   root      0 Nov 26 12:25 systemstoragestatus