You can also upgrade your license from the Other Servers section by clicking the link in the displayed notice, or by clicking the Settings button; the E-MAIL tab, shown by default, presents an upgrade option when using a local or remote access license.
This will be the SMTP mail server name. E-Mail messages are sent to the E-Mail server using port 25. Note: If your E-Mail server is not configured to receive on port 25, then E-Mail will not function properly. If you change this setting, be sure it is compliant with your IS policy. This field is blank by default. If you would like a signature appended to the message, click the check box and type the signature information in the scrollable window provided.
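The following sketch is not part of StorView; it only illustrates, in Python, the kind of notification these settings describe: a plain-text message sent to an SMTP server on port 25 with a signature appended. The server name and addresses are hypothetical placeholders.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical values; substitute your own SMTP mail server and addresses.
SMTP_SERVER = "mail.example.com"   # the SMTP mail server name from the E-MAIL tab
SMTP_PORT = 25                     # notifications are sent on port 25
SIGNATURE = "\n--\nStorage Administration Team"

def send_event_mail(recipient, subject, body):
    """Send a plain-text event notification with a signature appended."""
    msg = EmailMessage()
    msg["From"] = "storview@example.com"
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body + SIGNATURE)
    with smtplib.SMTP(SMTP_SERVER, SMTP_PORT) as server:
        server.send_message(msg)

if __name__ == "__main__":
    send_event_mail("admin@example.com", "Test event", "This is a test notification.")
```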
Enter the user E-Mail addresses. You may add up to ten (10) E-Mail addresses. Type the full E-Mail address, and click one or more of the check boxes for the type of event about which the user is to be notified. You will receive a confirmation message that the changes were successfully completed.
The address is immediately removed. These traps carry all the information that appears in the log entries for each level of severity. Click the SNMP tab. Enter the Community to which the trap belongs. The default is public. SNMP servers may belong to several different communities or receive packets for different communities. You can select from Information, Warning, and Error types. The server is immediately removed. Syslogd requires that the Global Access license be enabled before the feature is activated.
Once activated on each installation of StorView, the locally monitored storage system events are then sent to the syslogd daemon running on the host system. In this instance a failed syslogd server will not have any impact if another syslogd server is functioning normally. If you are using a Linux operating system, the syslogd daemon is a component of your operating system.
The StorView syslogd agent is global in scope, in that it will always receive all error logs from the accessible storage systems. It is not possible to customize filtering of events for each managed storage system for that host. With Embedded StorView, previously configured listening syslogd servers will persist even though the Embedded Module is flashed with new software as the file data is stored in NVRAM and is not affected by software upgrades.
However, with instances of installed host-based StorView, you must back up or make a copy of the syslog. The syslogd daemon receives messages from many sources, for example kernel, user-level, mail system, and system daemon messages, and logs them to a messages file. The syslogd feature supports both Linux and Windows platforms. If the event was an Error type, and all daemons are configured to receive that type, then they will all receive the event.
If the event was an Information type, only those daemons configured to receive Information type will receive the event. Event logs are sent in standard text format and have no encryption, so security can be a concern. Additionally, UDP packets are involved, so there is no assurance that the event log arrived at the subscriber syslogd server. The following fields are described:
syslogd Server: Allows the user to enter the IP address of the syslogd servers. An error message is displayed if an invalid server name is entered.
syslogd Port: Allows the user to enter a specified syslogd port. An error message will appear if the entry is invalid or left blank. The range of valid port IDs is 1 - 65535.
Information: If checked, the recipient will receive Informational type event logs.
Warning: If checked, the recipient will receive Warning type event logs.
Error: If checked, the recipient will receive Error type event logs.
Clear: Clicking this button clears the previously entered IP address for the selected line and resets the check boxes to their default state.
Test: Sends out a dummy test message to the specified recipient servers.
Enter the port ID of the recipient syslogd server. You must enter a correct value, otherwise you will receive an error message. If you wish, test the settings by clicking the TEST button. The server name is immediately removed. You will receive a confirmation message that the changes were successful. If you lose or misplace your password, contact technical support for further instructions.
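To make the transport concrete: because event logs are sent as plain text over UDP, a message of this kind can be illustrated in a few lines. This is only a sketch of a generic BSD-style syslog datagram, not the StorView agent itself; the server address is a placeholder, and 514 is simply the conventional syslog UDP port (use whatever port you configured above).

```python
import socket

# Hypothetical recipient; use the syslogd Server and Port values you configured.
SYSLOG_SERVER = "192.168.1.50"
SYSLOG_PORT = 514  # conventional syslog UDP port

# Severity levels roughly matching the Information / Warning / Error event types.
SEVERITY = {"error": 3, "warning": 4, "information": 6}
FACILITY_USER = 1

def send_event(message, level="information"):
    """Send one plain-text event log as a BSD-style syslog datagram.

    UDP gives no delivery guarantee, which is why there is no assurance
    the event log arrived at the subscriber syslogd server.
    """
    pri = FACILITY_USER * 8 + SEVERITY[level]
    datagram = f"<{pri}>storview: {message}".encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(datagram, (SYSLOG_SERVER, SYSLOG_PORT))

if __name__ == "__main__":
    send_event("dummy test message", level="information")
```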
In the event there is a port conflict with the default multicast port, you have the ability to change this parameter. Note: The Monitoring Settings are disabled with the Remote license; you must upgrade to a Global license to enable these features. Group 1, Group 2, and Group 3 each use their own port. You may add up to 10 additional monitored servers.
Add additional explicit IP addresses of any other StorView server outside the subnet that you wish to include to receive packets, and click the ADD button. Otherwise, skip to step 5. This includes choosing the correct RAID level and array options, assigning hot spares, and creating the logical drives. More advanced features are available and discussed in later chapters.
This chapter will step you through this process; however, you should have a basic understanding of RAID and storage concepts. The following terms are used: an array is a group of disk drives that are combined together to create a single large storage area.
Up to 64 arrays are supported, each containing up to 16 drives per array. There is no capacity limit for the arrays. The back-off percentage is user selectable, starting from 0; this reserved space is sometimes known as Reserved Capacity. One array is designated to automatically flush cache data in a situation where power has failed to some of the drives. The chunk size is the amount of data that is written on a single drive before the controller moves to the next drive in the stripe.
RAID 5, 6, and 50 disk arrays must have consistent parity before they can be used to protect data. If the user chooses the Trust option during array creation or stops the initialization, the array will be trusted. Note that any drive failure in a trusted array will result in data corruption.
It is possible to perform the initialization later. This recalculates the parity based on current data, ensuring data and parity are consistent. Access can be enabled or disabled for each host port of each controller. Each logical drive is presented to the host system with a unique LUN. In certain cases, such as after deleting another logical drive, it may be desirable to change the number under which a logical drive is presented. This can be done at any time, bearing in mind that any attached host systems may need to be rebooted or re-configured to maintain access to the logical drive.
RAID 0 is defined as disk striping, where data is striped or spread across one or more drives in parallel. RAID 0 is ideal for environments in which performance (read and write) is more important than fault tolerance, or where you need the maximum amount of available drive capacity in one volume. For greatest efficiency, all drives in the stripe set must be the same capacity.
Environments with many small simultaneous transactions are typical use cases for RAID 1 (mirroring). RAID 1 is useful for building a fault-tolerant system or data volume, providing excellent availability without sacrificing performance. However, you lose 50 percent of the assigned disk capacity. Read performance is somewhat higher than write performance. RAID 5 is defined as disk striping with parity, where the parity data is distributed across all drives in the volume.
Normal data and parity data are written to drives in the stripe set in a round-robin algorithm. RAID 5 is multi-threaded for both reads and writes because both normal data and parity data are distributed round-robin.
This is one reason why RAID 5 offers better overall performance in server applications. RAID 5 is ideal for database applications. RAID 6 is the same as RAID 5 except that it uses a second level of independently calculated and distributed parity information for additional fault tolerance. This extra fault tolerance provides data security in the event two drives fail before a drive can be replaced. RAID 10 is defined as a mirrored stripe set.
This can increase the performance by allowing the controller to more efficiently cluster commands together. Fault tolerance is also increased, as one drive can fail in each individual array. Striping is the process of separating data for storage on more than one disk. For example, bit striping stores bits 0 and 4 of all bytes on disk 1, bits 1 and 5 on disk 2, etc. The full stripe size is the number of data drives multiplied by the chunk size. In a RAID 50 array, each sub-array has one parity drive.
The controller keeps a map of all the space that is not assigned to any logical drive. This space is available for creation or expansion. Each unassigned region is individually listed.
In order to optimize your system performance based on the type of writes you expect in your operation, we have provided detailed information on optimizing performance using full stripe write operations in an appendix.
If you intend to set up a RAID 5 or 6 disk array and wish to achieve optimum performance, you will need to consider the number of data drives, parity drives, and chunk size. For a review, refer to Appendix A. Additional information is provided at the appropriate step during configuration; see section 5. Here a performance profile is selected, and the user chooses the disk drives and enters a unique name for the array.
The remaining array parameters are pre-configured by the performance wizard. Although the settings can also be changed manually from the pre-selected values using drop-down menus and check boxes, the profile selections should be valid for most configurations. Once the Create button is clicked, the array is created and automatically begins to initialize.
The Performance Wizard is a component of the Create Array window. When you open the Create Array window and select a performance profile, the wizard will make recommended settings for the remaining array parameters. When a profile is selected from the drop-down menu, all the parameters except selecting the disk drives are pre-configured. If you change a parameter from a recommended setting, a warning icon appears under the Recommendations column for that item.
This indicates a deviation from the recommended setting. The recommended settings are not mandatory and the user can choose to ignore them when they configure their arrays. Note that a warning message will appear at the bottom of the window above the CREATE button if any setting has deviated from those the wizard recommended.
The following icons appear in the Recommendations column of the Array and Logical Drive section:
OK Icon: Indicates the setting for the parameter matches the recommended setting for the selected profile.
Warning Icon: Indicates a setting that deviates from the recommended setting for the selected profile.
The General performance profile is the default profile. These array parameters define the details of the array as it is created and are saved in the configuration file. The configuration file is stored on all disk drives that are members of the array, regardless of whether the drives are in one or multiple enclosures.
No changes are made to the configuration until the current process is saved, so it is possible to quit at any time without affecting the current configuration. After making changes to your configuration, be sure to make a new backup copy of the configuration file, see section 5. The ability to make a backup copy of the configuration allows you to quickly recover from a damaged configuration that was not self-healing, and restore everything to the point in time when the configuration was last saved.
Caution A damaged configuration could result in loss of data. In the above examples, the mouse pointer is over a disk drive which is displaying the drive information in the Notes box. Select the profile you wish to use from the Performance Profile drop-down box.
Only one logical drive should be built for optimum performance. Note: If you make a change to a setting, the profile-selected settings are not reset or changed when you change to another profile. If you altered the settings of a profile, close the window and click the Create Array button on the tool bar again. See Figure 5—3 on page 53 for an example of mismatched settings and the on-screen warning message. Select drives to include in your array.
The performance wizard at this point has pre-configured all the settings. If you are satisfied with the settings, continue with step 11; otherwise continue with the steps below to manually configure your array parameters. Select the RAID level for the array. Click the pull-down menu and choose from the available levels. For RAID 50 arrays, create the sub-array(s). From the pull-down menu, select the number of sub-arrays you wish to create for this array.
Reduce the number of sub-arrays if necessary. Choose the chunk size. Click the pull-down menu and select a chunk size (64K or one of the larger available sizes).
Chunk size is the amount of data that is written on a single drive before the controller moves to the next drive in the stripe. For RAID level 0, 1, or 10 arrays, choose the correct size from the tables below. The idea behind optimum performance is you want to do as many full stripe writes as possible.
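As a rough illustration of that idea, the full stripe size is the number of data drives multiplied by the chunk size, so host writes sized and aligned to a multiple of that value avoid the read-modify-write penalty on parity RAID levels. The drive counts and chunk size below are assumptions chosen only for the arithmetic, not recommendations.

```python
def full_stripe_size_kb(total_drives, parity_drives, chunk_kb):
    """Stripe size = data drives x chunk size (parity drives carry no host data)."""
    data_drives = total_drives - parity_drives
    return data_drives * chunk_kb

# Example: a hypothetical 5-drive RAID 5 array (4 data + 1 parity) with 64K chunks.
stripe_kb = full_stripe_size_kb(total_drives=5, parity_drives=1, chunk_kb=64)
print(f"Full stripe size: {stripe_kb} KB")  # 256 KB

# A host write of 256 KB (or a multiple of it) aligned to the stripe can be
# committed as a full stripe write, avoiding the read-modify-write cycle.
print(256 % stripe_kb == 0)  # True
```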
The default setting is to initialize, so you will just verify this setting. If it is not set, click the pull-down menu and choose Initialize. Initialization will begin automatically in the background once the array is created.
You will have an option to stop or pause the initialization from the Main screen. If you Stop an initialization, the array will be trusted; see the note below. As you create additional arrays, they too will begin initializing. The maximum number of arrays that can be initialized in parallel is sixty-four (64). Note: The Trust Array option should only be used in very special circumstances, see section 8.
This determines how much drive capacity to reserve for future capacity expansions or to enable replacement drives of lesser capacity to be used. The values are user selectable, starting from 0. The back-off percent option is not applicable to non-redundant array types; a RAID 0 array is a non-redundant type of array and gains no benefit from establishing a reserve capacity.
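The arithmetic behind the back-off percentage is straightforward: the reserved space is simply that fraction of each member drive. The capacity and percentage below are hypothetical and serve only to illustrate the calculation.

```python
def usable_capacity_gb(drive_capacity_gb, backoff_percent):
    """Capacity left per member drive after reserving the back-off percentage."""
    reserved = drive_capacity_gb * (backoff_percent / 100.0)
    return drive_capacity_gb - reserved

# Hypothetical example: reserving 1% of a 300 GB member drive.
print(usable_capacity_gb(300, 1.0))  # 297.0 GB usable, 3.0 GB held in reserve
```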
Set the Read-Ahead Cache threshold. Selecting Automatic, which is the recommended and default setting, allows the controller to determine the optimum size. Selecting Disabled will turn off the Read-Ahead Cache. Select from one of the predetermined sizes to optimize the read performance based on your data patterns. Set the Writeback Cache options. The Writeback Cache is used to optimize the write performance specific to your data patterns.
In general, larger cache sizes will increase the write performance but may lower simultaneous read performance. The recommended size is 16 MB. The writeback strategy results in a completion being sent to the host operating system as soon as the cache receives the data to be written.
The disk drives will receive the data at a more appropriate time in order to increase controller performance and optimize drive head seeks. Refer to section 5. Note: If you change any profile settings, you will get the following message at the bottom of the window.
You can monitor the array initialization at the Main screen by observing a progress bar which appears under the array name displaying the percent complete. Also, in the enclosure front view, the affected disk drives being initialized display an animated icon indicating their initialization status. You can stop or pause the Initialization process if you wish by clicking on the link located to the right of the progress bar. Stopping the initialization will cause your array to be trusted.
A trusted array is indicated on the main screen with the following icon, adjacent to the specific array. You can change the amount of processor time that is dedicated to the initialization for better initialization performance, see section 7. Note Some features are not available on older firmware based systems.
In those cases, the unsupported feature will not appear in the user interface. The host may then send more data. This can significantly increase performance for host systems that only send a low number of commands at a time. The controller caches the data, and if more sequential data is sent from the host, it can cluster the writes together to increase performance further.
If sufficient data is sent to fill a stripe in RAID 5, 6, and 50 configurations, the controller can perform a Full Stripe Write, which significantly reduces the write overhead associated with RAID 5, 6, and 50. Disabling writeback cache ensures that the data is sent to the drives before status is returned to the host. With writeback cache enabled, if a short term power failure occurs, the battery back-up unit provides adequate power to ensure that cache is written to disk when the power is restored.
In duplex operations, the cache is mirrored to both controllers which provides further redundancy in the event of a single controller failure. Mirrored cache is designed for absolute data integrity. The cache in each controller contains both primary cached data for the disk groups it owns, and a copy of the primary data of the other controllers. Mirrored cache ensures that two copies of cache exist on both controllers before confirming to the operating system that the write operation has completed.
Normally, write-intensive operations will benefit from the higher performance when writeback cache is enabled on that array. Read-intensive operations, such as a streaming server, may not benefit from writeback cache. A new performance option has been added which manages writethrough operation when the write cache is full. The default value is checked, which disables the option and provides a performance improvement for typical workloads.
When enabled, this option will force all write operations to go through the writeback cache and write command sorting, provided thread balancing is disabled.
The thread balancing option is found in the Performance Options window accessed through the Advanced Settings window. If you were to use a larger stripe size, then you run the risk of not being able to cluster sufficiently for the application. In cases where you are performing larger writes to the controller, then you could go up to 4 MB for a stripe size, since you have more data to cluster.
It is recommended to keep the stripe size to MB or less for general use, perhaps increasing it for specific applications such as large sequential accesses. This stripe size is actually the sub-stripe size in RAID 50 cases. If the operating system can do much larger writes, then you may want to increase the chunk size.
With writeback cache enabled, the controller can cache data and perform full stripe writes. This allows the controller to perform a partial full stripe write, where it has most of the data for a full stripe, and can just read some from the drives to complete the stripe.
While Microsoft Windows does 64 KB accesses, these are not aligned. However, since the controller can cluster, this problem is somewhat offset since the controller usually can cluster sufficiently to do full stripe writes.
If the workload is completely random 64K access on Microsoft Windows, then a 64 KB chunk is not the best choice; a larger chunk size (128 KB or 256 KB, for example) is better to minimize the number of commands that cross chunk boundaries. Larger chunk sizes should be used if the operating system is writing large blocks, or with large sequential writes where the controller can cluster sufficiently. In addition to the write cache options, the RAID controller provides additional options to further tune your system for the best possible performance.
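To illustrate the chunk-boundary point made above, the following sketch estimates how often a randomly placed, unaligned 64 KB write ends up spanning two chunks for a few candidate chunk sizes. The candidate sizes are assumptions used only for the illustration.

```python
def fraction_crossing_chunk(io_kb, chunk_kb):
    """Approximate fraction of randomly placed io_kb writes that straddle a chunk boundary.

    For a write smaller than the chunk, only starting offsets in the last
    io_kb of a chunk cause it to spill onto the next drive.
    """
    if io_kb >= chunk_kb:
        return 1.0
    return io_kb / chunk_kb

# 64 KB random, unaligned accesses against a few candidate chunk sizes.
for chunk in (64, 128, 256):
    frac = fraction_crossing_chunk(64, chunk)
    print(f"{chunk:>3} KB chunk: ~{frac:.0%} of 64 KB writes cross a chunk boundary")
```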
Refer to section 7. This ensures the validity of the data and parity stored on the array member drives. Two features of initialization are background and parallel. Once the array is created, initialization automatically begins in the background. While initialization is in progress, logical drives can be created and the disks are made immediately available to the operating system at which time data can be loaded.
You may also choose to stop the initialization, or pause an initialization and then resume it at a later time. If you Stop an initialization, the array will automatically be trusted; see the note below. The array can be initialized at a later time if you choose the Trust option. This option should only be used in environments where the user fully understands the consequences of the function.
The trust option is provided to allow immediate access to an array for testing application purposes only. Note A trusted array does not calculate parity across all drives and therefore there is no known state on the drives.
As data is received from the host, parity is calculated as normal, but it occurs on a block basis. There is no way to guarantee that parity has been calculated across the entire stripe.
The parity data will be inconsistent and so a drive failure within a trusted array will cause data loss. This will open the Array Information window. Type your password and click GO. From the Main screen you can monitor the initialization. The drive member icons of this array will change to an animated icon indicating the array is initializing. You can stop the initialization process, if you wish, by clicking the Stop link located to the right of the progress bar.
The initialization will continue from the point where it was paused. In the event of a drive failure, the controller will use either a defined global spare or a dedicated spare to replace a failed drive that is a member of a fault tolerant array.
The process of configuring your redundant arrays includes assigning one or more drives as hot spares. Global spares are not assigned to a specific array and can be used by any array as the replacement drive, provided the spare drive's capacity is equal to or greater than that of the array member drives. A dedicated spare is assigned to a specific array and can only be used by that array.
Spare Drive Use Rules: Drives must be equal to or greater than the capacity of the array drive members. Note: You may get a variety of messages, warnings, or failure notices when attempting to use an unsupported drive.
Note: There must be at least one drive online and available that meets the rules for spare drives on page 59 to be assigned as a hot spare, and a configuration must exist with at least one array defined. Figure 5—10 Drive Information Window. 3 A pop-up window will appear; click the drop-down menu and select the array to which you wish to assign the dedicated spare.
Only arrays for which the spare drive is large enough to replace any member drive, and of the same drive type, will be displayed in the pull-down menu. For example, suppose you have two arrays, one created using lower-capacity drives (Array 0) and one using higher-capacity drives (Array 1). If the spare drive you are attempting to assign matches only the lower capacity, only Array 0 will be displayed, because the drives in Array 1 are of greater capacity than the spare. However, if the spare drive's capacity is equal to or greater than any drive in either array, both Array 0 and Array 1 will be displayed. The drive will then become online and available for other uses. The Drive Information window will open.
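The capacity rule this window applies can be expressed as a small check: a spare qualifies for an array only if it is of the same drive type and at least as large as the largest member drive. The drive types and capacities below are hypothetical.

```python
def spare_can_protect(spare_capacity_gb, spare_type, member_capacities_gb, member_type):
    """A spare qualifies if it matches the drive type and is no smaller than any member."""
    return spare_type == member_type and spare_capacity_gb >= max(member_capacities_gb)

# Hypothetical arrays: Array 0 uses smaller drives, Array 1 uses larger drives.
array0 = {"type": "SAS", "members": [146, 146, 146, 146]}
array1 = {"type": "SAS", "members": [300, 300, 300, 300]}

small_spare = 146
print(spare_can_protect(small_spare, "SAS", array0["members"], array0["type"]))  # True  -> Array 0 listed
print(spare_can_protect(small_spare, "SAS", array1["members"], array1["type"]))  # False -> Array 1 not listed

large_spare = 300
print(spare_can_protect(large_spare, "SAS", array0["members"], array0["type"]))  # True
print(spare_can_protect(large_spare, "SAS", array1["members"], array1["type"]))  # True
```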
When a new drive is inserted in place of the failed drive, a rebuild operation will begin automatically using the new drive. This option is useful when a global or dedicated hot spare drive is not assigned and you have a fault tolerant array that experiences a drive failure.
This option allows the user to simply insert a replacement drive and the rebuild will begin, instead of opening the Drive Information window for the replacement disk drive and assigning it as a hot spare. Refer to the Spare Drive Rules on page 59. The Advanced Settings window will open. Place the mouse pointer on the check box next to the Auto Spare parameter and click to place a check mark, enabling the feature. You can also restrict access to the logical drive by assigning which controller port the logical drive is made available on.
Only those host systems with access to the specific controller port(s) will have access to the logical drive. A logical drive is defined or created from regions of an array, a whole array, or a combination of regions of different arrays, and can be made available as a single disk to one or more host systems.
If you are creating a logical drive greater than 2 TB, please refer to your operating and file system documentation to verify it supports such sizes.
You may wish to avoid choosing combinations of a region from one array and from another array to create your logical drive. This will reduce data fragmentation.
The Create Logical Drive window will open. See below to use a portion of the region for the logical drive. Regions are the available free space in the array that can be used to create a logical drive.
In Figure 5—15, you will notice all three array regions are identified as Region 0. If you create a logical drive and choose a portion of a region, the remaining free space of that region is relabeled with its number incremented by 1; in other words, it will now be defined as Region 1. If another portion of that remaining free space is used for another logical drive, the value increments again, to Region 2, and so forth.
If you choose to use a portion of free space from each region to construct the logical drive, you will first create the logical drive using one portion of one free region, then use the Expand logical drive function to add a portion of the next region, and repeat again for the other free region. In other words, you use the desired capacity from Array 0 Region 0 to create the logical drive; then, using the Expand logical drive function, choose capacity from the next array's region; and after the expansion is complete, repeat again, choosing capacity from the remaining array's region.
3 Enter a name for your logical drive; you may use up to 32 characters. Holding the mouse pointer over the logical drive name on the Main screen will show the complete name in a popup. You may use all or some of these regions for this logical drive. If you are creating a logical drive greater than 2 TB, please refer to your operating and file system documentation to verify it supports such sizes. You may do this before or after the snapshots have been taken.
Two sizes are available. Select the Controller Ports you wish to make the logical drive available through. Place a check mark next to the desired controller ports displayed. There is a one-to-one relationship between the controller port selected and a data cable connected to that port. Ensure that the ports you select are properly cabled. If the command was unsuccessful, review the settings for incorrect parameters and check the hardware for operational status.
In most storage system environments, creating the logical drives, assigning them their logical unit number (LUN), and setting the availability is sufficient to meet the requirements for setup. Otherwise, access your operating system to make the new drives available for use. When you create or make changes to arrays, logical drives, hot spares, SAN LUN Mappings, feature licenses, or change the parameters of a specific controller setting, a file known as the configuration is written to all the disk drives that are members of the array.
StorView has the ability to capture that file allowing you to save it to an external file. Should the situation occur where a configuration is cleared, you are instantly able to re-establish your storage system by restoring the configuration from the external file. It is recommended to periodically save the configuration. Caution If you cannot restore the configuration exactly as it was, you will not be able to restore access to the data and it will be lost. If you wish to use the default file name, select the directory and click the Save button, otherwise enter the name you wish to use and specify the directory, and then click Save.
Click the Cancel button to exit without making any changes. If this occurs, click the icon and investigate the problem from the information provided. You may wish to investigate the Event log to verify one or more events that changed the controller status; see the event log section. By passing the mouse pointer over each item in the Status group (controller and battery), a pop-up window will appear with specific detailed data.
They include general controller status and battery status. Placing the mouse pointer over the item will display a pop-up window with more detailed information. Status icons appear adjacent to each item in the group along with a text message. Status icon and text message conditions are defined as green - normal, yellow - warning, and red - failed. When the mouse pointer is rolled over the Battery status icon, the pop-up display provides general information about the battery, that is, whether it is charged, charging, or has a fault.
A warning (yellow) icon indicates the battery is low but is charging. An error (red) icon indicates the battery is low and has been charging for over 24 hours and is most likely defective. From this group you can determine the type of processor, onboard memory size, the firmware version, and the CPLD (Complex Programmable Logic Device) firmware version. The configuration can be assigned the WWN of any controller, preferably one of the controllers installed in the enclosure.
The Shutdown button under each controller will cause the controller to execute a graceful shutdown by flushing the cache first then executing the shutdown command. If it is, wait until the controlling application has completed writing the data to the disk before continuing with the shutdown. This will ensure that the backup battery is not holding cache data and will prevent the battery from being drained.
The controllers have the ability to automatically update their partner's firmware in dual controller configurations; however, the update process behaves differently under certain conditions. If one controller has a later version of firmware than its partner controller, during the startup process the later-version controller will automatically update the firmware on the lower-version controller. If a partner controller fails in a dual controller configuration, when the failed controller is replaced, regardless of its firmware version, it will always be updated to match the surviving controller's firmware.
If you want to downgrade the firmware version you must shut down one controller and flash the operating controller. Then shut down the controller which was downgraded, bring the second controller up and flash its firmware to the lower version.
Then start both controllers and resume operations. You will see an acknowledgement window appear indicating the status of the update, followed by the controller automatically restarting. Note: For more information on License Information, see section 1. See section 6. In dual controller configurations, you must shut down one RAID Controller and physically remove it from the enclosure before performing this procedure.
Click the Controller icon located just above the Tool Bar. The Controller Information window will open. You will see an acknowledgement window appear indicating the status of the update, followed by the expansion module automatically restarting.
A window opens prompting to confirm clearing the logs. Click the OK button. Open the event log with a text editor program. This is accomplished through the Advanced Settings window. These functions include managing the identity, fault tolerance, host port settings, drive advanced power management and array performance tuning. The configuration contains all the information that defines the disk arrays, logical drives, SAN LUN Mapping, hot spare drives and controller specific settings.
If you wish to change the configuration name, enter the new name in the block provided. If a new WWN shows up in the list, it is possible to have a configuration WWN on your system even though no controller in the storage system has that WWN.
For example: suppose you have one or two controllers in your system with no configuration. You then create an array, and then pull out Controller 0. If you only had a single controller in the system, you then insert another physical controller.
If you have dual controllers, just leave Controller 1 in place. The Configuration WWN is what is reported to the outside world no matter which port you are plugged into on the system. This way, if you swap controllers (most likely because of a failure), your Configuration WWN will still report the same WWN as it did before, so you will not have to change any mappings on your host or Fibre Channel switch.
If another controller was used to create the configuration, its WWN is displayed. You should assign the configuration WWN to the installed controller. In this case click the pull-down menu and select Controller 0 or Controller 1.
If you modify the Configuration WWN, the feature license key will no longer be valid and the snapshot function feature will be disabled.
Normally, when deselected, a host connected to either port will see the same Configuration WWN. When enabled (selected), you will see a slightly different WWN for each port but the same Configuration name.
This option is useful to users who are connecting the storage to a switch employing a fabric topology where the same WWN is not tolerated. This is beneficial when a hot spare or global spare is not designated for a fault tolerant array and a drive fails in that array. When you assign a hot spare dedicated or global this option is automatically enabled. After creation of the hot spare, the option can be disabled if desired.
Do not interrupt the installation. You will be prompted to restart your computer. Click Yes. Re-connect all storage subsystems. Version 1 of the software installs the following components: the DSM driver (xyrsp2x0), one of which is installed for each type of Xyratex storage enclosure (note that the numeric part of the filename may differ on older versions of the software); the Control Panel GUI (svpmgui); and the installation program.
StorView Path Manager for Windows can be uninstalled in the standard manner. 1 Note: Disconnect all storage subsystems from all HBAs before uninstalling; if this is not done, uninstallation can take a long time, especially on a system with many paths. Click the Start button. Find the entry for StorView Path Manager and click on it. Click the Remove button and follow the on-screen instructions. Reconnect the storage subsystems. The StorView Path Manager software will still be uninstalled by the uninstaller, and there will be no adverse effects on the system.
This is optional, and can be done as follows: 1 Open the Windows Device Manager.
3 User Interface. The control panel automatically updates about every 10 seconds.
At the top of the pane, the Managing Host line shows the network address of the computer currently being managed. By default, this is the local system. It can be changed using the Change Host link see 3. The right hand pane shows one line for each multipathed logical drive with the following information: The Disk Number as shown in the Windows Disk Management.
The capacity in GB. The drive letters of any volumes seen by Windows. A maximum of two letters will be shown; if more exist, this is indicated. An icon identifying that the logical drive is part of a cluster, if applicable. The operating system would normally see each and every path as a separate logical drive, but MPIO enables it to see which paths point to the same logical drive and lists those paths under each drive accordingly.
See Setting the Path Selection Policy for details. The number of paths. Passive: The path is healthy, but not currently in use. Failed: The path has encountered errors; it is not healthy and cannot be used. It may be removed or may recover, depending on the situation. Bus: The HBA port. Target: The target ID of the logical drive (this may not match the controller target ID settings, because Windows uses its own scheme for assigning target IDs).
Any two devices with the same Port, Bus and Target will be on the same physical path. Controller: The controller and controller port to which the path is connected.
This is in the form CxPy, where x is the controller number (0 or 1) and y is the port number (0 or 1). The left hand pane contains two links: Advanced Configuration (see 3.) and Tech Support (see 3.). In normal use you should not need to alter any of the settings from their defaults. The Advanced Configuration window is accessed by clicking the link of the same name in the Control Panel. The options presented here have useful defaults and generally don't need to be changed to get MPIO running successfully.
The following configuration items are present:
Retry Count: The number of times the driver will retry a failed path before it is declared invalid. It will then be removed after the period of time specified in the PDO Remove option.
Retry Interval: The time in seconds that the driver waits before retrying a path that returned an error.
Path Verification Period: The interval in seconds for checking idle paths (see the Path Verify option above).
Auto Balance: If this option has a check mark, preferred paths of all LUNs are automatically balanced evenly between host ports and controller ports. Additionally, when this feature is enabled, the storage administrator no longer has to manually manage preferred paths.
Initial Rebalance Interval: The time in seconds that the driver waits after booting up before performing Auto Balance of preferred paths. If Auto Balance is not checked, this value is ignored. Auto Balance will occur if a path is added or removed. To avoid multiple rebalances as paths are initially discovered, this value allows a minimum time for the first Auto Balance occurrence.
Rebalance Interval: The time in seconds that the driver waits, after the initial rebalance interval has elapsed, before responding to configuration changes by performing Auto Balance of preferred paths.
To avoid multiple rebalances in a short period of time, this value allows a minimum interval between Auto Balance occurrences. Click Apply to make changes, or Close to exit without altering the settings.
The qualified host name (for example, host1) or the IP address of the host. Note: The user name and password of the remote host must be identical to that of the current host in order to make a connection. If a connection is established, the Managing label will change to the name of the new host computer and the Control Panel will reflect the settings on the new system.
Otherwise an error message will appear. DSM traces can also be generated from this window. The trace should be sent to your storage vendor as it may contain useful debug information.
Traces can be enabled or disabled by clicking on the link of the same name (a reboot will be required for the change to take effect). The results will be shown in Windows Notepad. Note that if the Add button is used to add MPIO support to a storage device, the specified storage device will be under control of the generic Microsoft DSM and the specialized Xyratex driver will not be loaded. Instead of using the Add button on this tab, follow the Installation instructions documented in 2.
They will only show up on this tab if you have not installed the appropriate version of StorView Path Manager for Windows. Rather than using this tab, follow the Installation instructions documented in 2. These are two different policies. Click OK. To view path properties, select a path and click Edit. The figure below shows the resulting screen. From this screen, the user can modify the Preferred Path. If more than one path is marked as Preferred, no change to the Preferred Path will be made.
Changes to the Path State are ignored. How To: 4 Click on the Path Selection Policy for the logical disk. The Path Selection Policy window will open. So the best course of action, as I suggest, would be to contact Xyratex. Of course, I am assuming that you are getting this issue pretty consistently, so that you can verify it with the package reliably. As the connection that is failing is maintained by the MPIO software, that seems the most likely culprit.
This is not functionality provided by Windows Server natively. I would work with the vendor. I looked at the Disk Management console and saw the DB disks were gone! This problem goes away when I restart the server. I did this, and for the last 3 months it was OK, but last night it happened again. It is our monitoring system and sends very important messages.
Alert occurrence sequence in the LOG files: If the disk is getting pulled, that means all paths are lost. You'll want to check any logging available on the array to determine why those paths are being lost so you can fix it, but assuming the path loss is transient, there are some registry keys you can tune to help survive it.
To explain, when a path failure occurs a worker thread is kicked off that will run at a fixed interval to determine if the path has been restored. This is important because when all paths are lost, the PDORemovePeriod timer starts and if no paths are restored to the disk before it expires, the disk is pulled.
If you're experiencing transient path loss (maybe a bad switch or an issue on the array causes paths to go away and return quickly), then if Windows misses the PnP notification for the path arrival, the path will not be restored prior to PDORemovePeriod expiring and you'll lose the disk even if there is a valid connection.
The registry key prevents that from occurring. The 2nd set of keys performs the same function, only for Persistent Reservation (PR) commands. Normally you'd want to adjust the PDORemovePeriod to account for periods where you know paths will be lost for a period of time (back-end controller failover, for example), but if that isn't a concern then setting it to 60 seconds or so is a good starting point.
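If you would rather script such a change than edit the registry by hand, the sketch below shows how a DWORD value like PDORemovePeriod could be written with Python's winreg module. The registry path is an assumption: these timer values normally live under the Parameters key of the DSM that owns the disks (for example msdsm for the Microsoft DSM; a vendor DSM such as the StorView Path Manager driver uses its own service name), so verify the correct key on your system before writing, run the script with administrative rights, and remember a reboot may be required.

```python
# Sketch only: adjust an MPIO/DSM timer value such as PDORemovePeriod.
# ASSUMPTION: the service name below must match the DSM actually in use
# (e.g. "msdsm" for the Microsoft DSM); verify the key exists before writing.
import winreg

DSM_SERVICE = "msdsm"  # hypothetical; substitute your DSM's service name
KEY_PATH = rf"SYSTEM\CurrentControlSet\Services\{DSM_SERVICE}\Parameters"

def set_dword(value_name, value):
    """Write a REG_DWORD under the DSM's Parameters key (requires admin rights)."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, value_name, 0, winreg.REG_DWORD, value)

# 60 seconds is the starting point suggested above for surviving transient path loss.
set_dword("PDORemovePeriod", 60)
```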