Wednesday, February 8, 2012

NetApp cabling and hardware basics


In this post we will review some NetApp back-end connectivity and hardware basics: disk ownership, single-path and multipath cabling, partner cabling, and so on. I decided to write this article because I’ve been working with NetApp for some time, but always managing the systems remotely; I never had the chance to stand in front of one of these amazing storage boxes, and I never found a single NetApp document explaining all of this in one place. Figuring it out has been a challenge, so I think it would be nice to share my progress with others who are starting to work with NetApp storage.
Let’s see what we are going to talk about:
  • Head FC ports
  • Shelves and modules
  • Disk ownership (hardware and software)
  • Cabling (single path and multipath, single nodes and clusters)

Head FC ports

Filers need FC ports in order to connect to fabrics and to disk shelves (SAS shelves also exist, but we will only cover FC shelves in this post). This is the rear view of a FAS3020; as you can see, it comes with 4 onboard FC ports (marked in orange):
[Image: rear view of a FAS3020 with the four onboard FC ports highlighted]
These ports are named 0a, 0b, 0c and 0d and, as said, they can be used to connect shelves (configured as initiators) or to connect to hosts (configured as targets). You can check and change whether an FC port is a target or an initiator with the fcadmin config command.
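For example, flipping a port from target to initiator looks roughly like this on the console (a hedged sketch using 7-Mode syntax; 0c is just an example port, and the type change only takes effect after a reboot):
filer> fcadmin config
filer> fcadmin config -d 0c
filer> fcadmin config -t initiator 0c
filer> fcadmin config -e 0c
The first command lists every FC port and its current type; the port is then taken offline, switched to initiator (shelf-facing) and re-enabled.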
As you can see there are also 4 expansion slots where you can install HBAs to give the filer additional FC ports. In that case the ports are named ‘#X’, where ‘#’ is the number of the slot where the HBA is installed and ‘X’ is the port index, usually ‘a’ or ‘b’ since most HBAs are dual-port.
You can use the sysconfig command to see how many ports your system has and what is connected to them. Whatever hangs off a given port is known as a loop or stack; for example, if port 0a has 3 shelves behind it, that is a loop (or stack) of 3 shelves. Here is an example from a NetApp simulator; since it is a simulator, the adapters are named v0, v1, v2, and so on:
[Image: sysconfig output from a NetApp simulator]
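If you can’t see the screenshot, the relevant part of sysconfig looks roughly like this on a real FAS system (an abridged, illustrative reconstruction from memory; disk counts and module types will differ on your box):
filer> sysconfig
  ...
  slot 0: FC Host Adapter 0a
           28 Disks: 19040.0GB
           2 shelves with ESH4
  slot 0: FC Host Adapter 0b
           14 Disks: 9520.0GB
           1 shelf with ESH4
  ...
Each FC adapter is listed with the number of disks and shelves sitting behind it, which is exactly the loop/stack concept described above.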

Shelves and Modules

Like any storage system, besides the controllers (or heads) it has shelves that contain the disks. NetApp offers different types of disks, shelves and modules, and the type of disk you select determines which shelf types and module types you can use. Check the following diagram to understand the possible combinations (as we already said, we won’t cover SAS shelves, but you can take a look at http://www.netapp.com/mx/products/storage-systems/disk-shelves-and-storage-media/disk-shelves-tech-specs-la.html for further information):
[Image: diagram of supported disk, shelf and module combinations]
Let’s see what each box means:

Disk type:

  • ATA Disks: well… you are probably already familiar with this type of disk.
  • SATA Disks: also a well-known type; the serial successor of ATA.
  • FC disks: Fibre Channel disks.
To read more about disk types, especially supported types and sizes, there is a very good NOW article: Available disk capacity by disk size (you might need to register to read it).

Shelf Models

  • DS14mk2: this shelf accepts 14 FC or ATA disks, takes 3 rack units and accepts ESH2 and AT-FCX modules.
  • DS14mk4: this shelf accepts 14 FC disks, takes 3 rack units and only accepts ESH4 modules.

Modules

Modules allow different disk shelves to be connected to the storage FC loops.
  • ESH2: this module is used with FC disks and connects the shelf to a 2 Gb FC loop.
  • ESH4: this module is used with FC disks and connects the shelf to a 4 Gb FC loop.
  • AT-FCX: this module is used with ATA or SATA disks and connects the shelf to a 2 Gb FC loop.
This is a general guide so you can understand the concepts and the differences between disks, shelf types and modules; for further information check the technical specs of your storage box. Here you can see the rear view of a disk shelf with only one module installed. Each shelf has room for 2 modules: module A (in the top slot) and module B (in the bottom slot, empty in the next picture).
[Image: rear view of a disk shelf with a single module installed]
As we said before, shelves are connected in a loop or stack. Each module has an IN port and an OUT port that allow shelves to be daisy-chained, and between the modules there is a small display that lets you set the shelf ID within the loop. In the following picture you can see a single FAS3020 controller with 2 shelves in loop 0a:
[Image: single FAS3020 controller with 2 shelves cabled in loop 0a]
This is a single-path configuration, but we will talk about that later. The picture shows how the rx (receive) and tx (transmit) fibers from port 0a on the filer head are connected to the IN port of module A in shelf 1; then from module A in shelf 1, rx and tx fibers run from the OUT port to the IN port of module A in shelf 2.

Disk ownership

Well, here we will make a quick stop and just say there are two types of disk ownership. But first, let’s define ownership: disks are owned by a filer, which is the one that manages the LUNs, shares, exports, snapshots and all other operations on the volumes it hosts. In a cluster scenario each filer must know its partner’s disks, but it only takes ownership of those resources in case of a takeover.
About the types of ownership, we have:
  • Software based: the less common one; disk ownership is managed with the disk assign command, and disks owned by a filer may be spread across any of the shelves belonging to the cluster.
  • Hardware based: the most common one; the filer connected to the A module of a shelf owns its disks.
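On a software-ownership system, managing this from the console looks roughly like the following (a minimal hedged sketch, 7-Mode syntax; filer1 and disk 0a.16 are made-up examples):
filer> disk show -v
filer> disk assign 0a.16 -o filer1
filer> disk assign all
disk show -v lists every disk with its current owner (if any), disk assign hands a specific disk to filer1, and disk assign all gives every unowned disk to the local filer.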

Cabling

NetApp supports lots of cabling configurations, so let’s review them from the simplest one to the more complex ones. There are other supported configurations, such as MetroCluster, but here we will cover the most common ones:
  • Single node, single path
  • Single node, multipath
  • Cluster, single path
  • Cluster, multipath

Single node, single path

The simplest configuration, and the least redundant one, since it has many single points of failure: a single controller, only one loop to each stack and only one module per shelf. In this configuration, rx (receive) and tx (transmit) fibers are connected from any of the FC ports on the controller to the IN port of module A in the first shelf of the loop or stack; then from module A in that shelf, rx and tx fibers run from the OUT port to the IN port of module A in shelf 2, and so on for all the shelves in each stack.
[Image: single node, single path cabling diagram]
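To confirm what ended up behind each port, a couple of console commands help (a hedged sketch, 7-Mode syntax; output omitted since it depends entirely on your cabling):
filer> fcadmin device_map
filer> storage show disk -p
fcadmin device_map draws a loop map per adapter showing which shelf IDs and disks it can reach, and in a single-path setup storage show disk -p shows a primary path for each disk with an empty secondary column.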

Single node, multipath

This configuration is safer than the previous one since it removes all single points of failure but one (there is still only one controller):
[Image: single node, multipath cabling diagram]
As you can see, in this case shelves have 2 modules each and there are two loops connected to the same stack of shelves (0a, solid line, and 0c, dotted line).
Why have we used 0a for the primary loop and 0c for the secondary? Well… there is no actual limitation, you can use any adapter you like, but NetApp recommends pairing 0a with 0c, and likewise 0b with 0d, even across nodes; we will talk about this later.
Remember when we talked about disk ownership? If hardware disk ownership is in place (which it is in 99% of the cases), the 0a loop, connected to the A modules, owns the disks; in case of a faulty module or fiber cable, the resources start being accessed through the 0c loop. If you run sysconfig as we saw earlier in this post, you would see 6 shelves on the system: 3 attached to 0a and 3 attached to 0c. Using the environment shelf command and the storage show disk -p command you can identify which shelves are duplicated, which loop is connected to the A modules and which to the B modules.
For example, if you run the environment shelf command you would obtain something like this for each shelf on the system; if you have 3 shelves in 2 loops, you would see it 6 times:
Channel: v0
Shelf: 1
SES device path: local access: v0.17
Module type: LRC; monitoring is active
Shelf status: normal condition
SES Configuration, via loop id 17 in shelf 1:
logical identifier=0x0b00000000000000
vendor identification=XYRATEX
product identification=DiskShelf14
product revision level=1111
Vendor-specific information: 
Product Serial Number:          Optional Settings: 0x00
Status reads attempted: 844; failed: 0
Control writes attempted: 3; failed: 0
Shelf bays with disk devices installed:
13, 12, 11, 10, 9, 8, 6, 5, 4, 3, 2, 1, 0
with error: none
Power Supply installed element list: 1, 2; with error: none
Power Supply information by element:
[1] Serial number: sim-PS12345-1  Part number: <N/A>
Type: <N/A>
Firmware version: <N/A>  Swaps: 0
[2] Serial number: sim-PS12345-2  Part number: <N/A>
Type: <N/A>
Firmware version: <N/A>  Swaps: 0
Cooling Unit installed element list: 1, 2; with error: none
Temperature Sensor installed element list: 1, 2, 3; with error: none
Shelf temperatures by element:
[1] 24 C (75 F) (ambient)  Normal temperature range
[2] 24 C (75 F)  Normal temperature range
[3] 24 C (75 F)  Normal temperature range
Temperature thresholds by element:
[1] High critical: 50 C (122 F); high warning 40 C (104 F)
Low critical: 0 C (32 F); low warning 10 C (50 F)
[2] High critical: 63 C (145 F); high warning 53 C (127 F)
Low critical: 0 C (32 F); low warning 10 C (50 F)
[3] High critical: 63 C (145 F); high warning 53 C (127 F)
Low critical: 0 C (32 F); low warning 10 C (50 F)
ES Electronics installed element list: 1, 2; with error: none
ES Electronics reporting element: none
ES Electronics information by element:
[1] Serial number: sim-LS12345-1  Part number: <N/A>
CPLD version: <N/A>  Swaps: 0
[2] Serial number: sim-LS12345-2  Part number: <N/A>
CPLD version: <N/A>  Swaps: 0
The first value marked in bold is the serial number of the shelf (since this output was obtained from a simulated filer, the serial number here is missing), and the other two serial numbers marked in bold identify the A and B modules of that shelf, respectively. So the serial numbers help you understand which shelves are connected to which loops; then the storage show disk -p command helps you identify which loop is the primary:
[Image: storage show disk -p output from the simulator]
As you can see, the 3 simulated shelves we saw in the sysconfig output are connected only to the v0 adapter (the NetApp simulator does not emulate multipathing to shelves), and you can also see that the primary port is A.
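For reference, on a real multipathed system you would expect both path columns to be populated, along these lines (a made-up sample; the column layout is reconstructed from memory of 7-Mode output and the disk/shelf numbers are invented):
PRIMARY  PORT  SECONDARY  PORT  SHELF  BAY
-------  ----  ---------  ----  -----  ---
0a.16    A     0c.16      B         1    0
0a.17    A     0c.17      B         1    1
0a.18    A     0c.18      B         1    2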

Cluster, single path

Take a look at the following configuration:
[Image: cluster, single path cabling diagram]
We have 2 nodes interconnected by an InfiniBand cable (this interconnect is used for the heartbeat and other cluster-related operations and checks), and then we have 2 stacks: one with 3 shelves and another one with 2 shelves. In the first stack all the A modules are connected to the 0a loop of controller 1, while the B modules are connected to the 0c loop of controller 2. As in a single-node multipath configuration the 0a/0c pair is used, but the difference is that now the 0a loop belongs to the owning filer and 0c to its partner, instead of 0a and 0c being on the same node. And then we have the very same configuration for node 2: its 0a loop is connected to the A modules of the second stack, and adapter 0c of controller 1 connects the partner to that stack.
In this configuration there is no single point of failure, but there is still one downside: if an A module, or a fiber in a primary loop, fails, a takeover of the resources is executed by the partner. For example, if module A in shelf 1 (the upper one) of the 0a loop of controller 1 fails (I know it might sound confusing, read it twice if necessary, I had to hehehe), controller 1 would lose connectivity to the whole stack; in that case, controller 2 would have to take over the resources of controller 1 in order to continue serving storage. Unfortunately, the takeover implies the CIFS service being restarted and all CIFS connections being dropped, so it can be really disruptive.
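Takeover and giveback are driven by the cf subsystem; these are the commands you would typically use around such an event (a minimal hedged sketch, 7-Mode syntax):
filer2> cf status
filer2> cf takeover
filer2> cf giveback
cf status tells you whether clustering is enabled and both nodes are healthy, cf takeover lets you take over the partner’s resources manually (for example before planned maintenance on it), and cf giveback returns the resources once the partner is back.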

Cluster, multipath

Ok, now take a look at the following graph and go crazy!

[Image: cluster, multipath cabling diagram]
Let’s describe the connections in writing, because it is really hard to follow the lines hehe:
Controller 1 (left) has:
  • 0a loop connected to A modules in stack 1 (3 shelves, left).
  • 0c adapter is connected to B modules in stack 2 (2 shelves, right).
  • 0b loop is used to provide a second path for controller 1 to stack 1; it is connected to the OUT port of the B module of the last shelf in the loop. This way, if an A module fails in any shelf of this stack, controller 1 still has access to the disks without needing to fail resources over to the partner.
  • 0d port is connected to the OUT port of the A module of shelf 2 in stack 2; this way, if resources have been failed over from controller 2 to controller 1, you still don’t have a single point of failure.
Controller 2 (right) has:
  • 0a loop connected to A modules in stack 2 (2 shelves, right).
  • 0c adapter is connected to B modules in stack 1 (3 shelves, left).
  • 0b loop is used to provide a second path for controller 2 to stack 2; it is connected to the OUT port of the B module of the last shelf in the loop. This way, if an A module fails in any shelf of this stack, controller 2 still has access to the disks without needing to fail resources over to the partner.
  • 0d port is connected to the OUT port of the A module of shelf 3 in stack 1; this way, if resources have been failed over from controller 1 to controller 2, you still don’t have a single point of failure (see the quick check right after this list).
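A quick console sanity check for this cabling could look like this (a hedged sketch, 7-Mode syntax; illustrative only):
filer1> storage show disk -p
filer1> cf status
filer1> fcadmin device_map
Every disk should list both a primary and a secondary path, cf status should report both nodes up with a healthy interconnect, and the loop maps should show 0a/0b reaching the local stack and 0c/0d reaching the partner’s.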
As you might have already guessed, this is the most redundant configuration (at least among the standard ones; I have never worked with MetroCluster, for example, so I can’t talk about it). The only downside of this configuration is that you have to use 2 FC ports per head to provide access to each stack of shelves, which can become really expensive in a FAS6240 environment with LOTS of shelves in different stacks.
I hope you liked the article and that it helped you understand some of the basics. As always, questions and comments are welcome.
