FC Thin Provisioning for iSCSI and FC

Presentation on theme: "FC Thin Provisioning for iSCSI and FC"— Presentation transcript:

1 FC06-053 Thin Provisioning for iSCSI and FC
Rick Jooss SANiSAN Business Unit Transcript: My name is Rick Jooss and I work in our SANiSAN Business Unit as a Technical Marketing Engineer or, as we're now called, a Product and Partner Engineer. We'll try to talk a little bit about thin provisioning in the iSCSI and Fibre Channel space. I'd like to get a little bit of a show of hands. I did a presentation at kickoff which (inaudible) some people might remember, where we talked about thin provisioning. How many people saw that presentation? Okay, a pretty small percentage, okay. Because we have a lot of the same information here, but I want to have a little bit different focus. And so what we'll do is we'll spend more time on the newer stuff around the configurations, but I wanted to kind of get a feel for how many people have already seen that.

2 Agenda
What is Flexible or Thin Provisioning
Types of Thin Provisioning
ONTAP Variables
Configurations
Default Configuration
Snapshot - Snapshot Auto delete
Snapshot - Volume Auto grow
Snapshot - Volume guarantee none
LUN - Volume guarantee none
LUN - LUN reservation disabled
Transcript: So as an agenda, what we're going to talk about, we're going to talk about what is thin provisioning or flexible provisioning? What types of thin provisioning we have in the NetApp space. And then we're going to talk about the ONTAP variables that affect thin provisioning, or affect the amount of space we're consuming. And then we're going to talk about configurations. When I did this presentation last time, we spent a lot of time on the variables and a little time on the configurations. So what I want to do now is spend more time on some example configurations that we think make sense. But we need to run through the variables as well. One thing I'd like to ask is, when people have questions, raise your hand and ask. If it was two slides ago, that's fine, we'll go back. If you have the question, I'm certain that other people in the room have the same question. There's a lot of confusing material here, so be sure to ask. And, as I said, if you have the question, I'm sure somebody else does, and I want to make sure we get through that as best we can. Author’s Original Notes: Goal is to cover the concepts surrounding provisioning and space usage, not to cover specific commands in detail. We are going to quickly cover the variables involved. Those were covered in more detail in the kickoff presentation. That section is more review and we want to spend some more time on the best practices configurations, talking about how they work and what the advantages and disadvantages of each configuration are.

3 What is Thin Provisioning?
Presenting more space to the servers than the storage system actually contains
Classic examples from other industries:
Banks
Water Companies
Electric Companies
Insurance Companies
Large systems are needed
Transcript: So what is thin provisioning? Thin provisioning is really when we present more space out to the servers, or out to our end users, than what our storage system actually contains. In other industries, there are all kinds of examples of thin provisioning that are done; they usually tend to be big utility-type things. Water companies, electric companies, the banks certainly do thin provisioning. If we all go to the banks to try to get our money out at the same time, that doesn't work very well. We can maybe learn something from 1929 in terms of that. And the insurance company I think is actually the best example of thin provisioning. If you look at an insurance company, what they do is they're very, very focused on trying to manage their risk. And that's what we have to do if we're going to thin provision as well. If you look at an insurance company, they'll go ahead and try to geographically disperse themselves, as well as the types of insurance they offer. What that translates to in a storage environment is we have to be careful: if we're going to thin provision, we probably don't want to thin provision with one type of storage. We want to probably have some Exchange storage or some end user data and maybe different databases. And even if I would have the same type of data, then what I want to do is make sure I do that among different groups, so that their need for storage isn't all going to hit at the same time. And if we look at an insurance company, if they had focused only on southeast Louisiana last year, they'd be out of business today, right? But they don't do that, and that's on purpose. They have armies of people that look at where their risk is and where they need to diversify.
So we need to kind of have that same type of mentality when we're looking at thin provisioning storage as well. And the last bullet here is large systems are generally needed. Again, this goes back to the insurance company. An insurance company doesn't insure one individual person, right? That's not insurance, that's not risk management, that's called taking a bet. And generally if we're going to be doing that in the storage system as well, the more users and the more volumes we're configuring, the more it makes sense to be doing thin provisioning. So there are cases where we can do it with a relatively small number. We probably can't do it with one; that's probably too high a risk. And we can think about that, in some cases it might. And Jeremy, if you're going to come, you can't sleep. Author’s Original Notes: Examples tend to be more transient cases – i.e. you use it and then you don’t need it any more. Doing thin provisioning with FlexClones is not covered, but people seem to have a good understanding of that and are selling that pretty hard. The insurance example is the best one to focus on here because the model is very similar. If you were an insurance company that only sold insurance in southern Louisiana, you would have been put out of business last year. Insurance companies very purposefully spread their risk by insuring in different areas, geographically as well as in types of coverage. If you are going to thin provision, you need to think the same way. Thin provisioning only makes sense if we can have different types of data.

4 Types of Thin Provisioning
Snapshot Space
Creating, mapping and snapshotting 100 x 100GB LUNs with less than 20+ TB (< 2X + Δ)
In the NAS world snapshot space is always thin provisioned
LUN Space
Creating and mapping 10TB (100 x 100GB) of LUNs when the storage system only has 8TB
Only makes sense when one is not creating snapshots because having less than 10TB with snapshots is not reasonable
Transcript: So there are two different types of thin provisioning in our space. So when we're thinking from a NetApp perspective, there's thin provisioning of snapshot space. And what that comes out to is, if we take an example, we're going to configure 100 100-gigabyte LUNs. So normally, for the LUNs themselves, we need 10 terabytes. In our default recommendations for a SAN environment, we'd actually need 20+ terabytes, right, because we've got this equation 2X + delta. But it's not reasonable to really expect customers to want to do that. I've only met a couple of customers who are really actually willing to pay that price. So we're going to talk about how we can thin provision our snapshot space. And, in particular, this is where we think people should be thin provisioning today. If we look in the NAS world, people have always done thin provisioning; it's been thin provisioned since the beginning. I don't know anybody who has 100% space reservation in their NAS space, so it makes a lot of sense. And we've made some changes to ONTAP which really allow us to do this with much lower risk. We can control what we want to affect if we don't have enough space, so we'll go into that. The other thing that I can thin provision then is LUN space. And doing that only makes sense if I'm not creating snapshots. If I'm going to have 100 100-gigabyte LUNs, that's 10 terabytes of storage, and it's not reasonable to have less than 10 terabytes if I'm creating snapshots as well.
Because snapshots are going to consume some space as well, it only makes sense to think about thin provisioning LUN space if I'm not creating snapshots. In the example I have listed here, you have 10 terabytes of LUNs and maybe your backend storage only has 8 terabytes. But LUNs tend to fill up with time, so thin provisioning LUNs is something I can generally do for some period of time; that might be days, weeks, maybe even months. But eventually those LUNs are going to fill up, so I can't thin provision LUNs on an eternal basis, I can't keep doing that today. Next year we believe we'll have some technology that'll let us do that as well, but today we have to think of thin provisioning LUNs as kind of a short-term solution that'll go for days or weeks, maybe months, but not for a lot longer than that. Author’s Original Notes: In the NetApp world it can be thought of as having two different types of thin provisioning
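The "2X + delta" sizing rule above works out as follows. This is illustrative arithmetic only, not NetApp code, and the 1 TB snapshot delta is an assumed figure for the example; the real delta depends entirely on the data's rate of change.

```python
# Illustrative arithmetic for the default "2X + delta" SAN sizing rule:
# X for the LUNs themselves, another X of overwrite reserve taken at the
# first snapshot, plus delta for accumulated snapshot data.
# The 1 TB (1000 GB) delta below is an assumed figure, not from the talk.

def default_san_sizing_gb(lun_count, lun_size_gb, delta_gb):
    x = lun_count * lun_size_gb          # space presented as LUNs
    return 2 * x + delta_gb              # 2X + delta

lun_space = 100 * 100                                   # X = 10,000 GB presented
needed = default_san_sizing_gb(100, 100, delta_gb=1000)
print(lun_space)  # 10000 GB of LUNs the servers see
print(needed)     # 21000 GB required -- the "20+ TB" the slide calls unreasonable
```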

5 Agenda
What is Flexible or Thin Provisioning
Types of Thin Provisioning
ONTAP Variables
Configurations
Default Configuration
Snapshot - Snapshot Auto delete
Snapshot - Volume Auto grow
Snapshot - Volume guarantee none
LUN - Volume guarantee none
LUN - LUN reservation disabled
Transcript: So let's talk about the ONTAP variables that affect space usage. Author’s Original Notes: Goal is to cover the concepts surrounding provisioning and space usage, not to cover specific commands in detail. We are going to quickly cover the variables involved. Those were covered in more detail in the kickoff presentation. That section is more review and we want to spend some more time on the best practices configurations, talking about how they work and what the advantages and disadvantages of each configuration are.

6 ONTAP variables affecting space usage
Guarantee
LUN reservation
Fractional (space) Reservation
Snap Reserve
Auto Delete
Auto grow
Try first
Transcript: So if we look, there's volume guarantees, so we'll talk about what a guarantee is. We'll talk about LUN reservations, and fractional or space reservations, which is, by far, the most confusing for most people. We'll talk about Snap Reserve. I think people understand, for the most part, what Snap Reserve is. We'll talk about a couple of, I'll call them new features that came out with 7.1, auto delete and auto grow. And we'll also talk about the try-first variable, which just determines which one of those will be used first.

7 Guarantees
Set on a per volume basis
Determines when space is allocated from the aggregate
Possible settings:
Volume (default): Space is reserved from the aggregate at volume creation time. This space is not available to other volumes regardless of how much is really used.
None: No space is reserved from the aggregate at volume creation time. Space is taken from the aggregate as data is written. Allows thin provisioning of volumes. Any individual volume still is not allowed to take more space than its size.
File: The same as none but allows individual LUNs (or files) to set space reservations to ensure they have adequate space.
Transcript: So if we start with the guarantee, a guarantee is set on a volume basis. And what a guarantee does is it determines when space is going to be allocated from the aggregate. So it doesn't determine when the space is going to be used. What it's going to do is determine when that space is actually allocated, from an accounting perspective, from the aggregate. There are three different values. Really, the only two that make a lot of sense to talk about are going to be the first two. So if we look at volume, this is the default. What that means is when I create a volume, at creation time the space is going to be allocated from the aggregate. So if I have a 10-terabyte aggregate and I go ahead and I create a 100-gigabyte volume, it's going to take that 100 gigabytes out of that aggregate and it's not going to be available for anybody else. It doesn't actually write any data; it's just that, from an accounting perspective, that space isn't available to any other volumes to be used. The other extreme is if I go to a none guarantee. If I set a guarantee to none and I go ahead and I create that volume, no space will be used from the aggregate. So I create that LUN, I create that volume, and when I create it, no space will be taken from the aggregate.
Only when space is actually used, when I actually do writes to an object, will that space actually then get consumed and be taken out of the aggregate. And the last one, file. File is actually probably a misnomer, it should probably be called LUN. It looks exactly like the none case, except that when I create a LUN, it'll take that space out of the aggregate at LUN creation time, which is a different situation. Normally under none, it doesn't matter what I do inside my volume; that space won't be taken out of the aggregate until it's actually used. But I don't really know of any configurations where file really makes any sense to be using. So really the first two are the most important ones here, volume and none; those are the two that we're really most interested in. We won't talk about file in any of the example configurations either. In fact, I've heard talk that we'd like to try to deprecate that. I don't know if that'll happen or not, but I don't know of any good use cases for it. So you guys have got to start asking questions here or it's going to be a long hour for you guys. I also want to comment, the reason my voice is this way, I lost my voice a couple of days ago, and it's not because I was out too late last night. Author’s Original Notes: File should be called LUN since that is typically what it is used for. Guarantees are new with FlexVols. Guarantees define when space is taken or reserved from the aggregate for a volume.
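The two guarantee settings that matter can be sketched as a toy accounting model. This is purely illustrative, not NetApp code: guarantee=volume charges the aggregate at creation time, guarantee=none charges it only as data is written, and a volume can never use more than its own size either way.

```python
# Toy model (not ONTAP code) of volume guarantees, per the slide:
# guarantee="volume" charges the aggregate at creation time;
# guarantee="none" charges it only as data lands.

class Aggregate:
    def __init__(self, size_gb):
        self.size, self.allocated = size_gb, 0

    def free(self):
        return self.size - self.allocated

class Volume:
    def __init__(self, aggr, size_gb, guarantee="volume"):
        self.aggr, self.size, self.guarantee = aggr, size_gb, guarantee
        self.used = 0
        if guarantee == "volume":
            aggr.allocated += size_gb       # full size taken up front

    def write(self, gb):
        if self.used + gb > self.size:
            raise IOError("volume full")    # can never outgrow its own size
        if self.guarantee == "none":
            if gb > self.aggr.free():
                raise IOError("aggregate full")
            self.aggr.allocated += gb       # charged only as data is written
        self.used += gb

aggr = Aggregate(200)
thick = Volume(aggr, 100, guarantee="volume")
print(aggr.free())   # 100: charged at creation, though nothing is written yet
thin = Volume(aggr, 400, guarantee="none")   # oversubscribed: 400 > 100 free
print(aggr.free())   # still 100: creation charges nothing
thin.write(50)
print(aggr.free())   # 50: charged as the data actually lands
```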

8 LUN Reservation
Set on a per LUN basis
Determines when space is allocated from the volume
Possible settings:
Enabled (default): Space is reserved from the volume at LUN creation time. This space is not available to other LUNs or files regardless of how much is really used.
Disabled: No space is reserved from the volume at LUN creation time. Space is taken from the volume as data is written.
Transcript: LUN reservations. So if we look, before, we had guarantees. What that meant was it determined when space was going to be taken out of the aggregate when I did volume creations. So LUN reservations, and when we talk about LUN reservations here, we're talking about space consumption. We're not talking about SCSI-2 or SCSI-3 persistent reservations, so let's not confuse that. What this determines is when space is going to be allocated from the volume. So the guarantees affect when volumes take space out of the aggregate. The LUN reservation determines when LUNs take space out of the volume, so we can kind of think of that as that nesting that we have: we go to the volume first and then we go down to the LUN, and this is going to determine when space is taken out of the volume. There are two different settings for this. One is enabled, that's our default. That means that when I create a LUN, that space is automatically going to be allocated, or taken out from an accounting perspective, from my volume. And that means that space isn't going to be available for anybody else to use. I can't create more LUNs than what the size of my volume is. I can also disable it, and I can do this either at LUN creation time or I can set that afterwards. And if I create a LUN with the reservation disabled, I can go ahead and enable that later by using the LUN command. Vice versa, if I have it enabled at the beginning, I can also turn it off and disable it too. And I can do that all online. And that's also true for volume guarantees, I can set those as I wish. A question? AUDIENCE QUESTION: (Inaudible).
So if I set my volume guarantee to none and I set my LUN reservation to zero, I can create a 400-gigabyte LUN, yes. There was another question? AUDIENCE QUESTION: (Inaudible).
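The nesting described above can be modeled the same way one level down. Again a toy sketch, not NetApp code: the LUN reservation governs when space is charged against the volume, just as the guarantee governs charges against the aggregate.

```python
# Toy model (not ONTAP code) of the LUN reservation, per the slide:
# reservation enabled charges the volume at LUN creation time;
# reservation disabled charges it only as data is written.

class Volume:
    def __init__(self, size_gb):
        self.size, self.allocated = size_gb, 0

    def free(self):
        return self.size - self.allocated

class Lun:
    def __init__(self, vol, size_gb, reservation=True):
        self.vol, self.size, self.reservation = vol, size_gb, reservation
        self.used = 0
        if reservation:
            if size_gb > vol.free():
                raise IOError("volume too small")  # no overcommit when reserved
            vol.allocated += size_gb               # charged at LUN creation

    def write(self, gb):
        if not self.reservation:
            if gb > self.vol.free():
                raise IOError("volume full")
            self.vol.allocated += gb               # charged only as data lands
        self.used += gb

vol = Volume(100)
a = Lun(vol, 60)                     # reservation enabled (the default)
print(vol.free())                    # 40: charged up front
b = Lun(vol, 60, reservation=False)  # oversubscribed within the volume
print(vol.free())                    # still 40: creation charges nothing
b.write(10)
print(vol.free())                    # 30: charged as data is written
```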

9 Guarantees
Set on a per volume basis
Determines when space is allocated from the aggregate
Possible settings:
Volume (default): Space is reserved from the aggregate at volume creation time. This space is not available to other volumes regardless of how much is really used.
None: No space is reserved from the aggregate at volume creation time. Space is taken from the aggregate as data is written. Allows thin provisioning of volumes. Any individual volume still is not allowed to take more space than its size.
File: The same as none but allows individual LUNs (or files) to set space reservations to ensure they have adequate space.
Transcript: Right, so guarantees only take effect for volumes, that's right. I'm sorry, I have to repeat the question. The question was regarding guarantees doing the same thing as the reservation. So the file guarantee... AUDIENCE COMMENT: (Inaudible). With the file option. AUDIENCE QUESTION: (Inaudible). So the situation is the following: if I have it set to none, if I have the reservation set to none -- let me just go back. If I have the guarantee set to none, what that means is that even if I create a LUN where I have the reservation enabled, that space will not be taken out of the aggregate. So none causes the LUN reservation to be ignored at the aggregate level. You have kind of two counting systems here. One is the space being taken from the aggregate. The other is the space being taken from the volume. So I can think of those as separate, but they're related. The LUN reservation will determine when space is allocated from the volume. The guarantee will determine when space is allocated from the aggregate. And, as I said, you can think of them as kind of a nested function there, so they're related. So what file does is, if I have file set, it's just like the none guarantee, except it will automatically respect the LUN reservation. If I have none set, from an accounting perspective for the aggregate, it will not respect the LUN reservation value.
If I set it to file, it will still respect the LUN reservation. It won't worry about taking space out for the volume in general, but it will take space out of the aggregate for a LUN that has the reservation enabled. But, again, I don't want to focus on file, because there's no configuration where I think that really makes sense to be using, and I find it confuses a lot of people. We had the same questions in the last class and I think I confused more people than I helped. I think what makes sense is to think about the guarantee being set to volume and the guarantee being set to none, and don't worry about the file case, because I don't think there's a good use case for it, so I wouldn't be confused by that. Was there another question, yes, in the back? AUDIENCE QUESTION: (Inaudible).

10 LUN Reservation
Set on a per LUN basis
Determines when space is allocated from the volume
Possible settings:
Enabled (default): Space is reserved from the volume at LUN creation time. This space is not available to other LUNs or files regardless of how much is really used.
Disabled: No space is reserved from the volume at LUN creation time. Space is taken from the volume as data is written.
Transcript: So I'm not sure I followed all that, to be perfectly honest. So what I'd like to do is we'll go through some example configurations. If you feel like your question is not answered at that point, let's try to go back and go into that in more detail. I don't want to jump into that just yet. So you also had the question of, if I had a 400-gigabyte LUN, so we can go through a configuration. Let's say we have a 200-gigabyte aggregate; if I set the volume guarantee to none, I can actually create a 400-gigabyte volume inside there, and that's how I do thin provisioning in the NAS space, right? If I want to thin provision NAS space, what that equates to is I go ahead and I create a file system that's bigger than the amount of space I have. So the way I do that is I go ahead and I create a file system, or a combination of file systems, that's larger than the space I have available on my aggregate. So I could have a 200-gigabyte aggregate, and I go ahead and I create a 400-gigabyte volume; the only way I can do that is by setting my guarantee to none. Then I go ahead and I export that file system that's 400 gigabytes out to my user, and my user believes that he's got 400 gigabytes of space there. He really only has 200, so as he hits that 200-gigabyte boundary he'll run out of space, but he believes he has more space. So that's how we'd thin provision in a NAS space. A question? AUDIENCE QUESTION: (Inaudible). So the question is whether there's a use case, basically, for the guarantee being set to file?
You can also meet that use case doing other configurations, I think, just as equally. I'm not going to argue that you can't come up with something that makes sense. I think you can also meet those exact same needs by using other configurations.

11 Fractional (space) Reservation
Set on a per volume basis
Determines if space is reserved from the volume at first snapshot creation
Possible settings:
100% (default): An amount of space equal to the amount of space used within the LUNs is reserved from the volume. This space is not available to other LUNs or files regardless of how much is really used.
0 to 99%: An amount of space equal to the amount of space used within the LUNs times fractional_reserve is reserved from the volume. reserved = LUN_SPACE * fractional_reserve
Transcript: So now if we move on, we have, by far, our most confusing variable, and this is the fractional or space reservation. In fact, it's so confusing we don't even know what we really should call it. Sometimes we hear space reservation used, sometimes I hear fractional reservation used. This is also set on a per volume basis. And what this does is it determines if space is reserved within the volume at the time that I create my first snapshot. It does this if I have a LUN, which is a space reserved object; in theory, it does this for space reserved files as well, although nobody creates space reserved files. If we look and we have a LUN and that first snapshot is created, if my LUN is 20 gigabytes, what it's going to do is reserve an extra 20 gigabytes, and this is where we get into the 2X + delta. And that space will not be used unless there's no other space available in the volume. The thing that that does for us is it allows us to protect our snapshots, as well as our active LUN that's running. So this is the safest way to go, and we'll never lose any snapshots if we have this set to 100%. We'll also never take our LUNs offline, but it's very expensive to do that. So the possible settings, there are kind of two possible settings, or I guess 100, depending on how you look at it.
You can have it set to 100, so what that means is, again, if I have a 30-gigabyte LUN and I go ahead and I create a snapshot, it's going to reserve an extra 30-gigabytes out of there. I can also set that from any percentage from 0 to 100%. And if I do it, for example, to 50%, if I have a 30-gigabyte LUN, I create a snapshot, what it's going to do is it's going to take 15-gigabytes out of my volume so 50% of my LUN. And I use one LUN as an example, I could obviously have multiple LUNs as well. And it's going to take, depending on how much those sum up together, it's going to take 50% of that LUN space and it's going to set that aside, so that my LUNs don't run out of space. When my volume runs out of space, it'll start using that space there that's been reserved for. This has been our default configuration. We'll talk a little bit more about this and some of the possible configurations where we can do thin provisioning of snapshot space. I find it very confusing and I would like people to avoid using this a little bit more. A question go ahead. AUDIENCE QUESTION: Isn't this, though, when you first started you said most customers aren't willing to use 2X + delta? Exactly, so most customers aren't willing to use 2X + delta and that's why we'll go through some examples where I think we can avoid doing that. AUDIENCE QUESTION: (Inaudible). Yes, I think that when we go -- the question was would I like to see people avoid using it? I think what we'll see -- we'll go through some of the example configurations where I think that makes sense. If people are willing to pay the 2X + delta price, that's wonderful, we can sell them lots of storage, I think that's a great thing for all of us. They'll be able to keep all their snapshots and they'll be able to continue running, but I don't think that most people are willing. 
They love our snapshots, but I don't think they like them enough that they're willing to pay the 2X + delta price, so we'll go through how I think we can avoid that. I have to be careful using the word love. I don't know if you noticed in the general session that when I did the search on the word love, my presentation came up first, so I was kind of nervous about that. I didn't realize I use that word very often.
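The slide's formula, reserved = LUN_SPACE * fractional_reserve, is easy to work through with the speaker's numbers. Illustrative arithmetic only, not NetApp code:

```python
# Worked example of the fractional reserve formula from the slide:
# reserved = LUN_SPACE * fractional_reserve, charged against the volume
# when the first snapshot is taken of a volume with reserved LUNs.

def fractional_reserve_gb(lun_space_gb, fractional_reserve_pct):
    return lun_space_gb * fractional_reserve_pct / 100

print(fractional_reserve_gb(30, 100))  # 30.0: the default 100% doubles the LUN footprint
print(fractional_reserve_gb(30, 50))   # 15.0: 50% sets aside half the LUN space
print(fractional_reserve_gb(30, 0))    # 0.0: no overwrite reserve at all
```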

12 Snap Reserve
Set on a per volume basis
Determines if space is allocated from the volume for Snapshot data at volume creation time
Possible settings:
20% (default for NAS)
0% (current default for SAN)
0% to 100%
Transcript: So going on to Snap Reserve, I think people understand Snap Reserve. When a volume is created, Snap Reserve by default takes 20% out of the volume; I can't grow my files or my LUNs into that space, so 20% is the default. For SAN, the current recommendation is that people set that to 0%, because what they're using today is the space reservation, or fractional reservation, so we don't want to have too many variables in play. This is, again, a case where I think we'll be changing that in the SAN environment, so we'll go through that in a couple of minutes. And this is really just an accounting trick, right? Snapshots, from an accounting perspective, get allocated out of that space first; if there's not enough space in snap reserve, they automatically start growing into the volume as well, so this is really just kind of an accounting trick.
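The "accounting trick" described above can be shown in a few lines. A toy sketch, not ONTAP code: snapshot usage is charged against the snap reserve first, and only the overflow spills into general volume space.

```python
# Toy model (not ONTAP code) of snap reserve accounting: snapshot data
# is charged to the reserve first; overflow spills into the volume.

def charge_snapshot(snap_used_gb, snap_reserve_gb):
    """Return (charged_to_reserve, charged_to_volume)."""
    to_reserve = min(snap_used_gb, snap_reserve_gb)
    return to_reserve, snap_used_gb - to_reserve

print(charge_snapshot(10, 20))  # (10, 0): fits entirely inside the reserve
print(charge_snapshot(35, 20))  # (20, 15): the extra 15 GB grows into the volume
```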

13 Snapshot Auto Delete
Set on a per volume basis
Determines when (if) snapshots will be automatically deleted
Makes NetApp like the competition, but better
Possible settings:
Triggers: Volume, Snap Reserve, Space Reserve
Order: delete_order - oldest, newest
defer_delete - scheduled, user created, prefix, none
snapmirror/dump - try, disrupt
Snapshots locked by clones, cifs or restores will not be deleted - change planned for 7.3
Transcript: There are two new concepts that we have available. I say new, they're new with 7.1. So we have auto delete. Auto delete is set on a per volume basis, and it determines when, or I guess we could say if, snapshots will automatically be deleted. So the way I look at this is this makes us look like the competition. I would say it actually makes us look a little bit better than the competition. If we look at the competition, when you want to create snapshots, what you do is you set a certain amount of space aside and say, okay, I want to set 20% or 30% or so many gigabytes aside for snapshots. And if I use more than that space for snapshots, what happens is basically my snapshots become corrupted, which to me is the equivalent of having them be deleted. So what we can do is the same thing now: we can go ahead and set a boundary or a trigger and say, when this becomes full, I want to go ahead and start deleting snapshots. And we can then determine in what order we want snapshots to be deleted, whether I want the newest snapshots to be deleted first or whether I want the oldest, so I can kind of go with a FIFO or a LIFO configuration. And this is where I think we're better than the competition. It's not that all of our snapshots at one time are corrupted and deleted. We can determine which ones should be deleted.
I actually even have some options for this defer delete to say which ones are really important to me, so I can set a particular prefix or I can say scheduled snapshots are more important to me or user created are more important, so delete everything up to those. And also SnapMirror is kind of a special case for us, right? SnapMirror snapshots are often very important to us and so I can go ahead and I can say, if it comes to deleting a SnapMirror snapshot, delete those last, first of all and then do I really want to do that or not? This is something that's settable and it makes sense because in some cases if I have two boxes sitting next to one another, I might say okay if I have to do a new baseline for that that's not exactly desirable, but I can live with that. If I have other cases where it's a tremendous amount of data or it's in a remote office where I've got a nice, little, skinny pipe going between them doing that baseline transfer might be something that I really don't want to have to do. And so in that case I can go ahead and set it and say don't delete that. I don't expect the situation to come up, if it does, I'd rather go offline than actually have to do that baseline transfer so I have a lot of control in terms of that. And I can determine of what kind of trigger, whether it's a volume being full, whether it's Snap Reserve space being full or whether space reserve is full. I can trigger on those three things to determine when I should start deleting snapshots. AUDIENCE QUESTION: (Inaudible). The question is can I set a level when they get to full? So the default today is 98%. So it turns out when they get 98% full, it'll go ahead and start deleting snapshots. You can actually tune that. There is a variable that you can actually tune that to a lower value if you want to do it in an earlier case than that. AUDIENCE QUESTION: (Inaudible). These are all set on a per volume basis. AUDIENCE QUESTION: (Inaudible). 
I believe they're through the vol command, exactly. So there is one caveat today that we should be careful of, and this is that snapshots that are locked by clones, or basically snapshots that are presenting data out to a user, whether it's locked by a clone, probably the most common case, whether I'm sharing that out via CIFS or I have a restore running from it today, if I have that happening, the snapshot will not be auto deleted, and so that adds a little risk today. That's going to be fixed; right now, that's targeted for 7.3, where we'll go ahead and, even if a clone is based off of it, we'll delete that snapshot in the background, which is clearly going to make my clone go away, but in most cases that's probably what makes sense. And it'll have to be selectable, because some people want that to be a selectable option, just like it is with SnapMirror. And then, you know, if I have my primary copy of my data as my database, for example, and then I've got a bunch of test copies, or I'm taking a backup or something like that based on a FlexClone, generally I'd still want to delete that snapshot. I was kind of surprised at what a lengthy discussion it was with engineering to convince them that we really did need to auto delete those snapshots as well. But I was able to convince them and we will see that in 7.3, that we'll go ahead and be deleting those as well. And when we have that, then we really have no risks there at all. This is something we have to be aware of today: if we're creating FlexClones and we're depending on auto delete, we have to manage that situation today, so that is a caveat that one should certainly be aware of. Author’s Original Notes: auto_delete should delete snapshots locked by clones, cifs shares or snap restores - entered in May but targeted for IC.0
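The ordering choices the slide lists, oldest-first (FIFO) versus newest-first (LIFO) with deferred classes deleted last, can be sketched as follows. This is a simplified illustration of the concept, not the actual ONTAP autodelete implementation; the snapshot names and the `defer` predicate are made up for the example.

```python
# Sketch (not the ONTAP implementation) of snapshot delete ordering:
# FIFO (oldest first) or LIFO (newest first), with deferred classes
# (e.g. scheduled or SnapMirror snapshots) deleted only after the rest.

def autodelete_order(snapshots, delete_order="oldest", defer=lambda s: False):
    """snapshots: list of (name, create_time). Returns deletion order."""
    newest_first = (delete_order == "newest")
    normal = [s for s in snapshots if not defer(s)]
    deferred = [s for s in snapshots if defer(s)]
    by_time = lambda s: s[1]
    return (sorted(normal, key=by_time, reverse=newest_first)
            + sorted(deferred, key=by_time, reverse=newest_first))

# Hypothetical snapshots: (name, creation time); defer the weekly ones.
snaps = [("hourly.1", 3), ("weekly.1", 1), ("user_copy", 2)]
order = autodelete_order(snaps, "oldest", defer=lambda s: s[0].startswith("weekly"))
print([name for name, _ in order])  # ['user_copy', 'hourly.1', 'weekly.1']
```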

14 Set on a per volume basis
Volume Auto Grow Set on a per volume basis Determines if a volume should grow when it is nearly full Possible settings ON Maximum growth Increment size OFF  Try_first option determines whether auto_grow or auto_delete is attempted first Transcript: The other option that we have today is auto grow. So this is something that's set on a per volume basis, and it applies only to FlexVols, obviously. And what that determines is if a volume should grow. So when a volume becomes full, just before it becomes full, what it'll do is automatically grow and take more space out of our aggregate. The nice thing about auto grow is that it allows us to have what you can think of as a shared free pool inside my aggregate, and whichever volume needs that space can go ahead and start to grow into it. I can set it to either on or off; it's off by default. And then when I set it on, I actually have a couple of variables that I can tune. I can say how large it can grow. By default, it'll grow to 120% of its original size. I can reset that at any time as well. And then the other thing I can do is I can go ahead and set how quickly it should grow, so in what increments should it grow? And currently it grows in, I think, 5% increments. If I have a 100-gigabyte volume, the max size it'll grow to is 120 and it'll grow in 5-gigabyte increments. So 5% growth increments, max growth is 120%, but I can set those. And I can tune those, and I very well might tune them for different volumes. Some I might want to give more flexibility to grow larger, some I want to have grow only up to a particular point, so I can go ahead and tune those things to say how much to grow. Then the last option we have is this try_first option. And what that does is it determines which one is going to happen first. 
If I have my trigger for auto delete set to volume, then what would happen is, when things fill up, I'd have auto grow and auto delete coming at the same time. So which one should happen first? Do I want to auto delete first or do I want to auto grow first? Generally, if I have them both enabled, I think what you're going to want is to have try first set so that I grow first. And then if I can't grow anymore, because I've hit that maximum value or because there's no more space in my aggregate to grow into, only then would I want to start deleting snapshots. So I think it makes sense in general that you're going to want to have that set to auto grow first. There's a question? AUDIENCE QUESTION: (Inaudible). Right, so the question is if it auto grows, can I go ahead and reset a new limit for it? AUDIENCE QUESTION: (Inaudible). So two questions. One is, if it grows automatically, you will definitely have to go and reset that manually. I mean, otherwise it wouldn't make sense to have that limit; if it kept automatically increasing itself, it wouldn't be a maximum value. And I believe, and I'm not 100% certain, if you grow a volume manually and you have auto grow set, you're probably going to have to reset that value at that point as well, but I'm not 100% certain of that. With manual growth, I'm not certain of that, I believe you have to reset it. AUDIENCE QUESTION: (Inaudible). So the question is (inaudible). Yes, that's right. So I believe that's the case, I'm not 100% certain of that. But I do believe, if you grow it manually, you want to double-check what your auto grow maximum value is. AUDIENCE QUESTION: (Inaudible). So can you manually grow a volume beyond its maximum...? AUDIENCE COMMENT: (Inaudible). Can you manually grow a volume beyond its auto grow value? Yes, you can do that. Auto grow maximum only affects its auto growth, it doesn't affect what you're doing manually. 
AUDIENCE QUESTION: (Inaudible). There you have your answer. The question is, how do we manage auto grow and auto delete? So today that is clearly our weakness in terms of space, managing those kinds of operations; you know, I want to have that integrated into Operations Manager. Thanks to Pete here and the relatively large bank that he works with in New York, we'll see a big focus there. You know, Rich Clifton has been a big fan of thin provisioning and being able to do that. And we've really recently started to get a lot of traction within the SMAI group around thin provisioning. And I think they're starting to ask a lot of good questions regarding that. They're starting to realize that they need to better support it. And you'll see, I talked about one BURT here in terms of the auto delete. There were a lot of discussions with engineering in terms of fixing those things. Well, I think Rich has made his point pretty clear there, and we're starting to see a lot more focus on being able to fix those things. So today I think we have a lot of opportunities for professional services around managing some of this. And I think that we'll see a big improvement in our products in the next year or so. Today, I don't know of anything. You know, there are things you want to get notified about, for example, if an auto grow happens. You need to go in and you can look at that, but we need to have some predefined reports and things like that. So if I have an auto grow, I very well might want to go in later and manually reduce that, for example. So that is a weakness today that I think we'll see a lot of improvements to in the next 12 months. There was another question, yes. AUDIENCE QUESTION: (Inaudible). Can you use auto grow on a SnapMirror destination? So auto grow and SnapMirror today do not mix very well. I think, today, if you have a SnapMirror configuration, auto grow doesn't really make sense. 
There's some issues, there's some things where, by default, the file system won't grow because of SnapMirror; we don't want to have the SnapMirror overrun it. So there's a lot of changes going into ONTAP 7.3 regarding how space is managed from the backend, and that's one of the things that's being looked at. With SnapMirror and space management, it really challenges your mind in terms of some of the configurations regarding that. So today, in general, I think the recommendation would be, if you're doing SnapMirror on that volume, I don't think it makes sense to be using auto grow. I don't think you'll be successful doing that. I think, starting with 7.3, we'll be able to do that successfully. Does that answer it? Other questions? Great, it's good to get questions, that's a great thing.
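The growth mechanics just described can be sketched as a small simulation. This is a model of the behavior as the speaker describes it (5% increments, 120% cap, limited by aggregate free space), with invented names, not actual ONTAP code:

```python
def autogrow_step(vol_size, original_size, aggr_free,
                  max_pct=120, increment_pct=5):
    """One auto grow attempt, using the defaults the speaker
    describes: grow in 5% (of original size) increments, up to
    120% of original size, limited by aggregate free space."""
    max_size = original_size * max_pct / 100
    step = original_size * increment_pct / 100
    if vol_size + step > max_size or step > aggr_free:
        return vol_size, aggr_free  # cannot grow any further
    return vol_size + step, aggr_free - step

# A 100 GB volume in an aggregate with plenty of free space:
# it grows 100 -> 105 -> 110 -> 115 -> 120 and then stops.
size, free = 100, 1000
for _ in range(10):
    size, free = autogrow_step(size, 100, free)
print(size, free)  # 120.0 980.0
```

Note the cap and increment are computed from the original size here, matching the 100 GB / 5 GB / 120 GB example in the transcript.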

15 What is Flexible or Thin Provisioning Types of Thin Provisioning
Agenda What is Flexible or Thin Provisioning Types of Thin Provisioning ONTAP Variables Configurations Default Configuration Snapshot - Snapshot Auto delete Snapshot - Volume Auto grow Snapshot - Volume guarantee none LUN - Volume guarantee none LUN - LUN reservation disabled Transcript: Okay, so what I want to do at this point is I want to go through some configurations. I want to kind of talk about some advantages and disadvantages of those various configurations. So I have six configurations I want to go through. I want to talk a little bit about the default configuration and then I want to go through a couple of cases where we're thin provisioning snapshot space. And I want to talk about a couple configurations where we're thin provisioning LUN space. And I want to kind of talk about the pluses and the minuses of doing some of these configurations. You know, we talked about that there were -- what did we say, seven variables? So that would lead to a whole lot of permutations if we did them all. A lot of them don't make sense. You know, in my last session somebody asked could you have a spreadsheet that has all the different combinations. And I tried to do that once and there's just too many combinations and too many of them don't make sense. What I want to do here is talk about a couple that we think actually do make sense. Okay, so let's go on and talk about our default configuration. Author’s Original Notes: Goal is to cover the concepts surrounding provisioning and space usage not to cover specific commands in detail We are going to quickly cover the variables involved. Those were covered in more detail in the kickoff presentation. That section is more review and we want to spend some more time on the best practices configurations talking about how they work and the advantages and disadvantages of both configurations are.

16 Space Management – Default Configuration
Guarantee = volume LUN reservation = on Fractional_reserve = 100% Snap_reserve = 0% Auto_delete = off Auto_grow = off Try_first = NA Available space 40GB Reserve 30GB Transcript: So if we look at our default configuration, volume guarantee is going to be set to volume. Our LUN reservation is going to be set on. Our fractional reserve is going to be set to 100%. And by default we tell people in the blocks world to set their Snap Reserve to zero. Auto delete and auto grow, being new things, are clearly going to be set to off. In that case it doesn't matter what we have try_first set to. So if we look at that... AUDIENCE QUESTION: (Inaudible). Well, what I have is a 100-gigabyte volume, which makes the math easy for me, just to kind of go through a little bit of an example. So what we have is our test LUN, and then if I create a snapshot, that 30-gigabyte reserve is basically going to be taken out of that. Author’s Original Notes: The only thing that actually has to be changed is the snap_reserve has to be set to 0% 30GB TestLUN

17 Space Management – Default Configuration
Positives Easy to manage/monitor space Running “out” of space only results in no additional snapshots being created SnapShots and active LUNs will always be available Volumes are independent of each other Negatives Requires 2X + Δ space requirement Transcript: So if we look at the advantages and the disadvantages of this configuration, it's easy to monitor because you don't have to monitor it very much. You'll never run out of space. When I do, quote, run out of space, what'll happen is that I won't lose my LUNs, I won't lose any snapshots. What'll happen is when the volume is full, it'll stop creating extra snapshots; you won't be able to create any additional snapshots. But the snapshots that have already been created will never be lost, those will be available and the active LUNs will be available. So the only thing that happens when I, quote, run out of space is they'll start using that reserve and there'll be no further snapshots created. It's also very nice in that all the volumes are 100% independent of one another. In general, if I have the guarantee set to volume, the volumes will be independent of one another. There's only one disadvantage, and that is that it requires 2X + delta. And if we look, the 2X + delta is that I've got the test LUN that's 1X, I've got the reserve that's the second X and then the delta is the snapshots.
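The 2X + Δ arithmetic is easy to check with the numbers from the slide; a minimal sketch, assuming the slide's figures (100 GB volume, 30 GB TestLUN, fractional_reserve = 100%):

```python
# Numbers from the slide: a 100 GB volume holding one 30 GB LUN,
# guarantee = volume, LUN reservation on, fractional_reserve = 100%.
volume_size = 100          # GB
lun_size = 30              # the TestLUN (the first "X")
fractional_reserve = 1.0   # 100% overwrite reserve (the second "X")

reserve = lun_size * fractional_reserve       # held back once snapshots exist
available = volume_size - lun_size - reserve  # left over for snapshot deltas
print(reserve, available)  # 30.0 40.0
```

The snapshot deltas themselves then come out of that 40 GB of available space, which is why the total requirement is 2X + Δ rather than just 2X.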

18 Space Management – Default Configuration
Guarantee = volume LUN reservation = on Fractional_reserve = 100% Snap_reserve = 0% Auto_delete = off Auto_grow = off Try_first = NA Available space 40GB Reserve 30GB Transcript: When I start creating snapshots and the snapshots are using space, they'll be coming out of that available space up there. So that's why it's 2X + delta. So the delta is the actual snapshot space and that reserve is just kind of a, it's a safety net that if everything else fills up I'll still be able to go ahead and writes to my LUNs. Author’s Original Notes: The only thing that actually has to be changed is the snap_reserve has to be set to 0% 30GB TestLUN

19 Space Management – Default Configuration
Positives Easy to manage/monitor space Running “out” of space only results in no additional snapshots being created SnapShots and active LUNs will always be available Volumes are independent of each other Negatives Requires 2X + Δ space requirement Transcript: So it's only one negative, one disadvantage, but it's a pretty big one. And as I talked about, I don't think a lot of people are willing to pay that price. I wanted to show that configuration, that's our default. I want to show a couple of configurations that I think make a lot more sense.

20 What is Flexible or Thin Provisioning Types of Thin Provisioning
Agenda What is Flexible or Thin Provisioning Types of Thin Provisioning ONTAP Variables Configurations Default Configuration Snapshot - Snapshot Auto delete Snapshot - Volume Auto grow Snapshot - Volume guarantee none LUN - Volume guarantee none LUN - LUN reservation disabled Transcript: So one of those is if we do auto delete. Author’s Original Notes: Goal is to cover the concepts surrounding provisioning and space usage not to cover specific commands in detail We are going to quickly cover the variables involved. Those were covered in more detail in the kickoff presentation. That section is more review and we want to spend some more time on the best practices configurations talking about how they work and the advantages and disadvantages of both configurations are.

21 Thin Provisioning – Snapshot Space – Autodelete
Auto Delete Configuration: Guarantee = volume LUN reservation = on Fractional_reserve = 0% Snap_reserve = 20% Auto_delete = snap_reserve Auto_grow = off Try_first = snap_delete Snapshot Available 20GB Available space 50GB Transcript: And if we look at auto delete, if we look at the configuration that we're going to have set up there, we're going to have the guarantee set to volume again. We're going to have the LUN reservation set to on. We're going to set our fractional reservation, in this case, to zero. Most people don't understand fractional reservation, so I like that it goes away; it makes it much nicer when I can set it to zero and not have to worry about it. I'm going to set my snap reserve -- in this example I've set the snap reserve to 20%. It could be any value that you think makes sense there. If 20% is our default in the NAS case, I think that probably also makes sense here. If you know you're going to have a lot of overwrites, you might increase that to 30%, so you can talk about what that is. But some reasonable reservation for my snap data. Then what I'm going to do is I'm going to set auto_delete equal to snap_reserve. Auto grow, in this example, I have set to off. You could turn that on as well if you wanted to. And then my try_first is going to be set to snap delete. And the picture I have over here on the right probably doesn't make complete sense here. I have a test LUN; generally when I have that space available, I would also configure other LUNs there -- that space would be available for configuring other LUNs. And so what'll happen in this case is I'll go ahead, I can create snapshots. And creating the snapshots doesn't cause any space to be reserved, because I've got my fractional reservation set to zero. And when my snapshot area fills up, that 20% fills up, what it's going to do at that point is start deleting snapshots. 
And what snapshots I delete are going to be determined by the order that I set, whether it's going to delete the first ones, the last ones or user preferred ones. So it's going to go ahead and it's going to delete those snapshots at that point. We have a question in the back? AUDIENCE QUESTION: (Inaudible). So the question is does the snapshot reserve apply to all LUNs in the volume? And that's exactly right. Whether I have one LUN here or 100 LUNs in the volume, it doesn't matter. Basically that snap reserve is going to be used for any overwrites that occur to any of those LUNs, that's exactly right. There's another question? AUDIENCE QUESTION: (Inaudible). That's right. So the question was, if I have the auto delete trigger set to Snap Reserve, then when I get to 20%, it's going to start whacking snapshots before my volume is full, that's exactly right. I could also set that to volume and then watch it grow all the way. But the idea that I tried to show here is that I'll set my snap reserve to some value that I think makes sense, and my administrator can look at that and say, okay, for that particular application that I created that volume for, he still has the ability to create extra LUNs. And that's why I say, in this case, you generally would probably have that available space filled up with other LUNs. You might for performance reasons, for example, say, well, I like to keep some particular amount of space free within that volume or that aggregate, so I could go ahead and maybe leave some margin as well. But that's right, in this case, if my snapshot data goes beyond or gets to 20%, it'll start deleting snapshots even though there's other space available there, that's exactly right. A question? AUDIENCE QUESTION: (Inaudible). If my LUN filled up to 21-gig, I wouldn't have any snapshots. AUDIENCE QUESTION: (Inaudible). 
If my snapshots -- so if I overwrote 21-gigabytes of my LUN, in this case then my snapshot would be deleted, and if that was my last snapshot, that's right, that would get deleted. AUDIENCE QUESTION: (Inaudible). Auto delete will keep deleting until it goes below 20%, so that's right, it can do that. It's unusual that you overwrite an entire LUN. And, again, you can talk about what values make sense here, right, whether 20% is enough. I think that's a reasonable discussion to have, whether 10% is enough or whether 20%. You know, you can think of configurations where, for example, I'm using SnapMirror. So if the only reason I have snapshots is because I'm using SnapMirror, in general, I would expect my SnapMirror snapshots to be pretty short-lived, right? I'm not going to be keeping those around for days or weeks or months, or at least not intentionally. And I might be willing to pay some extra -- I'm going to have to pay for some extra disks to be able to handle that capacity, but I'm not going to want to necessarily pay for 2X + delta to be able to create my SnapMirror. So what I can do is I can go ahead and set it to some value. You know, in that SnapMirror case, I might argue that I don't expect to ever go above 5%. I don't expect those snapshots to ever live for very long. So I'm going to set it to 20, so I still have some breathing room in there; if in this case a snapshot does exist for some longer period of time, I can live with that for a reasonable period and I'm willing to pay that extra 20%. If something happens that causes it to go beyond that 20%, I have to pay some kind of a price, and so that's why I would start to delete those. A question in the back? AUDIENCE QUESTION: (Inaudible). So from the storage tier, the LUN looks full, and the write behavior -- so if I come in here and my LUN is full, I create a snapshot and I start doing writes, what'll happen is it'll start consuming space out of my Snap Reserve space. 
Let's pretend I don't have any snapshots: I write my LUN, I fill it up, and now I go and I create a snapshot and I start doing overwrites. Those overwrites, from an accounting perspective, are going to be taking space out of my Snap Reserve area. AUDIENCE QUESTION: (Inaudible). No, they will not take space out of the available space. In this configuration that available space will never get touched by doing writes to my LUN. In this particular case, and this is why I say I think it's easy from an accounting perspective, from a management perspective, the only time that available space will get used is if I start doing some kind of management of my space -- I create more LUNs or I let people create files. You know, in general, I'm not going to create files inside those same volumes that have LUNs; we don't recommend doing that, it doesn't make sense, so it's very easy from that point of view. And that space is not going to get touched by anything but creating more LUNs. And that's why I said in this model, it doesn't make sense that I have 30-gigabytes and I have 50-gigabytes free; I'm probably not going to do it that way. It would have made a lot more sense for me to show 80 or 90 gigabytes of space being used there. And, again, I use 20% here, I think that's probably a pretty reasonable value. It depends how conservative or how aggressive you want to be. How long you think your snapshots -- you know, the answer there is, how much should I reserve? Well, as we all know, we have that unfortunate answer of, it depends. It depends on how you're going to be overwriting it. If it's going to be SnapMirror data where you really expect those to be short-lived, then you're not going to have a lot. If you expect it to be something where I'm going to have a big delta -- because really that snapshot space is the equivalent of a delta space, right? 
If I have snapshots where I'm creating a lot of them, or I think they're going to live for quite a long time, then I'm going to go ahead and I'm going to reserve more of that space. That's not going to be 20% for every customer, without a doubt. Yes, another question? You guys took this seriously with the questions, I think that's good. AUDIENCE QUESTION: (Inaudible). So the question is whether we're going to document this in a tech report? So I have a version of a tech report that is available that goes through this example. As an example, this is in there that talks about that. I don't know the number of that tech report. AUDIENCE COMMENT: (Inaudible). What is it? AUDIENCE COMMENT: (Inaudible). 3382, that could be, that kind of rings a bell. AUDIENCE COMMENT: (Inaudible). That's available on the tech library. AUDIENCE QUESTION: (Inaudible). Yes, this exact example is definitely shown in there. And, again, I would like to see us looking at this as our default configuration. Because I don't know what your experience is, but when I've talked to customers about wanting to do snapshots, again, they all seem to like our snapshots. But they're not willing to pay the 100% penalty for it; it doesn't make sense in most cases. So I think if we know that 99% of our customers aren't willing to pay for that, we shouldn't make that our default. Our default is the safest option today, but if we know that people aren't willing to pay for that, we should change our defaults away from there. I just don't think that makes sense. AUDIENCE COMMENT: (Inaudible). Right, from a competitive perspective, it puts us in a bad situation, I agree completely with that. Author’s Original Notes: This is “thin provisioning” of the snapshot space Snap_reserve can clearly be set to any value Could also set auto_grow to on for additional safety measure 30GB TestLUN
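The snap_reserve-triggered auto delete just described might be modeled like this. It is a rough sketch of the behavior as the speaker describes it (98% trigger, oldest-first deletion, per-snapshot delta sizes), with invented names, not ONTAP's implementation:

```python
def autodelete(snapshots, snap_reserve_gb, trigger_pct=98):
    """Delete snapshots, oldest first, until snap reserve usage
    drops below the trigger threshold (98% of the reserve by
    default, per the transcript; tunable lower)."""
    snaps = list(snapshots)  # oldest-first list of delta sizes, GB
    threshold = snap_reserve_gb * trigger_pct / 100
    while snaps and sum(snaps) >= threshold:
        snaps.pop(0)  # whack the oldest snapshot first
    return snaps

# 20 GB snap reserve (20% of a 100 GB volume); 23 GB of snapshot
# deltas trips the 98% trigger and the two oldest snapshots go.
result = autodelete([3, 9, 9, 2], 20)
print(result)  # [9, 2]
```

Lowering `trigger_pct` makes deletion kick in earlier, which is the tuning the speaker suggests later for the combined auto grow case.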

22 Thin Provisioning – Snapshot Space – Autodelete
Positives Easy to monitor/understand space just volume and .snapshot Sacrifices snapshots before active LUNs Volumes are independent of each other Looks like the competition Negatives Doesn’t use shared space from the aggregate Until ONTAP 7.3 have to be cautious with using FlexClones Transcript: So, as I talked about, it looks like the competition in that respect, which is a good thing in this case. From a competitive point of view, if this is our default, we can say we're the same as you. I think we're better, or a little bit more flexible, but in general, it looks like the competition. One of the negatives of this is that it doesn't use shared space from the aggregate. It's very nice to be able to use kind of a shared free pool from the aggregate; we'll talk about that in the next example. And the caveat that I talked about with auto delete: until we get to 7.3, if you're creating FlexClones off of a snapshot, you've got to be careful with that. We have to make sure we're managing that. That's the kind of case where you're probably going to want a little bit bigger buffer rather than a smaller one, so you don't want to be as aggressive if you're creating FlexClones.

23 What is Flexible or Thin Provisioning Types of Thin Provisioning
Agenda What is Flexible or Thin Provisioning Types of Thin Provisioning ONTAP Variables Configurations Default Configuration Snapshot - Snapshot Auto delete Snapshot - Volume Auto grow Snapshot - Volume guarantee none LUN - Volume guarantee none LUN - LUN reservation disabled Transcript: So if we go to the next one, auto grow, Author’s Original Notes: Goal is to cover the concepts surrounding provisioning and space usage not to cover specific commands in detail We are going to quickly cover the variables involved. Those were covered in more detail in the kickoff presentation. That section is more review and we want to spend some more time on the best practices configurations talking about how they work and the advantages and disadvantages of both configurations are.

24 Thin Provisioning – Snapshot Space – Autogrow
Auto Grow Configuration: Guarantee = volume LUN reservation = on Fractional_reserve = 0% Snap_reserve = 0% Auto_delete = volume Auto_grow = on Try_first = auto_grow Transcript: So in this case, we're going to set our guarantee to volume. We're going to set our LUN reservation to on again. Fractional_reserve to 0%. In this case we're setting our snap_reserve to zero. And we're setting our auto_delete to volume. And we also have auto_grow set on. So in that case it also makes sense to look at what we're going to do first, which is what try_first determines. And generally I think it makes sense that what we're going to do is try to auto grow first. In this example that definitely makes sense. I couldn't figure out how to graphically show this and that's why we don't have a picture there. I screwed up the picture before that, so I thought with this one there's no chance of being able to do that. And so what happens here is that when my volume becomes full, what it's going to do is try to grow. So if there's space inside my aggregate, it's going to go ahead and grow, and it'll do that up to whatever value I set it for; you know, the default is 120%. And then if it can't grow, there might be two reasons why it can't grow, right? One is because it hit 120. The other is because there were a bunch of other volumes in the same situation and they went ahead and they grew first and this guy comes in later, and the aggregate is full and there's no space left for it to grow. Then what'll happen is, the first thing it tries, it says I tried to auto grow, I wasn't successful there, so then what I'm going to do is start deleting snapshots. It allows me to use that free space in my aggregate as best I can; it'll auto grow and use it. If that space isn't available, then it'll go ahead and start sacrificing the snapshots at that point. So we're really using the power of both of these two variables. 
I get the free space out of that aggregate; however, I can also go ahead and auto delete. If I run out of space, if I can't grow any more or somebody else has already grown, it'll start auto deleting snapshots at that point. A question? AUDIENCE QUESTION: (Inaudible). So the question is whether you'll run out of space while things are being auto deleted? So my answer to that is it shouldn't be an issue. The reason I say that somewhat hesitantly is someone in my last class said they had that issue, that they ran out of space. So the default trigger is set to 98%. I heard that, and I've heard some circumstantial evidence of another case where that might have happened as well, and I need to take that back and we need to confirm it -- that should never happen. Basically ONTAP should stall until it's able to delete. So given that information, I would actually set that 98% value down to something lower, like 95% or 90%.
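The grow-first, delete-second ordering the speaker recommends can be sketched as a decision function. Names and data shapes are invented for illustration, assuming the 5% increment and 120% cap defaults mentioned earlier:

```python
def reclaim_space(vol, aggr_free_gb):
    """try_first = auto_grow: take space from the aggregate's
    shared free pool first; fall back to deleting the oldest
    snapshot only when no further growth is possible."""
    step = vol["orig_size"] * 0.05       # 5% growth increment
    max_size = vol["orig_size"] * 1.20   # 120% growth cap
    if vol["size"] + step <= max_size and step <= aggr_free_gb:
        vol["size"] += step
        return "grew", aggr_free_gb - step
    if vol["snapshots"]:
        vol["snapshots"].pop(0)
        return "deleted_snapshot", aggr_free_gb
    return "out_of_space", aggr_free_gb

# This volume already hit its 120% cap, so even with aggregate
# free space left, the next request sacrifices a snapshot.
vol = {"orig_size": 100, "size": 120, "snapshots": ["nightly.0", "nightly.1"]}
action, free = reclaim_space(vol, 50)
print(action, vol["snapshots"])  # deleted_snapshot ['nightly.1']
```

Swapping the two branches would model try_first = snap_delete, the other ordering the speaker mentions.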

25 Thin Provisioning – Snapshot Space – Autogrow
Positives Uses shared free space from the aggregate for possible growth Sacrifices snapshots before active LUNs Can tune amount of thin provisioning per volume Works even with a smaller number of volumes Negatives Volumes are not completely independent of one another Always the case when using shared free space Growth in one volume can limit growth of another Transcript: So, again, the advantages, the positives of this configuration: I get to use that free space from the aggregate as kind of a shared free pool, which I think is really good. It'll still sacrifice my snapshots before my LUNs; that's where the auto delete kicks in if it has to. Generally I want to keep my primary storage up, so I sacrifice snapshots before my LUNs. I think most people consider that to be a positive. I can tune this very much on a per volume basis, right, because I can determine how much space I leave for that volume. I can also tune how much I allow a particular volume to grow. I might allow some volumes a lot of room to grow, and I might allow some very little capability of growing before they start deleting snapshots. So I can tune that very much on a per volume basis. And this is a configuration that even makes sense with a relatively small number of volumes, where I would give them quite a bit of growth potential. So I think this is a really nice configuration. You know, the disadvantage or the negative is that the volumes are not completely independent of one another. They have some independence in that they'll go ahead and start auto deleting. But, clearly, anytime I'm using that shared free space in the aggregate, they're not completely independent of one another. Because if a bunch of other volumes grow, there might be no space left for my volume to grow, so that's where they're not completely independent of one another. Questions? AUDIENCE QUESTION: (Inaudible). 
The aggregate today will not auto grow, that's right. An aggregate is a fixed size and I don't have the ability to auto grow an aggregate, right? So the question is does an aggregate auto grow? No, you have to go ahead and manually add disks to it. Author’s Original Notes: A more conservative case where you don’t expect to need extra space but want to cover it if needed

26 Thin Provisioning – Autogrow Example Volume Grow – Starting Point
volume    used / size / consumed
vol1  =      0 / 100 / 100
vol2  =      0 / 100 / 100
vol3  =      0 / 100 / 100
total =      0 / 300 / 300
free  =     30
Transcript: So I tried to go through and show this as an example. What I did is I've got my little pie here as my aggregate and I showed my volumes inside of there. And if we look over here on the right hand side, I've got three columns: how much is used, the size of the actual volume and then how much is actually consumed. Because I've got my volume guarantee set to volume, the size and the consumed are basically always going to be the same thing here. My starting configuration here is I created three volumes, each of them is 100-gigabytes, and none of them is being used yet. As far as the aggregate is concerned, 300-gigabytes is gone; that leaves me 30-gigabytes free. Author’s Original Notes: Explain the picture Note that in real life the more volumes you have the more this makes sense The whole circle is the aggr … Note, the

27 Thin Provisioning – Autogrow Example Volume Grow – After volume growth
volume    used / size / consumed
vol1  =        / 100 / 100
vol2  =    120 / 120 / 120
vol3  =        / 100 / 100
total =    260 / 320 / 320
free  =     10
Transcript: So if I go to the next step, I consume a bunch of space in Volume 2 and it actually auto grows a couple of times, up to 120-gigabytes. So if I look at the space that's being used, I can say Volume 1, I've used some space; Volume 3, I've used a little bit. As far as what's being consumed from an accounting perspective, though, that used column doesn't really matter. It's 320-gigabytes that's being consumed from my aggregate and I've got 10-gigabytes free at that point. Single volume grows to 120GB

28 Thin Provisioning – Autogrow Example Volume Grow – After another volume grows
volume used / size / consumed
vol1 = 110 / 110 / 110
vol2 = 120 / 120 / 120
vol3 = 60 / 100 / 100
total = 290 / 330 / 330
free = 0
Transcript: All right, so then if I go to the next step, Volume 1 uses a bunch of space and it actually auto grows, up to 110-gigabytes. At that point, my aggregate is now full; I'm using 330-gigabytes out of it. A question? AUDIENCE QUESTION: (Inaudible). So the question is, will it grow to 110 even though it couldn't grow to 120? I have my maximum set so the most it can grow to is 120, but it grows in incremental steps of 5%. So it would have actually done two growths at this point: it would have grown 5-gigabytes and another 5-gigabytes, and that's where it's going to stop. AUDIENCE QUESTION: (Inaudible). So the question is, what happens if it can't grow a full increment, say if there would have been 2-gigabytes free instead of five. I don't know the answer to that. I believe it won't grow at all if it can't grow a full increment. If its increment isn't available, I don't believe it'll grow, but I'm not 100% certain of that. So in this case, if I look, my aggregate is full. So what'll happen is if another volume wants to grow, it can't. So if I look at Volume 3 here, it's 100-gigabytes; if it would consume more space and it would want to grow, it can't do it. And this is where I said the volumes are not completely independent of one another, right? So even though Volume 3 hasn't used more than its 100, it's affected and it can't grow any more than that. What'll happen is if any of these volumes want to grow, and we also set auto delete to on, so if they need space, they'll start auto deleting snapshots, and that's true in all the volumes. We had them all set to do the same thing. You know, in this case, for simplicity, I have all three volumes set with the same parameters. I can also vary those.
I could have one set with auto delete off, I could have one set with auto delete on. But in this case they're all set to act the same way. So if they need more space at this point, they're going to have to start auto deleting to keep things online. Author’s Original Notes: Everything continues to run If another volume tries to autogrow it will not be able to If autodelete is set it will autodelete a snapshot Single volume grows to 120GB Another volume grows to 110GB Aggregate is now full
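The autogrow behavior being discussed can be sketched roughly like this. This is a hypothetical Python model; the real Data ONTAP increment handling, especially the partial-increment case the speaker was unsure about, should be verified against the release you run:

```python
def autogrow(vol_size, vol_max, increment, aggr_free):
    # Grow in fixed increments toward the maximum, but only while the
    # aggregate still has a full increment free (this models the speaker's
    # "I believe it won't grow at all if it can't grow its increment").
    while vol_size + increment <= vol_max and aggr_free >= increment:
        vol_size += increment
        aggr_free -= increment
    return vol_size, aggr_free

# vol1: 100GB volume, 120GB max, 5GB steps, 10GB free in the aggregate.
size, free = autogrow(100, 120, 5, 10)
print(size, free)  # two 5GB growths, then the aggregate is full: 110 0

# Only 2GB free: no full increment available, so no growth at all.
print(autogrow(100, 120, 5, 2))  # (100, 2)
```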

29 Thin Provisioning – Autogrow Example Volume Grow – After freeing space
volume used / size / consumed
vol1 = 110 / 110 / 110
vol2 = 110 / 110 / 110
vol3 = 60 / 100 / 100
total = 280 / 320 / 320
free = 10
Transcript: What I can do, though, is actually go in and free space up. So I go down to Volume 2, where it had done an auto grow that took it to 120-gigabytes, and I free up space. The most common way I would probably free up space is manually deleting some snapshots. I might delete LUNs or files or something as well, but the most common case is I'd probably go in and delete some snapshots. At that point I could go ahead and manually resize that volume back down to 110. And now that 10-gigabytes is back in that free pool in the aggregate, and that's available to any of those volumes. So Volume 1 can now grow up to 120 if it needs to. Volume 3, if it starts to need that extra space, could also grow into there. AUDIENCE QUESTION: (Inaudible). If you offline a LUN, can you shrink it? It is possible to shrink a LUN, yes, you can do that. I don't think I know of any case where I would recommend it. So from an ONTAP perspective, you can certainly resize a LUN. You can size it down as well as up. But we have a file system on that LUN, and in theory you can defrag it and then you have that space at the end. I've had three people ask me in the last couple of days whether you can do that. The problem is, nobody does that, and so I would just never recommend it. In theory, you can do it: you have to make sure that the file system doesn't use that space at the end, and then you can shrink it down. But I always say, I think you'll be the first person in the world to do it, and I wouldn't recommend it. It is possible to do. Space is freed within the volume - manual snapshot deletion Volume is manually resized
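The reclaim step can be sketched the same way. This is a hypothetical Python illustration of the accounting only; the actual resize would be a manual volume-size change on the controller:

```python
# After manually deleting snapshots in vol2, resize it back down; the
# difference returns to the aggregate's shared free pool.
AGGR_SIZE = 330
sizes = {"vol1": 110, "vol2": 120, "vol3": 100}  # GB, from the previous slide

sizes["vol2"] = 110  # manual resize after freeing space inside vol2
free = AGGR_SIZE - sum(sizes.values())
print(free)  # 10GB back in the pool, available to any of the volumes
```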

30 What is Flexible or Thin Provisioning Types of Thin Provisioning
Agenda What is Flexible or Thin Provisioning Types of Thin Provisioning ONTAP Variables Configurations Default Configuration Snapshot - Snapshot Auto delete Snapshot - Volume Auto grow Snapshot - Volume guarantee none LUN - Volume guarantee none LUN - LUN reservation disabled Transcript: How are we doing on time? We're getting a little tight. Okay, we'll go through snapshot with volume guarantee set to none. Author’s Original Notes: Goal is to cover the concepts surrounding provisioning and space usage not to cover specific commands in detail We are going to quickly cover the variables involved. Those were covered in more detail in the kickoff presentation. That section is more review and we want to spend some more time on the best practices configurations talking about how they work and the advantages and disadvantages of both configurations are.

31 Thin Provisioning – Snapshot Space Guarantee = none
None Guarantee Configuration: Guarantee = none LUN reservation = on Fractional_reserve = 0% Snap_reserve = 0% Auto_delete = volume Auto_grow = off Try_first = auto_delete Transcript: So I want to go through this configuration. In this case, I've got volume guarantee set to none. I've got LUN reservation set to on, and that's only going to determine how space is taken out of the volume, not how it's taken out of the aggregate. I'm not going to have fractional_reserve set. I'm going to have snap_reserve set to zero as well here. And I'm going to do auto_delete at the volume level, so if the volume actually fills up, it'll start auto deleting. I don't think it makes any sense to have auto_grow and the none guarantee at the same time. It just doesn't seem to make any sense to me, so we're going to set that to off. Yes, a question? AUDIENCE QUESTION: (Inaudible). So the question was, can I set fractional_reserve to some value if I have guarantee set to none? You know, I can still set fractional_reserve. And what fractional_reserve will do is it'll start to deal with how space is allocated out of my volume. But, again, it doesn't make very much sense, because it's still not going to be reserved out of my aggregate. So if I'm not reserving that space out of the aggregate, it probably doesn't make sense to be reserving it out of the volume either. So I can set it, and when I look at the accounting inside a volume it'll still be used, but basically setting a guarantee to none says: I don't really care what you're doing at the volume level, I'm not going to reserve any space in the aggregate. It doesn't really make sense to set it. So you can, but it doesn't make sense. AUDIENCE QUESTION: (Inaudible). So you said you haven't been able to? I'm pretty sure that you can, but I might be wrong about that. It doesn't matter if you can or not; it doesn't make sense to do it.
Author’s Original Notes: Even with a none guarantee you cannot consume more space than the volume size

32 Thin Provisioning – Snapshot Space Guarantee = none
Positives Uses shared space from the aggregate Negatives Volumes are not independent of one another Activity in some volumes can cause all volumes with a none guarantee to go offline Generally only makes sense with a large number of volumes Transcript: So the positive of using a none guarantee is it really allows you to use a shared space from the aggregate, it's very easy to do that. The disadvantage is the volumes are very much dependent on one another. They're not independent of one another in any way. Basically one volume can affect all the other volumes and cause those to go offline. And that's why we said with auto grow, they were somewhat dependent on one another. In the none case, they're very dependent on one another. In general this configuration I think only makes sense with a relatively large number of volumes.

33 Thin Provisioning – Snapshot Space Guarantee = none - Example
volume used / size / consumed
vol1 = 0 / 150 / 0
vol2 = 0 / 150 / 0
vol3 = 0 / 150 / 0
total = 0 / 450 / 0
free = 330
Transcript: We'll go through the same type of configuration. In this case I created my volumes to be 150-gigabytes each. You'll notice that the space consumed is zero. In the none case, it doesn't matter how big things are. The size of a volume just determines how much space that single volume can consume; it's still a limiting factor. But it doesn't determine how much is actually consumed at any particular time. My used is always going to be the same as my consumed in this case. At volume creation no space is allocated from the aggregate
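The none-guarantee accounting contrasts directly with the earlier volume-guarantee sketch; again a hypothetical Python illustration using the slide's numbers:

```python
AGGR_SIZE = 330  # GB usable in the aggregate

def consumed_with_none(volumes):
    # With guarantee = none, only data actually written consumes aggregate
    # space; a volume's size is just a cap on that one volume, so the
    # three 150GB volumes here take nothing at creation.
    return sum(used for used, _size in volumes)

volumes = [(0, 150), (0, 150), (0, 150)]  # (used, size) per volume
consumed = consumed_with_none(volumes)
free = AGGR_SIZE - consumed
print(consumed, free)  # 0 consumed, 330 free, though 450GB of volumes exist
```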

34 Thin Provisioning – Snapshot Space Guarantee = none - Example
volume used / size / consumed
vol1 = 40 / 150 / 40
vol2 = 100 / 150 / 100
vol3 = 100 / 150 / 100
total = 240 / 450 / 240
free = 90
Transcript: So I go and I start doing a bunch of writes: 40-gigabytes out of Volume 1, 100-gigabytes out of each of the other two. So if I look, again, my used is the same as my consumed in this case, so 240. I've got 90-gigabytes free in my aggregate.

35 Thin Provisioning – Snapshot Space Guarantee = none - Example
volume used / size / consumed
vol1 = 90 / 150 / 90
vol2 = 120 / 150 / 120
vol3 = 120 / 150 / 120
total = 330 / 450 / 330
free = 0
Transcript: Now if I go and use more space, Volume 1 is up to 90-gigabytes, and Volume 2 and Volume 3 are up to 120; now my aggregate is full at this point in time. So this kind of highlights both the advantages and the disadvantages. In this case, I was able to have Volume 2 and Volume 3 both grow to 120. I couldn't do that in my last case; they wouldn't have had enough space because they would have run out of space before that. So that's the advantage. And the reason they can do that is they're using space that Volume 1 is not using. It's only using 90, and it uses the aggregate as one big free pool, right? There's no space reserved for an individual volume. It also shows the disadvantage: if at this point I do a write and one of these volumes needs more space, whether Volume 3, Volume 2 or even Volume 1, all the volumes are going to go offline. They don't have enough space; there's no space left. In the other configuration, a particular volume might go offline, my LUNs might go offline from a particular volume, but my other volumes would have continued to run. In this case, Volume 1, for example, has been very well behaved. It hasn't gone above 100-gigabytes, or even close to the 150 that's allocated, but it's very much affected by what happened to Volume 3 or Volume 2. It'll go offline if one of those volumes now uses the space. So they're very dependent; it basically deals with my aggregate as one big free pool. AUDIENCE QUESTION: (Inaudible). No. So the question was about the SMAI products: you know, in SnapManager, will it go ahead and does it have these options available? The SMAI group has been extremely conservative in terms of wanting to do anything regarding thin provisioning.
That's what I was saying: I think we'll start to see more of a change there, that they'll be more willing to add some features to allow us to take better advantage of that. So today in SnapManager, you can't set those kinds of things. We'll go through it a little bit in LUNs. I have, I think, about two minutes left, so we're going to try to pound through these last two. Author’s Original Notes: Even though volume 1 stayed within its constraints it will go offline. That’s the price of volume 1 making its space available to the other volumes Point out that 3 volumes is a reasonable number of volumes to be doing this! Highlights both the advantage and disadvantage of a none guarantee Allows vol2 and vol3 to use space which vol1 is not using If any volume consumes more space all volumes with a guarantee of none are going to go offline on the next write
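The failure mode described here reduces to a simple check. This is a hypothetical sketch; the exact ONTAP behavior once autodelete has nothing left to free should be verified:

```python
def write_ok(amount_gb, aggr_free_gb):
    # With guarantee = none and nothing left to autodelete, a write that
    # needs more aggregate space than remains cannot be honored, and every
    # volume with a none guarantee goes offline, including well-behaved
    # ones like vol1 that stayed inside their own size.
    return amount_gb <= aggr_free_gb

assert write_ok(10, 90)    # earlier slide: 90GB free, writes still succeed
assert not write_ok(1, 0)  # aggregate full: even a 1GB write fails
print("next write takes all none-guaranteed volumes offline")
```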

36 Thin Provisioning – LUNs – Guarantee = NONE
Guarantee = none configuration: Guarantee = none LUN reservation = on Fractional_reserve = NA (no snapshots) Snap_reserve = 0% (no snapshots) Auto_delete = snap_reserve (safety measure) Auto_grow = off Try_first = snap_delete Transcript: So the LUN configuration: as I mentioned, when we're thin provisioning LUNs, we're not going to be doing snapshots. So there's two configurations. One is where I set the guarantee to none. Since in general it doesn't make sense to be creating snapshots here, it doesn't really matter what I have these other things set to. The only one I think is kind of an interesting case here is to set auto_delete to snap_reserve. And you'll notice I've got snap_reserve set to 0%. So what that means is it prevents me from keeping snapshots that I didn't want. What happens is if you create a snapshot, it goes away that fast; it gets deleted immediately. So it prevents snapshots from accidentally being created and consuming space that I didn't expect them to consume. Author’s Original Notes: This is actually how you thin provision in the NAS space – creating volumes that are exported that are larger than the aggregate Auto_delete will automatically and immediately delete any snapshots

37 Thin Provisioning – LUNs – Guarantee = NONE
Positives Uses shared free space from the aggregate Works with SDW without issues Negative Limited by BURT Not possible to configure a LUN larger than the actual free space in the aggregate For example, aggregate has 200GB free and you create a 400GB volume It is possible to create 2 x 200GB or 4 x 100GB or 1000 x 100GB LUNs It is not possible to create 1 x 201GB LUN Transcript: And if I look at the positives and the negatives of using a guarantee of none: the positive is I get to use that free space from the aggregate as one big pool. It looks a lot like the example I had with snapshots, with the same advantages and disadvantages. There's one caveat, a BURT, which means I have to be a little bit careful in terms of how I configure LUNs. I can't actually configure a single LUN that is larger than the amount of free space that I have available. And this is a BURT that I think is now targeted for 7.3. I can create a combination of LUNs that's a lot more than my space available; I just can't create any one individual LUN that's larger. So if I have an aggregate with 200-gigabytes free and I create a 400-gigabyte volume with a guarantee of none, I can't create a 201-gigabyte LUN. I can create as many 100-gigabyte LUNs as I want, but that's something that's limited by that BURT. We expect that to be solved in 7.3. Author’s Original Notes: Currently hoping to have this fixed in IC but that is not yet committed
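The limitation described by the BURT can be sketched as a per-LUN check. This is a hypothetical illustration, not an ONTAP API:

```python
def can_create_lun(lun_size_gb, aggr_free_gb):
    # Pre-7.3 limitation described above: with a none guarantee, no single
    # LUN may be larger than the aggregate's current free space, even
    # though the combined size of many LUNs may exceed it.
    return lun_size_gb <= aggr_free_gb

AGGR_FREE = 200  # GB free, inside a 400GB none-guaranteed volume

assert can_create_lun(200, AGGR_FREE)      # 1 x 200GB LUN: allowed
assert all(can_create_lun(100, AGGR_FREE)  # many 100GB LUNs: each allowed
           for _ in range(4))
assert not can_create_lun(201, AGGR_FREE)  # 1 x 201GB LUN: rejected
```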

38 What is Flexible or Thin Provisioning Types of Thin Provisioning
Agenda What is Flexible or Thin Provisioning Types of Thin Provisioning ONTAP Variables Configurations Default Configuration Snapshot - Snapshot Auto delete Snapshot - Volume Auto grow Snapshot - Volume guarantee none LUN - Volume guarantee none LUN - LUN reservation disabled Transcript: So the last configuration we have is LUN reservations disabled. Author’s Original Notes: Goal is to cover the concepts surrounding provisioning and space usage not to cover specific commands in detail We are going to quickly cover the variables involved. Those were covered in more detail in the kickoff presentation. That section is more review and we want to spend some more time on the best practices configurations talking about how they work and the advantages and disadvantages of both configurations are.

39 Thin Provisioning – LUNs – reservation disabled
LUN reservation disabled configuration: Guarantee = volume LUN reservation = off Fractional_reserve = NA (no snapshots) Snap_reserve = 0% (no snapshots) Auto_delete = snap_reserve (safety measure) Auto_grow = off Try_first = NA Transcript: So in this case we're going to set it to volume, we're going to set our guarantee to volume. And we're going to set our LUN reservation off. So what that'll do is it'll allow me to create LUNs that are bigger than the volume that I have. So the size of all my LUNs inside that volume can be bigger than the actual volume size that I have. Again, I'm not going to probably want to be creating snapshots with that. Author’s Original Notes: Autogrow could be added to allow using the shared free space from the aggregate

40 Thin Provisioning – LUNs – reservation disabled
Positives Volumes are independent of one another Amount of thin provisioning can be tuned per volume Negative Doesn’t currently work with SDW With Data ONTAP DSM 3.0, SDW is not needed for multipathing Transcript: So the positives of this are the volumes are independent of one another, and I can tune the amount of thin provisioning on a per volume basis. One of the big disadvantages really was that I couldn't use SnapDrive for Windows. SnapDrive for Windows, if it sees a LUN and you try to talk to it, automatically sets the space reservation back on. So up until now that basically made this an unusable solution, because I needed to have multipathing, and in order to have multipathing, I needed SnapDrive. Well, starting with the Data ONTAP DSM 3.0, the DSM is separated from SnapDrive, so I can do multipathing with the DSM and I can still use non-reserved LUNs. So in that case, I can have a 100-gigabyte volume and create 200-gigabytes of LUNs. I probably wouldn't go to that extreme, right? I'm probably going to have a 100-gigabyte volume and create 120-gigabytes of LUNs, something like that, inside there. And I can do that now that I have the ONTAP DSM; I can work around that issue. I still can't use SnapDrive to connect to it and things like that, but in general, we said we weren't going to be creating snapshots, so it's not as big a disadvantage to not have SnapDrive. We expect this to be fixed in SnapDrive 5.0 that'll be coming out in the spring. Author’s Original Notes: Because SDW checks, and if reservations are disabled it re-enables them. Currently to be fixed in SDW with 5.0 release planned for April SDW limitation is not as big a disadvantage now that multipathing is delivered separately - especially since snapshots are not being created
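The per-volume tuning the speaker mentions amounts to choosing how far the LUN sizes overshoot the volume; a hypothetical sketch of that ratio:

```python
def overcommit(lun_sizes_gb, vol_size_gb):
    # With guarantee = volume but LUN reservation off, the volume's space
    # is still carved out of the aggregate, while the LUNs inside may add
    # up to more than the volume itself. Hypothetical helper for the
    # overcommit ratio.
    return sum(lun_sizes_gb) / vol_size_gb

# The speaker's example: a 100GB volume holding 120GB worth of LUNs.
ratio = overcommit([60, 60], 100)
print(ratio)  # 1.2, i.e. 20% overcommitted, tunable per volume
```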

41 Agenda - Summary Configurations Default Configuration
Snapshot - Snapshot Auto delete Snapshot - Volume Auto grow Snapshot - Volume guarantee none LUN - Volume guarantee none LUN - LUN reservation disabled Transcript: So the last slide I have: we talked about the six configurations. As for the ones that make the most sense, the default configuration, as I said, I don't think is reasonable for most people. I think that the snapshot auto delete and the volume auto grow in particular are two very nice configurations. They're relatively easy to understand, and I think they make a lot of sense. And as far as the LUNs go, if you want to thin provision LUNs, the one that makes sense is to disable the LUN reservation. The none guarantee has some advantages to it, but I think it's more risky and a little harder to manage. So these are probably the three configurations I would recommend; if people want to go out and do thin provisioning, these are the configurations I would give the most thought to. Last question. AUDIENCE QUESTION: (Inaudible). Yes, there's the white paper. And if you have feedback, I'd love that. I need to actually rev it, so if people have feedback and stuff they'd like to see added to that, let me know. I lied, that wasn't my last question. AUDIENCE QUESTION: (Inaudible). There's no application-specific guidance today; we realize we need to add that. I need to let people go. If you have questions, please feel free to come up, but I need to let people go so that they can get to their next class. Author’s Original Notes: Goal is to cover the concepts surrounding provisioning and space usage not to cover specific commands in detail We are going to quickly cover the variables involved. Those were covered in more detail in the kickoff presentation.
That section is more review and we want to spend some more time on the best practices configurations, talking about how they work and what the advantages and disadvantages of each configuration are.

