Jul 17, 2017

The Right Way to Size Your Private Cloud Storage Environment

Paul Painter, Director, Solutions Engineering

Private cloud storage is an increasingly popular mechanism for securing data in virtualized environments, but knowing how to rightsize an environment for current and future needs is one of the most important facets of a successful strategy.

In this post, I’ll briefly explain my process for sizing cloud storage environments and share with you the most important element of any solution.

The No. 1 Thing Your Private Cloud Storage Solution Needs

Once the metrics are collected and the capacity/performance requirements are identified, it’s time to process them. Back in the day, this was a fairly manual process. While vendors did offer tools to size storage arrays, those tools were, for the most part, not self-serve and were available only to internal engineers and resellers.

Regardless, the logical question, both then and now, is this: “What should we do if our storage requirements change or the workload changes its behavior?”

The answer most of us had back then? “We do a re-assessment and re-sizing and then propose a new storage solution.” In other words, not so useful.

The correct answer today? “The storage solution employed is adaptive, flexible, proactive and intelligent.”  

Again, while there’s a plethora of blog posts highlighting the pros and cons of each competing storage vendor on the market, the above answer is non-negotiable. A service provider must be able to spell out how quickly and effectively it can react to workload changes or performance concerns.

These days in IT, it’s a crime not to leverage data analytics to predict the behavior of your storage solution. That, in turn, lets you address performance issues before they arise or, at the very least, surfaces them for your attention.

Service providers like HorizonIQ rightsize each and every storage solution. But we augment those initial assessments with technology that allows us to proactively account for unpredictable changes.

For our cloud backup environments, we use Nimble Storage’s InfoSight analytics to flag workloads as “estimated to exceed the defined boundaries” or “requires a performance or capacity upgrade.”
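To make the idea concrete, here’s a minimal sketch of that kind of predictive check: a linear fit over recent capacity samples that estimates when a workload will cross its defined boundary. This illustrates the concept only; it is not the InfoSight API, and the figures are hypothetical.

```python
# Hypothetical sketch: project capacity growth with a simple linear fit
# and flag workloads estimated to exceed a defined boundary.
from statistics import linear_regression  # Python 3.10+

def days_until_full(daily_usage_gb: list[float], capacity_gb: float) -> float | None:
    """Estimated days until capacity is exceeded, or None if usage is flat/shrinking."""
    days = list(range(len(daily_usage_gb)))
    slope, intercept = linear_regression(days, daily_usage_gb)
    if slope <= 0:
        return None
    return (capacity_gb - daily_usage_gb[-1]) / slope

# Example: 30 days of samples growing ~50 GB/day toward a 25,000 GB boundary
usage = [18_000 + 50 * d for d in range(30)]
print(days_until_full(usage, 25_000))  # ~111 days until the boundary is hit
```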

Private Cloud Storage Sizing Examples

In Part 1, I identified the primary metrics needed to complete the sizing: data size, storage block size, bandwidth, IOPS and read/write ratio. After collecting the data, we can estimate the storage requirements. Take a look at two hypothetical workload examples below:

File server workload assessment results:

  • Data: 20 TB
  • Total IOPS: 8,000
  • Reads: 60%
  • Writes: 40%
  • Block size: 4 KB
  • Total Storage Throughput: ??

MS SQL workload assessment results:

  • SQL Data: 300 GB
  • SQL Logs: 50 GB
  • Total IOPS: 20,000
  • Reads: 60%
  • Writes: 40%
  • Block size: ranges from 512B to 8MB
  • Total Storage Throughput: ??

Clearly, we’re missing our bandwidth numbers. For the file server, the calculation is easy:

Total number of IOPS * Block size = Total storage throughput for the workload type

8,000 × 4 KB = 32,000 KB/s, or 32 MB/s

(Please note: Total IOPS is assumed to be the peak figure, and the workload is estimated not to exceed it.)
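In code form, the same arithmetic looks like this (a minimal sketch using the file server figures from above):

```python
# Throughput = IOPS * block size, using the file server numbers above
iops = 8_000
block_size_kb = 4

throughput_kb_s = iops * block_size_kb
print(f"{throughput_kb_s:,} KB/s ({throughput_kb_s / 1000:.0f} MB/s)")  # 32,000 KB/s (32 MB/s)
```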

But how do we estimate it for the SQL workload, where the block size ranges? This is where your storage vendor’s tools come in handy. Nimble Storage SiteAnalyzer, for instance, offers an option to select “real world block sizes seen for this application type” and leverage the appropriate ratios of the different block sizes observed in SQL environments.
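To show why that matters, here is a sketch of the underlying math: a weighted-average block size computed from a block-size mix, which then drives the throughput estimate. The mix below is a made-up distribution for illustration, not Nimble’s real-world data.

```python
# Hypothetical SQL block-size mix (illustrative ratios only)
block_mix = {          # block size in KB -> share of total IOPS
    0.5: 0.05,         # 512 B log writes
    8.0: 0.70,         # 8 KB data pages
    64.0: 0.20,        # 64 KB extent reads
    256.0: 0.05,       # large read-ahead I/O
}

iops = 20_000
avg_block_kb = sum(size * share for size, share in block_mix.items())
throughput_mb_s = iops * avg_block_kb / 1000
print(f"avg block ~{avg_block_kb:.1f} KB -> ~{throughput_mb_s:,.0f} MB/s peak")
```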

From here, all we have to do is plug the numbers above into the sizing tool. In this example, the recommended array is a CS3000 with 9.6 TB of raw SSD cache and 25 TB of usable capacity on spinning disk.

But how does this address specific environment requirements like latency? For example: “Across 100 workloads, latency shouldn’t exceed 10 ms.”

Thanks to the sizing tool, we can pick the configuration that estimates a 99 percent probability for a cache hit (data served off SSD), which in turn ensures the fastest response for the application without forcing us to break the bank and acquire an all-flash solution.
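A quick expected-latency estimate shows why the cache-hit rate is the lever that matters. The SSD and HDD service times below are assumptions for illustration:

```python
# Expected latency as a weighted average of SSD (cache hit) and HDD (cache miss)
# service times. The latency values are illustrative assumptions.
def effective_latency_ms(hit_rate: float, ssd_ms: float = 0.5, hdd_ms: float = 8.0) -> float:
    return hit_rate * ssd_ms + (1 - hit_rate) * hdd_ms

print(effective_latency_ms(0.99))  # ~0.575 ms: comfortably under a 10 ms target
print(effective_latency_ms(0.80))  # ~2.0 ms: still fine, with less headroom
```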

Most importantly: should any of the assessed parameters change by the time the project goes live on the new storage array, we can easily collect performance metrics from the array and proactively identify any bottleneck, whether it’s a performance or a capacity issue, that requires attention.
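As a simple illustration of that kind of check, the sketch below flags any collected metric that exceeds its defined boundary. The metric names and thresholds are arbitrary examples:

```python
# Minimal sketch: flag whichever dimension (performance or capacity) of the
# collected array metrics needs attention. Thresholds are arbitrary examples.
def find_bottlenecks(metrics: dict[str, float], limits: dict[str, float]) -> list[str]:
    return [name for name, value in metrics.items() if value > limits.get(name, float("inf"))]

observed = {"latency_ms": 12.4, "capacity_used_pct": 71.0, "iops": 9_500}
limits = {"latency_ms": 10.0, "capacity_used_pct": 85.0, "iops": 8_000}
print(find_bottlenecks(observed, limits))  # ['latency_ms', 'iops']
```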
