Introduction VMware vSphere Virtual SAN

Good to know, though not entirely clear at first read.

Many of you have seen the announcements by now and I am guessing that you are as excited as I am about the announcement of the public beta of Virtual SAN with vSphere 5.5. What is Virtual SAN, formerly known as “VSAN” or “vCloud Distributed Storage” all about?

read more here: Introduction VMware vSphere Virtual SAN.


Pareto principle applied to Storage Subsystems

In recent months I have had the opportunity to analyze the performance of some storage subsystems for a few customers.
I found something that, at least for me, was a novelty: of 100% of the disk space provided by a storage subsystem, only a small fraction is actually accessed by the bulk of the I/Os, typically no less than 80% of the total.

In the following image you can see an example of this behavior.
The blue line, i.e. the line joining all the points on the scatter plot, represents the total number of I/Os of a storage subsystem analyzed over a given period of time.

As you can see, only 7.5% of the disk space is accessed by 95% of the I/Os. This may be a special case, but typically the relationship is about 20% of the disk space accessed by 80% of the I/Os.

The pink line, i.e. the line joining all the points on the scatter plot, instead shows the behavior of what are considered Hot I/Os relative to the total disk space provided.

As you can see, the hot 80% of the I/Os touches only 22.5% of the total disk space.
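The analysis above can be sketched in a few lines of Python. This is a hypothetical example on synthetic data (the skewed per-extent I/O counts are generated, not taken from the measurements in the post): sort the extents from hottest to coldest, accumulate I/Os, and report how much of the capacity serves the hot 80% of the traffic.

```python
import random

# Synthetic per-extent I/O counts with a heavy-tailed (Pareto-like)
# distribution; each extent represents an equal slice of disk space.
random.seed(42)
num_extents = 1000
io_counts = [random.paretovariate(1.2) for _ in range(num_extents)]

# Sort extents from hottest to coldest and accumulate their I/Os.
io_counts.sort(reverse=True)
total_io = sum(io_counts)

cumulative = 0.0
for i, ios in enumerate(io_counts, start=1):
    cumulative += ios
    if cumulative / total_io >= 0.80:  # reached the hot 80% of I/Os
        hot_capacity_pct = 100.0 * i / num_extents
        break

print(f"80% of the I/Os hit {hot_capacity_pct:.1f}% of the disk space")
```

On real data the same loop would run over per-LUN or per-extent counters exported by the array; the shape of the result (a small hot fraction) is what motivates the tiering argument below.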

This behavior reminded me of the Pareto Principle:

http://en.wikipedia.org/wiki/Pareto_principle

The Pareto principle (also known as the 80–20 rule, the law of the vital few, and the principle of factor sparsity) states that, for many events, roughly 80% of the effects come from 20% of the causes.

Applied to computer science, for example:

In computer science and engineering control theory, such as for electromechanical energy converters, the Pareto principle can be applied to optimization efforts. For example, Microsoft noted that by fixing the top 20% of the most-reported bugs, 80% of the errors and crashes would be eliminated.

So, the question is:

Can we apply the Pareto Principle when configuring a storage subsystem?

My personal answer is YES!

Given a known I/O profile, applying the Pareto Principle we could configure a disk subsystem with a small proportion of high-performance disks and a high percentage of low-performance disks.
In this way you optimize the cost of the subsystem from every point of view:

* $ per TB
* $ per footprint
* $ for cooling and power

In addition, if the allocation of disk space according to performance demands were automated, or left to an automatic tiering engine, everything would also be much easier to manage.
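A back-of-the-envelope cost model makes the point concrete. All prices below are illustrative assumptions (not vendor figures): it compares a uniform all-fast subsystem against a Pareto-style tiered layout with roughly 20% fast disks for the hot data and 80% capacity disks for the cold data.

```python
# Hypothetical cost model; the $/TB figures are assumptions for
# illustration only.
total_tb = 100                 # usable capacity required
fast_cost_per_tb = 2000.0      # assumed $/TB for high-performance disks
slow_cost_per_tb = 300.0       # assumed $/TB for capacity disks

# Option 1: provision everything on high-performance disks.
all_fast = total_tb * fast_cost_per_tb

# Option 2: Pareto split, fast disks only for the hot ~20% of the space.
hot_fraction = 0.20
tiered = (total_tb * hot_fraction * fast_cost_per_tb
          + total_tb * (1 - hot_fraction) * slow_cost_per_tb)

print(f"all-fast: ${all_fast:,.0f}")
print(f"tiered:   ${tiered:,.0f}")
print(f"saving:   {100 * (1 - tiered / all_fast):.0f}%")
```

The same arithmetic extends to footprint and power, since capacity disks also tend to pack more TB per rack unit and per watt.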

For now I think I have given you enough food for thought, and I would like to know whether you have had similar experiences or see things differently.

EMC demotes VMAX into ‘capacity tier’

Interesting news circulating on many blogs over the last few days.

Storage CH Blog

Post by Simon Sharwood (thank you) over at El Reg

EMC Chief Technology Officer John Roese has all-but-demoted the company’s Symmetrix VMAX arrays from their position atop the company’s prow, today declaring the arrays are the biggest and baddest offering in a “capacity tier” of products that possess lesser grunt than a “performance tier” of products.

Read on here
