The Journey to a Hybrid Software Defined Storage Infrastructure S01E05

This is the fifth episode of “The Journey to a Hybrid Software Defined Storage Infrastructure”. It is an IBM TEC Study by Angelo Bernasconi, PierLuigi Buratti, Luca Polichetti, Matteo Mascolo and Francesco Perillo.

More episodes will follow, and there may be an S02 next year as well.

To read the previous episodes, check here:

E01

https://ilovemystorage.wordpress.com/2016/11/23/the-journey-to-a-hybrid-software-defined-storage-infrastructure-s01e01/

E02

https://ilovemystorage.wordpress.com/2016/11/29/the-journey-to-a-hybrid-software-defined-storage-infrastructure-s01e02/

E03

https://ilovemystorage.wordpress.com/2016/12/06/the-journey-to-a-hybrid-software-defined-storage-infrastructure-s01e03/

E04

https://ilovemystorage.wordpress.com/2016/12/13/the-journey-to-a-hybrid-software-defined-storage-infrastructure-s01e04/

Enjoy your reading!

9.    #8 Cloud Storage Problems: How to Avoid Them

Moving storage to the cloud offers some enticing benefits, but only if you can avoid the common cloud storage problems. Here are some of the biggest cloud storage problems you need to be aware of before moving your invaluable data to cloud storage.

  1. Not choosing the right cloud storage provider

The old adage is that no-one got fired for choosing IBM, and when it comes to cloud storage it’s tempting to choose one of the two biggest cloud providers: AWS or Microsoft Azure.

But while they may well be the best choice for many companies, they may not be the best choice for yours. Depending on the size of your organization, it may make sense to look at smaller storage providers who will be able to give you more attention.

The things to look for with other storage providers include:

  • Downtime history, to get an idea of how reliable they have been in the past – and therefore an indication of how reliable they may be in the future.
  • Data accessibility, including what bandwidth they have within their data center, between their data centers and to the Internet.
  • Their pricing structure, including fixed charges and bandwidth charges to move data in and out. A common cloud storage problem is neglecting to establish how easily you can scale your requirements up and down. For example, are you committed to a certain amount of storage every month, or can you pay only for what you use each day, week or month? (A rough comparison is sketched after this list.)
  • Familiarity with your industry vertical. Choosing a storage provider that understands your business and your likely data requirements can make life much easier for you, and failing to choose a good provider that specializes in your industry is a cloud storage pitfall that could put you at a disadvantage compared to your competitors. That’s because service providers familiar with your industry may be better equipped to accommodate your industry’s usage patterns and performance requirements and to demonstrate compliance with relevant industry regulations.
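To make the pricing question concrete, here is a minimal sketch in Python that compares a fixed monthly commitment against pay-per-use pricing over a few months of fluctuating usage. All prices, capacities and the burst rule are invented for illustration; plug in the figures from the providers you are actually evaluating.

```python
# Hypothetical comparison of a fixed monthly commitment vs. pay-per-use pricing.
# All prices and capacities below are illustrative assumptions, not real quotes.

committed_tb = 100            # capacity you commit to every month (TB)
committed_price_per_tb = 20   # $/TB/month under the commitment (assumed)
on_demand_price_per_tb = 28   # $/TB/month when paying only for what you use (assumed)

actual_usage_tb = [60, 70, 85, 100, 120, 90]   # assumed usage over six months

committed_cost = len(actual_usage_tb) * committed_tb * committed_price_per_tb
# Assume bursts above the committed capacity are billed at the on-demand rate.
committed_cost += sum(max(0, u - committed_tb) * on_demand_price_per_tb
                      for u in actual_usage_tb)
on_demand_cost = sum(u * on_demand_price_per_tb for u in actual_usage_tb)

print(f"Committed commitment: ${committed_cost:,}")
print(f"Pay-per-use:          ${on_demand_cost:,}")
```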
  2. Neglecting connectivity

You may have a state of the art network in your data center running at 100Gbps or 10Gbps, with perhaps 10Gbps, 1Gbps or even 100Mbps in the rest of the organization. But when it comes to connectivity with the Internet your bandwidth will likely be much slower – perhaps as low as 10Mbps – and it may well be asymmetric (meaning uploads to a cloud storage provider will be much slower than downloads from it).
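As a rough feel for what a constrained uplink means in practice, the short sketch below estimates how long it would take to push a given amount of data to a cloud provider at different uplink speeds. The data volume, link speeds and efficiency factor are assumptions, not measurements.

```python
# Rough upload-time estimate over a constrained, possibly asymmetric Internet link.
# Link speeds, data volume and the efficiency factor are illustrative assumptions.

def transfer_days(data_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Days needed to move data_tb terabytes over a link_mbps link.
    'efficiency' roughly accounts for protocol overhead and contention."""
    bits = data_tb * 1e12 * 8                      # terabytes (decimal) -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86400

for uplink_mbps in (10, 100, 1000):
    print(f"{uplink_mbps:>5} Mbps uplink: "
          f"{transfer_days(20, uplink_mbps):.1f} days to upload 20 TB")
```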

Cloud storage gateways and other WAN optimization appliances can help alleviate the problem, but if the connectivity to your cloud storage provider is not sufficient then a move to cloud storage is unlikely to enable high enough storage performance to get many of the potential benefits.

  3. Not getting the service level agreement (SLA) right

Most cloud storage providers will offer you a boilerplate SLA outlining their obligations to you and what they will do if things go wrong. But there is no reason why you have to accept it – IDC estimates that about 80% of cloud customers accept the boilerplate SLA they are offered, but 20% negotiate alterations to this boilerplate to ensure that it more closely meets their needs.

For example, a provider may offer you a “four nines” (i.e. 99.99%) uptime guarantee, allowing about 52 minutes’ downtime per year. But this may be calculated on an annual basis, so the service could be down for 50 minutes on the first day of the contract and you would have to wait until the end of the year to find out if the SLA had been breached and you were therefore entitled to any compensation.

 

In the meantime, you would have to bear any resultant losses yourself. To avoid this cloud storage problem, it may be possible to negotiate that while 52 minutes per year is permissible, there should be no more than (say) 15 minutes per month, if that suits your business needs better.
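The arithmetic behind this example is simple enough to check yourself; the sketch below shows how much downtime 99.99% availability allows per year versus per month, which is exactly why the measurement window in the SLA matters.

```python
# How much downtime a "four nines" (99.99%) SLA actually allows, and why the
# measurement window matters. Pure arithmetic, no provider-specific terms.

availability = 0.9999
minutes_per_year = 365 * 24 * 60
minutes_per_month = minutes_per_year / 12

allowed_per_year = (1 - availability) * minutes_per_year    # ~52.6 minutes
allowed_per_month = (1 - availability) * minutes_per_month  # ~4.4 minutes

print(f"Allowed downtime per year:  {allowed_per_year:.1f} minutes")
print(f"Allowed downtime per month: {allowed_per_month:.1f} minutes")
# With an annual window, a single 50-minute outage in January still meets the
# SLA; with a monthly cap of, say, 15 minutes it would not.
```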

  4. Overestimating the compensation you might get if the provider breaches the SLA

It’s tempting to think of an SLA as some kind of insurance policy: your business can survive as long as the terms of the SLA are met, and if they are not you’ll be OK because your cloud storage provider will provide compensation that is tied to the impact on your business of the breach.

But that is simply not the case and it’s a common cloud storage problem. In most cases breach penalties come in the form of service credits (i.e. free storage for a few months), and in the case of a serious breach – such as all your data being lost – the most you should hope for is a monetary payment of three or four times your annual contract value. In many cases that will be nothing like the cost to your business of losing so much data.

Of course, it may be possible to negotiate higher compensation payments from your cloud storage provider, but then it’s likely you will have to pay much more for your storage. In most cases, it would work out cheaper to buy insurance cover from a third party.

  5. Failing to monitor your SLA effectively

Working with a cloud storage provider adds another layer of complexity between the business users who use corporate data and the data itself. The IT department, which monitors the SLA, is somewhere in the middle.

A common cloud storage pitfall when it comes to data access problems is that users or business units may bypass the IT department and go directly to the cloud storage provider’s help desk to resolve issues when they occur. If that happens then you can’t necessarily rely on the provider to record every problem that occurs, and that means accurate monitoring of the SLA is effectively impossible. Avoiding this cloud storage pitfall comes down to educating users that your IT helpdesk should be their first point of contact in all cases.

  6. Failing to get a clear understanding of how to get your data back or move it to another provider

Cloud storage providers may fall over themselves to make it easy for you to give them your data in the first place, perhaps by collecting physical media such as hard disk drives from your data center or offering free data ingress over a network connection. But if you decide that you no longer want to use the provider’s services, it can often prove unexpectedly difficult or expensive to get your data back.

To avoid this cloud storage pitfall, it’s important to get satisfactory answers to the following questions:

  • How will your data be made available – over a network connection or can it be placed on physical storage media for collection?
  • How soon will it be available – will you be expected to wait for days or weeks?
  • How much bandwidth will be available if you plan to download your data? That’s important because even with a 1Gbps link, it would take almost two weeks to get 150TB of data back from a cloud storage provider to your data center (see the sketch after this list).
  • What bandwidth costs will be involved if you move your data back over a network, and what are the costs of having it put on physical media?
  • How long will it take for copies and backups of your data to be deleted, and what formal confirmation can you expect that all copies have been deleted?
  • In what format will data be made available – will it be provided in a .csv file or in some other more closed format?
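Referring back to the bandwidth question above, the following back-of-the-envelope sketch estimates both the transfer time and the egress charges for pulling 150TB back over a 1Gbps link. The per-GB egress price is an assumed, illustrative figure; check your provider’s actual rate card.

```python
# Rough estimate of how long and how much it costs to pull data back out of a
# cloud provider. The egress price is an illustrative assumption, not a quote.

data_tb = 150
link_gbps = 1
egress_price_per_gb = 0.09    # assumed $/GB egress fee, for illustration only

bits = data_tb * 1e12 * 8
seconds = bits / (link_gbps * 1e9)
print(f"Transfer time at {link_gbps} Gbps: {seconds / 86400:.1f} days")   # ~13.9 days

egress_cost = data_tb * 1000 * egress_price_per_gb
print(f"Egress cost at ${egress_price_per_gb}/GB: ${egress_cost:,.0f}")   # ~$13,500
```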

 

  7. Assuming that using a cloud storage provider absolves you of all security responsibilities

Cloud providers are meant to be experts at what they do, including keeping their clouds and the data within them secure. But if there is a data security breach then it is you that your customers will hold responsible and seek compensation from, and it is you that will suffer the embarrassment, loss of reputation and possible loss of business.

That means that to avoid this cloud storage problem it is up to you to do due diligence and satisfy yourself that the security offered by the cloud storage provider is good enough for your needs. To do this you will need to find out as much as possible about the security arrangements that are in place, and what guidelines and regulations (think HIPAA, PCI-DSS, SSAE 16) the provider has been certified to comply with.

  8. Fixating on costs without considering other factors

For many companies one of the key drivers for moving to the cloud is reduced costs, or at the very least a switch from a single large capital expenditure to small regular operating expenditures. While that may be beneficial from a business point of view it’s important to remember that as well as changing how you pay, you are also paying for something fundamentally different.

Cloud storage, in other words, is not the same as your existing data center storage, and as well as new security, compliance and accessibility challenges there are also new performance characteristics to consider. What this boils down to is that some applications that you run in your data center aren’t performance sensitive and are well suited to being used in conjunction with cloud storage. For other applications that’s not the case.

That means that if you decide to use cloud storage for these latter applications then the applications themselves may also have to run in the cloud, close to the cloud storage. And that in turn means that moving your data to cloud storage may need to be part of a far larger consideration of the viability of moving some or all your applications to the cloud.


This is the end of Episode #5. The next episode will come in 2017, because this blog is stepping into the holiday season, as I hope you will soon too!

Thank you for reading…Stay tuned and see you in 2017!

The Journey to a Hybrid Software Defined Storage Infrastructure S01E04

This is the fourth episode of “The Journey to a Hybrid Software Defined Storage Infrastructure”. It is an IBM TEC Study by Angelo Bernasconi, PierLuigi Buratti, Luca Polichetti, Matteo Mascolo and Francesco Perillo.

More episodes will follow, and there may be an S02 next year as well.

To read the previous episodes, check here:

E01

https://ilovemystorage.wordpress.com/2016/11/23/the-journey-to-a-hybrid-software-defined-storage-infrastructure-s01e01/

E02

https://ilovemystorage.wordpress.com/2016/11/29/the-journey-to-a-hybrid-software-defined-storage-infrastructure-s01e02/

E03

https://ilovemystorage.wordpress.com/2016/12/06/the-journey-to-a-hybrid-software-defined-storage-infrastructure-s01e03/

Enjoy your reading!

8.    The SDS Hybrid Architecture

Moving to an SDS Hybrid Infrastructure is a journey, and it needs to go through some specific steps.

Consolidate and Virtualize: storage virtualization is strategic, and a storage monitoring system to discover storage resources, check their dependencies and track changes is imperative as well.

Only through a standardized lifecycle management process will we be able to get:

  • Automated provisioning / de-provisioning
  • Virtual Storage Pools
  • Capture and catalog virtual images used in the data center
  • Management of the virtualized environment

 

Then, by integrating virtualization management with IT service delivery processes, the infrastructure can supply:

  • Elastic scaling
  • Pay for use
  • Self-service provisioning
  • Simplified deployment with virtual appliances
  • Workload / Virtual Servers provisioning and Workload Management
    • Virtual Servers / Hypervisors
    • Dedicated Storage
  • Integrated Infrastructure
    • Server, Storage and Network
    • Specialized storage services
    • Orchestration/Management of the virtualized environment
  • Hybrid Clouds
  • Business Policy Driven
  • Dynamic Infrastructure
  • Data and workload services
  • Based on business policy
  • QOS driven
  • Regulatory compliance
  • Cost and performance optimization
  • Extended Enterprise Infrastructure

The SDS Hybrid infrastructure leverages cloud services.


At the end of the journey, the main goal of an SDS Hybrid Infrastructure is to get a workload-optimized storage system, in other words a storage infrastructure able to match the requirements of the workload. A simple SDS Hybrid Cloud picture can be depicted as follows:

[Figure: SDS Hybrid Cloud architecture]

By applying different data transfer solutions, this architecture will be able, for block data, to:

  • Backup / Archive to the Cloud – physical media
  • Backup / Archive to the Cloud by network – restore from the Cloud to on premise
  • Pre-position data in the Cloud
  • Migrate workloads to the Cloud – run against pre-positioned data

 

 

And for NAS

  • Backup / Archive to the Cloud by network – restore from the Cloud to on premise
  • Pre-position data in the Cloud with Object Store gateway
  • Pre-position data in the Cloud with AFM
  • Migrate workloads to the Cloud – run against pre-positioned data

 

In a cloud environment, we can define a set of storage classes as seen by the guest VMs. Each storage class could have one or more tiers of storage behind it.
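As a minimal illustration of the idea, the sketch below models a catalog of guest-visible storage classes, each backed by one or more tiers. The class names and tier compositions are invented examples, not a prescribed layout.

```python
# Illustrative mapping of guest-visible storage classes to the backing tiers
# behind each class. Class names and tier compositions are assumptions.

from dataclasses import dataclass

@dataclass
class StorageClass:
    name: str     # what the guest VM sees (e.g. exposed as a volume type)
    tiers: list   # backing tiers, fastest first

catalog = [
    StorageClass("gold",   ["all-flash"]),
    StorageClass("silver", ["flash", "enterprise disk"]),
    StorageClass("bronze", ["nearline disk", "object/cloud"]),
]

for sc in catalog:
    print(f"{sc.name:>6}: {' -> '.join(sc.tiers)}")
```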


An agnostic view of the storage technology for cloud that can match cloud storage classes, cloud storage platform services and customer workloads can be summarized as follows:

[Figure: Storage technology for cloud matched to storage classes, platform services and workloads]

Since this study aims to show how it is possible to build an SDS Hybrid Infrastructure, its target is also to show how the IBM SDS portfolio can match and achieve these goals.

In the next picture the IBM SDS Technology for Cloud Product Selection is shown.

[Figure: IBM SDS Technology for Cloud product selection]

8.1    IBM Storage for the Cloud and Cognitive era

The flexibility of hybrid cloud makes it the ideal platform on which to build cognitive solutions. And because data is the resource on which all cognitive solutions depend, IBM storage is the foundation for hybrid cloud and cognitive solutions. It protects that data, delivering it when, where, and how it’s needed with the efficiency, agility, and performance that cognitive solutions demand.

IBM storage delivers a host of powerful capabilities that enable an enterprise to easily exploit the full value of its data while simultaneously reducing the cost of its management to achieve optimal data economics throughout its lifecycle. These are just a few. There are many more. The point is, whatever the use case, IBM storage offers a robust solution that satisfies it.

The key to agility, efficiency and performance in the modern data center is software defined flexibility.

Software Defined Storage takes the intelligence that is in traditional storage systems (which are a combination of proprietary hardware and software) and makes it available to run on commodity hardware.

By decoupling the software from the underlying hardware, the capabilities of a particular software stack can then be deployed wherever and consumed however they are needed – on premises or in the cloud –  as a fully-integrated solution, an appliance, software, a cloud service, or various combinations.

The next paragraph will describe the software defined storage solutions in IBM SDS portfolio.

 

8.1.1    Storage Infrastructure

IBM Spectrum Virtualize helps simplify storage management by virtualizing storage in heterogeneous environments. Among other benefits, virtualization simplifies deployment of new applications and new storage tiers, eases movement of data among tiers, and enables consistent, easy-to-use optimization technologies across multiple storage types.

IBM Spectrum Accelerate™ is a highly flexible storage solution that enables rapid deployment of block storage services for new and traditional workloads, on-premises, off-premises and in a combination of both. Designed to help enable cloud environments, it is based on the proven technology delivered in the IBM XIV Storage System and in use on more than 100,000 servers worldwide.

IBM® Spectrum Scale™ is scale-out file storage for high performance, large scale workloads either on-premises or hybrid cloud. It unifies storage for cloud, big data and analytics workloads to accelerate insights and deliver optimal cost and ease of deployment. It combines enterprise features with performance-aware intelligence to position data across disparate storage hardware, making data available in the right place at the right time.

IBM Cloud Object Storage enables storing and retrieving object data on-premises, in-the-cloud, or both with the ability to easily and transparently move data between them.

8.1.2    Management

IBM® Spectrum Control provides storage and data optimization using monitoring, automation and analytics. It enables organizations to make an easy transition to virtualized, cloud-enabled, and software defined storage environments— because it provides a storage management solution for all types of data and storage. It helps significantly reduce storage costs by helping optimize storage tiers, and simplify capacity and performance management, storage provisioning and performance troubleshooting.

IBM Spectrum Protect provides a single platform for managing backups for virtual and physical machines, and cloud data.  Modern capabilities, such as scalable deduplication and cloud storage access, are delivered entirely in software, eliminating the requirement for deduplication appliances and cloud gateways in many instances.

IBM Spectrum Archive gives organizations an easy way to use cost-effective IBM tape drives and libraries within a tiered storage infrastructure. By using tape libraries instead of disks for Tier 2 and Tier 3 data storage (data that is stored for long-term retention), organizations can improve efficiency and reduce costs.

 

8.1.3    IBM Storage for Cloud and Cognitive era

It is from these software defined capabilities that four key storage platforms emerge for Virtualized Storage, Cloud, Big Data, and Business Critical storage needs. These are the cornerstones of a cognitive storage infrastructure.


And because of their software defined flexibility, they are available in a range of deployment models including fully-integrated solutions, software, cloud services, and appliances. Notice also that we have all-flash offerings in every platform.


Combined with the rest of our storage portfolio, they provide capabilities that enable a business to be more than digital but to marshal valuable data assets with the efficiency, performance, and agility required to be a truly cognitive enterprise.

IBM storage solutions offer flexibility in deployment to make data available where and how it’s needed, in the form most easily consumed by the applications that depend on it, and with the best data economics possible, whether on-premises or off.


Cloud-Scale solutions based on IBM Spectrum Accelerate software are purpose-engineered for the demands of cloud deployments, with strong support for multi-tenancy and quality of service, and deliver consistent high performance even with unpredictable workloads. They dramatically simplify scale-out and management by eliminating tuning, load-balancing, and most other storage management activities. They offer extreme ease of use and task automation, reducing administrative overhead, scale management to many petabytes in a single environment, and come with advanced mirroring, security and other enterprise capabilities including remote replication, multi-tenancy, snapshots and monitoring.

Versatile integration options make cloud infrastructure easy, with rich integrations for the cloud like a REST API, a thorough command line interface, OpenStack Cinder, and deep VMware and Microsoft integrations.

The benefits of IBM Spectrum Accelerate are available as software, as a cloud-service, or in these fully-integrated solutions:

  • The field-proven and much-loved XIV Gen3 storage system, which is our capacity-optimized offering.
  • IBM FlashSystem A9000, which integrates the extreme performance of IBM FlashCore technology, a full-featured data management stack, and flash-optimized data reduction in one very simple and efficient, all-inclusive 8U solution for cloud deployments.
  • And IBM FlashSystem A9000R, designed for the global enterprise with data-at-scale challenges. It is a grid-scale, highly parallel, all-flash storage platform designed to drive business into the cognitive era with performance, MicroLatency response time and the reliability, usability and efficiency needed by today’s enterprise businesses.


All the products of the IBM Storage Portfolio will match the SDS Hybrid Infrastructure requirements and goals:

  1. Be ready for Private, Public and Hybrid Cloud
  2. Be flexible and agnostic thanks to the Storage Virtualization layer
  3. Leverage Flash Technology for Business Critical and Analytics applications
  4. Match backup and DR customer requirements
  5. Be easily managed with a top-down view in a single-pane product.

 

8.1.4    Transparent Cloud Tiering (aka Multi Cloud Storage Gateway)

Given that storage in the cloud will play a crucial role in the future of storage, the cloud or multi-cloud storage gateway (TCT for IBM) will be one of the facilities that drive data from on premises to off premises.

An agnostic view of how the multi-cloud storage gateway will deliver benefits is shown in the next picture:

[Figure: Multi-cloud storage gateway benefits]

IBM already has this technology in some products (Spectrum Scale), and it will be ready on other SDS products in the near future as well. Currently this technology is called Transparent Cloud Tiering (TCT); it may potentially have a different name in the future.


This technology will potentially (the roadmap needs to be confirmed from time to time) be present in all IBM SDS portfolio products, and also in the other storage subsystems that currently represent the IBM enterprise storage offering, like the DS8000 and XIV families.


The MCStore technology is already available in IBM Spectrum Scale and can give benefits in the following use cases:

  • Enable a secure, reliable, transparent cloud storage tier in Spectrum Scale with single namespace
    • Based on GPFS Information LifeCycle Management (ILM) policies
    • Leveraging GPFS Light-Weight Event technology (LWE)
  • Supported Clouds
    • AWS S3 (Amazon) and Swift

 

This solution will do several things for you.

  1. Because we are looking at the last read date, data that is still needed, but that you are highly unlikely to read again, can be moved automatically to the cloud. If a system needs the file/object, there is no re-coding to be done, because the namespace doesn’t change. (The selection logic is sketched after this list.)
  2. If you run out of storage and need to ‘burst’ out because of some monthly/yearly job, you can move data around to help free up space on-prem or write directly out to the cloud.
  3. Data protection such as snapshots and backups can still take place. This is valuable to many customers, as they know the data doesn’t change often but like the idea that they do not have to change their recovery process every time they want to add new technology.
  4. Cheap disaster recovery. Scale does have the ability to replicate to another system, but as these systems grow beyond multiple petabytes, replication becomes more difficult. For the most part you are going to need to recover the most recent (~90 days) of data that runs your business. Inside Scale is the ability to create mirrors of data pools. One of those mirrors could be the cloud tier, where your most recent data is kept in case there is a problem in the data center.
  5. It allows you to start small and work your way into a cloud offering. Part of the problem some clients have is they want to take on too much too quickly. Because Scale allows customers to have data in multiple clouds, you can start with a larger vendor like IBM and then when your private cloud on OpenStack is up and running you can use them both or just one. The migration would be simple as both share the same namespace under the same file system. This frees the client up from having to make changes on the front side of the application.
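The selection logic mentioned in point 1 can be illustrated with a small stand-alone script: it walks a directory tree and flags files whose last access time is older than a threshold as candidates for the cloud tier. In Spectrum Scale this decision would be expressed as an ILM policy rule rather than a script; the code below is only a sketch of the underlying idea, and the mount point and 90-day threshold are assumptions.

```python
# A minimal sketch of "move data that has not been read recently to the cloud
# tier". This is NOT the Spectrum Scale ILM mechanism, only the selection idea.

import os
import time

COLD_AFTER_DAYS = 90   # assumed threshold for "unlikely to be read again"

def cold_files(root: str, cold_after_days: int = COLD_AFTER_DAYS):
    """Yield paths whose last access time (atime) is older than the threshold."""
    cutoff = time.time() - cold_after_days * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:   # not read recently
                    yield path
            except OSError:
                continue                              # file vanished or unreadable

if __name__ == "__main__":
    for path in cold_files("/data"):                  # assumed mount point
        print("candidate for cloud tier:", path)
```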

 

Today this feature is offered as an open beta only. The release is coming soon as they are tweaking and doing some bug fixes before it is generally available. Here is the link to the DevWorks page that goes into more about the beta and how to download a VM that will let you test these features out.

http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html

When addressing a solution with data replicated to the cloud using MCStore or a multi-cloud storage gateway, it is more correct to talk about a backup-in-cloud solution than about disaster recovery: since the data replicated, moved or backed up in the cloud is stored as objects, it cannot be used immediately for DR purposes with the usual minimum RPO or RTO, but it will be available for restore purposes as needed.

Talking about IBM Spectrum Virtualize, what we expect in the near future with V7.8 is shown in the following picture, where:

[Figure: IBM Spectrum Virtualize V7.8 with the MCS Gateway]

 

 

 

  • User configures Back Up / DR for production volumes via Spectrum Virtualize GUI
  • MCS Gateway runs alongside Spectrum Virtualize and pulls full or incremental FlashCopy snapshots from Volumes
  • MCS Gateway applies encryption, integrity protection etc. as configured
  • MCS Gateway stores data and metadata to remote object store(s), thus snapshots can be incremental forever
  • User restores a snapshot from the cloud to the original or a new Volume via the Spectrum Virtualize GUI (a conceptual sketch of this flow follows this list)
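To make the “incremental forever” idea tangible, here is a conceptual sketch, not the actual MCS Gateway implementation: each snapshot uploads only the blocks that changed since the previous one, and a per-snapshot index of block digests is enough to restore any point in time. Block size, hashing and the in-memory “object store” are all illustrative assumptions.

```python
# Conceptual sketch of incremental-forever snapshots to an object store.
# Block size, hashing and the dict-based "object store" are assumptions.

import hashlib
import json

BLOCK_SIZE = 4 * 1024 * 1024   # assumed 4 MiB chunks

def snapshot(volume: bytes, previous_index: dict, object_store: dict) -> dict:
    """Upload only changed blocks of 'volume' and return the new snapshot index."""
    index = {}
    for offset in range(0, len(volume), BLOCK_SIZE):
        block = volume[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        index[offset] = digest
        if previous_index.get(offset) != digest:   # changed or new block
            object_store[digest] = block           # stand-in for a PUT to the cloud
    return index

def restore(index: dict, object_store: dict) -> bytes:
    """Rebuild a volume image from one snapshot's index and the stored blocks."""
    return b"".join(object_store[index[off]] for off in sorted(index))

# Tiny usage example with an in-memory dict standing in for the object store.
store = {}
idx1 = snapshot(b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE, {}, store)
idx2 = snapshot(b"A" * BLOCK_SIZE + b"C" * BLOCK_SIZE, idx1, store)   # one block changed
assert restore(idx2, store).endswith(b"C" * BLOCK_SIZE)
print(json.dumps({"unique blocks stored": len(store)}))               # 3, not 4
```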

This is a kind of built-in, easy-to-use “cloud-based time machine for enterprise block storage”, suitable for various use cases:

  • Backup
  • Disaster recovery
  • Data sharing
  • Migration/archiving
  • Compliance/auditing

 

As already said, part of this technology is already present and part will become available in the short to medium term.

In any case, it is something to take into consideration, because its future evolution will be really interesting and will bring tremendous benefits to hybrid SDS cloud infrastructures.

This is the end of Episode #4. The next episode will come shortly.

Thank you for reading…Stay tuned!

The Journey to a Hybrid Software Defined Storage Infrastructure S01E03

This is the third episode of “The Journey to a Hybrid Software Defined Storage Infrastructure”. It is an IBM TEC Study by Angelo Bernasconi, PierLuigi Buratti, Luca Polichetti, Matteo Mascolo and Francesco Perillo.

More episodes will follow, and there may be an S02 next year as well.

To read the previous episodes, check here:

https://ilovemystorage.wordpress.com/2016/11/23/the-journey-to-a-hybrid-software-defined-storage-infrastructure-s01e01/

and

https://ilovemystorage.wordpress.com/2016/11/29/the-journey-to-a-hybrid-software-defined-storage-infrastructure-s01e02/

 Enjoy your reading!

7.    Next Steps
Customers and vendors who have understood the new business needs and trends will be called upon to answer the following questions:
7.1    How to balance the current infrastructure with the new infrastructure?
•    What kind of cloud storage architecture should they target?
•    How does the organization manage the transition to its target state from legacy architecture to cloud-based services-oriented architecture?
•    What changes should be made to development and operating models?
•    What capability and cultural shifts does the organization need?

There have been a lot of questions about Software Defined Storage Hybrid Cloud lately, particularly about how to transition existing solutions onto the cloud platform.
How does a company strike the balance between the applications it has built and the new cognitive and cloud products that can propel the company forward?

They can bring in new capabilities while realizing that not everything will or needs to change.
They can keep what’s working, but ensure it integrates with the new technology.
When deciding where to put investment dollars, they must think about existing solutions and new infrastructure as one portfolio, which makes their investments easier to balance.

7.2    How does the organization manage the transition to its target state from legacy architecture to cloud-based services-oriented architecture?
How long should the transition take and what are the key steps? Should the company wait until cloud and on premise products achieve parity?

The approach also forces them to think deeply about which types of product functionality deliver the sought-after core customer experiences and what they have to emphasize to get that functionality right.
By putting a workable MVP with the most important features in users’ hands as quickly as possible, the team is able both to gather crucial initial customer feedback and to rapidly improve its cloud-based development skills.
7.3    What kind of cloud storage architecture should they target?
Should customers use public infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) solutions or choose the private cloud?
Now, public and private cloud markets are no longer considered to be emerging.
From the storage system perspective, both public and private cloud represent significant parts of the market but impact the enterprise storage systems market in different ways.
Private cloud deployments can be implemented on-premises or off-premises and are largely tuned toward using external storage systems in the back end. The public cloud market is more diverse, covering a broad range of services delivered by a broad range of service providers. Some service providers continue to build their storage infrastructures on traditional arrays, while others move toward software-defined deployments, which leverage commodity x86 servers for the underlying hardware.

The largest service providers mostly source their servers outside the traditional OEM network, going for the most economical products delivered by ODMs. As this market continues to evolve, we will continue to see moves from traditional system OEMs targeted at penetrating this market.

7.3.1    On Premise
An on-premise approach works when the solution has sufficient internal scale to achieve a total cost of ownership comparable to public choices. That typically means it employs several storage subsystems or virtualized storage subsystems. It is also the right choice if at least one of the following five considerations is critical for the specific system or application and therefore precludes the use of the public cloud:
•    data security,
•    performance issues,
•    control in the event of an outage,
•    technology lock-in.
A final factor involves regulatory requirements that might restrict efforts to store customer data outside of set geographic boundaries or prevent the storage of data in multi-tenant environments.
7.3.2    Off Premise
Customers should consider an off-premise solution if the project lacks sufficient scale (it will not involve hundreds of storage subsystems, for example) or if a high degree of uncertainty exists regarding likely demand. An off-premise solution is a more capital-efficient approach, since building a private cloud requires significant resources that the company could probably invest more effectively in its mainstream business.
Another reason to go off premise could be that the system or application is latency tolerant. Experience shows that the latency levels on public clouds can vary by as much as 20 times depending on the time of day. It also makes sense if there are no specific regulatory requirements, beyond performance needs, that applications store the data in a particular place.
Finally: cost. Due to the nature of the off-premise solution, TCO is estimated to be lower than with a traditional on-premise solution (this needs to be evaluated case by case, as sketched below). Customers for whom going to the cloud is acceptable could see significant savings from moving applications to a public cloud solution.
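A back-of-the-envelope TCO comparison can be sketched as follows; every figure in it is an assumption to be replaced with real quotes, which is exactly why the evaluation has to be repeated case by case.

```python
# Back-of-the-envelope TCO comparison: buying on-premises capacity vs. renting
# public cloud capacity. Every number below is an illustrative assumption.

YEARS = 5
capacity_tb = 500

# On premise: up-front purchase plus yearly maintenance, power and administration.
onprem_capex = capacity_tb * 500            # assumed $/TB purchase price
onprem_opex_per_year = onprem_capex * 0.20  # assumed maintenance + power + admin
onprem_tco = onprem_capex + YEARS * onprem_opex_per_year

# Public cloud: pay per TB per month, no capital outlay.
cloud_price_tb_month = 15                   # assumed blended $/TB/month
cloud_tco = capacity_tb * cloud_price_tb_month * 12 * YEARS

print(f"On-premise {YEARS}-year TCO:   ${onprem_tco:,.0f}")
print(f"Public cloud {YEARS}-year TCO: ${cloud_tco:,.0f}")
```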
7.3.3    Hybrid
This is probably the best solution: even if companies decide to go on premise for their most critical applications, many decide to go off premise for certain more basic use cases (dev/test workloads, additional temporary capacity, or DR solutions without specific SLAs in terms of performance, RPO and RTO, for example).
7.4    What changes should be made to development and operating models?
•    Should customer methods be changed?
•    How could this shift affect Storage Software release cycles and compatibility?
•    Will the company have to change the way it engages with customers?

Customers that move to a storage cloud solution must change their development and operating models to:
•    Make modularity work
•    Achieve better technology rationalization
•    Adopt standards
•    Get better scalability and upgrade flexibility

The new SDS hybrid environment will not take as long to release as in the past, thanks to the advantages supplied by the storage cloud facilities; rationalization, together with modularity, scalability and flexibility, will make the new infrastructure easier to deploy and upgrade without wasting the time typically spent in the test phase before release into production.

7.5    What capability and cultural shifts does the organization need?
How should a company build the necessary talent and capabilities, what mindset and behavioral changes does it need, and how does it select the right development, IT, and infrastructure approaches?


7.5.1    5-year forecast
Scaling will become more business-oriented (budgeting, business-object criteria in scaling metrics) instead of technical (storage subsystems, CPU, active sessions).
The technology aspect will be represented by lightweight technologies like containers, microservices and so on.
7.5.2    10-year forecast
As workloads become increasingly portable through the adoption of containerization, cloud brokerage will gain popularity.
The ability of cloud providers to support scaling will not only depend on their own ability to scale but also on the ease of integration with cloud brokers.
Cloud services commoditization will require cloud providers to adapt their business models and service granularity to seamlessly integrate with cloud brokers. Application scaling and resilience will be based on a cross-cloud provider model.
Brokerage compatibility could become more important than a specific scaling ability.
In support of the above capabilities, it is likely that an open source cloud management platform like OpenStack will continue to mature and play a key role in providing compatible management APIs across providers.

 

This is the end of Episode #3. The next episode will come shortly.

Thank you for reading…Stay tuned!