This is the third episode of “The Journey to a Hybrid Software Defined Storage Infrastructure”. It is an IBM TEC study by Angelo Bernasconi, PierLuigi Buratti, Luca Polichetti, Matteo Mascolo, and Francesco Perillo.
More episodes will follow, and there may be an S02 next year as well.
To read the previous episodes, check here:
7. Next Steps
Customers and vendors who understand the new business needs and trends will be called on to answer the following questions:
7.1 How to balance between current infrastructure and new infrastructure?
• What kind of cloud storage architecture should they target?
• How does the organization manage the transition to its target state, from a legacy architecture to a cloud-based, services-oriented architecture?
• What changes should be made to development and operating models?
• What capability and cultural shifts does the organization need?
There have been a lot of questions about Software Defined Storage Hybrid Cloud lately, particularly about how to transition existing solutions onto the cloud platform.
How does a company strike the balance between the applications it has built and the new cognitive and cloud products that can propel it forward?
Companies can bring in new capabilities while realizing that not everything will, or needs to, change.
They can keep what is working, but ensure it integrates with the new technology.
When deciding where to put investment dollars, they must think about existing solutions and new infrastructure as one portfolio, which makes their investments easier to balance.
7.2 How does the organization manage the transition to its target state, from a legacy architecture to a cloud-based, services-oriented architecture?
How long should the transition take, and what are the key steps? Should the company wait until cloud and on-premises products achieve parity?
The approach also forces them to think deeply about which types of product functionality deliver the sought-after core customer experiences, and what they have to emphasize to get that functionality right.
By putting a workable MVP with the most important features in users' hands as quickly as possible, the team can both gather crucial initial customer feedback and rapidly improve its cloud-based development skills.
7.3 What kind of cloud storage architecture should they target?
Should customers use public infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) solutions, or choose a private cloud?
Today, the public and private cloud markets are no longer considered emerging.
From the storage system perspective, both public and private cloud represent significant parts of the market but impact the enterprise storage systems market in different ways.
Private cloud deployments can be implemented on-premises or off-premises and are largely tuned toward using external storage systems in the back end. The public cloud market is more diverse, covering a broad range of services delivered by a broad range of service providers. Some service providers continue to build their storage infrastructures on traditional arrays, while others move toward software-defined deployments, which leverage commodity x86 servers for the underlying hardware.
The largest service providers largely source their servers outside the traditional OEM network, going for the most economical products delivered by ODMs. As this market continues to evolve, we will keep seeing moves from traditional system OEMs aimed at penetrating it.
7.3.1 On Premise
An on-premises deployment works when the solution has sufficient internal scale to achieve a total cost of ownership comparable to public choices. That typically means it employs several storage subsystems or virtualized storage subsystems. It is also the right choice if at least one of the following five considerations is critical for the specific system or application and therefore precludes the use of the public cloud:
• data security,
• performance issues,
• control in the event of an outage,
• technology lock-in,
• regulatory requirements.
The last factor involves regulatory requirements that might restrict efforts to store customer data outside set geographic boundaries or prevent the storage of data in multi-tenant environments.
7.3.2 Off Premise
Customers should consider an off-premises solution if the project lacks sufficient scale (it will not involve hundreds of storage subsystems, for example) or if a high degree of uncertainty exists regarding likely demand. An off-premises solution is a more capital-efficient approach, since building a private cloud requires significant resources that the company could probably invest more effectively in its mainstream business.
Another reason to go off premises is that the system or application is latency tolerant. Experience shows that latency levels on public clouds can vary by as much as 20 times depending on the time of day, so only workloads that tolerate such variance are good candidates. Going off premises also makes sense if there are no specific regulatory requirements, beyond performance needs, that applications store the data in a particular place.
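To make the latency-tolerance criterion concrete, a minimal sketch of the kind of check a team might run is shown below. The latency samples are entirely made up for illustration; only the worst-to-best ratio computation reflects the point in the text.

```python
# Illustrative latency-variance check. The hourly p99 samples below are
# fabricated; real values would come from probing the cloud endpoint
# over at least a full day.
hourly_p99_ms = [12, 15, 11, 14, 220, 180, 13, 16]  # fabricated samples

# Ratio between the worst and best observed window; the text notes
# public cloud latency can vary by as much as 20x across a day.
variance_ratio = max(hourly_p99_ms) / min(hourly_p99_ms)
print(f"worst/best latency ratio: {variance_ratio:.0f}x")
```

If an application cannot absorb swings of this magnitude, it is a weak candidate for an off-premises deployment.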
Finally: cost. Due to the nature of the off-premises solution, TCO is estimated to be lower than that of a traditional on-premises solution (this needs to be evaluated from time to time). Customers who can accept moving to the cloud could see significant savings from moving applications to a public cloud solution.
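The TCO comparison mentioned above can be sketched with a back-of-the-envelope calculation. All figures here are hypothetical placeholders, not real pricing; the structure (capex plus opex versus pure pay-as-you-go) is the point.

```python
# Hypothetical TCO comparison sketch; every number is an illustrative
# assumption, not real vendor pricing.

def on_prem_tco(capex, annual_opex, years):
    """On-premises TCO: up-front hardware/software spend plus
    yearly operating cost."""
    return capex + annual_opex * years

def off_prem_tco(monthly_fee_per_tb, capacity_tb, years):
    """Off-premises (public cloud) TCO: pay-as-you-go, no up-front
    capital expense."""
    return monthly_fee_per_tb * capacity_tb * 12 * years

# Illustrative numbers for a 100 TB workload over 5 years
on_prem = on_prem_tco(capex=250_000, annual_opex=40_000, years=5)
cloud = off_prem_tco(monthly_fee_per_tb=25, capacity_tb=100, years=5)
print(f"on-prem:  ${on_prem:,}")
print(f"off-prem: ${cloud:,}")
```

As the text cautions, the inputs shift over time, so a comparison like this needs to be re-run periodically rather than computed once.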
A hybrid approach is probably the best solution: even if companies decide to go on premises for their most critical applications, many decide to go off premises for certain more basic use cases (dev/test workloads, additional temporary capacity, or DR solutions without specific SLAs in terms of performance, RPO, and RTO, for example).
7.4 What changes should be made to development and operating models?
• Should customer methods be changed?
• How could this shift affect Storage Software release cycles and compatibility?
• Will the company have to change the way it engages with customers?
Customers that move to a Storage Cloud solution must change their development and operating models to:
• Make modularity work
• Achieve better technology rationalization
• Adopt standards
• Get better scalability and upgrade flexibility
Thanks to the advantages supplied by the Storage Cloud facilities, a new SDS hybrid environment will not take as long to release as in the past; rationalization, together with modularity, scalability, and flexibility, will make the new infrastructure easier to deploy and upgrade without wasting the time typically spent in the test phase before release into production.
7.5 What capability and cultural shifts does the organization need?
How should a company build the necessary talent and capabilities, what mindset and behavioral changes do they need, and how do they select the right development, IT, and infrastructure approaches?
7.5.1 5-year forecast
Scaling will become more business-oriented (budgeting, business-object criteria in scaling metrics) instead of technical (storage subsystems, CPU, active sessions).
The technology aspect will be represented by lightweight technologies like containers, microservices, and so on.
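The shift from technical to business-oriented scaling metrics can be illustrated with a small sketch. The function below scales on a business object (orders in flight) capped by budget rather than on CPU or session counts; the names and thresholds are invented for illustration.

```python
# Illustrative business-oriented scaling decision. All names and
# thresholds are hypothetical; a real system would feed live business
# metrics into an autoscaler instead of this toy function.

def desired_replicas(orders_in_flight: int,
                     orders_per_replica: int,
                     max_replicas_by_budget: int) -> int:
    """Scale on a business object (orders), capped by budget."""
    # Ceiling division: replicas needed to serve the current load
    needed = -(-orders_in_flight // orders_per_replica)
    # Never exceed what the monthly budget allows, keep at least one
    return max(1, min(needed, max_replicas_by_budget))

print(desired_replicas(orders_in_flight=950, orders_per_replica=200,
                       max_replicas_by_budget=4))
```

The technical resources (containers, microservice instances) still do the scaling, but the inputs driving the decision are business figures, which is the forecast the text makes.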
7.5.2 10-year forecast
As workloads become increasingly portable through the adoption of containerization, cloud brokerage will gain popularity.
The ability of cloud providers to support scaling will not only depend on their own ability to scale but also on the ease of integration with cloud brokers.
Cloud services commoditization will require cloud providers to adapt their business models and service granularity to seamlessly integrate with cloud brokers. Application scaling and resilience will be based on a cross-cloud provider model.
Brokerage compatibility could become more important than a specific scaling ability.
In support of the above capabilities, it is likely that an open-source cloud management platform like OpenStack will continue to mature and play a key role in providing compatible management APIs across providers.
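The brokerage-compatibility idea above can be sketched as a small abstraction: a broker talks to every provider through one common interface and places workloads across them. The provider classes and their prices below are entirely invented; none correspond to a real vendor API.

```python
# Hypothetical cloud-broker sketch. ProviderA/ProviderB and their
# per-GB prices are invented for illustration only.
from abc import ABC, abstractmethod

class StorageProvider(ABC):
    """Common interface a broker-compatible provider must expose."""
    @abstractmethod
    def price_per_gb(self) -> float: ...
    @abstractmethod
    def provision(self, gb: int) -> str: ...

class ProviderA(StorageProvider):
    def price_per_gb(self) -> float: return 0.023
    def provision(self, gb: int) -> str: return f"A:{gb}GB"

class ProviderB(StorageProvider):
    def price_per_gb(self) -> float: return 0.020
    def provision(self, gb: int) -> str: return f"B:{gb}GB"

class CloudBroker:
    """Places a request with the cheapest compatible provider."""
    def __init__(self, providers):
        self.providers = providers
    def provision(self, gb: int) -> str:
        cheapest = min(self.providers, key=lambda p: p.price_per_gb())
        return cheapest.provision(gb)

broker = CloudBroker([ProviderA(), ProviderB()])
print(broker.provision(500))  # ProviderB wins on price here
```

In this model, what matters for a provider is not its individual scaling ability but whether it implements the broker's interface cleanly, which is exactly the "brokerage compatibility over specific scaling ability" point in the text.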
This is the end of Episode #3. The next episode will come shortly.
Thank you for reading…Stay tuned!