The Journey to a Hybrid Software Defined Storage Infrastructure S01E03

This is the third episode of “The Journey to a Hybrid Software Defined Storage Infrastructure”. It is an IBM TEC study by Angelo Bernasconi, PierLuigi Buratti, Luca Polichetti, Matteo Mascolo and Francesco Perillo.

More episodes will follow, and there may be a S02 next year as well.

To read the previous episodes, check here:

https://ilovemystorage.wordpress.com/2016/11/23/the-journey-to-a-hybrid-software-defined-storage-infrastructure-s01e01/

and

https://ilovemystorage.wordpress.com/2016/11/29/the-journey-to-a-hybrid-software-defined-storage-infrastructure-s01e02/

 Enjoy your reading!

7.    Next Steps
Customers and vendors who understand the new business needs and trends will be called on to answer the following questions:
7.1    How to balance between current infrastructure and new infrastructure?
•    What kind of cloud storage architecture should they target?
•    How does the organization manage the transition to its target state from legacy architecture to cloud-based services-oriented architecture?
•    What changes should be made to development and operating models?
•    What capability and cultural shifts does the organization need?

There have been a lot of questions about Software Defined Storage hybrid cloud lately, particularly about how to transition existing solutions onto the cloud platform.
How does a company strike the balance between the applications it has already built and the new cognitive and cloud products that can propel it forward?

They can move to new capabilities while realizing that not everything will, or needs to, change.
They can keep what is working, but ensure it integrates with the new technology.
When deciding where to put investment dollars, they must think about existing solutions and new infrastructure as one portfolio, which makes their investments easier to balance.

7.2    How does the organization manage the transition to its target state from legacy architecture to cloud-based, services-oriented architecture?
How long should the transition take, and what are the key steps? Should the company wait until cloud and on-premises products achieve parity?

This approach also forces them to think deeply about which types of product functionality deliver the sought-after core customer experiences, and what they have to emphasize to get that functionality right.
By putting a workable MVP with the most important features in users’ hands as quickly as possible, the team is able to both gather crucial initial customer feedback and rapidly improve its cloud-based development skills.
7.3    What kind of cloud storage architecture should they target?
Should customers use public infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) solutions or choose the private cloud?
Today, the public and private cloud markets are no longer considered emerging.
From the storage system perspective, both public and private cloud represent significant parts of the market but impact the enterprise storage systems market in different ways.
Private cloud deployments can be implemented on-premises or off-premises and are largely tuned toward using external storage systems in the back end. The public cloud market is more diverse, covering a broad range of services delivered by a broad range of service providers. Some service providers continue to build their storage infrastructures on traditional arrays, while others move toward software-defined deployments, which leverage commodity x86 servers for the underlying hardware.

The largest service providers largely source their servers outside the traditional OEM network, going to the most economical products delivered by ODMs. As this market continues to evolve, we will continue to see moves from traditional system OEMs aimed at penetrating it.

7.3.1    On Premise
On premises works when the solution has sufficient internal scale to achieve a total cost of ownership comparable to public options. That typically means it employs several storage subsystems or virtualized storage subsystems. It is also the right choice if at least one of the following five considerations is critical for the specific system or application and therefore precludes the use of the public cloud:
•    data security,
•    performance issues,
•    control in the event of an outage,
•    technology lock-in.
A final factor involves regulatory requirements that might restrict efforts to store customer data outside of set geographic boundaries or prevent the storage of data in multi-tenant environments.
7.3.2    Off Premise
Customers should consider an off-premises solution if the project lacks sufficient scale (it will not involve hundreds of storage subsystems, for example) or if a high degree of uncertainty exists regarding likely demand. An off-premises solution is a more capital-efficient approach, since building a private cloud requires significant resources that the company could probably invest more effectively in its mainstream business.
Another reason to go off premises could be that the system or application is latency tolerant: experience shows that latency levels on public clouds can vary by as much as 20 times depending on the time of day. It also makes sense if there are no specific regulatory requirements, beyond performance needs, that applications store the data in a particular place.
Finally: cost. Due to the nature of the off-premises solution, TCO is generally estimated to be lower than with a traditional on-premises solution (this needs to be evaluated case by case). Customers for whom the cloud is acceptable could see significant savings by moving applications to a public cloud solution.
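To make the trade-off concrete, here is a minimal sketch of the two cost models; every figure in it is a placeholder assumption made up for illustration, not a number taken from this study or from any vendor price list.

```python
# Minimal TCO comparison sketch -- every figure below is a placeholder
# assumption for illustration, not a value taken from this study.

def on_prem_tco(capacity_tb, years, capex_per_tb=250.0, opex_per_tb_year=60.0):
    """Rough on-premises TCO: up-front hardware plus yearly operations."""
    return capacity_tb * capex_per_tb + capacity_tb * opex_per_tb_year * years

def public_cloud_tco(capacity_tb, years, price_per_gb_month=0.02):
    """Rough public cloud TCO: pay-per-use capacity, no up-front capex."""
    return capacity_tb * 1024 * price_per_gb_month * 12 * years

if __name__ == "__main__":
    for tb in (50, 500, 5000):
        print(tb, "TB over 5 years:",
              "on-prem", round(on_prem_tco(tb, 5)),
              "cloud", round(public_cloud_tco(tb, 5)))
```

The point of the sketch is only that the break-even depends heavily on scale and duration, which is exactly why the evaluation has to be repeated case by case.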
7.3.3    Hybrid
Hybrid is probably the best solution: even when companies decide to go on premises for their most critical applications, many decide to go off premises for certain more basic use cases (dev/test workloads, additional temporary capacity, or DR solutions without specific SLAs in terms of performance, RPO and RTO, for example).
7.4    What changes should be made to development and operating models?
•    Should customer methods be changed?
•    How could this shift affect Storage Software release cycles and compatibility?
•    Will the company have to change the way it engages with customers?

Customers that move to a storage cloud solution must change their development and operating model to:
•    Make modularity work
•    Achieve better technology rationalization
•    Adopt standards
•    Gain better scalability and upgrade flexibility

The new SDS hybrid environment will not take as long to release as in the past, thanks to the advantages supplied by the storage cloud facilities. Rationalization, together with modularity, scalability and flexibility, will make the new infrastructure easier to deploy and upgrade, without the time typically spent in a test phase before release into production.

7.5    What capability and cultural shifts does the organization need?
How should a company build the necessary talent and capabilities, what mindset and behavioral changes does it need, and how does it select the right development, IT and infrastructure approaches?

7.5.1    5-year forecast
Scaling will become more business-oriented (budgeting, business objects criteria in scaling metrics) instead of technical (Storage subsystems, CPU, active sessions).
The technology aspect will be represented by lightweight technologies like containers, microservices and so on.
7.5.2    10-year forecast
As workloads become increasingly portable through the adoption of containerization, cloud brokerage will gain popularity.
The ability of cloud providers to support scaling will not only depend on their own ability to scale but also on the ease of integration with cloud brokers.
Cloud services commoditization will require cloud providers to adapt their business models and service granularity to seamlessly integrate with cloud brokers. Application scaling and resilience will be based on a cross-cloud provider model.
Brokerage compatibility could become more important than a specific scaling ability.
In support of the above capabilities, it is likely that an open source cloud management platform like OpenStack will continue to mature and play a key role in providing compatible management APIs across providers.
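As a small illustration of what a compatible management API across providers could look like, the sketch below uses the openstacksdk Python library to provision the same block storage volume against two clouds; the cloud names are assumed entries in a local clouds.yaml, not real providers.

```python
# Sketch: the same OpenStack management API used against two different
# providers. The cloud names below are assumed entries in clouds.yaml,
# not real endpoints.
import openstack

def provision_volume(cloud_name, size_gb, name):
    """Create a block storage volume on the given cloud and return its ID."""
    conn = openstack.connect(cloud=cloud_name)
    volume = conn.block_storage.create_volume(size=size_gb, name=name)
    return volume.id

if __name__ == "__main__":
    for cloud in ("private-cloud", "public-provider"):
        vol_id = provision_volume(cloud, size_gb=100, name="dr-staging-vol")
        print(cloud, "->", vol_id)
```

In a cross-cloud brokerage scenario, the broker would reuse the same call shape regardless of which provider actually serves the request.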

 

This is the end of Episode #3. The next episode will come shortly.

Thank you for reading…Stay tuned!

The Journey to a Hybrid Software Defined Storage Infrastructure S01E02

This is the second episode of “The Journey to a Hybrid Software Defined Storage Infrastructure”. It is an IBM TEC study by Angelo Bernasconi, PierLuigi Buratti, Luca Polichetti, Matteo Mascolo and Francesco Perillo.

More episodes will follow, and there may be a S02 next year as well.

To read the previous episode, check here: https://ilovemystorage.wordpress.com/2016/11/23/the-journey-to-a-hybrid-software-defined-storage-infrastructure-s01e01/

Enjoy your reading!

4.       Analyst predictions

From both Analyst Predictions and Client Surveys, there is a movement to Hybrid Cloud:

Source: IDC 2016 Predictions

Service Providers will see two significant Opportunities here:

  • Service Providers are becoming a key part of Enterprise IT
  • Clients (existing and new) are looking to extend their IT with Cloud services, especially DRaaS & Backup/Restore

To be specific, analysts see strong demand for future cloud-based storage service investments in disaster recovery, collaboration and more, as shown below:

Source: IDC 2016 Predictions

  5.     The real life

Analyst forecasts try to predict what will happen in the short, medium or long term; in the meantime, customers realize that traditional infrastructure will no longer be able to react correctly and quickly to new business demands.

The only solution is to move towards a new IT infrastructure that will match the specific characteristics of:

  • Agility
  • Efficiency
  • Velocity and Simplicity
  • Cognitive

The new infrastructure concept will easily match the above characteristics if deployed in the cloud.

It is important to highlight that creating this new infrastructure does not mean that the traditional infrastructure should no longer exist. There will be a coexistence between the traditional storage infrastructure and the new cloud storage environment. This coexistence will probably never end, because not every workload will match the storage cloud requirements and not every workload will be in a position to benefit from the new infrastructure. That is the main reason why we talk about a Software Defined Hybrid Storage infrastructure.

Investment and new infrastructure growth will mostly go in the cloud direction, depending on the business requirements.

The infrastructure can be deployed in a private, public or hybrid cloud.

To match the above characteristics, the new IT storage infrastructure will use Software Defined Storage technologies, creating a new Software Defined Hybrid Storage infrastructure.

6   Software Defined Storage infrastructure

The information in this chapter refers to a 2013 TEC study about the Software Defined Environment and aims to give you an overview of what a Software Defined Storage Infrastructure is.

6.1    Challenges of traditional storage

Traditional storage presented many challenges:

  • Constrained business agility
  • Time that is required to deploy new or upgraded business function
  • Downtime that is required for data migration and technology refresh
  • Unplanned storage capacity acquisitions
  • Staffing limitations
  • Suboptimal utilization of IT resources
  • Difficulty predicting future capacity and service level needs
  • Peaks and valleys in resource requirements
  • Over-provisioning and under-provisioning of IT resources
  • Extensive capacity planning effort needed to plan for varying future demand
  • Organizational constraints
  • Project-oriented infrastructure funding
  • Constrained operational budgets
  • Difficulty implementing resource sharing
  • No chargeback or show-back mechanism as incentive for IT resource conservation
  • IT resource management
  • Rapid capacity growth
  • Cost control
  • Service-level monitoring and support (performance, availability, capacity, security, retention, and so on)
  • Architectural open standardization

6.2       Needed functionality

The new smart data center needs the following storage functionalities:

  • Dynamic scaling/provisioning (elasticity)
    • No longer confined to a single storage box or subsystem, but able to scale and provision space in a click.
  • Faster deployment of storage resources
    • With a single reference storage architecture, storage deployment can be faster
  • Reduced cost of managing storage
    • With a single reference storage architecture, it is possible to reduce TCO by leveraging people’s skills
  • Greener data centers
    • Consolidation based on storage virtualization (the base of SDS) is a key factor for space utilization and optimization, contributing to building a green data center.
  • Multi-user file sharing
    • Makes data available to different end users or platforms
  • Self-service user portal
    • Makes end users aware of their storage provisioning. The process can be easily monitored and is faster (see the sketch after this list).
  • Integrated storage and service management
    • Improved efficiency of data management
  • Faster time to market
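As a rough idea of what the self-service portal mentioned above could expose, here is a minimal sketch of a provisioning endpoint; the route, fields and in-memory request queue are illustrative assumptions, not part of any specific product.

```python
# Hypothetical self-service provisioning endpoint -- a sketch only.
# The route, fields and the in-memory queue are assumptions for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)
REQUESTS = []  # in a real portal this would drive the storage backend

@app.route("/volumes", methods=["POST"])
def request_volume():
    payload = request.get_json(force=True)
    req = {
        "id": len(REQUESTS) + 1,
        "size_gb": int(payload.get("size_gb", 0)),
        "tier": payload.get("tier", "standard"),
        "status": "pending",
    }
    REQUESTS.append(req)
    return jsonify(req), 202  # accepted; provisioning happens asynchronously

@app.route("/volumes", methods=["GET"])
def list_volumes():
    return jsonify(REQUESTS)

if __name__ == "__main__":
    app.run(port=8080)
```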


6.3      Characteristics of Software Defined Storage

Software Defined Storage is characterized by several key architectural elements and capabilities that differentiate it from traditional infrastructure.

  • Commodity Hardware
    • All the intelligence in software-defined storage (SDS) is in the software layer
  • Scale-Out Architecture
    • Hardware in SDS needs to enable flexible and elastic configuration of storage resources through software by using a building-block approach to storage to dynamically add and remove resources.
  • Resource Pooling
    • The available storage resources are pooled into a unified logical entity that can be managed centrally
  • Abstraction
    • Physical storage resources are virtualized and presented to the control plane, which can then be configured and delivered as tiered storage services.
  • Automation
    • The storage layer provides extensive automation that enables it to deliver one-click, policy-based provisioning of storage. The system automatically configures and delivers storage as needed on the fly.
  • Programmability
    • The real power of Software Defined Storage lies in the ability to integrate itself with other layers of the infrastructure to build end-to-end application-focused automation.
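To give a feel for the one-click, policy-based provisioning and programmability described above, here is a minimal sketch in which a request is resolved against simple policies; the policy names and settings are assumptions made up for the example, not taken from any product.

```python
# Sketch of one-click, policy-based provisioning. The policies and their
# settings are illustrative assumptions, not from a specific product.

POLICIES = {
    "gold":   {"media": "flash",    "replicas": 3, "snapshot_hours": 1},
    "silver": {"media": "hybrid",   "replicas": 2, "snapshot_hours": 6},
    "bronze": {"media": "nearline", "replicas": 1, "snapshot_hours": 24},
}

def provision(app_name, size_gb, policy="silver"):
    """Resolve a policy into a concrete storage configuration."""
    spec = POLICIES[policy]
    return {
        "volume": f"{app_name}-{policy}",
        "size_gb": size_gb,
        **spec,
    }

if __name__ == "__main__":
    print(provision("crm-db", 500, policy="gold"))
```

The consumer asks for a policy, not for disks; everything below that line is decided by the software layer.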

6.4      Common Fallacies of Software Defined Storage

Following are some common fallacies that customers and providers need to be wary of.

  • “You can’t be software-defined storage unless you sell storage as just software”
    • Some storage vendors that sell software-only solutions have tried to argue that “software only” is the same as “software-defined.” There is a big difference between storage software and software-defined storage. The former is a technology delivery model, while the latter is an architecture for how storage is deployed, provisioned and managed. All storage systems require hardware, whether the installation of software happens in the field or before the product is shipped.
  • A Software Defined Storage system must run the storage controller in a virtual environment
    • Some storage vendors run their storage controllers in virtual machines. This trend developed independently from Software Defined Storage, and offers interesting possibilities such as virtual controller redundancy and the ability to dynamically convert a server with disks into a virtual storage appliance. But running the storage controller in a VM is by no means a requirement for software-defined storage; it’s simply a delivery method for software.

6.5      Software Defined Storage Hybrid Infrastructure new paradigm

New requirements are surfacing, and a new paradigm must be satisfied to deliver a Software Defined Storage Hybrid Infrastructure.

6.5.1   Agile

Gone are the days when IT’s primary focus was to “keep the lights on” and save money. As technology plays an increasingly important role in all aspects of business, IT is now in the driver’s seat to help the business differentiate with new products, services, and routes to market.

IT agility for the modern datacenter has a direct impact on business agility.

A few major motivations to move to hybrid cloud, in terms of infrastructure “agility”:

  • It turns agile development into a truly parallel activity (unlimited testing/staging environments)
  • It enhances continuous integration and delivery (+ it eases code branching and merging)
  • It encourages innovation and experimentation
  • It lowers the impact of outages and upgrades and provides disaster preparedness
  • It enables virtually unlimited scalability

6.5.2   Efficient

Cost-efficient. A hybrid cloud storage infrastructure, which combines cost-effective but inflexible private resources with flexible but premium-priced public cloud services, allows organizations to operate cost-efficiently under demand-volume uncertainty.

If the IT department is not able to keep up, business owners will resort to other unofficial channels (a.k.a. “shadow IT”) to get their applications set up quickly to drive revenue and ensure time-to-market goals are achieved. If the business uses other non-sanctioned IT alternatives, then IT loses control of the user experience but is still responsible for ongoing support and security. That’s not good at all.

A few major motivations to move to hybrid cloud, in terms of infrastructure “efficiency”, are:

  • It lowers the impact of outages and upgrades and provides disaster preparedness
  • It allows HA and DR solutions at lower costs
  • It scales up (and down when needed) to continuously adapt to business needs (cost-effectiveness)
  • The infrastructure is always up to date (firmware and software upgrades)
  • Dynamic resource allocation and scheduling in line with various requirements

6.5.3   Quick and Simple

This statement is true in general, but it depends.

A few major motivations to move to hybrid cloud, in terms of infrastructure “easiness”, are:

  • Ease of management and maintenance of the general infrastructure
  • No need for deep technical knowledge to provision and implement new HW/SW, only cloud skills that allow integration with the traditional IT environment
  • Takes advantage of the technical knowledge of the cloud providers to get an always up-to-date, optimized and fully efficient infrastructure
  • Enables deeper insights on data and workloads (when needed)

Considering the cloud for new applications or business processes as business needs evolve can significantly reduce time to market when rolling out new software or processes.

Take the case of a new Customer Relationship Management (CRM) system, for example, where the typical in house CRM application deployment could require 4-6 weeks in user requirements analysis, 4-5 weeks in vendor selection, and another 12-18 months in customization, development and implementation.

By comparison, a cloud based solution can have an organization operational in a little over two months.

It allows companies to focus on business logic, rather than on the “how-to”.

6.5.4   Cognitive

Finally, “cognitive” is probably the most important and trendy characteristic. Today’s storage needs to be “cognitive”, hence:

  • Reacting to major IT issues is not enough.
  • Keeping mission-critical applications available constantly requires the ability to anticipate potential problems and resolve them quickly before major issues occur.
  • Self-optimizing based on workload awareness, performance analysis and data content consciousness.

 

Cognitive capabilities in hybrid cloud environments go in this direction.

6.5.4.1    How Storage can be cognitive

The idea is based on a metric called data value, which is analogous to determining the value of a piece of art. The higher the demand and the rarer the piece, the higher its value, and the tighter the security it requires.

For example, if 1,000 employees are accessing the same files every day, the value of that data set should be very high, just like a priceless Van Gogh. A cognitive storage system would learn this and store those files on fast media like flash. In addition, the system would automatically back up these files multiple times. Lastly, the files may warrant extra security so they cannot be accessed without authorization.

Of course, there is also the opposite. A data set, which is rarely accessed, like PDF files of 20-year-old tax documents, should be stored on cold media like tape and only available upon request. A cognitive storage system would also know that tax records need to be kept for at least 7 years and that they can be deleted after that period.

In many situations, data value can also change over time and a cognitive storage system can also adapt.

One way to determine its value is to track the access patterns of the data or the frequency it is used. Individuals can also add metadata tags to the data to help train the system, depending on the context in which the data is used. For example, an astronomer may tag a data set coming from the Andromeda galaxy as highly important or less important.

As detailed in the paper published in the IEEE journal Computer, IBM scientists tested cognitive storage using 1.77 million files across seven users, using a simple ranking of classes 1, 2 and 3 based on metadata including user ID, group ID, file size, file permissions, date and time of creation, file extension, and directories in the path. They then split the server data into data per user, as each user could define different classes of files they deem important.

The result was a data value prediction accuracy of nearly 100% for the smaller class set.
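The paper’s exact method is not reproduced here, but a minimal sketch of a metadata-based data value classifier could look like the following; the feature set and the tiny synthetic training sample are illustrative assumptions, not the IBM data set.

```python
# Sketch of a metadata-based "data value" classifier, in the spirit of the
# experiment described above. The features and the synthetic training rows
# are illustrative assumptions, not the IBM data set.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

# Each row is file metadata; the label is the user-assigned value class (1-3).
training_files = [
    ({"owner": "u1", "ext": ".csv", "size_kb": 120, "depth": 4}, 1),
    ({"owner": "u1", "ext": ".log", "size_kb": 900, "depth": 6}, 3),
    ({"owner": "u2", "ext": ".pdf", "size_kb": 40,  "depth": 2}, 2),
    ({"owner": "u2", "ext": ".csv", "size_kb": 300, "depth": 4}, 1),
]

vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform([meta for meta, _ in training_files])
y = [label for _, label in training_files]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

new_file = {"owner": "u1", "ext": ".csv", "size_kb": 200, "depth": 4}
predicted_class = model.predict(vectorizer.transform([new_file]))[0]
print("Predicted value class:", predicted_class)
```

The predicted class could then drive placement decisions such as flash versus tape, number of backup copies, or retention period.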

This is the end of Episode #2. The next episode will come shortly.

Thank you for reading…Stay tuned!

The Journey to a Hybrid Software Defined Storage Infrastructure S01E01

This is the first episode of “The Journey to a Hybrid Software Defined Storage Infrastructure”. It is an IBM TEC study by Angelo Bernasconi, PierLuigi Buratti, Luca Polichetti, Matteo Mascolo and Francesco Perillo.

More episodes will follow, and there may be a S02 next year as well.

Happy reading.

1.    Scope of this Study

Storage environments and storage infrastructure have seen huge changes in the last 15 years, evolving through:

  • DASD (direct access storage devices)
  • SAN, Storage Area Network
  • Storage Virtualization
  • Cloud Storage

With the SAN, starting from 2001, we saw the introduction of Disaster Recovery (DR) solutions based on storage subsystem functionalities and FC connectivity.

With storage virtualization, starting from 2003, we saw the storage copy functions move from the storage subsystem to the SAN first, and the leverage of important new functionalities like Real-time Compression (RtC), flash storage and data deduplication later. Storage virtualization set the base on which to build the Software Defined Storage infrastructure and, next, cloud storage.

With cloud storage, thanks to storage virtualization, we are still leveraging all the functionalities exploited in past years; in addition, iSCSI connectivity will see a sort of rebirth thanks to the new 20Gb and 40Gb speeds with RDMA. Object storage is also taking its place in the storage cloud arena.

This study aims to show how it is possible to leverage Software Defined Storage technologies in the design of general storage solutions, with an emphasis on hybrid storage infrastructure on premises, off premises, and for DR solutions, starting from actual customer business needs, going through the benefits supplied by such an infrastructure, down to the technology components that can be applied.

The consumers of this study would be sales, pre-sales and architects, all of whom need a picture of today’s IT dilemma in satisfying business needs so they can be proactive in their approaches to our clients.

They will also understand the IBM Storage strategy and vision.

2.        The Business problem (or the Business opportunity)

The global cloud storage market was expected to grow from USD 18.87 Billion in 2015 to USD 65.41 Billion by 2020, at a Compound Annual Growth Rate (CAGR) of 28.2% during the forecast period.
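That growth rate is easy to sanity-check from the two end points:

```python
# Sanity check of the stated CAGR: (end / start) ** (1 / years) - 1
start, end, years = 18.87, 65.41, 5
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # ~28.2%
```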

The forecast is usually focused on key adoption trends, future opportunities, and business cases in this market.

Analysts expect that increased adoption of cloud storage solutions across the healthcare and life sciences industry will drive this market towards a high growth rate.

Factors such as the increasing adoption of cloud storage solutions by large enterprises are driving the cloud storage market globally.

Demand in this market is also being driven by big data and the increasing adoption of cloud storage gateways.

Niche players provide most of the cloud storage solutions, and many other companies have emerged and are expected to evolve in the coming years.

In 2015, North America was estimated (data to be confirmed) to be the top contributor to the cloud storage market, due to increasing technological acceptance and high awareness of emerging data storage concerns within organizations. However, APAC and some countries in MEA are expected to show tremendous growth in this market.

The cloud storage market is broadly segmented by:

  • solutions and services:
    • by solution: primary storage solution, backup storage solution, disaster recovery solution, cloud storage gateway solution, data movement and access solution;
    • by service: consulting services, system and networking integration, support training and education;
  • by deployment model: public, private, and hybrid;
  • by organizational size into SMBs and large enterprises;
  • by vertical:
    • BFSI,
    • manufacturing,
    • retail and consumer goods,
    • telecommunication and IT,
    • media and entertainment,
    • government, healthcare and life sciences,
    • energy and utilities,
    • research and education,
    • and others.

Many variables, including political, economic, social and technological factors, can influence the market, depending on the country or region considered.

 

3.        Data explosion and new workload forecast

Today’s IT world, and specifically the storage world, has been facing a tremendous data explosion for some years, as shown in the following picture from a 2014 forecast.

This data explosion mainly concerns unstructured data.

The following picture shows the growing ratio between unstructured and structured data in the coming years.

Because of this explosion, customers are facing some new challenges, as shown in the following picture.

Of course, our customer challenges become our challenges.

More than this, service providers are called to face additional challenges on top of their customers’ challenges, as shown in the following picture:

Hence, service providers and enterprise IT customers need a common approach that leads to:

  • Reduce Cost
  • Secure Data
  • Manage Oceans of Data
  • Deploy with Agility and Flexibility
  • Make Data Highly Available, especially in case of Disaster

This is the end of Episode #1. The next episode will come shortly.

Thank you for reading.

DR with Hybrid SDS infrastructure

Good piece. Thank you, Pier!

Software Defined Storage (SDS) provides the possibility to spin up, on standard components, a look-alike storage technology to enable data mirroring and to manage the entire data replication process.

Source: DR with Hybrid SDS infrastructure | Pierluigi Buratti | Pulse | LinkedIn

Convert the WWPN when upgrading SVC nodes

The Port Configurator helps configure the Fibre Channel port mapping on SVC nodes when performing hardware upgrades.

This application helps you with the Fibre Channel port mapping when migrating between different types of SVC nodes, to make sure World Wide Port Names and the port masking stay the same.

Source: Convert the WWPN when upgrading SVC nodes

Thanks to

Roger Eriksson

Senior Consultant Professional at IBM Nordic Systems Lab Services

Next Gen V7000 and SVC – Spectrum Virtualize 7.7.1 – And All Flash

New all-flash offerings from IBM. Next generation SVC nodes (SV1) (and a corresponding FlashSystem V9000) and Storwize Gen2+ announcements today – including a raft of Statements of Direction – that’s the whole Spectrum Virtualize family we have refreshed this year! Read more here on my blog : https://lnkd.in/eQWNkFF