by Woody Hutsell, AppICU. NVMe has taken the flash array market by storm, if you consider the number of storage vendors lining up to deploy NVMe SSDs inside their all-flash arrays. NVMe inside…
New IP storage virtualization solution for Ethernet-based infrastructure
Thanks to Pablo Russo. Enjoy your reading.
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance Guidelines is out.
Have a good read.
Thank you, Barry:
Here’s another topic I often get asked about. Things used to be quite simple, and I covered this for many years in my Configuring for Optimal Performance series of technical university presentations (also here on the blog: parts 1, 2, 3). The basics are still the same when configuring RAID arrays, but now, with DRAID, people are asking if this still applies. In general the same rules and concepts apply, but you may have to adjust your thinking, particularly when you are configuring large DRAID arrays behind an SVC system.
Enjoy your reading.
This is episode number 7, the last one of the Journey named “The Journey to a Hybrid Software Defined Storage Infrastructure”.
It is an IBM TEC Study by Angelo Bernasconi, PierLuigi Buratti, Luca Polichetti, Matteo Mascolo, and Francesco Perillo.
To read the previous episodes, check here:
Enjoy your reading!
This is the final episode of this Study, so we should come to some conclusions. But the more we, and you, read the previous episodes, the more questions arise, and indeed they did for us!
The current level of technology does not have all the answers, nor will any single solution fit every customer’s needs or use case.
We will continue to watch which technology enhancements become available in the future, and how they will cover the missing pieces.
In that sense, we do not consider this study complete, but rather the beginning of a new journey into a new Storage Technology era.
Here are some open questions about the current technologies:
- What is the state of readiness of the new functionalities, and what is the related market perception?
- What will be the next steps for this technology?
- When will those functions be effectively developed and delivered?
- When can the required functions really be adopted in production?
- What is the risk of moving data to a not-well-known storage architecture?
- What are the potential risks of moving Storage to the cloud?
- Is data within the cloud more exposed to theft?
- How do we meet data regulations? (The banking and health industries are much more exposed.)
- Storage performance constraints on the Cloud:
- How does the Cloud provider’s infrastructure impact Storage performance?
- Compute scalability
- iSCSI and/or Fibre Channel and/or FCoE
- LAN bandwidth and latency
- Security: LAN segregation and overlays
- How does the connectivity to the Cloud impact Storage performance?
- Storage economics on the Cloud:
- What is the cost difference among traditional storage, SDS on-premises, and SDS on the cloud?
- Does the Cloud assure lower cost and lower TCO?
At the end of these episodes, such questions risk sounding like slogans or refrains whose meaning and scope are quickly forgotten.
All of us who work in Information Technology should, deep down, examine our conscience.
How many times have we taken too strategic, or even evangelical, an approach, and how many times have we put ourselves on the side of the business?
How can we help our customers overcome their resistance and concerns about change?
And again, are we really able to offer them the path and the solutions that are best, or most suitable, for their needs?
In this complex scenario, it is obvious that the real protagonist for companies is the Data.
Data growth should be read and interpreted as something that gives value to the business, something that must be managed and used when and where needed.
Here are some questions received from readers, with some answers based on our point of view:
- What is Storage’s role today in the success of this big change?
The role of Storage may sound marginal compared to trendier topics such as the Internet of Things, Cognitive, Big Data, Analytics and Hybrid Cloud, but it is crucial. According to analysts, by 2020 we will have 42ZB of data to manage. All that data will still have to end up somewhere, won’t it?
The truth is that Storage can be both a commodity and a crucial factor for business efficiency. Traditional Storage, the kind used for traditional workloads, is set to change from its rotating-disk form to All Flash. This is currently happening, first with Solid State Disk (SSD), later Flash, and now with Read Intensive SSD.
By 2020, more than 74% of disk space will be on SSD/Flash technology.
But technology does not stop, and we are bound to see an evolution of storage for new workloads, which will come increasingly from big data, analytics, mobile and Internet of Things workloads that best fit cloud solutions and/or hyper-converged environments. This will most likely be driven by the evolution of new technologies like NVMe (Non-Volatile Memory Express), which will exploit IP connectivity and FC together with RDMA (remote direct memory access) architectures.
- What environment will be required?
Definitely a virtualized environment: agile, efficient, hybrid, and increasingly driven by software, in line with the Software Defined Infrastructure. Then you can take advantage of all the opportunities of Cognitive Storage: smart storage capable of understanding the value of data and placing it on the media best suited to its probable utilization. This will happen dynamically, depending on the needs and interests of the business. Cognitive Storage will enable highly efficient data management and evident cost savings, because it will avoid using advanced solutions to manage data that is little used, and perhaps useless.
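The idea of placing data on the most suitable media based on its probable utilization can be sketched in a few lines. This is a minimal, hypothetical illustration of access-frequency-based tiering; the tier names and thresholds are our own assumptions, not any IBM product’s API or policy.

```python
# Hypothetical sketch of frequency-based data placement.
# Tier names and thresholds below are illustrative assumptions only.

def choose_tier(accesses_per_day: float) -> str:
    """Map an observed access rate to a storage tier."""
    if accesses_per_day >= 100:   # hot data: keep on flash
        return "flash"
    if accesses_per_day >= 1:     # warm data: nearline disk
        return "disk"
    return "archive"              # cold, rarely used data

def place(datasets: dict) -> dict:
    """datasets maps a dataset name to its accesses per day."""
    return {name: choose_tier(rate) for name, rate in datasets.items()}

if __name__ == "__main__":
    demo = {"orders_db": 500.0, "monthly_report": 3.0, "old_logs": 0.01}
    print(place(demo))
```

In a real cognitive storage system the placement decision would of course weigh far more signals than a raw access count (data value, SLAs, probable future use), and would move data dynamically as those signals change.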
- Which workloads will be moved to the cloud, and which will be kept on a traditional infrastructure?
It is possible that traditional workloads will remain “in the house” for a while, at least until the application layer changes enough to talk directly to the Storage layer for self-provisioning and de-commissioning, orchestration and Disaster Recovery.
It will be different for new applications, which will be born to be placed in the cloud and designed to take full advantage of new storage technologies from the beginning.
One of the drivers will be total cost. Bringing data into the cloud will not necessarily mean saving money; rather, it means investing in a different way to get something better.
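The total-cost comparison behind that driver is easy to make concrete. Below is a back-of-the-envelope sketch comparing an on-premises model (Capex up front plus yearly Opex) against a cloud model (pure Opex, per-TB monthly fee). All figures are illustrative assumptions, not vendor pricing.

```python
# Hypothetical TCO comparison: on-premises (Capex + yearly Opex)
# vs cloud (per-TB monthly fee). All numbers are made-up examples.

def onprem_tco(capex: float, yearly_opex: float, years: int) -> float:
    """Up-front purchase plus maintenance/support over the period."""
    return capex + yearly_opex * years

def cloud_tco(tb: float, usd_per_tb_month: float, years: int) -> float:
    """Pure consumption pricing: capacity x monthly rate x months."""
    return tb * usd_per_tb_month * 12 * years

if __name__ == "__main__":
    years = 5
    onprem = onprem_tco(capex=120_000, yearly_opex=15_000, years=years)
    cloud = cloud_tco(tb=100, usd_per_tb_month=25, years=years)
    print(f"on-prem: {onprem:,.0f} USD, cloud: {cloud:,.0f} USD over {years} years")
```

With these particular assumptions the cloud comes out cheaper over five years, but shifting the rate, capacity growth, or time horizon can easily reverse the result, which is exactly why the Capex/Opex choice depends on each customer’s budget model.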
- What solutions and choices will the customer need to make, given that needs can change quickly?
Definitely a mixed, hybrid solution that offers broad technological coverage, not a niche one. There is not, and will not be, a single solution for everyone; that is why IBM offers a portfolio of diverse storage solutions capable of covering all application, architectural and economic requirements. It also gives you the opportunity to consume them in different ways: as integrated solutions, as software, or as a cloud service, depending on business requirements and budget types (Capex and Opex). All of this, obviously, means great freedom of choice and flexibility for customers.
IBM Redbooks | Fabric Resiliency Best Practices is out.
Have a good read.