IBM on Tuesday said it is expanding its entire flash storage portfolio with the company's latest 3D TLC flash technology, which triples storage density compared with previous generations while providing higher performance and flash endurance.
IBM also introduced significant enhancements to its storage software portfolio, including updates that work better with hybrid clouds, provide consumption-based pricing, and take advantage of the new flash storage technology improvements.
The enhanced IBM storage portfolio was introduced Tuesday at IBM's Flash Storage Milan conference in Milan, Italy.
IBM is looking to take the lead in flash storage technology with the introduction of the company's 3D TLC flash technology, said Andy Walls, IBM Fellow and chief technology officer for IBM FlashSystem.
By increasing the size of individual flash memory cells over previous generations, IBM was able to store three bits per cell, which increases capacity by 50 percent, Walls said. IBM at the same time is stacking the cells in three dimensions as opposed to two to further increase flash storage density, he said.
The result is a tripling of storage capacity in the latest versions of the company's FlashSystem series of all-flash arrays, including the FlashSystem V9000 for mixed enterprise environments and workload consolidation, the FlashSystem A9000 targeted at cloud service providers and enterprises with data-intensive requirements, and the FlashSystem 900 for accelerating performance in targeted workloads.
The FlashSystem V9000 consists of the FlashSystem 900 combined with the IBM Spectrum Virtualize software, while the FlashSystem A9000 and A9000R combine the FlashSystem 900 with the IBM Spectrum Accelerate software.
IBM also features data reduction and compression technology that reduces latency for compressible data, rather than adding latency because of compression overhead, Walls said. The company is also updating the array's software to take advantage of high-performance NVMe-based flash storage devices, he said.
"We're all about performance," he said.
Eric Herzog, chief marketing officer and vice president of worldwide storage channels for IBM's storage division, said the enhancements would result in both capex and opex savings for customers.
"Instead of buying four of our arrays to get to 220 TBs with our current models, you buy one," Herzog said. "We don't charge you four times as much. We are managing one array, which is much less expensive than managing four arrays, cutting our operational expense."
Furthermore, IBM's in-line compression and de-duplication technologies do not impact service-level agreements, Herzog said. And performance is consistent whether the array is nearly empty or full. "That's not true of many of our competitors," he said. "When the array gets full, that Oracle database slows down. When the array gets full, Hadoop slows down."
The FlashSystem 900 now has a graphical user interface to replace the older hard-to-use command line interface, he said.
"In the year of modernizing older gear and transforming your hybrid and cognitive world, everything needs to be automated and simplistic," he said. "The FlashSystem 900 interface allows you to easily configure the system. No more command line interfaces. That's the old world. That's not what you need when you move forward into a hybrid cloud and cognitive world."
All the FlashSystem arrays are now NVMe-ready, Herzog said.
On the software side, IBM introduced IBM Spectrum Virtualize software for easy data migration and disaster recovery of data between on-premises environments and the IBM Public Cloud, Herzog said.
Spectrum Virtualize works with more than 400 non-IBM arrays in addition to IBM storage devices, he said. "[This provides] peace of mind with real-time disaster recovery replicating your data," he said. "And to save money, you could migrate some of your data to the public cloud infrastructure, keep some of it on-premises, and cut your cost by migrating to IBM Public Cloud, a great solution to cut your costs."
IBM is also introducing block storage support for containers as a way to manage cloud-native operations with stateful containers, Herzog said. This provides persistent storage for Docker container environments, and supports orchestration using Kubernetes, Docker Swarm, and IBM Cloud Private, which is IBM's private cloud platform based in IBM data centers, he said.
Herzog also introduced a beta version of Storage Insights Foundation, a new storage management offering based on cognitive and artificial intelligence technology aimed at providing better support to customers by delivering system health and capacity insights and suggestions for improving performance.
Storage Insights Foundation, which will go into IBM's early customer program during the fourth quarter of 2017, brings the cognitive capabilities and artificial intelligence that IBM has been using in other parts of its business to storage, he said.
"What does that help you do?" Herzog said. "It empowers your users and your storage administrators to have a better-functioning system. If they really do need to call IBM support, we can easily and quickly resolve problems. … With our IBM Storage Insights Foundation, the artificial intelligence can help you, or can help our support guys, get you up and going rapidly, in minutes not in days."
Storage Insights Foundation will also make suggestions about where to move storage to save money and increase utilization, Herzog said. That will help customers reclaim the average 70 percent of capacity that is currently being wasted industry-wide, he said.
"With our new cognitive technology, you'll be able to get better storage utilization," he said. "What does that mean to you? Save more money. Get better use of the money you spend. IBM is helping you cut your costs across the board for your cloud and cognitive environments."
Most IBM FlashSystem arrays will be available with a consumption pricing model, Herzog said. This includes the VersaStack converged infrastructure offering combining FlashSystem storage with Cisco's UCS servers and networking technology, he said.
"Across our portfolio of all-flash arrays, we're going to be offering this as an option," he said. "You can still buy in the traditional model, if you like. You can buy as part of a VersaStack, or you can buy as a utility model. And, in fact, our VersaStack solutions are even available as a utility model. IBM and Cisco have worked together to deliver a fully automated utility model where … if you don't use it as much, your cost will drop."
For customers who for regulatory or other reasons are unable to use hybrid or public clouds, IBM is introducing IBM Spectrum Access for IBM Cloud Private, Herzog said. IBM Cloud Private provides a secure private system that leverages public cloud technologies across IBM's data centers, he said.
"IBM Spectrum Access allows you to quickly spin up the storage on-premises and transparently move the data back and forth from on-premises to the private cloud," he said. "Moving data back and forth, and saving you time and money. It gives you instant access to that data whether it's in IBM Cloud Private or on-premises."
IBM's new flash storage technology and related software enhancements are a sign of how far flash storage technology has matured, said Mike Piltoff, senior vice president of strategic marketing at Champion Solutions Group, a Boca Raton, Fla.-based solution provider and long-time IBM channel partner.
"Flash storage has started to go mainstream," Piltoff told CRN. "Vendors are not just in a hurry to get flash on the market. They are interested in performance, and consistent performance."
The focus on consistent performance is a big win for both developers and end users, Piltoff said.
"Performance consistency is an awareness issue," Piltoff said. "Customers are often not aware they might get cache flooding and hotspots in their flash. They invested in flash, and got a boost in performance, but later found that applications can slow down because of hot spots. It's often hard to troubleshoot such issues because of the storage controllers."
The public cloud model is driving the move by IBM and others to bring some on-premises storage under an operating expense model, Piltoff said.
"To compete, the vendors are now offering their own solutions for on-premises consumption models," he said. "They have to compete to win."
However, Piltoff said, customers are often not ready for consumption-based on-premises storage.
"It will take time for them to understand the technology, and to understand their own storage growth patterns," he said. "This is a good opportunity for solution providers like Champion to help them understand their needs and future growth. Before, we focused on tuning the infrastructure. Now we are focused on the financial side. We have to become financial experts and understand the consumption models. We have done that."