The current generation of all-flash arrays offers enough performance to saturate the network connections between the arrays and application servers in the data center. In many scenarios, the key limiter to all-flash array performance is storage network bandwidth. Therefore, all-flash array vendors have been quick to adopt the latest advances in storage network connectivity.
The ratification of the NVMe/TCP standard in November 2018 officially opened the door for NVMe/TCP to find its way into corporate IT environments. Earlier this week I had the opportunity to listen in on a SNIA-hosted webinar that provided an update on NVMe/TCP’s latest developments and its implications for enterprise IT. Here are four key takeaways from that presentation and how these changes will impact corporate data center Ethernet network designs.
For many of us, commuting in rush hour with its traffic jams is an unpleasant fact of life. But I once had a job on the outer edge of a metropolitan area. I was westbound when most were eastbound. I often felt a little sorry for the mass of people stuck in traffic as I zoomed, with a smile on my face, in the opposite direction. Today there is a massive flow of workloads and their associated storage to the public cloud. But there are also a lot of companies moving workloads off the public cloud, and their reason is cloud economics.
DCIG is pleased to announce the availability of the DCIG 2016-17 Midmarket Enterprise Storage Array Buyer’s Guide as the first Buyer’s Guide Edition developed from this body of research. Other Buyer’s Guides based on this body of research will be published in the coming weeks and months, including the 2016-17 Midrange Unified Storage Array Buyer’s Guide and the 2016-17 High End Storage Array Buyer’s Guide.
In today’s enterprise data centers, when one thinks performance, one thinks flash. That’s great. But that thought process can lead organizations to believe that “all-flash arrays” are their only option for delivering high levels of performance to their applications. That thinking is now outdated. The latest server-based storage solution from Datrium illustrates how accelerating application performance just became insanely easy: click a button instead of upgrading hardware in the environment.
Ethernet adapters began migrating to LAN on motherboard solutions in the late 1990s. Yet this practice never took hold for other technologies like Fibre Channel. The Fibre Channel (FC) market even today, as Gen 6 (32Gb) is being introduced, is dominated by host bus adapters (HBAs). In this second installment in my interview with QLogic’s Vice President of Products, Marketing and Planning, Vikram Karvat, he explains why 32Gb FC HBAs are still installed separately in servers, as well as provides insight into what new features may be released in the Gen 7 FC protocol.
All-flash arrays, cloud computing, cloud storage, and converged and hyper-converged infrastructures may grab many of today’s headlines. But the decades-old Fibre Channel protocol remains a foundational technology in many data centers, holding steady in the U.S. and even gaining traction in countries such as China. In this first installment, QLogic’s Vice President of Products, Marketing and Planning, Vikram Karvat, provides some background as to why Fibre Channel (FC) remains relevant and how all-flash arrays are one of the forces driving the need for 32Gb FC.
DCIG is pleased to announce the availability of its 2016-17 FC SAN Utility Storage Array Buyer’s Guide and 2016-17 Utility SAN Storage Array Buyer’s Guide that each weigh more than 100 features and rank 62 arrays from thirteen (13) different storage providers. These Buyer’s Guide Editions are products of DCIG’s updated research methodology, in which DCIG creates specific Buyer’s Guide Editions based upon a larger, general body of research on a topic. As past Buyer’s Guides have done, they continue to rank products as Recommended, Excellent, Good and Basic, as well as offer the product information that organizations need to make informed buying decisions on FC SAN Utility and multiprotocol Utility SAN storage arrays.
DCIG is pleased to announce the availability of its 2016-17 iSCSI SAN Utility Storage Array Buyer’s Guide that weighs more than 100 features and ranks 67 arrays from fourteen (14) different storage providers. This Buyer’s Guide Edition reflects the first use of DCIG’s updated research methodology, in which DCIG creates specific Buyer’s Guide Editions based upon a larger, general body of research on a topic. As past Buyer’s Guides have done, it continues to rank products as Recommended, Excellent, Good and Basic, as well as offer the product information that organizations need to make informed buying decisions on iSCSI SAN utility storage arrays.
Almost any hybrid or all-flash storage array will accelerate performance for the applications it hosts. Yet many organizations need a storage array that scales beyond just accelerating the performance of a few hosts. They want a solution that both solves their immediate performance challenges and serves as a launch pad to using flash more broadly in their environment.
In the early 2000s I was a big believer in appliance- and/or array-based storage virtualization technology. To me, it seemed like the most logical choice to solve some of the most pressing problems confronting the deployment of storage networks in enterprise data centers, such as data migrations, storage optimization, and reducing storage networking’s overall management complexity. Yet here we find ourselves in 2015 and, while appliance- and array-based storage virtualization still exists, it never became the runaway success that many envisioned at the time. Here are my top three reasons why this technology went wrong and why it has yet to fully realize its promise. It did not and still does not sufficiently scale to meet enterprise requirements. The big appeal to me of storage virtualization appliances and/or array controllers was that they could aggregate all of an infrastructure’s storage arrays and their capacity into one giant pool of storage which could then…
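The pooling idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's implementation: the class and method names are invented, and a real virtualization appliance would stripe extents across arrays rather than place whole volumes on a single backend.

```python
# Hypothetical sketch of appliance-based storage virtualization:
# several backend arrays are aggregated into one virtual pool, and
# virtual volumes are carved out of whichever array has free capacity.

class BackendArray:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb


class VirtualPool:
    """Presents multiple backend arrays as one pool of capacity."""

    def __init__(self, arrays):
        self.arrays = arrays
        self.volumes = {}  # volume name -> (backend array, size in GB)

    def total_free_gb(self):
        return sum(a.free_gb() for a in self.arrays)

    def create_volume(self, name, size_gb):
        # Simplification: place the volume on the array with the most
        # free space; a real appliance would stripe across arrays.
        target = max(self.arrays, key=lambda a: a.free_gb())
        if target.free_gb() < size_gb:
            raise RuntimeError("pool exhausted")
        target.used_gb += size_gb
        self.volumes[name] = (target, size_gb)
        return target.name


pool = VirtualPool([BackendArray("array-a", 100), BackendArray("array-b", 50)])
print(pool.total_free_gb())            # 150 (one aggregate pool)
print(pool.create_volume("vol1", 80))  # array-a (most free space)
```

The point of the sketch is the appeal the post describes: hosts see one pool, and data placement (and hence migration) becomes the appliance's problem rather than the administrator's.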
DCIG is preparing to release the DCIG 2015-16 Enterprise Midrange Array Buyer’s Guide. The Buyer’s Guide will include data on 33 arrays or array series from 16 storage providers. The term “Enterprise” in the name Enterprise Midrange Array reflects a class of storage system that has emerged offering key enterprise-class features at prices suitable for mid-sized budgets. The DCIG 2015-16 Enterprise Midrange Array Buyer’s Guide will provide organizations with a valuable tool to cut time and cost from the product research and purchase process.
It has been said that everyone knows what “normal” is but that it is often easier to define “abnormal” than it is to define “normal.” To a certain degree that axiom also applies to defining “high end storage arrays.” Everyone just seems to automatically assume that a certain set of storage arrays are in the “high end” category but when push comes to shove, people can be hard-pressed to provide a working definition as to what constitutes a high end storage array in today’s crowded storage space.
The use of data reduction technologies such as compression and deduplication to reduce storage costs is nothing new. Tape drives have used compression for decades to increase backup data densities on tape, while many modern deduplicating backup appliances use compression and deduplication to also reduce backup data stores. Even a select number of existing HDD-based storage arrays use data compression and deduplication to minimize data stores for large amounts of file data stored in archives or on network-attached file servers.
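The two techniques named above can be sketched together in Python using only the standard library. This is an illustrative toy, not how any particular array implements data reduction: it deduplicates fixed-size chunks by content hash, then compresses each unique chunk before storing it, and all names are invented for the example.

```python
# Toy data reduction store: fixed-size chunks are deduplicated by
# SHA-256 content hash, and each unique chunk is zlib-compressed.
import hashlib
import zlib

CHUNK_SIZE = 4096


class ReducedStore:
    def __init__(self):
        self.chunks = {}   # content hash -> compressed chunk bytes
        self.logical = 0   # bytes written before any reduction

    def write(self, data):
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            self.logical += len(chunk)
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:              # deduplication
                self.chunks[digest] = zlib.compress(chunk)  # compression

    def physical(self):
        return sum(len(c) for c in self.chunks.values())


store = ReducedStore()
store.write(b"A" * CHUNK_SIZE * 100)   # 100 identical, compressible chunks
print(store.logical, store.physical())
```

For this deliberately redundant input, 100 logical chunks collapse to a single stored chunk, which then shrinks further under compression, which is the same combined effect backup appliances rely on, at much greater scale and sophistication.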
One of the more difficult tasks for anyone deeply involved in technology is seeing the forest for the trees. Those responsible for supporting the technical components that make up today’s enterprise infrastructures can find it difficult to step back and recommend which technologies are the right choices for their organization going forward. While there is no one right answer that applies to all organizations, five (5) technologies, some new and some older ones getting a refresh, merit prioritization by organizations in the coming months and years.
Establishing a standard as to how an organization uses proprietary and open source code is at best difficult for most organizations. But iXsystems has essentially bet its future on the continued use of open source code in its product line. This makes it imperative that it get this decision right in order to continue fostering support for its products in the open source community. This fifth entry in my interview series with iXsystems’ CTO Jordan Hubbard discusses his thoughts on iXsystems’ responsibility toward the open source community for its contributions and how it draws the line between proprietary and open source code.
In this second blog entry from our interview with Nimbus Data CEO and Founder Thomas Isakovich, we discuss microsecond latencies and how the recently announced Gemini X-series scale-out all-flash platform performs against the competition.
Providing high levels of capacity is only relevant if a storage array can also deliver high levels of performance. The number of CPU cores, the amount of DRAM and the size of the flash cache are the key hardware components that most heavily influence the performance of a hybrid storage array. In this second blog entry in my series examining the Oracle ZS3 Series storage arrays, I examine how its performance compares to that of other leading enterprise storage arrays using published performance benchmarks.
Recognized as an innovator in storage system technology, Thomas Isakovich sat down with DCIG to discuss the development, capabilities, and innovation in Nimbus Data’s latest release: the Gemini X. In this first blog entry, he guides us through the development of the X-series, and where he sees it fitting into the current market.