Rethinking Your Data Deduplication Strategy in the Software-defined Data Center

Data Centers Going Software-defined

There is little dispute that tomorrow’s data center will become software-defined, for reasons no one entirely anticipated even as recently as a few years ago. While companies have long understood the benefits of virtualizing their data center infrastructure, the complexities and costs of integrating and managing data center hardware far exceeded the benefits that virtualization delivered. Now, thanks to technologies such as the Internet of Things (IoT), machine intelligence, and analytics, among others, companies may pursue software-defined strategies more aggressively.

The introduction of technologies that can monitor, report on, analyze, and increasingly manage and optimize data center hardware frees organizations from performing housekeeping tasks such as:

  • Verifying hardware firmware compatibility with applications and operating systems
  • Troubleshooting hot spots in the infrastructure
  • Identifying and repairing failing hardware components

Automating these tasks does more than change how organizations manage their data center infrastructures. It reshapes how they can think about their entire IT strategy. Rather than adapting their business to the limitations of the hardware they choose, they can now pursue business objectives and expect their IT hardware infrastructure to support those initiatives.

This change in perspective has already led to the availability of software-defined compute, networking, and storage solutions. Further, software-defined versions of databases, firewalls, and other applications that organizations commonly deploy have also emerged. These virtual appliances enable companies to quickly deploy entire application stacks. While it is premature to say that organizations can immediately virtualize their entire data center infrastructure, the foundation exists for them to do so.

Software-defined Storage Deduplication Targets

As they do so, data protection software, like any other application, needs to be part of this software-defined conversation. In this regard, backup software finds itself well-positioned to capitalize on this trend. It can be installed on either physical or virtual machines (VMs) and already ships from many providers as a virtual appliance. But storage software that functions primarily as a deduplication storage target is already being boxed out of that broader software-defined conversation.

Software-defined storage (SDS) deduplication targets do exist, and their storage capabilities have increased significantly. By the end of 2018, a few of these software-defined virtual appliances scaled to support 100TB or more of capacity. But organizations must exercise caution when looking to position these available solutions as a cornerstone of a broader software-defined deduplication storage target strategy.

This caution, in many cases, stems less from the technology itself and more from the vendors who provide these SDS deduplication target solutions. In every case, save one, these solutions originate with providers who focus on selling hardware solutions.

Foundation for Software-defined Data Centers Being Laid Today

Companies are putting plans in place right now to build the data center of tomorrow. That data center will be largely software-defined, with solutions that span both on-premises and cloud environments. To achieve that end, companies need to select solutions with a software-defined focus that meet their current needs while positioning them for tomorrow’s requirements.

Most layers in the data center stack, including compute, networking, storage, and even applications, are already well down the road of transforming from hardware-centric to software-centric offerings. Yet in the face of this momentous shift in corporate data center environments, SDS deduplication target solutions have been slow to adapt.

It is this gap that SDS deduplication products such as Quest QoreStor look to fill. Coming from a company with "software" in its name, Quest comes without the hardware baggage that other SDS providers must balance. More importantly, Quest QoreStor offers a feature-rich set of services, ranging from deduplication and replication to support for all major cloud, hardware, and backup software platforms, built on 10 years of experience delivering deduplication software.

Free to focus solely on delivering a software-defined data center solution, Quest QoreStor represents the type of SDS deduplication target that truly meets the needs of today’s enterprises while positioning them to realize the promise of tomorrow’s software-defined data center.

To read more of DCIG’s thoughts about using SDS deduplication targets in the software-defined data center of tomorrow, follow this link.




DCIG’s VMworld 2018 Best of Show

VMworld provides insight into some of the biggest tech trends occurring in the enterprise data center space and, once again, this year did not disappoint. But amid the literally dozens of vendors showing off their wares on the show floor, here are the four stories or products that caught my attention and earned my “VMworld 2018 Best of Show” recognition at this year’s event.

VMworld 2018 Best of Show #1: QoreStor Software-defined Secondary Storage

Software-defined dedupe in the form of QoreStor has arrived. A few years ago Dell Technologies sold off its Dell Software division, which included an assortment (actually a lot) of software products, and that business re-emerged as Quest Software. Since then, Quest Software has done nothing but innovate like bats out of hell. One of the products coming out of this lab of mad scientists is QoreStor, a stand-alone software-defined secondary storage solution. QoreStor installs on any server hardware platform and works with any backup software. Quest offers a download of QoreStor that deduplicates up to 1TB of data at no charge. While at least one other vendor (Dell Technologies) offers a deduplication product that is available as a virtual appliance, only Quest QoreStor gives organizations the flexibility to install it on any hardware platform.
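For readers who want a concrete picture of what a deduplication target does under the covers, the following is a minimal sketch of content-hash block deduplication in Python. It illustrates the general technique only; it is not QoreStor’s implementation, and the fixed block size and in-memory block store are assumptions made purely for the example.

```python
# Minimal illustration of content-hash block deduplication (not QoreStor's
# actual implementation). Unique blocks are stored once, keyed by their
# SHA-256 digest; duplicate blocks only add a reference to the recipe.
import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed block size, in bytes

block_store = {}   # digest -> block bytes (each unique block stored once)

def dedupe(data: bytes) -> list:
    """Split data into fixed-size blocks and return a 'recipe' of digests."""
    recipe = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)  # store only if unseen
        recipe.append(digest)
    return recipe

def restore(recipe: list) -> bytes:
    """Rebuild the original data from its recipe of block digests."""
    return b"".join(block_store[d] for d in recipe)

# Example: two backups that share most of their content
backup1 = b"A" * 8192 + b"B" * 4096
backup2 = b"A" * 8192 + b"C" * 4096
r1, r2 = dedupe(backup1), dedupe(backup2)
stored = sum(len(b) for b in block_store.values())
print(f"Logical: {len(backup1) + len(backup2)} bytes, stored: {stored} bytes")
assert restore(r1) == backup1 and restore(r2) == backup2
```

Production deduplication targets layer variable-length chunking, compression, and persistent, replicated block stores on top of this basic idea, which is what lets a highly redundant data set consume far less physical capacity than its logical size.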

VMworld 2018 Best of Show #2: LoginVSI Load Testing

Stop guessing about the impact of VDI growth with LoginVSI. Right-sizing the hardware for a new VDI deployment from any VDI provider (Citrix, Microsoft, VMware, etc.) is hard enough. But keeping the hardware appropriately sized as more VDI instances are added, or as patches and upgrades are applied to existing instances, can be dicey at best and downright impossible at worst.

LoginVSI helps take the guesswork out of any organization’s future VDI growth (a simplified sizing sketch follows this list) by:

  • examining the hardware configuration underlying the current VDI environment,
  • comparing that to the changes proposed for the environment, and then
  • making recommendations as to what changes, if any, need to be made to the environment to support
    • more VDI instances or
    • changes to existing VDI instances.
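That sizing sketch follows here, in Python. It is an illustration only, using invented host and per-desktop figures; it is not LoginVSI’s actual measurement or recommendation engine, which load-tests real desktop sessions rather than relying on static assumptions.

```python
# Rough, hypothetical sizing check: can a host support a proposed number of
# VDI instances? All per-desktop figures are invented for illustration and
# are not LoginVSI's methodology.
from dataclasses import dataclass

@dataclass
class HostResources:
    cpu_cores: int
    ram_gb: int
    iops: int

@dataclass
class VdiProfile:          # assumed footprint of one VDI instance
    vcpus: float           # average vCPU demand per desktop
    ram_gb: float
    iops: int

def max_supported_instances(host: HostResources, profile: VdiProfile) -> int:
    """The most constrained resource determines how many desktops fit."""
    return int(min(host.cpu_cores / profile.vcpus,
                   host.ram_gb / profile.ram_gb,
                   host.iops / profile.iops))

host = HostResources(cpu_cores=64, ram_gb=768, iops=40_000)
profile = VdiProfile(vcpus=0.5, ram_gb=6, iops=300)
proposed = 150  # desktops planned after the next round of growth

limit = max_supported_instances(host, profile)
if proposed > limit:
    print(f"Undersized: host supports ~{limit} desktops, {proposed} proposed")
else:
    print(f"OK: host supports ~{limit} desktops, {proposed} proposed")
```

The point of the sketch is simply that the most constrained resource, not the average one, dictates how many desktops a host can support; tools like LoginVSI replace the invented per-desktop figures with measured ones.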

VMworld 2018 Best of Show #3: Runecast vSphere Environment Analyzer

Plug vSphere’s holes and keep it stable with Runecast. Runecast was formed by a group of former IBM engineers who, over the years, had become experts at two things: (1) installing and configuring VMware vSphere; and (2) pulling out their hair trying to keep each installation of VMware vSphere stable and up-to-date. To rectify this second issue, they left IBM and formed Runecast.

Runecast provides organizations with the software and tools they need to proactively:

  • identify needed vSphere patches and fixes,
  • identify security holes, and
  • recommend best practices for tuning vSphere deployments,

all with minimal to no manual intervention (a simplified sketch follows).
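That simplified sketch, in Python, shows the flavor of rule-based configuration checking; the rules and settings below are hypothetical examples, not Runecast’s actual engine or knowledge base.

```python
# Simplified illustration of rule-based configuration checking. The host
# settings and rules are invented for the example.
host_config = {
    "esxi_build": 8169922,
    "ssh_enabled": True,
    "ntp_servers": [],
}

# Each rule: (description, predicate that returns True when the check passes)
rules = [
    ("Host is at or above the minimum recommended build",
     lambda cfg: cfg["esxi_build"] >= 9000000),
    ("SSH is disabled when not actively needed (hardening guideline)",
     lambda cfg: not cfg["ssh_enabled"]),
    ("At least one NTP server is configured",
     lambda cfg: len(cfg["ntp_servers"]) > 0),
]

# Report every rule that fails for this host
findings = [desc for desc, check in rules if not check(host_config)]
for finding in findings:
    print("FINDING:", finding)
```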

VMworld 2018 Best of Show #4: VxRail Hyper-converged Infrastructure

Kudos to the Dell Technologies VxRail solution for saving every man, woman, and child in the United States about $2 in 2018. I was part of a breakfast conversation on Wednesday morning with one of the Dell Technologies VxRail product managers. He anecdotally shared with the analysts present that Dell Technologies recently completed two VxRail hyper-converged infrastructure deployments in two US government agencies. Each deployment is saving its agency $350 million annually. Collectively that amounts to $700 million, or roughly $2 for every person residing in the US. Thank you, Dell.




VMware vSphere and Nutanix AHV Hypervisors: A Head-to-Head Comparison

Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two common choices but key differences between them persist.

In the last couple of years, HCIAs have emerged as a key enabler for cloud adoption. Aside from the connectivity to public and private clouds that these solutions often provide, they offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises adopt HCIA solutions in general, as well as HCIA solutions from providers like Nutanix, they must still evaluate key features of these solutions. One variable that enterprises should pay particular attention to is which hypervisors are available to run on these HCIA solutions.

Unlike some other HCIA solutions, Nutanix gives organizations the flexibility to choose which hypervisor they want to run on their HCIA platform. They can choose to run the widely adopted VMware vSphere, or they can run Nutanix’s own Acropolis hypervisor (AHV).

What is not always so clear is which one they should host on the Nutanix platform, as each hypervisor has its own set of benefits and drawbacks. To help organizations make a more informed choice as to which hypervisor is the best one for their environment, DCIG is pleased to make its most recent DCIG Pocket Analyst Report, a head-to-head comparison of the VMware vSphere and Nutanix AHV hypervisors, available for complimentary download.

This succinct, 4-page report includes a detailed product matrix as well as insight into seven key differentiators between these two hypervisors and which one is best positioned to deliver on key cloud and data center considerations such as:

  1. Breadth of partner ecosystem
  2. Enterprise application certification
  3. Guest OS support
  4. HCIA management capabilities
  5. Overall corporate direction
  6. Software licensing options
  7. Virtual desktop infrastructure (VDI) support

To access and download this report at no charge for a limited time, simply follow this link and complete the registration form on the landing page.




Software-defined Data Centers Have Arrived – Sort of

Today, more than ever, organizations are looking to move to software-defined data centers. Whether they adopt software-defined storage, networking, computing, servers, security, or all of them as part of this initiative, they are starting to conclude that a software-defined world trumps the existing hardware-defined one. While I agree with this philosophy in principle, organizations need to carefully dip their toes into the software-defined waters rather than dive in head-first.

The concept of software-defined data centers is really nothing new. The topic has been discussed for decades and was the subject of one of the first articles I ever published 15 years ago (though the technology was more commonly called virtualization at that time). What is new, however, is that the complementary, supporting set of hardware technologies needed to enable the software-defined data center now exists.

More powerful processors, higher-capacity memory, higher-bandwidth networks, scale-out architectures, and other technologies have each contributed, in part, to making software-defined data centers a reality. The recent availability of solid state drives (SSDs) is perhaps the technology that ultimately enabled this concept to move from the drawing board into production. SSDs reduce data access times from milliseconds to microseconds, helping to remove one of the last remaining performance bottlenecks standing in the way.

Yet as organizations look to replace their hardware-defined infrastructure with a software-defined data center, they must still proceed carefully. Hardware-defined infrastructures may cost a lot more than software-defined data centers, but they offer distinct benefits that software-defined solutions are still hard-pressed to match.

For instance, the vendors who offer the purpose-built appliances for applications, backup, networking, security, or storage used in hardware-defined infrastructures typically provide hardware compatibility lists (HCLs). Each HCL names the applications, operating systems, firmware, and so on with which the appliance is certified to interoperate and for which the vendor will provide support. Deviate from that HCL and your ability to get support suddenly gets sketchy.
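To make the idea concrete, here is a minimal, hypothetical sketch of what automated HCL verification might look like in Python; the component names and version strings are invented for illustration and do not come from any vendor’s actual HCL.

```python
# Hypothetical sketch of automated HCL verification: compare an appliance's
# actual inventory against the versions the vendor has certified.
hcl = {  # vendor-certified combinations (invented for illustration)
    "hba_firmware": {"2.1.4", "2.1.5"},
    "os": {"RHEL 7.6", "RHEL 7.7"},
    "array_firmware": {"5.3.0"},
}

inventory = {  # what is actually deployed (invented for illustration)
    "hba_firmware": "2.0.9",
    "os": "RHEL 7.6",
    "array_firmware": "5.3.0",
}

# Flag every component whose deployed version is not on the HCL
violations = {
    component: version
    for component, version in inventory.items()
    if version not in hcl.get(component, set())
}

if violations:
    print("Unsupported configuration detected:", violations)
else:
    print("Configuration is within the HCL")
```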

Even then, HCLs are problematic: the number of possible configurations in enterprise environments is so large that vendors can never thoroughly vet and test them all.

This has led to the emergence of converged infrastructures. Using these, vendors guarantee that all components in the stack (applications, servers, network, and storage, along with their firmware and software) are tested and certified to work together. So long as organizations use the vendor-approved and tested hardware and software components in this stack and keep them in sync with the vendor’s specifications, they should have a reliable solution.

Granted, obtaining solutions that satisfy these converged infrastructure requirements costs more. But for many enterprises, paying the premium is worth it. This testing helps to eliminate situations such as one I experienced many years ago.

In the middle of a system-wide SAN upgrade, we discovered that an FC firmware driver on all the UNIX systems could not detect the LUNs on the new storage systems. Upgrading this driver required us to spend nearly two months, with individuals coming in every weekend to apply the fix across all these servers, before we could implement and use the new storage systems.

Software-defined data centers may still encounter these types of problems. Even though the software itself may work fine, it cannot account for all the hardware in the environment or guarantee interoperability with it. Further, since software-defined solutions tend to go into low-cost and/or rapidly changing environments, there is a good possibility that the HCLs and/or converged solutions they do offer are limited in scope and may not have been subjected to the extensive testing that production environments demand.

The good news is that software-defined data centers are highly virtualized environments. As such, copies of production environments can be made and tested very quickly. This flexibility mitigates the dangers of creating unsupported, untested production environments. It also provides organizations an easier, faster means to fail back to the original configuration should the new configuration not work as expected.

But here’s the catch. While software-defined data centers provide flexibility, someone must still possess the skills and knowledge to make the copies, perform the tests, and do the failbacks and recoveries if necessary. Further, software-defined data centers eliminate neither their reliance on underlying hardware components nor their reliance on the individuals who create and manage them.

Because interoperability with the hardware is not a given, and people are known to be unpredictable and/or unreliable from time to time, the whole system could go down or function unpredictably without a clear path to resolution. Further, if one encounters interoperability issues initially or at some point in the future, the situation may get thornier. Organizations may have to ask and answer questions such as:

  1. When the vendors start finger pointing, who owns the problem and who will fix it?
  2. What is the path to resolution?
  3. Who has tested the proposed solution?
  4. How do you back out if the proposed solution goes awry?

Software-defined data centers are rightfully creating a lot of buzz, but they are still not the be-all and end-all. The technology now exists at all levels of the data center to make this architecture practical to deploy and to let companies realize significant hardware savings in their data center budgets. However, the underlying best practices and support needed to successfully implement software-defined data centers are still playing catch-up. Until those are fully in place, or you have full assurances of support from a third party, organizations are advised to proceed with caution on any software-defined initiative, data center or otherwise.




Veritas Lays Out Its Own Agenda for Enterprise Data Management at Veritas Vision

This year’s Veritas Vision 2016 conference held a lot of intrigue for me. The show itself was not new; Vision has been an ongoing event for years, though this was the first time in more than a decade that Veritas was free to set its own agenda for the entire show. Rather, the intrigue was in what direction Veritas would take going forward. This Veritas did by communicating that it plans to align its product portfolio and strategy to deliver on an objective that has, to date, eluded enterprise organizations and vendors alike for at least two decades: enterprise data management.

Veritas’ intent to deliver comprehensive data management to enterprise organizations is likely to be welcomed by executives in large organizations but also viewed with a certain amount of apprehension. This is not the first time that a major technology provider has cast a vision to bring all data under centralized management; providers such as EMC and IBM have both done so in the past. However, each of those prior attempts resulted in an outcome best classified as somewhere between abject failure and total disaster.

Now Veritas stands before leaders in enterprise organizations and asks them to believe that it can accomplish what no prior technology provider has successfully been able to deliver. To its credit, it does have more proof points to support its claim that it can succeed where its competitors failed. Here’s why.

  1. Veritas probably protects more enterprise data using NetBackup than all of its competitors combined. This breadth of enterprise backup data under NetBackup’s purview puts Veritas in a unique position, compared to its competitors, to understand more than just how much data resides in organizations’ archival, backup, and production data stores. It also gives NetBackup unparalleled visibility into the data stored in these various storage silos through the metadata it has captured and stored over the years. Using this metadata, Veritas can create an Information Map that shows organizations where their data resides and provides insight into what information these data stores contain and how frequently the data has been accessed. This breadth and quantity of historical information contained in NetBackup is insight that Veritas’ competitors simply lack.
  2. Veritas (for the most part) remains a pure software play, which aligns with the shift to software-defined data centers that enterprises want to make. While Veritas admittedly does sell a NetBackup appliance (and it sells a lot of them), there has been almost no discussion from Veritas executives at this event about expanding its hardware presence. Just the opposite, in fact. Veritas wants to boldly go into the software-defined storage (SDS) software market and equip both cloud providers and enterprise organizations with the SDS software they need to first create and then centrally manage a heterogeneous storage infrastructure. While I can envision Veritas building upon the success it has experienced with its NetBackup appliance to create a turnkey SDS software appliance, I see that more as a demand imposed upon it by its current and prospective customer base than as a strategic initiative that it will aggressively seek to promote or grow.
  3. The technologies as well as the political structures within data centers have sufficiently evolved and matured to permit the adoption of an enterprise data management platform. Twenty years ago, 10 years ago, and even a couple of years ago, enterprise organizations simply were not technically or politically ready for an initiative as aggressive as enterprise data management. The climate on both of those fronts has changed. On the technical side, the advent of flash, hyper-converged infrastructures (HCI), and software-defined flash has contributed to enterprise organizations getting more performance and consolidating their infrastructure at ever lower costs. As these infrastructure consolidations have occurred, IT head counts have remained flat or even declined, leaving organizations no time to pursue strategic initiatives such as implementing an enterprise data management platform available from third parties. However, by using Veritas NetBackup as the foundation for an enterprise data management platform, enterprise organizations can lay the groundwork to achieve this strategic initiative using the people and resources they already possess.

Veritas outlined an aggressive and potentially controversial vision for its own future as a software company, and its plan is not without potential pitfalls. Even after sitting through multiple briefings at Vision, I have reservations about the viability of its plan: about its ability to gather information as comprehensively throughout organizations as it hopes to do, and about its ability to deliver an SDS software solution that works the way enterprises will need it to work before they rely upon it in production.

That said, Veritas sits in the catbird seat from an enterprise perspective. I have to agree with its internal assessment that it is better positioned than any other enterprise software company to deliver on the vision as it has currently laid it out. The real task before Veritas now is to execute upon this vision and show some successes in the field, which it appears it has already begun to do. If that is the case, enterprise organizations may see the first signs of an enterprise data management platform that actually delivers on what it promises sooner rather than later.




Software Defined Storage Moving Up to Host More Tier One Applications; Interview with Nexenta’s Chairman and CEO, Tarkan Maner, Part 3

Organizations may view true software-defined storage (SDS) software as appropriate only for hosting their tier two and tier three applications. However, many known and named accounts now use SDS software to host their tier one applications. In this third and last installment of my interview series with Nexenta’s Chairman and CEO, Tarkan Maner, he explains where SDS software initially gets a foothold in organizations and why it rapidly gains traction and moves up to host tier one applications.

Jerome:  Where are organizations primarily deploying SDS in their infrastructure now?

Tarkan:  Deployments range from cold archive all the way to high-performance computing. We see it being used to host home directories and file share applications, high-performance computing for research, high-end trading applications at financial institutions, and high-end databases and transactional ERP systems.

However, we usually start in home directories, file shares, backups, archives, active archives, OpenStack archives, and cold archives. This is our sweet spot because we have such a new solution and a new way of going to market. Customers like to start with tier two and tier three types of applications to prove the value, and then find themselves naturally moving up to tier one.

Companies like GoDaddy and Qualcomm; many agencies in the public sector like NASA, the Department of Defense, the Department of Energy, and the Department of the Treasury; the government of Brazil; and Bertelsmann in Germany: the reason they chose Nexenta is that they have a lot of backup data, a lot of archival data, and a lot of tier one storage. This costs them [as much as] $1,000 per terabyte (TB). They want to first prove the value of Nexenta in the backups, the archives, and the home directories and file shares, and then move up from there. All of these companies started using Nexenta with their tier two and tier three applications and moved up to tier one.

Today at Korea Telecom, we have 200 terabytes under production. At GoDaddy, more than 30 petabytes under production. Nexenta is a very smart technology service provider. I will tell you, when VMware started its journey 15 years ago, nobody believed it could virtualize servers or gave it credit for doing so, especially the server companies. Server companies mocked VMware. Look where they are today.

The same thing is happening within the storage industry. The storage industry mocks software-defined storage. They belittle us. They say that we have no understanding of the storage market. They say customers love monolithic, expensive, legacy, mainframe hardware.

Guess what, the times are changing. We are pushing very hard to change the industry. Obviously, we believe all of the other players in the space, even though they might be in different categories, are also trying to change the industry in a big way. We believe we are in the best position to do so because of our customer base and our solution support for a variety of workloads.

To give one example, the CIO of Cambridge University is going to be at OpenStack Summit giving a keynote presentation on how they are doing genomic research with Nexenta’s scale-out product, changing the game, end-to-end. We are really excited about the workloads we are supporting as we move forward.

Jerome:  Are they doing separate deployments of Nexenta for separate tier one, two, and three applications? Or are they creating one large pool of storage?

Tarkan: It depends on the deployment. In most cases, storage pools are divided; partly because organizations are so screwed up due to all of the different technology purchases they have made in the last three decades. We see them using Nexenta as a bridge to connect these storage pools together, starting with tier two and tier three type of applications before moving up into tier one.


Jerome:  Can you talk about an event that really started people talking about software-defined storage?

Tarkan: Dell buying EMC and other consolidation events that have happened, like NetApp doing acquisitions and so forth, showed that customers are prioritizing lower-cost solutions. Customers also realize they can get their storage solutions at a lower cost because software-defined storage software is a reality.

The inflection point in the marketplace happened in the last twelve months, as evidenced by all of these mergers and acquisitions. This shows that customers are looking for super levels of cost cutting. Further, we are hearing from analyst firms that inquiries around software-defined storage are doubling, month to month and quarter to quarter. That’s another sign that we are seeing.

In Part 1 of this interview series, Tarkan provides his definition of software-defined storage (SDS) software and then calls out storage providers for holding their customers hostage with overpriced and inflexible storage solutions.

In Part 2 of this interview series, he provides his views into how SDS software is impacting the competitive landscape, and how Nexenta seeks to differentiate itself.




SDS’s Impact on the Storage Landscape; Interview with Nexenta Chairman and CEO, Tarkan Maner, Part 2

In the last 12-18 months, software-only software-defined storage (SDS) seems to be on the tip of everyone’s tongue as the “next big thing” in storage. However, getting some agreement as to what features constitute SDS software, who offers it, and even who competes against whom can be difficult, as provider allegiances and partnerships quickly evolve. In this second installment of my interview series with Nexenta’s Chairman and CEO, Tarkan Maner, he provides his views into how SDS software is impacting the competitive landscape and how Nexenta seeks to differentiate itself.

Jerome: You have made comments to the effect that VMware views companies such as DataCore and FalconStor as competitors, as opposed to Nexenta. Can you elaborate on that comment?

Tarkan:  There are three categories of companies. One is storage vendors that take an old school approach with legacy systems and hardware appliances that have some storage software. Although they call themselves software-defined storage (SDS), at the end of the day, the customer buys a box.

A second category is the companies that come from the old legacy storage resource management (SRM) and storage virtualization worlds. Although these are some great companies who have done some great things, their solutions are a bit more static, and the way they deliver software-defined storage is not as pure software.

Then the third category is true software-only software-defined storage. These are companies that are 100 percent software companies. Their software runs on any infrastructure, on any server, on any flash, and can integrate with any kind of a stack.

NexentaStor (Source: Nexenta)

Nexenta’s software-defined storage runs with Red Hat, Canonical, Docker, VMware, Citrix, and Microsoft. We also work with VMware hand-in-hand as partners in the marketplace. Rather than being a software defined storage solution for a specific stack, our vision is a much more expansive, open one.

Our vision is keeping our software-defined storage platform truly independent. Nexenta is the only private company with a base of almost 6,000 customers, a very large partner base, and OEM relationships that can deliver on any stack, on any server, and on any disk. That is our differentiation from the rest of the world.

Jerome:  What challenges does that create for you?

Tarkan: We are still a small company. Although we are growing at a healthy rate (with 80-plus gross margins), we still fight a big fight with very large vendors. Some of these large companies might see open source as a little niche play. But let me tell you, we have a strategic solution supporting a lot of our partner systems like server vendors, disk vendors, flash vendors, and cloud platform vendors. Our model is expansive but, at the same time, very open and aggressive in the marketplace through our partnerships.

Ultimately, our customers tell us this is the way to go. That is the reason we are not shooting for the sun but for the Milky Way. Hopefully we are going to minimally end up at Mars, not in San Carlos.

In Part 1 of this interview series, Tarkan provides his definition of Software-Defined Storage (SDS) and then calls out storage providers for holding their customers hostage with overpriced and inflexible storage solutions.

In the next and final installment in this interview series, Tarkan elaborates on how SDS is moving up to host more tier one applications.




SDS can Free You from your Storage Captors; Interview with Nexenta’s CEO Tarkan Maner, Part 1

Any organization that looks at the cost of networked storage for the first time may suffer from sticker shock as they look to deploy a solution. Conversely, those who already have a networked storage solution in place may feel bound to keep using the same provider going forward. Nexenta’s Chairman and CEO, Tarkan Maner, unabashedly addresses these concerns in this first part of my interview series with him as he first defines Software-Defined Storage (SDS) and then calls out storage providers for holding their customers hostage with overpriced and inflexible storage solutions.


Jerome: What do you see as the primary pain points that organizations are trying to solve when selecting an SDS appliance?

Tarkan: To understand the overall story here, one first has to define SDS, because it is contextual. Nexenta does not sell an appliance. It actually sells pure software that runs on any existing commodity server, on any disk or flash, and that integrates with any kind of workload running on any type of platform, whether it is VMware, Citrix, Microsoft, OpenStack, Docker (or any kind of container), or Red Hat Linux; it does not matter to us.

That is a very important delineation to understand before answering this question. We do not consider companies that just sell appliances with software built into them, and refer to themselves as “SDS,” to be real SDS.

Having clarified that, we see the pain point is exactly that: a wrong definition by the customer buying these appliances, which cost an average of $1,000 per TB if you rely upon IDC’s and Gartner’s numbers. Nexenta brings that cost down 80 to 90 percent, to the $100-200 per TB level, on any commodity infrastructure.

The second big pain point for these customers: they have been in what you might call Stockholm Syndrome for the past three decades, held hostage by their captors, these so-called storage vendors. The hostages are the CIOs, trying to figure out how to break through these walls to make sure they have freedom of choice in using their storage management software on any infrastructure, any disk, any flash, in any server, for any workload, on a perpetual license.

Nexenta has solutions right now at 3, 4, 5 cents per GB with a perpetual license for any server, disk or flash, supporting any workload on any platform, again from VMware, Citrix, Microsoft, or containers.

So, to succinctly answer your question as to what problems organizations are trying to solve:

  • Number one, the cost
  • Number two, the freedom of choice

This lack of flexibility in cost and choice is driving them to SDS, to true software-defined storage software.

Jerome:  Please describe what you see as the primary attributes/features that an SDS software solution should possess.

Tarkan: Number one, it should be totally software. Number two, it should support different architectures, with support and service behind it for any vision, meaning you are not trying to throw software at a customer and let them figure things out.

You still help them adapt in the market, making sure their solutions are certified, tested, and proof-pointed, whether that is on Dell, Cisco, HP, Quanta, or Super Micro, supporting any kind of disk or flash from Seagate, Toshiba, SanDisk, and everything in between.

Again, there are integration points and service and support around platforms like VMware, Citrix, Microsoft, containers, and OpenStack. Those are the business model attributes I think customers are looking for as differentiation.

From the functionality and intellectual property (IP) perspective, one key attribute is an open system that delivers block, file, and object interfaces, again for any workload, from a single pane of glass, with orchestration that we provide and that works end-to-end.

This gives them an opportunity to use our solution for any workload, ranging from full archives and active archives all the way to high-performance tier one applications. From a functional perspective, make sure the SDS software has the flexibility, attributes, and feature functionality to run in a scale-up environment for scale-up types of applications, supporting block, file, and object, as well as in scale-out environments, again supporting block, file, and object, from a single pane of glass for orchestration.

I gave you a bit of a long answer, but to summarize, the three attributes that an SDS software solution should possess include:

  • One, end-to-end service and support
  • Two, reference architectures with the ecosystem, making sure a customer has a rock solid solution
  • Three, openness and end-to-end functionality to deliver solutions to the customers that move them out of their expensive model of existence that they have endured for the past three decades

In Part 2 of this interview series, Tarkan talks about Nexenta’s competitors and how Nexenta distinguishes itself from them.