I happened upon the notion of software-defined storage several years ago when I was still leading software engineering at Dell’s storage business unit. One of the benefits of working at a place so large is that there are usually quite a few people around championing all sorts of new ideas because they might involve selling more servers. You get to review a lot of far-out stuff. With this in mind, it wasn’t without some skepticism that I listened to the following pitch from one of my colleagues: “Take the storage-controller firmware and get it to run on Linux as a virtual machine. We can bundle it with our servers and sell a ton of it.”
I asked how such a thing would be supported. Truthfully, I probably said something like, “Are you going to call us when the commodity gizmos in your boxes have a bug that causes the customer to lose data?” This caused my colleague to ponder support questions and other practical realities. However, the idea kept coming back from all corners of the organization.
The basic idea -- running a storage array on a piece of commodity-server hardware -- acquired the name software-defined storage (SDS), and over the next year, we spent a large amount of time researching how to develop the concept without compromising the requirements of enterprise infrastructure. Meanwhile, a number of companies quickly seized on the concept’s immaturity to christen their products as SDS. The proponents of the idea got their inspiration from the similarly named software-defined networking, which is a related concept but, as I’ll discuss later, fundamentally different in a few significant ways.
Although we spent a great deal of time and money talking to customers and building an SDS product, it became obvious that we would not be able to meet the criteria to take it to market. That said, the team acquired a lot of knowledge in the process. Below is a summary of what some of the world’s smartest data storage engineers learned as they grappled with this concept, why the rest of the storage industry has been slow to catch on, and how you can use those engineers’ insights to inform your next enterprise storage purchasing decision.
1. There’s no standard definition of SDS
A particularly vexing part of SDS is that its marketing has gotten way ahead of its definition. Unlike its cousin SDN, which has all sorts of detailed architectures and associated manifestos, the definition of SDS depends largely on the person explaining it. The promise of SDS is that it will enable enterprise IT to provide a more on-demand and agile experience for business users who rely on IT infrastructure. But beyond this basic definition, every vendor seems to have its own product vision. Is SDS a virtual machine that runs storage-array firmware? A cloud gateway? Perhaps it’s Linux running on a standard Intel server with NFS? The answer could be yes to all of these questions, depending on who’s selling.
In many ways, SDN is simpler to understand. Network switches have been closed systems for decades. Leading networking vendors had their own proprietary operating systems, as well as custom hardware that usually included complex ASIC technology to align costs -- and profit margins -- with business expectations. Only recently has the market evolved to the point where a reference hardware platform for switching could be specified.
Now, SDN is fundamentally doing to networking what the PC did to computing in the 1980s: It’s allowing software to be developed independently of hardware. History shows that if you unbundle the two, wonderful things can happen.
The analogy for storage breaks down here, because for the most part, storage arrays have already been running on reference Intel x86 hardware platforms for a long time. A switch is a much more constrained platform. There aren’t multivendor add-in cards. There isn’t a third-party BIOS to manage. There aren’t NICs from different providers.
Oh yes, one more thing: There aren’t any disks. When you’re shopping for a commercially available storage array, it may appear to be more of a rebundling of commodity components than anything like a closed system -- and you wouldn’t be wrong to make this assumption.
As frustrating as this fact might seem, recognizing it can give your business the upper hand during a storage purchase. Use your understanding to gauge the fairness of a system’s costs, knowing that many of its components are priced independently by many retailers, and work with partners, such as systems integrators, to help your team design and execute the SDS approach that’s right for you.
2. Becoming your own hardware vendor probably isn’t a realistic option
Recently, I was reminded of my admittedly ambivalent view on SDS while I was traveling and meeting with customers and partners. I had a number of interesting conversations that, upon further reflection, indicated some cause for alarm. People seem to want free advice on what kind of SSDs to buy and which servers are most reliable. I’m always happy to chat about hardware topics, but to be honest, my recommendations have a useful life of about six months. Assembling and integrating systems requires a far more deliberate research strategy than having a beer with me. However, many storage and IT pros are attempting a do-it-yourself approach to SDS.
When shopping for a new storage platform, you’ll likely need to outsource assistance. Don’t join the ranks of organizations that have learned this lesson the hard way. If you’re an enterprise IT shop and have 20 people testing disk drives and fixing open source bugs, it’s likely you’re already overinvested in an expensive, noncore activity.
Unbundling storage software and hardware requires someone to spend a lot of time and money to create new bundles. Furthermore, they will have to repeat this process regularly to keep pace with supply-chain changes. In some cases of exquisite irony, companies have needed to hire hardware groups to implement their SDS.
Building hardware bundles for storage can carry a seven-digit annual price tag, and you have to buy a lot of capacity to justify that expense. For some businesses, this may make sense as a long-run investment. A number of huge Web businesses are building out their own infrastructure, but they are the minority. If you’re still considering the do-it-yourself route, consider the following factors regarding the way hardware relates to storage: