Google-infused storage startup Cohesity reveals itself

Armed with $70M in venture funding, Cohesity aims to streamline secondary storage mess

Cohesity CEO Mohit Aron

Mohit Aron has a tough act to follow: His previous startup, Nutanix, may be on the cusp of filing for an IPO that values the hyperconverged infrastructure company at $2.5 billion. But Aron is off to a good start with his new venture, Cohesity, which this week emerges from stealth mode with $70 million in venture funding, reference-able customers such as Tribune Media, and a focus on a potentially big market in converging the secondary storage that houses so much DevOps, data protection, analytics and other unstructured data.

Part of Cohesity’s attraction to investors and early customers is its rich Google pedigree: Aron worked on the Google File System that the search giant relies on for core data storage and access, and about a quarter of the 30 engineers on his 50-person team come from Google as well. What’s more, Google Ventures is among Cohesity’s backers (at least Google makes some money off its ex-employees’ efforts this way, the 41-year-old entrepreneur quips). Google, which has gained a reputation for building its own infrastructure technology, isn’t using the startup’s gear yet, but Aron says maybe someday…

I spoke with the computer science Ph.D.-wielding CEO earlier this week to learn about how the idea for Cohesity was hatched and where the Santa Clara company is headed. Here’s an edited transcript of that discussion:



Tell me the story of how Cohesity started up.

I spent more than three years at Nutanix: the technology was mature, and hyperconvergence was already taking over the world. But there was one problem that I saw: Hyperconvergence applied to primary storage for, basically, virtualization environments. But the bulk of the data actually sits in secondary storage, which we are redefining to be not just data protection but all the kinds of storage involved in applications that aren't mission critical [and handled in primary storage]: data protection, test and development, and analytics. So I saw a whole bunch of problems in secondary storage that could benefit from a different form of convergence. I left Nutanix in early 2013, thought over how best to fix the problem in secondary storage, and came up with the idea for Cohesity, which was incorporated in the summer of 2013.

So why didn’t you just try to stretch what Nutanix was doing to address the secondary storage problem?

Our vision, and this spans my experience building storage systems for the past 10 to 15 years, is that the data center consists of two kinds of storage: primary [the small tip of the iceberg above the water] and secondary [the bigger chunk below]. And when you address one aspect of storage, you focus on the value-add that applies there. In primary storage, what's most important for customers is stuff like high performance and strict SLAs. So systems get architected for those purposes. Secondary storage should really be separate. Some people talk about converged primary and secondary storage, but in my mind that doesn't make sense. If you have a bug in that system it's not only going to take down your primary storage but also your secondary storage. So secondary storage is really separate, and the workflows it addresses are separate. Just look at data protection: what kind of environments can you back up and how often? How much can you scale? The scalability you require is much more general purpose than in a virtualization environment. The solution I implemented at Nutanix would work very well for file I/O but would not scale very well for namespace operations like creating or deleting files. Our vision now is to converge all the secondary workflows into one infinitely scalable platform. [Aron added that while Nutanix is a mature company and his new one is not, the time could be right at some point for the two to partner.]

You mention your background working on the Google File System, and Cohesity says its Data Platform uses a Google-like, web-scale architecture. Can you elaborate on how working on the Google File System has informed your ventures since then?

After I graduated with my Ph.D. [in computer science from Rice University] I worked at a scale-out company called Zambeel in the early 2000s, and the architects had built in the assumption that if something failed, it would probably come back up in a few minutes. When I worked at Google I saw a different view of the world. I saw a world where the smallest systems comprised 5,000 to 10,000 server nodes, back when Google had millions rather than gazillions like now. When you're talking about that scale you cannot babysit these systems. When something goes down it will probably stay down for an extended period of time, and there is no hope that an admin will come along and have time to fix it. One of the ways the Google File System handled interruptions differently is that it said, hey, if any component fails and stays down for an extended period of time, you design around that so the system can heal itself, almost like when an organ of the body is going to die, you work around it rather than waiting for the doctor to come and implant a new one. That is one philosophy on which the Google File System works, and it has carried over to the systems I've worked on since.
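To make that self-healing philosophy concrete, here is a minimal sketch (an illustration, not Google's or Cohesity's actual code) of a cluster that re-replicates data from nodes that stay unreachable past a grace period instead of waiting for an operator to repair them; the names DEAD_TIMEOUT_S, REPLICATION_FACTOR and Cluster are hypothetical:

```python
# Sketch: heal around long-dead nodes by re-replicating their chunks.
import time
import random
from collections import defaultdict

DEAD_TIMEOUT_S = 15 * 60      # how long a node may be silent before we act
REPLICATION_FACTOR = 3        # desired number of copies of every chunk

class Cluster:
    def __init__(self):
        self.last_heartbeat = {}                  # node_id -> unix timestamp
        self.chunk_locations = defaultdict(set)   # chunk_id -> set of node_ids

    def heartbeat(self, node_id):
        self.last_heartbeat[node_id] = time.time()

    def dead_nodes(self):
        now = time.time()
        return {n for n, t in self.last_heartbeat.items()
                if now - t > DEAD_TIMEOUT_S}

    def heal(self):
        """Re-replicate chunks whose copy count fell below REPLICATION_FACTOR."""
        dead = self.dead_nodes()
        live = set(self.last_heartbeat) - dead
        for chunk_id, nodes in self.chunk_locations.items():
            healthy = nodes - dead
            missing = REPLICATION_FACTOR - len(healthy)
            if missing > 0 and healthy:
                # copy from any surviving replica to nodes that lack one
                candidates = list(live - healthy)
                for target in random.sample(candidates,
                                            min(missing, len(candidates))):
                    healthy.add(target)   # stand-in for the actual data copy
            self.chunk_locations[chunk_id] = healthy
```

The point of the sketch is only the control loop: no human intervenes, the cluster simply routes around whatever has stayed down too long.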

Another thing: a lot of systems, going back to the first company I joined, had a database sitting on the side that all the transactions went through, and the company would still claim scalability. But the reality was that this database was a bottleneck. So it became an exercise in making it work on the most powerful piece of hardware we could find, but eventually scalability is limited by that one machine. The philosophy behind the Google File System is that there's no single bottleneck [early versions did have a single master, but later versions eliminated that].
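One common way to avoid that kind of single bottleneck, sketched below as an illustration rather than as GFS's or Cohesity's actual design, is to shard metadata across every node with consistent hashing, so namespace operations such as file creates and deletes scale with the cluster instead of funneling through one database; the ConsistentHashRing class and node names are hypothetical:

```python
# Sketch: spread namespace ownership across nodes with consistent hashing.
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=64):
        self._ring = []                       # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):
                h = self._hash(f"{node}#{i}")
                bisect.insort(self._ring, (h, node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, key):
        """Return the node responsible for a namespace key (e.g. a file path)."""
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

# Usage: metadata operations go to whichever node owns the path, so there is
# no single master machine sitting in the transaction path.
ring = ConsistentHashRing(["node-1", "node-2", "node-3", "node-4"])
print(ring.owner("/backups/vm-42/disk0"))     # e.g. "node-3"
```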

This stuff is not taught in textbooks. So unfortunately, people who come out of Stanford or [UC] Berkeley and feel they can build a cheaper system, they’re in for a
