SM: You were able to find a starting point in the open source community?
MM: I found a good model from a different domain that we experimented with, and it worked well. That was the point at which we started making the protocol. Serban was still at his job at the time, and he kept working there for the next six months; he left in December of 2003. We launched the company in 2004, and it took us a full year to get the protocol developed.
We basically worked on that, and by the fall of 2004 we had two opportunities to present it. The first was with the media company that knew me and had pushed me in this direction, Warner Brothers Studios. The second was a defense contractor that became my first customer. They were both impressed. The defense contractor benchmarked our prototype, which was just command line software, and said they would buy it if we ported it to Windows. We then started the Windows port as fast as possible.
SM: Were you then able to grow organically from that first customer?
MM: The most significant day was the notice of our first purchase order on February 4, 2004. It was a $20,000 purchase order, which I think was end-of-year money for this defense contractor. They thought they would throw it our way, and that is what we used to get launched. That is how we went to our first NAB as a company, how we got our first business guy, and how we got our next batch of customers.
Warner Brothers became our first media customer that summer, although it was ultimately a different group there that made the purchase. It was those two engagements, the meetings and the follow-on evaluations, that gave us the confidence to push on through. Our benchmarks were better than anything those guys had tried before.
SM: What were you being benchmarked against at that time?
MM: Multiple things. Warner Brothers had TCP accelerators, devices they put in line to improve TCP's flow control so that FTP run over them produced a faster transfer. That is what we were up against, and we blew them away.
In the government case I never knew exactly what we were being benchmarked against, but they did tell me we needed to run under heavy delay and packet loss. It was extreme, well beyond anything in the commercial space. That actually turned out to be our advantage. We performed so well that they ended up deploying us in radical situations where they did not have other solutions. It went into Korea and Iraq after that. These were very specialized scenarios with smaller links such as microwave or RF links. They would write in with their problems: tremendous packet delay and very dirty, high-loss links. The main requirement was a reliable, consistent stream of data.
SM: What did you do to the protocol that allowed you to achieve such performance?
MM: We did three things. First, we chose a design space narrower than TCP acceleration. We said our design goal was bulk data transfer, which we defined to be basically file transfer: you have a source of data that needs to be transferred across a network and written to a data sink on the other end.
The second thing Serban and I did was define it as bulk data. That is important, because bulk data means the amount of data significantly exceeds the bandwidth-delay product of the link. In those applications you can afford to do out-of-order delivery instead of using a window the way TCP does, where the receiver has to get all of the data in the window before the application can move forward with the transfer. In these kinds of applications you can write the data you receive out to disk at any place in the file; it is all useful, right up to the point where you finish the file. Your window is as big as the file.
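To put a number on the bandwidth-delay product: a 1 Gbps link with a 100 ms round-trip time holds about 12.5 MB in flight, so any multi-gigabyte file easily qualifies as bulk data. The out-of-order idea described above can also be sketched in a few lines. This is an illustrative toy, not the actual protocol implementation: each block carries an index, and the receiver writes it at the corresponding file offset in whatever order the network delivers it.

```python
import os

BLOCK_SIZE = 4  # tiny blocks for the demo; real transfers use far larger ones

def write_block(fd, index, data):
    """Place one received block at its file offset, regardless of arrival order."""
    os.pwrite(fd, data, index * BLOCK_SIZE)

# Blocks arriving out of order, as they might over a long, lossy link.
arrivals = [(2, b"IJKL"), (0, b"ABCD"), (3, b"MNOP"), (1, b"EFGH")]

fd = os.open("received.bin", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
received = set()
for index, data in arrivals:
    write_block(fd, index, data)
    received.add(index)  # the effective "window" is the whole file
os.close(fd)

# The transfer is complete once every block index has landed.
assert received == set(range(4))
with open("received.bin", "rb") as f:
    print(f.read())  # b'ABCDEFGHIJKLMNOP'
```

Because every received block is immediately useful, a single lost packet never stalls delivery of the data behind it, which is exactly what a TCP-style in-order window cannot avoid.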
Finally, and this is where we really break away from simple UDP data blasters that do some kind of retransmission: we realized in our independent study that the main problem with all the blasters is duplicate data transmission. They were resending data that was already on its way or already received, which triggered further loss and further retransmit requests, an explosion of packet loss. The sender slows down to catch up, and in the worst case some of them simply stop. Either they were sending massive amounts of duplicate data, which wastes bandwidth, or they stalled. Their flow control breaks down under normal wide area network conditions. That was the problem we tackled.
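The duplicate-data trap can be made concrete with a small sketch. This is a hypothetical illustration of the bookkeeping, not the company's actual protocol: a naive blaster re-requests every missing block on every pass, so duplicate copies pile up in flight and worsen the congestion, whereas tracking when each block was last requested lets the receiver ask again only after the earlier request has had time to be answered.

```python
class RetransmitTracker:
    """Re-request a missing block at most once per timeout window.

    Naive UDP blasters re-request every missing block on every pass, so
    duplicates of in-flight data accumulate and amplify the loss. Here a
    block is only asked for again once its previous request has aged past
    a retransmission timeout (rto), so no duplicate requests overlap.
    """

    def __init__(self, rto_seconds):
        self.rto = rto_seconds   # how long to wait before asking again
        self.requested_at = {}   # block index -> time of last request

    def blocks_to_request(self, missing, now):
        """Return only the missing blocks whose requests are not still in flight."""
        due = []
        for block in sorted(missing):
            last = self.requested_at.get(block)
            if last is None or now - last >= self.rto:
                due.append(block)
                self.requested_at[block] = now
        return due

tracker = RetransmitTracker(rto_seconds=0.5)
print(tracker.blocks_to_request({7, 8, 9}, now=0.0))  # [7, 8, 9]: first request for each
print(tracker.blocks_to_request({7, 8, 9}, now=0.1))  # []: requests still in flight
print(tracker.blocks_to_request({8, 9}, now=0.6))     # [8, 9]: 7 arrived, rest timed out
```

The second call returning an empty list is the whole point: even though all three blocks are still missing, nothing is re-requested while the earlier requests could still be answered, so the sender never floods the link with copies of data that is already on its way.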