Sramana Mitra: They basically asked you to become the CEO of this entity that was already funded?
Eric Burns: I actually became the Chief Technology Officer because my competencies at that time were those of an engineer. I’m now the Chief Product Officer and I’ve done a lot of the CEO duties. We have this unorthodox operating structure that ended up working very well for us.
Sramana Mitra: Bill, who was your professor, was the Chairman of the company and was also the acting CEO. You were CTO. There was no CEO?
Eric Burns: That’s right. Bill was not one of my professors. He was just a person I knew through connections in the university. He is, I would say, a venture capitalist foremost, being a Senior Partner at Saturn. Over time, he became the Executive Chairman. I started off in technology and moved into support and marketing, and I am now one of the two key company leaders.
Sramana Mitra: When you were saying earlier that you started being able to sell and you had more videographers, was that Panopto or was it something else?
Eric Burns: That was still within Carnegie Mellon. We were all employees of the university who were working in the CS department. With the funding it was receiving from the university’s Disabilities Office, and on the assumption that this would go somewhere and raise CMU’s prestige, the CS department bootstrapped a service division within CMU that was more or less implementing what would become Panopto’s product.
Sramana Mitra: What I’ve heard so far in terms of technology transfer is there was some sort of search type technology that you built in the online library project. Then there were people who were doing manual video recording as a service.
Eric Burns: We transferred all of the technology – not just the few patents we had been granted; all of the IP transferred directly into the new entity. We hired the programmers at CMU who had taken over after I left. We inherited all of the source code. CMU and the University of Pittsburgh became our customers.
Sramana Mitra: What was specifically in that product?
Eric Burns: The core of it was a data model that we had designed and patented at CMU. It could handle any combination of media streams, along with some other technology we had worked on to make video search work a little bit better, in a way that differed from how web search worked.
Sramana Mitra: What were the different feed streams that you were trying to incorporate and why?
Eric Burns: Classroom instruction is very unlike consumer or entertainment video in that it tends to have two points of focus that can alternate very quickly during the presentation. For example, if you watch TED talks online, a lot of times you’ll see a camera cut between the speaker walking around and gesticulating, and their content on the screen. We recognized that we needed to capture both of these, but that there were three main types of content in use in teaching at CMU – all of which had to be supported for this to work well. These were PowerPoint and Keynote slides, screen captures of software demos, and finally chalkboards and whiteboards, which a lot of instructors wanted to use. We came up with a solution for each of those so that you could combine the stream of the instructor and the stream of their content, or, if you wanted something fancy, you could have one stream for the instructor, one for the slides, one for the screen, and one for the whiteboard.
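The "any combination of streams" idea described above can be illustrated with a toy sketch. This is an invented example, not the patented CMU data model; the class names, stream types, and URIs are all hypothetical. The point is only that a recording is a flat set of typed streams, so camera-plus-slides and camera-plus-slides-plus-screen-plus-whiteboard are equally first-class, with no fixed layout assumed:

```python
# Toy illustration (invented names, NOT Panopto's actual design) of a
# recording as an arbitrary combination of typed media streams.
from dataclasses import dataclass
from typing import List

STREAM_TYPES = {"camera", "slides", "screen", "whiteboard"}

@dataclass
class MediaStream:
    stream_type: str   # one of STREAM_TYPES
    uri: str           # where the captured media lives (hypothetical URIs)

@dataclass
class Recording:
    title: str
    streams: List[MediaStream]

    def validate(self) -> None:
        # Any combination is allowed; only unknown stream types are rejected.
        for s in self.streams:
            if s.stream_type not in STREAM_TYPES:
                raise ValueError(f"unknown stream type: {s.stream_type}")

# A simple recording: instructor camera plus their slides.
simple = Recording("Intro lecture",
                   [MediaStream("camera", "rtsp://cam1"),
                    MediaStream("slides", "file://deck.pptx")])

# A "fancy" recording: all four stream types at once.
fancy = Recording("Compilers demo",
                  [MediaStream("camera", "rtsp://cam1"),
                   MediaStream("slides", "file://deck.key"),
                   MediaStream("screen", "capture://desktop"),
                   MediaStream("whiteboard", "rtsp://cam2")])

for r in (simple, fancy):
    r.validate()  # both combinations pass the same check
```

Because the player consumes whatever set of streams is present, adding or dropping a stream never forces a different recording pipeline.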
To this day, I believe, Panopto is still the only solution that can support that combination. When you’re talking about instructors’ pedagogy, the last thing you can say is, “You’re going to have to do it this way if you want to use this software.” You have to adapt to whatever they do, and that’s one of the key pieces of IP at the company.
Sramana Mitra: Can you talk a bit more from a software architecture and computer science point of view? How do you handle these three streams of feeds?
Eric Burns: One of the big innovations that led to Panopto’s viral adoption at CMU was that we realized that there were two main stakeholders in the recording process. There was the videographer, who is usually someone hustling from class to class. He’s focused on setting up the camera and getting a good recording. The presenter has a totally different focus – on just giving a great presentation. The way that a lot of systems would solve this problem was to basically cable a camera directly into a podium computer and force the instructor to use that computer. It led to this weird friction between the videographer and the instructor. It was very cumbersome.
Our insight was, “The videographer can have a cheap computer. The instructor already has their own computer. Why don’t we capture at each of those endpoints and then have a third participant – a server – that does clock synchronization across the two capture points?” The technology that we wound up patenting for this was the combination of two things: a data model that understood that these streams of data would start and stop at different times – the instructor and the videographer didn’t have to coordinate, and the screen capture might even start and stop multiple times during a recording – and the means to synchronize those streams using this server, so that nobody had to do stream alignment or synchronize slides with video in post-production. The second we solved that problem – no post-production required – that’s when it really started to take off.
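The capture-at-each-endpoint idea can be sketched in a few lines. This is a minimal illustration under assumed names (Segment, Stream, estimate_offset are all hypothetical, not Panopto’s code): each endpoint records segments against its own local clock, an NTP-style exchange gives the server a per-endpoint clock offset, and applying that offset maps every segment onto one shared timeline with no post-production alignment:

```python
# Minimal sketch (hypothetical names) of server-side clock synchronization
# across independent capture endpoints, as described in the interview.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Segment:
    """One start/stop run of a stream, in the endpoint's LOCAL clock (seconds)."""
    local_start: float
    local_end: float

@dataclass
class Stream:
    kind: str                      # e.g. "camera", "screen"
    clock_offset: float            # endpoint local clock -> server clock
    segments: List[Segment] = field(default_factory=list)

    def on_server_timeline(self) -> List[Tuple[float, float]]:
        """Translate each segment into server time using the measured offset."""
        return [(s.local_start + self.clock_offset,
                 s.local_end + self.clock_offset) for s in self.segments]

def estimate_offset(server_recv: float, endpoint_sent: float, rtt: float) -> float:
    """NTP-style estimate: assume one-way delay is half the round-trip time,
    so endpoint_clock + offset ~= server_clock."""
    return server_recv - (endpoint_sent + rtt / 2)

# Videographer's laptop runs 2 s behind the server; instructor's runs 5 s ahead.
camera = Stream("camera", clock_offset=2.0,
                segments=[Segment(0.0, 3600.0)])      # one continuous take
screen = Stream("screen", clock_offset=-5.0,
                segments=[Segment(105.0, 900.0),      # demo starts, stops,
                          Segment(1400.0, 3550.0)])   # then resumes later

# Both streams now share one timeline, even though the two machines never
# coordinated their start and stop times.
timeline = {s.kind: s.on_server_timeline() for s in (camera, screen)}
```

Note that the streams stay independent: the screen capture starting and stopping twice changes nothing for the camera stream, which is exactly the non-coordination property described above.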