Sramana Mitra: Our audience is highly sophisticated and highly technical. Can you double-click down a bit and help us understand the technology?
Jack Porter: If you do an artificial intelligence or data science project, you normally have two different technologies to choose from: machine learning, which is some statistical form of math, or neural nets. Our specific technology uses stacked neural nets. We do what's called deep learning. What's unique about what we're doing is that in most situations, a very intelligent data scientist is crafting the actual architecture of the neural net. Our product is Big Brain.
Big Brain is basically a data science AI. Our data science AI crafts the entire architecture. The reason is that, for the projects we're working on, the data is so huge that there's no way a human can consume it. For example, one of our telecommunications companies gets 145 terabytes of data every single day. That's a billion rows of data. There's no way for a human to look at that and say, "I can detect a trend here." We bring the system in. We're in the training phase with the telecommunications company right now, so we haven't rolled it out to the big platform yet.
The system is 64 computers, each with eight GPUs. That's what the brain is. Then the memory for the brain is a hundred computers, each with four terabytes of storage. The 145 terabytes hit the hundred computers first and go into memory. Then the 64 GPU computers go after it and start learning on that information in memory. When you're building a network, you're going to build a deep learning network of somewhere between 75 and 100 layers of neurons. At each layer, you have to determine which of the neural network architectures you're going to use. There are 20 different architectures.
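This layer-by-layer choice of architecture and hyper-parameters is, in effect, a neural architecture search. A minimal sketch in plain Python, assuming a toy scoring function in place of actual training (the layer types, widths, and random-search loop here are illustrative placeholders, not Big Brain's actual method):

```python
import random

# Hypothetical architecture-search sketch: at each layer of a deep stack,
# pick one of several layer types and a width, then score the candidate.
# A real data science AI would train and validate each candidate on data;
# score() here is only a stand-in heuristic.

LAYER_TYPES = ["fully_connected", "convolution", "lstm"]  # subset of the ~20 mentioned

def random_candidate(depth, rng):
    """Draw one candidate architecture: a (layer type, width) per layer."""
    return [(rng.choice(LAYER_TYPES), rng.choice([64, 128, 256]))
            for _ in range(depth)]

def score(candidate):
    """Stand-in for 'train and validate this architecture'."""
    # Toy heuristic: reward convolution layers in the stack.
    return sum(1 for kind, _ in candidate if kind == "convolution")

def search(depth=10, trials=50, seed=0):
    """Random search: keep the best-scoring candidate seen."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        cand = random_candidate(depth, rng)
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

best, best_score = search()
```

Random search is the simplest baseline for this kind of problem; production systems would use something more sample-efficient, since each real evaluation means training a large network.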
There's the fully connected network, where every neuron on one layer is connected to the layer below it. We typically use that for cleansing and pre-training the data before it goes into the network. Then we have somewhere between 50 and 70 convolution layers. Convolution layers look for specific features, and look at those features at different frequencies. The next layer would be where we use things like LSTMs. Now we can see the features and watch them through time.
Finally, we'll do a fully connected network at the top, which is our supervised layer, where we're looking for specific things. In this case, we're actually looking for fraud and churn. As it moves up these layers, somebody has to decide which of the 20 architectures you're going to use and what the hyper-parameters are at each of those layers. That's how you construct it. This is going to be a big trend. These problems are so big that humans can't possibly do them. Other companies will also start introducing data science AIs. Most of our customers see it coming in the very near future. In the next three or four years, many of their employees won't be humans. They'll be AIs. That's not science fiction anymore.
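The stack described above (a fully connected cleansing layer, a run of convolution layers, a recurrent layer, and a fully connected supervised head) can be sketched as a single untrained forward pass in numpy. A simple recurrent cell stands in for the LSTM, and every size here is an illustrative placeholder, not the production configuration:

```python
import numpy as np

# Minimal sketch of the described stack: fully connected "cleansing"
# layer -> 1-D convolution layers for feature detection -> a simple
# recurrent layer (stand-in for the LSTM) -> fully connected supervised
# head. Weights are random; a real system would train them.

rng = np.random.default_rng(0)

def fully_connected(x, out_dim):
    W = rng.standard_normal((x.shape[-1], out_dim)) * 0.1
    return np.tanh(x @ W)

def conv1d(x, kernel_size=3, out_channels=8):
    # x: (time, channels) -> (time - kernel_size + 1, out_channels)
    t, _ = x.shape
    K = rng.standard_normal((kernel_size, x.shape[1], out_channels)) * 0.1
    out = np.stack([np.tensordot(x[i:i + kernel_size], K, axes=([0, 1], [0, 1]))
                    for i in range(t - kernel_size + 1)])
    return np.maximum(out, 0.0)  # ReLU

def simple_recurrent(x, hidden=16):
    # Stand-in for the LSTM layer: carries a hidden state through time.
    Wx = rng.standard_normal((x.shape[-1], hidden)) * 0.1
    Wh = rng.standard_normal((hidden, hidden)) * 0.1
    h = np.zeros(hidden)
    for step in x:
        h = np.tanh(step @ Wx + h @ Wh)
    return h  # final state summarizes the sequence

# One "record": 100 time steps of 12 raw features.
x = rng.standard_normal((100, 12))
x = fully_connected(x, 32)        # cleansing / pre-training layer
for _ in range(3):                # stand-in for the 50-70 conv layers
    x = conv1d(x)
h = simple_recurrent(x)           # watch the features through time
logits = fully_connected(h[None, :], 2)[0]  # supervised head: e.g. fraud / churn
```

The point of the sketch is the shape of the pipeline: each layer consumes the representation below it, and only the top layer is tied to the supervised targets.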