At Stream HPC we optimize the performance of software so that data is processed in less time. For deep learning this matters just as much once the models have been built. Optimizing a model algorithmically or finding a new approach is fully in the domain of AI, but computationally optimizing the model's throughput requires a specialism that can be found at Stream HPC.
We have built in-house tools and processes to find and solve compute bottlenecks in any type of software. One of these, benchmark.io, is sold commercially. We also wrote foundational libraries for AMD GPUs that are used by software like TensorFlow and PyTorch, which means we know how well optimized each part of these libraries is, including Nvidia's versions of the same libraries.
Why performance is important
Your business goals are implemented by your AI models and software. If the models can be trained faster, if inference can be done with less energy, if training costs go down, or if more models can run at the same time, all of these influence how well your business goals are attained.
The reason we are offering this service is that progress in AI can be opaque, to the point that "throwing more engineers at the problem" becomes the solution many AI projects end up with. We think that control can be regained through careful benchmarking and a focus on removing bottlenecks.
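To give an idea of the kind of measurement this starts from, below is a minimal sketch of benchmarking inference throughput. It assumes PyTorch; the model, batch size, and iteration counts are placeholders for illustration, not a reflection of any specific workload.

```python
import time
import torch

# Placeholder model and input; in practice these would be your own network and data.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).eval()
batch = torch.randn(64, 1024)

with torch.no_grad():
    # Warm-up runs so one-time costs (allocation, caching) don't skew the numbers.
    for _ in range(10):
        model(batch)

    # Timed runs. On a GPU you would also call torch.cuda.synchronize()
    # before and after timing, so queued kernels are actually finished.
    iterations = 100
    start = time.perf_counter()
    for _ in range(iterations):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"{iterations * batch.shape[0] / elapsed:.1f} samples/s")
```

Measurements like this, repeated at finer levels of detail (per model, per layer, per kernel), are what show where the time actually goes and which bottleneck is worth removing first.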
Would this work for you?
No two AI projects are the same. We'd like to understand where your bottlenecks are. If compute optimization is not the right direction for you, we'll advise you on where to go next.
Contact us to initiate the conversation