Latent AI, a startup based in San Francisco, claims it can compress common artificial intelligence models by a factor of ten without any loss in accuracy. The company offers a service to train and optimize AI models using its proprietary technology, which it expects will help engineers speed up their work by making complex computations easier to run.
Latent AI hopes the product will allow many more people to use powerful machine-learning systems, which today are largely the preserve of large companies with big budgets because of their high cost.
About Latent AI
Created by three Stanford computer scientists, Latent AI applies deep learning to speech and text analysis, turning audio or written content into data sets for machine-learning systems. The company says the process can take just seconds with its technology.
The startup’s latest funding round brings its total raised to $22 million, from investors who were impressed by its pitch during TechCrunch’s Battlefield competition last year.
The company says it is designing software to train, adapt, and deploy edge AI neural networks. The software can target any type of machine or device, even those running on less powerful chips, such as the processors found in smartphones at the intelligent edge. Latent AI also claims its compression mechanisms save power by using only the resources needed for the current location and by adapting as circumstances change.
Latent AI Efficient Inference Platform (LEIP)
With LEIP, the inference process for AI models is efficient and modular. Users can train their neural networks on a server or in the cloud, then quantize them to reduce their size and deploy them across different levels of device-based computing power, all without rebuilding from scratch each time.
The Latent AI Efficient Inference Platform also offers a straightforward solution for models that are too large for their target hardware: with LEIP Compress, users can shrink a model’s weights by up to 90%.
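To make the idea concrete, here is a minimal sketch of post-training weight quantization, the general technique behind compression tools of this kind. This is not Latent AI's actual code or API; the functions below are hypothetical examples using plain NumPy, and they illustrate the simplest 8-bit case (a 75% size reduction; lower bit widths and pruning are what push savings toward the quoted 90%).

```python
import numpy as np

def quantize_weights(weights: np.ndarray):
    """Map float32 weights to int8 using a single per-tensor scale."""
    qmax = 127                                   # largest signed 8-bit value
    scale = float(np.abs(weights).max()) / qmax  # one scale for the whole tensor
    quantized = np.round(weights / scale).astype(np.int8)
    return quantized, scale

def dequantize(quantized: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for inference."""
    return quantized.astype(np.float32) * scale

# A toy "layer" of one million float32 weights.
weights = np.random.randn(1000, 1000).astype(np.float32)
q, scale = quantize_weights(weights)

print(f"original:  {weights.nbytes / 1e6:.1f} MB")  # 4.0 MB
print(f"quantized: {q.nbytes / 1e6:.1f} MB")        # 1.0 MB, 75% smaller
print(f"max error: {np.abs(weights - dequantize(q, scale)).max():.4f}")
```

Real toolchains refine this basic recipe with per-channel scales, calibration data, and quantization-aware training to keep the accuracy loss small.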
The platform also gives developers more options when deciding how much compute capacity to devote to application logic versus data preprocessing, and it includes embedded quantization tools for optimizing neural-network architectures.
LEIP is designed to let AI, embedded, and software-application developers easily enable, deploy, and manage their neural networks. The modular workflow allows users to train a network on a desktop PC, then compress it to run anywhere else, even on an IoT device.
Using LEIP Compress saves time by reducing model size while largely preserving accuracy, with only a minor trade-off in performance or speed.