Tenstorrent NLP Solution available on Eden AI
New provider

We are pleased to announce that the Tenstorrent NLP API has been integrated into Eden AI.

What is Tenstorrent?

Tenstorrent develops innovative and powerful AI products for both inference and training, with a focus on NLP and vision models. They offer AI chips designed to run these models efficiently on a single platform, along with software solutions that cater to ML developers' needs for execution, scaling, and hardware control.

Tenstorrent also provides high-performing RISC-V CPU technology, optimized graph processing accelerators, and customizable computing solutions, including licensing their CPU design IP and partnering for custom designs.

Why do we offer Tenstorrent in addition to other NLP APIs?

Eden AI offers Tenstorrent's NLP solutions on its platform alongside several other technologies. We want our users to have access to multiple AI engines and manage them in one place so they can reach high performance, optimize costs, and cover all their needs.

There are many reasons for using multiple AI APIs:

Fallback provider: the basics.

You set up a secondary AI API that is called only when the main AI API performs poorly (or is down). You can use the confidence score returned, or other checks, to assess provider accuracy.
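
Below is a minimal Python sketch of that fallback logic, under stated assumptions: primary and fallback are hypothetical callables wrapping whichever two providers you choose, and the 0.7 confidence threshold is only an example value.

from typing import Any, Callable, Dict

# Hypothetical wrapper type: in practice each callable would wrap a request to
# one NLP provider (directly or through Eden AI) and return its parsed response.
ProviderCall = Callable[[str], Dict[str, Any]]

def analyze_with_fallback(text: str,
                          primary: ProviderCall,
                          fallback: ProviderCall,
                          min_confidence: float = 0.7) -> Dict[str, Any]:
    """Call the primary provider; use the fallback only if the primary fails
    or returns a confidence score below the threshold (example value: 0.7)."""
    try:
        result = primary(text)
        if result.get("confidence", 0.0) >= min_confidence:
            return result
    except Exception:
        pass  # primary provider is down or errored out
    return fallback(text)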

Performance optimization.

After the testing phase, you can build a mapping of AI vendors' performance based on the criteria you chose. Each piece of data you need to process is then sent to the best API.
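
As a rough illustration in Python, that mapping can be as simple as a dictionary keyed by whatever criterion your benchmark used; the language criterion and provider names below are made up.

# Made-up benchmark results: the best provider per document language.
BEST_PROVIDER_BY_LANGUAGE = {
    "en": "provider_a",
    "fr": "provider_b",
    "de": "provider_c",
}

def pick_provider(language: str, default: str = "provider_a") -> str:
    """Route each document to the provider that scored best for its language."""
    return BEST_PROVIDER_BY_LANGUAGE.get(language, default)

print(pick_provider("fr"))  # -> "provider_b"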

Cost-performance ratio optimization.

This method allows you to choose the cheapest provider that performs well for your data. Imagine you choose the Google Cloud API for customer "A" because all providers perform well there and Google is the cheapest, but you choose Microsoft Azure, a more expensive API, for customer "B" because Google's performance is not satisfactory on that customer's data (this is an arbitrary example).
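
A small Python sketch of that selection rule, with invented prices and accuracy scores:

# Made-up prices and accuracy scores measured on one customer's data.
providers = [
    {"name": "provider_a", "price_per_1k_calls": 1.0, "accuracy": 0.91},
    {"name": "provider_b", "price_per_1k_calls": 2.5, "accuracy": 0.95},
    {"name": "provider_c", "price_per_1k_calls": 4.0, "accuracy": 0.97},
]

def cheapest_good_enough(candidates, min_accuracy=0.93):
    """Return the cheapest provider that clears the accuracy bar, or None."""
    good = [p for p in candidates if p["accuracy"] >= min_accuracy]
    return min(good, key=lambda p: p["price_per_1k_calls"]) if good else None

print(cheapest_good_enough(providers)["name"])  # -> "provider_b"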

Combine multiple AI APIs.

This approach is required if you are looking for extremely high accuracy. Combining providers leads to higher costs, but it keeps your AI service safe and accurate because the APIs validate or invalidate each other for each piece of data.
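
One simple way to combine providers is a majority vote over their outputs. The Python sketch below assumes three providers returning a sentiment label for the same document:

from collections import Counter

def majority_label(labels, min_agreement=2):
    """Return the most common label if enough providers agree, else None."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= min_agreement else None

# Hypothetical predictions from three providers for the same document:
print(majority_label(["positive", "positive", "neutral"]))   # -> "positive"
print(majority_label(["positive", "negative", "neutral"]))   # -> None (no consensus)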

Interview with Tenstorrent's Head of Developer Relations

We had the chance to talk to Shubham Saboo, Tenstorrent's Head of Developer Relations, who agreed to answer some of our questions:

When were you created, and what inspired you to start your company?

Tenstorrent, under the leadership of industry veteran Jim Keller, builds computers for AI. With a strong focus on innovation and power, Tenstorrent develops the most innovative and powerful AI products in the industry that do both inference and training. 
These AI chips are specifically designed to efficiently run both NLP and vision models on a single silicon platform with a single easy-to-use software stack.
Our software approach caters to ML developers by providing seamless execution and scaling, while also offering bare metal access and kernel-level control for developers who require maximum hardware control.
Tenstorrent has also developed the world's highest-performing RISC-V CPU technology, featuring a modular and composable design that seamlessly integrates with our AI chips.

What solution do you provide?

Tenstorrent has developed AI/ML accelerators specifically optimized for graph processing tasks that provide a unique alternative to conventional GPUs. Our approach to deep learning processing involves mapping different layers of a deep learning graph to different cores on the chip, ensuring optimal utilization and efficient data flow. 
Using that methodology, we deliver performant, scalable AI and ML products directly to end customers in the form of cards and systems. 
We also provide a cloud service for those who prefer to ‘rent’ vs ‘buy’ their compute capabilities. In addition, we can license our superscalar RISC-V CPU design Intellectual Property to chip producers for productization and integration into systems. 
Finally, Tenstorrent can partner for custom-designed, best-in-class computing solutions.  Based on our modular chiplet architecture, we can produce AI computers to specification efficiently.
To support these products we will offer two separate software solutions:  one designed to take high-level AI frameworks (like PyTorch, TensorFlow, and JAX) and compile them into executable code that runs efficiently on Tenstorrent's hardware.  The second is a low-level framework available for those wanting to program at the kernel level and directly access the matrix and vector arithmetic capabilities of our chips.

Who are your customers?

Our mission is to address the open-source compute demands through industry-leading AI/ML accelerators, high-performing RISC-V CPUs, and infinitely-configurable ML and CPU chiplets.
Our products will scale from small, low-power cards to very large servers allowing customers to deploy in their own specific environment. This variety of product offerings naturally drives a diverse customer base.  
Customer industries range from automotive all the way to High-Performance Compute companies; our hardware and software are scalable and customizable to solve specific customer needs.  

How will your product evolve?

Neural Networks and AI Models have been exploding over the past couple of years. We have seen a large step function occur in NLP, embedded, and IoT applications. This has been primarily driven by the ability of the hardware to handle increasing dataset sizes and the ability to train large-scale AI models.
As models and datasets have continued to scale, the underlying hardware has needed the ability to both act as an efficient single-chip solution, as well as a scalable architecture, without running into networking bottlenecks or making it power or cost prohibitive.
As the innovation continues to advance, the hardware must be efficient for current implementations and accommodate future AI workloads that we do not know about yet. Our simple goal is to create outstanding products that will evolve with the ever-changing needs of our customers. 
We also are committed to open-source products in order to support continued innovation in this fast-paced segment.


What motivated you to integrate Eden AI?

Eden AI is a first-of-its-kind platform that offers APIs for nearly all ML tasks under one roof. Its easy-to-use interface and streamlined platform for end users prompted us to integrate our ML offerings.
Because of our unique hardware and software stack, we are in a position to offer affordable inference solutions for a variety of ML tasks, and through Eden AI we can directly reach end users and improve our offerings based on bottom-up feedback. We anticipate that this collaboration will help us understand the needs of end users, which in turn will help shape our next generation of products and services.

How to use Tenstorrent on Eden AI?

To use Tenstorrent's NLP technologies on Eden AI, start from the Eden AI documentation, then call the API:
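
For illustration only, a call through Eden AI's unified REST API could look like the Python snippet below. The sentiment analysis endpoint, the payload fields, and the exact list of NLP features Tenstorrent supports are assumptions here; refer to the Eden AI documentation for the current details.

import requests

API_KEY = "YOUR_EDEN_AI_API_KEY"  # placeholder, taken from your Eden AI account
# Assumed text endpoint and payload shape -- verify both, and the features
# Tenstorrent actually supports, in the Eden AI documentation.
url = "https://api.edenai.run/v2/text/sentiment_analysis"

payload = {
    "providers": "tenstorrent",  # request the Tenstorrent engine
    "text": "Eden AI makes it easy to try a new NLP provider.",
    "language": "en",
}
headers = {"Authorization": f"Bearer {API_KEY}"}

response = requests.post(url, json=payload, headers=headers, timeout=30)
response.raise_for_status()
print(response.json())  # standardized JSON response, keyed by provider name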

Eden AI is a must-have

Eden AI is the future of AI usage in companies. Our platform not only allows you to call multiple AI APIs but also gives you:

  • Centralized and fully monitored billing for all AI APIs
  • A unified API for all providers: simple and standard to use, quick switch between providers, access to the specific features of each provider
  • Standardized response format: the JSON output format is the same for all providers thanks to Eden AI's standardization work. The response elements are also standardized thanks to Eden AI's powerful matching algorithms.
  • Best Artificial Intelligence APIs on the market: big cloud providers (Google, AWS, Microsoft) and more specialized engines
  • Data protection: Eden AI does not store or use your data, and you can filter to use only GDPR-compliant engines.

You can see Eden AI documentation here.

Try Eden AI for free.

You can directly start building now. If you have any questions, feel free to schedule a call with us!
