This post explores the audio capabilities of the NVIDIA Jetson Orin NX, covering both speech recognition and speech generation. It walks through transcribing audio with Whisper, setting up text-to-speech (TTS) and automatic speech recognition (ASR) with Llamaspeak, and preparing the Riva server for advanced speech AI applications. Detailed instructions and command examples make it easy for developers to experiment with these tools on the Jetson platform.
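As a quick taste of the transcription workflow, here is a minimal sketch using the open-source `whisper` Python package; the checkpoint size and audio file name are illustrative assumptions, not taken from the post itself.

```python
import whisper

# Load one of the smaller checkpoints; "base" is a reasonable fit for
# the Orin NX's memory budget (the model name here is illustrative).
model = whisper.load_model("base")

# Transcribe a local recording; Whisper resamples the audio internally.
result = model.transcribe("recording.wav")
print(result["text"])
```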
This post explores the capabilities of the NVIDIA Jetson Orin NX for text generation tasks. It covers the setup and use of models such as Llama 2 and Llama 3, including installation steps, performance benchmarks, and examples of interactive AI sessions. It also provides practical tips for deploying AI models efficiently on the Jetson platform, from getting started to maximizing performance. Whether you’re a developer or an AI enthusiast, this guide offers a hands-on look at harnessing the Jetson’s potential for advanced AI applications.
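To give a flavor of such an interactive session, here is a minimal sketch using the `llama-cpp-python` bindings, one common way to run Llama models on Jetson-class hardware (the GGUF file name and parameters are assumptions for illustration, not the post’s exact setup):

```python
from llama_cpp import Llama

# The model path is a placeholder; point it at any quantized GGUF build
# of Llama 2 or Llama 3 that fits in the Orin NX's memory.
llm = Llama(model_path="llama-3-8b-instruct.Q4_K_M.gguf", n_gpu_layers=-1)

# Ask a single question; max_tokens bounds the length of the reply.
output = llm("Q: What is the Jetson Orin NX? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"].strip())
```

Offloading all layers to the GPU (`n_gpu_layers=-1`) is what keeps generation interactive rather than painfully slow on this class of device.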
In this guide, I document the process of setting up and flashing the NVIDIA Jetson Orin NX, a powerful embedded AI computer ideal for advanced robotics and generative AI applications. The post covers preparation steps, including selecting the right SSD and downloading the necessary Ubuntu image. I provide a detailed walkthrough of the installation using NVIDIA’s SDK Manager, along with troubleshooting tips based on my experience. Whether you’re new to the Jetson Orin NX or looking to optimize your setup, this guide offers practical insights and step-by-step instructions to get you started.
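One small, hypothetical sanity check that complements the flashing walkthrough: confirming from the host that the Jetson is actually in forced recovery mode before launching SDK Manager (NVIDIA’s USB vendor ID is 0955; this snippet is illustrative, not from the original post):

```python
import subprocess

# A Jetson in forced recovery mode shows up on the host as a USB device
# with NVIDIA's vendor ID, 0955.
out = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
if "0955" in out:
    print("Jetson detected in recovery mode; SDK Manager should see it.")
else:
    print("No NVIDIA device found; re-check the recovery pins and USB cable.")
```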
In this blog post, I guide you through creating and running your first Kubeflow pipeline. We’ll start with the “Hello World” example, demonstrate how to manage sequential and shared pipelines, and explore artifact storage with MinIO. Additionally, I’ll introduce K9s, a powerful terminal-based UI for managing your Kubernetes clusters efficiently. By the end, you’ll have a solid understanding of setting up and managing Kubeflow pipelines in your machine learning workflows.
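For orientation, here is a minimal sketch of the kind of “Hello World” pipeline the post covers, written with the KFP v2 SDK (the component and pipeline names are illustrative):

```python
from kfp import dsl, compiler

@dsl.component
def say_hello(name: str) -> str:
    """A single lightweight step that greets its input."""
    return f"Hello, {name}!"

@dsl.pipeline(name="hello-world")
def hello_pipeline(name: str = "World"):
    say_hello(name=name)

# Compile to a YAML spec that can be uploaded or submitted to Kubeflow.
compiler.Compiler().compile(hello_pipeline, "hello_pipeline.yaml")
```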
In this post, we explore KServe, a model inference platform on Kubernetes designed for scalability. Building on our previous Kubeflow guide, we detail how to set up your first KServe endpoint, make predictions, and troubleshoot common issues. Follow our step-by-step instructions to seamlessly integrate KServe with your Kubeflow environment and enhance your machine learning deployment process.
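As a sketch of what a prediction against a KServe endpoint looks like, assuming an `InferenceService` named `sklearn-iris` exposed over KServe’s v1 REST protocol (the host and feature values below are placeholders):

```python
import requests

# Replace the host with your InferenceService's external URL; the model
# name and input features are placeholders for illustration.
url = "http://sklearn-iris.default.example.com/v1/models/sklearn-iris:predict"
payload = {"instances": [[6.8, 2.8, 4.8, 1.4]]}

resp = requests.post(url, json=payload)
resp.raise_for_status()
print(resp.json())  # e.g. {"predictions": [1]}
```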