This guide walks you through setting up the Wave Rover, including hardware connections and software configuration. It covers common issues, such as a missing driver for the CP2102N USB-to-UART Bridge Controller and compilation errors caused by missing libraries like ArduinoJson or Adafruit_GFX, with step-by-step solutions for each. Whether you’re troubleshooting hardware connections or resolving build failures, this guide ensures a smooth setup process.
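The missing-library errors mentioned above can usually be resolved from the command line. A minimal sketch, assuming `arduino-cli` is installed (the same libraries can also be added through the Arduino IDE's Library Manager); the CP2102N driver itself comes from Silicon Labs and is installed separately:

```shell
# Install the libraries the Wave Rover sketch depends on.
# "Adafruit GFX Library" is the library's name in the Arduino index.
arduino-cli lib install ArduinoJson
arduino-cli lib install "Adafruit GFX Library"

# Confirm both libraries are now visible to the toolchain.
arduino-cli lib list
```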
This post guides you through setting up a secure, immutable Kubernetes cluster using Talos Linux. It covers installing Talos on control and worker nodes, configuring local storage with hostPath and Local Path Provisioner, and setting up the Kubernetes Dashboard with an admin user for cluster management. With Talos Linux, you achieve a minimal, API-managed Kubernetes environment without SSH or systemd, making it ideal for a secure and reliable homelab or production setup.
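The control-plane and worker bootstrap described above can be sketched with `talosctl`. This is a hedged outline, not the post's exact commands; the cluster name `homelab` and the node IPs are placeholders:

```shell
# Generate machine configs for a cluster whose API endpoint is the control plane node.
talosctl gen config homelab https://10.0.0.10:6443

# Apply the generated configs to a control plane node and a worker node.
talosctl apply-config --insecure --nodes 10.0.0.10 --file controlplane.yaml
talosctl apply-config --insecure --nodes 10.0.0.11 --file worker.yaml

# Bootstrap etcd on the control plane, then fetch a kubeconfig for kubectl.
talosctl bootstrap --nodes 10.0.0.10 --endpoints 10.0.0.10 --talosconfig ./talosconfig
talosctl kubeconfig --nodes 10.0.0.10 --endpoints 10.0.0.10 --talosconfig ./talosconfig
```

Because Talos has no SSH or shell, `talosctl` and the Kubernetes API are the only management surfaces, which is exactly the immutability property the post is after.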
This post explores the audio generation capabilities of the NVIDIA Jetson Orin NX. It covers transcribing audio using Whisper, setting up text-to-speech (TTS) and automatic speech recognition (ASR) with Llamaspeak, and preparing the RIVA server for advanced speech AI applications. Detailed instructions and command examples are provided, making it easy for developers to experiment with these tools on the Jetson platform.
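As a taste of the transcription step, here is a minimal sketch using OpenAI's Whisper CLI; it assumes Whisper is already installed (`pip install openai-whisper`) and that `sample.wav` is a local recording:

```shell
# Transcribe an English audio clip with the small Whisper model.
# Output (text, .srt, .vtt, etc.) is written to the current directory.
whisper sample.wav --model small --language en --task transcribe
```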
This post explores the powerful capabilities of NVIDIA Jetson Orin NX for text generation tasks. It covers the setup and use of models like Llama 2 and Llama 3, including installation steps, performance benchmarks, and examples of interactive AI sessions. Additionally, it provides insights into using the Jetson platform for deploying AI models efficiently, with practical tips on getting started and maximizing performance. Whether you’re a developer or AI enthusiast, this guide offers a hands-on look at harnessing NVIDIA Jetson’s potential for advanced AI applications.
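One quick way to try a Llama model locally is sketched below. This is an assumption for illustration, not necessarily the runtime the post uses; it assumes Ollama is installed on the Jetson:

```shell
# Pull the Llama 3 weights and run a one-shot interactive prompt.
ollama pull llama3
ollama run llama3 "Explain what the Jetson Orin NX is in one sentence."
```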
This post demonstrates how to use KEDA (Kubernetes Event-driven Autoscaling) to dynamically scale Kafka consumer workloads. Building on a previous setup with Kafka on MicroK8s, the guide walks through installing KEDA, configuring Kafka consumers, setting up secrets for authentication, and creating a ScaledObject that scales based on message load. It also includes practical examples of scaling under different loads, showing how KEDA automates horizontal scaling without any changes to the microservice code, making workloads far easier to manage in a Kubernetes environment.
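The install-plus-ScaledObject flow described above can be sketched as follows. The deployment name `my-consumer`, broker address `kafka:9092`, topic, consumer group, and lag threshold are all placeholders; a real setup would also reference the authentication secret via a TriggerAuthentication:

```shell
# Install KEDA into its own namespace with Helm.
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace

# A ScaledObject that scales the consumer Deployment on Kafka consumer lag.
kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler
spec:
  scaleTargetRef:
    name: my-consumer        # the Deployment running the Kafka consumer
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka:9092
        consumerGroup: my-group
        topic: my-topic
        lagThreshold: "50"   # scale out when lag per partition exceeds this
EOF
```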
In this guide, I document the process of setting up and flashing the NVIDIA Jetson Orin NX, a powerful embedded AI computer ideal for advanced robotics and generative AI applications. The post covers preparation steps, including selecting the right SSD and downloading the necessary Ubuntu image. I provide a detailed walkthrough of the installation using NVIDIA’s SDK Manager, along with troubleshooting tips based on my experience. Whether you’re new to the Jetson Orin NX or looking to optimize your setup, this guide offers practical insights and step-by-step instructions to get you started.
My homelab is a playground for experimenting with various tools and setups. For Proof of Concept (POC) environments, however, a lightweight and portable setup is often more suitable. In this post, I guide you through setting up a MicroK8s environment in a virtual machine using Multipass, and then demonstrate the setup by deploying Kafka into it.
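The VM-plus-MicroK8s bootstrap can be sketched in a few commands. The VM name and resource sizes below are illustrative, not prescriptive:

```shell
# Launch an Ubuntu VM and install MicroK8s inside it.
multipass launch --name microk8s-vm --memory 4G --disk 40G
multipass exec microk8s-vm -- sudo snap install microk8s --classic

# Wait for the cluster to come up, then confirm the node is Ready.
multipass exec microk8s-vm -- sudo microk8s status --wait-ready
multipass exec microk8s-vm -- sudo microk8s kubectl get nodes
```

When the experiment is over, `multipass delete microk8s-vm && multipass purge` removes the whole environment, which is what makes this setup attractive for throwaway POCs.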
In this blog post, I guide you through creating and running your first Kubeflow pipeline. We’ll start with the “Hello World” example, demonstrate how to manage sequential and shared pipelines, and explore artifact storage with MinIO. Additionally, I’ll introduce K9s, a powerful terminal-based UI for managing your Kubernetes clusters efficiently. By the end, you’ll have a solid understanding of setting up and managing Kubeflow pipelines in your machine learning workflows.
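The “Hello World” starting point can be sketched with the kfp SDK. This assumes the v2 SDK; the compiled YAML is what you upload through the Kubeflow Pipelines UI (or submit via the client):

```shell
pip install kfp

# Define and compile a one-component pipeline to a YAML package.
python - <<'EOF'
from kfp import dsl, compiler

@dsl.component
def say_hello(name: str) -> str:
    return f"Hello, {name}!"

@dsl.pipeline(name="hello-world")
def hello_pipeline(name: str = "world"):
    say_hello(name=name)

compiler.Compiler().compile(hello_pipeline, "hello_pipeline.yaml")
EOF
```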
As we transition from Lucidchart to draw.io for team diagramming, this guide outlines the steps to integrate draw.io and PlantUML with GitLab. I’ll configure the Diagrams.net server, enable integration, and demonstrate creating and editing diagrams within GitLab. Additionally, I’ll cover the setup and integration of PlantUML for creating detailed design diagrams. Follow along to seamlessly incorporate these powerful diagramming tools into your GitLab workflow.
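Both integrations need a rendering server that GitLab can reach; a minimal local sketch using the official container images is below. The host ports are arbitrary, and the resulting URLs are what you enter in GitLab's Admin Area settings for PlantUML and Diagrams.net respectively:

```shell
# PlantUML rendering server (jetty variant of the official image).
docker run -d --name plantuml -p 8005:8080 plantuml/plantuml-server:jetty

# draw.io / diagrams.net server.
docker run -d --name drawio -p 8006:8080 jgraph/drawio
```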
In this post, we explore KServe, a model inference platform on Kubernetes designed for scalability. Building on our previous Kubeflow guide, we detail how to set up your first KServe endpoint, make predictions, and troubleshoot common issues. Follow our step-by-step instructions to seamlessly integrate KServe with your Kubeflow environment and enhance your machine learning deployment process.
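A first KServe endpoint can be sketched with the sklearn iris sample from the KServe documentation; the service name and namespace defaults are illustrative:

```shell
# Deploy an InferenceService backed by a pre-trained sklearn model.
kubectl apply -f - <<'EOF'
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model
EOF

# Watch until the service reports READY, then note its URL for predictions.
kubectl get inferenceservice sklearn-iris
```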