2023
[Artificial Intelligence] Receipt OCR with LangChain, OpenAI and PyTesseract
Embarking on a receipt OCR adventure inspired by the LangChain for LLM Application Development course, I explore the synergy of LangChain, OpenAI, and PyTesseract. Using PyTesseract with OpenCV preprocessing, I extract text from receipt images and walk through the complete extraction code. Integrating OpenAI, I craft a prompt to merge and format the OCR results. LangChain's LLM-Math tool joins the fray, verifying OCR accuracy by recalculating and comparing the amounts. Witness the power of combining these technologies for precise receipt data extraction and validation. Dive into the journey, explore the code, and enhance your data processing skills!
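The preprocessing-plus-OCR step described above can be sketched roughly as follows; the function name, file path, and threshold value are illustrative, not taken from the post:

```python
def extract_receipt_text(path: str) -> str:
    """Grayscale and binarize a receipt photo with OpenCV, then OCR it with PyTesseract."""
    # Imports kept inside the function so the sketch loads even without the libraries.
    import cv2
    import pytesseract

    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Plain global thresholding is often enough for printed receipts
    _, binary = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    return pytesseract.image_to_string(binary)

# e.g. text = extract_receipt_text("receipt.jpg")
```

The raw text returned here is what the OpenAI prompt would then merge and format before LLM-Math re-checks the totals.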
2023
[Artificial Intelligence] Autofill PDF with LangChain and LangFlow
In this journey, I explore automating PDF autofill using LangChain and LangFlow. Leveraging LangFlow and OpenAI, I streamline the employment form completion process, demonstrating the steps to install LangFlow and set up a PostgreSQL table. Despite encountering challenges in prototyping with LangFlow, the exploration progresses to auto-filling PDFs. After extracting the form fields and setting up the LLaMA model, I employ LangChain to fetch the PostgreSQL data. Finally, I use Python to interpolate the fetched values and update the PDF, achieving a seamless auto-fill. Dive into the details, overcome challenges, and witness the power of LangChain and LangFlow in revolutionizing PDF automation.
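The interpolation step from this post can be sketched as a plain mapping from a fetched PostgreSQL row onto PDF form-field names; the field and column names below are hypothetical, and the resulting dict would then be handed to a PDF library (such as pypdf's form-field update) to write the file:

```python
def build_field_values(row: dict) -> dict:
    """Map a database row onto hypothetical PDF form-field names for auto-fill."""
    return {
        "employee_name": f"{row['first_name']} {row['last_name']}",
        "job_title": row["title"],
        "start_date": row["start_date"],
    }

# e.g. build_field_values({"first_name": "Ada", "last_name": "Lovelace",
#                          "title": "Engineer", "start_date": "2023-01-01"})
```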
2023
[Artificial Intelligence] Running GPT4All for your PostgreSQL with LangChain
In this exploration, I guide you through setting up GPT4All on a Windows PC and demonstrate its synergy with SQL Chain for PostgreSQL queries using LangChain. Utilizing Jupyter Notebook and prerequisites like PostgreSQL and GPT4All-J v1.3-groovy, I install dependencies and showcase LangChain and GPT4All model setup. Navigating an Open Source Shakespeare database, I provide an ER diagram for clarity. Querying GPT4All through LangChain, we delve into PostgreSQL queries and also compare responses with OpenAI. The comprehensive walkthrough empowers you to seamlessly integrate GPT4All into your PostgreSQL workflows for efficient and dynamic interactions.
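The GPT4All-to-PostgreSQL wiring described above might look like the sketch below; it uses 2023-era LangChain module paths (which have since moved between releases), and the connection URI and model file name are placeholders:

```python
def make_sql_chain(model_path: str, pg_uri: str):
    """Wire a local GPT4All model to a PostgreSQL database via LangChain's SQL chain."""
    # Imports kept inside the function so the sketch loads even without the libraries.
    from langchain.llms import GPT4All
    from langchain.sql_database import SQLDatabase
    from langchain_experimental.sql import SQLDatabaseChain

    llm = GPT4All(model=model_path, backend="gptj")  # e.g. GPT4All-J v1.3-groovy
    db = SQLDatabase.from_uri(pg_uri)
    return SQLDatabaseChain.from_llm(llm, db, verbose=True)

# e.g. make_sql_chain("ggml-gpt4all-j-v1.3-groovy.bin",
#                     "postgresql://user:pass@localhost/shakespeare")
```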
2023
[Artificial Intelligence] Running LLaMA server in local machine
Continuing from my previous post, I prepared the environment using Pipenv and installed the OpenAI-compatible web server with specific CMAKE arguments. Running the server with a provided model was straightforward. To create an SSH tunnel to the remote Ubuntu machine from my Windows PC, I used PuTTY, configuring it to forward port 8888. Connecting from BYO-GPT involved adjusting the server endpoint in the Dart file. This seamless integration let me access the OpenAPI docs of the llama.cpp server and successfully connect BYO-GPT to it.
2023
[Artificial Intelligence] Building ChatBot for your PDF files with LangChain
In this post, I extend the use case from my previous post to demonstrate building a ChatBot for PDF files using LangChain. In the preparation phase, I install Chroma, an open-source embedding database, and ingest a PDF file using PyPDFLoader. I then split the document into chunks and use Chroma's default embeddings. Due to a potential issue, I provide an alternative embedding approach. Next, I load a local LLaMA model, prepare for question-answering, and run queries using RetrievalQAWithSourcesChain. I also touch on running with OpenBLAS for optimization. The guide empowers users to explore personalized question-answering over their PDF documents.
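The ingestion-and-query pipeline above can be sketched end to end like this; it follows 2023-era LangChain module paths, the chunk sizes are illustrative, and HuggingFaceEmbeddings stands in here for the alternative embedding approach mentioned in the post:

```python
def build_pdf_qa(pdf_path: str, model_path: str):
    """Ingest a PDF into Chroma and return a question-answering chain over it."""
    # Imports kept inside the function so the sketch loads even without the libraries.
    from langchain.document_loaders import PyPDFLoader
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.vectorstores import Chroma
    from langchain.llms import LlamaCpp
    from langchain.chains import RetrievalQAWithSourcesChain

    pages = PyPDFLoader(pdf_path).load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    store = Chroma.from_documents(splitter.split_documents(pages), HuggingFaceEmbeddings())
    llm = LlamaCpp(model_path=model_path)
    return RetrievalQAWithSourcesChain.from_chain_type(llm=llm, retriever=store.as_retriever())
```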
2023
[Artificial Intelligence] Building a basic Chain with LangChain
With the LangChain framework and a setup from a previous post, I delve into building a basic chain using Llama.cpp within LangChain. Following preparations, I install required packages and run interactive Python code to set up the LLM model. The process involves formatting a prompt template and creating a chain. I explore memory integration, adding a conversation buffer for context. The conversation with AI is initiated and continued through user inputs. Stay tuned for more explorations in upcoming posts!
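The prompt-template, chain, and conversation-buffer steps above can be sketched as below; the template wording is illustrative, and the module paths reflect 2023-era LangChain releases:

```python
def build_conversation(model_path: str):
    """A minimal Llama.cpp chain with a conversation buffer supplying context."""
    # Imports kept inside the function so the sketch loads even without the libraries.
    from langchain.llms import LlamaCpp
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain
    from langchain.memory import ConversationBufferMemory

    prompt = PromptTemplate(
        input_variables=["history", "input"],
        template="The following is a conversation with an AI.\n{history}\nHuman: {input}\nAI:",
    )
    memory = ConversationBufferMemory(memory_key="history")
    return LLMChain(llm=LlamaCpp(model_path=model_path), prompt=prompt, memory=memory)

# Each .run(input=...) call appends to the buffer, so later turns see earlier ones.
```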
2023
[Artificial Intelligence] Running LLaMA model locally
In this thorough guide, I prepared my Ubuntu machine (32 GB RAM) for the llama.cpp build. Following Georgi Gerganov's llama.cpp, I executed the CMake commands, checking out the correct tag and building the project successfully. I downloaded Microsoft's Phi-2 model in GGUF format, enabling local execution without exposing prompts or data. Running the Phi-2 model showcased its capabilities in a few-shot interaction, providing accurate responses. Additionally, I explored optional OpenBLAS integration for improved speed, offering insights into the installation and rebuild process.
2023
[Frontend] Developing BYO-GPT with Flutter
In around 10 minutes, I create BYO-GPT, a Flutter app that allows easy interaction with ChatGPT through OpenAI's API. After installing Flutter, setting up the project, and creating the necessary widgets and models, I utilize the OpenAI API for chat completion. The app includes user and GPT message bubbles, as well as a user input section with a GPT icon. By employing the Provider package, the app efficiently manages state changes. Additionally, I provide the option to switch models for experimentation. Overall, BYO-GPT offers a user-friendly interface for seamless communication with ChatGPT.