Projects with this topic
This project provides a Dockerized setup for running Ollama as a local LLM service with GPU support. It includes a pre-configured Docker Compose setup, automated model management, and support for multiple models such as llama3.1 (chat) and snowflake-arctic-embed2 (embeddings). The container exposes an API for interacting with models via the CLI or HTTP requests. Configuration is handled via .env, allowing easy customization. Designed for quick deployment, this setup serves as a flexible starting point for integrating local AI models.
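A minimal sketch of calling the containerized service over HTTP from Python, assuming Ollama's default port 11434; the actual port and model tags are set in your .env:

```python
# Sketch: querying the Ollama HTTP API exposed by the container.
# Assumes the default port 11434; adjust to your .env configuration.
import requests

OLLAMA_URL = "http://localhost:11434"  # assumed default

# Text generation with the llama3.1 chat model.
resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3.1", "prompt": "Say hello in one sentence.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])

# Embeddings with snowflake-arctic-embed2.
emb = requests.post(
    f"{OLLAMA_URL}/api/embeddings",
    json={"model": "snowflake-arctic-embed2", "prompt": "vector me"},
    timeout=120,
)
emb.raise_for_status()
print(len(emb.json()["embedding"]))  # dimensionality of the embedding
```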
This repository accompanies a three-day hands-on workshop on Large Language Models (LLMs), Prompt Engineering, and Retrieval-Augmented Generation (RAG). The workshop is designed to equip participants with practical skills and foundational knowledge to understand, deploy, and evaluate LLM-based applications across a wide range of fields. The program combines a lecture series covering theoretical foundations with a programming workshop in Jupyter Notebooks. Participants explore core technologies such as OpenAI APIs, LangChain, vector databases, document parsing, and AI toolchains, supported by real-world examples and scientific documents. The workshop is suitable for professionals, researchers, and students who seek a structured and practice-oriented introduction to state-of-the-art AI workflows based on LLMs.
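An illustrative sketch of the RAG pattern the workshop builds up to, using the OpenAI Python client directly (the notebooks themselves use LangChain and a vector database; the model names and toy documents here are assumptions):

```python
# Sketch: retrieve the most relevant document for a question, then
# generate an answer grounded in it. Model names are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

documents = [
    "LangChain chains LLM calls together with tools and retrievers.",
    "Vector databases index embeddings for similarity search.",
]

def embed(text: str) -> list[float]:
    return client.embeddings.create(
        model="text-embedding-3-small", input=text
    ).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(x * x for x in b) ** 0.5)
    return dot / norm

question = "What does a vector database do?"
q_vec = embed(question)
# Retrieve: pick the document most similar to the question.
best = max(documents, key=lambda d: cosine(q_vec, embed(d)))

# Generate: answer using the retrieved context.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer using this context: {best}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```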
The Prototype Document Extractor is a lightweight, containerized service designed to extract structured content from PDF files using the Unstructured IO library. It exposes a minimal HTTP API that allows users to submit PDFs and receive parsed content in JSON format. This project includes:
- A backend service that handles PDF parsing using Unstructured IO.
- A Python client library for programmatically interacting with the API from within your code.
- Docker configurations to run the service in a portable, reproducible environment.
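A minimal sketch of submitting a PDF to the service over HTTP; the host, port, endpoint path, and form field name here are hypothetical, so consult the bundled Python client library for the actual interface:

```python
# Sketch: POST a PDF to the extractor service and print the parsed JSON.
# SERVICE_URL and the /extract endpoint are assumptions for illustration.
import requests

SERVICE_URL = "http://localhost:8000"  # assumed host/port of the container

with open("paper.pdf", "rb") as f:
    resp = requests.post(
        f"{SERVICE_URL}/extract",  # hypothetical endpoint path
        files={"file": ("paper.pdf", f, "application/pdf")},
        timeout=300,
    )
resp.raise_for_status()

# The service returns parsed content as JSON, e.g. a list of
# Unstructured-style elements (titles, narrative text, tables, ...).
for element in resp.json():
    print(element)
```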
This project provides a Dockerized setup for running Weaviate as a local vector database. It includes a pre-configured Docker Compose setup, support for manual vector handling, and optional integration with vectorization modules using OpenAI or a local embedding model with Ollama. The setup features an example Jupyter notebook demonstrating Weaviate’s CRUD workflow and client library usage. Configuration is managed via .env, ensuring flexibility for different vectorization approaches. Designed for quick deployment, this setup serves as a practical starting point for building AI-powered applications with vector search capabilities.
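A minimal sketch of the CRUD workflow described above, assuming the Weaviate v4 Python client and a locally running Docker Compose instance with manual vector handling; the collection name and toy vectors are illustrative, and the bundled Jupyter notebook is the authoritative example:

```python
# Sketch: create a collection, insert an object with its vector,
# read it back, run a nearest-neighbour query, and delete it.
import weaviate
from weaviate.classes.config import Configure

client = weaviate.connect_to_local()  # Docker Compose default ports

try:
    # Create: a collection configured for manually supplied vectors.
    client.collections.create(
        name="Document",
        vectorizer_config=Configure.Vectorizer.none(),
    )
    docs = client.collections.get("Document")

    # Insert an object together with its (toy) embedding vector.
    uuid = docs.data.insert(
        properties={"text": "Weaviate stores objects and vectors."},
        vector=[0.1, 0.2, 0.3, 0.4],
    )

    # Read: fetch the object back by UUID.
    obj = docs.query.fetch_object_by_id(uuid)
    print(obj.properties["text"])

    # Query: nearest-neighbour search over the stored vectors.
    result = docs.query.near_vector(near_vector=[0.1, 0.2, 0.3, 0.4], limit=1)
    print(result.objects[0].properties)

    # Delete: remove the object again.
    docs.data.delete_by_id(uuid)
finally:
    client.close()
```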