weaviate

  • This repository accompanies a three-day hands-on workshop on Large Language Models (LLMs), Prompt Engineering, and Retrieval-Augmented Generation (RAG). The workshop is designed to equip participants with practical skills and foundational knowledge to understand, deploy, and evaluate LLM-based applications across a wide range of fields. The program combines a lecture series covering theoretical foundations with a programming workshop in Jupyter Notebooks. Participants explore core technologies such as OpenAI APIs, LangChain, vector databases, document parsing, and AI toolchains—supported by real-world examples and scientific documents. The workshop is suitable for professionals, researchers, and students who seek a structured and practice-oriented introduction to state-of-the-art AI workflows based on LLMs.

    llm docker AI weaviate RAG
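The retrieve-then-prompt pattern at the heart of RAG can be sketched without any external services. The snippet below is a toy illustration only: the documents and their three-dimensional vectors are hand-made stand-ins for the real embeddings the workshop obtains from OpenAI or a local model, and the final LLM call is replaced by printing the assembled prompt.

```python
import math

# Toy corpus: each document mapped to a hand-made 3-d "embedding".
# In practice these vectors come from an embedding model, not by hand.
corpus = {
    "Weaviate is an open-source vector database.":    [0.9, 0.1, 0.0],
    "LangChain helps chain LLM calls together.":      [0.1, 0.9, 0.0],
    "Jupyter notebooks are used for the exercises.":  [0.0, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query embedding; keep top k.
    ranked = sorted(corpus, key=lambda d: cosine(corpus[d], query_vec),
                    reverse=True)
    return ranked[:k]

def build_prompt(question, query_vec):
    # Retrieval-augmented prompt: retrieved context plus the user question.
    context = "\n".join(retrieve(query_vec))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# A query embedding close to the 'vector database' document.
prompt = build_prompt("What is Weaviate?", [1.0, 0.0, 0.1])
print(prompt)
```

In a real pipeline, `retrieve` would be a Weaviate near-vector query and the prompt would be sent to an LLM via the OpenAI API or LangChain; the control flow stays the same.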
  • This project provides a Dockerized setup for running Weaviate as a local vector database. It includes a pre-configured Docker Compose setup, support for manual vector handling, and optional integration with vectorization modules using OpenAI or a local embedding model with Ollama. The setup features an example Jupyter notebook demonstrating Weaviate’s CRUD workflow and client library usage. Configuration is managed via .env, ensuring flexibility for different vectorization approaches. Designed for quick deployment, this setup serves as a practical starting point for building AI-powered applications with vector search capabilities.

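A minimal sketch of what such a Docker Compose file might look like. This is not the repository's actual configuration: the image tag is an assumption, and the environment variables shown are Weaviate's documented settings for anonymous access, persistence, and optional vectorizer modules, wired to a `.env` file as the description suggests.

```yaml
# Minimal sketch only; image tag and values are assumptions, not this repo's file.
services:
  weaviate:
    image: semitechnologies/weaviate:1.25.0
    ports:
      - "8080:8080"    # REST API
      - "50051:50051"  # gRPC
    environment:
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: "true"
      PERSISTENCE_DATA_PATH: /var/lib/weaviate
      DEFAULT_VECTORIZER_MODULE: none        # manual vector handling by default
      ENABLE_MODULES: ${ENABLE_MODULES:-}    # e.g. text2vec-openai or text2vec-ollama, set via .env
      OPENAI_APIKEY: ${OPENAI_APIKEY:-}      # only needed when the OpenAI module is enabled
    volumes:
      - weaviate_data:/var/lib/weaviate
volumes:
  weaviate_data:
```

With `DEFAULT_VECTORIZER_MODULE: none`, the client must supply vectors on insert; switching the vectorizer module in `.env` moves embedding generation into Weaviate without changing the CRUD code.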