llm

  • This project provides a Dockerized setup for running Ollama as a local LLM service with GPU support. It includes a pre-configured Docker Compose setup, automated model management, and support for multiple models such as llama3.1 (chat) and snowflake-arctic-embed2 (embeddings). The container exposes an API for interacting with the models via CLI or HTTP requests; a minimal request sketch follows below. Configuration is handled through a .env file, allowing easy customization. Designed for quick deployment, this setup serves as a flexible starting point for integrating local AI models.
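
    As a minimal sketch of talking to the exposed API from Python, the snippet below assumes the container maps Ollama's default port 11434 to localhost and that both models have already been pulled; the prompts are purely illustrative.

    ```python
    import requests

    OLLAMA_URL = "http://localhost:11434"  # default Ollama port (assumed mapping in the compose setup)

    # Text generation against the llama3.1 chat model.
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": "llama3.1", "prompt": "Explain RAG in one sentence.", "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])

    # Embedding request against snowflake-arctic-embed2 (legacy /api/embeddings endpoint).
    emb = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": "snowflake-arctic-embed2", "prompt": "vector databases"},
        timeout=120,
    )
    emb.raise_for_status()
    print(len(emb.json()["embedding"]))  # dimensionality of the returned embedding vector
    ```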

  • This repository accompanies a three-day hands-on workshop on Large Language Models (LLMs), Prompt Engineering, and Retrieval-Augmented Generation (RAG). The workshop is designed to equip participants with practical skills and foundational knowledge to understand, deploy, and evaluate LLM-based applications across a wide range of fields. The program combines a lecture series covering theoretical foundations with a programming workshop in Jupyter Notebooks. Participants explore core technologies such as OpenAI APIs, LangChain, vector databases, document parsing, and AI toolchains, supported by real-world examples and scientific documents; a compact RAG sketch follows the topic tags below. The workshop suits professionals, researchers, and students seeking a structured, practice-oriented introduction to state-of-the-art AI workflows built on LLMs.

    llm docker AI weaviate RAG
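
    The following sketch illustrates the RAG pattern the workshop covers, using the OpenAI Python client with a tiny in-memory retriever instead of a full vector database. The model names, the toy corpus, and the embed helper are all illustrative choices, not taken from the workshop materials.

    ```python
    # pip install openai numpy
    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # Toy corpus standing in for the workshop's scientific documents.
    docs = [
        "LangChain chains LLM calls together with tools and retrievers.",
        "Weaviate is a vector database used to store document embeddings.",
        "Retrieval-Augmented Generation grounds answers in retrieved context.",
    ]

    def embed(texts: list[str]) -> np.ndarray:
        """Embed texts with an OpenAI embedding model (model name is illustrative)."""
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    doc_vecs = embed(docs)
    question = "What does RAG do?"
    q_vec = embed([question])[0]

    # Cosine similarity: retrieve the single best-matching document.
    scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = docs[int(np.argmax(scores))]

    # Ask the chat model to answer using only the retrieved context.
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
        ],
    )
    print(answer.choices[0].message.content)
    ```

    In a production setting the in-memory cosine search would be replaced by a vector database such as Weaviate; the retrieval-then-generate flow stays the same.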