{"id":2114,"date":"2026-01-01T20:07:49","date_gmt":"2026-01-01T20:07:49","guid":{"rendered":"https:\/\/ai-box.eu\/?p=2114"},"modified":"2026-01-01T20:10:31","modified_gmt":"2026-01-01T20:10:31","slug":"installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus","status":"publish","type":"post","link":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/","title":{"rendered":"Installing RAGFlow on Ubuntu Server: Setting up a RAG system with two NVIDIA RTX A6000 GPUs"},"content":{"rendered":"<p data-path-to-node=\"1\">Anyone working with Large Language Models who also needs to analyze large volumes of documents knows the problem: simple chat interfaces are not enough when it comes to extracting and understanding specific information from PDFs, Word documents, or other file formats. For me, the solution was clear: I use <b data-path-to-node=\"1\" data-index-in-node=\"195\">RAGFlow<\/b> on my X86 Ubuntu Server with two <b data-path-to-node=\"1\" data-index-in-node=\"230\">NVIDIA RTX A6000 GPUs<\/b> as a professional RAG system (Retrieval-Augmented Generation) for intelligent document analysis and knowledge processing.<\/p>\n<p data-path-to-node=\"2\">In this post, I will show you how I installed and configured <b data-path-to-node=\"2\" data-index-in-node=\"30\">RAGFlow<\/b> on my Ubuntu Server to set up a complete RAG system for analyzing documents, PDFs, Word files, and other formats. RAGFlow combines modern RAG technology with agent capabilities and offers a professional solution for businesses of any size. 
The best part: the installation is done entirely via Docker and is completed in about 30-45 minutes.<\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#The_Basic_Idea_Professional_RAG_System_for_Intelligent_Document_Analysis\" >The Basic Idea: Professional RAG System for Intelligent Document Analysis<\/a><\/li><li class='ez-toc-page-1 
ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#Phase_1_Check_System_Requirements\" >Phase 1: Check System Requirements<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#Phase_2_Clone_RAGFlow_Repository\" >Phase 2: Clone RAGFlow Repository<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#Phase_3_Start_RAGFlow_Server_Simple\" >Phase 3: Start RAGFlow Server (Simple)<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#Phase_4_Configure_LLM_API_Key\" >Phase 4: Configure LLM API Key<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#Phase_5_Configure_RAGFlow_for_Production_Deployment_Complex\" >Phase 5: Configure RAGFlow for Production Deployment (Complex)<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#Phase_6_Upload_and_Analyze_First_Documents\" >Phase 6: Upload and Analyze First Documents<\/a><\/li><li class='ez-toc-page-1 
ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#Troubleshooting_Common_Problems_and_Solutions\" >Troubleshooting: Common Problems and Solutions<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#Managing_Containers\" >Managing Containers<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#Rollback_Removing_RAGFlow_again\" >Rollback: Removing RAGFlow again<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-11\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#Summary_Conclusion\" >Summary &amp; Conclusion<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-12\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#Next_Step_Advanced_Configuration_and_Integration\" >Next Step: Advanced Configuration and Integration<\/a><\/li><\/ul><\/li><\/ul><\/nav><\/div>\n<h3 data-path-to-node=\"4\"><span class=\"ez-toc-section\" id=\"The_Basic_Idea_Professional_RAG_System_for_Intelligent_Document_Analysis\"><\/span>The Basic Idea: Professional RAG System for Intelligent Document Analysis<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"5\">Before I dive into the technical details, an important point: 
<strong>RAGFlow<\/strong> is a leading open-source RAG engine (Retrieval-Augmented Generation) that combines state-of-the-art RAG technology with agent functions to create a superior context layer for LLMs. Unlike simple chat interfaces, RAGFlow can understand complex documents, extract information, and prepare it intelligently for LLM queries. My experience shows that RAGFlow excels particularly in analyzing PDFs, Word documents, Excel files, and even scanned documents.<\/p>\n<p data-path-to-node=\"6\">What makes it special: RAGFlow offers a <strong>template-based chunking method<\/strong> that allows documents to be segmented intelligently while preserving semantic meaning. Furthermore, RAGFlow supports <strong>grounded citations<\/strong> with reduced hallucinations \u2013 meaning every answer is provided with concrete source references, so you know exactly which document and section the information comes from. Installation is done via Docker Compose with pre-built containers that already include all necessary components such as Elasticsearch, MySQL, Redis, and MinIO.<\/p>\n<p data-path-to-node=\"7\"><strong>What you need:<\/strong><\/p>\n<ul data-path-to-node=\"8\">\n<li>\n<p data-path-to-node=\"8,0,0\">An X86 Ubuntu Server (20.04 or newer) with at least 4 CPU cores<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,1,0\">At least 16 GB RAM (recommended: 32 GB or more for larger document collections)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,2,0\">At least 50 GB free disk space (recommended: 100 GB+ for documents and indices)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,3,0\">Two NVIDIA RTX A6000 GPUs (or other CUDA-capable GPUs) for GPU-accelerated document processing<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,4,0\">Docker &gt;= 24.0.0 installed and configured for GPU access<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,5,0\">Docker Compose &gt;= v2.26.1 installed<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,6,0\">NVIDIA Container Toolkit 
installed<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,7,0\">Basic knowledge of terminal commands, Docker, and REST APIs<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,8,0\">Optional: gVisor installed if you want to use the Code Executor function (sandbox)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,9,0\">An LLM API key (OpenAI, Anthropic, or other supported LLM providers)<\/p>\n<\/li>\n<\/ul>\n<h3 data-path-to-node=\"9\"><span class=\"ez-toc-section\" id=\"Phase_1_Check_System_Requirements\"><\/span>Phase 1: Check System Requirements<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"10\">For the rest of this guide, I am assuming that you are sitting directly in front of the server or have SSH access. First, I check if all necessary system requirements are met. To do this, I open a terminal on my Ubuntu Server and run the following commands.<\/p>\n<p data-path-to-node=\"10\">The following command shows you if Docker is installed:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>docker --version<\/code><\/p>\n<p data-path-to-node=\"10\">You should see Docker 24.0.0 or newer. 
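If you want to script this check instead of eyeballing the output, a version comparison via <code>sort -V<\/code> does the job. A minimal sketch with a hard-coded example version (the <code>version_ge<\/code> helper is my own, not part of Docker or RAGFlow):

```shell
# version_ge: true if version $1 >= minimum $2. My own helper, not part of
# Docker or RAGFlow; sort -V orders version strings numerically.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Hard-coded example; in practice feed it the live value, e.g.
#   version_ge "$(docker version --format '{{.Server.Version}}')" "24.0.0"
if version_ge "27.3.1" "24.0.0"; then
  echo "Docker version OK"
fi
```

The same helper works for the Docker Compose minimum (v2.26.1) further below.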
Next, I check Docker Compose:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>docker compose version<\/code><\/p>\n<p data-path-to-node=\"10\">You should see Docker Compose v2.26.1 or newer.<\/p>\n<p data-path-to-node=\"10\"><b data-path-to-node=\"22\" data-index-in-node=\"0\">Note:<\/b> If Docker or Docker Compose is not installed, you can install Docker as follows:<\/p>\n<p data-path-to-node=\"10\">First, I add the Docker GPG key:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo apt-get update<\/code><\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo apt-get install ca-certificates curl gnupg<\/code><\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo install -m 0755 -d \/etc\/apt\/keyrings<\/code><\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>curl -fsSL https:\/\/download.docker.com\/linux\/ubuntu\/gpg | sudo gpg --dearmor -o \/etc\/apt\/keyrings\/docker.gpg<\/code><\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo chmod a+r \/etc\/apt\/keyrings\/docker.gpg<\/code><\/p>\n<p data-path-to-node=\"10\">Now I add the Docker repository to the Apt sources:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>echo \"deb [arch=$(dpkg --print-architecture) signed-by=\/etc\/apt\/keyrings\/docker.gpg] https:\/\/download.docker.com\/linux\/ubuntu $(. 
\/etc\/os-release &amp;&amp; echo \"$VERSION_CODENAME\") stable\" | sudo tee \/etc\/apt\/sources.list.d\/docker.list &gt; \/dev\/null<\/code><\/p>\n<p data-path-to-node=\"10\">Now I install Docker Engine, Docker CLI, and Docker Compose:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo apt-get update<\/code><\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin<\/code><\/p>\n<p data-path-to-node=\"10\">Now I enable the Docker service and add my user to the Docker group:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo systemctl enable --now docker<\/code><\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo usermod -aG docker $USER<\/code><\/p>\n<p data-path-to-node=\"10\"><b data-path-to-node=\"22\" data-index-in-node=\"0\">Important:<\/b> After adding yourself to the Docker group, you must log out and back in or open a new terminal for the change to take effect.<\/p>\n<p data-path-to-node=\"10\">Now I check if the GPUs are recognized:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>nvidia-smi<\/code><\/p>\n<p data-path-to-node=\"10\">You should now see both RTX A6000 GPUs. If this command fails, you must install the NVIDIA drivers first.<\/p>\n<p data-path-to-node=\"10\">An important step: For Docker to access the GPUs, the <strong>NVIDIA Container Toolkit<\/strong> must be installed. If the following test command fails, install the toolkit as described below:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>docker run --rm --gpus all nvidia\/cuda:12.0.0-base-ubuntu22.04 nvidia-smi<\/code><\/p>\n<p data-path-to-node=\"10\">If this command returns an error like <code>could not select device driver \"\" with capabilities: [[gpu]]<\/code>, you need to install the NVIDIA Container Toolkit. 
I use a robust method here that also works if automatic distribution detection causes problems:<\/p>\n<p data-path-to-node=\"10\">First, I add the GPG key:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>curl -fsSL https:\/\/nvidia.github.io\/libnvidia-container\/gpgkey | sudo gpg --dearmor -o \/usr\/share\/keyrings\/nvidia-container-toolkit-keyring.gpg<\/code><\/p>\n<p data-path-to-node=\"10\">Now I add the repository. I use the stable method provided directly by NVIDIA, which also works with Ubuntu 24.04:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>curl -fsSL https:\/\/nvidia.github.io\/libnvidia-container\/stable\/deb\/nvidia-container-toolkit.list | sed 's#deb https:\/\/#deb [signed-by=\/usr\/share\/keyrings\/nvidia-container-toolkit-keyring.gpg] https:\/\/#g' | sudo tee \/etc\/apt\/sources.list.d\/nvidia-container-toolkit.list<\/code><\/p>\n<p data-path-to-node=\"10\"><b data-path-to-node=\"22\" data-index-in-node=\"0\">Note:<\/b> If you already have a corrupted file (e.g., with HTML content instead of the package list), you can delete it first: <code>sudo rm \/etc\/apt\/sources.list.d\/nvidia-container-toolkit.list<\/code> and then run the command above again.<\/p>\n<p data-path-to-node=\"10\">Now I update the package list and install the toolkit:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo apt-get update<\/code><\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo apt-get install -y nvidia-container-toolkit<\/code><\/p>\n<p data-path-to-node=\"10\">After installation, I configure Docker for GPU support. 
I use the recommended setup command here:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo nvidia-ctk runtime configure --runtime=docker<\/code><\/p>\n<p data-path-to-node=\"10\">Now I restart the Docker daemon:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo systemctl restart docker<\/code><\/p>\n<p data-path-to-node=\"10\">Now the GPU test should work:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>docker run --rm --gpus all nvidia\/cuda:12.0.0-base-ubuntu22.04 nvidia-smi<\/code><\/p>\n<p data-path-to-node=\"10\">This command should now show both RTX A6000 GPUs. If it still doesn&#8217;t work, check the troubleshooting section further down.<\/p>\n<p data-path-to-node=\"10\">An important step for RAGFlow: I check the value of <code>vm.max_map_count<\/code>, which is important for Elasticsearch:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sysctl vm.max_map_count<\/code><\/p>\n<p data-path-to-node=\"10\">The value should be at least 262144. 
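The comparison can also be scripted; here is a sketch with the common kernel default 65530 hard-coded as a stand-in for the live value:

```shell
# Threshold required by Elasticsearch (see above); 65530 is an assumed
# example value - read the real one with: sysctl -n vm.max_map_count
REQUIRED=262144
current=65530
if [ "$current" -lt "$REQUIRED" ]; then
  echo "vm.max_map_count too low ($current < $REQUIRED)"
  # actual fix (needs root): sudo sysctl -w vm.max_map_count=262144
else
  echo "vm.max_map_count OK"
fi
```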
If not, I set it as follows:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>sudo sysctl -w vm.max_map_count=262144<\/code><\/p>\n<p data-path-to-node=\"10\">To make this setting permanent, I add it to <code>\/etc\/sysctl.conf<\/code>:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>echo \"vm.max_map_count=262144\" | sudo tee -a \/etc\/sysctl.conf<\/code><\/p>\n<div id=\"attachment_XXXX\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/UBUNTU_SERVER-nvidia_smi.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-XXXX\" class=\"wp-image-XXXX size-large\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/UBUNTU_SERVER-nvidia_smi-1024x694.png\" alt=\"Ubuntu Server - NVIDIA-SMI RTX A6000\" width=\"1024\" height=\"694\" \/><\/a><p id=\"caption-attachment-XXXX\" class=\"wp-caption-text\">Ubuntu Server &#8211; NVIDIA-SMI RTX A6000<\/p><\/div>\n<h3 data-path-to-node=\"17\"><span class=\"ez-toc-section\" id=\"Phase_2_Clone_RAGFlow_Repository\"><\/span>Phase 2: Clone RAGFlow Repository<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"18\">RAGFlow runs in Docker containers that already include all necessary components. This makes installation much easier, as we don&#8217;t have to worry about Python dependencies or build processes. 
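Before cloning, I also like to verify the 50 GB disk requirement from the checklist above. A small sketch (assumes GNU df, which is standard on Ubuntu, and that RAGFlow's data will live on the root filesystem):

```shell
# Compare free space on / against the ~50 GB minimum from the checklist.
# -BG reports in whole gigabytes; --output=avail prints only that column.
REQUIRED_GB=50
avail_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')
if [ "${avail_gb:-0}" -ge "$REQUIRED_GB" ]; then
  echo "disk space OK (${avail_gb} GB free)"
else
  echo "only ${avail_gb:-0} GB free - at least ${REQUIRED_GB} GB recommended"
fi
```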
I simply clone the RAGFlow repository from GitHub:<\/p>\n<p data-path-to-node=\"18\"><strong>Command:<\/strong> <code>git clone https:\/\/github.com\/infiniflow\/ragflow.git<\/code><\/p>\n<p data-path-to-node=\"18\">After cloning, I change into the directory:<\/p>\n<p data-path-to-node=\"18\"><strong>Command:<\/strong> <code>cd ragflow<\/code><\/p>\n<p data-path-to-node=\"18\">Optional: If you want to use a specific version, you can switch to a stable tag:<\/p>\n<p data-path-to-node=\"18\"><strong>Command:<\/strong> <code>git checkout v0.23.1<\/code><\/p>\n<p data-path-to-node=\"18\">This step ensures that the <code>entrypoint.sh<\/code> file in the code matches the Docker image version. For the latest version, you can skip this step.<\/p>\n<p data-path-to-node=\"18\"><b data-path-to-node=\"22\" data-index-in-node=\"0\">Note:<\/b> RAGFlow Docker images are built for x86 platforms. If you are working on an ARM64 platform, you must build the image yourself \u2013 for my X86 server, this is not necessary.<\/p>\n<h3 data-path-to-node=\"24\"><span class=\"ez-toc-section\" id=\"Phase_3_Start_RAGFlow_Server_Simple\"><\/span>Phase 3: Start RAGFlow Server (Simple)<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"25\">Now I start RAGFlow with the default settings to verify basic functionality. I first change to the docker directory:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>cd docker<\/code><\/p>\n<p data-path-to-node=\"25\">By default, RAGFlow uses the CPU for DeepDoc tasks (document processing). For my server with two RTX A6000 GPUs, I want to enable GPU acceleration. 
To do this, I add the GPU configuration to the <code>.env<\/code> file:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>echo \"DEVICE=gpu\" | sudo tee -a .env<\/code><\/p>\n<p data-path-to-node=\"25\">Now I start RAGFlow with Docker Compose:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>docker compose -f docker-compose.yml up -d<\/code><\/p>\n<p data-path-to-node=\"25\">This command downloads all necessary container images and starts them in the background. Depending on your internet speed, the download may take a few minutes. RAGFlow uses several containers:<\/p>\n<ul data-path-to-node=\"26\">\n<li>\n<p data-path-to-node=\"26,0,0\">RAGFlow Main Container (with web interface and API)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"26,1,0\">Elasticsearch (for full-text and vector search)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"26,2,0\">MySQL (for metadata)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"26,3,0\">Redis (for caching)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"26,4,0\">MinIO (for object storage)<\/p>\n<\/li>\n<\/ul>\n<p data-path-to-node=\"25\">I check the status of the containers:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>docker compose ps<\/code><\/p>\n<p data-path-to-node=\"25\">All containers should have the status &#8220;running&#8221;. 
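Since the web UI only answers once initialization is complete, a polling helper can save you from refreshing the browser by hand. A sketch (<code>wait_for_http<\/code> is my own hypothetical helper; assumes curl and the default port 80):

```shell
# wait_for_http: poll a URL until it answers or the retry budget runs out.
# My own hypothetical helper; assumes curl. 30 tries x 5 s = 2.5 minutes.
wait_for_http() {
  url=$1
  tries=${2:-30}
  i=0
  until curl -fsS -o /dev/null "$url" 2>/dev/null; do
    i=$((i+1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 5
  done
  return 0
}

# Usage (replace with your server's address):
#   wait_for_http "http://<IP-Address-Server>" && echo "RAGFlow is up"
```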
To see the logs of the RAGFlow main container:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>docker logs -f docker-ragflow-cpu-1<\/code><\/p>\n<p data-path-to-node=\"25\">You should see an output containing the following information:<\/p>\n<pre data-path-to-node=\"27\"><code data-path-to-node=\"27\">        ____   ___    ______ ______ __\r\n       \/ __ \\ \/   |  \/ ____\/\/ ____\/\/ \/____  _      __\r\n      \/ \/_\/ \/\/ \/| | \/ \/ __ \/ \/_   \/ \/\/ __ \\| | \/| \/ \/\r\n     \/ _, _\/\/ ___ |\/ \/_\/ \/\/ __\/  \/ \/\/ \/_\/ \/| |\/ |\/ \/\r\n    \/_\/ |_|\/_\/  |_|\\____\/\/_\/    \/_\/ \\____\/ |__\/|__\/\r\n\r\n * Running on all addresses (0.0.0.0)\r\n<\/code><\/pre>\n<p data-path-to-node=\"25\"><b data-path-to-node=\"22\" data-index-in-node=\"0\">Important Note:<\/b> If you don&#8217;t see this confirmation and try to access RAGFlow directly, your browser might show a &#8220;network abnormal&#8221; error because RAGFlow might not be fully initialized at that point. Therefore, wait until the above output appears.<\/p>\n<p data-path-to-node=\"25\">Now I can open RAGFlow in the browser. With default settings, RAGFlow is available on port 80:<\/p>\n<p data-path-to-node=\"25\"><strong>URL:<\/strong> <code>http:\/\/&lt;IP-Address-Server&gt;<\/code><\/p>\n<p data-path-to-node=\"25\">Replace <code>&lt;IP-Address-Server&gt;<\/code> with the IP address of your server. You can find out the IP address with the following command:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>hostname -I<\/code><\/p>\n<h3 data-path-to-node=\"24\"><span class=\"ez-toc-section\" id=\"Phase_4_Configure_LLM_API_Key\"><\/span>Phase 4: Configure LLM API Key<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"25\">RAGFlow requires an LLM service like Ollama locally or an LLM API key from a service provider like Google or OpenAI to communicate with language models. 
By default, RAGFlow supports various LLM providers such as OpenAI, Anthropic, Gemini, and many more, but also classic open-source frameworks like vLLM or Ollama. I am configuring Ollama because I have it running and operate my LLMs and embedding models locally.<\/p>\n<div id=\"attachment_2106\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_auto_setup-1024x500.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-2106\" class=\"wp-image-2106 size-large\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_auto_setup-1024x500.jpg\" alt=\"RAGFlow LLM Service setup\" width=\"1024\" height=\"500\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_auto_setup-1024x500.jpg 1024w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_auto_setup-300x146.jpg 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_auto_setup-768x375.jpg 768w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_auto_setup-1536x750.jpg 1536w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_auto_setup-2048x1000.jpg 2048w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_auto_setup-1080x527.jpg 1080w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><p id=\"caption-attachment-2106\" class=\"wp-caption-text\">RAGFlow LLM Service setup<\/p><\/div>\n<p data-path-to-node=\"25\"><b data-path-to-node=\"22\" data-index-in-node=\"0\">Note:<\/b> RAGFlow also supports local LLMs via Ollama or vLLM. If you are using a local LLM server, you must adjust the corresponding configuration in the <code>service_conf.yaml.template<\/code>. 
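A common pitfall with local LLM servers: from inside a container, localhost points at the container itself, not at the host running Ollama. This sketch probes Ollama from the host's LAN IP (port 11434 is Ollama's default; <code>hostname -I<\/code> is Linux-specific, and the fallback is only there to keep the sketch runnable):

```shell
# Probe the assumed Ollama default port 11434 from the host's LAN IP.
HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}' || true)
if curl -fsS "http://${HOST_IP:-localhost}:11434/api/tags" >/dev/null 2>&1; then
  echo "Ollama reachable at ${HOST_IP:-localhost}:11434"
else
  echo "Ollama not reachable - make sure it listens on 0.0.0.0, not only 127.0.0.1"
fi
```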
You can find more information in the <a href=\"https:\/\/ragflow.io\/docs\/dev\/llm_api_key_setup\" target=\"_blank\" rel=\"noopener\">official RAGFlow documentation<\/a>.<\/p>\n<h3 data-path-to-node=\"24\"><span class=\"ez-toc-section\" id=\"Phase_5_Configure_RAGFlow_for_Production_Deployment_Complex\"><\/span>Phase 5: Configure RAGFlow for Production Deployment (Complex)<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"25\">For production use, I want to adjust some important configurations. First, I check the <code>.env<\/code> file in the docker directory:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>cat docker\/.env<\/code><\/p>\n<p data-path-to-node=\"25\">Here you can configure important settings such as the HTTP port, MySQL passwords, and MinIO passwords. By default, RAGFlow runs on port 80. If you want to use a different port, you can edit the <code>docker-compose.yml<\/code> file:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>nano docker\/docker-compose.yml<\/code><\/p>\n<p data-path-to-node=\"25\">Look for the line <code>80:80<\/code> and change it to <code>&lt;YOUR_PORT&gt;:80<\/code>, e.g., <code>8080:80<\/code> for port 8080.<\/p>\n<p data-path-to-node=\"25\">Another important point: RAGFlow uses Elasticsearch as the document engine by default. If you want to use Infinity instead (a faster alternative), you must first stop all containers:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>docker compose -f docker-compose.yml down -v<\/code><\/p>\n<p data-path-to-node=\"25\"><b data-path-to-node=\"22\" data-index-in-node=\"0\">Warning:<\/b> The <code>-v<\/code> parameter deletes the Docker container volumes, and all existing data will be lost. 
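If the data still matters, archive it first. This sketch demonstrates the tar round-trip on a throwaway directory; for the real volumes, the same tar call runs inside a helper container with each RAGFlow volume mounted (the docker command in the comment is a sketch, and the volume names are assumptions; check them with <code>docker volume ls<\/code>):

```shell
# Round-trip demo on a throwaway directory. For the real volumes, roughly:
#   docker run --rm -v <volume>:/data -v "$PWD":/backup alpine \
#       tar czf "/backup/<volume>.tar.gz" -C /data .
src=$(mktemp -d)
echo "demo" > "$src/file.txt"
tar czf "$src.tar.gz" -C "$src" .
tar tzf "$src.tar.gz" | grep -q 'file.txt' && echo "backup archive verified"
```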
Make sure you really want to delete all data before running this command.<\/p>\n<p data-path-to-node=\"25\">Then set it in the <code>.env<\/code> file:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>echo \"DOC_ENGINE=infinity\" | sudo tee -a docker\/.env<\/code><\/p>\n<p data-path-to-node=\"25\">And restart the containers:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>docker compose -f docker-compose.yml up -d<\/code><\/p>\n<p data-path-to-node=\"25\">For my two RTX A6000 GPUs, I already enabled GPU acceleration in Phase 3. If you haven&#8217;t done that yet, you can do it now:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>echo \"DEVICE=gpu\" | sudo tee -a docker\/.env<\/code><\/p>\n<p data-path-to-node=\"25\">And restart the containers:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>docker compose -f docker-compose.yml restart<\/code><\/p>\n<h3 data-path-to-node=\"24\"><span class=\"ez-toc-section\" id=\"Phase_6_Upload_and_Analyze_First_Documents\"><\/span>Phase 6: Upload and Analyze First Documents<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"25\">After RAGFlow has been successfully started, I can now upload and analyze the first documents. 
I open RAGFlow in the browser and log in (you have to create an account on the first start).<\/p>\n<p data-path-to-node=\"25\">The RAGFlow user interface is very intuitive:<\/p>\n<ul data-path-to-node=\"26\">\n<li>\n<p data-path-to-node=\"26,0,0\"><strong>Knowledge Bases:<\/strong> Here you create knowledge bases for your documents<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"26,1,0\"><strong>Upload Documents:<\/strong> Here you can upload PDFs, Word files, Excel files, images, and more<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"26,2,0\"><strong>Chat:<\/strong> Here you can ask questions about your documents<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"26,3,0\"><strong>Settings:<\/strong> Here you configure LLM settings, chunking templates, and more<\/p>\n<\/li>\n<\/ul>\n<p data-path-to-node=\"25\">For a first test, I create a new knowledge base and upload a PDF document. RAGFlow processes the document automatically, creates chunks, and indexes the content. After processing, I can ask questions about the document and receive answers with source citations.<\/p>\n<p data-path-to-node=\"25\"><b data-path-to-node=\"22\" data-index-in-node=\"0\">Tip:<\/b> RAGFlow supports different chunking templates that you can select depending on the document type. 
For technical manuals, I recommend the &#8220;Manual&#8221; template; for scientific texts, the &#8220;Paper&#8221; template; for general texts, the &#8220;General&#8221; template.<\/p>\n<div id=\"attachment_2108\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_dataset_documents-1024x501.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-2108\" class=\"size-large wp-image-2108\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_dataset_documents-1024x501.jpg\" alt=\"RAGFlow - dataset documents\" width=\"1024\" height=\"501\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_dataset_documents-1024x501.jpg 1024w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_dataset_documents-300x147.jpg 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_dataset_documents-768x376.jpg 768w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_dataset_documents-1536x751.jpg 1536w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_dataset_documents-2048x1001.jpg 2048w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_dataset_documents-1080x528.jpg 1080w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><p id=\"caption-attachment-2108\" class=\"wp-caption-text\">RAGFlow &#8211; dataset documents<\/p><\/div>\n<h3 data-path-to-node=\"52\"><span class=\"ez-toc-section\" id=\"Troubleshooting_Common_Problems_and_Solutions\"><\/span>Troubleshooting: Common Problems and Solutions<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"53\">During my time with RAGFlow on the Ubuntu Server, I have encountered some typical problems. 
Here are the most common ones and how I solved them:<\/p>\n<ul data-path-to-node=\"54\">\n<li>\n<p data-path-to-node=\"54,0,0\"><b data-path-to-node=\"54,0,0\" data-index-in-node=\"0\">&#8220;vm.max_map_count&#8221; Error:<\/b> Elasticsearch requires an increased value for <code>vm.max_map_count<\/code>. Set it to at least 262144 with <code>sudo sysctl -w vm.max_map_count=262144<\/code> and add it to <code>\/etc\/sysctl.conf<\/code> so the setting persists after a restart.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"54,0,1\"><b data-path-to-node=\"54,0,1\" data-index-in-node=\"0\">Containers do not start:<\/b> Check the logs with <code>docker compose logs<\/code>. Often the problem is due to missing environment variables or port conflicts. Also check if port 80 is already being used by another service.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"54,0,2\"><b data-path-to-node=\"54,0,2\" data-index-in-node=\"0\">GPU is not recognized:<\/b> If the command <code>docker run --rm --gpus all nvidia\/cuda:12.0.0-base-ubuntu22.04 nvidia-smi<\/code> returns an error like <code>could not select device driver \"\" with capabilities: [[gpu]]<\/code>, even though <code>nvidia-smi<\/code> works on the host, the NVIDIA Container Toolkit is missing. Install it using the commands from Phase 1 and then run <code>sudo nvidia-ctk runtime configure --runtime=docker<\/code> and <code>sudo systemctl restart docker<\/code>. Important: if your user was only just added to the docker group, you also need to log out and back in or open a new terminal.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"54,0,3\"><b data-path-to-node=\"54,0,3\" data-index-in-node=\"0\">Error during apt-get update after NVIDIA Container Toolkit installation:<\/b> If <code>apt-get update<\/code> shows errors like &#8220;404 Not Found&#8221; or HTML content in the error message, the repository list was not created correctly. This often happens if the distribution variable was not set correctly. 
Solution: Delete the faulty file with <code>sudo rm \/etc\/apt\/sources.list.d\/nvidia-container-toolkit.list<\/code> and then use the stable path from Phase 1: <code>curl -fsSL https:\/\/nvidia.github.io\/libnvidia-container\/stable\/deb\/nvidia-container-toolkit.list | sed 's#deb https:\/\/#deb [signed-by=\/usr\/share\/keyrings\/nvidia-container-toolkit-keyring.gpg] https:\/\/#g' | sudo tee \/etc\/apt\/sources.list.d\/nvidia-container-toolkit.list<\/code>. After that, <code>sudo apt-get update<\/code> should work again.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"54,1,0\"><b data-path-to-node=\"54,1,0\" data-index-in-node=\"0\">LLM API Key Error:<\/b> Check the <code>service_conf.yaml.template<\/code> file and ensure the API key is entered correctly. After making changes, you must restart the containers.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"54,2,0\"><b data-path-to-node=\"54,2,0\" data-index-in-node=\"0\">Memory Problems:<\/b> RAGFlow requires enough RAM for Elasticsearch. For larger document collections, I recommend at least 32 GB RAM. Check memory usage with <code>docker stats<\/code>.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"54,3,0\"><b data-path-to-node=\"54,3,0\" data-index-in-node=\"0\">Documents are not processed:<\/b> Check the logs of the RAGFlow container with <code>docker logs -f docker-ragflow-cpu-1<\/code>. 
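To narrow the search down, you can filter the log output for error messages &#8211; a simple sketch, assuming the default container name from above: <code>docker logs docker-ragflow-cpu-1 2&gt;&amp;1 | grep -i error<\/code>. 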
The cause is often a missing LLM API key or network problems when reaching external LLM services.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"54,4,0\"><b data-path-to-node=\"54,4,0\" data-index-in-node=\"0\">Firewall blocks access:<\/b> If a firewall is active, you must open port 80 (or your configured port): <code>sudo ufw allow 80<\/code> or corresponding iptables rules.<\/p>\n<\/li>\n<\/ul>\n<h3 data-path-to-node=\"56\"><span class=\"ez-toc-section\" id=\"Managing_Containers\"><\/span>Managing Containers<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"57\">To check the status of all containers:<\/p>\n<p data-path-to-node=\"57\"><strong>Command:<\/strong> <code>docker compose ps<\/code><\/p>\n<p data-path-to-node=\"57\">To stop all containers (without deleting them):<\/p>\n<p data-path-to-node=\"57\"><strong>Command:<\/strong> <code>docker compose -f docker-compose.yml stop<\/code><\/p>\n<p data-path-to-node=\"57\">To start all containers:<\/p>\n<p data-path-to-node=\"57\"><strong>Command:<\/strong> <code>docker compose -f docker-compose.yml start<\/code><\/p>\n<p data-path-to-node=\"57\">To remove all containers (but keep volumes):<\/p>\n<p data-path-to-node=\"57\"><strong>Command:<\/strong> <code>docker compose -f docker-compose.yml down<\/code><\/p>\n<p data-path-to-node=\"57\">To remove all containers and volumes (deletes all data):<\/p>\n<p data-path-to-node=\"57\"><strong>Command:<\/strong> <code>docker compose -f docker-compose.yml down -v<\/code><\/p>\n<p data-path-to-node=\"57\">To show the logs of a specific container:<\/p>\n<p data-path-to-node=\"57\"><strong>Command:<\/strong> <code>docker logs -f docker-ragflow-cpu-1<\/code><\/p>\n<p data-path-to-node=\"57\">To see the resource usage of all containers:<\/p>\n<p data-path-to-node=\"57\"><strong>Command:<\/strong> <code>docker stats<\/code><\/p>\n<h3 data-path-to-node=\"56\"><span class=\"ez-toc-section\" id=\"Rollback_Removing_RAGFlow_again\"><\/span>Rollback: Removing 
RAGFlow again<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"60\">If you want to completely remove RAGFlow from the server, run the following commands on the system:<\/p>\n<p data-path-to-node=\"60\">First, stop all containers:<\/p>\n<p data-path-to-node=\"60\"><strong>Command:<\/strong> <code>cd ragflow\/docker<\/code><\/p>\n<p data-path-to-node=\"60\"><strong>Command:<\/strong> <code>docker compose -f docker-compose.yml down -v<\/code><\/p>\n<p data-path-to-node=\"60\">If you also want to remove the container images:<\/p>\n<p data-path-to-node=\"60\"><strong>Command:<\/strong> <code>docker images | grep ragflow<\/code><\/p>\n<p data-path-to-node=\"60\"><strong>Command:<\/strong> <code>docker rmi &lt;IMAGE_ID&gt;<\/code><\/p>\n<p data-path-to-node=\"60\">To also remove unused Docker containers and images:<\/p>\n<p data-path-to-node=\"60\"><strong>Command:<\/strong> <code>docker system prune -a<\/code><\/p>\n<p data-path-to-node=\"60\">If you also want to remove the cloned repository:<\/p>\n<p data-path-to-node=\"60\"><strong>Command:<\/strong> <code>cd ~<\/code><\/p>\n<p data-path-to-node=\"60\"><strong>Command:<\/strong> <code>rm -rf ragflow<\/code><\/p>\n<blockquote data-path-to-node=\"62\">\n<p data-path-to-node=\"62,0\"><b data-path-to-node=\"62,0\" data-index-in-node=\"0\">Important Note:<\/b> These commands remove all RAGFlow containers, images, and data. Make sure you really want to remove everything and have backed up important data before running these commands.<\/p>\n<\/blockquote>\n<h2 data-path-to-node=\"64\"><span class=\"ez-toc-section\" id=\"Summary_Conclusion\"><\/span>Summary &amp; Conclusion<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p data-path-to-node=\"65\">Installing RAGFlow on my Ubuntu Server with two NVIDIA RTX A6000 GPUs is surprisingly straightforward. 
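<\/p>\n<p data-path-to-node=\"65\">As a quick recap, the core of the installation boils down to just a few commands &#8211; a sketch assuming you clone the repository into your home directory, matching the layout used in the rollback section above:<\/p>\n<p data-path-to-node=\"65\"><strong>Command:<\/strong> <code>git clone https:\/\/github.com\/infiniflow\/ragflow.git<\/code><\/p>\n<p data-path-to-node=\"65\"><strong>Command:<\/strong> <code>cd ragflow\/docker<\/code><\/p>\n<p data-path-to-node=\"65\"><strong>Command:<\/strong> <code>docker compose -f docker-compose.yml up -d<\/code><\/p>\n<p data-path-to-node=\"65\">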
In about 30-45 minutes, I set up a complete RAG system that can analyze complex documents and intelligently answer questions about their content.<\/p>\n<p data-path-to-node=\"66\">What particularly excites me: The performance of the two RTX A6000 GPUs is fully utilized, and the Docker-based installation makes the setup much easier than a manual installation. RAGFlow offers a professional solution for document analysis suitable for both small teams and larger enterprises.<\/p>\n<p data-path-to-node=\"67\">I also find it particularly practical that RAGFlow works with template-based chunking, which significantly improves the quality of document processing and lets companies establish their own processing standard through these templates. The grounded citations with source references make it easy to trace which document and section a given answer came from \u2013 this is especially important for trustworthy AI applications.<\/p>\n<p data-path-to-node=\"68\">For teams or developers who need a professional RAG system, RAGFlow is a perfect solution: a central server with full GPU power where documents can be intelligently analyzed and searched. The intuitive web interface makes it easy to upload documents and ask questions, while the API allows for seamless integration into existing applications.<\/p>\n<p data-path-to-node=\"69\">If you have questions or encounter problems, feel free to check the <a href=\"https:\/\/ragflow.io\/docs\/dev\/\" target=\"_blank\" rel=\"noopener\">official RAGFlow documentation<\/a> or the <a href=\"https:\/\/github.com\/infiniflow\/ragflow\" target=\"_blank\" rel=\"noopener\">RAGFlow GitHub repository<\/a>. 
The community is very helpful, and most problems can be solved quickly.<\/p>\n<h3 data-path-to-node=\"71\"><span class=\"ez-toc-section\" id=\"Next_Step_Advanced_Configuration_and_Integration\"><\/span>Next Step: Advanced Configuration and Integration<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"72\">You have now successfully installed RAGFlow and analyzed your first documents. The basic installation works, but that&#8217;s just the beginning. The next step is configuring the system for your specific requirements.<\/p>\n<p data-path-to-node=\"73\">RAGFlow offers many configuration options for production use: creating custom chunking templates, configuring various LLM providers, whether local or cloud-based, setting up API integrations, or managing multiple knowledge bases for different projects. The documentation shows you how to optimize these settings for your workloads.<\/p>\n<p data-path-to-node=\"74\">I also find RAGFlow&#8217;s agent functions particularly interesting: they let you create complex workflows and process documents automatically. With support for Confluence, S3, Notion, Discord, and Google Drive, RAGFlow can also be connected directly to existing data sources.<\/p>\n<p data-path-to-node=\"75\">Good luck experimenting with RAGFlow on your Ubuntu Server. I&#8217;m excited to see what applications you develop with it! Let me and my readers know here in the comments.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Anyone working with Large Language Models who also needs to analyze large volumes of documents knows the problem: simple chat interfaces are not enough when it comes to extracting and understanding specific information from PDFs, Word documents, or other file formats. 
For me, the solution was clear: I use RAGFlow on my X86 Ubuntu Server [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2111,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"categories":[162,50],"tags":[934,353,932,933,714,935,305,940,941,939,936,938,937],"class_list":["post-2114","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-large-language-models-en","category-top-story-en","tag-ai-knowledge-base","tag-docker","tag-docker-installation","tag-document-analysis","tag-gpu-acceleration","tag-intelligent-document-processing","tag-llm-en","tag-nvidia-rtx-a6000","tag-open-source-rag","tag-rag-system","tag-ragflow","tag-retrieval-augmented-generation","tag-ubuntu-server","et-has-post-format-content","et_post_format-et-post-format-standard"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Installing RAGFlow on Ubuntu Server: Setting up a RAG system with two NVIDIA RTX A6000 GPUs - Exploring the Future: Inside the AI Box<\/title>\n<meta name=\"description\" content=\"Learn how to install and configure RAGFlow on an Ubuntu Server with NVIDIA GPUs. 
This professional RAG system guide covers Docker setup, GPU acceleration, and intelligent document analysis.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Installing RAGFlow on Ubuntu Server: Setting up a RAG system with two NVIDIA RTX A6000 GPUs - Exploring the Future: Inside the AI Box\" \/>\n<meta property=\"og:description\" content=\"Learn how to install and configure RAGFlow on an Ubuntu Server with NVIDIA GPUs. This professional RAG system guide covers Docker setup, GPU acceleration, and intelligent document analysis.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/\" \/>\n<meta property=\"og:site_name\" content=\"Exploring the Future: Inside the AI Box\" \/>\n<meta property=\"article:published_time\" content=\"2026-01-01T20:07:49+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-01T20:10:31+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_UI.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1135\" \/>\n\t<meta property=\"og:image:height\" content=\"672\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Maker\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Ingmar_Stapel\" \/>\n<meta name=\"twitter:site\" content=\"@Ingmar_Stapel\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" 
content=\"Maker\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"15 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/\"},\"author\":{\"name\":\"Maker\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/#\\\/schema\\\/person\\\/cc91d08618b3feeef6926591b465eab1\"},\"headline\":\"Installing RAGFlow on Ubuntu Server: Setting up a RAG system with two NVIDIA RTX A6000 GPUs\",\"datePublished\":\"2026-01-01T20:07:49+00:00\",\"dateModified\":\"2026-01-01T20:10:31+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/\"},\"wordCount\":2491,\"commentCount\":0,\"image\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/ai-box.eu\\\/wp-content\\\/uploads\\\/2026\\\/01\\\/RAG_Flow_UI.jpg\",\"keywords\":[\"AI knowledge base\",\"Docker\",\"Docker installation\",\"document analysis\",\"GPU acceleration\",\"intelligent document processing\",\"LLM\",\"NVIDIA RTX A6000\",\"Open Source RAG\",\"RAG-System\",\"RAGFlow\",\"Retrieval-augmented generation\",\"Ubuntu Server\"],\"articleSection\":[\"Large Language Models\",\"Top 
story\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/\",\"url\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/\",\"name\":\"Installing RAGFlow on Ubuntu Server: Setting up a RAG system with two NVIDIA RTX A6000 GPUs - Exploring the Future: Inside the AI Box\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/ai-box.eu\\\/wp-content\\\/uploads\\\/2026\\\/01\\\/RAG_Flow_UI.jpg\",\"datePublished\":\"2026-01-01T20:07:49+00:00\",\"dateModified\":\"2026-01-01T20:10:31+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/#\\\/schema\\\/person\\\/cc91d08618b3feeef6926591b465eab1\"},\"description\":\"Learn how to install and configure RAGFlow on an Ubuntu Server with NVIDIA GPUs. 
This professional RAG system guide covers Docker setup, GPU acceleration, and intelligent document analysis.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/#primaryimage\",\"url\":\"https:\\\/\\\/ai-box.eu\\\/wp-content\\\/uploads\\\/2026\\\/01\\\/RAG_Flow_UI.jpg\",\"contentUrl\":\"https:\\\/\\\/ai-box.eu\\\/wp-content\\\/uploads\\\/2026\\\/01\\\/RAG_Flow_UI.jpg\",\"width\":1135,\"height\":672,\"caption\":\"RAGFlow\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\\\/2114\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Start\",\"item\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Installing RAGFlow on Ubuntu Server: Setting up a RAG system with two NVIDIA RTX A6000 GPUs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/#website\",\"url\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/\",\"name\":\"Exploring the Future: Inside the AI Box\",\"description\":\"Inside the AI Box, we share our experiences and discoveries in the world of artificial 
intelligence.\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/#\\\/schema\\\/person\\\/cc91d08618b3feeef6926591b465eab1\",\"name\":\"Maker\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/e96b93fc3c7e50c1f21c5c6b1f146dc4867936141360830b328947b32cacf93a?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/e96b93fc3c7e50c1f21c5c6b1f146dc4867936141360830b328947b32cacf93a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/e96b93fc3c7e50c1f21c5c6b1f146dc4867936141360830b328947b32cacf93a?s=96&d=mm&r=g\",\"caption\":\"Maker\"},\"description\":\"I live in Bavaria near Munich. In my head I always have many topics and try out especially in the field of Internet new media much in my spare time. I write on the blog because it makes me fun to report about the things that inspire me. I am happy about every comment, about suggestion and very about questions.\",\"sameAs\":[\"https:\\\/\\\/ai-box.eu\"],\"url\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/author\\\/ingmars\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Installing RAGFlow on Ubuntu Server: Setting up a RAG system with two NVIDIA RTX A6000 GPUs - Exploring the Future: Inside the AI Box","description":"Learn how to install and configure RAGFlow on an Ubuntu Server with NVIDIA GPUs. 
This professional RAG system guide covers Docker setup, GPU acceleration, and intelligent document analysis.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/","og_locale":"en_US","og_type":"article","og_title":"Installing RAGFlow on Ubuntu Server: Setting up a RAG system with two NVIDIA RTX A6000 GPUs - Exploring the Future: Inside the AI Box","og_description":"Learn how to install and configure RAGFlow on an Ubuntu Server with NVIDIA GPUs. This professional RAG system guide covers Docker setup, GPU acceleration, and intelligent document analysis.","og_url":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/","og_site_name":"Exploring the Future: Inside the AI Box","article_published_time":"2026-01-01T20:07:49+00:00","article_modified_time":"2026-01-01T20:10:31+00:00","og_image":[{"width":1135,"height":672,"url":"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_UI.jpg","type":"image\/jpeg"}],"author":"Maker","twitter_card":"summary_large_image","twitter_creator":"@Ingmar_Stapel","twitter_site":"@Ingmar_Stapel","twitter_misc":{"Written by":"Maker","Est. 
reading time":"15 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#article","isPartOf":{"@id":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/"},"author":{"name":"Maker","@id":"https:\/\/ai-box.eu\/en\/#\/schema\/person\/cc91d08618b3feeef6926591b465eab1"},"headline":"Installing RAGFlow on Ubuntu Server: Setting up a RAG system with two NVIDIA RTX A6000 GPUs","datePublished":"2026-01-01T20:07:49+00:00","dateModified":"2026-01-01T20:10:31+00:00","mainEntityOfPage":{"@id":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/"},"wordCount":2491,"commentCount":0,"image":{"@id":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#primaryimage"},"thumbnailUrl":"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_UI.jpg","keywords":["AI knowledge base","Docker","Docker installation","document analysis","GPU acceleration","intelligent document processing","LLM","NVIDIA RTX A6000","Open Source RAG","RAG-System","RAGFlow","Retrieval-augmented generation","Ubuntu Server"],"articleSection":["Large Language Models","Top 
story"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/","url":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/","name":"Installing RAGFlow on Ubuntu Server: Setting up a RAG system with two NVIDIA RTX A6000 GPUs - Exploring the Future: Inside the AI Box","isPartOf":{"@id":"https:\/\/ai-box.eu\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#primaryimage"},"image":{"@id":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#primaryimage"},"thumbnailUrl":"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_UI.jpg","datePublished":"2026-01-01T20:07:49+00:00","dateModified":"2026-01-01T20:10:31+00:00","author":{"@id":"https:\/\/ai-box.eu\/en\/#\/schema\/person\/cc91d08618b3feeef6926591b465eab1"},"description":"Learn how to install and configure RAGFlow on an Ubuntu Server with NVIDIA GPUs. 
This professional RAG system guide covers Docker setup, GPU acceleration, and intelligent document analysis.","breadcrumb":{"@id":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#primaryimage","url":"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_UI.jpg","contentUrl":"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/RAG_Flow_UI.jpg","width":1135,"height":672,"caption":"RAGFlow"},{"@type":"BreadcrumbList","@id":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-ragflow-on-ubuntu-server-setting-up-a-rag-system-with-two-nvidia-rtx-a6000-gpus\/2114\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Start","item":"https:\/\/ai-box.eu\/en\/"},{"@type":"ListItem","position":2,"name":"Installing RAGFlow on Ubuntu Server: Setting up a RAG system with two NVIDIA RTX A6000 GPUs"}]},{"@type":"WebSite","@id":"https:\/\/ai-box.eu\/en\/#website","url":"https:\/\/ai-box.eu\/en\/","name":"Exploring the Future: Inside the AI Box","description":"Inside the AI Box, we share our experiences and discoveries in the world of artificial 
intelligence.","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/ai-box.eu\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/ai-box.eu\/en\/#\/schema\/person\/cc91d08618b3feeef6926591b465eab1","name":"Maker","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/e96b93fc3c7e50c1f21c5c6b1f146dc4867936141360830b328947b32cacf93a?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/e96b93fc3c7e50c1f21c5c6b1f146dc4867936141360830b328947b32cacf93a?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e96b93fc3c7e50c1f21c5c6b1f146dc4867936141360830b328947b32cacf93a?s=96&d=mm&r=g","caption":"Maker"},"description":"I live in Bavaria near Munich. In my head I always have many topics and try out especially in the field of Internet new media much in my spare time. I write on the blog because it makes me fun to report about the things that inspire me. 
I am happy about every comment, about suggestion and very about questions.","sameAs":["https:\/\/ai-box.eu"],"url":"https:\/\/ai-box.eu\/en\/author\/ingmars\/"}]}},"_links":{"self":[{"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/posts\/2114","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/comments?post=2114"}],"version-history":[{"count":1,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/posts\/2114\/revisions"}],"predecessor-version":[{"id":2115,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/posts\/2114\/revisions\/2115"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/media\/2111"}],"wp:attachment":[{"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/media?parent=2114"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/categories?post=2114"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/tags?post=2114"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}