{"id":1903,"date":"2025-12-21T22:20:26","date_gmt":"2025-12-21T22:20:26","guid":{"rendered":"https:\/\/ai-box.eu\/?p=1903"},"modified":"2026-01-05T06:44:29","modified_gmt":"2026-01-05T06:44:29","slug":"open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network","status":"publish","type":"post","link":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/","title":{"rendered":"Open WebUI on the Gigabyte AI TOP ATOM: ChatGPT-like Chat Interface for Your Own Network"},"content":{"rendered":"<p data-path-to-node=\"1\">After showing you in the last post how to install Ollama on the <b data-path-to-node=\"1\" data-index-in-node=\"75\">Gigabyte AI TOP ATOM<\/b> and configure it for network access, the next logical step follows: a user-friendly chat interface that everyone in the network can use. <b data-path-to-node=\"1\" data-index-in-node=\"200\">Open WebUI<\/b> is exactly that \u2013 a self-hosted, extensible AI interface that works completely offline and integrates directly with Ollama.<\/p>\n<p data-path-to-node=\"2\">In this post, I will show you how I installed <b data-path-to-node=\"2\" data-index-in-node=\"30\">Open WebUI<\/b> on my Gigabyte AI TOP ATOM and configured it to be accessible throughout the entire network. Together with the already running Ollama server, you will then have a complete chat solution \u2013 similar to ChatGPT, but locally hosted and powered by your own GPU. Since the system is based on the same platform as the <b data-path-to-node=\"2\" data-index-in-node=\"200\">NVIDIA DGX Spark<\/b>, the official NVIDIA playbooks work just as reliably here. 
For my reports here on my blog, I received the Gigabyte AI TOP ATOM from the Munich-based company <a href=\"https:\/\/www.mifcom.de\/\" target=\"_blank\" rel=\"noopener\">MIFCOM<\/a>.<\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#The_Basic_Idea_A_ChatGPT-like_Interface_for_Your_Own_Network\" >The Basic Idea: A ChatGPT-like Interface for Your Own Network<\/a><\/li><li 
class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#Phase_1_Configure_Docker_Permissions\" >Phase 1: Configure Docker Permissions<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#Phase_2_Download_Open_WebUI_Container_Image\" >Phase 2: Download Open WebUI Container Image<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#Phase_3_Start_Open_WebUI_Container\" >Phase 3: Start Open WebUI Container<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#Phase_4_Configure_Network_Access_optional\" >Phase 4: Configure Network Access (optional)<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#Phase_5_Create_Administrator_Account\" >Phase 5: Create Administrator Account<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#Phase_6_Test_Your_First_Chat_Message\" >Phase 6: Test Your First Chat Message<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a 
class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#Trying_Out_More_Models\" >Trying Out More Models<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#Container_Management_Start_Stop_Restart\" >Container Management: Start, Stop, Restart<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#Troubleshooting_Common_Problems_and_Solutions\" >Troubleshooting: Common Problems and Solutions<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-11\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#Updating_the_Container\" >Updating the Container<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-12\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#Rollback_Removing_Everything_Again\" >Rollback: Removing Everything Again<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-13\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#Summary_Conclusion\" >Summary &amp; Conclusion<\/a><\/li><\/ul><\/nav><\/div>\n<h3 data-path-to-node=\"4\"><span class=\"ez-toc-section\" id=\"The_Basic_Idea_A_ChatGPT-like_Interface_for_Your_Own_Network\"><\/span>The Basic Idea: A ChatGPT-like 
Interface for Your Own Network<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"5\">Before I dive into the technical details, an important point: Open WebUI is a web application that runs via Docker and is available with or without an integrated Ollama server. This means you can either connect it to your already installed Ollama server or use the integrated one. I am using my already installed Ollama server here, so the models I have downloaded remain available and only one Ollama instance runs on the system.<\/p>\n<p data-path-to-node=\"6\">The installation is done via Docker, which means everything is cleanly isolated and easy to remove. Open WebUI then runs on a port of your choice (I map it to port 3000) and is accessible from any computer in the network via the browser. No complex configuration, no API calls via cURL \u2013 just open it in the browser and chat.<\/p>\n<p data-path-to-node=\"7\">What you need for this:<\/p>\n<ul data-path-to-node=\"8\">\n<li>\n<p data-path-to-node=\"8,0,0\">A Gigabyte AI TOP ATOM (or NVIDIA DGX Spark) connected to the network<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,1,0\">Docker installed and functional (standard in DGX OS)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,2,0\">Terminal access to the AI TOP ATOM (via SSH or directly)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,3,0\">A computer in the same network with a modern browser<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,4,0\">Basic knowledge of Docker and terminal commands<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,5,0\">Enough storage space for the container image and model downloads (recommended: at least 50 GB free)<\/p>\n<\/li>\n<\/ul>\n<h3 data-path-to-node=\"10\"><span class=\"ez-toc-section\" id=\"Phase_1_Configure_Docker_Permissions\"><\/span>Phase 1: Configure Docker Permissions<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"11\">First, I ensure that I can execute Docker commands without <code data-path-to-node=\"11\" data-index-in-node=\"70\">sudo<\/code>. 
This makes the work much easier. I open a terminal on my AI TOP ATOM and test Docker access:<\/p>\n<p data-path-to-node=\"11\"><strong>Command:<\/strong> <code data-path-to-node=\"12\">sudo docker ps<\/code><\/p>\n<p data-path-to-node=\"13\">If you receive an error message like &#8220;permission denied while trying to connect to the Docker daemon socket&#8221; as seen in my image below, you must add your user to the Docker group:<\/p>\n<div id=\"attachment_1874\" style=\"width: 986px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-docker.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1874\" class=\"wp-image-1874 size-full\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-docker.png\" alt=\"GIGABYTE AI TOP ATOM - docker\" width=\"976\" height=\"277\" \/><\/a><p id=\"caption-attachment-1874\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; docker<\/p><\/div>\n<p data-path-to-node=\"13\"><strong>Command:<\/strong> <code data-path-to-node=\"14\">sudo usermod -aG docker $USER<\/code><\/p>\n<p data-path-to-node=\"13\"><strong>Command:<\/strong><code data-path-to-node=\"14\">newgrp docker<\/code><\/p>\n<p data-path-to-node=\"15\">The command <code data-path-to-node=\"15\" data-index-in-node=\"4\">newgrp docker<\/code> activates the new group membership immediately without you having to log in again. I test again with <code data-path-to-node=\"15\" data-index-in-node=\"100\">docker ps<\/code> \u2013 it should now work without errors.<\/p>\n<h3 data-path-to-node=\"17\"><span class=\"ez-toc-section\" id=\"Phase_2_Download_Open_WebUI_Container_Image\"><\/span>Phase 2: Download Open WebUI Container Image<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"18\">Now I download the Open WebUI container image. 
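<\/p>\n<p data-path-to-node=\"18\">Two image tags are relevant here: <code>main<\/code>, which connects to an existing Ollama installation, and <code>ollama<\/code>, which bundles its own Ollama server. The corresponding pull commands look like this:<\/p>\n<pre><code>docker pull ghcr.io\/open-webui\/open-webui:main\r\ndocker pull ghcr.io\/open-webui\/open-webui:ollama<\/code><\/pre>\n<p data-path-to-node=\"18\">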
I use the <code data-path-to-node=\"19\">main<\/code> version because I already have Ollama installed and will keep using that installation for my setup:<\/p>\n<p data-path-to-node=\"18\"><strong>Command:<\/strong> <code data-path-to-node=\"19\">docker pull ghcr.io\/open-webui\/open-webui:main<\/code><\/p>\n<p data-path-to-node=\"20\">Depending on the internet speed, the download can take a few minutes \u2013 the image is about 2-3 GB. I just let the download run and do something else in the meantime.<\/p>\n<div id=\"attachment_1876\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-docker-1024x551.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1876\" class=\"wp-image-1876 size-large\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-docker-1024x551.png\" alt=\"GIGABYTE AI TOP ATOM - Open-WebUI-docker\" width=\"1024\" height=\"551\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-docker-1024x551.png 1024w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-docker-300x161.png 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-docker-768x413.png 768w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-docker.png 1046w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><p id=\"caption-attachment-1876\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; Open-WebUI-docker<\/p><\/div>\n<p data-path-to-node=\"20\">Note: I had many download interruptions with the following message:<\/p>\n<div id=\"attachment_1879\" style=\"width: 876px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-docker-error.png\"><img loading=\"lazy\" decoding=\"async\" 
aria-describedby=\"caption-attachment-1879\" class=\"wp-image-1879 size-full\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-docker-error.png\" alt=\"GIGABYTE AI TOP ATOM - Open-WebUI docker error\" width=\"866\" height=\"629\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-docker-error.png 866w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-docker-error-300x218.png 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-docker-error-768x558.png 768w\" sizes=\"(max-width: 866px) 100vw, 866px\" \/><\/a><p id=\"caption-attachment-1879\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; Open-WebUI docker error<\/p><\/div>\n<p data-path-to-node=\"20\">What ultimately worked for me was limiting the maximum number of parallel downloads to 1. To do this, execute the following command:<\/p>\n<p data-path-to-node=\"20\"><strong>Command:<\/strong> <code>sudo nano \/etc\/docker\/daemon.json<\/code><\/p>\n<p data-path-to-node=\"20\">This opens (or creates) the <code>daemon.json<\/code> file; give it the following content (if the file already contains settings, add the key to the existing JSON object):<\/p>\n<p data-path-to-node=\"20\"><code>{<\/code><br \/>\n<code>\"max-concurrent-downloads\":1<\/code><br \/>\n<code>}<\/code><\/p>\n<p data-path-to-node=\"20\">For the adjustment to take effect, restart the Docker service with the following command:<\/p>\n<p data-path-to-node=\"20\"><strong>Command: <\/strong><code>sudo systemctl restart docker<\/code><\/p>\n<p data-path-to-node=\"20\">Afterwards, I didn&#8217;t get the error anymore and was able to download the Open WebUI image successfully.<\/p>\n<p data-path-to-node=\"20\">When the download is complete, you will see a message like &#8220;Status: Downloaded newer image for ghcr.io\/open-webui\/open-webui:main&#8221;.<\/p>\n<div id=\"attachment_1882\" style=\"width: 876px\" class=\"wp-caption alignnone\"><a 
href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-docker-success.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1882\" class=\"wp-image-1882 size-full\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-docker-success.png\" alt=\"GIGABYTE AI TOP ATOM - Open-WebUI docker success\" width=\"866\" height=\"651\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-docker-success.png 866w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-docker-success-300x226.png 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-docker-success-768x577.png 768w\" sizes=\"(max-width: 866px) 100vw, 866px\" \/><\/a><p id=\"caption-attachment-1882\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; Open-WebUI docker success<\/p><\/div>\n<h3 data-path-to-node=\"22\"><span class=\"ez-toc-section\" id=\"Phase_3_Start_Open_WebUI_Container\"><\/span>Phase 3: Start Open WebUI Container<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"23\">Now I start the Open WebUI container. It is important here to configure the port so that it is accessible in the network. 
I use port 3000, but you can choose another port if this one is already occupied:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>docker run -d -p 0.0.0.0:3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:\/app\/backend\/data --name open-webui --restart always ghcr.io\/open-webui\/open-webui:main<\/code><\/p>\n<p data-path-to-node=\"25\"><strong>Let me explain the parameters:<\/strong><\/p>\n<ul data-path-to-node=\"26\">\n<li>\n<p data-path-to-node=\"26,0,0\"><code data-path-to-node=\"26,0,0\" data-index-in-node=\"4\">-d<\/code>: Container runs in the background (detached mode)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"26,1,0\"><code data-path-to-node=\"26,1,0\" data-index-in-node=\"4\">-p 0.0.0.0:3000:8080<\/code>: Port mapping \u2013 port 8080 inside the container is mapped to port 3000 on the host, and binding to 0.0.0.0 makes it reachable from all computers in the intranet.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"26,1,0\"><code>--add-host<\/code>: This parameter adds an entry to the internal <code data-path-to-node=\"4,1,0\" data-index-in-node=\"47\">\/etc\/hosts<\/code> file of the container. It tells the container: &#8220;When you call the address <code data-path-to-node=\"4,1,0\" data-index-in-node=\"144\">host.docker.internal<\/code>, forward the request to the IP address of the host gateway (your AI TOP ATOM).&#8221;<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"26,1,0\"><code data-path-to-node=\"26,3,0\" data-index-in-node=\"4\">-v open-webui:\/app\/backend\/data<\/code>: Persistent volume for Open WebUI data (chats, settings)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"26,1,0\"><code data-path-to-node=\"26,4,0\" data-index-in-node=\"4\">-v open-webui-ollama:\/root\/.ollama<\/code>: Persistent volume for downloaded models \u2013 only needed with the <code>ollama<\/code> image that bundles its own Ollama, which is why it does not appear in the command above<\/p>\n<\/li>\n<li><code data-path-to-node=\"26,5,0\" data-index-in-node=\"4\">--name open-webui<\/code>: Name of the container for easy management<\/li>\n<li><code>--restart always<\/code>: Docker restarts the container automatically, for example after a reboot of the host<\/li>\n<\/ul>\n<p data-path-to-node=\"27\">After starting, the container should be running. I check this with:<\/p>\n<p data-path-to-node=\"27\"><strong>Command:<\/strong><code data-path-to-node=\"28\">docker ps<\/code><\/p>\n<p data-path-to-node=\"29\">You should see the &#8220;open-webui&#8221; container in the list. If not, check the logs with <code data-path-to-node=\"29\" data-index-in-node=\"40\">docker logs open-webui<\/code> to see what went wrong.<\/p>\n<p data-path-to-node=\"13,1,0\">Check the logs of the container using the following command to see if the backend has actually started:<\/p>\n<p data-path-to-node=\"13,1,0\"><strong>Command<\/strong>: <code>docker logs -f open-webui<\/code><\/p>\n<h3 data-path-to-node=\"31\"><span class=\"ez-toc-section\" id=\"Phase_4_Configure_Network_Access_optional\"><\/span>Phase 4: Configure Network Access (optional)<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"32\">By default, Open WebUI should already be accessible in the network since we bound port 3000 to all interfaces. I had no problems, but here are some tips on what you can do. 
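<\/p>\n<p data-path-to-node=\"32\">A quick check directly on the AI TOP ATOM shows whether the container answers locally at all (a small sketch, assuming the port mapping to 3000 from Phase 3):<\/p>\n<p data-path-to-node=\"32\"><strong>Command:<\/strong> <code>curl -I http:\/\/localhost:3000<\/code><\/p>\n<p data-path-to-node=\"32\">If this returns an HTTP status line such as &#8220;HTTP\/1.1 200 OK&#8221;, Open WebUI itself is running and only the path through the network remains to be checked.<\/p>\n<p data-path-to-node=\"32\">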
I first check the IP address of my AI TOP ATOM:<\/p>\n<p data-path-to-node=\"32\"><strong>Command:<\/strong><code data-path-to-node=\"33\">hostname -I<\/code><\/p>\n<p data-path-to-node=\"34\">I note the IP address (e.g., <code data-path-to-node=\"34\" data-index-in-node=\"50\">192.168.1.100<\/code>). If a firewall is active, I must open port 3000:<\/p>\n<p data-path-to-node=\"34\"><strong>Command:<\/strong><code data-path-to-node=\"35\">sudo ufw allow 3000<\/code><\/p>\n<p data-path-to-node=\"36\">Now I open a browser on another computer in the network and navigate to <code data-path-to-node=\"36\" data-index-in-node=\"80\">http:\/\/192.168.1.100:3000<\/code> (substitute your own IP address). The Open WebUI interface should open.<\/p>\n<p data-path-to-node=\"37\"><b data-path-to-node=\"37\" data-index-in-node=\"0\">Important Note:<\/b> During the first start, it can take a few minutes for the page to load. Open WebUI initializes itself on the first launch. Simply wait and reload the page if necessary.<\/p>\n<h3 data-path-to-node=\"39\"><span class=\"ez-toc-section\" id=\"Phase_5_Create_Administrator_Account\"><\/span>Phase 5: Create Administrator Account<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"40\">Once the Open WebUI interface has loaded, you will see a welcome page. I click the <b data-path-to-node=\"40\" data-index-in-node=\"80\">&#8220;Get Started&#8221;<\/b> button at the bottom of the screen.<\/p>\n<p data-path-to-node=\"41\">Now I must create an administrator account. This is a local account that only applies to this Open WebUI installation \u2013 no connection to external services. 
I fill out the form:<\/p>\n<ul data-path-to-node=\"42\">\n<li>\n<p data-path-to-node=\"42,0,0\">Username (e.g., &#8220;admin&#8221; or your name)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"42,1,0\">Email address (optional, but recommended)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"42,2,0\">Password (choose a secure password, as the server is accessible in the network)<\/p>\n<\/li>\n<\/ul>\n<p data-path-to-node=\"43\">After clicking &#8220;Register&#8221; or &#8220;Create Account&#8221;, I should be redirected directly to the main interface. If you see error messages, check the container logs with <code data-path-to-node=\"43\" data-index-in-node=\"80\">docker logs open-webui<\/code>.<\/p>\n<p data-path-to-node=\"43\"><strong>Set Up Connection to Ollama<\/strong><\/p>\n<p data-path-to-node=\"43\">Now you need to establish the connection to your Ollama installation in the settings of Open WebUI.<\/p>\n<p data-path-to-node=\"43\">To do this, click on Settings -&gt; Administration and open the Connections entry. There I entered the IP address of my AI TOP ATOM followed by the Ollama port (11434 by default), as seen in the following image. 
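<\/p>\n<p data-path-to-node=\"43\">You can also verify from any computer in the network that the Ollama API is reachable at all before saving the connection \u2013 a small sketch, substituting your own IP address:<\/p>\n<p data-path-to-node=\"43\"><strong>Command:<\/strong> <code>curl http:\/\/192.168.1.100:11434\/api\/tags<\/code><\/p>\n<p data-path-to-node=\"43\">Ollama answers this endpoint with a JSON list of the locally installed models; if the call times out, revisit the network configuration of Ollama from the previous post.<\/p>\n<p data-path-to-node=\"43\">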
After restarting the computer, everything worked perfectly.<\/p>\n<div id=\"attachment_1887\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-ollama-connection-1024x455.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1887\" class=\"wp-image-1887 size-large\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-ollama-connection-1024x455.png\" alt=\"GIGABYTE AI TOP ATOM - Open-WebUI ollama connection\" width=\"1024\" height=\"455\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-ollama-connection-1024x455.png 1024w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-ollama-connection-300x133.png 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-ollama-connection-768x341.png 768w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-ollama-connection-1080x480.png 1080w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-ollama-connection.png 1155w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><p id=\"caption-attachment-1887\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; Open-WebUI ollama connection<\/p><\/div>\n<h3 data-path-to-node=\"51\"><span class=\"ez-toc-section\" id=\"Phase_6_Test_Your_First_Chat_Message\"><\/span>Phase 6: Test Your First Chat Message<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"52\">Now I can finally chat! I type into the chat input field at the bottom of the interface: <b data-path-to-node=\"52\" data-index-in-node=\"80\">&#8220;Why is the sky blue?&#8221;<\/b> and press Enter. 
At the top left, a language model that you have already downloaded with Ollama should already be selected.<\/p>\n<p data-path-to-node=\"53\">The model should now generate an answer. You see the answer in real-time as it is written \u2013 just like with ChatGPT. The GPU of the AI TOP ATOM works in the background and calculates the answer. Depending on the model size and complexity of the request, this may take a few seconds.<\/p>\n<div id=\"attachment_1890\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI-1024x980.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1890\" class=\"wp-image-1890 size-large\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI-1024x980.png\" alt=\"GIGABYTE AI TOP ATOM - Open-WebUI Interface\" width=\"1024\" height=\"980\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI-1024x980.png 1024w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI-300x287.png 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI-768x735.png 768w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI-1080x1033.png 1080w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI.png 1295w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><p id=\"caption-attachment-1890\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; Open-WebUI Interface<\/p><\/div>\n<p data-path-to-node=\"54\">If everything works, you now have a complete chat solution that can be used by any computer in the network. 
Just open the IP address of the AI TOP ATOM in the browser and get started!<\/p>\n<h3 data-path-to-node=\"56\"><span class=\"ez-toc-section\" id=\"Trying_Out_More_Models\"><\/span>Trying Out More Models<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"57\">The beauty of Open WebUI is the wide selection of available models. Via the model dropdown, you can download more models from the <a href=\"https:\/\/ollama.com\/library\" target=\"_blank\" rel=\"noopener\">Ollama Library<\/a>. For example, I have also tested the following models:<\/p>\n<ul data-path-to-node=\"58\">\n<li>\n<p data-path-to-node=\"58,0,0\"><b data-path-to-node=\"58,0,0\" data-index-in-node=\"4\">llama3.1:8b<\/b> &#8211; Very versatile and fast<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"58,1,0\"><b data-path-to-node=\"58,1,0\" data-index-in-node=\"4\">codellama:13b<\/b> &#8211; Excels at code generation<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"58,2,0\"><b data-path-to-node=\"58,2,0\" data-index-in-node=\"4\">qwen2.5:32b<\/b> &#8211; Large and very capable, well served by the unified memory of the ATOM<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"58,3,0\"><b data-path-to-node=\"58,3,0\" data-index-in-node=\"4\">phi3.5:3.8b<\/b> &#8211; Compact and fast, perfect for simpler tasks<\/p>\n<\/li>\n<\/ul>\n<p data-path-to-node=\"59\">Each model has its strengths. Just try out which one best fits your requirements. 
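<\/p>\n<p data-path-to-node=\"59\">Alternatively, you can pull models in advance in a terminal on the AI TOP ATOM with the Ollama CLI \u2013 Open WebUI then lists them automatically. For example, with one of the models above:<\/p>\n<p data-path-to-node=\"59\"><strong>Command:<\/strong> <code>ollama pull llama3.1:8b<\/code><\/p>\n<p data-path-to-node=\"59\">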
The models are stored by the Ollama server on the host, so you don&#8217;t have to download them again every time the container restarts.<\/p>\n<h3 data-path-to-node=\"61\"><span class=\"ez-toc-section\" id=\"Container_Management_Start_Stop_Restart\"><\/span>Container Management: Start, Stop, Restart<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"62\">If you want to stop the container (e.g., to free up resources), use:<\/p>\n<p data-path-to-node=\"62\"><strong>Command:<\/strong><code data-path-to-node=\"63\">docker stop open-webui<\/code><\/p>\n<p data-path-to-node=\"64\">To start it again:<\/p>\n<p data-path-to-node=\"64\"><strong>Command:<\/strong><code data-path-to-node=\"65\">docker start open-webui<\/code><\/p>\n<p data-path-to-node=\"66\">To restart the container:<\/p>\n<p data-path-to-node=\"66\"><strong>Command:<\/strong><code data-path-to-node=\"67\">docker restart open-webui<\/code><\/p>\n<p data-path-to-node=\"68\">To check the status:<\/p>\n<p data-path-to-node=\"68\"><strong>Command:<\/strong><code data-path-to-node=\"69\">docker ps -a | grep open-webui<\/code><\/p>\n<p data-path-to-node=\"70\">The persistent volume remains intact even if the container is stopped. All chats and settings are preserved, and the models remain with your Ollama installation.<\/p>\n<h3 data-path-to-node=\"72\"><span class=\"ez-toc-section\" id=\"Troubleshooting_Common_Problems_and_Solutions\"><\/span>Troubleshooting: Common Problems and Solutions<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"73\">During my time with Open WebUI on the AI TOP ATOM, I encountered some typical problems. Here are the most common ones and how I solved them:<\/p>\n<ul data-path-to-node=\"74\">\n<li>\n<p data-path-to-node=\"74,0,0\"><b data-path-to-node=\"74,0,0\" data-index-in-node=\"0\">&#8220;Permission denied&#8221; with Docker commands:<\/b> The user is not in the Docker group. 
Execute <code data-path-to-node=\"74,0,0\" data-index-in-node=\"80\">sudo usermod -aG docker $USER<\/code> and restart the terminal session.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"74,1,0\"><b data-path-to-node=\"74,1,0\" data-index-in-node=\"0\">Page does not load in browser:<\/b> Check if the container is running with <code data-path-to-node=\"74,1,0\" data-index-in-node=\"50\">docker ps<\/code>. During the first start, it can take a few minutes. Also check the logs with <code data-path-to-node=\"74,1,0\" data-index-in-node=\"100\">docker logs open-webui<\/code>.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"74,2,0\"><b data-path-to-node=\"74,2,0\" data-index-in-node=\"0\">Model download fails:<\/b> Check the internet connection and available storage space. You can check the storage space with <code data-path-to-node=\"74,2,0\" data-index-in-node=\"60\">df -h<\/code>.<\/p>\n<\/li>\n<li><b data-path-to-node=\"74,4,0\" data-index-in-node=\"0\">Port 3000 already occupied:<\/b> Use a different host port, e.g., <code data-path-to-node=\"74,4,0\" data-index-in-node=\"40\">-p 3001:8080<\/code> in the docker run command. Open WebUI is then accessible under port 3001.<\/li>\n<li>\n<p data-path-to-node=\"74,5,0\"><b data-path-to-node=\"74,5,0\" data-index-in-node=\"0\">Slow inference:<\/b> The model might be too large for the available GPU memory. Try a smaller model or check the GPU usage with <code data-path-to-node=\"74,5,0\" data-index-in-node=\"60\">nvidia-smi<\/code>.<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"74,6,0\"><b data-path-to-node=\"74,6,0\" data-index-in-node=\"0\">Access from the network does not work:<\/b> Check the firewall settings and ensure that the port is open. 
Also check if both computers are in the same network.<\/p>\n<\/li>\n<\/ul>\n<h3 data-path-to-node=\"76\"><span class=\"ez-toc-section\" id=\"Updating_the_Container\"><\/span>Updating the Container<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"77\">If Open WebUI displays an update notification or you want to use the latest version, update the container as follows:<\/p>\n<p data-path-to-node=\"77\"><strong>Command:<\/strong><\/p>\n<pre data-path-to-node=\"78\"><code data-path-to-node=\"78\">docker stop open-webui\r\ndocker rm open-webui\r\ndocker pull ghcr.io\/open-webui\/open-webui:ollama\r\ndocker run -d -p 0.0.0.0:8080:8080 -v open-webui:\/app\/backend\/data -v open-webui-ollama:\/root\/.ollama --name open-webui ghcr.io\/open-webui\/open-webui:ollama<\/code><\/pre>\n<p data-path-to-node=\"79\">The persistent volumes remain intact, so all your data, chats, and models are available even after the update.<\/p>\n<h3 data-path-to-node=\"81\"><span class=\"ez-toc-section\" id=\"Rollback_Removing_Everything_Again\"><\/span>Rollback: Removing Everything Again<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"82\">If you want to completely remove Open WebUI, execute the following commands:<\/p>\n<p data-path-to-node=\"82\"><strong>Command:<\/strong><code data-path-to-node=\"83\">docker stop open-webui<\/code><\/p>\n<p data-path-to-node=\"82\"><strong>Command:<\/strong><code data-path-to-node=\"83\">docker rm open-webui<\/code><\/p>\n<p data-path-to-node=\"84\">To also remove the container image:<\/p>\n<p data-path-to-node=\"84\"><strong>Command:<\/strong> <code data-path-to-node=\"85\">docker rmi ghcr.io\/open-webui\/open-webui:ollama<\/code><\/p>\n<p data-path-to-node=\"86\">To also remove the persistent volumes (Attention: All chats, settings, and models will be lost!):<\/p>\n<p data-path-to-node=\"86\"><strong>Command:<\/strong> <code data-path-to-node=\"87\">docker volume rm open-webui 
open-webui-ollama<\/code><\/p>\n<blockquote data-path-to-node=\"88\">\n<p data-path-to-node=\"88,0\"><b data-path-to-node=\"88,0\" data-index-in-node=\"0\">Important Note:<\/b> Removing the volumes deletes all your data, chats, and downloaded models. Make sure you really want to remove everything before executing these commands.<\/p>\n<\/blockquote>\n<h3 data-path-to-node=\"90\"><span class=\"ez-toc-section\" id=\"Summary_Conclusion\"><\/span>Summary &amp; Conclusion<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"91\">Installing Open WebUI on the Gigabyte AI TOP ATOM is surprisingly straightforward thanks to Docker. In less than 20 minutes, I had a complete chat solution running that can be used from any computer on the network. Most of my time went into discovering that on the DGX Spark platform, a docker pull apparently only works reliably with a single concurrent download, not the default of three in parallel. After the adjustment described above, everything ran without problems.<\/p>\n<p data-path-to-node=\"92\">What particularly excites me: Open WebUI and the integrated Ollama work together seamlessly. No complex configuration, no API calls \u2013 just open it in the browser and chat. The GPU performance of the Blackwell architecture is fully utilized, and responses arrive quickly and accurately.<\/p>\n<p data-path-to-node=\"93\">I also find the persistent volumes especially practical: all chats, settings, and models are preserved even when the container is restarted. This makes management much easier than a manual installation.<\/p>\n<p data-path-to-node=\"94\">For teams or families, this is a perfect solution: a central server with full GPU power that everyone can access via the browser. 
No local installations needed, no complex configurations &#8211; just open the IP address in the browser and get started.<\/p>\n<p data-path-to-node=\"95\">If you have questions or encounter problems, feel free to check the <a href=\"https:\/\/docs.nvidia.com\/dgx\/dgx-spark\/\" target=\"_blank\" rel=\"noopener\">official NVIDIA DGX Spark documentation<\/a>, the <a href=\"https:\/\/github.com\/open-webui\/open-webui\" target=\"_blank\" rel=\"noopener\">Open WebUI documentation<\/a>, or the <a href=\"https:\/\/ollama.com\" target=\"_blank\" rel=\"noopener\">Ollama documentation<\/a>. The community is very helpful, and most problems can be solved quickly.<\/p>\n<p data-path-to-node=\"96\">Good luck experimenting with Open WebUI on your Gigabyte AI TOP ATOM \u2013 I&#8217;m excited to see what interesting chats and applications you develop with it!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>After showing you in the last post how to install Ollama on the Gigabyte AI TOP ATOM and configure it for network access, the next logical step follows: a user-friendly chat interface that everyone in the network can use. 
Open WebUI is exactly that \u2013 a self-hosted, extensible AI interface that works completely offline and [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1891,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"categories":[873,162,50],"tags":[796,811,353,786,714,846,305,789,791,306,364,795,794],"class_list":["post-1903","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-gigabyte-ai-top-atom","category-large-language-models-en","category-top-story-en","tag-ai-chat-interface","tag-blackwell-gpu","tag-docker","tag-gigabyte-ai-top-atom","tag-gpu-acceleration","tag-large-language-models","tag-llm-en","tag-local-llm","tag-nvidia-dgx-spark","tag-ollama-en","tag-open-webui-en","tag-private-ai-network","tag-self-hosted-ai","et-has-post-format-content","et_post_format-et-post-format-standard"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Open WebUI on the Gigabyte AI TOP ATOM: ChatGPT-like Chat Interface for Your Own Network - Exploring the Future: Inside the AI Box<\/title>\n<meta name=\"description\" content=\"Learn how to install Open WebUI with integrated Ollama on the Gigabyte AI TOP ATOM. 
Create your own local ChatGPT-like interface for your entire network using Docker.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Open WebUI on the Gigabyte AI TOP ATOM: ChatGPT-like Chat Interface for Your Own Network - Exploring the Future: Inside the AI Box\" \/>\n<meta property=\"og:description\" content=\"Learn how to install Open WebUI with integrated Ollama on the Gigabyte AI TOP ATOM. Create your own local ChatGPT-like interface for your entire network using Docker.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/\" \/>\n<meta property=\"og:site_name\" content=\"Exploring the Future: Inside the AI Box\" \/>\n<meta property=\"article:published_time\" content=\"2025-12-21T22:20:26+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-05T06:44:29+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1295\" \/>\n\t<meta property=\"og:image:height\" content=\"1239\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Maker\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Ingmar_Stapel\" \/>\n<meta name=\"twitter:site\" content=\"@Ingmar_Stapel\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Maker\" \/>\n\t<meta 
name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/\"},\"author\":{\"name\":\"Maker\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/#\\\/schema\\\/person\\\/cc91d08618b3feeef6926591b465eab1\"},\"headline\":\"Open WebUI on the Gigabyte AI TOP ATOM: ChatGPT-like Chat Interface for Your Own Network\",\"datePublished\":\"2025-12-21T22:20:26+00:00\",\"dateModified\":\"2026-01-05T06:44:29+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/\"},\"wordCount\":2042,\"commentCount\":0,\"image\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/ai-box.eu\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI.png\",\"keywords\":[\"AI Chat Interface\",\"Blackwell GPU\",\"Docker\",\"Gigabyte AI TOP ATOM\",\"GPU acceleration\",\"Large Language Models\",\"LLM\",\"Local LLM\",\"NVIDIA DGX Spark\",\"Ollama\",\"Open-WebUi\",\"Private AI Network\",\"Self-hosted AI\"],\"articleSection\":[\"Gigabyte AI TOP ATOM\",\"Large Language Models\",\"Top 
story\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/\",\"url\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/\",\"name\":\"Open WebUI on the Gigabyte AI TOP ATOM: ChatGPT-like Chat Interface for Your Own Network - Exploring the Future: Inside the AI Box\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/ai-box.eu\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI.png\",\"datePublished\":\"2025-12-21T22:20:26+00:00\",\"dateModified\":\"2026-01-05T06:44:29+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/#\\\/schema\\\/person\\\/cc91d08618b3feeef6926591b465eab1\"},\"description\":\"Learn how to install Open WebUI with integrated Ollama on the Gigabyte AI TOP ATOM. 
Create your own local ChatGPT-like interface for your entire network using Docker.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/#primaryimage\",\"url\":\"https:\\\/\\\/ai-box.eu\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI.png\",\"contentUrl\":\"https:\\\/\\\/ai-box.eu\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI.png\",\"width\":1295,\"height\":1239,\"caption\":\"GIGABYTE AI TOP ATOM - Open-WebUI Interface\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/top-story-en\\\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\\\/1903\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Start\",\"item\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Open WebUI on the Gigabyte AI TOP ATOM: ChatGPT-like Chat Interface for Your Own Network\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/#website\",\"url\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/\",\"name\":\"Exploring the Future: Inside the AI Box\",\"description\":\"Inside the AI Box, we share our experiences and discoveries in the world of artificial 
intelligence.\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/#\\\/schema\\\/person\\\/cc91d08618b3feeef6926591b465eab1\",\"name\":\"Maker\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/e96b93fc3c7e50c1f21c5c6b1f146dc4867936141360830b328947b32cacf93a?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/e96b93fc3c7e50c1f21c5c6b1f146dc4867936141360830b328947b32cacf93a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/e96b93fc3c7e50c1f21c5c6b1f146dc4867936141360830b328947b32cacf93a?s=96&d=mm&r=g\",\"caption\":\"Maker\"},\"description\":\"I live in Bavaria near Munich. In my head I always have many topics and try out especially in the field of Internet new media much in my spare time. I write on the blog because it makes me fun to report about the things that inspire me. I am happy about every comment, about suggestion and very about questions.\",\"sameAs\":[\"https:\\\/\\\/ai-box.eu\"],\"url\":\"https:\\\/\\\/ai-box.eu\\\/en\\\/author\\\/ingmars\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Open WebUI on the Gigabyte AI TOP ATOM: ChatGPT-like Chat Interface for Your Own Network - Exploring the Future: Inside the AI Box","description":"Learn how to install Open WebUI with integrated Ollama on the Gigabyte AI TOP ATOM. 
Create your own local ChatGPT-like interface for your entire network using Docker.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/","og_locale":"en_US","og_type":"article","og_title":"Open WebUI on the Gigabyte AI TOP ATOM: ChatGPT-like Chat Interface for Your Own Network - Exploring the Future: Inside the AI Box","og_description":"Learn how to install Open WebUI with integrated Ollama on the Gigabyte AI TOP ATOM. Create your own local ChatGPT-like interface for your entire network using Docker.","og_url":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/","og_site_name":"Exploring the Future: Inside the AI Box","article_published_time":"2025-12-21T22:20:26+00:00","article_modified_time":"2026-01-05T06:44:29+00:00","og_image":[{"width":1295,"height":1239,"url":"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI.png","type":"image\/png"}],"author":"Maker","twitter_card":"summary_large_image","twitter_creator":"@Ingmar_Stapel","twitter_site":"@Ingmar_Stapel","twitter_misc":{"Written by":"Maker","Est. 
reading time":"11 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#article","isPartOf":{"@id":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/"},"author":{"name":"Maker","@id":"https:\/\/ai-box.eu\/en\/#\/schema\/person\/cc91d08618b3feeef6926591b465eab1"},"headline":"Open WebUI on the Gigabyte AI TOP ATOM: ChatGPT-like Chat Interface for Your Own Network","datePublished":"2025-12-21T22:20:26+00:00","dateModified":"2026-01-05T06:44:29+00:00","mainEntityOfPage":{"@id":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/"},"wordCount":2042,"commentCount":0,"image":{"@id":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#primaryimage"},"thumbnailUrl":"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI.png","keywords":["AI Chat Interface","Blackwell GPU","Docker","Gigabyte AI TOP ATOM","GPU acceleration","Large Language Models","LLM","Local LLM","NVIDIA DGX Spark","Ollama","Open-WebUi","Private AI Network","Self-hosted AI"],"articleSection":["Gigabyte AI TOP ATOM","Large Language Models","Top 
story"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/","url":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/","name":"Open WebUI on the Gigabyte AI TOP ATOM: ChatGPT-like Chat Interface for Your Own Network - Exploring the Future: Inside the AI Box","isPartOf":{"@id":"https:\/\/ai-box.eu\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#primaryimage"},"image":{"@id":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#primaryimage"},"thumbnailUrl":"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI.png","datePublished":"2025-12-21T22:20:26+00:00","dateModified":"2026-01-05T06:44:29+00:00","author":{"@id":"https:\/\/ai-box.eu\/en\/#\/schema\/person\/cc91d08618b3feeef6926591b465eab1"},"description":"Learn how to install Open WebUI with integrated Ollama on the Gigabyte AI TOP ATOM. 
Create your own local ChatGPT-like interface for your entire network using Docker.","breadcrumb":{"@id":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#primaryimage","url":"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI.png","contentUrl":"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-OpenWebUI-GUI.png","width":1295,"height":1239,"caption":"GIGABYTE AI TOP ATOM - Open-WebUI Interface"},{"@type":"BreadcrumbList","@id":"https:\/\/ai-box.eu\/en\/top-story-en\/open-webui-on-the-gigabyte-ai-top-atom-chatgpt-like-chat-interface-for-your-own-network\/1903\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Start","item":"https:\/\/ai-box.eu\/en\/"},{"@type":"ListItem","position":2,"name":"Open WebUI on the Gigabyte AI TOP ATOM: ChatGPT-like Chat Interface for Your Own Network"}]},{"@type":"WebSite","@id":"https:\/\/ai-box.eu\/en\/#website","url":"https:\/\/ai-box.eu\/en\/","name":"Exploring the Future: Inside the AI Box","description":"Inside the AI Box, we share our experiences and discoveries in the world of artificial 
intelligence.","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/ai-box.eu\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/ai-box.eu\/en\/#\/schema\/person\/cc91d08618b3feeef6926591b465eab1","name":"Maker","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/e96b93fc3c7e50c1f21c5c6b1f146dc4867936141360830b328947b32cacf93a?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/e96b93fc3c7e50c1f21c5c6b1f146dc4867936141360830b328947b32cacf93a?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e96b93fc3c7e50c1f21c5c6b1f146dc4867936141360830b328947b32cacf93a?s=96&d=mm&r=g","caption":"Maker"},"description":"I live in Bavaria near Munich. In my head I always have many topics and try out especially in the field of Internet new media much in my spare time. I write on the blog because it makes me fun to report about the things that inspire me. 
I am happy about every comment, about suggestion and very about questions.","sameAs":["https:\/\/ai-box.eu"],"url":"https:\/\/ai-box.eu\/en\/author\/ingmars\/"}]}},"_links":{"self":[{"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/posts\/1903","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/comments?post=1903"}],"version-history":[{"count":6,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/posts\/1903\/revisions"}],"predecessor-version":[{"id":2118,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/posts\/1903\/revisions\/2118"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/media\/1891"}],"wp:attachment":[{"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/media?parent=1903"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/categories?post=1903"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ai-box.eu\/en\/wp-json\/wp\/v2\/tags?post=1903"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}