{"id":1898,"date":"2025-12-21T22:10:09","date_gmt":"2025-12-21T22:10:09","guid":{"rendered":"https:\/\/ai-box.eu\/?p=1898"},"modified":"2025-12-21T22:15:35","modified_gmt":"2025-12-21T22:15:35","slug":"ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network","status":"publish","type":"post","link":"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/","title":{"rendered":"Ollama on the Gigabyte AI TOP ATOM: Central LLM Server for the Entire Network"},"content":{"rendered":"<p data-path-to-node=\"1\">Anyone experimenting with Large Language Models knows the problem: local hardware is often insufficient to run larger models smoothly. For me, the solution was clear: I use a <b data-path-to-node=\"1\" data-index-in-node=\"195\">Gigabyte AI TOP ATOM<\/b> with its powerful Blackwell GPU as a dedicated Ollama server in the network. This allows all computers in my local network to access the Ollama API and use the full GPU power without each individual machine having to install the models locally. For my experience report here on my blog, I was loaned the <b data-path-to-node=\"1\" data-index-in-node=\"195\">Gigabyte AI TOP ATOM<\/b> by the company <a href=\"https:\/\/www.mifcom.de\/\" target=\"_blank\" rel=\"noopener\">MIFCOM<\/a>.<\/p>\n<p data-path-to-node=\"2\">In this post, I will show you how I installed <b data-path-to-node=\"2\" data-index-in-node=\"30\">Ollama<\/b> on my Gigabyte AI TOP ATOM and configured it so that Ollama is accessible throughout the entire network. Since the system is based on the same platform as the <b data-path-to-node=\"2\" data-index-in-node=\"88\">NVIDIA DGX Spark<\/b>, the official NVIDIA playbooks work just as reliably here. 
The best part: the installation is done in 10-15 minutes and is completely usable in the network.<\/p>\n<div id=\"attachment_1895\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM_OLLAMA_logo-1024x796.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1895\" class=\"wp-image-1895 size-large\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM_OLLAMA_logo-1024x796.png\" alt=\"GIGABYTE AI TOP ATOM - OLLAMA logo\" width=\"1024\" height=\"796\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM_OLLAMA_logo-1024x796.png 1024w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM_OLLAMA_logo-300x233.png 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM_OLLAMA_logo-768x597.png 768w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM_OLLAMA_logo-1080x840.png 1080w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM_OLLAMA_logo.png 1152w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><p id=\"caption-attachment-1895\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; OLLAMA logo<\/p><\/div>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/#The_Basic_Idea_Central_Ollama_Server_for_the_Entire_Network\" >The Basic Idea: Central Ollama Server for the Entire Network<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/#Phase_1_Installing_Ollama_on_the_Gigabyte_AI_TOP_ATOM\" >Phase 1: Installing Ollama on the Gigabyte AI TOP ATOM<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-3\" 
href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/#Phase_2_Downloading_the_first_Language_Model\" >Phase 2: Downloading the first Language Model<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/#Phase_3_Configuring_Ollama_for_Network_Access\" >Phase 3: Configuring Ollama for Network Access<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/#Phase_4_Testing_API_Access_from_the_Network\" >Phase 4: Testing API Access from the Network<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/#Phase_5_Testing_further_API_Endpoints\" >Phase 5: Testing further API Endpoints<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/#Trying_out_other_Models\" >Trying out other Models<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/#Troubleshooting_Common_Problems_and_Solutions\" >Troubleshooting: Common Problems and Solutions<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/#Rollback_Deactivating_Network_Access_again\" >Rollback: Deactivating Network Access again<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/#Rollback_Deleting_Ollama_again\" >Rollback: Deleting Ollama again<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-11\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/#Summary_Conclusion\" >Summary &amp; Conclusion<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-12\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/ollama-on-the-gigabyte-ai-top-atom-central-llm-server-for-the-entire-network\/1898\/#Next_Step_Open_WebUI_for_a_user-friendly_Chat_Interface\" >Next Step: Open WebUI for a user-friendly Chat Interface<\/a><\/li><\/ul><\/li><\/ul><\/nav><\/div>\n<h3 data-path-to-node=\"4\"><span class=\"ez-toc-section\" id=\"The_Basic_Idea_Central_Ollama_Server_for_the_Entire_Network\"><\/span>The Basic Idea: Central Ollama Server for the Entire Network<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"5\">Before I dive into the technical details, 
an important point: with this configuration, Ollama runs directly on the Gigabyte AI TOP ATOM and utilizes the full GPU performance of the Blackwell architecture. I configure Ollama to listen on all network interfaces and expose port 11434 in the local network. This way, all computers in my network \u2013 whether laptop, desktop, or other devices \u2013 can directly access the Ollama API and use the models without each computer needing to install them locally, which is often not possible due to the hardware requirements of such open-source LLMs.<\/p>\n<p data-path-to-node=\"6\">This is particularly practical for teams or if you have several PCs in use in your household or small company. The goal is to set up a central server with full GPU power that everyone in your own network can access. Since I personally see the greatest benefit in locally operated LLMs, I&#8217;m starting with this first. Of course, with this setup, you should ensure that your network is trustworthy, as the Ollama API is accessible without authentication. For a private home network or a small protected company network, however, this is still a perfect solution to get started.<\/p>\n<p data-path-to-node=\"6\"><strong>What you need:<\/strong><\/p>\n<ul data-path-to-node=\"7\">\n<li>\n<p data-path-to-node=\"7,0,0\">A Gigabyte AI TOP ATOM, ASUS Ascent, MSI EdgeXpert (or NVIDIA DGX Spark) connected to the network<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"7,1,0\">A connected monitor or terminal access to the AI TOP ATOM<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"7,1,0\">A computer on the same network for API testing<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"7,3,0\">Basic knowledge of terminal commands and cURL (for API testing)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"7,4,0\">The IP address of your AI TOP ATOM in the network (found with <code data-path-to-node=\"7,4,0\" data-index-in-node=\"95\">ip addr<\/code> or <code data-path-to-node=\"7,4,0\" data-index-in-node=\"108\">hostname -I<\/code>)<\/p>\n<\/li>\n<\/ul>\n<h3 data-path-to-node=\"9\"><span class=\"ez-toc-section\" id=\"Phase_1_Installing_Ollama_on_the_Gigabyte_AI_TOP_ATOM\"><\/span>Phase 1: Installing Ollama on the Gigabyte AI TOP ATOM<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"10\">For the rest of this guide, I&#8217;m assuming you&#8217;re sitting directly in front of the AI TOP ATOM with a monitor, keyboard, and mouse connected. First, I check if CUDA and possibly Ollama are already installed. 
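<p>If you want to reduce the exposure a little further, you can restrict the port to your LAN instead of opening it to everything. A minimal example with <code>ufw</code>, assuming your LAN uses the subnet <code>192.168.2.0/24</code> (adjust this to your own network):</p>
<p><strong>Command:</strong> <code>sudo ufw allow from 192.168.2.0/24 to any port 11434 proto tcp</code></p>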
<p><strong>What you need:</strong></p>
<ul>
<li>A Gigabyte AI TOP ATOM, ASUS Ascent, MSI EdgeXpert (or NVIDIA DGX Spark) connected to the network</li>
<li>A connected monitor or terminal access to the AI TOP ATOM</li>
<li>A computer on the same network for API testing</li>
<li>Basic knowledge of terminal commands and cURL (for API testing)</li>
<li>The IP address of your AI TOP ATOM in the network (found with <code>ip addr</code> or <code>hostname -I</code>)</li>
</ul>

<h3>Phase 1: Installing Ollama on the Gigabyte AI TOP ATOM</h3>

<p>For the rest of this guide, I assume you are sitting directly in front of the AI TOP ATOM with a monitor, keyboard, and mouse connected. First, I check whether CUDA and possibly Ollama are already installed. To do this, I open a terminal on the AI TOP ATOM and run the following two commands once.</p>

<p>The following command shows you whether the NVIDIA driver and CUDA stack are already installed.</p>

<p><strong>Command:</strong> <code>nvidia-smi</code></p>

<p>You should now see the following view in the terminal window.</p>

<div class="wp-caption alignnone"><img src="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-nvidia_smi-1024x694.png" alt="GIGABYTE AI TOP ATOM - NVIDIA-SMI" width="1024" height="694" /><p class="wp-caption-text">GIGABYTE AI TOP ATOM – NVIDIA-SMI</p></div>

<p>With the next command, you can check whether Ollama is already installed.</p>

<p><strong>Command:</strong> <code>ollama --version</code></p>

<p>If you see a version number, you can jump directly to Phase 3. If the command returns "command not found," you first need to install Ollama on your system.</p>

<p>The installation is very simple. I use the official Ollama installation script:</p>

<p><strong>Command:</strong> <code>curl -fsSL https://ollama.com/install.sh | sh</code></p>

<p>The script downloads the latest version and installs both the Ollama binary and the service components. The nice thing is that Ollama is provided for the NVIDIA platform out of the box, so the command is the same in the x86 world and the ARM world. I simply wait until the installation is complete – it usually takes only a few minutes. The output then confirms that everything was installed successfully.</p>

<div class="wp-caption alignnone"><img src="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-ollama-installation-02-1024x452.png" alt="GIGABYTE AI TOP ATOM - Ollama installation" width="1024" height="452" /><p class="wp-caption-text">GIGABYTE AI TOP ATOM – Ollama installation</p></div>
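<p>If you want to double-check that the service really came up and found the GPU, you can query systemd and the service log. A quick sanity check, assuming the systemd unit created by the install script is named <code>ollama</code>:</p>
<pre><code># Is the Ollama service running?
systemctl is-active ollama

# Look for GPU/CUDA detection (or errors) in the most recent log lines
sudo journalctl -u ollama -n 50 --no-pager | grep -iE 'gpu|cuda|error'
</code></pre>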
<h3>Phase 2: Downloading the first Language Model</h3>

<p>After the installation, I download a language model. On the AI TOP ATOM, I recommend <b>Qwen2.5 32B</b>: a model of this size fits comfortably into the unified memory and makes good use of the Blackwell GPU. The following command downloads the model and makes it available in Ollama.</p>

<p><strong>Command:</strong> <code>ollama pull qwen2.5:32b</code></p>

<p>You should also install the following three models, as I find them very good.</p>

<p><strong>Command:</strong> <code>ollama pull nemotron-3-nano</code></p>
<p><strong>Command:</strong> <code>ollama pull qwen3-vl</code></p>
<p><strong>Command:</strong> <code>ollama pull gpt-oss:120b</code></p>

<p>Depending on your internet speed, downloading the models may take a few minutes. The models vary in size and took up approx. XXX GB of storage for me. You will see a progress bar during each download. If everything was successful, the output should look something like this:</p>
<div class="wp-caption alignnone"><img src="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-ollama-pull-qwen2-5-1024x452.png" alt="GIGABYTE AI TOP ATOM - Ollama pull qwen 2.5" width="1024" height="452" /><p class="wp-caption-text">GIGABYTE AI TOP ATOM – Ollama pull qwen 2.5</p></div>

<p><b>Important Note:</b> Make sure there is enough storage space available on your AI TOP ATOM. If a download fails with a storage error, you can switch to smaller models like <code>qwen2.5:7b</code>, which require significantly less space.</p>
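<p>Before pulling the really large models, it can be worth checking the free disk space first. For a service install, Ollama stores models under <code>/usr/share/ollama/.ollama/models</code> by default (an assumption about the default path; adjust it if you have pointed <code>OLLAMA_MODELS</code> elsewhere):</p>
<p><strong>Command:</strong> <code>df -h /usr/share/ollama</code></p>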
<p>To list all downloaded models, the following command is available to you:</p>
<p><strong>Command:</strong> <code>ollama list</code></p>
<p>Here is how the overview of the language models provided by Ollama now looks for me.</p>

<div class="wp-caption alignnone"><img src="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-ollama-list-1024x276.png" alt="GIGABYTE AI TOP ATOM - Ollama list" width="1024" height="276" /><p class="wp-caption-text">GIGABYTE AI TOP ATOM – Ollama list</p></div>

<h3>Phase 3: Configuring Ollama for Network Access</h3>

<p>Now comes the crucial step: I configure Ollama so that it is reachable from the whole network. By default, Ollama only listens on localhost (127.0.0.1), which means only requests from the AI TOP ATOM itself are accepted. To allow access from other machines on the network, the environment variable <code>OLLAMA_HOST</code> must be set.</p>

<p>First, I check the current IP address of my AI TOP ATOM in the network:</p>
<p><strong>Command:</strong> <code>hostname -I</code></p>
<p>Or alternatively:</p>
<p><strong>Command:</strong> <code>ip addr show | grep "inet "</code></p>

<p>I note down the IP address (e.g., <code>192.168.2.100</code>). Now I configure Ollama so that it listens on all network interfaces of the machine. To do this, I edit the systemd service file:</p>
<p><strong>Command:</strong> <code>sudo systemctl edit ollama</code></p>

<p>This command opens an editor. I insert the following configuration:</p>
<pre><code>[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
</code></pre>
<p>The configuration should now look as shown in the following image.</p>

<div class="wp-caption alignnone"><img src="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-ollama-network-access-1024x649.png" alt="GIGABYTE AI TOP ATOM - Ollama network-access" width="1024" height="649" /><p class="wp-caption-text">GIGABYTE AI TOP ATOM – Ollama network-access</p></div>

<p>The setting <code>OLLAMA_HOST=0.0.0.0:11434</code> means that Ollama listens on all network interfaces on port 11434. I save the file and restart the Ollama service:</p>
<p><strong>Command:</strong> <code>sudo systemctl daemon-reload</code></p>
<p><strong>Command:</strong> <code>sudo systemctl restart ollama</code></p>
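<p>Directly on the AI TOP ATOM, you can verify that the new listener is active before testing from another machine. A quick check with <code>ss</code> (part of iproute2, preinstalled on most Ubuntu-based systems) should show Ollama bound to <code>0.0.0.0:11434</code> or <code>*:11434</code> instead of <code>127.0.0.1:11434</code>:</p>
<p><strong>Command:</strong> <code>sudo ss -ltnp | grep 11434</code></p>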
<p>To check whether Ollama is now reachable in the network, I open the following URL in a browser on another computer in the network:</p>
<p><strong>URL:</strong> <code>http://&lt;IP-Address-AI-TOP-ATOM&gt;:11434/api/tags</code></p>

<p>Replace <code>&lt;IP-Address-AI-TOP-ATOM&gt;</code> with the IP address of your AI TOP ATOM. If you get back a list of available models, the network configuration is working correctly.</p>

<h3>Phase 4: Testing API Access from the Network</h3>

<p>Now I can access the Ollama API from any computer in my network. To test that everything works, I execute the following command from another machine in the network (again, substitute the IP address of the machine Ollama runs on):</p>

<pre><code>curl http://192.168.2.100:11434/api/chat -d '{
  "model": "qwen2.5:32b",
  "messages": [{
    "role": "user",
    "content": "Write me a haiku about GPUs and AI."
  }],
  "stream": false
}'
</code></pre>

<p>If everything is configured correctly, I get back a JSON response that looks something like this:</p>

<pre><code>{
  "model": "qwen2.5:32b",
  "created_at": "2024-01-15T12:30:45.123Z",
  "message": {
    "role": "assistant",
    "content": "Silicon flows through circuits\nDreams become reality\nAI comes to life"
  },
  "done": true
}
</code></pre>
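<p>For scripting, the raw JSON is often more than you want. If <code>jq</code> is installed on the client (an assumption; install it via your package manager), you can extract just the answer text:</p>
<pre><code># Same request as above, but print only the assistant's reply
curl -s http://192.168.2.100:11434/api/chat -d '{
  "model": "qwen2.5:32b",
  "messages": [{"role": "user", "content": "Write me a haiku about GPUs and AI."}],
  "stream": false
}' | jq -r '.message.content'
</code></pre>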
<p>If you get an error message like "Connection refused", check the following:</p>
<ul>
<li>Is the IP address correct? Check with <code>hostname -I</code> on the AI TOP ATOM</li>
<li>Is Ollama running? Check with <code>sudo systemctl status ollama</code></li>
<li>Is the firewall active? If so, you need to open port 11434: <code>sudo ufw allow 11434</code></li>
<li>Are both computers on the same network?</li>
</ul>

<h3>Phase 5: Testing further API Endpoints</h3>

<p>To make sure everything works end to end, I test further API functions. First, I list all available models (again, with your IP address):</p>
<p><strong>Command:</strong> <code>curl http://192.168.2.100:11434/api/tags</code></p>

<p>This should show all downloaded models. Then I test streaming, which is particularly useful for longer responses:</p>

<pre><code>curl -N http://192.168.2.100:11434/api/chat -d '{
  "model": "qwen2.5:32b",
  "messages": [{"role": "user", "content": "Why does the sky look blue when no clouds are visible?"}],
  "stream": true
}'
</code></pre>

<p>With <code>"stream": true</code> you see the answer in real time as it is generated. This is particularly practical when you generate longer texts and don't want to wait until everything is finished.</p>
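<p>A streamed response arrives as one JSON object per line, each carrying a small fragment of the text in <code>message.content</code>. If you want the fragments reassembled into readable output on the client, a small <code>jq</code> pipeline works (again assuming <code>jq</code> is installed):</p>
<pre><code># -N disables curl's buffering; jq -rj prints each fragment without adding newlines
curl -sN http://192.168.2.100:11434/api/chat -d '{
  "model": "qwen2.5:32b",
  "messages": [{"role": "user", "content": "Why does the sky look blue when no clouds are visible?"}],
  "stream": true
}' | jq -rj '.message.content // empty'
echo
</code></pre>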
<h3>Trying out other Models</h3>

<p>The great thing about Ollama is the large selection of available models. After a successful installation, you can download more models from the <a href="https://ollama.com/library" target="_blank" rel="noopener">Ollama Library</a>. For example, I have also tested the following models:</p>

<p><strong>Command:</strong> <code>ollama pull llama3.1:8b</code></p>
<p><strong>Command:</strong> <code>ollama pull codellama:13b</code></p>
<p><strong>Command:</strong> <code>ollama pull phi3.5:3.8b</code></p>

<p>Each model has its strengths: <b>Llama3.1</b> is very versatile, <b>CodeLlama</b> shines at code generation, and <b>Phi3.5</b> is compact and fast. Just try out which model best suits your requirements – that's exactly the beauty of this setup.</p>

<h3>Troubleshooting: Common Problems and Solutions</h3>

<p>In my time with Ollama on the AI TOP ATOM, I have run into some typical problems. Here are the most common ones and how I solved them; a small health-check script follows the list:</p>

<ul>
<li><b>"Connection refused" when accessing from the network:</b> Check whether Ollama is listening on all interfaces (<code>sudo systemctl status ollama</code> shows the environment variables). If not, check the service configuration and restart Ollama.</li>
<li><b>Firewall blocking access:</b> Port 11434 must be open in the firewall. Open it with <code>sudo ufw allow 11434</code> or corresponding iptables rules.</li>
<li><b>Model download fails with a storage error:</b> Not enough storage space on the AI TOP ATOM. Either free up space or use a smaller model like <code>qwen2.5:7b</code>.</li>
<li><b>Ollama command not found after installation:</b> The installation path is not in PATH. Restart the terminal session, run <code>source ~/.bashrc</code>, or reboot the machine.</li>
<li><b>API returns a "model not found" error:</b> The model was not downloaded or the name is wrong. Use <code>ollama list</code> to see all available models.</li>
<li><b>Slow inference on the AI TOP ATOM:</b> The model is too large for the GPU memory. Either use a smaller model or check the GPU memory with <code>nvidia-smi</code>.</li>
</ul>
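<p>Several of these checks can be bundled into one small script. Here is a minimal sketch that tests the service and the API in one go; the IP address is an example, and the <code>systemctl</code> check is only meaningful when run directly on the AI TOP ATOM:</p>
<pre><code>#!/usr/bin/env bash
# Minimal Ollama health check (example IP - adjust to your server).
HOST=192.168.2.100

# 1) Is the service running? (only meaningful on the server itself)
systemctl is-active --quiet ollama && echo "service: running" || echo "service: not running / not local"

# 2) Is the API reachable over the network?
if curl -s --max-time 5 "http://${HOST}:11434/api/tags" > /dev/null; then
  echo "API reachable at ${HOST}:11434"
else
  echo "API NOT reachable - check firewall, OLLAMA_HOST and the IP address"
fi
</code></pre>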
<h3>Rollback: Deactivating Network Access again</h3>

<p>If you want to make Ollama available only locally again (only from localhost), remove the service override file:</p>
<p><strong>Command:</strong> <code>sudo rm /etc/systemd/system/ollama.service.d/override.conf</code></p>
<p><strong>Command:</strong> <code>sudo systemctl daemon-reload</code></p>
<p><strong>Command:</strong> <code>sudo systemctl restart ollama</code></p>

<p>Ollama will then run only on localhost again and will no longer be reachable from the network.</p>

<h3>Rollback: Deleting Ollama again</h3>

<p>If you want to completely uninstall Ollama from the AI TOP ATOM, execute the following commands on the system:</p>
<p><strong>Command:</strong> <code>sudo systemctl stop ollama</code></p>
<p><strong>Command:</strong> <code>sudo systemctl disable ollama</code></p>
<p><strong>Command:</strong> <code>sudo rm /usr/local/bin/ollama</code></p>
<p><strong>Command:</strong> <code>sudo rm -rf /usr/share/ollama</code></p>
<p><strong>Command:</strong> <code>sudo userdel ollama</code></p>

<blockquote><p><b>Important Note:</b> These commands remove all Ollama files and all downloaded models. Make sure you really want to delete everything before running them.</p></blockquote>

<h2>Summary &amp; Conclusion</h2>

<p>Installing Ollama on the Gigabyte AI TOP ATOM is surprisingly straightforward thanks to its compatibility with the NVIDIA DGX Spark playbooks. In less than 15 minutes, I had a fully functional Ollama server running that is reachable from the whole network.</p>

<p>What particularly excites me: the performance of the Blackwell GPU is fully utilized, and all computers in my network can now share the same GPU power. This is especially practical for teams or if you have multiple devices – everyone can use the models without installing them locally.</p>

<p>I also find it very practical that I can monitor GPU and system utilization during inference via the DGX Dashboard. This way, I see exactly how resources are being used when several clients access the server simultaneously.</p>

<p>For everyone who wants to go deeper: the Ollama API is easy to integrate into your own applications. Whether Python, JavaScript, or another language – the REST API is universally usable. I use it, for example, for automated text generation, code assistance, and even for chatbots.</p>
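<p>As a small taste of such an integration: besides <code>/api/chat</code>, Ollama also offers the simpler <code>/api/generate</code> endpoint for one-shot text generation. A minimal example for an automated one-liner (example IP address, and <code>jq</code> assumed to be installed):</p>
<pre><code># One-shot generation without chat history; .response holds the generated text
curl -s http://192.168.2.100:11434/api/generate -d '{
  "model": "qwen2.5:32b",
  "prompt": "Summarize in one sentence what Ollama does.",
  "stream": false
}' | jq -r '.response'
</code></pre>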
<p>If you have any questions or run into problems, feel free to check the <a href="https://docs.nvidia.com/dgx/dgx-spark/" target="_blank" rel="noopener">official NVIDIA DGX Spark documentation</a> or the <a href="https://ollama.com" target="_blank" rel="noopener">Ollama documentation</a>. The community is very helpful, and most problems can be solved quickly.</p>

<h3>Next Step: Open WebUI for a user-friendly Chat Interface</h3>

<p>You have now successfully installed Ollama and exposed it on the network. The API works, but for many users a graphical interface is much more practical than API calls via cURL. In the next blog post, I will show you how to install and configure <b>Open WebUI</b> on your Gigabyte AI TOP ATOM.</p>

<p>Open WebUI is a self-hosted, extensible AI interface that works completely offline. Together with Ollama, you then have a complete chat solution for your network – similar to ChatGPT, but locally hosted and running on your own GPU. The installation is done via Docker and is, I hope, just as straightforward as the Ollama installation. From any computer in the network, you can then open a nice chat interface in the browser, select models, and chat directly with the LLMs. A further plus is that Open WebUI has user management: the chat histories of individual users are kept separate, and, if I remember correctly, you can also create teams that share access to the same chat histories and uploaded documents.</p>

<p>Stay tuned for the next post – there, too, I'll show you step by step how to set up Open WebUI and connect it to your already running Ollama server!</p>

<p>Good luck experimenting with Ollama on your Gigabyte AI TOP ATOM. I'm excited to see what applications you build with it – let me and my readers know here in the comments!</p>