{"id":2133,"date":"2026-01-07T19:31:12","date_gmt":"2026-01-07T19:31:12","guid":{"rendered":"https:\/\/ai-box.eu\/?p=2133"},"modified":"2026-01-07T20:28:53","modified_gmt":"2026-01-07T20:28:53","slug":"installing-lm-studio-on-gigabyte-ai-top-atom-user-friendly-gui-with-openai-compatible-api-for-local-llms","status":"publish","type":"post","link":"https:\/\/ai-box.eu\/en\/top-story-en\/installing-lm-studio-on-gigabyte-ai-top-atom-user-friendly-gui-with-openai-compatible-api-for-local-llms\/2133\/","title":{"rendered":"Installing LM Studio on Gigabyte AI TOP ATOM: User-friendly GUI with OpenAI-compatible API for local LLMs"},"content":{"rendered":"<p data-path-to-node=\"1\">After showing in my previous posts how to install Ollama, Open WebUI, LLaMA Factory, vLLM, ComfyUI, and the AI Toolkit on the <b data-path-to-node=\"1\" data-index-in-node=\"75\">Gigabyte AI TOP ATOM<\/b>, here is another interesting alternative for everyone looking for a user-friendly GUI interface for local Large Language Models and who specifically does not want to use Ollama: <b data-path-to-node=\"1\" data-index-in-node=\"200\">LM Studio<\/b> \u2013 an intuitive desktop application with an integrated chat interface and OpenAI-compatible API, which is now also available for Linux ARM64.<\/p>\n<p data-path-to-node=\"2\">In this post, I will show you how I installed <b data-path-to-node=\"2\" data-index-in-node=\"30\">LM Studio<\/b> on my Gigabyte AI TOP ATOM and configured it so that it is accessible throughout the network as a private LLM server. LM Studio utilizes the GPU performance of the Blackwell GPU and offers both a graphical user interface and an OpenAI-compatible API for integration into your own applications. 
Since the AI TOP ATOM system from Gigabyte is based on the same platform as the <b data-path-to-node=\"2\" data-index-in-node=\"200\">NVIDIA DGX Spark<\/b>, the official LM Studio installation instructions work here as well.<\/p>\n<p data-path-to-node=\"2\"><strong>Note:<\/strong> For my field reports here on my blog, the Gigabyte AI TOP ATOM was loaned to me by <a href=\"https:\/\/www.mifcom.de\/\" target=\"_blank\" rel=\"noopener\">MIFCOM<\/a>.<\/p>\n<h3 data-path-to-node=\"4\"><span class=\"ez-toc-section\" id=\"The_Basic_Idea_User-friendly_GUI_with_Integrated_Chat_Interface_and_API_Server\"><\/span>The Basic Idea: User-friendly GUI with Integrated Chat Interface and API Server<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"5\">Before I dive into the technical details, an important point: <strong>LM Studio<\/strong> is a desktop application that provides both a graphical user interface for direct chatting with LLMs and an integrated API server that can be made accessible over the network. 
Unlike pure command-line tools or web interfaces, LM Studio offers a native desktop application that feels like any standard piece of desktop software.<\/p>\n<p data-path-to-node=\"6\">What makes it special: LM Studio now supports Linux ARM64 (aarch64), which means it runs directly on the Gigabyte AI TOP ATOM. The application uses a new variant of the llama.cpp engine with CUDA 13 support, which is a perfect fit for the Blackwell architecture. You can use LM Studio locally on the AI TOP ATOM as well as configure it as a private LLM server for your entire network.<\/p>\n<p data-path-to-node=\"7\"><strong>What you need for this:<\/strong><\/p>\n<ul data-path-to-node=\"8\">\n<li>\n<p data-path-to-node=\"8,0,0\">A Gigabyte AI TOP ATOM, ASUS Ascent, MSI EdgeXpert (or NVIDIA DGX Spark) connected to the network<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,1,0\">A connected monitor or terminal access to the AI TOP ATOM<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,2,0\">Basic knowledge of terminal commands<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,3,0\">At least 20 GB of free storage space for the AppImage file and model downloads<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,4,0\">An internet connection to download the LM Studio AppImage and models<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,5,0\">Optional: A computer on the same network for API testing<\/p>\n<\/li>\n<\/ul>\n<h3 data-path-to-node=\"9\"><span class=\"ez-toc-section\" id=\"Phase_1_Checking_System_Requirements\"><\/span>Phase 1: Checking System Requirements<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"10\">For the rest of this guide, I am assuming that you are sitting directly in front of the AI TOP ATOM with a monitor, keyboard, and mouse connected. First, I check if all necessary system requirements are met. 
To do this, I open a terminal on my AI TOP ATOM and execute the following commands.<\/p>\n<p data-path-to-node=\"10\">The following command shows you whether the NVIDIA driver is installed and the GPU is recognized:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>nvidia-smi<\/code><\/p>\n<p data-path-to-node=\"10\">You should now see the GPU information. If not, you must first install the NVIDIA drivers.<\/p>\n<div id=\"attachment_XXXX\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-nvidia_smi-1024x694.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-XXXX\" class=\"wp-image-XXXX size-large\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-nvidia_smi-1024x694.png\" alt=\"GIGABYTE AI TOP ATOM - NVIDIA-SMI\" width=\"1024\" height=\"694\" \/><\/a><p id=\"caption-attachment-XXXX\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; NVIDIA-SMI<\/p><\/div>\n<h3 data-path-to-node=\"17\"><span class=\"ez-toc-section\" id=\"Phase_2_Download_LM_Studio_AppImage\"><\/span>Phase 2: Download LM Studio AppImage<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"18\">LM Studio is provided as an AppImage file for Linux ARM64. An AppImage is a portable application that requires no installation \u2013 simply download, make it executable, and start. 
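<\/p>\n<p>Before downloading, it can be worth quickly confirming that you are on an ARM64 system and that the roughly 20 GB of free space from the requirements list above is actually available. The following is only a small sketch using the Python standard library; the 20 GB threshold is taken from the requirements list:<\/p>

```python
import platform
import shutil

# Quick sanity check before downloading the AppImage and models.
arch = platform.machine()  # "aarch64" is expected on the AI TOP ATOM
free_gb = shutil.disk_usage("/").free / 1e9

print(f"Architecture: {arch}")
print(f"Free disk space: {free_gb:.1f} GB")
if arch != "aarch64":
    print("Warning: not an ARM64 system - pick the matching LM Studio build.")
if free_gb < 20:
    print("Warning: less than 20 GB free - model downloads may fail.")
```

<p data-path-to-node=\"18\">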
First, I create a directory for LM Studio:<\/p>\n<p data-path-to-node=\"18\"><strong>Command:<\/strong> <code>mkdir -p ~\/lm-studio<\/code><\/p>\n<p data-path-to-node=\"18\"><strong>Command:<\/strong> <code>cd ~\/lm-studio<\/code><\/p>\n<p data-path-to-node=\"18\">Now I download the LM Studio Linux ARM64 AppImage from the official download page:<\/p>\n<p data-path-to-node=\"18\"><strong>Command:<\/strong> <code>wget https:\/\/lmstudio.ai\/download\/latest\/linux\/arm64 -O LM_Studio-linux-arm64.AppImage<\/code><\/p>\n<p data-path-to-node=\"18\"><b data-path-to-node=\"22\" data-index-in-node=\"0\">Note:<\/b> If the direct download link does not work, visit the <a href=\"https:\/\/lmstudio.ai\/download?os=linux&amp;arch=arm64\" target=\"_blank\" rel=\"noopener\">official LM Studio download page<\/a> and select the Linux ARM64 version manually.<\/p>\n<p data-path-to-node=\"18\">After downloading, I make the AppImage file executable:<\/p>\n<p data-path-to-node=\"18\"><strong>Command:<\/strong> <code>chmod +x LM_Studio-linux-arm64.AppImage<\/code><\/p>\n<p data-path-to-node=\"18\">The AppImage file is now ready to start. Depending on your internet speed, the download may take a few minutes \u2013 the file is approximately 200-300 MB.<\/p>\n<h3 data-path-to-node=\"24\"><span class=\"ez-toc-section\" id=\"Phase_3_Start_LM_Studio\"><\/span>Phase 3: Start LM Studio<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"25\">Now I can start LM Studio. Since it is a GUI application, you need a desktop environment, so you must be sitting in front of the Gigabyte AI TOP ATOM. I had to pass <code>--no-sandbox<\/code> because I received an error message stating that special root privileges were required.<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>.\/LM_Studio-linux-arm64.AppImage --no-sandbox<\/code><\/p>\n<p data-path-to-node=\"25\">Upon the first start, it may take a few seconds for the application to load. 
LM Studio then opens with the main interface, which offers various tabs: Chat, Models, Server, and Developer.<\/p>\n<div id=\"attachment_2122\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-00-1024x576.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-2122\" class=\"size-large wp-image-2122\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-00-1024x576.png\" alt=\"GIGABYTE AI TOP ATOM - LM-Studio first start\" width=\"1024\" height=\"576\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-00-1024x576.png 1024w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-00-300x169.png 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-00-768x432.png 768w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-00-1536x864.png 1536w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-00-2048x1152.png 2048w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-00-1080x608.png 1080w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><p id=\"caption-attachment-2122\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; LM-Studio first start<\/p><\/div>\n<h3 data-path-to-node=\"24\"><span class=\"ez-toc-section\" id=\"Phase_4_Download_Model\"><\/span>Phase 4: Download Model<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"25\">Before you can chat with LM Studio, you must download a model. LM Studio offers an integrated model library through which you can download models directly from within the application. 
Alternatively, you can also download models via the command line using the LM Studio CLI.<\/p>\n<p data-path-to-node=\"25\">For the command-line installation, I use the LM Studio CLI tool <code>lms<\/code>, which is included with the AppImage. First, I check if the CLI is available:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>.\/LM_Studio-linux-arm64.AppImage --help<\/code><\/p>\n<p data-path-to-node=\"25\">To download a model, I searched for suitable models within LM Studio and downloaded them via the interface. Open the &#8220;Models&#8221; tab in LM Studio, search for a model like &#8220;gpt-oss&#8221; or &#8220;Qwen3 Coder&#8221; and click &#8220;Download&#8221;.<\/p>\n<div id=\"attachment_2124\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-05-1024x717.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-2124\" class=\"size-large wp-image-2124\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-05-1024x717.png\" alt=\"GIGABYTE AI TOP ATOM - LLM Model download\" width=\"1024\" height=\"717\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-05-1024x717.png 1024w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-05-300x210.png 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-05-768x538.png 768w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-05-1536x1076.png 1536w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-05-1080x757.png 1080w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-05.png 1543w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><p id=\"caption-attachment-2124\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; LLM Model 
download<\/p><\/div>\n<p data-path-to-node=\"25\">Or alternatively via the CLI:<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>.\/LM_Studio-linux-arm64.AppImage get openai\/gpt-oss-20b<\/code><\/p>\n<p data-path-to-node=\"25\">Depending on the model size and your internet connection, the download can take from a few minutes to several hours. The models are stored locally on the AI TOP ATOM and do not need to be redownloaded every time you start.<\/p>\n<p data-path-to-node=\"25\"><b data-path-to-node=\"22\" data-index-in-node=\"0\">Recommended models for getting started:<\/b><\/p>\n<ul data-path-to-node=\"26\">\n\u00a0\t<\/p>\n<li>\n<p data-path-to-node=\"26,0,0\"><code>openai\/gpt-oss-20b<\/code> \u2013 Well-balanced model for general tasks<\/p>\n<\/li>\n<p>\u00a0\t<\/p>\n<li>\n<p data-path-to-node=\"26,1,0\"><code>Qwen\/Qwen3-Coder<\/code> \u2013 Optimized for code generation<\/p>\n<\/li>\n<p>\u00a0\t<\/p>\n<li>\n<p data-path-to-node=\"26,2,0\"><code>Qwen\/Qwen2.5-32B<\/code> \u2013 Very powerful, optimized for Blackwell GPUs<\/p>\n<\/li>\n<\/ul>\n<div id=\"attachment_2126\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-06-1024x717.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-2126\" class=\"size-large wp-image-2126\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-06-1024x717.png\" alt=\"GIGABYTE AI TOP ATOM - LM-Studio - active download\" width=\"1024\" height=\"717\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-06-1024x717.png 1024w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-06-300x210.png 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-06-768x538.png 768w, 
https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-06-1536x1076.png 1536w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-06-1080x757.png 1080w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-06.png 1543w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><p id=\"caption-attachment-2126\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; LM-Studio &#8211; active download<\/p><\/div>\n<p>After I downloaded qwen3-vl-8b, I could immediately ask the LLM my question: &#8220;Why is the sky blue?&#8221;.<\/p>\n<div id=\"attachment_2129\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-07-1024x576.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-2129\" class=\"size-large wp-image-2129\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-07-1024x576.png\" alt=\"GIGABYTE AI TOP ATOM - LM-Studio\" width=\"1024\" height=\"576\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-07-1024x576.png 1024w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-07-300x169.png 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-07-768x432.png 768w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-07-1536x864.png 1536w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-07-2048x1152.png 2048w, https:\/\/ai-box.eu\/wp-content\/uploads\/2026\/01\/GIGABYTE_AI_TOP_ATOM-LM-Studio-07-1080x608.png 1080w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><p id=\"caption-attachment-2129\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; LM-Studio<\/p><\/div>\n<h3 data-path-to-node=\"24\"><span class=\"ez-toc-section\" 
id=\"Phase_5_Start_LM_Studio_as_an_LLM_Server_optional\"><\/span>Phase 5: Start LM Studio as an LLM Server (optional)<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"25\">Now comes the optional step where you start LM Studio as a server: I configure LM Studio so that the LLM server is accessible throughout the network. There are two ways to start the server:<\/p>\n<h4 data-path-to-node=\"27\"><span class=\"ez-toc-section\" id=\"Option_1_Via_the_GUI\"><\/span>Option 1: Via the GUI<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p data-path-to-node=\"28\">In the LM Studio GUI, I open the &#8220;Developer&#8221; tab and activate the &#8220;Serve on Local Network&#8221; option in the server settings. This allows other devices on the network to access the LLM server of the AI TOP ATOM.<\/p>\n<p data-path-to-node=\"28\">In the server settings, you can see the IP address and the port the server is running on. By default, this is port 1234.<\/p>\n<h4 data-path-to-node=\"29\"><span class=\"ez-toc-section\" id=\"Option_2_Via_the_Command_Line\"><\/span>Option 2: Via the Command Line<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p data-path-to-node=\"30\">Alternatively, you can also start the server directly via the command line:<\/p>\n<p data-path-to-node=\"30\"><strong>Command:<\/strong> <code>.\/LM_Studio-linux-arm64.AppImage server start<\/code><\/p>\n<p data-path-to-node=\"30\">To use a different port:<\/p>\n<p data-path-to-node=\"30\"><strong>Command:<\/strong> <code>.\/LM_Studio-linux-arm64.AppImage server start --port 1234<\/code><\/p>\n<p data-path-to-node=\"30\">The server now starts and is accessible by default on all network interfaces (0.0.0.0).<\/p>\n<p data-path-to-node=\"30\"><b data-path-to-node=\"22\" data-index-in-node=\"0\">Important Note:<\/b> If a firewall is active, you must open port 1234:<\/p>\n<p data-path-to-node=\"30\"><strong>Command:<\/strong> <code>sudo ufw allow 1234<\/code><\/p>\n<h3 
data-path-to-node=\"31\"><span class=\"ez-toc-section\" id=\"Phase_6_Test_Network_Access_optional\"><\/span>Phase 6: Test Network Access (optional)<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"32\">First, I check the IP address of my AI TOP ATOM on the network:<\/p>\n<p data-path-to-node=\"32\"><strong>Command:<\/strong> <code>hostname -I<\/code><\/p>\n<p data-path-to-node=\"32\">I note down the IP address (e.g., <code>192.168.2.100<\/code>). Now I test from another computer on the network if the server is reachable:<\/p>\n<p data-path-to-node=\"32\"><strong>Command:<\/strong> <code>curl http:\/\/&lt;IP-Address-AI-TOP-ATOM&gt;:1234\/v1\/models<\/code><\/p>\n<p data-path-to-node=\"32\">Replace <code>&lt;IP-Address-AI-TOP-ATOM&gt;<\/code> with the IP address of your AI TOP ATOM. If you receive a list of available models, the network configuration is working correctly.<\/p>\n<h3 data-path-to-node=\"40\"><span class=\"ez-toc-section\" id=\"Phase_7_Test_API_Access_from_the_Network_optional\"><\/span>Phase 7: Test API Access from the Network (optional)<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"41\">Now I can access the LM Studio API from any computer in my network. 
To test if everything is working, I run the following command from another machine on the network:<\/p>\n<pre data-path-to-node=\"42\"><code data-path-to-node=\"42\">curl http:\/\/192.168.2.100:1234\/v1\/chat\/completions -H \"Content-Type: application\/json\" -d '{\r\n\u00a0 \"model\": \"openai\/gpt-oss-20b\",\r\n\u00a0 \"messages\": [{\r\n\u00a0 \u00a0 \"role\": \"user\",\r\n\u00a0 \u00a0 \"content\": \"Write me a haiku about GPUs and AI.\"\r\n\u00a0 }],\r\n\u00a0 \"max_tokens\": 500\r\n}'<\/code><\/pre>\n<p data-path-to-node=\"43\">If everything is configured correctly, I should receive a JSON response that looks something like this:<\/p>\n<pre data-path-to-node=\"44\"><code data-path-to-node=\"44\">{\r\n\u00a0 \"id\": \"chatcmpl-...\",\r\n\u00a0 \"object\": \"chat.completion\",\r\n\u00a0 \"created\": 1234567890,\r\n\u00a0 \"model\": \"openai\/gpt-oss-20b\",\r\n\u00a0 \"choices\": [{\r\n\u00a0 \u00a0 \"index\": 0,\r\n\u00a0 \u00a0 \"message\": {\r\n\u00a0 \u00a0 \u00a0 \"role\": \"assistant\",\r\n\u00a0 \u00a0 \u00a0 \"content\": \"Silicon flows through circuits\\nDreams become reality\\nAI wakes to life\"\r\n\u00a0 \u00a0 },\r\n\u00a0 \u00a0 \"finish_reason\": \"stop\"\r\n\u00a0 }]\r\n}<\/code><\/pre>\n<h3 data-path-to-node=\"52\"><span class=\"ez-toc-section\" id=\"Troubleshooting_Common_Problems_and_Solutions\"><\/span>Troubleshooting: Common Problems and Solutions<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"53\">During my time with LM Studio on the AI TOP ATOM, I encountered some typical problems. Here are the most common ones and how I solved them:<\/p>\n<ul data-path-to-node=\"54\">\n\u00a0\t<\/p>\n<li>\n<p data-path-to-node=\"54,0,0\"><b data-path-to-node=\"54,0,0\" data-index-in-node=\"0\">AppImage does not start:<\/b> Check if the file was made executable with <code data-path-to-node=\"54,0,0\" data-index-in-node=\"100\">chmod +x LM_Studio-linux-arm64.AppImage<\/code>. 
If the AppImage still doesn&#8217;t start, check if FUSE is installed: <code data-path-to-node=\"54,0,0\" data-index-in-node=\"200\">sudo apt install fuse<\/code>.<\/p>\n<\/li>\n<p>\u00a0\t<\/p>\n<li>\n<p data-path-to-node=\"54,0,1\"><b data-path-to-node=\"54,0,1\" data-index-in-node=\"0\">GUI is not displayed:<\/b> If you are connected via SSH, you need X11 forwarding or a desktop environment on the AI TOP ATOM. Alternatively, use the command-line version of the server.<\/p>\n<\/li>\n<p>\u00a0\t<\/p>\n<li>\n<p data-path-to-node=\"54,1,0\"><b data-path-to-node=\"54,1,0\" data-index-in-node=\"0\">Server is not reachable on the network:<\/b> Check if &#8220;Serve on Local Network&#8221; is activated in the Developer settings. Also check the firewall settings and ensure that port 1234 is opened.<\/p>\n<\/li>\n<p>\u00a0\t<\/p>\n<li>\n<p data-path-to-node=\"54,2,0\"><b data-path-to-node=\"54,2,0\" data-index-in-node=\"0\">Model download fails:<\/b> Check the internet connection. If you have trouble with the download, you can also manually download models from Hugging Face and copy them into the LM Studio model directory.<\/p>\n<\/li>\n<p>\u00a0\t<\/p>\n<li>\n<p data-path-to-node=\"54,3,0\"><b data-path-to-node=\"54,3,0\" data-index-in-node=\"0\">CUDA support not available:<\/b> LM Studio uses CUDA 13. Check with <code data-path-to-node=\"54,3,0\" data-index-in-node=\"95\">nvidia-smi<\/code> if the GPU is recognized. If not, install the NVIDIA drivers.<\/p>\n<\/li>\n<p>\u00a0\t<\/p>\n<li>\n<p data-path-to-node=\"54,4,0\"><b data-path-to-node=\"54,4,0\" data-index-in-node=\"0\">Slow inference:<\/b> The model might be too large for the available GPU memory. 
Try a smaller model or check the GPU usage with <code data-path-to-node=\"54,4,0\" data-index-in-node=\"95\">nvidia-smi<\/code>.<\/p>\n<\/li>\n<\/ul>\n<h3 data-path-to-node=\"56\"><span class=\"ez-toc-section\" id=\"Start_LM_Studio_as_a_Server_Automatically_optional\"><\/span>Start LM Studio as a Server Automatically (optional)<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"57\">If you want to start LM Studio as a server automatically upon system boot, you can create a systemd service. First, I create a service file:<\/p>\n<p data-path-to-node=\"57\"><strong>Command:<\/strong> <code>sudo nano \/etc\/systemd\/system\/lm-studio.service<\/code><\/p>\n<p data-path-to-node=\"57\">Insert the following content (replace <code>\/home\/username<\/code> with your actual username and the path to the AppImage):<\/p>\n<pre data-path-to-node=\"60\"><code data-path-to-node=\"60\">[Unit]\r\nDescription=LM Studio LLM Server\r\nAfter=network.target\r\n\r\n[Service]\r\nType=simple\r\nUser=username\r\nWorkingDirectory=\/home\/username\/lm-studio\r\nExecStart=\/home\/username\/lm-studio\/LM_Studio-linux-arm64.AppImage server start --port 1234\r\nRestart=always\r\nRestartSec=10\r\n\r\n[Install]\r\nWantedBy=multi-user.target<\/code><\/pre>\n<p data-path-to-node=\"57\">Save the file and activate the service:<\/p>\n<p data-path-to-node=\"57\"><strong>Command:<\/strong> <code>sudo systemctl daemon-reload<\/code><\/p>\n<p data-path-to-node=\"57\"><strong>Command:<\/strong> <code>sudo systemctl enable lm-studio<\/code><\/p>\n<p data-path-to-node=\"57\"><strong>Command:<\/strong> <code>sudo systemctl start lm-studio<\/code><\/p>\n<p data-path-to-node=\"57\">The service now starts automatically after every reboot.<\/p>\n<h3 data-path-to-node=\"56\"><span class=\"ez-toc-section\" id=\"Rollback_Removing_LM_Studio\"><\/span>Rollback: Removing LM Studio<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"60\">If you want to completely remove LM Studio 
from the AI TOP ATOM, execute the following commands on the system:<\/p>\n<p data-path-to-node=\"60\">First, stop the server (if it&#8217;s running) with <code>Ctrl+C<\/code> or:<\/p>\n<p data-path-to-node=\"60\"><strong>Command:<\/strong> <code>sudo systemctl stop lm-studio<\/code><\/p>\n<p data-path-to-node=\"60\">Remove the LM Studio directory including the AppImage:<\/p>\n<p data-path-to-node=\"60\"><strong>Command:<\/strong> <code>rm -rf ~\/lm-studio<\/code><\/p>\n<p data-path-to-node=\"60\">If you created a systemd service:<\/p>\n<p data-path-to-node=\"60\"><strong>Command:<\/strong> <code>sudo systemctl disable lm-studio<\/code><\/p>\n<p data-path-to-node=\"60\"><strong>Command:<\/strong> <code>sudo rm \/etc\/systemd\/system\/lm-studio.service<\/code><\/p>\n<p data-path-to-node=\"60\"><strong>Command:<\/strong> <code>sudo systemctl daemon-reload<\/code><\/p>\n<blockquote data-path-to-node=\"62\">\n<p data-path-to-node=\"62,0\"><b data-path-to-node=\"62,0\" data-index-in-node=\"0\">Important Note:<\/b> These commands remove LM Studio, but not the downloaded models. The models remain in the LM Studio model directory in case you want to use them again later.<\/p>\n<\/blockquote>\n<h2 data-path-to-node=\"64\"><span class=\"ez-toc-section\" id=\"Summary_Conclusion\"><\/span>Summary &amp; Conclusion<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p data-path-to-node=\"65\">The installation of LM Studio on the Gigabyte AI TOP ATOM is surprisingly straightforward thanks to compatibility with NVIDIA DGX Spark playbooks. In less than 15 minutes, I had LM Studio set up, and I can now both chat locally in the GUI and use the LLM server across the entire network.<\/p>\n<p data-path-to-node=\"66\">What particularly excites me: The user-friendly GUI makes it easy to download models and chat directly without complex configurations or API calls. 
The OpenAI-compatible API allows for seamless integration into existing applications, and CUDA 13 support utilizes the full performance of the Blackwell architecture.<\/p>\n<p data-path-to-node=\"67\">I also find it very practical that LM Studio can be operated both as a desktop application and as a server. The AppImage installation is portable and easy to manage \u2013 no complex dependencies or system changes required.<\/p>\n<p data-path-to-node=\"68\">For teams or anyone looking for an intuitive interface for local LLMs, this is a perfect solution: a central server with full GPU power that everyone can access via the OpenAI-compatible API. The GUI makes it easy to manage models and chat directly, while the API enables integration into your own applications.<\/p>\n<p data-path-to-node=\"69\">If you have questions or encounter problems, feel free to check the <a href=\"https:\/\/docs.nvidia.com\/dgx\/dgx-spark\/\" target=\"_blank\" rel=\"noopener\">official NVIDIA DGX Spark documentation<\/a> or the <a href=\"https:\/\/lmstudio.ai\/docs\" target=\"_blank\" rel=\"noopener\">LM Studio documentation<\/a>. The community is very helpful, and most problems can be solved quickly.<\/p>\n<h3 data-path-to-node=\"71\"><span class=\"ez-toc-section\" id=\"Next_Step_Try_Out_Models_and_Integrate_into_Your_Own_Applications\"><\/span>Next Step: Try Out Models and Integrate into Your Own Applications<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"72\">You have now successfully installed LM Studio and exposed the server to the network. The basic installation works, but this is only the beginning. Experiment with different models and use the OpenAI-compatible API to integrate LM Studio into your own applications.<\/p>\n<p data-path-to-node=\"73\">The LM Studio SDKs for Python and JavaScript make it easy to incorporate the server into existing projects. 
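<\/p>\n<p data-path-to-node=\"73\">As a minimal sketch of such an integration (using only the Python standard library against the OpenAI-compatible endpoint rather than the official SDK), the following script builds a chat request for the local server. The port 1234 matches the setup above; the model name and the helper functions are placeholders of my own, and the model must match one you actually have loaded in LM Studio:<\/p>

```python
import json
import urllib.request

# Address of the LM Studio server; port 1234 matches the systemd service
# above. Adjust the host if you call the server from another machine.
BASE_URL = "http://localhost:1234/v1"


def build_payload(prompt, model="qwen2.5-7b-instruct"):
    # The model name is only a placeholder: it must match a model that is
    # actually loaded in LM Studio (check the GUI or GET /v1/models).
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask_lm_studio(prompt, model="qwen2.5-7b-instruct"):
    # Standard OpenAI-style chat-completions call, stdlib only.
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)
    return answer["choices"][0]["message"]["content"]


# Show the request body that would be sent:
payload = build_payload("Why is local inference useful?")
print(json.dumps(payload, indent=2))
```

<p data-path-to-node=\"73\">If the server is running and reachable, <code>ask_lm_studio()<\/code> returns the model&#8217;s reply as a plain string; any OpenAI-compatible client library works against the same endpoint.<\/p>\n<p data-path-to-node=\"73\">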
Try out different models and find out which one best suits your requirements.<\/p>\n<p data-path-to-node=\"74\">Good luck experimenting with LM Studio on your Gigabyte AI TOP ATOM. I am excited to see what applications you develop with it! Let me and my readers know here in the comments.<\/p>