{"id":1945,"date":"2025-12-23T12:12:37","date_gmt":"2025-12-23T12:12:37","guid":{"rendered":"https:\/\/ai-box.eu\/?p=1945"},"modified":"2025-12-27T20:46:24","modified_gmt":"2025-12-27T20:46:24","slug":"installing-llama-factory-on-gigabyte-ai-top-atom-fine-tuning-language-models-with-lora-and-qlora","status":"publish","type":"post","link":"https:\/\/ai-box.eu\/en\/large-language-models-en\/installing-llama-factory-on-gigabyte-ai-top-atom-fine-tuning-language-models-with-lora-and-qlora\/1945\/","title":{"rendered":"Installing LLaMA Factory on Gigabyte AI TOP ATOM: Fine-tuning Language Models with LoRA and QLoRA &#8211; Part 1-2"},"content":{"rendered":"<p data-path-to-node=\"1\">After showing how to install Ollama, Open WebUI, and ComfyUI on the <b data-path-to-node=\"1\" data-index-in-node=\"75\">Gigabyte AI TOP ATOM<\/b> in my previous posts, now comes something for everyone who wants to adapt their own language models and make them individual: <b data-path-to-node=\"1\" data-index-in-node=\"200\">LLaMA Factory<\/b> \u2013 an open-source framework that simplifies the fine-tuning of Large Language Models and supports methods such as LoRA, QLoRA, and Full Fine-Tuning. For my field reports, I was loaned a system by the company <a href=\"https:\/\/www.mifcom.de\/\" target=\"_blank\" rel=\"noopener\">MIFCOM<\/a>, a specialist for high-performance and gaming computers from Munich.<\/p>\n<p data-path-to-node=\"2\">In this post, I will show you how I installed and configured <b data-path-to-node=\"2\" data-index-in-node=\"30\">LLaMA Factory<\/b> on my Gigabyte AI TOP ATOM to adapt language models like LLaMA, Mistral, or Qwen for specific tasks. LLaMA Factory utilizes the full GPU performance of the Blackwell architecture and allows you to train models using various fine-tuning methods. Mind you, everything is intended to run locally on your own <strong>AI TOP ATOM<\/strong> or your own NVIDIA DGX Spark. 
Since Gigabyte's AI TOP ATOM is based on the same platform as the <b>NVIDIA DGX Spark</b>, the official NVIDIA playbooks work just as reliably here.</p>
ez-toc-heading-6\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/installing-llama-factory-on-gigabyte-ai-top-atom-fine-tuning-language-models-with-lora-and-qlora\/1945\/#Phase_5_Check_PyTorch_CUDA_Support\" >Phase 5: Check PyTorch CUDA Support<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/ai-box.eu\/en\/large-language-models-en\/installing-llama-factory-on-gigabyte-ai-top-atom-fine-tuning-language-models-with-lora-and-qlora\/1945\/#Phase_6_Prepare_Training_Configuration\" >Phase 6: Prepare Training Configuration<\/a><\/li><\/ul><\/nav><\/div>\n<h3 data-path-to-node=\"4\"><span class=\"ez-toc-section\" id=\"The_Basic_Idea_Adapting_Your_Own_Language_Models_for_Special_Tasks\"><\/span>The Basic Idea: Adapting Your Own Language Models for Special Tasks<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"5\">Before I dive into the technical details, an important point: <strong>LLaMA Factory<\/strong> is a framework that significantly simplifies the fine-tuning of Large Language Models. Unlike complex manual setups, LLaMA Factory offers a unified interface for various fine-tuning methods such as Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and Quantized LoRA (QLoRA).<\/p>\n<p data-path-to-node=\"6\">The special thing about it: LLaMA Factory supports a wide range of LLM architectures such as LLaMA, Mistral, Qwen, and many more. You can adapt your models for specific domains &#8211; whether for code generation, medical applications, or special corporate requirements. Installation is done via Docker using the NVIDIA PyTorch container, which already includes CUDA support and all necessary libraries.<\/p>\n<p data-path-to-node=\"7\"><strong>What you need for this:<\/strong><\/p>\n<ul data-path-to-node=\"8\">\n<li>\n<p data-path-to-node=\"8,0,0\">A Gigabyte AI TOP ATOM, ASUS Ascent, MSI EdgeXpert (or NVIDIA DGX Spark) connected to the network<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,1,0\">A connected monitor or terminal access to the AI TOP ATOM<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,2,0\">Docker installed and configured for GPU access<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,3,0\">Basic knowledge of terminal commands, Docker, and Python<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,4,0\">At least 50 GB of free storage space for models, checkpoints, and training data<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,5,0\">An internet connection to download models from the Hugging Face Hub<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"8,6,0\">Optional: A Hugging Face account for gated models (models with access restrictions)<\/p>\n<\/li>\n<\/ul>\n<h3 data-path-to-node=\"9\"><span class=\"ez-toc-section\" id=\"Phase_1_Check_System_Requirements\"><\/span>Phase 1: Check System Requirements<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"10\">For the rest of my instructions, I am assuming that you are sitting directly in front of the AI TOP ATOM or the NVIDIA DGX Spark with a monitor, keyboard, and mouse connected. First, I check whether all necessary system requirements are met. To do this, I open a terminal on my AI TOP ATOM and execute the following commands.<\/p>\n<p data-path-to-node=\"10\">The following command shows you if the CUDA Toolkit is installed:<\/p>\n<p data-path-to-node=\"10\"><strong>Command:<\/strong> <code>nvcc --version<\/code><\/p>\n<p data-path-to-node=\"10\">You should see CUDA 12.9 or higher. 
<p>Next, I check whether Docker is installed:</p>
<p><strong>Command:</strong> <code>docker --version</code></p>
<p>Now I use the following command to check whether Docker has GPU access. A few gigabytes will be downloaded, but you get that time back later, because the same Docker container is required for LLaMA Factory anyway.</p>
<p><strong>Command:</strong> <code>docker run --gpus all nvcr.io/nvidia/pytorch:25.11-py3 nvidia-smi</code></p>
<p>This command starts a test container and displays the GPU information. If Docker is not yet configured for GPU access, you must set that up first. Also check Python and Git:</p>
<div class="wp-caption alignnone"><a href="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-test-890x1024.png"><img src="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-test-890x1024.png" alt="GIGABYTE AI TOP ATOM - LLaMA Factory Docker Container test" width="890" height="1024" /></a><p class="wp-caption-text">GIGABYTE AI TOP ATOM – LLaMA Factory Docker Container test</p></div>
<p><strong>Command:</strong> <code>python3 --version</code></p>
<p><strong>Command:</strong> <code>git --version</code></p>
<p>And finally, I check whether the GPU is detected:</p>
<p><strong>Command:</strong> <code>nvidia-smi</code></p>
<p>You should now see the GPU information. If any of these commands fail, you must install the corresponding component first.</p>
<div class="wp-caption alignnone"><a href="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-nvidia_smi-1024x694.png"><img src="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-nvidia_smi-1024x694.png" alt="GIGABYTE AI TOP ATOM - NVIDIA-SMI" width="1024" height="694" /></a><p class="wp-caption-text">GIGABYTE AI TOP ATOM – NVIDIA-SMI</p></div>
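<p>The check that most often fails is Docker's GPU access. Below is a minimal sketch of the standard fix via the NVIDIA Container Toolkit, assuming an Ubuntu-based system on which NVIDIA's apt repository is already configured – these are the generic steps from NVIDIA's documentation, not something specific to the AI TOP ATOM:</p>
<pre><code># Install the NVIDIA Container Toolkit (assumes NVIDIA's apt repository is set up)
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
</code></pre>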
<h3>Phase 2: Start NVIDIA PyTorch Container with GPU Support</h3>
<p>LLaMA Factory runs in a Docker container that already contains PyTorch with CUDA support. This makes the installation much easier, as we don't have to worry about Python dependencies. First, I create a working directory:</p>
<p><strong>Command:</strong> <code>mkdir -p ~/llama-factory-workspace</code></p>
<p><strong>Command:</strong> <code>cd ~/llama-factory-workspace</code></p>
<p><strong>NVIDIA PyTorch Container:</strong></p>
<p>Next comes the exciting part of the project. Now I start the NVIDIA PyTorch container with GPU access and mount the working directory. Important: I give the container a name (<code>--name llama-factory</code>) and omit <code>--rm</code> so that the container survives a restart:</p>
<p><strong>Command:</strong> <code>docker run --gpus all --ipc=host --ulimit memlock=-1 -it --ulimit stack=67108864 --name llama-factory -p 7862:7860 -v "$PWD":/workspace nvcr.io/nvidia/pytorch:25.11-py3 bash</code></p>
<p>This command starts the container and opens an interactive bash session. The container supports CUDA 13 and is specifically optimized for the Blackwell architecture. The parameters <code>--ipc=host</code> and <code>--ulimit</code> are important for GPU performance and memory management.</p>
<p>After starting, you will see a new prompt showing that you are now inside the container. All following commands are executed within the container.</p>
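<p>Because this command packs a lot into one line, here is the same call again with a short note on each flag. The flag meanings are standard Docker semantics; the port mapping assumes that the LLaMA Factory web UI inside the container will later listen on its default port 7860:</p>
<pre><code># Flag overview:
#   --gpus all               expose all GPUs to the container
#   --ipc=host               share the host IPC namespace (shared memory for PyTorch data loaders)
#   --ulimit memlock=-1      allow unlimited locked (pinned) host memory
#   --ulimit stack=67108864  raise the stack size limit to 64 MB
#   --name llama-factory     fixed name, so the container can be restarted later
#   -p 7862:7860             host port 7862 -> container port 7860 (web UI)
#   -v "$PWD":/workspace     mount the working directory into the container
docker run --gpus all --ipc=host --ulimit memlock=-1 -it \
  --ulimit stack=67108864 --name llama-factory -p 7862:7860 \
  -v "$PWD":/workspace nvcr.io/nvidia/pytorch:25.11-py3 bash
</code></pre>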
<p><b>Important Note:</b> If the container already exists (e.g., after a system restart), start it with <code>docker start -ai llama-factory</code>. To get back into an already running container, use <code>docker exec -it llama-factory bash</code>.</p>
<div class="wp-caption alignnone"><a href="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-CLI-1024x678.png"><img src="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-CLI-1024x678.png" alt="GIGABYTE AI TOP ATOM - LLaMA Factory Docker Container CLI" width="1024" height="678" /></a><p class="wp-caption-text">GIGABYTE AI TOP ATOM – LLaMA Factory Docker Container CLI</p></div>
<h3>Phase 3: Clone LLaMA Factory Repository</h3>
<p>Now I download the LLaMA Factory source code from the official GitHub repository. Since we are inside the container, everything is saved in the mounted workspace directory:</p>
<p><strong>Command:</strong> <code>git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git</code></p>
<p>The parameter <code>--depth 1</code> downloads only the latest revision instead of the full history, which is faster. After cloning, I switch to the LLaMA Factory directory:</p>
<p><strong>Command:</strong> <code>cd LLaMA-Factory</code></p>
<p>The repository contains all files needed for LLaMA Factory, including sample configurations and training scripts.</p>
<h3>Phase 4: Install LLaMA Factory with Dependencies</h3>
<p>Now I install LLaMA Factory in editable mode with metrics support for training evaluation:</p>
<p><strong>Command:</strong> <code>pip install -e ".[metrics]"</code></p>
<p>This installation can take several minutes, as many packages need to be downloaded. The parameter <code>-e</code> installs LLaMA Factory in editable mode, so that changes to the code take effect immediately. The option <code>[metrics]</code> installs additional packages for training metrics.</p>
<p>Apart from the closing "Successfully installed …" message, there was nothing interesting to see in the terminal window, so I did not include a screenshot here.</p>
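<p>Before moving on, it is worth a quick sanity check that the installation registered the command-line entry point. LLaMA Factory ships a CLI, and its <code>version</code> subcommand makes a convenient smoke test (the exact output format varies between releases):</p>
<pre><code># Inside the container: confirm the LLaMA Factory CLI is installed and importable
llamafactory-cli version
</code></pre>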
<h3>Phase 5: Check PyTorch CUDA Support</h3>
<p>PyTorch is already pre-installed in the container, but I check anyway whether CUDA support is available:</p>
<p><strong>Command:</strong> <code>python -c "import torch; print(f'PyTorch: {torch.__version__}, CUDA: {torch.cuda.is_available()}')"</code></p>
<p>You should see an output that looks something like this:</p>
<pre><code>PyTorch: 2.10.0a0+b558c986e8.nv25.11, CUDA: True</code></pre>
<h3>Phase 6: Prepare Training Configuration</h3>
<p>LLaMA Factory uses YAML configuration files for training. I take a look at the example configuration for LoRA fine-tuning with Llama-3:</p>
<p><strong>Command:</strong> <code>cat examples/train_lora/llama3_lora_sft.yaml</code></p>
<p>This configuration contains all the parameters needed for training: model name, dataset, batch size, learning rate, and much more. You can copy this file and adapt it to your own requirements.</p>
<p><b>Important Note:</b> For your first training run, I recommend using the sample configuration unchanged to make sure everything works.</p>
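<p>To give you an idea of what to expect before you run the <code>cat</code> command, here is an abridged sketch of what this file looks like. Treat it as illustrative only – the exact keys and default values change between LLaMA Factory releases, so the copy in your checkout is authoritative:</p>
<pre><code>### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft                      # supervised fine-tuning
do_train: true
finetuning_type: lora           # train LoRA adapters instead of all weights

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 2048

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
</code></pre>
<p>Once you copy the file (for example <code>cp examples/train_lora/llama3_lora_sft.yaml my_sft.yaml</code>), you can later hand your copy to the training CLI with <code>llamafactory-cli train my_sft.yaml</code> – the actual training run is the topic of Part 2.</p>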
<blockquote>
<p><strong>Proceed to Part 2 of the setup and configuration manual here.</strong></p>
<p><strong><a href="https://ai-box.eu/en/top-story-en/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2/2017/">Installing LLaMA Factory on Gigabyte AI TOP ATOM: Fine-tuning Language Models with LoRA and QLoRA – Part 2-2</a></strong></p>
</blockquote>