{"id":2017,"date":"2025-12-27T20:37:30","date_gmt":"2025-12-27T20:37:30","guid":{"rendered":"https:\/\/ai-box.eu\/?p=2017"},"modified":"2025-12-27T20:48:10","modified_gmt":"2025-12-27T20:48:10","slug":"installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2","status":"publish","type":"post","link":"https:\/\/ai-box.eu\/en\/hardware-en\/gigabyte-ai-top-atom\/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2\/2017\/","title":{"rendered":"Installing LLaMA Factory on Gigabyte AI TOP ATOM: Fine-tuning Language Models with LoRA and QLoRA &#8211; Part 2-2"},"content":{"rendered":"<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/ai-box.eu\/en\/hardware-en\/gigabyte-ai-top-atom\/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2\/2017\/#Phase_7_Start_Fine-Tuning_Training\" >Phase 7: Start Fine-Tuning Training<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/ai-box.eu\/en\/hardware-en\/gigabyte-ai-top-atom\/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2\/2017\/#Phase_8_Validate_Training_Results\" >Phase 8: Validate Training Results<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/ai-box.eu\/en\/hardware-en\/gigabyte-ai-top-atom\/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2\/2017\/#Phase_9_Test_Fine-Tuned_Model\" >Phase 9: Test Fine-Tuned Model<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" 
href=\"https:\/\/ai-box.eu\/en\/hardware-en\/gigabyte-ai-top-atom\/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2\/2017\/#Phase_10_Start_LLaMA_Factory_Web_Interface\" >Phase 10: Start LLaMA Factory Web Interface<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/ai-box.eu\/en\/hardware-en\/gigabyte-ai-top-atom\/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2\/2017\/#Phase_11_Configuring_Automatic_Restart_of_the_Web-UI_Reboot-Proof\" >Phase 11: Configuring Automatic Restart of the Web-UI (Reboot-Proof)<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/ai-box.eu\/en\/hardware-en\/gigabyte-ai-top-atom\/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2\/2017\/#Phase_12_Export_Model_for_Production\" >Phase 12: Export Model for Production<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/ai-box.eu\/en\/hardware-en\/gigabyte-ai-top-atom\/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2\/2017\/#Troubleshooting_Common_Problems_and_Solutions\" >Troubleshooting: Common Problems and Solutions<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/ai-box.eu\/en\/hardware-en\/gigabyte-ai-top-atom\/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2\/2017\/#Exiting_and_Restarting_the_Container\" >Exiting and Restarting the Container<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/ai-box.eu\/en\/hardware-en\/gigabyte-ai-top-atom\/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2\/2017\/#Rollback_Removing_LLaMA_Factory_Again\" >Rollback: Removing LLaMA Factory Again<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"https:\/\/ai-box.eu\/en\/hardware-en\/gigabyte-ai-top-atom\/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2\/2017\/#Summary_Conclusion\" >Summary &amp; Conclusion<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-11\" href=\"https:\/\/ai-box.eu\/en\/hardware-en\/gigabyte-ai-top-atom\/installing-llama-factory-auf-gigabyte-ai-top-atom-installieren-sprachmodelle-mit-lora-und-qlora-fine-tunen-part-2-2\/2017\/#Next_Step_Preparing_Your_Own_Datasets_and_Adapting_Models\" >Next Step: Preparing Your Own Datasets and Adapting Models<\/a><\/li><\/ul><\/li><\/ul><\/nav><\/div>\n<h3 data-path-to-node=\"24\"><span class=\"ez-toc-section\" id=\"Phase_7_Start_Fine-Tuning_Training\"><\/span>Phase 7: Start Fine-Tuning Training<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"25\">Before I start the training, I might need to log in to the Hugging Face Hub if the model is gated (has access restrictions). 
<p>You will be asked for your Hugging Face token. You can find it in your Hugging Face account settings at <a href="https://huggingface.co/settings/tokens" target="_blank" rel="noopener">https://huggingface.co/settings/tokens</a>.</p>
<p><strong>Note:</strong></p>
<p>After I had entered the token and executed the following command to start the training, this error message appeared:</p>
<blockquote>
<p>Cannot access gated repo for url https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/resolve/main/config.json.<br />
Access to model meta-llama/Meta-Llama-3-8B-Instruct is restricted and you are not in the authorized list. Visit https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct to ask for access.</p>
</blockquote>
<p>For this example, I then went to the following page and requested access to the model with my user account.</p>
<p><strong>URL:</strong> <a href="https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct" target="_blank" rel="noopener">https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct</a></p>
<p>Then I had to be a little patient until access to the model was granted. Without access to the model, it is not possible to continue.</p>
<div class="wp-caption alignnone"><a href="https://ai-box.eu/wp-content/uploads/2025/12/META-LLAMA-3-COMMUNITY-LICENSE-AGREEMENT.png"><img src="https://ai-box.eu/wp-content/uploads/2025/12/META-LLAMA-3-COMMUNITY-LICENSE-AGREEMENT.png" alt="META LLAMA 3 COMMUNITY LICENSE AGREEMENT" width="888" height="473" /></a><p class="wp-caption-text">META LLAMA 3 COMMUNITY LICENSE AGREEMENT</p></div>
<p>On the following page you can check the approval status of the models for which you have requested access.</p>
<p><strong>URL:</strong> <a href="https://huggingface.co/settings/gated-repos" target="_blank" rel="noopener">https://huggingface.co/settings/gated-repos</a></p>
<p>For the test training, the following sample datasets are loaded as training data:</p>
<p><strong>Dataset identity.json:</strong> <a href="https://github.com/hiyouga/LLaMA-Factory/blob/main/data/identity.json" target="_blank" rel="noopener">https://github.com/hiyouga/LLaMA-Factory/blob/main/data/identity.json</a></p>
<p><strong>Dataset alpaca_en_demo.json:</strong> <a href="https://github.com/hiyouga/LLaMA-Factory/blob/main/data/alpaca_en_demo.json" target="_blank" rel="noopener">https://github.com/hiyouga/LLaMA-Factory/blob/main/data/alpaca_en_demo.json</a></p>
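<p>Both files use the Alpaca instruction format, so if you later want to train on your own data, a file of the same shape is a good starting point. Note that custom files additionally have to be registered in <code>data/dataset_info.json</code>; see the data preparation documentation linked below. A minimal, hypothetical example (<code>my_dataset.json</code> is an illustrative name):</p>
<pre><code># Sketch: create a minimal Alpaca-style dataset (hypothetical file name)
cat &gt; data/my_dataset.json &lt;&lt;'EOF'
[
  {
    "instruction": "Summarize the following text.",
    "input": "LLaMA Factory is a framework for fine-tuning language models ...",
    "output": "LLaMA Factory simplifies fine-tuning of language models."
  }
]
EOF
</code></pre>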
rel=\"noopener\">https:\/\/github.com\/hiyouga\/LLaMA-Factory\/blob\/main\/data\/alpaca_en_demo.json<\/a><\/p>\n<p data-path-to-node=\"25\">Now I start the fine-tuning training with the sample configuration from the repository. Here it&#8217;s simply about whether everything works in general.<\/p>\n<p data-path-to-node=\"25\"><strong>Command:<\/strong> <code>llamafactory-cli train examples\/train_lora\/llama3_lora_sft.yaml<\/code><\/p>\n<p data-path-to-node=\"25\">If you would like to learn more about the topic of data preparation for training, visit the following page.<\/p>\n<p data-path-to-node=\"25\"><strong>URL:<\/strong> <a href=\"https:\/\/llamafactory.readthedocs.io\/en\/latest\/getting_started\/data_preparation.html\" target=\"_blank\" rel=\"noopener\">https:\/\/llamafactory.readthedocs.io\/en\/latest\/getting_started\/data_preparation.html<\/a><\/p>\n<p data-path-to-node=\"25\">The training can take between 1-7 hours depending on the model size and dataset. You can see the progress in real time, including training metrics such as loss values. The output looks something like this:<\/p>\n<pre data-path-to-node=\"27\"><code data-path-to-node=\"27\">***** train metrics *****\r\n  epoch                     =        3.0\r\n  total_flos                = 22851591GF\r\n  train_loss                =     0.9113\r\n  train_runtime             = 0:22:21.99\r\n  train_samples_per_second =      2.437\r\n  train_steps_per_second   =      0.306\r\nFigure saved at: saves\/llama3-8b\/lora\/sft\/training_loss.png\r\n<\/code><\/pre>\n<p data-path-to-node=\"25\">During training, checkpoints are saved regularly so that you can interrupt the training if necessary and continue later. For me, the running training in the terminal window together with the DGX dashboard looked as shown in the following image.<\/p>\n<div id=\"attachment_1960\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-CLI-running-1024x689.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1960\" class=\"wp-image-1960 size-large\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-CLI-running-1024x689.png\" alt=\"GIGABYTE AI TOP ATOM - LLaMA Factory Docker Container CLI running training\" width=\"1024\" height=\"689\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-CLI-running-1024x689.png 1024w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-CLI-running-300x202.png 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-CLI-running-768x517.png 768w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-CLI-running-1536x1033.png 1536w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-CLI-running-1080x727.png 1080w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-CLI-running.png 1876w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><p id=\"caption-attachment-1960\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; LLaMA Factory Docker Container CLI running training<\/p><\/div>\n<p>The training was successfully completed after about 40 minutes. 
<p>If you would like to learn more about data preparation for training, visit the following page.</p>
<p><strong>URL:</strong> <a href="https://llamafactory.readthedocs.io/en/latest/getting_started/data_preparation.html" target="_blank" rel="noopener">https://llamafactory.readthedocs.io/en/latest/getting_started/data_preparation.html</a></p>
<p>Depending on model size and dataset, the training can take between one and seven hours. You can follow the progress in real time, including training metrics such as the loss value. The output looks something like this:</p>
<pre><code>***** train metrics *****
  epoch                    =        3.0
  total_flos               = 22851591GF
  train_loss               =     0.9113
  train_runtime            = 0:22:21.99
  train_samples_per_second =      2.437
  train_steps_per_second   =      0.306
Figure saved at: saves/llama3-8b/lora/sft/training_loss.png
</code></pre>
<p>During training, checkpoints are saved regularly, so you can interrupt the training if necessary and continue later. For me, the running training in the terminal window, together with the DGX dashboard, looked as shown in the following image.</p>
<div class="wp-caption alignnone"><a href="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-CLI-running-1024x689.png"><img src="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-Container-CLI-running-1024x689.png" alt="GIGABYTE AI TOP ATOM - LLaMA Factory Docker Container CLI running training" width="1024" height="689" /></a><p class="wp-caption-text">GIGABYTE AI TOP ATOM &#8211; LLaMA Factory Docker Container CLI running training</p></div>
<p>The training was successfully completed after about 40 minutes. The result then looked like this:</p>
<div class="wp-caption alignnone"><a href="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-training-completed-1024x868.png"><img src="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-training-completed-1024x868.png" alt="GIGABYTE AI TOP ATOM - LLaMA Factory Docker training completed" width="1024" height="868" /></a><p class="wp-caption-text">GIGABYTE AI TOP ATOM &#8211; LLaMA Factory Docker training completed</p></div>
<p>The following image shows the training loss curve for the run with the test data.</p>
<div class="wp-caption alignnone"><a href="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-training-loss.png"><img src="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-Docker-training-loss.png" alt="GIGABYTE AI TOP ATOM - LLaMA Factory Docker training loss" width="640" height="480" /></a><p class="wp-caption-text">GIGABYTE AI TOP ATOM &#8211; LLaMA Factory Docker training loss</p></div>
<p>If you have followed the instructions so far, you will find the generated training artifacts on your machine under the following path.</p>
<p><strong>Path:</strong> /home/&lt;user&gt;/LLaMA-Factory/saves/llama3-8b/lora/sft</p>
<h3>Phase 8: Validate Training Results</h3>
<p>After training, I check whether everything was successful and the checkpoints were saved:</p>
<p><strong>Command:</strong> <code>ls -la saves/llama3-8b/lora/sft/</code></p>
<p>You should see:</p>
<ul>
<li>A checkpoint directory (e.g., <code>checkpoint-21</code>)</li>
<li>Model configuration files (<code>adapter_config.json</code>)</li>
<li>Training metrics with decreasing loss values</li>
<li>A training loss diagram as a PNG file</li>
</ul>
<p>The checkpoints contain your customized model and can be used later for inference or export.</p>
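<p>If you want to look a little deeper, the adapter configuration and the logged metrics are plain JSON files. A small sketch; the file names below are what my run produced, and they may differ slightly between LLaMA Factory versions:</p>
<pre><code>cd saves/llama3-8b/lora/sft

# LoRA settings that were actually used (rank, alpha, target modules)
cat adapter_config.json

# aggregated metrics of the finished run
cat all_results.json

# per-step loss values as JSON lines, useful for your own plots
tail -n 5 trainer_log.jsonl
</code></pre>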
data-path-to-node=\"28,1,0\">Model configuration files (<code>adapter_config.json<\/code>)<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"28,2,0\">Training metrics with decreasing loss values<\/p>\n<\/li>\n<li>\n<p data-path-to-node=\"28,3,0\">A training loss diagram as a PNG file<\/p>\n<\/li>\n<\/ul>\n<p data-path-to-node=\"25\">The checkpoints contain your customized model and can be used later for inference or export.<\/p>\n<h3 data-path-to-node=\"39\"><span class=\"ez-toc-section\" id=\"Phase_9_Test_Fine-Tuned_Model\"><\/span>Phase 9: Test Fine-Tuned Model<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"40\">Now I test the customized model with my own prompt:<\/p>\n<p data-path-to-node=\"40\"><strong>Command:<\/strong> <code>llamafactory-cli chat examples\/inference\/llama3_lora_sft.yaml<\/code><\/p>\n<p data-path-to-node=\"40\">This command starts an interactive chat with your fine-tuned model. You can now ask questions and see how the model behaves after training. For example:<\/p>\n<p data-path-to-node=\"40\"><strong>Input:<\/strong> <code>Hello, how can you help me today?<\/code><\/p>\n<p data-path-to-node=\"40\">Here is the result of the short test as an image.<\/p>\n<div id=\"attachment_1969\" style=\"width: 1034px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-checkpoint-test-1024x411.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-1969\" class=\"wp-image-1969 size-large\" src=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-checkpoint-test-1024x411.png\" alt=\"GIGABYTE AI TOP ATOM - LLaMA Factory checkpoint test\" width=\"1024\" height=\"411\" srcset=\"https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-checkpoint-test-1024x411.png 1024w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-checkpoint-test-300x120.png 300w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-checkpoint-test-768x308.png 768w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-checkpoint-test-1536x617.png 1536w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-checkpoint-test-1080x434.png 1080w, https:\/\/ai-box.eu\/wp-content\/uploads\/2025\/12\/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-checkpoint-test.png 1676w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><p id=\"caption-attachment-1969\" class=\"wp-caption-text\">GIGABYTE AI TOP ATOM &#8211; LLaMA Factory checkpoint test<\/p><\/div>\n<p data-path-to-node=\"40\">The model should give a response that shows the customized behavior. To end the chat, press <code>Ctrl+C<\/code>.<\/p>\n<h3 data-path-to-node=\"47\"><span class=\"ez-toc-section\" id=\"Phase_10_Start_LLaMA_Factory_Web_Interface\"><\/span>Phase 10: Start LLaMA Factory Web Interface<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p data-path-to-node=\"48\">LLaMA Factory also offers a user-friendly web interface that allows training and management of models via the browser. To start the web interface:<\/p>\n<p data-path-to-node=\"48\"><strong>Command:<\/strong> <code>llamafactory-cli webui<\/code><\/p>\n<p data-path-to-node=\"48\">The web interface starts and is reachable by default at <code>http:\/\/localhost:7860<\/code>. 
<h3>Phase 10: Start LLaMA Factory Web Interface</h3>
<p>LLaMA Factory also offers a user-friendly web interface that allows training and managing models via the browser. To start the web interface:</p>
<p><strong>Command:</strong> <code>llamafactory-cli webui</code></p>
<p>The web interface starts and is reachable by default at <code>http://localhost:7860</code>. To make it reachable from the network as well, use:</p>
<p><strong>Command:</strong> <code>llamafactory-cli webui --host 0.0.0.0 --port 7860</code></p>
<p><strong>Note:</strong> Pay attention to how the Docker container was started, specifically to the parameter <code>-p 7862:7860</code>, which forwards host port 7862 to port 7860 inside the container. The web UI must therefore listen on container port 7860: the terminal output accordingly reports port 7860, but from outside the container LLaMA Factory is reachable via port 7862.</p>
<p>Now you can access the web interface from any computer in the network. Open <code>http://&lt;IP-Address-AI-TOP-ATOM&gt;:7862</code> in your browser (replace <code>&lt;IP-Address-AI-TOP-ATOM&gt;</code> with the IP address of your AI TOP ATOM).</p>
<div class="wp-caption alignnone"><a href="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-web-interface-1024x560.png"><img src="https://ai-box.eu/wp-content/uploads/2025/12/GIGABYTE_AI_TOP_ATOM-LLaMA-Factory-web-interface-1024x560.png" alt="GIGABYTE AI TOP ATOM - LLaMA Factory Web-Interface" width="1024" height="560" /></a><p class="wp-caption-text">GIGABYTE AI TOP ATOM &#8211; LLaMA Factory Web-Interface</p></div>
<p>If a firewall is active, you must open port 7862:</p>
<p><strong>Command:</strong> <code>sudo ufw allow 7862</code></p>
<p>In the web interface you can:</p>
<ul>
<li>Train and fine-tune models</li>
<li>Upload and manage datasets</li>
<li>Monitor training progress</li>
<li>Test and export models</li>
</ul>
<p><strong>Note:</strong> The web interface runs in the foreground. To run it in the background, you can use <code>screen</code> or <code>tmux</code>, or start the container in detached mode and run the web interface there.</p>
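<p>With <code>tmux</code>, for example, this looks as follows (a sketch; the session name <code>llamafactory</code> is arbitrary):</p>
<pre><code># start a named tmux session and launch the web UI inside it
tmux new-session -d -s llamafactory \
  'llamafactory-cli webui --host 0.0.0.0 --port 7860'

# re-attach later to check on it
tmux attach -t llamafactory
# detach again with Ctrl+B, then D
</code></pre>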
<h3>Phase 11: Configuring Automatic Restart of the Web-UI (Reboot-Proof)</h3>
<p>To ensure that LLaMA Factory starts automatically after a system reboot without manual intervention, you can configure a Docker <strong>restart policy</strong> and use a combined start command. This turns your AI TOP ATOM into a reliable server. First, stop and remove the existing container:</p>
<p><strong>Command:</strong> <code>docker stop llama-factory &amp;&amp; docker rm llama-factory</code></p>
<p>Now restart the container with the <code>--restart unless-stopped</code> policy. We also use a bash command to make sure the LLaMA Factory package and its dependencies are installed in the fresh container before the Web-UI launches:</p>
<p><strong>Command:</strong> <code>docker run --gpus all --ipc=host --ulimit memlock=-1 -d --ulimit stack=67108864 --name llama-factory --restart unless-stopped -p 7862:7860 -v "$PWD":/workspace -w /workspace/LLaMA-Factory nvcr.io/nvidia/pytorch:25.11-py3 bash -c "pip install -e '.[metrics]' &amp;&amp; llamafactory-cli webui --host 0.0.0.0 --port 7860"</code></p>
<p>The parameter <code>-d</code> (detached) runs the container in the background. You can check the logs at any time to monitor the startup process or the training progress:</p>
<p><strong>Command:</strong> <code>docker logs -f llama-factory</code></p>
<p>With this setup, the web interface will be available at <code>http://&lt;IP-Address&gt;:7862</code> every time you power on your Gigabyte AI TOP ATOM.</p>
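<p>To verify that the policy is actually in place, or to change it on an existing container without recreating it, Docker's inspect and update commands help (a quick sketch):</p>
<pre><code># show the restart policy of the container
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' llama-factory

# change the policy on an existing container without recreating it
docker update --restart unless-stopped llama-factory
</code></pre>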
<h3>Phase 12: Export Model for Production</h3>
<p>For productive use, you can export your fine-tuned model. This merges the base model with the LoRA adapters into a single model. To do this, run the following command in the Docker container:</p>
<p><strong>Command:</strong> <code>llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml</code></p>
<p>The exported model can then be used in other applications such as Ollama or vLLM. Depending on the model size, the export can take several minutes.</p>
<h3>Troubleshooting: Common Problems and Solutions</h3>
<p>In my time with LLaMA Factory on the AI TOP ATOM, I have encountered some typical problems. Here are the most common ones and how I solved them:</p>
<ul>
<li><strong>CUDA out of memory during training:</strong> The batch size is too large for the available GPU memory. Reduce <code>per_device_train_batch_size</code> in the configuration file or increase <code>gradient_accumulation_steps</code> (see the sketch after this list).</li>
<li><strong>Access to gated repository not possible:</strong> Certain Hugging Face models have access restrictions. Re-generate your <a href="https://huggingface.co/docs/hub/en/security-tokens" target="_blank" rel="noopener">Hugging Face token</a> and request access to the <a href="https://huggingface.co/docs/hub/en/models-gated#customize-requested-information" target="_blank" rel="noopener">gated model</a> in the browser.</li>
<li><strong>Model download fails or is slow:</strong> Check the internet connection. If you already have cached models, you can set <code>HF_HUB_OFFLINE=1</code>.</li>
<li><strong>Training loss does not decrease:</strong> The learning rate might be too high or too low. Adjust the <code>learning_rate</code> parameter or check the quality of your dataset.</li>
<li><strong>Docker container does not start:</strong> Check whether Docker is correctly installed and whether <code>--gpus all</code> is supported. On some systems, the docker group must be configured.</li>
<li><strong>Memory problems despite sufficient RAM:</strong> On the DGX Spark platform with its Unified Memory Architecture, you can manually clear the buffer cache in case of memory problems:</li>
</ul>
<pre><code>sudo sh -c 'sync; echo 3 &gt; /proc/sys/vm/drop_caches'
</code></pre>
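<p>For the out-of-memory case, keep in mind that the effective batch size is <code>per_device_train_batch_size</code> multiplied by <code>gradient_accumulation_steps</code>, so you can trade GPU memory for runtime without changing the optimization much. A hedged sketch, again using the copied config from the earlier sketch (hypothetical file name, values illustrative):</p>
<pre><code># Sketch: lower the per-device batch size and raise the accumulation
# steps to compensate; the effective batch size is the product of both
# (here 1 x 16 = 16)
sed -i 's/^per_device_train_batch_size:.*/per_device_train_batch_size: 1/' my_llama3_lora_sft.yaml
sed -i 's/^gradient_accumulation_steps:.*/gradient_accumulation_steps: 16/' my_llama3_lora_sft.yaml
</code></pre>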
<h3>Exiting and Restarting the Container</h3>
<p>If you want to leave the container (e.g., to free up resources), you can simply enter <code>exit</code>. The container is preserved because we did not use the <code>--rm</code> parameter. Your data in the mounted workspace directory is also preserved.</p>
<p>To get back into the container later, there are several possibilities:</p>
<ul>
<li><strong>Container is stopped:</strong> Start it with <code>docker start -ai llama-factory</code>. This starts the container and connects you directly to the interactive session.</li>
<li><strong>Container is already running:</strong> Connect with <code>docker exec -it llama-factory bash</code>. This opens a new bash session in the running container.</li>
<li><strong>After a restart:</strong> First check the status with <code>docker ps -a | grep llama-factory</code>. If the container is stopped, start it with <code>docker start -ai llama-factory</code>.</li>
</ul>
<p>To check the status of all containers:</p>
<p><strong>Command:</strong> <code>docker ps -a</code></p>
<p>To stop the container (without deleting it):</p>
<p><strong>Command:</strong> <code>docker stop llama-factory</code></p>
<p>To completely remove the container (all data inside the container will be lost, but not the data in the mounted workspace):</p>
<p><strong>Command:</strong> <code>docker rm llama-factory</code></p>
<p>If you want to recreate the container after removing it, simply use the command from Phase 2 again.</p>
<h3>Rollback: Removing LLaMA Factory Again</h3>
<p>If you want to completely remove LLaMA Factory from the AI TOP ATOM, execute the following commands on the system:</p>
<p>First, leave the container (if you are still inside):</p>
<p><strong>Command:</strong> <code>exit</code></p>
<p>Stop the container (if it is still running):</p>
<p><strong>Command:</strong> <code>docker stop llama-factory</code></p>
<p>Remove the container:</p>
<p><strong>Command:</strong> <code>docker rm llama-factory</code></p>
<p>Then remove the workspace directory:</p>
<p><strong>Command:</strong> <code>rm -rf ~/llama-factory-workspace</code></p>
<p>To also remove unused Docker containers and images:</p>
<p><strong>Command:</strong> <code>docker system prune -f</code></p>
<p>To remove only containers (images are preserved):</p>
<p><strong>Command:</strong> <code>docker container prune -f</code></p>
<blockquote>
<p><strong>Important Note:</strong> These commands remove all training data, checkpoints, and models. Make sure you really want to remove everything before executing them. The checkpoints contain your customized model and cannot be easily restored.</p>
</blockquote>
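<p>If you are not completely sure, it is worth archiving the checkpoints before deleting anything. A small sketch; adjust the path to where your saves directory actually lives on your system:</p>
<pre><code># Sketch: archive the LoRA checkpoints before removing the workspace
tar czf ~/llama3-lora-sft-backup.tar.gz \
  -C ~/llama-factory-workspace/LLaMA-Factory saves/llama3-8b/lora/sft
</code></pre>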
<h2>Summary &amp; Conclusion</h2>
<p>The installation of LLaMA Factory on the Gigabyte AI TOP ATOM is surprisingly straightforward thanks to its compatibility with the NVIDIA DGX Spark Playbooks. In about 30 to 60 minutes, I had LLaMA Factory set up and can now adapt my own language models to specific tasks.</p>
<p>What particularly excites me: the performance of the Blackwell GPU is fully utilized, and the Docker-based installation makes the setup much easier than a manual installation. LLaMA Factory provides a unified interface for various fine-tuning methods, so you can quickly switch between LoRA, QLoRA, and full fine-tuning.</p>
<p>I also find it particularly practical that training checkpoints are saved automatically. This makes it possible to interrupt the training if necessary and continue it later. The training metrics are saved as well, so you can track the progress precisely.</p>
<p>For teams or developers who want to adapt their own language models, this is a perfect solution: a central server with full GPU power on which you can train models for specific domains. The exported models can then be used in other applications such as Ollama or vLLM.</p>
<p>If you have any questions or encounter problems, feel free to check the <a href="https://docs.nvidia.com/dgx/dgx-spark/" target="_blank" rel="noopener">official NVIDIA DGX Spark documentation</a>, the <a href="https://github.com/hiyouga/LLaMA-Factory" target="_blank" rel="noopener">LLaMA Factory GitHub repository</a>, or the <a href="https://llamafactory.readthedocs.io/en/latest/getting_started/data_preparation.html" target="_blank" rel="noopener">LLaMA Factory ReadTheDocs</a>. The community is very helpful, and most problems can be solved quickly.</p>
<h3>Next Step: Preparing Your Own Datasets and Adapting Models</h3>
<p>You have now successfully installed LLaMA Factory and performed an initial training run. The basic installation works, but that is just the beginning. The next step is to prepare your own datasets for specific use cases.</p>
<p>LLaMA Factory supports various dataset formats, including JSON files for instruction tuning. You can adapt your models for code generation, medical applications, corporate knowledge, or other specific domains. The documentation shows you how to prepare your data in the correct format.</p>
<p>Good luck experimenting with LLaMA Factory on your Gigabyte AI TOP ATOM. I am excited to see which customized models you develop with it! Let me and my readers know here in the comments.</p>
<blockquote>
<p><strong>Go back to Part 1 of the setup and configuration manual here.</strong></p>
<p><strong><a href="https://ai-box.eu/en/large-language-models-en/installing-llama-factory-on-gigabyte-ai-top-atom-fine-tuning-language-models-with-lora-and-qlora/1945/">Installing LLaMA Factory on Gigabyte AI TOP ATOM: Fine-tuning Language Models with LoRA and QLoRA &#8211; Part 1-2</a></strong></p>
</blockquote>