Ecosystem
August 11, 2025

Fine-tune OpenAI GPT-OSS models on Amazon SageMaker using Hugging Face libraries

AWS has published guidance for fine-tuning OpenAI's GPT-OSS models on SageMaker with Hugging Face's TRL library, leveraging LoRA, MXFP4 quantization, and distributed-training tools such as DeepSpeed and Accelerate.

AWS published a detailed walkthrough of fine-tuning OpenAI's gpt-oss-120b and gpt-oss-20b models using SageMaker AI and Hugging Face's TRL library. The tutorial highlights efficiency strategies including LoRA (Low-Rank Adaptation), MXFP4 (a 4-bit microscaling floating-point format used by the gpt-oss checkpoints), and distributed training with Hugging Face Accelerate and DeepSpeed ZeRO-3 for scalable performance.
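For reference, below is a minimal sketch of the LoRA path with TRL's SFTTrainer. The dataset choice, hyperparameters, and the MXFP4 dequantize step follow the general public Hugging Face gpt-oss recipe rather than the AWS tutorial's exact settings, so treat every value as illustrative.

```python
# Minimal LoRA fine-tuning sketch for gpt-oss-20b with TRL.
# All hyperparameters and the dataset choice are illustrative.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, Mxfp4Config
from trl import SFTConfig, SFTTrainer

# gpt-oss checkpoints store MoE weights in MXFP4; dequantizing to bf16
# makes them trainable (assumption: mirrors the public HF recipe).
model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    quantization_config=Mxfp4Config(dequantize=True),
    torch_dtype="bfloat16",
    use_cache=False,
)

# LoRA trains small low-rank adapters instead of the full weight matrices,
# shrinking trainable parameters and optimizer state by orders of magnitude.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(
        output_dir="gpt-oss-20b-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        bf16=True,
    ),
    train_dataset=load_dataset("trl-lib/Capybara", split="train"),
    peft_config=peft_config,
)
trainer.train()
```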

These approaches help contain compute and memory costs with minimal impact on model quality.
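One concrete memory lever is DeepSpeed ZeRO-3, which shards parameters, gradients, and optimizer state across GPUs. As a hedged sketch, the Hugging Face Trainer (and therefore TRL's SFTConfig) accepts a DeepSpeed config directly; the values below are generic defaults, not the tutorial's:

```python
from trl import SFTConfig

# ZeRO stage 3 shards parameters, gradients, and optimizer state across
# ranks, so each GPU holds only a slice of the full training state.
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": True,
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",  # "auto" defers to Trainer args
    "gradient_accumulation_steps": "auto",
}

training_args = SFTConfig(
    output_dir="gpt-oss-120b-lora",
    bf16=True,
    deepspeed=ds_config,  # native Hugging Face Trainer/DeepSpeed integration
)
```

Launching the training script through `accelerate launch` with a matching Accelerate config then handles process spawning across the available GPUs.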

SageMaker's managed infrastructure, together with built-in tools for experiment tracking, model governance, and secure deployment, makes it well suited to production-grade LLM customization in the enterprise.
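To make that concrete, here is an assumed sketch of submitting the training script as a SageMaker job with the HuggingFace estimator from the SageMaker Python SDK; the entry point, instance type, and container version pins are placeholders to adapt to your account and region:

```python
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()  # IAM role for the training job

estimator = HuggingFace(
    entry_point="train.py",            # hypothetical script wrapping the TRL code above
    source_dir="scripts",
    role=role,
    instance_type="ml.p4d.24xlarge",   # 8x NVIDIA A100; size to the model variant
    instance_count=1,
    transformers_version="4.36.0",     # illustrative pins; pick a DLC (or custom
    pytorch_version="2.1.0",           # image) whose transformers release
    py_version="py310",                # supports the gpt-oss architecture
    hyperparameters={"model_id": "openai/gpt-oss-20b", "epochs": 1},
)

estimator.fit()  # SageMaker provisions, runs, and tears down the cluster
```

The estimator packages the script, pulls a managed container, and runs the job on ephemeral instances, which is where the experiment-tracking and governance tooling mentioned above attaches.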
