Ecosystem
August 11, 2025

Fine-tune OpenAI GPT-OSS models on Amazon SageMaker using Hugging Face libraries

AWS now supports fine-tuning of OpenAI’s GPT-OSS models on SageMaker using Hugging Face’s TRL library, leveraging LoRA, MXFP4 quantization, and distributed training tools like DeepSpeed and Accelerate.

AWS published a detailed guide on fine-tuning OpenAI’s gpt-oss-120b and gpt-oss-20b models using SageMaker AI and Hugging Face’s TRL framework. The tutorial highlights efficiency-focused techniques, including LoRA (Low-Rank Adaptation), MXFP4 (a 4-bit microscaling floating-point quantization format), and distributed training with Hugging Face Accelerate and DeepSpeed ZeRO-3.
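A minimal sketch of that LoRA-plus-MXFP4 recipe is below. It assumes a recent transformers release that ships Mxfp4Config for the gpt-oss checkpoints, plus recent TRL, PEFT, and datasets; the dataset, LoRA rank, and batch settings are illustrative placeholders, not values taken from the AWS tutorial.

```python
# Illustrative sketch, not the AWS tutorial's exact script.
# Assumes: a transformers release with Mxfp4Config (gpt-oss support),
# recent trl/peft/datasets, and a GPU host that fits gpt-oss-20b in bf16.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, Mxfp4Config
from trl import SFTConfig, SFTTrainer

model_id = "openai/gpt-oss-20b"

# gpt-oss checkpoints ship with MXFP4-quantized MoE weights; dequantizing
# to bf16 at load time makes the weights trainable with LoRA adapters.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    quantization_config=Mxfp4Config(dequantize=True),
    use_cache=False,
)

# Low-rank adapters on the attention projections; rank and target modules
# are illustrative hyperparameters, not the tutorial's values.
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Placeholder dataset: any chat-formatted SFT dataset works here.
train_dataset = load_dataset("HuggingFaceH4/no_robots", split="train")

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(
        output_dir="gpt-oss-20b-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        bf16=True,
        num_train_epochs=1,
    ),
    train_dataset=train_dataset,
    peft_config=peft_config,
)
trainer.train()
```

Because only the low-rank adapter weights are updated, the optimizer state stays small even at the 120b scale, which is what makes single-node fine-tuning of these models practical at all.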

These approaches help manage compute and memory costs without sacrificing model accuracy.
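To make the memory side concrete, here is a hedged sketch of how DeepSpeed ZeRO-3 can be wired into the same trainer through the standard `deepspeed` argument that SFTConfig inherits from transformers' TrainingArguments. The partitioning settings below are generic defaults, not figures from the AWS tutorial.

```python
# Illustrative ZeRO-3 settings; the Hugging Face Trainer accepts a DeepSpeed
# config as a dict (or a JSON file path) via the `deepspeed` argument.
from trl import SFTConfig

ds_zero3 = {
    "zero_optimization": {
        "stage": 3,            # partition params, gradients, and optimizer state
        "overlap_comm": True,  # overlap communication with computation
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",  # "auto" defers to Trainer values
    "gradient_accumulation_steps": "auto",
}

training_args = SFTConfig(
    output_dir="gpt-oss-20b-lora",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
    deepspeed=ds_zero3,  # run under `accelerate launch` or `torchrun` across GPUs
)
```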

SageMaker’s managed infrastructure, along with built-in tools for experiment tracking, model governance, and secure deployment, makes it enterprise-ready for production-grade LLM customization.
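As a sketch of how such a job lands on that managed infrastructure, the snippet below uses the SageMaker Python SDK's HuggingFace estimator. The entry point, instance type, framework versions, and S3 paths are placeholders to be checked against the AWS post and the versions the SDK currently supports.

```python
# Hedged sketch of launching the fine-tuning script as a managed SageMaker
# training job. Entry point, instance type, framework versions, and S3 URIs
# are placeholders, not values from the AWS tutorial.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

estimator = HuggingFace(
    entry_point="train.py",           # the TRL fine-tuning script sketched above
    source_dir="scripts",             # holds train.py and its requirements.txt
    instance_type="ml.p4d.24xlarge",  # 8x A100; choose per model size and budget
    instance_count=1,
    role=role,
    transformers_version="4.49",      # placeholder; pick a gpt-oss-capable version
    pytorch_version="2.5",            # placeholder
    py_version="py311",               # placeholder
    hyperparameters={"epochs": 1, "model_id": "openai/gpt-oss-20b"},
)

# Train on a dataset previously uploaded to S3 (placeholder URI).
estimator.fit({"train": "s3://my-bucket/gpt-oss-sft/train"})
```

Running through the estimator is what plugs the job into SageMaker's experiment tracking, model registry, and access controls, rather than leaving those concerns to the training script.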
