Ecosystem
August 11, 2025

Fine-tune OpenAI GPT-OSS models on Amazon SageMaker using Hugging Face libraries

AWS now supports fine-tuning OpenAI’s GPT-OSS models on Amazon SageMaker with Hugging Face’s TRL library, using LoRA adapters, MXFP4 quantization, and distributed-training tools such as DeepSpeed and Accelerate.

AWS published detailed guidance on fine-tuning OpenAI’s gpt-oss-120b and gpt-oss-20b models using SageMaker AI and Hugging Face’s TRL framework. The tutorial highlights efficiency strategies including LoRA (Low-Rank Adaptation), MXFP4 (4-bit microscaling quantization), and distributed training with Hugging Face Accelerate and DeepSpeed ZeRO-3 for scalable performance.
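To see why LoRA makes fine-tuning models of this size tractable, it helps to count parameters: instead of updating a frozen weight matrix W of shape d×k, LoRA trains two small matrices B (d×r) and A (r×k) whose product is added to W. The sketch below illustrates the savings in plain Python; the 2880×2880 projection size and rank 16 are illustrative assumptions, not figures from the AWS tutorial.

```python
def lora_trainable_params(d: int, k: int, r: int) -> tuple[int, int]:
    """Compare trainable parameter counts for one weight matrix.

    Full fine-tuning updates all d*k entries of W.
    LoRA freezes W and trains only the low-rank factors
    B (d x r) and A (r x k), so the effective weight is
    W + (alpha / r) * B @ A.
    """
    full_params = d * k          # every entry of W is trainable
    lora_params = r * (d + k)    # only B and A are trainable
    return full_params, lora_params

# Hypothetical transformer projection: 2880 x 2880, LoRA rank 16.
full, lora = lora_trainable_params(d=2880, k=2880, r=16)
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.2%}")
```

At rank 16 this trains roughly 1% of the matrix's parameters, which is what keeps optimizer state and gradient memory small enough to fine-tune large models on modest GPU fleets.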

These approaches help manage compute and memory costs without sacrificing model accuracy.
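MXFP4 is one of those memory-saving levers: each small block of weights shares a single power-of-two scale, and every element is stored as a 4-bit E2M1 float. The sketch below is a simplified illustration of that idea in pure Python (real MXFP4 kernels pack bits and use 32-element blocks; the rounding and scale choice here are assumptions for clarity, not the exact production algorithm).

```python
import math

# The 8 non-negative values representable by a 4-bit E2M1 float
# (sign bit, 2 exponent bits, 1 mantissa bit).
E2M1_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def mxfp4_quantize_block(block: list[float]) -> list[float]:
    """Quantize then dequantize one block, MXFP4-style.

    All elements share one power-of-two scale; each element is
    rounded to the nearest E2M1 value. Returns the reconstructed
    (lossy) floats so the quantization error is easy to inspect.
    """
    amax = max(abs(v) for v in block)
    if amax == 0.0:
        return [0.0] * len(block)
    # Pick a power-of-two scale so the largest magnitude lands
    # near the top of the E2M1 range (max representable is 6).
    scale = 2.0 ** (math.floor(math.log2(amax)) - 2)
    out = []
    for v in block:
        mag = min(abs(v) / scale, 6.0)            # clamp to E2M1 range
        q = min(E2M1_GRID, key=lambda g: abs(g - mag))  # round to grid
        out.append(math.copysign(q * scale, v))
    return out

print(mxfp4_quantize_block([4.0, -2.0, 1.0, 0.5]))
```

Values that already sit on the scaled E2M1 grid survive the round trip exactly; everything else is rounded, which is why 4-bit quantization trades a small accuracy loss for a roughly 4x cut in weight memory versus bf16.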

SageMaker’s managed infrastructure, combined with built-in tools for experiment tracking, model governance, and secure deployment, makes it suitable for production-grade LLM customization in the enterprise.
