Ecosystem
August 11, 2025

Fine-tune OpenAI GPT-OSS models on Amazon SageMaker using Hugging Face libraries

AWS now supports fine-tuning of OpenAI’s GPT-OSS models on SageMaker using Hugging Face’s TRL library, leveraging LoRA, MXFP4 quantization, and distributed training tools like DeepSpeed and Accelerate.

AWS published detailed guidelines on fine-tuning OpenAI's gpt-oss-120b and gpt-oss-20b models using SageMaker AI and Hugging Face's TRL framework. The tutorial highlights efficiency strategies including LoRA (Low-Rank Adaptation), MXFP4 (a 4-bit quantization format), and distributed training with Hugging Face Accelerate and DeepSpeed ZeRO-3 for scalable performance.
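To see why LoRA cuts training cost so sharply, consider the core idea: the base weight matrix stays frozen, and only two small low-rank factors are trained. The sketch below is a minimal NumPy illustration of that arithmetic (the dimensions and rank are hypothetical, not taken from the AWS tutorial):

```python
import numpy as np

# Illustration of the LoRA idea: instead of updating a full d x d
# weight matrix, train two small low-rank factors B (d x r) and
# A (r x d) so the effective weight is W + B @ A.
d, r = 4096, 16          # hypothetical hidden size and LoRA rank

full_update_params = d * d            # trainable params without LoRA
lora_params = d * r + r * d           # trainable params with LoRA

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d)).astype(np.float32)  # frozen base weight
B = np.zeros((d, r), dtype=np.float32)              # zero init: no change at step 0
A = rng.standard_normal((r, d)).astype(np.float32) * 0.01

W_effective = W + B @ A   # the forward pass uses the adapted weight

print(f"full fine-tune params: {full_update_params:,}")
print(f"LoRA params:           {lora_params:,}")
print(f"reduction:             {full_update_params / lora_params:.0f}x")
```

At rank 16 on a 4096-wide layer, the trainable parameter count drops by a factor of 128, which is why LoRA fits large-model fine-tuning onto far smaller GPU budgets.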

These approaches help manage compute and memory costs without sacrificing model accuracy.
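The memory savings from MXFP4 come from storing each weight in 4 bits plus one shared scale per small block of values. The snippet below is a simplified sketch of that block-scaled scheme (real MXFP4 uses an FP4 E2M1 code with power-of-two block scales; this uses a uniform 4-bit grid purely to illustrate the memory arithmetic):

```python
import numpy as np

# Simplified block-scaled 4-bit quantization: 4 bits per value plus
# one scale byte per block of 32 values. NOT the exact MXFP4 format.
BLOCK = 32

def quantize_blocks(x):
    x = x.reshape(-1, BLOCK)
    scale = np.abs(x).max(axis=1, keepdims=True) / 7.0  # per-block scale
    scale[scale == 0] = 1.0
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_blocks(q, scale):
    return (q * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale = quantize_blocks(w)
w_hat = dequantize_blocks(q, scale)

fp32_bytes = w.size * 4
packed_bytes = w.size // 2 + w.size // BLOCK  # 4 bits/value + 1 scale byte/block
print(f"fp32: {fp32_bytes} B, block-scaled 4-bit: {packed_bytes} B")
```

For this hypothetical 1024-weight tensor the footprint drops from 4096 bytes to 544, roughly a 7.5x reduction, while the dequantized values stay close to the originals.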

SageMaker’s managed infrastructure, along with built-in tools for experiment tracking, model governance, and secure deployment, makes it enterprise-ready for production-grade LLM customization.

#AWS
