# Auto Classes[[auto-classes]]

In many cases, the architecture you want to use can be inferred from the name or path of the pretrained model you supply to the `from_pretrained()` method. AutoClasses exist to do this job for you, so that you automatically retrieve the relevant model given the name/path of the pretrained weights/config/vocabulary.

Instantiating one of [AutoConfig](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoConfig), [AutoModel](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel), or [AutoTokenizer](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoTokenizer) directly creates a class of the relevant architecture. For instance,

```python
model = AutoModel.from_pretrained("google-bert/bert-base-cased")
```

The code above creates a model that is an instance of [BertModel](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertModel).

There is one `AutoModel` class for each task.
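
For example, the task-specific classes let you load a model with the right head without knowing the underlying architecture. A minimal sketch, assuming the public `distilbert-base-uncased-finetuned-sst-2-english` checkpoint is available:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# The concrete tokenizer and model classes are resolved from the checkpoint's configuration.
name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Auto classes make this easy!", return_tensors="pt")
logits = model(**inputs).logits  # shape: (batch_size, num_labels)
```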

## Extending the Auto Classes[[extending-the-auto-classes]]

Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a custom model class named `NewModel`, prepare a `NewModelConfig` and then add them to the auto classes like this:

```python
from transformers import AutoConfig, AutoModel

AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)
```

You will then be able to use the auto classes as you usually would!

If your `NewModelConfig` is a subclass of [PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), make sure its `model_type` attribute is set to the same key you use when registering the config (here `"new-model"`).

Likewise, if your `NewModel` is a subclass of [PreTrainedModel](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel), make sure its `config_class` attribute is set to the same class you use when registering the model (here `NewModelConfig`).
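
Putting these requirements together, an end-to-end sketch might look like the following, using the `PreTrainedConfig`/`PreTrainedModel` base classes as this page links them. The `NewModel` architecture itself is purely hypothetical; only the registration pattern matters:

```python
import torch.nn as nn

from transformers import AutoConfig, AutoModel, PreTrainedConfig, PreTrainedModel


class NewModelConfig(PreTrainedConfig):
    model_type = "new-model"  # must match the key passed to AutoConfig.register

    def __init__(self, hidden_size=64, **kwargs):
        self.hidden_size = hidden_size
        super().__init__(**kwargs)


class NewModel(PreTrainedModel):
    config_class = NewModelConfig  # must match the config class passed to AutoModel.register

    def __init__(self, config):
        super().__init__(config)
        self.layer = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, hidden_states):
        return self.layer(hidden_states)


AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)

# Round trip through the auto classes:
NewModel(NewModelConfig()).save_pretrained("./new-model")
model = AutoModel.from_pretrained("./new-model")  # resolves to NewModel
```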

## AutoConfig[[transformers.AutoConfig]]

#### transformers.AutoConfig[[transformers.AutoConfig]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/configuration_auto.py#L1373)

This is a generic configuration class that will be instantiated as one of the configuration classes of the library
when created with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoConfig.from_pretrained) class method.

This class cannot be instantiated directly using `__init__()` (throws an error).
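
In practice, calling the constructor directly raises right away. A tiny illustration (the exact error message may vary across versions):

```python
from transformers import AutoConfig

try:
    AutoConfig()  # not supported; use AutoConfig.from_pretrained(...) instead
except EnvironmentError as err:
    print(err)
```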

#### from_pretrained[[transformers.AutoConfig.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/configuration_auto.py#L1396)

- **pretrained_model_name_or_path** (`str` or `os.PathLike`) --
  Can be either:

  - A string, the *model id* of a pretrained model configuration hosted inside a model repo on
    huggingface.co.
  - A path to a *directory* containing a configuration file saved using the
    [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.save_pretrained) method, or the [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) method,
    e.g., `./my_model_directory/`.
  - a path to a saved configuration JSON *file*, e.g.,
    `./my_model_directory/configuration.json`.
- **cache_dir** (`str` or `os.PathLike`, *optional*) --
  Path to a directory in which a downloaded pretrained model configuration should be cached if the
  standard cache should not be used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **proxies** (`dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
  git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
  identifier allowed by git.
- **return_unused_kwargs** (`bool`, *optional*, defaults to `False`) --
  If `False`, then this function returns just the final configuration object.

  If `True`, then this function returns a `Tuple(config, unused_kwargs)` where *unused_kwargs* is a
  dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the
  part of `kwargs` which has not been used to update `config` and is otherwise ignored.
- **trust_remote_code** (`bool`, *optional*, defaults to `False`) --
  Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
  should only be set to `True` for repositories you trust and in which you have read the code, as it will
  execute code present on the Hub on your local machine.
- **kwargs** (additional keyword arguments, *optional*) --
  The values in kwargs of any keys which are configuration attributes will be used to override the loaded
  values. Behavior concerning key/value pairs whose keys are *not* configuration attributes is controlled
  by the `return_unused_kwargs` keyword parameter.

Instantiate one of the configuration classes of the library from a pretrained model configuration.

The configuration class to instantiate is selected based on the `model_type` property of the config object that
is loaded, or when it's missing, by falling back to using pattern matching on `pretrained_model_name_or_path`:

- **afmoe** -- `AfmoeConfig` (AFMoE model)
- **aimv2** -- `Aimv2Config` (AIMv2 model)
- **aimv2_vision_model** -- `Aimv2VisionConfig` (Aimv2VisionModel model)
- **albert** -- [AlbertConfig](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertConfig) (ALBERT model)
- **align** -- `AlignConfig` (ALIGN model)
- **altclip** -- [AltCLIPConfig](/docs/transformers/v5.5.1/ko/model_doc/altclip#transformers.AltCLIPConfig) (AltCLIP model)
- **apertus** -- `ApertusConfig` (Apertus model)
- **arcee** -- `ArceeConfig` (Arcee model)
- **aria** -- `AriaConfig` (Aria model)
- **aria_text** -- `AriaTextConfig` (AriaText model)
- **audio-spectrogram-transformer** -- `ASTConfig` (Audio Spectrogram Transformer model)
- **audioflamingo3** -- `AudioFlamingo3Config` (AudioFlamingo3 model)
- **audioflamingo3_encoder** -- `AudioFlamingo3EncoderConfig` (AudioFlamingo3Encoder model)
- **autoformer** -- [AutoformerConfig](/docs/transformers/v5.5.1/ko/model_doc/autoformer#transformers.AutoformerConfig) (Autoformer model)
- **aya_vision** -- `AyaVisionConfig` (AyaVision model)
- **bamba** -- `BambaConfig` (Bamba model)
- **bark** -- `BarkConfig` (Bark model)
- **bart** -- [BartConfig](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartConfig) (BART model)
- **beit** -- `BeitConfig` (BEiT model)
- **bert** -- [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) (BERT model)
- **bert-generation** -- `BertGenerationConfig` (Bert Generation model)
- **big_bird** -- [BigBirdConfig](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdConfig) (BigBird model)
- **bigbird_pegasus** -- `BigBirdPegasusConfig` (BigBird-Pegasus model)
- **biogpt** -- [BioGptConfig](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptConfig) (BioGpt model)
- **bit** -- `BitConfig` (BiT model)
- **bitnet** -- `BitNetConfig` (BitNet model)
- **blenderbot** -- `BlenderbotConfig` (Blenderbot model)
- **blenderbot-small** -- `BlenderbotSmallConfig` (BlenderbotSmall model)
- **blip** -- [BlipConfig](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipConfig) (BLIP model)
- **blip-2** -- [Blip2Config](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2Config) (BLIP-2 model)
- **blip_2_qformer** -- [Blip2QFormerConfig](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2QFormerConfig) (BLIP-2 QFormer model)
- **bloom** -- `BloomConfig` (BLOOM model)
- **blt** -- `BltConfig` (Blt model)
- **bridgetower** -- `BridgeTowerConfig` (BridgeTower model)
- **bros** -- `BrosConfig` (BROS model)
- **camembert** -- `CamembertConfig` (CamemBERT model)
- **canine** -- `CanineConfig` (CANINE model)
- **chameleon** -- [ChameleonConfig](/docs/transformers/v5.5.1/ko/model_doc/chameleon#transformers.ChameleonConfig) (Chameleon model)
- **chinese_clip** -- `ChineseCLIPConfig` (Chinese-CLIP model)
- **chinese_clip_vision_model** -- `ChineseCLIPVisionConfig` (ChineseCLIPVisionModel model)
- **chmv2** -- `CHMv2Config` (CHMv2 model)
- **clap** -- `ClapConfig` (CLAP model)
- **clip** -- [CLIPConfig](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPConfig) (CLIP model)
- **clip_text_model** -- [CLIPTextConfig](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTextConfig) (CLIPTextModel model)
- **clip_vision_model** -- [CLIPVisionConfig](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPVisionConfig) (CLIPVisionModel model)
- **clipseg** -- [CLIPSegConfig](/docs/transformers/v5.5.1/ko/model_doc/clipseg#transformers.CLIPSegConfig) (CLIPSeg model)
- **clvp** -- `ClvpConfig` (CLVP model)
- **code_llama** -- [LlamaConfig](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaConfig) (CodeLlama model)
- **codegen** -- [CodeGenConfig](/docs/transformers/v5.5.1/ko/model_doc/codegen#transformers.CodeGenConfig) (CodeGen model)
- **cohere** -- [CohereConfig](/docs/transformers/v5.5.1/ko/model_doc/cohere#transformers.CohereConfig) (Cohere model)
- **cohere2** -- `Cohere2Config` (Cohere2 model)
- **cohere2_vision** -- `Cohere2VisionConfig` (Cohere2Vision model)
- **cohere_asr** -- `CohereAsrConfig` (CohereASR model)
- **colmodernvbert** -- `ColModernVBertConfig` (ColModernVBert model)
- **colpali** -- `ColPaliConfig` (ColPali model)
- **colqwen2** -- `ColQwen2Config` (ColQwen2 model)
- **conditional_detr** -- `ConditionalDetrConfig` (Conditional DETR model)
- **convbert** -- [ConvBertConfig](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertConfig) (ConvBERT model)
- **convnext** -- `ConvNextConfig` (ConvNeXT model)
- **convnextv2** -- `ConvNextV2Config` (ConvNeXTV2 model)
- **cpmant** -- `CpmAntConfig` (CPM-Ant model)
- **csm** -- `CsmConfig` (CSM model)
- **ctrl** -- `CTRLConfig` (CTRL model)
- **cvt** -- `CvtConfig` (CvT model)
- **cwm** -- `CwmConfig` (Code World Model (CWM) model)
- **d_fine** -- `DFineConfig` (D-FINE model)
- **dab-detr** -- `DabDetrConfig` (DAB-DETR model)
- **dac** -- `DacConfig` (DAC model)
- **data2vec-audio** -- `Data2VecAudioConfig` (Data2VecAudio model)
- **data2vec-text** -- `Data2VecTextConfig` (Data2VecText model)
- **data2vec-vision** -- `Data2VecVisionConfig` (Data2VecVision model)
- **dbrx** -- [DbrxConfig](/docs/transformers/v5.5.1/ko/model_doc/dbrx#transformers.DbrxConfig) (DBRX model)
- **deberta** -- [DebertaConfig](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaConfig) (DeBERTa model)
- **deberta-v2** -- [DebertaV2Config](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Config) (DeBERTa-v2 model)
- **decision_transformer** -- `DecisionTransformerConfig` (Decision Transformer model)
- **deepseek_v2** -- `DeepseekV2Config` (DeepSeek-V2 model)
- **deepseek_v3** -- [DeepseekV3Config](/docs/transformers/v5.5.1/ko/model_doc/deepseek_v3#transformers.DeepseekV3Config) (DeepSeek-V3 model)
- **deepseek_vl** -- `DeepseekVLConfig` (DeepseekVL model)
- **deepseek_vl_hybrid** -- `DeepseekVLHybridConfig` (DeepseekVLHybrid model)
- **deformable_detr** -- `DeformableDetrConfig` (Deformable DETR model)
- **deit** -- `DeiTConfig` (DeiT model)
- **depth_anything** -- `DepthAnythingConfig` (Depth Anything model)
- **depth_pro** -- `DepthProConfig` (DepthPro model)
- **detr** -- `DetrConfig` (DETR model)
- **dia** -- `DiaConfig` (Dia model)
- **diffllama** -- `DiffLlamaConfig` (DiffLlama model)
- **dinat** -- `DinatConfig` (DiNAT model)
- **dinov2** -- `Dinov2Config` (DINOv2 model)
- **dinov2_with_registers** -- `Dinov2WithRegistersConfig` (DINOv2 with Registers model)
- **dinov3_convnext** -- `DINOv3ConvNextConfig` (DINOv3 ConvNext model)
- **dinov3_vit** -- `DINOv3ViTConfig` (DINOv3 ViT model)
- **distilbert** -- `DistilBertConfig` (DistilBERT model)
- **doge** -- `DogeConfig` (Doge model)
- **donut-swin** -- `DonutSwinConfig` (DonutSwin model)
- **dots1** -- `Dots1Config` (dots1 model)
- **dpr** -- `DPRConfig` (DPR model)
- **dpt** -- `DPTConfig` (DPT model)
- **edgetam** -- `EdgeTamConfig` (EdgeTAM model)
- **edgetam_video** -- `EdgeTamVideoConfig` (EdgeTamVideo model)
- **edgetam_vision_model** -- `EdgeTamVisionConfig` (EdgeTamVisionModel model)
- **efficientloftr** -- `EfficientLoFTRConfig` (EfficientLoFTR model)
- **efficientnet** -- `EfficientNetConfig` (EfficientNet model)
- **electra** -- [ElectraConfig](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraConfig) (ELECTRA model)
- **emu3** -- `Emu3Config` (Emu3 model)
- **encodec** -- `EncodecConfig` (EnCodec model)
- **encoder-decoder** -- [EncoderDecoderConfig](/docs/transformers/v5.5.1/ko/model_doc/encoder-decoder#transformers.EncoderDecoderConfig) (Encoder decoder model)
- **eomt** -- `EomtConfig` (EoMT model)
- **eomt_dinov3** -- `EomtDinov3Config` (EoMT-DINOv3 model)
- **ernie** -- `ErnieConfig` (ERNIE model)
- **ernie4_5** -- `Ernie4_5Config` (Ernie4_5 model)
- **ernie4_5_moe** -- `Ernie4_5_MoeConfig` (Ernie4_5_MoE model)
- **ernie4_5_vl_moe** -- `Ernie4_5_VLMoeConfig` (Ernie4_5_VLMoE model)
- **esm** -- [EsmConfig](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmConfig) (ESM model)
- **eurobert** -- `EuroBertConfig` (EuroBERT model)
- **evolla** -- `EvollaConfig` (Evolla model)
- **exaone4** -- [Exaone4Config](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4Config) (EXAONE-4.0 model)
- **exaone_moe** -- [ExaoneMoeConfig](/docs/transformers/v5.5.1/ko/model_doc/exaone_moe#transformers.ExaoneMoeConfig) (EXAONE-MoE model)
- **falcon** -- `FalconConfig` (Falcon model)
- **falcon_h1** -- `FalconH1Config` (FalconH1 model)
- **falcon_mamba** -- `FalconMambaConfig` (FalconMamba model)
- **fast_vlm** -- `FastVlmConfig` (FastVlm model)
- **fastspeech2_conformer** -- `FastSpeech2ConformerConfig` (FastSpeech2Conformer model)
- **fastspeech2_conformer_with_hifigan** -- `FastSpeech2ConformerWithHifiGanConfig` (FastSpeech2ConformerWithHifiGan model)
- **flaubert** -- `FlaubertConfig` (FlauBERT model)
- **flava** -- `FlavaConfig` (FLAVA model)
- **flex_olmo** -- `FlexOlmoConfig` (FlexOlmo model)
- **florence2** -- `Florence2Config` (Florence2 model)
- **fnet** -- `FNetConfig` (FNet model)
- **focalnet** -- `FocalNetConfig` (FocalNet model)
- **fsmt** -- `FSMTConfig` (FairSeq Machine-Translation model)
- **funnel** -- `FunnelConfig` (Funnel Transformer model)
- **fuyu** -- `FuyuConfig` (Fuyu model)
- **gemma** -- [GemmaConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaConfig) (Gemma model)
- **gemma2** -- [Gemma2Config](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2Config) (Gemma2 model)
- **gemma3** -- [Gemma3Config](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3Config) (Gemma3ForConditionalGeneration model)
- **gemma3_text** -- [Gemma3TextConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3TextConfig) (Gemma3ForCausalLM model)
- **gemma3n** -- [Gemma3nConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nConfig) (Gemma3nForConditionalGeneration model)
- **gemma3n_audio** -- [Gemma3nAudioConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nAudioConfig) (Gemma3nAudioEncoder model)
- **gemma3n_text** -- [Gemma3nTextConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nTextConfig) (Gemma3nForCausalLM model)
- **gemma3n_vision** -- [Gemma3nVisionConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nVisionConfig) (TimmWrapperModel model)
- **gemma4** -- `Gemma4Config` (Gemma4ForConditionalGeneration model)
- **gemma4_audio** -- `Gemma4AudioConfig` (Gemma4AudioModel model)
- **gemma4_text** -- `Gemma4TextConfig` (Gemma4ForCausalLM model)
- **gemma4_vision** -- `Gemma4VisionConfig` (Gemma4VisionModel model)
- **git** -- `GitConfig` (GIT model)
- **glm** -- `GlmConfig` (GLM model)
- **glm4** -- `Glm4Config` (GLM4 model)
- **glm46v** -- `Glm46VConfig` (Glm46V model)
- **glm4_moe** -- `Glm4MoeConfig` (Glm4MoE model)
- **glm4_moe_lite** -- `Glm4MoeLiteConfig` (Glm4MoELite model)
- **glm4v** -- `Glm4vConfig` (GLM4V model)
- **glm4v_moe** -- `Glm4vMoeConfig` (GLM4VMOE model)
- **glm4v_moe_text** -- `Glm4vMoeTextConfig` (GLM4VMOE model)
- **glm4v_moe_vision** -- `Glm4vMoeVisionConfig` (Glm4vMoeVisionModel model)
- **glm4v_text** -- `Glm4vTextConfig` (GLM4V model)
- **glm4v_vision** -- `Glm4vVisionConfig` (Glm4vVisionModel model)
- **glm_image** -- `GlmImageConfig` (GlmImage model)
- **glm_image_text** -- `GlmImageTextConfig` (GlmImageText model)
- **glm_image_vision** -- `GlmImageVisionConfig` (GlmImageVisionModel model)
- **glm_image_vqmodel** -- `GlmImageVQVAEConfig` (GlmImageVQVAE model)
- **glm_moe_dsa** -- `GlmMoeDsaConfig` (GlmMoeDsa model)
- **glm_ocr** -- `GlmOcrConfig` (Glmocr model)
- **glm_ocr_text** -- `GlmOcrTextConfig` (GlmOcrText model)
- **glm_ocr_vision** -- `GlmOcrVisionConfig` (GlmOcrVisionModel model)
- **glmasr** -- `GlmAsrConfig` (GLM-ASR model)
- **glmasr_encoder** -- `GlmAsrEncoderConfig` (GLM-ASR Encoder model)
- **glpn** -- `GLPNConfig` (GLPN model)
- **got_ocr2** -- `GotOcr2Config` (GOT-OCR2 model)
- **gpt-sw3** -- [GPT2Config](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Config) (GPT-Sw3 model)
- **gpt2** -- [GPT2Config](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Config) (OpenAI GPT-2 model)
- **gpt_bigcode** -- `GPTBigCodeConfig` (GPTBigCode model)
- **gpt_neo** -- `GPTNeoConfig` (GPT Neo model)
- **gpt_neox** -- `GPTNeoXConfig` (GPT NeoX model)
- **gpt_neox_japanese** -- [GPTNeoXJapaneseConfig](/docs/transformers/v5.5.1/ko/model_doc/gpt_neox_japanese#transformers.GPTNeoXJapaneseConfig) (GPT NeoX Japanese model)
- **gpt_oss** -- `GptOssConfig` (GptOss model)
- **gptj** -- `GPTJConfig` (GPT-J model)
- **granite** -- `GraniteConfig` (Granite model)
- **granite_speech** -- `GraniteSpeechConfig` (GraniteSpeech model)
- **granitemoe** -- `GraniteMoeConfig` (GraniteMoeMoe model)
- **granitemoehybrid** -- `GraniteMoeHybridConfig` (GraniteMoeHybrid model)
- **granitemoeshared** -- `GraniteMoeSharedConfig` (GraniteMoeSharedMoe model)
- **granitevision** -- `LlavaNextConfig` (LLaVA-NeXT model)
- **grounding-dino** -- [GroundingDinoConfig](/docs/transformers/v5.5.1/ko/model_doc/grounding-dino#transformers.GroundingDinoConfig) (Grounding DINO model)
- **groupvit** -- `GroupViTConfig` (GroupViT model)
- **helium** -- `HeliumConfig` (Helium model)
- **hgnet_v2** -- `HGNetV2Config` (HGNet-V2 model)
- **hiera** -- `HieraConfig` (Hiera model)
- **higgs_audio_v2** -- `HiggsAudioV2Config` (HiggsAudioV2 model)
- **higgs_audio_v2_tokenizer** -- `HiggsAudioV2TokenizerConfig` (HiggsAudioV2Tokenizer model)
- **hubert** -- `HubertConfig` (Hubert model)
- **hunyuan_v1_dense** -- `HunYuanDenseV1Config` (HunYuanDenseV1 model)
- **hunyuan_v1_moe** -- `HunYuanMoEV1Config` (HunYuanMoeV1 model)
- **ibert** -- `IBertConfig` (I-BERT model)
- **idefics** -- `IdeficsConfig` (IDEFICS model)
- **idefics2** -- `Idefics2Config` (Idefics2 model)
- **idefics3** -- `Idefics3Config` (Idefics3 model)
- **idefics3_vision** -- `Idefics3VisionConfig` (Idefics3VisionTransformer model)
- **ijepa** -- `IJepaConfig` (I-JEPA model)
- **imagegpt** -- `ImageGPTConfig` (ImageGPT model)
- **informer** -- [InformerConfig](/docs/transformers/v5.5.1/ko/model_doc/informer#transformers.InformerConfig) (Informer model)
- **instructblip** -- `InstructBlipConfig` (InstructBLIP model)
- **instructblipvideo** -- `InstructBlipVideoConfig` (InstructBlipVideo model)
- **internvl** -- `InternVLConfig` (InternVL model)
- **internvl_vision** -- `InternVLVisionConfig` (InternVLVision model)
- **jais2** -- `Jais2Config` (Jais2 model)
- **jamba** -- [JambaConfig](/docs/transformers/v5.5.1/ko/model_doc/jamba#transformers.JambaConfig) (Jamba model)
- **janus** -- `JanusConfig` (Janus model)
- **jetmoe** -- `JetMoeConfig` (JetMoe model)
- **jina_embeddings_v3** -- `JinaEmbeddingsV3Config` (JinaEmbeddingsV3 model)
- **kosmos-2** -- `Kosmos2Config` (KOSMOS-2 model)
- **kosmos-2.5** -- `Kosmos2_5Config` (KOSMOS-2.5 model)
- **kyutai_speech_to_text** -- `KyutaiSpeechToTextConfig` (KyutaiSpeechToText model)
- **lasr_ctc** -- `LasrCTCConfig` (Lasr model)
- **lasr_encoder** -- `LasrEncoderConfig` (LasrEncoder model)
- **layoutlm** -- `LayoutLMConfig` (LayoutLM model)
- **layoutlmv2** -- `LayoutLMv2Config` (LayoutLMv2 model)
- **layoutlmv3** -- `LayoutLMv3Config` (LayoutLMv3 model)
- **layoutxlm** -- `LayoutXLMConfig` (LayoutXLM model)
- **led** -- `LEDConfig` (LED model)
- **levit** -- `LevitConfig` (LeViT model)
- **lfm2** -- [Lfm2Config](/docs/transformers/v5.5.1/ko/model_doc/lfm2#transformers.Lfm2Config) (Lfm2 model)
- **lfm2_moe** -- `Lfm2MoeConfig` (Lfm2Moe model)
- **lfm2_vl** -- `Lfm2VlConfig` (Lfm2Vl model)
- **lightglue** -- `LightGlueConfig` (LightGlue model)
- **lighton_ocr** -- `LightOnOcrConfig` (LightOnOcr model)
- **lilt** -- `LiltConfig` (LiLT model)
- **llama** -- [LlamaConfig](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaConfig) (LLaMA model)
- **llama4** -- [Llama4Config](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4Config) (Llama4 model)
- **llama4_text** -- [Llama4TextConfig](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4TextConfig) (Llama4ForCausalLM model)
- **llava** -- `LlavaConfig` (LLaVa model)
- **llava_next** -- `LlavaNextConfig` (LLaVA-NeXT model)
- **llava_next_video** -- `LlavaNextVideoConfig` (LLaVa-NeXT-Video model)
- **llava_onevision** -- `LlavaOnevisionConfig` (LLaVA-Onevision model)
- **longcat_flash** -- `LongcatFlashConfig` (LongCatFlash model)
- **longformer** -- `LongformerConfig` (Longformer model)
- **longt5** -- `LongT5Config` (LongT5 model)
- **luke** -- `LukeConfig` (LUKE model)
- **lw_detr** -- `LwDetrConfig` (LwDetr model)
- **lw_detr_vit** -- `LwDetrViTConfig` (LwDetrVit model)
- **lxmert** -- `LxmertConfig` (LXMERT model)
- **m2m_100** -- `M2M100Config` (M2M100 model)
- **mamba** -- [MambaConfig](/docs/transformers/v5.5.1/ko/model_doc/mamba#transformers.MambaConfig) (Mamba model)
- **mamba2** -- [Mamba2Config](/docs/transformers/v5.5.1/ko/model_doc/mamba2#transformers.Mamba2Config) (mamba2 model)
- **marian** -- [MarianConfig](/docs/transformers/v5.5.1/ko/model_doc/marian#transformers.MarianConfig) (Marian model)
- **markuplm** -- `MarkupLMConfig` (MarkupLM model)
- **mask2former** -- `Mask2FormerConfig` (Mask2Former model)
- **maskformer** -- `MaskFormerConfig` (MaskFormer model)
- **maskformer-swin** -- `MaskFormerSwinConfig` (MaskFormerSwin model)
- **mbart** -- `MBartConfig` (mBART model)
- **megatron-bert** -- `MegatronBertConfig` (Megatron-BERT model)
- **metaclip_2** -- `MetaClip2Config` (MetaCLIP 2 model)
- **mgp-str** -- `MgpstrConfig` (MGP-STR model)
- **mimi** -- `MimiConfig` (Mimi model)
- **minimax** -- `MiniMaxConfig` (MiniMax model)
- **minimax_m2** -- `MiniMaxM2Config` (MiniMax-M2 model)
- **ministral** -- `MinistralConfig` (Ministral model)
- **ministral3** -- `Ministral3Config` (Ministral3 model)
- **mistral** -- [MistralConfig](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralConfig) (Mistral model)
- **mistral3** -- `Mistral3Config` (Mistral3 model)
- **mistral4** -- `Mistral4Config` (Mistral4 model)
- **mixtral** -- `MixtralConfig` (Mixtral model)
- **mlcd** -- `MLCDVisionConfig` (MLCD model)
- **mlcd_vision_model** -- `MLCDVisionConfig` (MLCD model)
- **mllama** -- `MllamaConfig` (Mllama model)
- **mm-grounding-dino** -- `MMGroundingDinoConfig` (MM Grounding DINO model)
- **mobilebert** -- `MobileBertConfig` (MobileBERT model)
- **mobilenet_v1** -- `MobileNetV1Config` (MobileNetV1 model)
- **mobilenet_v2** -- `MobileNetV2Config` (MobileNetV2 model)
- **mobilevit** -- `MobileViTConfig` (MobileViT model)
- **mobilevitv2** -- `MobileViTV2Config` (MobileViTV2 model)
- **modernbert** -- `ModernBertConfig` (ModernBERT model)
- **modernbert-decoder** -- `ModernBertDecoderConfig` (ModernBertDecoder model)
- **modernvbert** -- `ModernVBertConfig` (ModernVBert model)
- **moonshine** -- `MoonshineConfig` (Moonshine model)
- **moonshine_streaming** -- `MoonshineStreamingConfig` (MoonshineStreaming model)
- **moonshine_streaming_encoder** -- `MoonshineStreamingEncoderConfig` (MoonshineStreamingEncoder model)
- **moshi** -- `MoshiConfig` (Moshi model)
- **mpnet** -- `MPNetConfig` (MPNet model)
- **mpt** -- `MptConfig` (MPT model)
- **mra** -- `MraConfig` (MRA model)
- **mt5** -- `MT5Config` (MT5 model)
- **musicflamingo** -- `MusicFlamingoConfig` (MusicFlamingo model)
- **musicflamingo_encoder** -- `AudioFlamingo3EncoderConfig` (AudioFlamingo3Encoder model)
- **musicgen** -- `MusicgenConfig` (MusicGen model)
- **musicgen_melody** -- `MusicgenMelodyConfig` (MusicGen Melody model)
- **mvp** -- `MvpConfig` (MVP model)
- **nanochat** -- `NanoChatConfig` (NanoChat model)
- **nemotron** -- `NemotronConfig` (Nemotron model)
- **nemotron_h** -- `NemotronHConfig` (NemotronH model)
- **nllb-moe** -- `NllbMoeConfig` (NLLB-MOE model)
- **nomic_bert** -- `NomicBertConfig` (NomicBERT model)
- **nougat** -- `VisionEncoderDecoderConfig` (Nougat model)
- **nystromformer** -- `NystromformerConfig` (Nyströmformer model)
- **olmo** -- `OlmoConfig` (OLMo model)
- **olmo2** -- `Olmo2Config` (OLMo2 model)
- **olmo3** -- `Olmo3Config` (Olmo3 model)
- **olmo_hybrid** -- `OlmoHybridConfig` (OlmoHybrid model)
- **olmoe** -- `OlmoeConfig` (OLMoE model)
- **omdet-turbo** -- `OmDetTurboConfig` (OmDet-Turbo model)
- **oneformer** -- `OneFormerConfig` (OneFormer model)
- **openai-gpt** -- [OpenAIGPTConfig](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTConfig) (OpenAI GPT model)
- **opt** -- `OPTConfig` (OPT model)
- **ovis2** -- `Ovis2Config` (Ovis2 model)
- **owlv2** -- `Owlv2Config` (OWLv2 model)
- **owlvit** -- `OwlViTConfig` (OWL-ViT model)
- **paddleocr_vl** -- `PaddleOCRVLConfig` (PaddleOCRVL model)
- **paligemma** -- [PaliGemmaConfig](/docs/transformers/v5.5.1/ko/model_doc/paligemma#transformers.PaliGemmaConfig) (PaliGemma model)
- **parakeet_ctc** -- `ParakeetCTCConfig` (Parakeet model)
- **parakeet_encoder** -- `ParakeetEncoderConfig` (ParakeetEncoder model)
- **patchtsmixer** -- [PatchTSMixerConfig](/docs/transformers/v5.5.1/ko/model_doc/patchtsmixer#transformers.PatchTSMixerConfig) (PatchTSMixer model)
- **patchtst** -- [PatchTSTConfig](/docs/transformers/v5.5.1/ko/model_doc/patchtst#transformers.PatchTSTConfig) (PatchTST model)
- **pe_audio** -- `PeAudioConfig` (PeAudio model)
- **pe_audio_encoder** -- `PeAudioEncoderConfig` (PeAudioEncoder model)
- **pe_audio_video** -- `PeAudioVideoConfig` (PeAudioVideo model)
- **pe_audio_video_encoder** -- `PeAudioVideoEncoderConfig` (PeAudioVideoEncoder model)
- **pe_video** -- `PeVideoConfig` (PeVideo model)
- **pe_video_encoder** -- `PeVideoEncoderConfig` (PeVideoEncoder model)
- **pegasus** -- `PegasusConfig` (Pegasus model)
- **pegasus_x** -- `PegasusXConfig` (PEGASUS-X model)
- **perceiver** -- `PerceiverConfig` (Perceiver model)
- **perception_lm** -- `PerceptionLMConfig` (PerceptionLM model)
- **persimmon** -- `PersimmonConfig` (Persimmon model)
- **phi** -- `PhiConfig` (Phi model)
- **phi3** -- `Phi3Config` (Phi3 model)
- **phi4_multimodal** -- `Phi4MultimodalConfig` (Phi4Multimodal model)
- **phimoe** -- `PhimoeConfig` (Phimoe model)
- **pi0** -- `PI0Config` (PI0 model)
- **pix2struct** -- `Pix2StructConfig` (Pix2Struct model)
- **pixio** -- `PixioConfig` (Pixio model)
- **pixtral** -- `PixtralVisionConfig` (Pixtral model)
- **plbart** -- `PLBartConfig` (PLBart model)
- **poolformer** -- `PoolFormerConfig` (PoolFormer model)
- **pop2piano** -- `Pop2PianoConfig` (Pop2Piano model)
- **pp_chart2table** -- `PPChart2TableConfig` (PPChart2Table model)
- **pp_doclayout_v2** -- `PPDocLayoutV2Config` (PPDocLayoutV2 model)
- **pp_doclayout_v3** -- `PPDocLayoutV3Config` (PPDocLayoutV3 model)
- **pp_lcnet** -- `PPLCNetConfig` (PPLCNet model)
- **pp_lcnet_v3** -- `PPLCNetV3Config` (PPLCNetV3 model)
- **pp_ocrv5_mobile_det** -- `PPOCRV5MobileDetConfig` (PPOCRV5MobileDet model)
- **pp_ocrv5_mobile_rec** -- `PPOCRV5MobileRecConfig` (PPOCRV5MobileRec model)
- **pp_ocrv5_server_det** -- `PPOCRV5ServerDetConfig` (PPOCRV5ServerDet model)
- **pp_ocrv5_server_rec** -- `PPOCRV5ServerRecConfig` (PPOCRV5ServerRec model)
- **prompt_depth_anything** -- `PromptDepthAnythingConfig` (PromptDepthAnything model)
- **prophetnet** -- `ProphetNetConfig` (ProphetNet model)
- **pvt** -- `PvtConfig` (PVT model)
- **pvt_v2** -- `PvtV2Config` (PVTv2 model)
- **qwen2** -- `Qwen2Config` (Qwen2 model)
- **qwen2_5_omni** -- `Qwen2_5OmniConfig` (Qwen2_5Omni model)
- **qwen2_5_vl** -- `Qwen2_5_VLConfig` (Qwen2_5_VL model)
- **qwen2_5_vl_text** -- `Qwen2_5_VLTextConfig` (Qwen2_5_VL model)
- **qwen2_audio** -- `Qwen2AudioConfig` (Qwen2Audio model)
- **qwen2_audio_encoder** -- `Qwen2AudioEncoderConfig` (Qwen2AudioEncoder model)
- **qwen2_moe** -- `Qwen2MoeConfig` (Qwen2MoE model)
- **qwen2_vl** -- [Qwen2VLConfig](/docs/transformers/v5.5.1/ko/model_doc/qwen2_vl#transformers.Qwen2VLConfig) (Qwen2VL model)
- **qwen2_vl_text** -- `Qwen2VLTextConfig` (Qwen2VL model)
- **qwen3** -- `Qwen3Config` (Qwen3 model)
- **qwen3_5** -- `Qwen3_5Config` (Qwen3_5 model)
- **qwen3_5_moe** -- `Qwen3_5MoeConfig` (Qwen3_5Moe model)
- **qwen3_5_moe_text** -- `Qwen3_5MoeTextConfig` (Qwen3_5MoeText model)
- **qwen3_5_text** -- `Qwen3_5TextConfig` (Qwen3_5Text model)
- **qwen3_moe** -- `Qwen3MoeConfig` (Qwen3MoE model)
- **qwen3_next** -- `Qwen3NextConfig` (Qwen3Next model)
- **qwen3_omni_moe** -- `Qwen3OmniMoeConfig` (Qwen3OmniMoE model)
- **qwen3_vl** -- `Qwen3VLConfig` (Qwen3VL model)
- **qwen3_vl_moe** -- `Qwen3VLMoeConfig` (Qwen3VLMoe model)
- **qwen3_vl_moe_text** -- `Qwen3VLMoeTextConfig` (Qwen3VLMoe model)
- **qwen3_vl_text** -- `Qwen3VLTextConfig` (Qwen3VL model)
- **rag** -- [RagConfig](/docs/transformers/v5.5.1/ko/model_doc/rag#transformers.RagConfig) (RAG model)
- **recurrent_gemma** -- `RecurrentGemmaConfig` (RecurrentGemma model)
- **reformer** -- `ReformerConfig` (Reformer model)
- **regnet** -- `RegNetConfig` (RegNet model)
- **rembert** -- `RemBertConfig` (RemBERT model)
- **resnet** -- `ResNetConfig` (ResNet model)
- **roberta** -- [RobertaConfig](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaConfig) (RoBERTa model)
- **roberta-prelayernorm** -- `RobertaPreLayerNormConfig` (RoBERTa-PreLayerNorm model)
- **roc_bert** -- `RoCBertConfig` (RoCBert model)
- **roformer** -- `RoFormerConfig` (RoFormer model)
- **rt_detr** -- `RTDetrConfig` (RT-DETR model)
- **rt_detr_resnet** -- `RTDetrResNetConfig` (RT-DETR-ResNet model)
- **rt_detr_v2** -- `RTDetrV2Config` (RT-DETRv2 model)
- **rwkv** -- `RwkvConfig` (RWKV model)
- **sam** -- `SamConfig` (SAM model)
- **sam2** -- `Sam2Config` (SAM2 model)
- **sam2_hiera_det_model** -- `Sam2HieraDetConfig` (Sam2HieraDetModel model)
- **sam2_video** -- `Sam2VideoConfig` (Sam2VideoModel model)
- **sam2_vision_model** -- `Sam2VisionConfig` (Sam2VisionModel model)
- **sam3** -- `Sam3Config` (SAM3 model)
- **sam3_tracker** -- `Sam3TrackerConfig` (Sam3Tracker model)
- **sam3_tracker_video** -- `Sam3TrackerVideoConfig` (Sam3TrackerVideo model)
- **sam3_video** -- `Sam3VideoConfig` (Sam3VideoModel model)
- **sam3_vision_model** -- `Sam3VisionConfig` (Sam3VisionModel model)
- **sam3_vit_model** -- `Sam3ViTConfig` (Sam3ViTModel model)
- **sam_hq** -- [SamHQConfig](/docs/transformers/v5.5.1/ko/model_doc/sam_hq#transformers.SamHQConfig) (SAM-HQ model)
- **sam_hq_vision_model** -- [SamHQVisionConfig](/docs/transformers/v5.5.1/ko/model_doc/sam_hq#transformers.SamHQVisionConfig) (SamHQVisionModel model)
- **sam_vision_model** -- `SamVisionConfig` (SamVisionModel model)
- **seamless_m4t** -- `SeamlessM4TConfig` (SeamlessM4T model)
- **seamless_m4t_v2** -- `SeamlessM4Tv2Config` (SeamlessM4Tv2 model)
- **seed_oss** -- `SeedOssConfig` (SeedOss model)
- **segformer** -- `SegformerConfig` (SegFormer model)
- **seggpt** -- `SegGptConfig` (SegGPT model)
- **sew** -- `SEWConfig` (SEW model)
- **sew-d** -- `SEWDConfig` (SEW-D model)
- **shieldgemma2** -- `ShieldGemma2Config` (Shieldgemma2 model)
- **siglip** -- [SiglipConfig](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipConfig) (SigLIP model)
- **siglip2** -- `Siglip2Config` (SigLIP2 model)
- **siglip2_vision_model** -- `Siglip2VisionConfig` (Siglip2VisionModel model)
- **siglip_vision_model** -- [SiglipVisionConfig](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipVisionConfig) (SiglipVisionModel model)
- **slanext** -- `SLANeXtConfig` (SLANeXt model)
- **smollm3** -- `SmolLM3Config` (SmolLM3 model)
- **smolvlm** -- [SmolVLMConfig](/docs/transformers/v5.5.1/ko/model_doc/smolvlm#transformers.SmolVLMConfig) (SmolVLM model)
- **smolvlm_vision** -- [SmolVLMVisionConfig](/docs/transformers/v5.5.1/ko/model_doc/smolvlm#transformers.SmolVLMVisionConfig) (SmolVLMVisionTransformer model)
- **solar_open** -- `SolarOpenConfig` (SolarOpen model)
- **speech-encoder-decoder** -- `SpeechEncoderDecoderConfig` (Speech Encoder decoder model)
- **speech_to_text** -- `Speech2TextConfig` (Speech2Text model)
- **speecht5** -- `SpeechT5Config` (SpeechT5 model)
- **splinter** -- `SplinterConfig` (Splinter model)
- **squeezebert** -- `SqueezeBertConfig` (SqueezeBERT model)
- **stablelm** -- `StableLmConfig` (StableLm model)
- **starcoder2** -- `Starcoder2Config` (Starcoder2 model)
- **superglue** -- `SuperGlueConfig` (SuperGlue model)
- **superpoint** -- `SuperPointConfig` (SuperPoint model)
- **swiftformer** -- `SwiftFormerConfig` (SwiftFormer model)
- **swin** -- [SwinConfig](/docs/transformers/v5.5.1/ko/model_doc/swin#transformers.SwinConfig) (Swin Transformer model)
- **swin2sr** -- [Swin2SRConfig](/docs/transformers/v5.5.1/ko/model_doc/swin2sr#transformers.Swin2SRConfig) (Swin2SR model)
- **swinv2** -- [Swinv2Config](/docs/transformers/v5.5.1/ko/model_doc/swinv2#transformers.Swinv2Config) (Swin Transformer V2 model)
- **switch_transformers** -- `SwitchTransformersConfig` (SwitchTransformers model)
- **t5** -- `T5Config` (T5 model)
- **t5gemma** -- `T5GemmaConfig` (T5Gemma model)
- **t5gemma2** -- `T5Gemma2Config` (T5Gemma2 model)
- **t5gemma2_encoder** -- `T5Gemma2EncoderConfig` (T5Gemma2Encoder model)
- **table-transformer** -- `TableTransformerConfig` (Table Transformer model)
- **tapas** -- `TapasConfig` (TAPAS model)
- **textnet** -- `TextNetConfig` (TextNet model)
- **time_series_transformer** -- [TimeSeriesTransformerConfig](/docs/transformers/v5.5.1/ko/model_doc/time_series_transformer#transformers.TimeSeriesTransformerConfig) (Time Series Transformer model)
- **timesfm** -- `TimesFmConfig` (TimesFm model)
- **timesfm2_5** -- `TimesFm2_5Config` (TimesFm2p5 model)
- **timesformer** -- [TimesformerConfig](/docs/transformers/v5.5.1/ko/model_doc/timesformer#transformers.TimesformerConfig) (TimeSformer model)
- **timm_backbone** -- `TimmBackboneConfig` (TimmBackbone model)
- **timm_wrapper** -- `TimmWrapperConfig` (TimmWrapperModel model)
- **trocr** -- `TrOCRConfig` (TrOCR model)
- **tvp** -- [TvpConfig](/docs/transformers/v5.5.1/ko/model_doc/tvp#transformers.TvpConfig) (TVP model)
- **udop** -- `UdopConfig` (UDOP model)
- **umt5** -- `UMT5Config` (UMT5 model)
- **unispeech** -- `UniSpeechConfig` (UniSpeech model)
- **unispeech-sat** -- `UniSpeechSatConfig` (UniSpeechSat model)
- **univnet** -- `UnivNetConfig` (UnivNet model)
- **upernet** -- `UperNetConfig` (UPerNet model)
- **uvdoc** -- `UVDocConfig` (UVDoc model)
- **uvdoc_backbone** -- `UVDocBackboneConfig` (UVDocBackbone model)
- **vaultgemma** -- `VaultGemmaConfig` (VaultGemma model)
- **vibevoice_acoustic_tokenizer** -- `VibeVoiceAcousticTokenizerConfig` (VibeVoiceAcousticTokenizer model)
- **vibevoice_acoustic_tokenizer_decoder** -- `VibeVoiceAcousticTokenizerDecoderConfig` (VibeVoiceAcousticTokenizerDecoderConfig model)
- **vibevoice_acoustic_tokenizer_encoder** -- `VibeVoiceAcousticTokenizerEncoderConfig` (VibeVoiceAcousticTokenizerEncoderConfig model)
- **vibevoice_asr** -- `VibeVoiceAsrConfig` (VibeVoiceAsr model)
- **video_llama_3** -- `VideoLlama3Config` (VideoLlama3 model)
- **video_llama_3_vision** -- `VideoLlama3VisionConfig` (VideoLlama3Vision model)
- **video_llava** -- `VideoLlavaConfig` (VideoLlava model)
- **videomae** -- `VideoMAEConfig` (VideoMAE model)
- **videomt** -- `VideomtConfig` (VidEoMT model)
- **vilt** -- `ViltConfig` (ViLT model)
- **vipllava** -- `VipLlavaConfig` (VipLlava model)
- **vision-encoder-decoder** -- `VisionEncoderDecoderConfig` (Vision Encoder decoder model)
- **vision-text-dual-encoder** -- `VisionTextDualEncoderConfig` (VisionTextDualEncoder model)
- **visual_bert** -- `VisualBertConfig` (VisualBERT model)
- **vit** -- [ViTConfig](/docs/transformers/v5.5.1/ko/model_doc/vit#transformers.ViTConfig) (ViT model)
- **vit_mae** -- `ViTMAEConfig` (ViTMAE model)
- **vit_msn** -- `ViTMSNConfig` (ViTMSN model)
- **vitdet** -- `VitDetConfig` (VitDet model)
- **vitmatte** -- `VitMatteConfig` (ViTMatte model)
- **vitpose** -- `VitPoseConfig` (ViTPose model)
- **vitpose_backbone** -- `VitPoseBackboneConfig` (ViTPoseBackbone model)
- **vits** -- `VitsConfig` (VITS model)
- **vivit** -- [VivitConfig](/docs/transformers/v5.5.1/ko/model_doc/vivit#transformers.VivitConfig) (ViViT model)
- **vjepa2** -- `VJEPA2Config` (VJEPA2Model model)
- **voxtral** -- `VoxtralConfig` (Voxtral model)
- **voxtral_encoder** -- `VoxtralEncoderConfig` (Voxtral Encoder model)
- **voxtral_realtime** -- `VoxtralRealtimeConfig` (VoxtralRealtime model)
- **voxtral_realtime_encoder** -- `VoxtralRealtimeEncoderConfig` (VoxtralRealtime Encoder model)
- **voxtral_realtime_text** -- `VoxtralRealtimeTextConfig` (VoxtralRealtime Text Model model)
- **wav2vec2** -- `Wav2Vec2Config` (Wav2Vec2 model)
- **wav2vec2-bert** -- `Wav2Vec2BertConfig` (Wav2Vec2-BERT model)
- **wav2vec2-conformer** -- `Wav2Vec2ConformerConfig` (Wav2Vec2-Conformer model)
- **wavlm** -- `WavLMConfig` (WavLM model)
- **whisper** -- [WhisperConfig](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperConfig) (Whisper model)
- **xclip** -- [XCLIPConfig](/docs/transformers/v5.5.1/ko/model_doc/xclip#transformers.XCLIPConfig) (X-CLIP model)
- **xcodec** -- `XcodecConfig` (X-CODEC model)
- **xglm** -- `XGLMConfig` (XGLM model)
- **xlm** -- `XLMConfig` (XLM model)
- **xlm-roberta** -- `XLMRobertaConfig` (XLM-RoBERTa model)
- **xlm-roberta-xl** -- `XLMRobertaXLConfig` (XLM-RoBERTa-XL model)
- **xlnet** -- `XLNetConfig` (XLNet model)
- **xlstm** -- `xLSTMConfig` (xLSTM model)
- **xmod** -- `XmodConfig` (X-MOD model)
- **yolos** -- `YolosConfig` (YOLOS model)
- **yoso** -- `YosoConfig` (YOSO model)
- **youtu** -- `YoutuConfig` (Youtu model)
- **zamba** -- `ZambaConfig` (Zamba model)
- **zamba2** -- `Zamba2Config` (Zamba2 model)
- **zoedepth** -- `ZoeDepthConfig` (ZoeDepth model)

Examples:

```python
>>> from transformers import AutoConfig

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-uncased")

>>> # Download configuration from huggingface.co (user-uploaded) and cache.
>>> config = AutoConfig.from_pretrained("dbmdz/bert-base-german-cased")

>>> # If configuration file is in a directory (e.g., was saved using *save_pretrained('./test/saved_model/')*).
>>> config = AutoConfig.from_pretrained("./test/bert_saved_model/")

>>> # Load a specific configuration file.
>>> config = AutoConfig.from_pretrained("./test/bert_saved_model/my_configuration.json")

>>> # Change some config attributes when loading a pretrained config.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-uncased", output_attentions=True, foo=False)
>>> config.output_attentions
True

>>> config, unused_kwargs = AutoConfig.from_pretrained(
...     "google-bert/bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True
... )
>>> config.output_attentions
True

>>> unused_kwargs
{'foo': False}
```

#### register[[transformers.AutoConfig.register]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/configuration_auto.py#L1533)

Register a new configuration for this class.

**Parameters:**

model_type (`str`) : The model type like "bert" or "gpt".

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The config to register.

## AutoTokenizer[[transformers.AutoTokenizer]]

#### transformers.AutoTokenizer[[transformers.AutoTokenizer]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/tokenization_auto.py#L557)

This is a generic tokenizer class that will be instantiated as one of the tokenizer classes of the library when
created with the [AutoTokenizer.from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoTokenizer.from_pretrained) class method.

This class cannot be instantiated directly using `__init__()` (throws an error).
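
Before the full parameter reference, a minimal usage sketch (the checkpoint name is only an example):

```python
from transformers import AutoTokenizer

# The tokenizer class is resolved from the checkpoint's tokenizer/config files.
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
encoded = tokenizer("Hello world!")
print(type(tokenizer).__name__)  # a concrete BERT tokenizer class, not AutoTokenizer
print(encoded["input_ids"])      # token ids, including special tokens
```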

#### from_pretrained[[transformers.AutoTokenizer.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/tokenization_auto.py#L571)

- **pretrained_model_name_or_path** (`str` or `os.PathLike`) --
  Can be either:

  - A string, the *model id* of a predefined tokenizer hosted inside a model repo on huggingface.co.
  - A path to a *directory* containing vocabulary files required by the tokenizer, for instance saved
    using the [save_pretrained()](/docs/transformers/v5.5.1/ko/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.save_pretrained) method, e.g., `./my_model_directory/`.
  - a path to a single saved vocabulary file if and only if the tokenizer only requires a
    single vocabulary file (like Bert or XLNet), e.g.: `./my_model_directory/vocab.txt`. (Not
    applicable to all derived classes)
- **inputs** (additional positional arguments, *optional*) --
  Will be passed along to the Tokenizer `__init__()` method.
- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) --
  The configuration object used to determine the tokenizer class to instantiate.
- **cache_dir** (`str` or `os.PathLike`, *optional*) --
  Path to a directory in which a downloaded pretrained model configuration should be cached if the
  standard cache should not be used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **proxies** (`dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
  git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
  identifier allowed by git.
- **subfolder** (`str`, *optional*) --
  In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for
  facebook/rag-token-base), specify it here.
- **tokenizer_type** (`str`, *optional*) --
  Tokenizer type to be loaded.
- **backend** (`str`, *optional*, defaults to `"tokenizers"`) --
  Backend to use for tokenization. Valid options are:
  - `"tokenizers"`: Use the HuggingFace tokenizers library backend (default)
  - `"sentencepiece"`: Use the SentencePiece backend
- **trust_remote_code** (`bool`, *optional*, defaults to `False`) --
  Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
  should only be set to `True` for repositories you trust and in which you have read the code, as it will
  execute code present on the Hub on your local machine.
- **kwargs** (additional keyword arguments, *optional*) --
  Will be passed to the Tokenizer `__init__()` method. Can be used to set special tokens like
  `bos_token`, `eos_token`, `unk_token`, `sep_token`, `pad_token`, `cls_token`, `mask_token`,
  `additional_special_tokens`. See the parameters of `__init__()` for more details.

Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary.

The tokenizer class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **aimv2** -- [CLIPTokenizer](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTokenizer) (AIMv2 model)
- **albert** -- [AlbertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertTokenizer) (ALBERT model)
- **align** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (ALIGN model)
- **audioflamingo3** -- `Qwen2Tokenizer` (AudioFlamingo3 model)
- **aya_vision** -- `CohereTokenizer` (AyaVision model)
- **bark** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (Bark model)
- **bart** -- [RobertaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (BART model)
- **barthez** -- [BarthezTokenizer](/docs/transformers/v5.5.1/ko/model_doc/barthez#transformers.BarthezTokenizer) (BARThez model)
- **bartpho** -- [BartphoTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bartpho#transformers.BartphoTokenizer) (BARTpho model)
- **bert** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (BERT model)
- **bert-generation** -- `BertGenerationTokenizer` (Bert Generation model)
- **bert-japanese** -- [BertJapaneseTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bert-japanese#transformers.BertJapaneseTokenizer) (BertJapanese model)
- **bertweet** -- [BertweetTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bertweet#transformers.BertweetTokenizer) (BERTweet model)
- **big_bird** -- [BigBirdTokenizer](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdTokenizer) (BigBird model)
- **bigbird_pegasus** -- `PegasusTokenizer` (BigBird-Pegasus model)
- **biogpt** -- [BioGptTokenizer](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptTokenizer) (BioGpt model)
- **blenderbot** -- `BlenderbotTokenizer` (Blenderbot model)
- **blenderbot-small** -- `BlenderbotSmallTokenizer` (BlenderbotSmall model)
- **blip** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (BLIP model)
- **blip-2** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (BLIP-2 model)
- **bridgetower** -- [RobertaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (BridgeTower model)
- **bros** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (BROS model)
- **byt5** -- `ByT5Tokenizer` (ByT5 model)
- **camembert** -- `CamembertTokenizer` (CamemBERT model)
- **canine** -- `CanineTokenizer` (CANINE model)
- **chameleon** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (Chameleon model)
- **chinese_clip** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (Chinese-CLIP model)
- **clap** -- [RobertaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (CLAP model)
- **clip** -- [CLIPTokenizer](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTokenizer) (CLIP model)
- **clipseg** -- [CLIPTokenizer](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTokenizer) (CLIPSeg model)
- **clvp** -- `ClvpTokenizer` (CLVP model)
- **code_llama** -- [CodeLlamaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/code_llama#transformers.CodeLlamaTokenizer) (CodeLlama model)
- **codegen** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (CodeGen model)
- **cohere** -- `CohereTokenizer` (Cohere model)
- **cohere2** -- `CohereTokenizer` (Cohere2 model)
- **cohere_asr** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (CohereASR model)
- **colqwen2** -- `Qwen2Tokenizer` (ColQwen2 model)
- **convbert** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (ConvBERT model)
- **cpm** -- `CpmTokenizer` (CPM model)
- **cpmant** -- `CpmAntTokenizer` (CPM-Ant model)
- **ctrl** -- `CTRLTokenizer` (CTRL model)
- **data2vec-audio** -- `Wav2Vec2CTCTokenizer` (Data2VecAudio model)
- **data2vec-text** -- [RobertaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (Data2VecText model)
- **dbrx** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (DBRX model)
- **deberta** -- [DebertaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaTokenizer) (DeBERTa model)
- **deberta-v2** -- [DebertaV2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Tokenizer) (DeBERTa-v2 model)
- **deepseek_v2** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (DeepSeek-V2 model)
- **deepseek_v3** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (DeepSeek-V3 model)
- **deepseek_vl** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (DeepseekVL model)
- **deepseek_vl_hybrid** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (DeepseekVLHybrid model)
- **dia** -- `DiaTokenizer` (Dia model)
- **distilbert** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (DistilBERT model)
- **dpr** -- `DPRQuestionEncoderTokenizer` (DPR model)
- **electra** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (ELECTRA model)
- **emu3** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (Emu3 model)
- **ernie** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (ERNIE model)
- **esm** -- [EsmTokenizer](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmTokenizer) (ESM model)
- **falcon_mamba** -- `GPTNeoXTokenizer` (FalconMamba model)
- **fastspeech2_conformer** -- `None` (FastSpeech2Conformer model)
- **flaubert** -- `FlaubertTokenizer` (FlauBERT model)
- **flava** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (FLAVA model)
- **flex_olmo** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (FlexOlmo model)
- **florence2** -- [BartTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (Florence2 model)
- **fnet** -- `FNetTokenizer` (FNet model)
- **fsmt** -- `FSMTTokenizer` (FairSeq Machine-Translation model)
- **funnel** -- `FunnelTokenizer` (Funnel Transformer model)
- **fuyu** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (Fuyu model)
- **gemma** -- [GemmaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaTokenizer) (Gemma model)
- **gemma2** -- [GemmaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaTokenizer) (Gemma2 model)
- **gemma3** -- [GemmaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaTokenizer) (Gemma3ForConditionalGeneration model)
- **gemma3_text** -- [GemmaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaTokenizer) (Gemma3ForCausalLM model)
- **gemma3n** -- [GemmaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaTokenizer) (Gemma3nForConditionalGeneration model)
- **gemma3n_text** -- [GemmaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaTokenizer) (Gemma3nForCausalLM model)
- **git** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (GIT model)
- **glm** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (GLM model)
- **glm4** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (GLM4 model)
- **glm4_moe** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (Glm4MoE model)
- **glm4_moe_lite** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (Glm4MoELite model)
- **glm4v** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (GLM4V model)
- **glm4v_moe** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (GLM4VMOE model)
- **glm_image** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (GlmImage model)
- **glmasr** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (GLM-ASR model)
- **got_ocr2** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (GOT-OCR2 model)
- **gpt-sw3** -- `GPTSw3Tokenizer` (GPT-Sw3 model)
- **gpt2** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (OpenAI GPT-2 model)
- **gpt_bigcode** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (GPTBigCode model)
- **gpt_neo** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (GPT Neo model)
- **gpt_neox** -- `GPTNeoXTokenizer` (GPT NeoX model)
- **gpt_neox_japanese** -- [GPTNeoXJapaneseTokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt_neox_japanese#transformers.GPTNeoXJapaneseTokenizer) (GPT NeoX Japanese model)
- **gptj** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (GPT-J model)
- **granite** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (Granite model)
- **granitemoe** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (GraniteMoeMoe model)
- **granitemoehybrid** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (GraniteMoeHybrid model)
- **granitemoeshared** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (GraniteMoeSharedMoe model)
- **grounding-dino** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (Grounding DINO model)
- **groupvit** -- [CLIPTokenizer](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTokenizer) (GroupViT model)
- **herbert** -- `HerbertTokenizer` (HerBERT model)
- **hubert** -- `Wav2Vec2CTCTokenizer` (Hubert model)
- **ibert** -- [RobertaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (I-BERT model)
- **idefics** -- [LlamaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaTokenizer) (IDEFICS model)
- **idefics2** -- [LlamaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaTokenizer) (Idefics2 model)
- **instructblip** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (InstructBLIP model)
- **instructblipvideo** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (InstructBlipVideo model)
- **internvl** -- `Qwen2Tokenizer` (InternVL model)
- **jais2** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (Jais2 model)
- **jamba** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (Jamba model)
- **janus** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (Janus model)
- **jina_embeddings_v3** -- `XLMRobertaTokenizer` (JinaEmbeddingsV3 model)
- **kosmos-2** -- `XLMRobertaTokenizer` (KOSMOS-2 model)
- **lasr_ctc** -- `LasrTokenizer` (Lasr model)
- **lasr_encoder** -- `LasrTokenizer` (LasrEncoder model)
- **layoutlm** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (LayoutLM model)
- **layoutlmv2** -- `LayoutLMv2Tokenizer` (LayoutLMv2 model)
- **layoutlmv3** -- `LayoutLMv3Tokenizer` (LayoutLMv3 model)
- **layoutxlm** -- `LayoutXLMTokenizer` (LayoutXLM model)
- **led** -- [LEDTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (LED model)
- **lighton_ocr** -- `Qwen2TokenizerFast` (LightOnOcr model)
- **lilt** -- [RobertaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (LiLT model)
- **llava** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (LLaVa model)
- **llava_next** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (LLaVA-NeXT model)
- **longformer** -- [RobertaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (Longformer model)
- **luke** -- `LukeTokenizer` (LUKE model)
- **lxmert** -- [LxmertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (LXMERT model)
- **m2m_100** -- `M2M100Tokenizer` (M2M100 model)
- **mamba** -- `GPTNeoXTokenizer` (Mamba model)
- **mamba2** -- `GPTNeoXTokenizer` (mamba2 model)
- **marian** -- [MarianTokenizer](/docs/transformers/v5.5.1/ko/model_doc/marian#transformers.MarianTokenizer) (Marian model)
- **markuplm** -- `MarkupLMTokenizer` (MarkupLM model)
- **mbart** -- `MBartTokenizer` (mBART model)
- **mbart50** -- `MBart50Tokenizer` (mBART-50 model)
- **megatron-bert** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (Megatron-BERT model)
- **metaclip_2** -- `XLMRobertaTokenizer` (MetaCLIP 2 model)
- **mgp-str** -- `MgpstrTokenizer` (MGP-STR model)
- **minimax_m2** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (MiniMax-M2 model)
- **ministral** -- `MistralCommonBackend` (Ministral model)
- **ministral3** -- `MistralCommonBackend` (Ministral3 model)
- **mistral** -- `MistralCommonBackend` (Mistral model)
- **mistral3** -- `MistralCommonBackend` (Mistral3 model)
- **mixtral** -- `MistralCommonBackend` (Mixtral model)
- **mluke** -- `MLukeTokenizer` (mLUKE model)
- **mm-grounding-dino** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (MM Grounding DINO model)
- **mobilebert** -- [MobileBertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (MobileBERT model)
- **modernbert** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (ModernBERT model)
- **mpnet** -- `MPNetTokenizer` (MPNet model)
- **mpt** -- `GPTNeoXTokenizer` (MPT model)
- **mra** -- [RobertaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (MRA model)
- **mt5** -- `T5Tokenizer` (MT5 model)
- **musicgen** -- `T5Tokenizer` (MusicGen model)
- **musicgen_melody** -- `T5Tokenizer` (MusicGen Melody model)
- **mvp** -- [MvpTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (MVP model)
- **myt5** -- `MyT5Tokenizer` (myt5 model)
- **nemotron** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (Nemotron model)
- **nllb** -- `NllbTokenizer` (NLLB model)
- **nllb-moe** -- `NllbTokenizer` (NLLB-MOE model)
- **nomic_bert** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (NomicBERT model)
- **nougat** -- `NougatTokenizer` (Nougat model)
- **nystromformer** -- [AlbertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertTokenizer) (Nyströmformer model)
- **olmo** -- `GPTNeoXTokenizer` (OLMo model)
- **olmo2** -- `GPTNeoXTokenizer` (OLMo2 model)
- **olmo3** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (Olmo3 model)
- **olmo_hybrid** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (OlmoHybrid model)
- **olmoe** -- `GPTNeoXTokenizer` (OLMoE model)
- **omdet-turbo** -- [CLIPTokenizer](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTokenizer) (OmDet-Turbo model)
- **oneformer** -- [CLIPTokenizer](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTokenizer) (OneFormer model)
- **openai-gpt** -- [OpenAIGPTTokenizer](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTTokenizer) (OpenAI GPT model)
- **opt** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (OPT model)
- **ovis2** -- `Qwen2Tokenizer` (Ovis2 model)
- **owlv2** -- [CLIPTokenizer](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTokenizer) (OWLv2 model)
- **owlvit** -- [CLIPTokenizer](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTokenizer) (OWL-ViT model)
- **pegasus** -- `PegasusTokenizer` (Pegasus model)
- **pegasus_x** -- `PegasusTokenizer` (PEGASUS-X model)
- **perceiver** -- `PerceiverTokenizer` (Perceiver model)
- **phi** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (Phi model)
- **phi3** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (Phi3 model)
- **phimoe** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (Phimoe model)
- **phobert** -- `PhobertTokenizer` (PhoBERT model)
- **pix2struct** -- `T5Tokenizer` (Pix2Struct model)
- **pixtral** -- `MistralCommonBackend` (Pixtral model)
- **plbart** -- `PLBartTokenizer` (PLBart model)
- **prophetnet** -- `ProphetNetTokenizer` (ProphetNet model)
- **qwen2** -- `Qwen2Tokenizer` (Qwen2 model)
- **qwen2_5_omni** -- `Qwen2Tokenizer` (Qwen2_5Omni model)
- **qwen2_5_vl** -- `Qwen2Tokenizer` (Qwen2_5_VL model)
- **qwen2_audio** -- `Qwen2Tokenizer` (Qwen2Audio model)
- **qwen2_moe** -- `Qwen2Tokenizer` (Qwen2MoE model)
- **qwen2_vl** -- `Qwen2Tokenizer` (Qwen2VL model)
- **qwen3** -- `Qwen2Tokenizer` (Qwen3 model)
- **qwen3_5** -- `Qwen3_5Tokenizer` (Qwen3_5 model)
- **qwen3_5_moe** -- `Qwen3_5Tokenizer` (Qwen3_5Moe model)
- **qwen3_moe** -- `Qwen2Tokenizer` (Qwen3MoE model)
- **qwen3_next** -- `Qwen2Tokenizer` (Qwen3Next model)
- **qwen3_omni_moe** -- `Qwen2Tokenizer` (Qwen3OmniMoE model)
- **qwen3_vl** -- `Qwen2Tokenizer` (Qwen3VL model)
- **qwen3_vl_moe** -- `Qwen2Tokenizer` (Qwen3VLMoe model)
- **rag** -- [RagTokenizer](/docs/transformers/v5.5.1/ko/model_doc/rag#transformers.RagTokenizer) (RAG model)
- **recurrent_gemma** -- [GemmaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaTokenizer) (RecurrentGemma model)
- **reformer** -- `ReformerTokenizer` (Reformer model)
- **rembert** -- `RemBertTokenizer` (RemBERT model)
- **roberta** -- [RobertaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (RoBERTa model)
- **roberta-prelayernorm** -- [RobertaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.RobertaTokenizer) (RoBERTa-PreLayerNorm model)
- **roc_bert** -- `RoCBertTokenizer` (RoCBert model)
- **roformer** -- `RoFormerTokenizer` (RoFormer model)
- **rwkv** -- `GPTNeoXTokenizer` (RWKV model)
- **sam3** -- [CLIPTokenizer](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTokenizer) (SAM3 model)
- **sam3_video** -- [CLIPTokenizer](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTokenizer) (Sam3VideoModel model)
- **seamless_m4t** -- `SeamlessM4TTokenizer` (SeamlessM4T model)
- **seamless_m4t_v2** -- `SeamlessM4TTokenizer` (SeamlessM4Tv2 model)
- **shieldgemma2** -- [GemmaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaTokenizer) (Shieldgemma2 model)
- **siglip** -- [SiglipTokenizer](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipTokenizer) (SigLIP model)
- **siglip2** -- `Siglip2Tokenizer` (SigLIP2 model)
- **speech_to_text** -- `Speech2TextTokenizer` (Speech2Text model)
- **speecht5** -- `SpeechT5Tokenizer` (SpeechT5 model)
- **splinter** -- `SplinterTokenizer` (Splinter model)
- **squeezebert** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (SqueezeBERT model)
- **stablelm** -- `GPTNeoXTokenizer` (StableLm model)
- **starcoder2** -- [GPT2Tokenizer](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Tokenizer) (Starcoder2 model)
- **switch_transformers** -- `T5Tokenizer` (SwitchTransformers model)
- **t5** -- `T5Tokenizer` (T5 model)
- **t5gemma** -- [GemmaTokenizer](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaTokenizer) (T5Gemma model)
- **tapas** -- `TapasTokenizer` (TAPAS model)
- **trocr** -- `XLMRobertaTokenizer` (TrOCR model)
- **tvp** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (TVP model)
- **udop** -- `UdopTokenizer` (UDOP model)
- **umt5** -- `T5Tokenizer` (UMT5 model)
- **unispeech** -- `Wav2Vec2CTCTokenizer` (UniSpeech model)
- **unispeech-sat** -- `Wav2Vec2CTCTokenizer` (UniSpeechSat model)
- **vilt** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (ViLT model)
- **vipllava** -- [TokenizersBackend](/docs/transformers/v5.5.1/ko/main_classes/tokenizer#transformers.TokenizersBackend) (VipLlava model)
- **visual_bert** -- [BertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.BertTokenizer) (VisualBERT model)
- **vits** -- `VitsTokenizer` (VITS model)
- **voxtral** -- `MistralCommonBackend` (Voxtral model)
- **voxtral_realtime** -- `MistralCommonBackend` (VoxtralRealtime model)
- **wav2vec2** -- `Wav2Vec2CTCTokenizer` (Wav2Vec2 model)
- **wav2vec2-bert** -- `Wav2Vec2CTCTokenizer` (Wav2Vec2-BERT model)
- **wav2vec2-conformer** -- `Wav2Vec2CTCTokenizer` (Wav2Vec2-Conformer model)
- **wav2vec2_phoneme** -- `Wav2Vec2PhonemeCTCTokenizer` (Wav2Vec2Phoneme model)
- **whisper** -- [WhisperTokenizer](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperTokenizer) (Whisper model)
- **xclip** -- [CLIPTokenizer](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTokenizer) (X-CLIP model)
- **xglm** -- `XGLMTokenizer` (XGLM model)
- **xlm** -- `XLMTokenizer` (XLM model)
- **xlm-roberta** -- `XLMRobertaTokenizer` (XLM-RoBERTa model)
- **xlm-roberta-xl** -- `XLMRobertaTokenizer` (XLM-RoBERTa-XL model)
- **xlnet** -- `XLNetTokenizer` (XLNet model)
- **xlstm** -- `GPTNeoXTokenizer` (xLSTM model)
- **xmod** -- `XLMRobertaTokenizer` (X-MOD model)
- **yoso** -- [AlbertTokenizer](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertTokenizer) (YOSO model)

Examples:

```python
>>> from transformers import AutoTokenizer

>>> # Download vocabulary from huggingface.co and cache.
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

>>> # Download vocabulary from huggingface.co (user-uploaded) and cache.
>>> tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")

>>> # If vocabulary files are in a directory (e.g. tokenizer was saved using *save_pretrained('./test/saved_model/')*)
>>> # tokenizer = AutoTokenizer.from_pretrained("./test/bert_saved_model/")

>>> # Download vocabulary from huggingface.co and define model-specific arguments
>>> tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base", add_prefix_space=True)

>>> # Explicitly use the tokenizers backend
>>> tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer", backend="tokenizers")

>>> # Explicitly use the sentencepiece backend
>>> tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer", backend="sentencepiece")
```

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:

  - A string, the *model id* of a predefined tokenizer hosted inside a model repo on huggingface.co.
  - A path to a *directory* containing vocabulary files required by the tokenizer, for instance saved using the [save_pretrained()](/docs/transformers/v5.5.1/ko/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.save_pretrained) method, e.g., `./my_model_directory/`.
  - A path to a single saved vocabulary file, if and only if the tokenizer only requires a single vocabulary file (like BERT or XLNet), e.g., `./my_model_directory/vocab.txt`. (Not applicable to all derived classes.)

inputs (additional positional arguments, *optional*) : Will be passed along to the Tokenizer `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : The configuration object used to determine the tokenizer class to instantiate.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force (re-)downloading the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

subfolder (`str`, *optional*) : In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here.

tokenizer_type (`str`, *optional*) : Tokenizer type to be loaded.

backend (`str`, *optional*, defaults to `"tokenizers"`) : Backend to use for tokenization. Valid options are:

  - `"tokenizers"`: Use the HuggingFace tokenizers library backend (default)
  - `"sentencepiece"`: Use the SentencePiece backend

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (additional keyword arguments, *optional*) : Will be passed to the Tokenizer `__init__()` method. Can be used to set special tokens like `bos_token`, `eos_token`, `unk_token`, `sep_token`, `pad_token`, `cls_token`, `mask_token`, `additional_special_tokens`. See parameters in the `__init__()` for more details.
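
For instance, a special token can be overridden directly through these keyword arguments; a minimal sketch against a public checkpoint:

```python
>>> from transformers import AutoTokenizer

>>> # Keyword arguments are forwarded to the tokenizer's __init__, e.g. to override special tokens
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased", pad_token="<pad>")
>>> tokenizer.pad_token
'<pad>'
```
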
#### register[[transformers.AutoTokenizer.register]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/tokenization_auto.py#L838)

Register a new tokenizer in this mapping.

**Parameters:**

config_class ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The configuration corresponding to the model to register.

tokenizer_class : The tokenizer class to register (the preferred parameter in v5).

slow_tokenizer_class : (Deprecated) The slow tokenizer to register.

fast_tokenizer_class : (Deprecated) The fast tokenizer to register.
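
A minimal, hypothetical sketch of registering a custom tokenizer (the `CustomConfig` and `CustomTokenizer` names are illustrative, not library classes; the call uses the v5 `tokenizer_class` keyword described above):

```python
from transformers import AutoConfig, AutoTokenizer, PreTrainedConfig, PreTrainedTokenizer


class CustomConfig(PreTrainedConfig):
    model_type = "custom-model"  # must match the key passed to AutoConfig.register


class CustomTokenizer(PreTrainedTokenizer):
    # A real tokenizer would implement the vocabulary and tokenization methods.
    pass


AutoConfig.register("custom-model", CustomConfig)
AutoTokenizer.register(CustomConfig, tokenizer_class=CustomTokenizer)
```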

## AutoFeatureExtractor[[transformers.AutoFeatureExtractor]]

#### transformers.AutoFeatureExtractor[[transformers.AutoFeatureExtractor]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/feature_extraction_auto.py#L231)

This is a generic feature extractor class that will be instantiated as one of the feature extractor classes of the
library when created with the [AutoFeatureExtractor.from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoFeatureExtractor.from_pretrained) class method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_pretrained[[transformers.AutoFeatureExtractor.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/feature_extraction_auto.py#L245)

- **pretrained_model_name_or_path** (`str` or `os.PathLike`) --
  This can be either:

  - a string, the *model id* of a pretrained feature_extractor hosted inside a model repo on
    huggingface.co.
  - a path to a *directory* containing a feature extractor file saved using the
    [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/feature_extractor#transformers.FeatureExtractionMixin.save_pretrained) method, e.g.,
    `./my_model_directory/`.
  - a path to a saved feature extractor JSON *file*, e.g.,
    `./my_model_directory/preprocessor_config.json`.
- **cache_dir** (`str` or `os.PathLike`, *optional*) --
  Path to a directory in which a downloaded pretrained model feature extractor should be cached if the
  standard cache should not be used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force (re-)downloading the feature extractor files and override the cached versions
  if they exist.
- **proxies** (`dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
  when running `hf auth login` (stored in `~/.huggingface`).
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
  git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
  identifier allowed by git.
- **return_unused_kwargs** (`bool`, *optional*, defaults to `False`) --
  If `False`, then this function returns just the final feature extractor object. If `True`, then this
  function returns a `Tuple(feature_extractor, unused_kwargs)` where *unused_kwargs* is a dictionary
  consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of
  `kwargs` which has not been used to update `feature_extractor` and is otherwise ignored.
- **trust_remote_code** (`bool`, *optional*, defaults to `False`) --
  Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
  should only be set to `True` for repositories you trust and in which you have read the code, as it will
  execute code present on the Hub on your local machine.
- **kwargs** (`dict[str, Any]`, *optional*) --
  The values in kwargs of any keys which are feature extractor attributes will be used to override the
  loaded values. Behavior concerning key/value pairs whose keys are *not* feature extractor attributes is
  controlled by the `return_unused_kwargs` keyword parameter.0

Instantiate one of the feature extractor classes of the library from a pretrained model.

The feature extractor class to instantiate is selected based on the `model_type` property of the config object
(either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's
missing, by falling back to using pattern matching on `pretrained_model_name_or_path`:

- **audio-spectrogram-transformer** -- `ASTFeatureExtractor` (Audio Spectrogram Transformer model)
- **audioflamingo3** -- [WhisperFeatureExtractor](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperFeatureExtractor) (AudioFlamingo3 model)
- **clap** -- `ClapFeatureExtractor` (CLAP model)
- **clvp** -- `ClvpFeatureExtractor` (CLVP model)
- **cohere_asr** -- `CohereAsrFeatureExtractor` (CohereASR model)
- **csm** -- `EncodecFeatureExtractor` (CSM model)
- **dac** -- `DacFeatureExtractor` (DAC model)
- **data2vec-audio** -- `Wav2Vec2FeatureExtractor` (Data2VecAudio model)
- **dia** -- `DiaFeatureExtractor` (Dia model)
- **encodec** -- `EncodecFeatureExtractor` (EnCodec model)
- **gemma3n** -- [Gemma3nAudioFeatureExtractor](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nAudioFeatureExtractor) (Gemma3nForConditionalGeneration model)
- **gemma4** -- `Gemma4AudioFeatureExtractor` (Gemma4ForConditionalGeneration model)
- **glmasr** -- [WhisperFeatureExtractor](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperFeatureExtractor) (GLM-ASR model)
- **granite_speech** -- `GraniteSpeechFeatureExtractor` (GraniteSpeech model)
- **higgs_audio_v2_tokenizer** -- `DacFeatureExtractor` (HiggsAudioV2Tokenizer model)
- **hubert** -- `Wav2Vec2FeatureExtractor` (Hubert model)
- **kyutai_speech_to_text** -- `KyutaiSpeechToTextFeatureExtractor` (KyutaiSpeechToText model)
- **lasr_ctc** -- `LasrFeatureExtractor` (Lasr model)
- **lasr_encoder** -- `LasrFeatureExtractor` (LasrEncoder model)
- **markuplm** -- `MarkupLMFeatureExtractor` (MarkupLM model)
- **mimi** -- `EncodecFeatureExtractor` (Mimi model)
- **moonshine** -- `Wav2Vec2FeatureExtractor` (Moonshine model)
- **moshi** -- `EncodecFeatureExtractor` (Moshi model)
- **musicgen** -- `EncodecFeatureExtractor` (MusicGen model)
- **musicgen_melody** -- `MusicgenMelodyFeatureExtractor` (MusicGen Melody model)
- **parakeet_ctc** -- `ParakeetFeatureExtractor` (Parakeet model)
- **parakeet_encoder** -- `ParakeetFeatureExtractor` (ParakeetEncoder model)
- **pe_audio** -- `PeAudioFeatureExtractor` (PeAudio model)
- **pe_audio_video** -- `PeAudioFeatureExtractor` (PeAudioVideo model)
- **phi4_multimodal** -- `Phi4MultimodalFeatureExtractor` (Phi4Multimodal model)
- **pop2piano** -- `Pop2PianoFeatureExtractor` (Pop2Piano model)
- **qwen2_5_omni** -- [WhisperFeatureExtractor](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperFeatureExtractor) (Qwen2_5Omni model)
- **qwen2_audio** -- [WhisperFeatureExtractor](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperFeatureExtractor) (Qwen2Audio model)
- **qwen3_omni_moe** -- [WhisperFeatureExtractor](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperFeatureExtractor) (Qwen3OmniMoE model)
- **seamless_m4t** -- `SeamlessM4TFeatureExtractor` (SeamlessM4T model)
- **seamless_m4t_v2** -- `SeamlessM4TFeatureExtractor` (SeamlessM4Tv2 model)
- **sew** -- `Wav2Vec2FeatureExtractor` (SEW model)
- **sew-d** -- `Wav2Vec2FeatureExtractor` (SEW-D model)
- **speech_to_text** -- `Speech2TextFeatureExtractor` (Speech2Text model)
- **speecht5** -- `SpeechT5FeatureExtractor` (SpeechT5 model)
- **unispeech** -- `Wav2Vec2FeatureExtractor` (UniSpeech model)
- **unispeech-sat** -- `Wav2Vec2FeatureExtractor` (UniSpeechSat model)
- **univnet** -- `UnivNetFeatureExtractor` (UnivNet model)
- **vibevoice_acoustic_tokenizer** -- `VibeVoiceAcousticTokenizerFeatureExtractor` (VibeVoiceAcousticTokenizer model)
- **vibevoice_asr** -- `VibeVoiceAcousticTokenizerFeatureExtractor` (VibeVoiceAsr model)
- **voxtral** -- [WhisperFeatureExtractor](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperFeatureExtractor) (Voxtral model)
- **voxtral_realtime** -- `VoxtralRealtimeFeatureExtractor` (VoxtralRealtime model)
- **wav2vec2** -- `Wav2Vec2FeatureExtractor` (Wav2Vec2 model)
- **wav2vec2-bert** -- `Wav2Vec2FeatureExtractor` (Wav2Vec2-BERT model)
- **wav2vec2-conformer** -- `Wav2Vec2FeatureExtractor` (Wav2Vec2-Conformer model)
- **wavlm** -- `Wav2Vec2FeatureExtractor` (WavLM model)
- **whisper** -- [WhisperFeatureExtractor](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperFeatureExtractor) (Whisper model)
- **xcodec** -- `DacFeatureExtractor` (X-CODEC model)

Passing `token=True` is required when you want to use a private model.

Examples:

```python
>>> from transformers import AutoFeatureExtractor

>>> # Download feature extractor from huggingface.co and cache.
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")

>>> # If feature extractor files are in a directory (e.g. feature extractor was saved using *save_pretrained('./test/saved_model/')*)
>>> # feature_extractor = AutoFeatureExtractor.from_pretrained("./test/saved_model/")
```

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : This can be either:

  - a string, the *model id* of a pretrained feature_extractor hosted inside a model repo on huggingface.co.
  - a path to a *directory* containing a feature extractor file saved using the [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/feature_extractor#transformers.FeatureExtractionMixin.save_pretrained) method, e.g., `./my_model_directory/`.
  - a path to a saved feature extractor JSON *file*, e.g., `./my_model_directory/preprocessor_config.json`.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force (re-)downloading the feature extractor files and override the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

token (`str` or *bool*, *optional*) : The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated when running `hf auth login` (stored in `~/.huggingface`).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

return_unused_kwargs (`bool`, *optional*, defaults to `False`) : If `False`, then this function returns just the final feature extractor object. If `True`, then this function returns a `Tuple(feature_extractor, unused_kwargs)` where *unused_kwargs* is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of `kwargs` which has not been used to update `feature_extractor` and is otherwise ignored.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (`dict[str, Any]`, *optional*) : The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are *not* feature extractor attributes is controlled by the `return_unused_kwargs` keyword parameter.
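
As a quick illustration of `return_unused_kwargs` (the `foo` keyword is made up purely to show the mechanism):

```python
>>> from transformers import AutoFeatureExtractor

>>> # `foo` is not a feature extractor attribute, so it is returned in the unused kwargs
>>> feature_extractor, unused_kwargs = AutoFeatureExtractor.from_pretrained(
...     "facebook/wav2vec2-base-960h", return_unused_kwargs=True, foo=False
... )
>>> unused_kwargs
{'foo': False}
```
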
#### register[[transformers.AutoFeatureExtractor.register]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/feature_extraction_auto.py#L373)

Register a new feature extractor for this class.

**Parameters:**

config_class ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The configuration corresponding to the model to register.

feature_extractor_class (`FeatureExtractionMixin`) : The feature extractor class to register.
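
A minimal, hypothetical sketch of registering a custom feature extractor (`CustomAudioConfig` and `CustomFeatureExtractor` are illustrative names, not library classes):

```python
from transformers import (
    AutoConfig,
    AutoFeatureExtractor,
    PreTrainedConfig,
    SequenceFeatureExtractor,
)


class CustomAudioConfig(PreTrainedConfig):
    model_type = "custom-audio"


class CustomFeatureExtractor(SequenceFeatureExtractor):
    # A real extractor would implement __call__ to turn raw audio into model inputs.
    pass


AutoConfig.register("custom-audio", CustomAudioConfig)
AutoFeatureExtractor.register(CustomAudioConfig, CustomFeatureExtractor)
```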

## AutoImageProcessor[[transformers.AutoImageProcessor]]

#### transformers.AutoImageProcessor[[transformers.AutoImageProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/image_processing_auto.py#L557)

This is a generic image processor class that will be instantiated as one of the image processor classes of the
library when created with the [AutoImageProcessor.from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoImageProcessor.from_pretrained) class method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_pretrained[[transformers.AutoImageProcessor.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/image_processing_auto.py#L571)

- **pretrained_model_name_or_path** (`str` or `os.PathLike`) --
  This can be either:

  - a string, the *model id* of a pretrained image_processor hosted inside a model repo on
    huggingface.co.
  - a path to a *directory* containing an image processor file saved using the
    [save_pretrained()](/docs/transformers/v5.5.1/ko/internal/image_processing_utils#transformers.ImageProcessingMixin.save_pretrained) method, e.g.,
    `./my_model_directory/`.
  - a path to a saved image processor JSON *file*, e.g.,
    `./my_model_directory/preprocessor_config.json`.
- **cache_dir** (`str` or `os.PathLike`, *optional*) --
  Path to a directory in which a downloaded pretrained model image processor should be cached if the
  standard cache should not be used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force (re-)downloading the image processor files and override the cached versions if
  they exist.
- **proxies** (`dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
  when running `hf auth login` (stored in `~/.huggingface`).
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
  git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
  identifier allowed by git.
- **use_fast** (`bool`, *optional*, defaults to `False`) --
  **Deprecated**: Use `backend="torchvision"` instead. This parameter is kept for backward compatibility.
  Use a fast torchvision-based image processor if it is supported for a given model.
  If a fast image processor is not available for a given model, a normal numpy-based image processor
  is returned instead.
- **backend** (`str`, *optional*, defaults to `None`) --
  The backend to use for image processing. Can be:
  - `None`: Automatically select the best available backend (torchvision if available, otherwise pil)
  - `"torchvision"`: Use Torchvision backend (GPU-accelerated, faster)
  - `"pil"`: Use PIL backend (portable, CPU-only)
  - Any custom backend name registered via `register()` method
- **return_unused_kwargs** (`bool`, *optional*, defaults to `False`) --
  If `False`, then this function returns just the final image processor object. If `True`, then this
  function returns a `Tuple(image_processor, unused_kwargs)` where *unused_kwargs* is a dictionary
  consisting of the key/value pairs whose keys are not image processor attributes: i.e., the part of
  `kwargs` which has not been used to update `image_processor` and is otherwise ignored.
- **trust_remote_code** (`bool`, *optional*, defaults to `False`) --
  Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
  should only be set to `True` for repositories you trust and in which you have read the code, as it will
  execute code present on the Hub on your local machine.
- **image_processor_filename** (`str`, *optional*, defaults to `"config.json"`) --
  The name of the file in the model directory to use for the image processor config.
- **kwargs** (`dict[str, Any]`, *optional*) --
  The values in kwargs of any keys which are image processor attributes will be used to override the
  loaded values. Behavior concerning key/value pairs whose keys are *not* image processor attributes is
  controlled by the `return_unused_kwargs` keyword parameter.

Instantiate one of the image processor classes of the library from a pretrained model.

The image processor class to instantiate is selected based on the `model_type` property of the config object
(either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's
missing, by falling back to using pattern matching on `pretrained_model_name_or_path`:

- **aimv2** -- `{'torchvision': 'CLIPImageProcessor', 'pil': 'CLIPImageProcessorPil'}` (AIMv2 model)
- **aimv2_vision_model** -- `{'torchvision': 'CLIPImageProcessor', 'pil': 'CLIPImageProcessorPil'}` (Aimv2VisionModel model)
- **align** -- `{'torchvision': 'EfficientNetImageProcessor', 'pil': 'EfficientNetImageProcessorPil'}` (ALIGN model)
- **altclip** -- `{'torchvision': 'CLIPImageProcessor', 'pil': 'CLIPImageProcessorPil'}` (AltCLIP model)
- **aria** -- `{'torchvision': 'AriaImageProcessor', 'pil': 'AriaImageProcessorPil'}` (Aria model)
- **aya_vision** -- `{'torchvision': 'GotOcr2ImageProcessor', 'pil': 'GotOcr2ImageProcessorPil'}` (AyaVision model)
- **beit** -- `{'torchvision': 'BeitImageProcessor', 'pil': 'BeitImageProcessorPil'}` (BEiT model)
- **bit** -- `{'torchvision': 'BitImageProcessor', 'pil': 'BitImageProcessorPil'}` (BiT model)
- **blip** -- `{'torchvision': 'BlipImageProcessor', 'pil': 'BlipImageProcessorPil'}` (BLIP model)
- **blip-2** -- `{'torchvision': 'BlipImageProcessor', 'pil': 'BlipImageProcessorPil'}` (BLIP-2 model)
- **bridgetower** -- `{'torchvision': 'BridgeTowerImageProcessor', 'pil': 'BridgeTowerImageProcessorPil'}` (BridgeTower model)
- **chameleon** -- `{'torchvision': 'ChameleonImageProcessor', 'pil': 'ChameleonImageProcessorPil'}` (Chameleon model)
- **chinese_clip** -- `{'torchvision': 'ChineseCLIPImageProcessor', 'pil': 'ChineseCLIPImageProcessorPil'}` (Chinese-CLIP model)
- **chmv2** -- `{'torchvision': 'CHMv2ImageProcessor'}` (CHMv2 model)
- **clip** -- `{'torchvision': 'CLIPImageProcessor', 'pil': 'CLIPImageProcessorPil'}` (CLIP model)
- **clipseg** -- `{'torchvision': 'ViTImageProcessor', 'pil': 'ViTImageProcessorPil'}` (CLIPSeg model)
- **cohere2_vision** -- `{'torchvision': 'Cohere2VisionImageProcessor'}` (Cohere2Vision model)
- **colpali** -- `{'torchvision': 'SiglipImageProcessor', 'pil': 'SiglipImageProcessorPil'}` (ColPali model)
- **colqwen2** -- `{'torchvision': 'Qwen2VLImageProcessor', 'pil': 'Qwen2VLImageProcessorPil'}` (ColQwen2 model)
- **conditional_detr** -- `{'torchvision': 'ConditionalDetrImageProcessor', 'pil': 'ConditionalDetrImageProcessorPil'}` (Conditional DETR model)
- **convnext** -- `{'torchvision': 'ConvNextImageProcessor', 'pil': 'ConvNextImageProcessorPil'}` (ConvNeXT model)
- **convnextv2** -- `{'torchvision': 'ConvNextImageProcessor', 'pil': 'ConvNextImageProcessorPil'}` (ConvNeXTV2 model)
- **cvt** -- `{'torchvision': 'ConvNextImageProcessor', 'pil': 'ConvNextImageProcessorPil'}` (CvT model)
- **data2vec-vision** -- `{'torchvision': 'BeitImageProcessor', 'pil': 'BeitImageProcessorPil'}` (Data2VecVision model)
- **deepseek_vl** -- `{'torchvision': 'DeepseekVLImageProcessor', 'pil': 'DeepseekVLImageProcessorPil'}` (DeepseekVL model)
- **deepseek_vl_hybrid** -- `{'torchvision': 'DeepseekVLHybridImageProcessor', 'pil': 'DeepseekVLHybridImageProcessorPil'}` (DeepseekVLHybrid model)
- **deformable_detr** -- `{'torchvision': 'DeformableDetrImageProcessor', 'pil': 'DeformableDetrImageProcessorPil'}` (Deformable DETR model)
- **deit** -- `{'torchvision': 'DeiTImageProcessor', 'pil': 'DeiTImageProcessorPil'}` (DeiT model)
- **depth_anything** -- `{'torchvision': 'DPTImageProcessor', 'pil': 'DPTImageProcessorPil'}` (Depth Anything model)
- **depth_pro** -- `{'torchvision': 'DepthProImageProcessor'}` (DepthPro model)
- **detr** -- `{'torchvision': 'DetrImageProcessor', 'pil': 'DetrImageProcessorPil'}` (DETR model)
- **dinat** -- `{'torchvision': 'ViTImageProcessor', 'pil': 'ViTImageProcessorPil'}` (DiNAT model)
- **dinov2** -- `{'torchvision': 'BitImageProcessor', 'pil': 'BitImageProcessorPil'}` (DINOv2 model)
- **dinov3_vit** -- `{'torchvision': 'DINOv3ViTImageProcessor'}` (DINOv3 ViT model)
- **donut-swin** -- `{'torchvision': 'DonutImageProcessor', 'pil': 'DonutImageProcessorPil'}` (DonutSwin model)
- **dpt** -- `{'torchvision': 'DPTImageProcessor', 'pil': 'DPTImageProcessorPil'}` (DPT model)
- **edgetam** -- `{'torchvision': 'Sam2ImageProcessor'}` (EdgeTAM model)
- **efficientloftr** -- `{'torchvision': 'EfficientLoFTRImageProcessor', 'pil': 'EfficientLoFTRImageProcessorPil'}` (EfficientLoFTR model)
- **efficientnet** -- `{'torchvision': 'EfficientNetImageProcessor', 'pil': 'EfficientNetImageProcessorPil'}` (EfficientNet model)
- **emu3** -- `{'pil': 'Emu3ImageProcessor'}` (Emu3 model)
- **eomt** -- `{'torchvision': 'EomtImageProcessor', 'pil': 'EomtImageProcessorPil'}` (EoMT model)
- **eomt_dinov3** -- `{'torchvision': 'EomtImageProcessor', 'pil': 'EomtImageProcessorPil'}` (EoMT-DINOv3 model)
- **ernie4_5_vl_moe** -- `{'torchvision': 'Ernie4_5_VLMoeImageProcessor', 'pil': 'Ernie4_5_VLMoeImageProcessorPil'}` (Ernie4_5_VLMoE model)
- **flava** -- `{'torchvision': 'FlavaImageProcessor', 'pil': 'FlavaImageProcessorPil'}` (FLAVA model)
- **florence2** -- `{'torchvision': 'CLIPImageProcessor', 'pil': 'CLIPImageProcessorPil'}` (Florence2 model)
- **focalnet** -- `{'torchvision': 'BitImageProcessor', 'pil': 'BitImageProcessorPil'}` (FocalNet model)
- **fuyu** -- `{'torchvision': 'FuyuImageProcessor', 'pil': 'FuyuImageProcessorPil'}` (Fuyu model)
- **gemma3** -- `{'torchvision': 'Gemma3ImageProcessor', 'pil': 'Gemma3ImageProcessorPil'}` (Gemma3ForConditionalGeneration model)
- **gemma3n** -- `{'torchvision': 'SiglipImageProcessor', 'pil': 'SiglipImageProcessorPil'}` (Gemma3nForConditionalGeneration model)
- **gemma4** -- `{'torchvision': 'Gemma4ImageProcessor', 'pil': 'Gemma4ImageProcessorPil'}` (Gemma4ForConditionalGeneration model)
- **git** -- `{'torchvision': 'CLIPImageProcessor', 'pil': 'CLIPImageProcessorPil'}` (GIT model)
- **glm46v** -- `{'torchvision': 'Glm46VImageProcessor', 'pil': 'Glm46VImageProcessorPil'}` (Glm46V model)
- **glm4v** -- `{'torchvision': 'Glm4vImageProcessor', 'pil': 'Glm4vImageProcessorPil'}` (GLM4V model)
- **glm_image** -- `{'torchvision': 'GlmImageImageProcessor', 'pil': 'GlmImageImageProcessorPil'}` (GlmImage model)
- **glpn** -- `{'torchvision': 'GLPNImageProcessor', 'pil': 'GLPNImageProcessorPil'}` (GLPN model)
- **got_ocr2** -- `{'torchvision': 'GotOcr2ImageProcessor', 'pil': 'GotOcr2ImageProcessorPil'}` (GOT-OCR2 model)
- **grounding-dino** -- `{'torchvision': 'GroundingDinoImageProcessor', 'pil': 'GroundingDinoImageProcessorPil'}` (Grounding DINO model)
- **groupvit** -- `{'torchvision': 'CLIPImageProcessor', 'pil': 'CLIPImageProcessorPil'}` (GroupViT model)
- **hiera** -- `{'torchvision': 'BitImageProcessor', 'pil': 'BitImageProcessorPil'}` (Hiera model)
- **idefics** -- `{'torchvision': 'IdeficsImageProcessor', 'pil': 'IdeficsImageProcessorPil'}` (IDEFICS model)
- **idefics2** -- `{'torchvision': 'Idefics2ImageProcessor', 'pil': 'Idefics2ImageProcessorPil'}` (Idefics2 model)
- **idefics3** -- `{'torchvision': 'Idefics3ImageProcessor', 'pil': 'Idefics3ImageProcessorPil'}` (Idefics3 model)
- **ijepa** -- `{'torchvision': 'ViTImageProcessor', 'pil': 'ViTImageProcessorPil'}` (I-JEPA model)
- **imagegpt** -- `{'torchvision': 'ImageGPTImageProcessor', 'pil': 'ImageGPTImageProcessorPil'}` (ImageGPT model)
- **instructblip** -- `{'torchvision': 'BlipImageProcessor', 'pil': 'BlipImageProcessorPil'}` (InstructBLIP model)
- **internvl** -- `{'torchvision': 'GotOcr2ImageProcessor', 'pil': 'GotOcr2ImageProcessorPil'}` (InternVL model)
- **janus** -- `{'torchvision': 'JanusImageProcessor', 'pil': 'JanusImageProcessorPil'}` (Janus model)
- **kosmos-2** -- `{'torchvision': 'CLIPImageProcessor', 'pil': 'CLIPImageProcessorPil'}` (KOSMOS-2 model)
- **kosmos-2.5** -- `{'torchvision': 'Kosmos2_5ImageProcessor', 'pil': 'Kosmos2_5ImageProcessorPil'}` (KOSMOS-2.5 model)
- **layoutlmv2** -- `{'torchvision': 'LayoutLMv2ImageProcessor', 'pil': 'LayoutLMv2ImageProcessorPil'}` (LayoutLMv2 model)
- **layoutlmv3** -- `{'torchvision': 'LayoutLMv3ImageProcessor', 'pil': 'LayoutLMv3ImageProcessorPil'}` (LayoutLMv3 model)
- **layoutxlm** -- `{'torchvision': 'LayoutLMv2ImageProcessor', 'pil': 'LayoutLMv2ImageProcessorPil'}` (LayoutXLM model)
- **levit** -- `{'torchvision': 'LevitImageProcessor', 'pil': 'LevitImageProcessorPil'}` (LeViT model)
- **lfm2_vl** -- `{'torchvision': 'Lfm2VlImageProcessor'}` (Lfm2Vl model)
- **lightglue** -- `{'torchvision': 'LightGlueImageProcessor', 'pil': 'LightGlueImageProcessorPil'}` (LightGlue model)
- **lighton_ocr** -- `{'torchvision': 'PixtralImageProcessor', 'pil': 'PixtralImageProcessorPil'}` (LightOnOcr model)
- **llama4** -- `{'torchvision': 'Llama4ImageProcessor'}` (Llama4 model)
- **llava** -- `{'torchvision': 'LlavaImageProcessor', 'pil': 'LlavaImageProcessorPil'}` (LLaVa model)
- **llava_next** -- `{'torchvision': 'LlavaNextImageProcessor', 'pil': 'LlavaNextImageProcessorPil'}` (LLaVA-NeXT model)
- **llava_next_video** -- `{'torchvision': 'LlavaNextImageProcessor', 'pil': 'LlavaNextImageProcessorPil'}` (LLaVa-NeXT-Video model)
- **llava_onevision** -- `{'torchvision': 'LlavaOnevisionImageProcessor', 'pil': 'LlavaOnevisionImageProcessorPil'}` (LLaVA-Onevision model)
- **lw_detr** -- `{'torchvision': 'DeformableDetrImageProcessor', 'pil': 'DeformableDetrImageProcessorPil'}` (LwDetr model)
- **mask2former** -- `{'torchvision': 'Mask2FormerImageProcessor', 'pil': 'Mask2FormerImageProcessorPil'}` (Mask2Former model)
- **maskformer** -- `{'torchvision': 'MaskFormerImageProcessor', 'pil': 'MaskFormerImageProcessorPil'}` (MaskFormer model)
- **metaclip_2** -- `{'torchvision': 'CLIPImageProcessor', 'pil': 'CLIPImageProcessorPil'}` (MetaCLIP 2 model)
- **mgp-str** -- `{'torchvision': 'ViTImageProcessor', 'pil': 'ViTImageProcessorPil'}` (MGP-STR model)
- **mistral3** -- `{'torchvision': 'PixtralImageProcessor', 'pil': 'PixtralImageProcessorPil'}` (Mistral3 model)
- **mlcd** -- `{'torchvision': 'CLIPImageProcessor', 'pil': 'CLIPImageProcessorPil'}` (MLCD model)
- **mllama** -- `{'torchvision': 'MllamaImageProcessor', 'pil': 'MllamaImageProcessorPil'}` (Mllama model)
- **mm-grounding-dino** -- `{'torchvision': 'GroundingDinoImageProcessor', 'pil': 'GroundingDinoImageProcessorPil'}` (MM Grounding DINO model)
- **mobilenet_v1** -- `{'torchvision': 'MobileNetV1ImageProcessor', 'pil': 'MobileNetV1ImageProcessorPil'}` (MobileNetV1 model)
- **mobilenet_v2** -- `{'torchvision': 'MobileNetV2ImageProcessor', 'pil': 'MobileNetV2ImageProcessorPil'}` (MobileNetV2 model)
- **mobilevit** -- `{'torchvision': 'MobileViTImageProcessor', 'pil': 'MobileViTImageProcessorPil'}` (MobileViT model)
- **mobilevitv2** -- `{'torchvision': 'MobileViTImageProcessor', 'pil': 'MobileViTImageProcessorPil'}` (MobileViTV2 model)
- **nougat** -- `{'torchvision': 'NougatImageProcessor', 'pil': 'NougatImageProcessorPil'}` (Nougat model)
- **omdet-turbo** -- `{'torchvision': 'DetrImageProcessor', 'pil': 'DetrImageProcessorPil'}` (OmDet-Turbo model)
- **oneformer** -- `{'torchvision': 'OneFormerImageProcessor', 'pil': 'OneFormerImageProcessorPil'}` (OneFormer model)
- **ovis2** -- `{'torchvision': 'Ovis2ImageProcessor', 'pil': 'Ovis2ImageProcessorPil'}` (Ovis2 model)
- **owlv2** -- `{'torchvision': 'Owlv2ImageProcessor', 'pil': 'Owlv2ImageProcessorPil'}` (OWLv2 model)
- **owlvit** -- `{'torchvision': 'OwlViTImageProcessor', 'pil': 'OwlViTImageProcessorPil'}` (OWL-ViT model)
- **paddleocr_vl** -- `{'torchvision': 'PaddleOCRVLImageProcessor', 'pil': 'PaddleOCRVLImageProcessorPil'}` (PaddleOCRVL model)
- **paligemma** -- `{'torchvision': 'SiglipImageProcessor', 'pil': 'SiglipImageProcessorPil'}` (PaliGemma model)
- **perceiver** -- `{'torchvision': 'PerceiverImageProcessor', 'pil': 'PerceiverImageProcessorPil'}` (Perceiver model)
- **perception_lm** -- `{'torchvision': 'PerceptionLMImageProcessor'}` (PerceptionLM model)
- **phi4_multimodal** -- `{'torchvision': 'Phi4MultimodalImageProcessor'}` (Phi4Multimodal model)
- **pi0** -- `{'torchvision': 'PI0ImageProcessor'}` (PI0 model)
- **pix2struct** -- `{'torchvision': 'Pix2StructImageProcessor', 'pil': 'Pix2StructImageProcessorPil'}` (Pix2Struct model)
- **pixio** -- `{'torchvision': 'BitImageProcessor', 'pil': 'BitImageProcessorPil'}` (Pixio model)
- **pixtral** -- `{'torchvision': 'PixtralImageProcessor', 'pil': 'PixtralImageProcessorPil'}` (Pixtral model)
- **poolformer** -- `{'torchvision': 'PoolFormerImageProcessor', 'pil': 'PoolFormerImageProcessorPil'}` (PoolFormer model)
- **pp_chart2table** -- `{'torchvision': 'PPChart2TableImageProcessor', 'pil': 'PPChart2TableImageProcessorPil'}` (PPChart2Table model)
- **pp_doclayout_v2** -- `{'torchvision': 'PPDocLayoutV2ImageProcessor'}` (PPDocLayoutV2 model)
- **pp_doclayout_v3** -- `{'torchvision': 'PPDocLayoutV3ImageProcessor'}` (PPDocLayoutV3 model)
- **pp_lcnet** -- `{'torchvision': 'PPLCNetImageProcessor'}` (PPLCNet model)
- **pp_ocrv5_mobile_det** -- `{'torchvision': 'PPOCRV5ServerDetImageProcessor'}` (PPOCRV5MobileDet model)
- **pp_ocrv5_mobile_rec** -- `{'torchvision': 'PPOCRV5ServerRecImageProcessor'}` (PPOCRV5MobileRec model)
- **pp_ocrv5_server_det** -- `{'torchvision': 'PPOCRV5ServerDetImageProcessor'}` (PPOCRV5ServerDet model)
- **pp_ocrv5_server_rec** -- `{'torchvision': 'PPOCRV5ServerRecImageProcessor'}` (PPOCRV5ServerRec model)
- **prompt_depth_anything** -- `{'torchvision': 'PromptDepthAnythingImageProcessor', 'pil': 'PromptDepthAnythingImageProcessorPil'}` (PromptDepthAnything model)
- **pvt** -- `{'torchvision': 'PvtImageProcessor', 'pil': 'PvtImageProcessorPil'}` (PVT model)
- **pvt_v2** -- `{'torchvision': 'PvtImageProcessor', 'pil': 'PvtImageProcessorPil'}` (PVTv2 model)
- **qwen2_5_omni** -- `{'torchvision': 'Qwen2VLImageProcessor', 'pil': 'Qwen2VLImageProcessorPil'}` (Qwen2_5Omni model)
- **qwen2_5_vl** -- `{'torchvision': 'Qwen2VLImageProcessor', 'pil': 'Qwen2VLImageProcessorPil'}` (Qwen2_5_VL model)
- **qwen2_vl** -- `{'torchvision': 'Qwen2VLImageProcessor', 'pil': 'Qwen2VLImageProcessorPil'}` (Qwen2VL model)
- **qwen3_5** -- `{'torchvision': 'Qwen2VLImageProcessor', 'pil': 'Qwen2VLImageProcessorPil'}` (Qwen3_5 model)
- **qwen3_5_moe** -- `{'torchvision': 'Qwen2VLImageProcessor', 'pil': 'Qwen2VLImageProcessorPil'}` (Qwen3_5Moe model)
- **qwen3_omni_moe** -- `{'torchvision': 'Qwen2VLImageProcessor', 'pil': 'Qwen2VLImageProcessorPil'}` (Qwen3OmniMoE model)
- **qwen3_vl** -- `{'torchvision': 'Qwen2VLImageProcessor', 'pil': 'Qwen2VLImageProcessorPil'}` (Qwen3VL model)
- **regnet** -- `{'torchvision': 'ConvNextImageProcessor', 'pil': 'ConvNextImageProcessorPil'}` (RegNet model)
- **resnet** -- `{'torchvision': 'ConvNextImageProcessor', 'pil': 'ConvNextImageProcessorPil'}` (ResNet model)
- **rt_detr** -- `{'torchvision': 'RTDetrImageProcessor', 'pil': 'RTDetrImageProcessorPil'}` (RT-DETR model)
- **sam** -- `{'torchvision': 'SamImageProcessor', 'pil': 'SamImageProcessorPil'}` (SAM model)
- **sam2** -- `{'torchvision': 'Sam2ImageProcessor'}` (SAM2 model)
- **sam2_video** -- `{'torchvision': 'Sam2ImageProcessor'}` (Sam2VideoModel model)
- **sam3** -- `{'torchvision': 'Sam3ImageProcessor'}` (SAM3 model)
- **sam3_tracker** -- `{'torchvision': 'Sam3ImageProcessor'}` (Sam3Tracker model)
- **sam3_tracker_video** -- `{'torchvision': 'Sam3ImageProcessor'}` (Sam3TrackerVideo model)
- **sam3_video** -- `{'torchvision': 'Sam3ImageProcessor'}` (Sam3VideoModel model)
- **sam_hq** -- `{'torchvision': 'SamImageProcessor', 'pil': 'SamImageProcessorPil'}` (SAM-HQ model)
- **segformer** -- `{'torchvision': 'SegformerImageProcessor', 'pil': 'SegformerImageProcessorPil'}` (SegFormer model)
- **seggpt** -- `{'torchvision': 'SegGptImageProcessor', 'pil': 'SegGptImageProcessorPil'}` (SegGPT model)
- **shieldgemma2** -- `{'torchvision': 'Gemma3ImageProcessor', 'pil': 'Gemma3ImageProcessorPil'}` (Shieldgemma2 model)
- **siglip** -- `{'torchvision': 'SiglipImageProcessor', 'pil': 'SiglipImageProcessorPil'}` (SigLIP model)
- **siglip2** -- `{'torchvision': 'Siglip2ImageProcessor', 'pil': 'Siglip2ImageProcessorPil'}` (SigLIP2 model)
- **slanext** -- `{'torchvision': 'SLANeXtImageProcessor'}` (SLANeXt model)
- **smolvlm** -- `{'torchvision': 'SmolVLMImageProcessor', 'pil': 'SmolVLMImageProcessorPil'}` (SmolVLM model)
- **superglue** -- `{'torchvision': 'SuperGlueImageProcessor', 'pil': 'SuperGlueImageProcessorPil'}` (SuperGlue model)
- **superpoint** -- `{'torchvision': 'SuperPointImageProcessor', 'pil': 'SuperPointImageProcessorPil'}` (SuperPoint model)
- **swiftformer** -- `{'torchvision': 'ViTImageProcessor', 'pil': 'ViTImageProcessorPil'}` (SwiftFormer model)
- **swin** -- `{'torchvision': 'ViTImageProcessor', 'pil': 'ViTImageProcessorPil'}` (Swin Transformer model)
- **swin2sr** -- `{'torchvision': 'Swin2SRImageProcessor', 'pil': 'Swin2SRImageProcessorPil'}` (Swin2SR model)
- **swinv2** -- `{'torchvision': 'ViTImageProcessor', 'pil': 'ViTImageProcessorPil'}` (Swin Transformer V2 model)
- **t5gemma2** -- `{'torchvision': 'Gemma3ImageProcessor', 'pil': 'Gemma3ImageProcessorPil'}` (T5Gemma2 model)
- **t5gemma2_encoder** -- `{'torchvision': 'Gemma3ImageProcessor', 'pil': 'Gemma3ImageProcessorPil'}` (T5Gemma2Encoder model)
- **table-transformer** -- `{'torchvision': 'DetrImageProcessor', 'pil': 'DetrImageProcessorPil'}` (Table Transformer model)
- **textnet** -- `{'torchvision': 'TextNetImageProcessor', 'pil': 'TextNetImageProcessorPil'}` (TextNet model)
- **timesformer** -- `{'pil': 'VideoMAEImageProcessorPil', 'torchvision': 'VideoMAEImageProcessor'}` (TimeSformer model)
- **timm_wrapper** -- `{'pil': 'TimmWrapperImageProcessor'}` (TimmWrapperModel model)
- **trocr** -- `{'torchvision': 'ViTImageProcessor', 'pil': 'ViTImageProcessorPil'}` (TrOCR model)
- **tvp** -- `{'torchvision': 'TvpImageProcessor', 'pil': 'TvpImageProcessorPil'}` (TVP model)
- **udop** -- `{'torchvision': 'LayoutLMv3ImageProcessor', 'pil': 'LayoutLMv3ImageProcessorPil'}` (UDOP model)
- **upernet** -- `{'torchvision': 'SegformerImageProcessor', 'pil': 'SegformerImageProcessorPil'}` (UPerNet model)
- **uvdoc** -- `{'torchvision': 'UVDocImageProcessor'}` (UVDoc model)
- **video_llama_3** -- `{'torchvision': 'VideoLlama3ImageProcessor', 'pil': 'VideoLlama3ImageProcessorPil'}` (VideoLlama3 model)
- **video_llava** -- `{'pil': 'VideoLlavaImageProcessor'}` (VideoLlava model)
- **videomae** -- `{'torchvision': 'VideoMAEImageProcessor', 'pil': 'VideoMAEImageProcessorPil'}` (VideoMAE model)
- **vilt** -- `{'torchvision': 'ViltImageProcessor', 'pil': 'ViltImageProcessorPil'}` (ViLT model)
- **vipllava** -- `{'torchvision': 'CLIPImageProcessor', 'pil': 'CLIPImageProcessorPil'}` (VipLlava model)
- **vit** -- `{'torchvision': 'ViTImageProcessor', 'pil': 'ViTImageProcessorPil'}` (ViT model)
- **vit_mae** -- `{'torchvision': 'ViTImageProcessor', 'pil': 'ViTImageProcessorPil'}` (ViTMAE model)
- **vit_msn** -- `{'torchvision': 'ViTImageProcessor', 'pil': 'ViTImageProcessorPil'}` (ViTMSN model)
- **vitmatte** -- `{'torchvision': 'VitMatteImageProcessor', 'pil': 'VitMatteImageProcessorPil'}` (ViTMatte model)
- **vitpose** -- `{'torchvision': 'VitPoseImageProcessor', 'pil': 'VitPoseImageProcessorPil'}` (ViTPose model)
- **xclip** -- `{'torchvision': 'CLIPImageProcessor', 'pil': 'CLIPImageProcessorPil'}` (X-CLIP model)
- **yolos** -- `{'torchvision': 'YolosImageProcessor', 'pil': 'YolosImageProcessorPil'}` (YOLOS model)
- **zoedepth** -- `{'torchvision': 'ZoeDepthImageProcessor', 'pil': 'ZoeDepthImageProcessorPil'}` (ZoeDepth model)

Passing `token=True` is required when you want to use a private model.

Examples:

```python
>>> from transformers import AutoImageProcessor

>>> # Download image processor from huggingface.co and cache.
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

>>> # If image processor files are in a directory (e.g. image processor was saved using *save_pretrained('./test/saved_model/')*)
>>> # image_processor = AutoImageProcessor.from_pretrained("./test/saved_model/")
```
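
The `backend` option documented in the parameters below can also be requested explicitly; a short sketch using the same public checkpoint as above:

```python
>>> from transformers import AutoImageProcessor

>>> # Explicitly use the torchvision backend (requires torchvision to be installed)
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k", backend="torchvision")

>>> # Explicitly use the PIL backend (portable, CPU-only)
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k", backend="pil")
```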

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : This can be either:

  - a string, the *model id* of a pretrained image_processor hosted inside a model repo on huggingface.co.
  - a path to a *directory* containing an image processor file saved using the [save_pretrained()](/docs/transformers/v5.5.1/ko/internal/image_processing_utils#transformers.ImageProcessingMixin.save_pretrained) method, e.g., `./my_model_directory/`.
  - a path to a saved image processor JSON *file*, e.g., `./my_model_directory/preprocessor_config.json`.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model image processor should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force (re-)downloading the image processor files and override the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

token (`str` or *bool*, *optional*) : The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated when running `hf auth login` (stored in `~/.huggingface`).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

use_fast (`bool`, *optional*, defaults to `False`) : **Deprecated**: Use `backend="torchvision"` instead. This parameter is kept for backward compatibility. Use a fast torchvision-based image processor if it is supported for a given model. If a fast image processor is not available for a given model, a normal numpy-based image processor is returned instead.

backend (`str`, *optional*, defaults to `None`) : The backend to use for image processing. Can be:

  - `None`: automatically select the best available backend (torchvision if available, otherwise pil)
  - `"torchvision"`: use the torchvision backend (GPU-accelerated, faster)
  - `"pil"`: use the PIL backend (portable, CPU-only)
  - any custom backend name registered via the `register()` method

  A short usage sketch is shown after this parameter list.

return_unused_kwargs (`bool`, *optional*, defaults to `False`) : If `False`, then this function returns just the final image processor object. If `True`, then this function returns a `Tuple(image_processor, unused_kwargs)` where *unused_kwargs* is a dictionary consisting of the key/value pairs whose keys are not image processor attributes: i.e., the part of `kwargs` which has not been used to update `image_processor` and is otherwise ignored.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

image_processor_filename (`str`, *optional*, defaults to `"config.json"`) : The name of the file in the model directory to use for the image processor config.

kwargs (`dict[str, Any]`, *optional*) : The values in kwargs of any keys which are image processor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are *not* image processor attributes is controlled by the `return_unused_kwargs` keyword parameter.
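A minimal sketch of the `backend` selection described above (the checkpoint is only an example, and the `"torchvision"` backend requires torchvision to be installed):

```python
>>> from transformers import AutoImageProcessor

>>> # Let from_pretrained pick the best available backend (torchvision if installed, otherwise PIL).
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

>>> # Explicitly request the PIL backend, e.g. on a CPU-only machine without torchvision.
>>> image_processor = AutoImageProcessor.from_pretrained(
...     "google/vit-base-patch16-224-in21k", backend="pil"
... )
```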
#### register[[transformers.AutoImageProcessor.register]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/image_processing_auto.py#L761)

Register a new image processor for this class.

**Parameters:**

config_class ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The configuration corresponding to the model to register.

slow_image_processor_class (`type`, *optional*) : The PIL backend image processor class (deprecated, use `image_processor_classes={"pil": ...}`).

fast_image_processor_class (`type`, *optional*) : The Torchvision backend image processor class (deprecated, use `image_processor_classes={"torchvision": ...}`).

image_processor_classes (`dict[str, type]`, *optional*) : Dictionary mapping backend names to image processor classes. Allows registering custom backends. Example: `{"pil": MyPilProcessor, "torchvision": MyTorchvisionProcessor, "custom": MyCustomProcessor}`

exist_ok (`bool`, *optional*, defaults to `False`) : If `True`, allow overwriting existing registrations.
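As a sketch, registering a custom image processor for a custom model might look as follows. `NewModelConfig`, `NewModelImageProcessor`, and `NewModelImageProcessorFast` are hypothetical user-defined classes (a `PreTrainedConfig` subclass with `model_type = "new-model"`, plus one image processor class per backend) assumed to be defined elsewhere:

```python
from transformers import AutoConfig, AutoImageProcessor

# All three New* classes below are illustrative placeholders, not library classes.
AutoConfig.register("new-model", NewModelConfig)
AutoImageProcessor.register(
    NewModelConfig,
    image_processor_classes={
        "pil": NewModelImageProcessor,
        "torchvision": NewModelImageProcessorFast,
    },
)
```

After registration, `AutoImageProcessor.from_pretrained()` resolves any checkpoint whose configuration has `model_type` `"new-model"` to the registered classes.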

## AutoProcessor[[transformers.AutoProcessor]]

#### transformers.AutoProcessor[[transformers.AutoProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/processing_auto.py#L215)

This is a generic processor class that will be instantiated as one of the processor classes of the library when
created with the [AutoProcessor.from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoProcessor.from_pretrained) class method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_pretrained[[transformers.AutoProcessor.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/processing_auto.py#L229)

- **pretrained_model_name_or_path** (`str` or `os.PathLike`) --
  This can be either:

  - a string, the *model id* of a pretrained feature_extractor hosted inside a model repo on
    huggingface.co.
  - a path to a *directory* containing the processor files saved using the `save_pretrained()` method,
    e.g., `./my_model_directory/`.
- **cache_dir** (`str` or `os.PathLike`, *optional*) --
  Path to a directory in which a downloaded pretrained model feature extractor should be cached if the
  standard cache should not be used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force a (re-)download of the feature extractor files and override the cached versions
  if they exist.
- **proxies** (`dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
  when running `hf auth login` (stored in `~/.huggingface`).
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, or a commit id. Since we use a
  git-based system for storing models and other artifacts on huggingface.co, `revision` can be any
  identifier allowed by git.
- **return_unused_kwargs** (`bool`, *optional*, defaults to `False`) --
  If `False`, then this function returns just the final feature extractor object. If `True`, then this
  function returns a `Tuple(feature_extractor, unused_kwargs)` where *unused_kwargs* is a dictionary
  consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of
  `kwargs` which has not been used to update `feature_extractor` and is otherwise ignored.
- **trust_remote_code** (`bool`, *optional*, defaults to `False`) --
  Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
  should only be set to `True` for repositories you trust and in which you have read the code, as it will
  execute code present on the Hub on your local machine.
- **kwargs** (`dict[str, Any]`, *optional*) --
  The values in kwargs of any keys which are feature extractor attributes will be used to override the
  loaded values. Behavior concerning key/value pairs whose keys are *not* feature extractor attributes is
  controlled by the `return_unused_kwargs` keyword parameter.

Instantiate one of the processor classes of the library from a pretrained model vocabulary.

The processor class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible):

- **aimv2** -- [CLIPProcessor](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPProcessor) (AIMv2 model)
- **align** -- `AlignProcessor` (ALIGN model)
- **altclip** -- [AltCLIPProcessor](/docs/transformers/v5.5.1/ko/model_doc/altclip#transformers.AltCLIPProcessor) (AltCLIP model)
- **aria** -- `AriaProcessor` (Aria model)
- **audioflamingo3** -- `AudioFlamingo3Processor` (AudioFlamingo3 model)
- **aya_vision** -- `AyaVisionProcessor` (AyaVision model)
- **bark** -- `BarkProcessor` (Bark model)
- **blip** -- [BlipProcessor](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipProcessor) (BLIP model)
- **blip-2** -- [Blip2Processor](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2Processor) (BLIP-2 model)
- **bridgetower** -- `BridgeTowerProcessor` (BridgeTower model)
- **chameleon** -- [ChameleonProcessor](/docs/transformers/v5.5.1/ko/model_doc/chameleon#transformers.ChameleonProcessor) (Chameleon model)
- **chinese_clip** -- `ChineseCLIPProcessor` (Chinese-CLIP model)
- **clap** -- `ClapProcessor` (CLAP model)
- **clip** -- [CLIPProcessor](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPProcessor) (CLIP model)
- **clipseg** -- [CLIPSegProcessor](/docs/transformers/v5.5.1/ko/model_doc/clipseg#transformers.CLIPSegProcessor) (CLIPSeg model)
- **clvp** -- `ClvpProcessor` (CLVP model)
- **cohere2_vision** -- `Cohere2VisionProcessor` (Cohere2Vision model)
- **cohere_asr** -- `CohereAsrProcessor` (CohereASR model)
- **colmodernvbert** -- `ColModernVBertProcessor` (ColModernVBert model)
- **colpali** -- `ColPaliProcessor` (ColPali model)
- **colqwen2** -- `ColQwen2Processor` (ColQwen2 model)
- **deepseek_vl** -- `DeepseekVLProcessor` (DeepseekVL model)
- **deepseek_vl_hybrid** -- `DeepseekVLHybridProcessor` (DeepseekVLHybrid model)
- **dia** -- `DiaProcessor` (Dia model)
- **edgetam** -- `Sam2Processor` (EdgeTAM model)
- **emu3** -- `Emu3Processor` (Emu3 model)
- **ernie4_5_vl_moe** -- `Ernie4_5_VLMoeProcessor` (Ernie4_5_VLMoE model)
- **evolla** -- `EvollaProcessor` (Evolla model)
- **flava** -- `FlavaProcessor` (FLAVA model)
- **florence2** -- `Florence2Processor` (Florence2 model)
- **fuyu** -- `FuyuProcessor` (Fuyu model)
- **gemma3** -- [Gemma3Processor](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3Processor) (Gemma3ForConditionalGeneration model)
- **gemma3n** -- [Gemma3nProcessor](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nProcessor) (Gemma3nForConditionalGeneration model)
- **gemma4** -- `Gemma4Processor` (Gemma4ForConditionalGeneration model)
- **git** -- `GitProcessor` (GIT model)
- **glm46v** -- `Glm46VProcessor` (Glm46V model)
- **glm4v** -- `Glm4vProcessor` (GLM4V model)
- **glm4v_moe** -- `Glm4vProcessor` (GLM4VMOE model)
- **glm_image** -- `Glm4vProcessor` (GlmImage model)
- **glmasr** -- `GlmAsrProcessor` (GLM-ASR model)
- **got_ocr2** -- `GotOcr2Processor` (GOT-OCR2 model)
- **granite_speech** -- `GraniteSpeechProcessor` (GraniteSpeech model)
- **grounding-dino** -- [GroundingDinoProcessor](/docs/transformers/v5.5.1/ko/model_doc/grounding-dino#transformers.GroundingDinoProcessor) (Grounding DINO model)
- **groupvit** -- [CLIPProcessor](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPProcessor) (GroupViT model)
- **higgs_audio_v2** -- `HiggsAudioV2Processor` (HiggsAudioV2 model)
- **hubert** -- `Wav2Vec2Processor` (Hubert model)
- **idefics** -- `IdeficsProcessor` (IDEFICS model)
- **idefics2** -- `Idefics2Processor` (Idefics2 model)
- **idefics3** -- `Idefics3Processor` (Idefics3 model)
- **instructblip** -- `InstructBlipProcessor` (InstructBLIP model)
- **instructblipvideo** -- `InstructBlipVideoProcessor` (InstructBlipVideo model)
- **internvl** -- `InternVLProcessor` (InternVL model)
- **janus** -- `JanusProcessor` (Janus model)
- **kosmos-2** -- `Kosmos2Processor` (KOSMOS-2 model)
- **kosmos-2.5** -- `Kosmos2_5Processor` (KOSMOS-2.5 model)
- **kyutai_speech_to_text** -- `KyutaiSpeechToTextProcessor` (KyutaiSpeechToText model)
- **lasr_ctc** -- `LasrProcessor` (Lasr model)
- **lasr_encoder** -- `LasrProcessor` (LasrEncoder model)
- **layoutlmv2** -- `LayoutLMv2Processor` (LayoutLMv2 model)
- **layoutlmv3** -- `LayoutLMv3Processor` (LayoutLMv3 model)
- **layoutxlm** -- `LayoutXLMProcessor` (LayoutXLM model)
- **lfm2_vl** -- `Lfm2VlProcessor` (Lfm2Vl model)
- **lighton_ocr** -- `LightOnOcrProcessor` (LightOnOcr model)
- **llama4** -- [Llama4Processor](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4Processor) (Llama4 model)
- **llava** -- `LlavaProcessor` (LLaVa model)
- **llava_next** -- `LlavaNextProcessor` (LLaVA-NeXT model)
- **llava_next_video** -- `LlavaNextVideoProcessor` (LLaVa-NeXT-Video model)
- **llava_onevision** -- `LlavaOnevisionProcessor` (LLaVA-Onevision model)
- **markuplm** -- `MarkupLMProcessor` (MarkupLM model)
- **metaclip_2** -- [CLIPProcessor](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPProcessor) (MetaCLIP 2 model)
- **mgp-str** -- `MgpstrProcessor` (MGP-STR model)
- **mistral3** -- `PixtralProcessor` (Mistral3 model)
- **mllama** -- `MllamaProcessor` (Mllama model)
- **mm-grounding-dino** -- [GroundingDinoProcessor](/docs/transformers/v5.5.1/ko/model_doc/grounding-dino#transformers.GroundingDinoProcessor) (MM Grounding DINO model)
- **modernvbert** -- `Idefics3Processor` (ModernVBert model)
- **moonshine** -- `Wav2Vec2Processor` (Moonshine model)
- **moonshine_streaming** -- `MoonshineStreamingProcessor` (MoonshineStreaming model)
- **musicflamingo** -- `MusicFlamingoProcessor` (MusicFlamingo model)
- **omdet-turbo** -- `OmDetTurboProcessor` (OmDet-Turbo model)
- **oneformer** -- `OneFormerProcessor` (OneFormer model)
- **ovis2** -- `Ovis2Processor` (Ovis2 model)
- **owlv2** -- `Owlv2Processor` (OWLv2 model)
- **owlvit** -- `OwlViTProcessor` (OWL-ViT model)
- **paddleocr_vl** -- `PaddleOCRVLProcessor` (PaddleOCRVL model)
- **paligemma** -- [PaliGemmaProcessor](/docs/transformers/v5.5.1/ko/model_doc/paligemma#transformers.PaliGemmaProcessor) (PaliGemma model)
- **perception_lm** -- `PerceptionLMProcessor` (PerceptionLM model)
- **phi4_multimodal** -- `Phi4MultimodalProcessor` (Phi4Multimodal model)
- **pi0** -- `PI0Processor` (PI0 model)
- **pix2struct** -- `Pix2StructProcessor` (Pix2Struct model)
- **pixtral** -- `PixtralProcessor` (Pixtral model)
- **pop2piano** -- `Pop2PianoProcessor` (Pop2Piano model)
- **pp_chart2table** -- `PPChart2TableProcessor` (PPChart2Table model)
- **qwen2_5_omni** -- `Qwen2_5OmniProcessor` (Qwen2_5Omni model)
- **qwen2_5_vl** -- `Qwen2_5_VLProcessor` (Qwen2_5_VL model)
- **qwen2_audio** -- `Qwen2AudioProcessor` (Qwen2Audio model)
- **qwen2_vl** -- [Qwen2VLProcessor](/docs/transformers/v5.5.1/ko/model_doc/qwen2_vl#transformers.Qwen2VLProcessor) (Qwen2VL model)
- **qwen3_5** -- `Qwen3VLProcessor` (Qwen3_5 model)
- **qwen3_5_moe** -- `Qwen3VLProcessor` (Qwen3_5Moe model)
- **qwen3_omni_moe** -- `Qwen3OmniMoeProcessor` (Qwen3OmniMoE model)
- **qwen3_vl** -- `Qwen3VLProcessor` (Qwen3VL model)
- **qwen3_vl_moe** -- `Qwen3VLProcessor` (Qwen3VLMoe model)
- **sam** -- `SamProcessor` (SAM model)
- **sam2** -- `Sam2Processor` (SAM2 model)
- **sam3** -- `Sam3Processor` (SAM3 model)
- **sam_hq** -- [SamHQProcessor](/docs/transformers/v5.5.1/ko/model_doc/sam_hq#transformers.SamHQProcessor) (SAM-HQ model)
- **seamless_m4t** -- `SeamlessM4TProcessor` (SeamlessM4T model)
- **sew** -- `Wav2Vec2Processor` (SEW model)
- **sew-d** -- `Wav2Vec2Processor` (SEW-D model)
- **shieldgemma2** -- `ShieldGemma2Processor` (Shieldgemma2 model)
- **siglip** -- [SiglipProcessor](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipProcessor) (SigLIP model)
- **siglip2** -- `Siglip2Processor` (SigLIP2 model)
- **smolvlm** -- [SmolVLMProcessor](/docs/transformers/v5.5.1/ko/model_doc/smolvlm#transformers.SmolVLMProcessor) (SmolVLM model)
- **speech_to_text** -- `Speech2TextProcessor` (Speech2Text model)
- **speecht5** -- `SpeechT5Processor` (SpeechT5 model)
- **t5gemma2** -- [Gemma3Processor](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3Processor) (T5Gemma2 model)
- **t5gemma2_encoder** -- [Gemma3Processor](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3Processor) (T5Gemma2Encoder model)
- **trocr** -- `TrOCRProcessor` (TrOCR model)
- **tvp** -- [TvpProcessor](/docs/transformers/v5.5.1/ko/model_doc/tvp#transformers.TvpProcessor) (TVP model)
- **udop** -- `UdopProcessor` (UDOP model)
- **unispeech** -- `Wav2Vec2Processor` (UniSpeech model)
- **unispeech-sat** -- `Wav2Vec2Processor` (UniSpeechSat model)
- **vibevoice_asr** -- `VibeVoiceAsrProcessor` (VibeVoiceAsr model)
- **video_llava** -- `VideoLlavaProcessor` (VideoLlava model)
- **vilt** -- `ViltProcessor` (ViLT model)
- **vipllava** -- `LlavaProcessor` (VipLlava model)
- **vision-text-dual-encoder** -- `VisionTextDualEncoderProcessor` (VisionTextDualEncoder model)
- **voxtral** -- `VoxtralProcessor` (Voxtral model)
- **voxtral_realtime** -- `VoxtralRealtimeProcessor` (VoxtralRealtime model)
- **wav2vec2** -- `Wav2Vec2Processor` (Wav2Vec2 model)
- **wav2vec2-bert** -- `Wav2Vec2Processor` (Wav2Vec2-BERT model)
- **wav2vec2-conformer** -- `Wav2Vec2Processor` (Wav2Vec2-Conformer model)
- **wavlm** -- `Wav2Vec2Processor` (WavLM model)
- **whisper** -- [WhisperProcessor](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperProcessor) (Whisper model)
- **xclip** -- [XCLIPProcessor](/docs/transformers/v5.5.1/ko/model_doc/xclip#transformers.XCLIPProcessor) (X-CLIP model)

Passing `token=True` is required when you want to use a private model.

Examples:

```python
>>> from transformers import AutoProcessor

>>> # Download processor from huggingface.co and cache.
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")

>>> # If processor files are in a directory (e.g. processor was saved using *save_pretrained('./test/saved_model/')*)
>>> # processor = AutoProcessor.from_pretrained("./test/saved_model/")
```

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : This can be either:

  - a string, the *model id* of a pretrained feature_extractor hosted inside a model repo on huggingface.co.
  - a path to a *directory* containing the processor files saved using the `save_pretrained()` method, e.g., `./my_model_directory/`.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force a (re-)download of the feature extractor files and override the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

token (`str` or *bool*, *optional*) : The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated when running `hf auth login` (stored in `~/.huggingface`).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id. Since we use a git-based system for storing models and other artifacts on huggingface.co, `revision` can be any identifier allowed by git.

return_unused_kwargs (`bool`, *optional*, defaults to `False`) : If `False`, then this function returns just the final feature extractor object. If `True`, then this function returns a `Tuple(feature_extractor, unused_kwargs)` where *unused_kwargs* is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of `kwargs` which has not been used to update `feature_extractor` and is otherwise ignored.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (`dict[str, Any]`, *optional*) : The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are *not* feature extractor attributes is controlled by the `return_unused_kwargs` keyword parameter.
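As a sketch of the `return_unused_kwargs` behavior described above, where `foo` is a hypothetical keyword that is not a processor attribute:

```python
>>> from transformers import AutoProcessor

>>> # Since `foo` is not a processor attribute, it should come back in
>>> # unused_kwargs rather than being applied to the processor.
>>> processor, unused_kwargs = AutoProcessor.from_pretrained(
...     "facebook/wav2vec2-base-960h", return_unused_kwargs=True, foo=False
... )
```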
#### register[[transformers.AutoProcessor.register]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/processing_auto.py#L452)

Register a new processor for this class.

**Parameters:**

config_class ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The configuration corresponding to the model to register.

processor_class ([ProcessorMixin](/docs/transformers/v5.5.1/ko/main_classes/processors#transformers.ProcessorMixin)) : The processor to register.
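Registration follows the same shape as the other auto classes; a minimal sketch, assuming `NewModelConfig` (a `PreTrainedConfig` subclass with `model_type = "new-model"`) and `NewModelProcessor` (a `ProcessorMixin` subclass) are defined elsewhere:

```python
from transformers import AutoConfig, AutoProcessor

# Both New* classes are illustrative placeholders, not library classes.
AutoConfig.register("new-model", NewModelConfig)
AutoProcessor.register(NewModelConfig, NewModelProcessor)
```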

## Generic model classes[[generic-model-classes]]

The following auto classes can be used to instantiate a base model class without a specific head.

### AutoModel[[transformers.AutoModel]]

#### transformers.AutoModel[[transformers.AutoModel]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L1969)

This is a generic model class that will be instantiated as one of the base model classes of the library when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).
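A configuration alone is enough to build a randomly initialized model: unlike `from_pretrained()`, `from_config()` loads no pretrained weights. A minimal sketch, with an illustrative checkpoint name:

```python
>>> from transformers import AutoConfig, AutoModel

>>> # Build the architecture from its configuration only; no weights are downloaded or loaded.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModel.from_config(config)
```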

#### from_config[[transformers.AutoModel.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `ASTConfig` configuration class: `ASTModel` (Audio Spectrogram Transformer model)
  - `AfmoeConfig` configuration class: `AfmoeModel` (AFMoE model)
  - `Aimv2Config` configuration class: `Aimv2Model` (AIMv2 model)
  - `Aimv2VisionConfig` configuration class: `Aimv2VisionModel` (Aimv2VisionModel model)
  - [AlbertConfig](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertConfig) configuration class: [AlbertModel](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertModel) (ALBERT model)
  - `AlignConfig` configuration class: `AlignModel` (ALIGN model)
  - [AltCLIPConfig](/docs/transformers/v5.5.1/ko/model_doc/altclip#transformers.AltCLIPConfig) configuration class: [AltCLIPModel](/docs/transformers/v5.5.1/ko/model_doc/altclip#transformers.AltCLIPModel) (AltCLIP model)
  - `ApertusConfig` configuration class: `ApertusModel` (Apertus model)
  - `ArceeConfig` configuration class: `ArceeModel` (Arcee model)
  - `AriaConfig` configuration class: `AriaModel` (Aria model)
  - `AriaTextConfig` configuration class: `AriaTextModel` (AriaText model)
  - `AudioFlamingo3Config` configuration class: `AudioFlamingo3ForConditionalGeneration` (AudioFlamingo3 model)
  - `AudioFlamingo3EncoderConfig` configuration class: `AudioFlamingo3Encoder` (AudioFlamingo3Encoder model)
  - [AutoformerConfig](/docs/transformers/v5.5.1/ko/model_doc/autoformer#transformers.AutoformerConfig) configuration class: [AutoformerModel](/docs/transformers/v5.5.1/ko/model_doc/autoformer#transformers.AutoformerModel) (Autoformer model)
  - `AyaVisionConfig` configuration class: `AyaVisionModel` (AyaVision model)
  - `BambaConfig` configuration class: `BambaModel` (Bamba model)
  - `BarkConfig` configuration class: `BarkModel` (Bark model)
  - [BartConfig](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartConfig) configuration class: [BartModel](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartModel) (BART model)
  - `BeitConfig` configuration class: `BeitModel` (BEiT model)
  - [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) configuration class: [BertModel](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertModel) (BERT model)
  - `BertGenerationConfig` configuration class: `BertGenerationEncoder` (Bert Generation model)
  - [BigBirdConfig](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdConfig) configuration class: [BigBirdModel](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdModel) (BigBird model)
  - `BigBirdPegasusConfig` configuration class: `BigBirdPegasusModel` (BigBird-Pegasus model)
  - [BioGptConfig](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptConfig) configuration class: [BioGptModel](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptModel) (BioGpt model)
  - `BitConfig` configuration class: `BitModel` (BiT model)
  - `BitNetConfig` configuration class: `BitNetModel` (BitNet model)
  - `BlenderbotConfig` configuration class: `BlenderbotModel` (Blenderbot model)
  - `BlenderbotSmallConfig` configuration class: `BlenderbotSmallModel` (BlenderbotSmall model)
  - [Blip2Config](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2Config) configuration class: [Blip2Model](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2Model) (BLIP-2 model)
  - [Blip2QFormerConfig](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2QFormerConfig) configuration class: [Blip2QFormerModel](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2QFormerModel) (BLIP-2 QFormer model)
  - [BlipConfig](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipConfig) configuration class: [BlipModel](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipModel) (BLIP model)
  - `BloomConfig` configuration class: `BloomModel` (BLOOM model)
  - `BltConfig` configuration class: `BltModel` (Blt model)
  - `BridgeTowerConfig` configuration class: `BridgeTowerModel` (BridgeTower model)
  - `BrosConfig` configuration class: `BrosModel` (BROS model)
  - [CLIPConfig](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPConfig) configuration class: [CLIPModel](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPModel) (CLIP model)
  - [CLIPSegConfig](/docs/transformers/v5.5.1/ko/model_doc/clipseg#transformers.CLIPSegConfig) configuration class: [CLIPSegModel](/docs/transformers/v5.5.1/ko/model_doc/clipseg#transformers.CLIPSegModel) (CLIPSeg model)
  - [CLIPTextConfig](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTextConfig) configuration class: [CLIPTextModel](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTextModel) (CLIPTextModel model)
  - [CLIPVisionConfig](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPVisionConfig) configuration class: [CLIPVisionModel](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPVisionModel) (CLIPVisionModel model)
  - `CTRLConfig` configuration class: `CTRLModel` (CTRL model)
  - `CamembertConfig` configuration class: `CamembertModel` (CamemBERT model)
  - `CanineConfig` configuration class: `CanineModel` (CANINE model)
  - [ChameleonConfig](/docs/transformers/v5.5.1/ko/model_doc/chameleon#transformers.ChameleonConfig) configuration class: [ChameleonModel](/docs/transformers/v5.5.1/ko/model_doc/chameleon#transformers.ChameleonModel) (Chameleon model)
  - `ChineseCLIPConfig` configuration class: `ChineseCLIPModel` (Chinese-CLIP model)
  - `ChineseCLIPVisionConfig` configuration class: `ChineseCLIPVisionModel` (ChineseCLIPVisionModel model)
  - `ClapConfig` configuration class: `ClapModel` (CLAP model)
  - `ClvpConfig` configuration class: `ClvpModelForConditionalGeneration` (CLVP model)
  - [CodeGenConfig](/docs/transformers/v5.5.1/ko/model_doc/codegen#transformers.CodeGenConfig) configuration class: [CodeGenModel](/docs/transformers/v5.5.1/ko/model_doc/codegen#transformers.CodeGenModel) (CodeGen model)
  - `Cohere2Config` configuration class: `Cohere2Model` (Cohere2 model)
  - `Cohere2VisionConfig` configuration class: `Cohere2VisionModel` (Cohere2Vision model)
  - `CohereAsrConfig` configuration class: `CohereAsrModel` (CohereASR model)
  - [CohereConfig](/docs/transformers/v5.5.1/ko/model_doc/cohere#transformers.CohereConfig) configuration class: [CohereModel](/docs/transformers/v5.5.1/ko/model_doc/cohere#transformers.CohereModel) (Cohere model)
  - `ConditionalDetrConfig` configuration class: `ConditionalDetrModel` (Conditional DETR model)
  - [ConvBertConfig](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertConfig) configuration class: [ConvBertModel](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertModel) (ConvBERT model)
  - `ConvNextConfig` configuration class: `ConvNextModel` (ConvNeXT model)
  - `ConvNextV2Config` configuration class: `ConvNextV2Model` (ConvNeXTV2 model)
  - `CpmAntConfig` configuration class: `CpmAntModel` (CPM-Ant model)
  - `CsmConfig` configuration class: `CsmForConditionalGeneration` (CSM model)
  - `CvtConfig` configuration class: `CvtModel` (CvT model)
  - `CwmConfig` configuration class: `CwmModel` (Code World Model (CWM) model)
  - `DFineConfig` configuration class: `DFineModel` (D-FINE model)
  - `DINOv3ConvNextConfig` configuration class: `DINOv3ConvNextModel` (DINOv3 ConvNext model)
  - `DINOv3ViTConfig` configuration class: `DINOv3ViTModel` (DINOv3 ViT model)
  - `DPRConfig` configuration class: `DPRQuestionEncoder` (DPR model)
  - `DPTConfig` configuration class: `DPTModel` (DPT model)
  - `DabDetrConfig` configuration class: `DabDetrModel` (DAB-DETR model)
  - `DacConfig` configuration class: `DacModel` (DAC model)
  - `Data2VecAudioConfig` configuration class: `Data2VecAudioModel` (Data2VecAudio model)
  - `Data2VecTextConfig` configuration class: `Data2VecTextModel` (Data2VecText model)
  - `Data2VecVisionConfig` configuration class: `Data2VecVisionModel` (Data2VecVision model)
  - [DbrxConfig](/docs/transformers/v5.5.1/ko/model_doc/dbrx#transformers.DbrxConfig) configuration class: [DbrxModel](/docs/transformers/v5.5.1/ko/model_doc/dbrx#transformers.DbrxModel) (DBRX model)
  - [DebertaConfig](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaConfig) configuration class: [DebertaModel](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaModel) (DeBERTa model)
  - [DebertaV2Config](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Config) configuration class: [DebertaV2Model](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Model) (DeBERTa-v2 model)
  - `DecisionTransformerConfig` configuration class: `DecisionTransformerModel` (Decision Transformer model)
  - `DeepseekV2Config` configuration class: `DeepseekV2Model` (DeepSeek-V2 model)
  - [DeepseekV3Config](/docs/transformers/v5.5.1/ko/model_doc/deepseek_v3#transformers.DeepseekV3Config) configuration class: [DeepseekV3Model](/docs/transformers/v5.5.1/ko/model_doc/deepseek_v3#transformers.DeepseekV3Model) (DeepSeek-V3 model)
  - `DeepseekVLConfig` configuration class: `DeepseekVLModel` (DeepseekVL model)
  - `DeepseekVLHybridConfig` configuration class: `DeepseekVLHybridModel` (DeepseekVLHybrid model)
  - `DeformableDetrConfig` configuration class: `DeformableDetrModel` (Deformable DETR model)
  - `DeiTConfig` configuration class: `DeiTModel` (DeiT model)
  - `DepthProConfig` configuration class: `DepthProModel` (DepthPro model)
  - `DetrConfig` configuration class: `DetrModel` (DETR model)
  - `DiaConfig` configuration class: `DiaModel` (Dia model)
  - `DiffLlamaConfig` configuration class: `DiffLlamaModel` (DiffLlama model)
  - `DinatConfig` configuration class: `DinatModel` (DiNAT model)
  - `Dinov2Config` configuration class: `Dinov2Model` (DINOv2 model)
  - `Dinov2WithRegistersConfig` configuration class: `Dinov2WithRegistersModel` (DINOv2 with Registers model)
  - `DistilBertConfig` configuration class: `DistilBertModel` (DistilBERT model)
  - `DogeConfig` configuration class: `DogeModel` (Doge model)
  - `DonutSwinConfig` configuration class: `DonutSwinModel` (DonutSwin model)
  - `Dots1Config` configuration class: `Dots1Model` (dots1 model)
  - `EdgeTamConfig` configuration class: `EdgeTamModel` (EdgeTAM model)
  - `EdgeTamVideoConfig` configuration class: `EdgeTamVideoModel` (EdgeTamVideo model)
  - `EdgeTamVisionConfig` configuration class: `EdgeTamVisionModel` (EdgeTamVisionModel model)
  - `EfficientLoFTRConfig` configuration class: `EfficientLoFTRModel` (EfficientLoFTR model)
  - `EfficientNetConfig` configuration class: `EfficientNetModel` (EfficientNet model)
  - [ElectraConfig](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraConfig) configuration class: [ElectraModel](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraModel) (ELECTRA model)
  - `Emu3Config` configuration class: `Emu3Model` (Emu3 model)
  - `EncodecConfig` configuration class: `EncodecModel` (EnCodec model)
  - `Ernie4_5Config` configuration class: `Ernie4_5Model` (Ernie4_5 model)
  - `Ernie4_5_MoeConfig` configuration class: `Ernie4_5_MoeModel` (Ernie4_5_MoE model)
  - `Ernie4_5_VLMoeConfig` configuration class: `Ernie4_5_VLMoeModel` (Ernie4_5_VLMoE model)
  - `ErnieConfig` configuration class: `ErnieModel` (ERNIE model)
  - [EsmConfig](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmConfig) configuration class: [EsmModel](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmModel) (ESM model)
  - `EuroBertConfig` configuration class: `EuroBertModel` (EuroBERT model)
  - `EvollaConfig` configuration class: `EvollaModel` (Evolla model)
  - [Exaone4Config](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4Config) configuration class: [Exaone4Model](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4Model) (EXAONE-4.0 model)
  - [ExaoneMoeConfig](/docs/transformers/v5.5.1/ko/model_doc/exaone_moe#transformers.ExaoneMoeConfig) configuration class: [ExaoneMoeModel](/docs/transformers/v5.5.1/ko/model_doc/exaone_moe#transformers.ExaoneMoeModel) (EXAONE-MoE model)
  - `FNetConfig` configuration class: `FNetModel` (FNet model)
  - `FSMTConfig` configuration class: `FSMTModel` (FairSeq Machine-Translation model)
  - `FalconConfig` configuration class: `FalconModel` (Falcon model)
  - `FalconH1Config` configuration class: `FalconH1Model` (FalconH1 model)
  - `FalconMambaConfig` configuration class: `FalconMambaModel` (FalconMamba model)
  - `FastSpeech2ConformerConfig` configuration class: `FastSpeech2ConformerModel` (FastSpeech2Conformer model)
  - `FastSpeech2ConformerWithHifiGanConfig` configuration class: `FastSpeech2ConformerWithHifiGan` (FastSpeech2ConformerWithHifiGan model)
  - `FastVlmConfig` configuration class: `FastVlmModel` (FastVlm model)
  - `FlaubertConfig` configuration class: `FlaubertModel` (FlauBERT model)
  - `FlavaConfig` configuration class: `FlavaModel` (FLAVA model)
  - `FlexOlmoConfig` configuration class: `FlexOlmoModel` (FlexOlmo model)
  - `Florence2Config` configuration class: `Florence2Model` (Florence2 model)
  - `FocalNetConfig` configuration class: `FocalNetModel` (FocalNet model)
  - `FunnelConfig` configuration class: `FunnelModel` or `FunnelBaseModel` (Funnel Transformer model)
  - `FuyuConfig` configuration class: `FuyuModel` (Fuyu model)
  - `GLPNConfig` configuration class: `GLPNModel` (GLPN model)
  - [GPT2Config](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Config) configuration class: [GPT2Model](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Model) (OpenAI GPT-2 model)
  - `GPTBigCodeConfig` configuration class: `GPTBigCodeModel` (GPTBigCode model)
  - `GPTJConfig` configuration class: `GPTJModel` (GPT-J model)
  - `GPTNeoConfig` configuration class: `GPTNeoModel` (GPT Neo model)
  - `GPTNeoXConfig` configuration class: `GPTNeoXModel` (GPT NeoX model)
  - [GPTNeoXJapaneseConfig](/docs/transformers/v5.5.1/ko/model_doc/gpt_neox_japanese#transformers.GPTNeoXJapaneseConfig) configuration class: [GPTNeoXJapaneseModel](/docs/transformers/v5.5.1/ko/model_doc/gpt_neox_japanese#transformers.GPTNeoXJapaneseModel) (GPT NeoX Japanese model)
  - [Gemma2Config](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2Config) configuration class: [Gemma2Model](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2Model) (Gemma2 model)
  - [Gemma3Config](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3Config) configuration class: [Gemma3Model](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3Model) (Gemma3ForConditionalGeneration model)
  - [Gemma3TextConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3TextConfig) configuration class: [Gemma3TextModel](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3TextModel) (Gemma3ForCausalLM model)
  - [Gemma3nAudioConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nAudioConfig) configuration class: `Gemma3nAudioEncoder` (Gemma3nAudioEncoder model)
  - [Gemma3nConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nConfig) configuration class: [Gemma3nModel](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nModel) (Gemma3nForConditionalGeneration model)
  - [Gemma3nTextConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nTextConfig) configuration class: [Gemma3nTextModel](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nTextModel) (Gemma3nForCausalLM model)
  - [Gemma3nVisionConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nVisionConfig) configuration class: `TimmWrapperModel` (TimmWrapperModel model)
  - `Gemma4AudioConfig` configuration class: `Gemma4AudioModel` (Gemma4AudioModel model)
  - `Gemma4Config` configuration class: `Gemma4Model` (Gemma4ForConditionalGeneration model)
  - `Gemma4TextConfig` configuration class: `Gemma4TextModel` (Gemma4ForCausalLM model)
  - `Gemma4VisionConfig` configuration class: `Gemma4VisionModel` (Gemma4VisionModel model)
  - [GemmaConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaConfig) configuration class: [GemmaModel](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaModel) (Gemma model)
  - `GitConfig` configuration class: `GitModel` (GIT model)
  - `Glm46VConfig` configuration class: `Glm46VModel` (Glm46V model)
  - `Glm4Config` configuration class: `Glm4Model` (GLM4 model)
  - `Glm4MoeConfig` configuration class: `Glm4MoeModel` (Glm4MoE model)
  - `Glm4MoeLiteConfig` configuration class: `Glm4MoeLiteModel` (Glm4MoELite model)
  - `Glm4vConfig` configuration class: `Glm4vModel` (GLM4V model)
  - `Glm4vMoeConfig` configuration class: `Glm4vMoeModel` (GLM4VMOE model)
  - `Glm4vMoeTextConfig` configuration class: `Glm4vMoeTextModel` (GLM4VMOE model)
  - `Glm4vMoeVisionConfig` configuration class: `Glm4vMoeVisionModel` (Glm4vMoeVisionModel model)
  - `Glm4vTextConfig` configuration class: `Glm4vTextModel` (GLM4V model)
  - `Glm4vVisionConfig` configuration class: `Glm4vVisionModel` (Glm4vVisionModel model)
  - `GlmAsrConfig` configuration class: `GlmAsrForConditionalGeneration` (GLM-ASR model)
  - `GlmAsrEncoderConfig` configuration class: `GlmAsrEncoder` (GLM-ASR Encoder model)
  - `GlmConfig` configuration class: `GlmModel` (GLM model)
  - `GlmImageConfig` configuration class: `GlmImageModel` (GlmImage model)
  - `GlmImageTextConfig` configuration class: `GlmImageTextModel` (GlmImageText model)
  - `GlmImageVQVAEConfig` configuration class: `GlmImageVQVAE` (GlmImageVQVAE model)
  - `GlmImageVisionConfig` configuration class: `GlmImageVisionModel` (GlmImageVisionModel model)
  - `GlmMoeDsaConfig` configuration class: `GlmMoeDsaModel` (GlmMoeDsa model)
  - `GlmOcrConfig` configuration class: `GlmOcrModel` (Glmocr model)
  - `GlmOcrTextConfig` configuration class: `GlmOcrTextModel` (GlmOcrText model)
  - `GlmOcrVisionConfig` configuration class: `GlmOcrVisionModel` (GlmOcrVisionModel model)
  - `GotOcr2Config` configuration class: `GotOcr2Model` (GOT-OCR2 model)
  - `GptOssConfig` configuration class: `GptOssModel` (GptOss model)
  - `GraniteConfig` configuration class: `GraniteModel` (Granite model)
  - `GraniteMoeConfig` configuration class: `GraniteMoeModel` (GraniteMoeMoe model)
  - `GraniteMoeHybridConfig` configuration class: `GraniteMoeHybridModel` (GraniteMoeHybrid model)
  - `GraniteMoeSharedConfig` configuration class: `GraniteMoeSharedModel` (GraniteMoeSharedMoe model)
  - [GroundingDinoConfig](/docs/transformers/v5.5.1/ko/model_doc/grounding-dino#transformers.GroundingDinoConfig) configuration class: [GroundingDinoModel](/docs/transformers/v5.5.1/ko/model_doc/grounding-dino#transformers.GroundingDinoModel) (Grounding DINO model)
  - `GroupViTConfig` configuration class: `GroupViTModel` (GroupViT model)
  - `HGNetV2Config` configuration class: `HGNetV2Backbone` (HGNet-V2 model)
  - `HeliumConfig` configuration class: `HeliumModel` (Helium model)
  - `HieraConfig` configuration class: `HieraModel` (Hiera model)
  - `HiggsAudioV2Config` configuration class: `HiggsAudioV2ForConditionalGeneration` (HiggsAudioV2 model)
  - `HiggsAudioV2TokenizerConfig` configuration class: `HiggsAudioV2TokenizerModel` (HiggsAudioV2Tokenizer model)
  - `HubertConfig` configuration class: `HubertModel` (Hubert model)
  - `HunYuanDenseV1Config` configuration class: `HunYuanDenseV1Model` (HunYuanDenseV1 model)
  - `HunYuanMoEV1Config` configuration class: `HunYuanMoEV1Model` (HunYuanMoeV1 model)
  - `IBertConfig` configuration class: `IBertModel` (I-BERT model)
  - `IJepaConfig` configuration class: `IJepaModel` (I-JEPA model)
  - `Idefics2Config` configuration class: `Idefics2Model` (Idefics2 model)
  - `Idefics3Config` configuration class: `Idefics3Model` (Idefics3 model)
  - `Idefics3VisionConfig` configuration class: `Idefics3VisionTransformer` (Idefics3VisionTransformer model)
  - `IdeficsConfig` configuration class: `IdeficsModel` (IDEFICS model)
  - `ImageGPTConfig` configuration class: `ImageGPTModel` (ImageGPT model)
  - [InformerConfig](/docs/transformers/v5.5.1/ko/model_doc/informer#transformers.InformerConfig) configuration class: [InformerModel](/docs/transformers/v5.5.1/ko/model_doc/informer#transformers.InformerModel) (Informer model)
  - `InstructBlipConfig` configuration class: `InstructBlipModel` (InstructBLIP model)
  - `InstructBlipVideoConfig` configuration class: `InstructBlipVideoModel` (InstructBlipVideo model)
  - `InternVLConfig` configuration class: `InternVLModel` (InternVL model)
  - `InternVLVisionConfig` configuration class: `InternVLVisionModel` (InternVLVision model)
  - `Jais2Config` configuration class: `Jais2Model` (Jais2 model)
  - [JambaConfig](/docs/transformers/v5.5.1/ko/model_doc/jamba#transformers.JambaConfig) configuration class: [JambaModel](/docs/transformers/v5.5.1/ko/model_doc/jamba#transformers.JambaModel) (Jamba model)
  - `JanusConfig` configuration class: `JanusModel` (Janus model)
  - `JetMoeConfig` configuration class: `JetMoeModel` (JetMoe model)
  - `JinaEmbeddingsV3Config` configuration class: `JinaEmbeddingsV3Model` (JinaEmbeddingsV3 model)
  - `Kosmos2Config` configuration class: `Kosmos2Model` (KOSMOS-2 model)
  - `Kosmos2_5Config` configuration class: `Kosmos2_5Model` (KOSMOS-2.5 model)
  - `KyutaiSpeechToTextConfig` configuration class: `KyutaiSpeechToTextModel` (KyutaiSpeechToText model)
  - `LEDConfig` configuration class: `LEDModel` (LED model)
  - `LasrCTCConfig` configuration class: `LasrForCTC` (Lasr model)
  - `LasrEncoderConfig` configuration class: `LasrEncoder` (LasrEncoder model)
  - `LayoutLMConfig` configuration class: `LayoutLMModel` (LayoutLM model)
  - `LayoutLMv2Config` configuration class: `LayoutLMv2Model` (LayoutLMv2 model)
  - `LayoutLMv3Config` configuration class: `LayoutLMv3Model` (LayoutLMv3 model)
  - `LevitConfig` configuration class: `LevitModel` (LeViT model)
  - [Lfm2Config](/docs/transformers/v5.5.1/ko/model_doc/lfm2#transformers.Lfm2Config) configuration class: [Lfm2Model](/docs/transformers/v5.5.1/ko/model_doc/lfm2#transformers.Lfm2Model) (Lfm2 model)
  - `Lfm2MoeConfig` configuration class: `Lfm2MoeModel` (Lfm2Moe model)
  - `Lfm2VlConfig` configuration class: `Lfm2VlModel` (Lfm2Vl model)
  - `LightGlueConfig` configuration class: `LightGlueForKeypointMatching` (LightGlue model)
  - `LightOnOcrConfig` configuration class: `LightOnOcrModel` (LightOnOcr model)
  - `LiltConfig` configuration class: `LiltModel` (LiLT model)
  - [Llama4Config](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4Config) configuration class: [Llama4ForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4ForConditionalGeneration) (Llama4 model)
  - [Llama4TextConfig](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4TextConfig) configuration class: [Llama4TextModel](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4TextModel) (Llama4ForCausalLM model)
  - [LlamaConfig](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaConfig) configuration class: [LlamaModel](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaModel) (LLaMA model)
  - `LlavaConfig` configuration class: `LlavaModel` (LLaVa model)
  - `LlavaNextConfig` configuration class: `LlavaNextModel` (LLaVA-NeXT model)
  - `LlavaNextVideoConfig` configuration class: `LlavaNextVideoModel` (LLaVa-NeXT-Video model)
  - `LlavaOnevisionConfig` configuration class: `LlavaOnevisionModel` (LLaVA-Onevision model)
  - `LongT5Config` configuration class: `LongT5Model` (LongT5 model)
  - `LongcatFlashConfig` configuration class: `LongcatFlashModel` (LongCatFlash model)
  - `LongformerConfig` configuration class: `LongformerModel` (Longformer model)
  - `LukeConfig` configuration class: `LukeModel` (LUKE model)
  - `LwDetrConfig` configuration class: `LwDetrModel` (LwDetr model)
  - `LxmertConfig` configuration class: `LxmertModel` (LXMERT model)
  - `M2M100Config` configuration class: `M2M100Model` (M2M100 model)
  - `MBartConfig` configuration class: `MBartModel` (mBART model)
  - `MLCDVisionConfig` configuration class: `MLCDVisionModel` (MLCD model)
  - `MMGroundingDinoConfig` configuration class: `MMGroundingDinoModel` (MM Grounding DINO model)
  - `MPNetConfig` configuration class: `MPNetModel` (MPNet model)
  - `MT5Config` configuration class: `MT5Model` (MT5 model)
  - [Mamba2Config](/docs/transformers/v5.5.1/ko/model_doc/mamba2#transformers.Mamba2Config) configuration class: [Mamba2Model](/docs/transformers/v5.5.1/ko/model_doc/mamba2#transformers.Mamba2Model) (mamba2 model)
  - [MambaConfig](/docs/transformers/v5.5.1/ko/model_doc/mamba#transformers.MambaConfig) configuration class: [MambaModel](/docs/transformers/v5.5.1/ko/model_doc/mamba#transformers.MambaModel) (Mamba model)
  - [MarianConfig](/docs/transformers/v5.5.1/ko/model_doc/marian#transformers.MarianConfig) configuration class: [MarianModel](/docs/transformers/v5.5.1/ko/model_doc/marian#transformers.MarianModel) (Marian model)
  - `MarkupLMConfig` configuration class: `MarkupLMModel` (MarkupLM model)
  - `Mask2FormerConfig` configuration class: `Mask2FormerModel` (Mask2Former model)
  - `MaskFormerConfig` configuration class: `MaskFormerModel` (MaskFormer model)
  - `MaskFormerSwinConfig` configuration class: `MaskFormerSwinModel` (MaskFormerSwin model)
  - `MegatronBertConfig` configuration class: `MegatronBertModel` (Megatron-BERT model)
  - `MetaClip2Config` configuration class: `MetaClip2Model` (MetaCLIP 2 model)
  - `MgpstrConfig` configuration class: `MgpstrForSceneTextRecognition` (MGP-STR model)
  - `MimiConfig` configuration class: `MimiModel` (Mimi model)
  - `MiniMaxConfig` configuration class: `MiniMaxModel` (MiniMax model)
  - `MiniMaxM2Config` configuration class: `MiniMaxM2Model` (MiniMax-M2 model)
  - `Ministral3Config` configuration class: `Ministral3Model` (Ministral3 model)
  - `MinistralConfig` configuration class: `MinistralModel` (Ministral model)
  - `Mistral3Config` configuration class: `Mistral3Model` (Mistral3 model)
  - `Mistral4Config` configuration class: `Mistral4Model` (Mistral4 model)
  - [MistralConfig](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralConfig) configuration class: [MistralModel](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralModel) (Mistral model)
  - `MixtralConfig` configuration class: `MixtralModel` (Mixtral model)
  - `MllamaConfig` configuration class: `MllamaModel` (Mllama model)
  - `MobileBertConfig` configuration class: `MobileBertModel` (MobileBERT model)
  - `MobileNetV1Config` configuration class: `MobileNetV1Model` (MobileNetV1 model)
  - `MobileNetV2Config` configuration class: `MobileNetV2Model` (MobileNetV2 model)
  - `MobileViTConfig` configuration class: `MobileViTModel` (MobileViT model)
  - `MobileViTV2Config` configuration class: `MobileViTV2Model` (MobileViTV2 model)
  - `ModernBertConfig` configuration class: `ModernBertModel` (ModernBERT model)
  - `ModernBertDecoderConfig` configuration class: `ModernBertDecoderModel` (ModernBertDecoder model)
  - `ModernVBertConfig` configuration class: `ModernVBertModel` (ModernVBert model)
  - `MoonshineConfig` configuration class: `MoonshineModel` (Moonshine model)
  - `MoonshineStreamingConfig` configuration class: `MoonshineStreamingModel` (MoonshineStreaming model)
  - `MoshiConfig` configuration class: `MoshiModel` (Moshi model)
  - `MptConfig` configuration class: `MptModel` (MPT model)
  - `MraConfig` configuration class: `MraModel` (MRA model)
  - `MusicFlamingoConfig` configuration class: `MusicFlamingoForConditionalGeneration` (MusicFlamingo model)
  - `MusicgenConfig` configuration class: `MusicgenModel` (MusicGen model)
  - `MusicgenMelodyConfig` configuration class: `MusicgenMelodyModel` (MusicGen Melody model)
  - `MvpConfig` configuration class: `MvpModel` (MVP model)
  - `NanoChatConfig` configuration class: `NanoChatModel` (NanoChat model)
  - `NemotronConfig` configuration class: `NemotronModel` (Nemotron model)
  - `NemotronHConfig` configuration class: `NemotronHModel` (NemotronH model)
  - `NllbMoeConfig` configuration class: `NllbMoeModel` (NLLB-MOE model)
  - `NomicBertConfig` configuration class: `NomicBertModel` (NomicBERT model)
  - `NystromformerConfig` configuration class: `NystromformerModel` (Nyströmformer model)
  - `OPTConfig` configuration class: `OPTModel` (OPT model)
  - `Olmo2Config` configuration class: `Olmo2Model` (OLMo2 model)
  - `Olmo3Config` configuration class: `Olmo3Model` (Olmo3 model)
  - `OlmoConfig` configuration class: `OlmoModel` (OLMo model)
  - `OlmoHybridConfig` configuration class: `OlmoHybridModel` (OlmoHybrid model)
  - `OlmoeConfig` configuration class: `OlmoeModel` (OLMoE model)
  - `OmDetTurboConfig` configuration class: `OmDetTurboForObjectDetection` (OmDet-Turbo model)
  - `OneFormerConfig` configuration class: `OneFormerModel` (OneFormer model)
  - [OpenAIGPTConfig](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTConfig) configuration class: [OpenAIGPTModel](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTModel) (OpenAI GPT model)
  - `Ovis2Config` configuration class: `Ovis2Model` (Ovis2 model)
  - `OwlViTConfig` configuration class: `OwlViTModel` (OWL-ViT model)
  - `Owlv2Config` configuration class: `Owlv2Model` (OWLv2 model)
  - `PI0Config` configuration class: `PI0Model` (PI0 model)
  - `PLBartConfig` configuration class: `PLBartModel` (PLBart model)
  - `PPDocLayoutV3Config` configuration class: `PPDocLayoutV3Model` (PPDocLayoutV3 model)
  - `PPOCRV5MobileRecConfig` configuration class: `PPOCRV5MobileRecModel` (PPOCRV5MobileRec model)
  - `PPOCRV5ServerRecConfig` configuration class: `PPOCRV5ServerRecModel` (PPOCRV5ServerRec model)
  - [PaliGemmaConfig](/docs/transformers/v5.5.1/ko/model_doc/paligemma#transformers.PaliGemmaConfig) configuration class: `PaliGemmaModel` (PaliGemma model)
  - `ParakeetCTCConfig` configuration class: `ParakeetForCTC` (Parakeet model)
  - `ParakeetEncoderConfig` configuration class: `ParakeetEncoder` (ParakeetEncoder model)
  - [PatchTSMixerConfig](/docs/transformers/v5.5.1/ko/model_doc/patchtsmixer#transformers.PatchTSMixerConfig) configuration class: [PatchTSMixerModel](/docs/transformers/v5.5.1/ko/model_doc/patchtsmixer#transformers.PatchTSMixerModel) (PatchTSMixer model)
  - [PatchTSTConfig](/docs/transformers/v5.5.1/ko/model_doc/patchtst#transformers.PatchTSTConfig) configuration class: [PatchTSTModel](/docs/transformers/v5.5.1/ko/model_doc/patchtst#transformers.PatchTSTModel) (PatchTST model)
  - `PeAudioConfig` configuration class: `PeAudioModel` (PeAudio model)
  - `PeAudioEncoderConfig` configuration class: `PeAudioEncoder` (PeAudioEncoder model)
  - `PeAudioVideoConfig` configuration class: `PeAudioVideoModel` (PeAudioVideo model)
  - `PeAudioVideoEncoderConfig` configuration class: `PeAudioVideoEncoder` (PeAudioVideoEncoder model)
  - `PeVideoConfig` configuration class: `PeVideoModel` (PeVideo model)
  - `PeVideoEncoderConfig` configuration class: `PeVideoEncoder` (PeVideoEncoder model)
  - `PegasusConfig` configuration class: `PegasusModel` (Pegasus model)
  - `PegasusXConfig` configuration class: `PegasusXModel` (PEGASUS-X model)
  - `PerceiverConfig` configuration class: `PerceiverModel` (Perceiver model)
  - `PerceptionLMConfig` configuration class: `PerceptionLMModel` (PerceptionLM model)
  - `PersimmonConfig` configuration class: `PersimmonModel` (Persimmon model)
  - `Phi3Config` configuration class: `Phi3Model` (Phi3 model)
  - `Phi4MultimodalConfig` configuration class: `Phi4MultimodalModel` (Phi4Multimodal model)
  - `PhiConfig` configuration class: `PhiModel` (Phi model)
  - `PhimoeConfig` configuration class: `PhimoeModel` (Phimoe model)
  - `PixioConfig` configuration class: `PixioModel` (Pixio model)
  - `PixtralVisionConfig` configuration class: `PixtralVisionModel` (Pixtral model)
  - `PoolFormerConfig` configuration class: `PoolFormerModel` (PoolFormer model)
  - `ProphetNetConfig` configuration class: `ProphetNetModel` (ProphetNet model)
  - `PvtConfig` configuration class: `PvtModel` (PVT model)
  - `PvtV2Config` configuration class: `PvtV2Model` (PVTv2 model)
  - `Qwen2AudioEncoderConfig` configuration class: `Qwen2AudioEncoder` (Qwen2AudioEncoder model)
  - `Qwen2Config` configuration class: `Qwen2Model` (Qwen2 model)
  - `Qwen2MoeConfig` configuration class: `Qwen2MoeModel` (Qwen2MoE model)
  - [Qwen2VLConfig](/docs/transformers/v5.5.1/ko/model_doc/qwen2_vl#transformers.Qwen2VLConfig) configuration class: [Qwen2VLModel](/docs/transformers/v5.5.1/ko/model_doc/qwen2_vl#transformers.Qwen2VLModel) (Qwen2VL model)
  - `Qwen2VLTextConfig` configuration class: `Qwen2VLTextModel` (Qwen2VL model)
  - `Qwen2_5_VLConfig` configuration class: `Qwen2_5_VLModel` (Qwen2_5_VL model)
  - `Qwen2_5_VLTextConfig` configuration class: `Qwen2_5_VLTextModel` (Qwen2_5_VL model)
  - `Qwen3Config` configuration class: `Qwen3Model` (Qwen3 model)
  - `Qwen3MoeConfig` configuration class: `Qwen3MoeModel` (Qwen3MoE model)
  - `Qwen3NextConfig` configuration class: `Qwen3NextModel` (Qwen3Next model)
  - `Qwen3VLConfig` configuration class: `Qwen3VLModel` (Qwen3VL model)
  - `Qwen3VLMoeConfig` configuration class: `Qwen3VLMoeModel` (Qwen3VLMoe model)
  - `Qwen3VLMoeTextConfig` configuration class: `Qwen3VLMoeTextModel` (Qwen3VLMoe model)
  - `Qwen3VLTextConfig` configuration class: `Qwen3VLTextModel` (Qwen3VL model)
  - `Qwen3_5Config` configuration class: `Qwen3_5Model` (Qwen3_5 model)
  - `Qwen3_5MoeConfig` configuration class: `Qwen3_5MoeModel` (Qwen3_5Moe model)
  - `Qwen3_5MoeTextConfig` configuration class: `Qwen3_5MoeTextModel` (Qwen3_5MoeText model)
  - `Qwen3_5TextConfig` configuration class: `Qwen3_5TextModel` (Qwen3_5Text model)
  - `RTDetrConfig` configuration class: `RTDetrModel` (RT-DETR model)
  - `RTDetrV2Config` configuration class: `RTDetrV2Model` (RT-DETRv2 model)
  - `RecurrentGemmaConfig` configuration class: `RecurrentGemmaModel` (RecurrentGemma model)
  - `ReformerConfig` configuration class: `ReformerModel` (Reformer model)
  - `RegNetConfig` configuration class: `RegNetModel` (RegNet model)
  - `RemBertConfig` configuration class: `RemBertModel` (RemBERT model)
  - `ResNetConfig` configuration class: `ResNetModel` (ResNet model)
  - `RoCBertConfig` configuration class: `RoCBertModel` (RoCBert model)
  - `RoFormerConfig` configuration class: `RoFormerModel` (RoFormer model)
  - [RobertaConfig](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaConfig) configuration class: [RobertaModel](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaModel) (RoBERTa model)
  - `RobertaPreLayerNormConfig` configuration class: `RobertaPreLayerNormModel` (RoBERTa-PreLayerNorm model)
  - `RwkvConfig` configuration class: `RwkvModel` (RWKV model)
  - `SEWConfig` configuration class: `SEWModel` (SEW model)
  - `SEWDConfig` configuration class: `SEWDModel` (SEW-D model)
  - `Sam2Config` configuration class: `Sam2Model` (SAM2 model)
  - `Sam2HieraDetConfig` configuration class: `Sam2HieraDetModel` (Sam2HieraDetModel model)
  - `Sam2VideoConfig` configuration class: `Sam2VideoModel` (Sam2VideoModel model)
  - `Sam2VisionConfig` configuration class: `Sam2VisionModel` (Sam2VisionModel model)
  - `Sam3Config` configuration class: `Sam3Model` (SAM3 model)
  - `Sam3TrackerConfig` configuration class: `Sam3TrackerModel` (Sam3Tracker model)
  - `Sam3TrackerVideoConfig` configuration class: `Sam3TrackerVideoModel` (Sam3TrackerVideo model)
  - `Sam3ViTConfig` configuration class: `Sam3ViTModel` (Sam3ViTModel model)
  - `Sam3VideoConfig` configuration class: `Sam3VideoModel` (Sam3VideoModel model)
  - `Sam3VisionConfig` configuration class: `Sam3VisionModel` (Sam3VisionModel model)
  - `SamConfig` configuration class: `SamModel` (SAM model)
  - [SamHQConfig](/docs/transformers/v5.5.1/ko/model_doc/sam_hq#transformers.SamHQConfig) configuration class: [SamHQModel](/docs/transformers/v5.5.1/ko/model_doc/sam_hq#transformers.SamHQModel) (SAM-HQ model)
  - [SamHQVisionConfig](/docs/transformers/v5.5.1/ko/model_doc/sam_hq#transformers.SamHQVisionConfig) configuration class: [SamHQVisionModel](/docs/transformers/v5.5.1/ko/model_doc/sam_hq#transformers.SamHQVisionModel) (SamHQVisionModel model)
  - `SamVisionConfig` configuration class: `SamVisionModel` (SamVisionModel model)
  - `SeamlessM4TConfig` configuration class: `SeamlessM4TModel` (SeamlessM4T model)
  - `SeamlessM4Tv2Config` configuration class: `SeamlessM4Tv2Model` (SeamlessM4Tv2 model)
  - `SeedOssConfig` configuration class: `SeedOssModel` (SeedOss model)
  - `SegGptConfig` configuration class: `SegGptModel` (SegGPT model)
  - `SegformerConfig` configuration class: `SegformerModel` (SegFormer model)
  - `Siglip2Config` configuration class: `Siglip2Model` (SigLIP2 model)
  - `Siglip2VisionConfig` configuration class: `Siglip2VisionModel` (Siglip2VisionModel model)
  - [SiglipConfig](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipConfig) configuration class: [SiglipModel](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipModel) (SigLIP model)
  - [SiglipVisionConfig](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipVisionConfig) configuration class: [SiglipVisionModel](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipVisionModel) (SiglipVisionModel model)
  - `SmolLM3Config` configuration class: `SmolLM3Model` (SmolLM3 model)
  - [SmolVLMConfig](/docs/transformers/v5.5.1/ko/model_doc/smolvlm#transformers.SmolVLMConfig) configuration class: [SmolVLMModel](/docs/transformers/v5.5.1/ko/model_doc/smolvlm#transformers.SmolVLMModel) (SmolVLM model)
  - [SmolVLMVisionConfig](/docs/transformers/v5.5.1/ko/model_doc/smolvlm#transformers.SmolVLMVisionConfig) configuration class: [SmolVLMVisionTransformer](/docs/transformers/v5.5.1/ko/model_doc/smolvlm#transformers.SmolVLMVisionTransformer) (SmolVLMVisionTransformer model)
  - `SolarOpenConfig` configuration class: `SolarOpenModel` (SolarOpen model)
  - `Speech2TextConfig` configuration class: `Speech2TextModel` (Speech2Text model)
  - `SpeechT5Config` configuration class: `SpeechT5Model` (SpeechT5 model)
  - `SplinterConfig` configuration class: `SplinterModel` (Splinter model)
  - `SqueezeBertConfig` configuration class: `SqueezeBertModel` (SqueezeBERT model)
  - `StableLmConfig` configuration class: `StableLmModel` (StableLm model)
  - `Starcoder2Config` configuration class: `Starcoder2Model` (Starcoder2 model)
  - `SwiftFormerConfig` configuration class: `SwiftFormerModel` (SwiftFormer model)
  - [Swin2SRConfig](/docs/transformers/v5.5.1/ko/model_doc/swin2sr#transformers.Swin2SRConfig) configuration class: [Swin2SRModel](/docs/transformers/v5.5.1/ko/model_doc/swin2sr#transformers.Swin2SRModel) (Swin2SR model)
  - [SwinConfig](/docs/transformers/v5.5.1/ko/model_doc/swin#transformers.SwinConfig) configuration class: [SwinModel](/docs/transformers/v5.5.1/ko/model_doc/swin#transformers.SwinModel) (Swin Transformer model)
  - [Swinv2Config](/docs/transformers/v5.5.1/ko/model_doc/swinv2#transformers.Swinv2Config) configuration class: [Swinv2Model](/docs/transformers/v5.5.1/ko/model_doc/swinv2#transformers.Swinv2Model) (Swin Transformer V2 model)
  - `SwitchTransformersConfig` configuration class: `SwitchTransformersModel` (SwitchTransformers model)
  - `T5Config` configuration class: `T5Model` (T5 model)
  - `T5Gemma2Config` configuration class: `T5Gemma2Model` (T5Gemma2 model)
  - `T5Gemma2EncoderConfig` configuration class: `T5Gemma2Encoder` (T5Gemma2Encoder model)
  - `T5GemmaConfig` configuration class: `T5GemmaModel` (T5Gemma model)
  - `TableTransformerConfig` configuration class: `TableTransformerModel` (Table Transformer model)
  - `TapasConfig` configuration class: `TapasModel` (TAPAS model)
  - `TextNetConfig` configuration class: `TextNetModel` (TextNet model)
  - [TimeSeriesTransformerConfig](/docs/transformers/v5.5.1/ko/model_doc/time_series_transformer#transformers.TimeSeriesTransformerConfig) configuration class: [TimeSeriesTransformerModel](/docs/transformers/v5.5.1/ko/model_doc/time_series_transformer#transformers.TimeSeriesTransformerModel) (Time Series Transformer model)
  - `TimesFm2_5Config` configuration class: `TimesFm2_5Model` (TimesFm2p5 model)
  - `TimesFmConfig` configuration class: `TimesFmModel` (TimesFm model)
  - [TimesformerConfig](/docs/transformers/v5.5.1/ko/model_doc/timesformer#transformers.TimesformerConfig) configuration class: [TimesformerModel](/docs/transformers/v5.5.1/ko/model_doc/timesformer#transformers.TimesformerModel) (TimeSformer model)
  - `TimmBackboneConfig` configuration class: `TimmBackbone` (TimmBackbone model)
  - `TimmWrapperConfig` configuration class: `TimmWrapperModel` (TimmWrapperModel model)
  - [TvpConfig](/docs/transformers/v5.5.1/ko/model_doc/tvp#transformers.TvpConfig) configuration class: [TvpModel](/docs/transformers/v5.5.1/ko/model_doc/tvp#transformers.TvpModel) (TVP model)
  - `UMT5Config` configuration class: `UMT5Model` (UMT5 model)
  - `UVDocConfig` configuration class: `UVDocModel` (UVDoc model)
  - `UdopConfig` configuration class: `UdopModel` (UDOP model)
  - `UniSpeechConfig` configuration class: `UniSpeechModel` (UniSpeech model)
  - `UniSpeechSatConfig` configuration class: `UniSpeechSatModel` (UniSpeechSat model)
  - `UnivNetConfig` configuration class: `UnivNetModel` (UnivNet model)
  - `VJEPA2Config` configuration class: `VJEPA2Model` (VJEPA2Model model)
  - `VaultGemmaConfig` configuration class: `VaultGemmaModel` (VaultGemma model)
  - [ViTConfig](/docs/transformers/v5.5.1/ko/model_doc/vit#transformers.ViTConfig) configuration class: [ViTModel](/docs/transformers/v5.5.1/ko/model_doc/vit#transformers.ViTModel) (ViT model)
  - `ViTMAEConfig` configuration class: `ViTMAEModel` (ViTMAE model)
  - `ViTMSNConfig` configuration class: `ViTMSNModel` (ViTMSN model)
  - `VibeVoiceAcousticTokenizerConfig` configuration class: `VibeVoiceAcousticTokenizerModel` (VibeVoiceAcousticTokenizer model)
  - `VibeVoiceAcousticTokenizerDecoderConfig` configuration class: `VibeVoiceAcousticTokenizerDecoderModel` (VibeVoiceAcousticTokenizerDecoder model)
  - `VibeVoiceAcousticTokenizerEncoderConfig` configuration class: `VibeVoiceAcousticTokenizerEncoderModel` (VibeVoiceAcousticTokenizerEncoder model)
  - `VibeVoiceAsrConfig` configuration class: `VibeVoiceAsrForConditionalGeneration` (VibeVoiceAsr model)
  - `VideoLlama3Config` configuration class: `VideoLlama3Model` (VideoLlama3 model)
  - `VideoLlama3VisionConfig` configuration class: `VideoLlama3VisionModel` (VideoLlama3Vision model)
  - `VideoLlavaConfig` configuration class: `VideoLlavaModel` (VideoLlava model)
  - `VideoMAEConfig` configuration class: `VideoMAEModel` (VideoMAE model)
  - `ViltConfig` configuration class: `ViltModel` (ViLT model)
  - `VipLlavaConfig` configuration class: `VipLlavaModel` (VipLlava model)
  - `VisionTextDualEncoderConfig` configuration class: `VisionTextDualEncoderModel` (VisionTextDualEncoder model)
  - `VisualBertConfig` configuration class: `VisualBertModel` (VisualBERT model)
  - `VitDetConfig` configuration class: `VitDetModel` (VitDet model)
  - `VitsConfig` configuration class: `VitsModel` (VITS model)
  - [VivitConfig](/docs/transformers/v5.5.1/ko/model_doc/vivit#transformers.VivitConfig) configuration class: [VivitModel](/docs/transformers/v5.5.1/ko/model_doc/vivit#transformers.VivitModel) (ViViT model)
  - `VoxtralConfig` configuration class: `VoxtralForConditionalGeneration` (Voxtral model)
  - `VoxtralEncoderConfig` configuration class: `VoxtralEncoder` (Voxtral Encoder model)
  - `VoxtralRealtimeConfig` configuration class: `VoxtralRealtimeForConditionalGeneration` (VoxtralRealtime model)
  - `VoxtralRealtimeEncoderConfig` configuration class: `VoxtralRealtimeEncoder` (VoxtralRealtime Encoder model)
  - `VoxtralRealtimeTextConfig` configuration class: `VoxtralRealtimeTextModel` (VoxtralRealtime Text Model model)
  - `Wav2Vec2BertConfig` configuration class: `Wav2Vec2BertModel` (Wav2Vec2-BERT model)
  - `Wav2Vec2Config` configuration class: `Wav2Vec2Model` (Wav2Vec2 model)
  - `Wav2Vec2ConformerConfig` configuration class: `Wav2Vec2ConformerModel` (Wav2Vec2-Conformer model)
  - `WavLMConfig` configuration class: `WavLMModel` (WavLM model)
  - [WhisperConfig](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperConfig) configuration class: [WhisperModel](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperModel) (Whisper model)
  - [XCLIPConfig](/docs/transformers/v5.5.1/ko/model_doc/xclip#transformers.XCLIPConfig) configuration class: [XCLIPModel](/docs/transformers/v5.5.1/ko/model_doc/xclip#transformers.XCLIPModel) (X-CLIP model)
  - `XGLMConfig` configuration class: `XGLMModel` (XGLM model)
  - `XLMConfig` configuration class: `XLMModel` (XLM model)
  - `XLMRobertaConfig` configuration class: `XLMRobertaModel` (XLM-RoBERTa model)
  - `XLMRobertaXLConfig` configuration class: `XLMRobertaXLModel` (XLM-RoBERTa-XL model)
  - `XLNetConfig` configuration class: `XLNetModel` (XLNet model)
  - `XcodecConfig` configuration class: `XcodecModel` (X-CODEC model)
  - `XmodConfig` configuration class: `XmodModel` (X-MOD model)
  - `YolosConfig` configuration class: `YolosModel` (YOLOS model)
  - `YosoConfig` configuration class: `YosoModel` (YOSO model)
  - `YoutuConfig` configuration class: `YoutuModel` (Youtu model)
  - `Zamba2Config` configuration class: `Zamba2Model` (Zamba2 model)
  - `ZambaConfig` configuration class: `ZambaModel` (Zamba model)
  - `xLSTMConfig` configuration class: `xLSTMModel` (xLSTM model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, SDPA is used for torch>=2.1.1 if it is available; otherwise the manual `"eager"` implementation is used. A usage sketch follows the examples below.

Instantiates one of the base model classes of the library from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModel

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModel.from_config(config)
```
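The `attn_implementation` argument can be passed straight to `from_config()`. A minimal sketch, assuming a PyTorch build with SDPA support (the checkpoint is the same one used above):

```python
>>> from transformers import AutoConfig, AutoModel

>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> # Weights are still randomly initialized; only the attention backend is chosen here.
>>> model = AutoModel.from_config(config, attn_implementation="sdpa")
```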

**Parameters:**

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) -- The configuration object. The model class to instantiate is selected based on the configuration class, following the mapping listed above.
- **attn_implementation** (`str`, *optional*) -- The attention implementation to use in the model, as described above.
#### from_pretrained[[transformers.AutoModel.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the base model classes of the library from a pretrained model.
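As a quick illustration of the selection logic described in the next paragraph, the concrete class is taken from the `model_type` recorded in the checkpoint's configuration. A hedged sketch (the RoBERTa checkpoint is only an example; any Hub model id works the same way):

```python
>>> from transformers import AutoConfig, AutoModel

>>> config = AutoConfig.from_pretrained("FacebookAI/roberta-base")
>>> config.model_type
'roberta'
>>> # Passing the config explicitly skips pattern matching on the name/path.
>>> model = AutoModel.from_pretrained("FacebookAI/roberta-base", config=config)
>>> type(model).__name__
'RobertaModel'
```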

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **afmoe** -- `AfmoeModel` (AFMoE model)
- **aimv2** -- `Aimv2Model` (AIMv2 model)
- **aimv2_vision_model** -- `Aimv2VisionModel` (Aimv2VisionModel model)
- **albert** -- [AlbertModel](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertModel) (ALBERT model)
- **align** -- `AlignModel` (ALIGN model)
- **altclip** -- [AltCLIPModel](/docs/transformers/v5.5.1/ko/model_doc/altclip#transformers.AltCLIPModel) (AltCLIP model)
- **apertus** -- `ApertusModel` (Apertus model)
- **arcee** -- `ArceeModel` (Arcee model)
- **aria** -- `AriaModel` (Aria model)
- **aria_text** -- `AriaTextModel` (AriaText model)
- **audio-spectrogram-transformer** -- `ASTModel` (Audio Spectrogram Transformer model)
- **audioflamingo3** -- `AudioFlamingo3ForConditionalGeneration` (AudioFlamingo3 model)
- **audioflamingo3_encoder** -- `AudioFlamingo3Encoder` (AudioFlamingo3Encoder model)
- **autoformer** -- [AutoformerModel](/docs/transformers/v5.5.1/ko/model_doc/autoformer#transformers.AutoformerModel) (Autoformer model)
- **aya_vision** -- `AyaVisionModel` (AyaVision model)
- **bamba** -- `BambaModel` (Bamba model)
- **bark** -- `BarkModel` (Bark model)
- **bart** -- [BartModel](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartModel) (BART model)
- **beit** -- `BeitModel` (BEiT model)
- **bert** -- [BertModel](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertModel) (BERT model)
- **bert-generation** -- `BertGenerationEncoder` (Bert Generation model)
- **big_bird** -- [BigBirdModel](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdModel) (BigBird model)
- **bigbird_pegasus** -- `BigBirdPegasusModel` (BigBird-Pegasus model)
- **biogpt** -- [BioGptModel](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptModel) (BioGpt model)
- **bit** -- `BitModel` (BiT model)
- **bitnet** -- `BitNetModel` (BitNet model)
- **blenderbot** -- `BlenderbotModel` (Blenderbot model)
- **blenderbot-small** -- `BlenderbotSmallModel` (BlenderbotSmall model)
- **blip** -- [BlipModel](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipModel) (BLIP model)
- **blip-2** -- [Blip2Model](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2Model) (BLIP-2 model)
- **blip_2_qformer** -- [Blip2QFormerModel](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2QFormerModel) (BLIP-2 QFormer model)
- **bloom** -- `BloomModel` (BLOOM model)
- **blt** -- `BltModel` (Blt model)
- **bridgetower** -- `BridgeTowerModel` (BridgeTower model)
- **bros** -- `BrosModel` (BROS model)
- **camembert** -- `CamembertModel` (CamemBERT model)
- **canine** -- `CanineModel` (CANINE model)
- **chameleon** -- [ChameleonModel](/docs/transformers/v5.5.1/ko/model_doc/chameleon#transformers.ChameleonModel) (Chameleon model)
- **chinese_clip** -- `ChineseCLIPModel` (Chinese-CLIP model)
- **chinese_clip_vision_model** -- `ChineseCLIPVisionModel` (ChineseCLIPVisionModel model)
- **clap** -- `ClapModel` (CLAP model)
- **clip** -- [CLIPModel](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPModel) (CLIP model)
- **clip_text_model** -- [CLIPTextModel](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPTextModel) (CLIPTextModel model)
- **clip_vision_model** -- [CLIPVisionModel](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPVisionModel) (CLIPVisionModel model)
- **clipseg** -- [CLIPSegModel](/docs/transformers/v5.5.1/ko/model_doc/clipseg#transformers.CLIPSegModel) (CLIPSeg model)
- **clvp** -- `ClvpModelForConditionalGeneration` (CLVP model)
- **code_llama** -- [LlamaModel](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaModel) (CodeLlama model)
- **codegen** -- [CodeGenModel](/docs/transformers/v5.5.1/ko/model_doc/codegen#transformers.CodeGenModel) (CodeGen model)
- **cohere** -- [CohereModel](/docs/transformers/v5.5.1/ko/model_doc/cohere#transformers.CohereModel) (Cohere model)
- **cohere2** -- `Cohere2Model` (Cohere2 model)
- **cohere2_vision** -- `Cohere2VisionModel` (Cohere2Vision model)
- **cohere_asr** -- `CohereAsrModel` (CohereASR model)
- **conditional_detr** -- `ConditionalDetrModel` (Conditional DETR model)
- **convbert** -- [ConvBertModel](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertModel) (ConvBERT model)
- **convnext** -- `ConvNextModel` (ConvNeXT model)
- **convnextv2** -- `ConvNextV2Model` (ConvNeXTV2 model)
- **cpmant** -- `CpmAntModel` (CPM-Ant model)
- **csm** -- `CsmForConditionalGeneration` (CSM model)
- **ctrl** -- `CTRLModel` (CTRL model)
- **cvt** -- `CvtModel` (CvT model)
- **cwm** -- `CwmModel` (Code World Model (CWM) model)
- **d_fine** -- `DFineModel` (D-FINE model)
- **dab-detr** -- `DabDetrModel` (DAB-DETR model)
- **dac** -- `DacModel` (DAC model)
- **data2vec-audio** -- `Data2VecAudioModel` (Data2VecAudio model)
- **data2vec-text** -- `Data2VecTextModel` (Data2VecText model)
- **data2vec-vision** -- `Data2VecVisionModel` (Data2VecVision model)
- **dbrx** -- [DbrxModel](/docs/transformers/v5.5.1/ko/model_doc/dbrx#transformers.DbrxModel) (DBRX model)
- **deberta** -- [DebertaModel](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaModel) (DeBERTa model)
- **deberta-v2** -- [DebertaV2Model](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Model) (DeBERTa-v2 model)
- **decision_transformer** -- `DecisionTransformerModel` (Decision Transformer model)
- **deepseek_v2** -- `DeepseekV2Model` (DeepSeek-V2 model)
- **deepseek_v3** -- [DeepseekV3Model](/docs/transformers/v5.5.1/ko/model_doc/deepseek_v3#transformers.DeepseekV3Model) (DeepSeek-V3 model)
- **deepseek_vl** -- `DeepseekVLModel` (DeepseekVL model)
- **deepseek_vl_hybrid** -- `DeepseekVLHybridModel` (DeepseekVLHybrid model)
- **deformable_detr** -- `DeformableDetrModel` (Deformable DETR model)
- **deit** -- `DeiTModel` (DeiT model)
- **depth_pro** -- `DepthProModel` (DepthPro model)
- **detr** -- `DetrModel` (DETR model)
- **dia** -- `DiaModel` (Dia model)
- **diffllama** -- `DiffLlamaModel` (DiffLlama model)
- **dinat** -- `DinatModel` (DiNAT model)
- **dinov2** -- `Dinov2Model` (DINOv2 model)
- **dinov2_with_registers** -- `Dinov2WithRegistersModel` (DINOv2 with Registers model)
- **dinov3_convnext** -- `DINOv3ConvNextModel` (DINOv3 ConvNext model)
- **dinov3_vit** -- `DINOv3ViTModel` (DINOv3 ViT model)
- **distilbert** -- `DistilBertModel` (DistilBERT model)
- **doge** -- `DogeModel` (Doge model)
- **donut-swin** -- `DonutSwinModel` (DonutSwin model)
- **dots1** -- `Dots1Model` (dots1 model)
- **dpr** -- `DPRQuestionEncoder` (DPR model)
- **dpt** -- `DPTModel` (DPT model)
- **edgetam** -- `EdgeTamModel` (EdgeTAM model)
- **edgetam_video** -- `EdgeTamVideoModel` (EdgeTamVideo model)
- **edgetam_vision_model** -- `EdgeTamVisionModel` (EdgeTamVisionModel model)
- **efficientloftr** -- `EfficientLoFTRModel` (EfficientLoFTR model)
- **efficientnet** -- `EfficientNetModel` (EfficientNet model)
- **electra** -- [ElectraModel](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraModel) (ELECTRA model)
- **emu3** -- `Emu3Model` (Emu3 model)
- **encodec** -- `EncodecModel` (EnCodec model)
- **ernie** -- `ErnieModel` (ERNIE model)
- **ernie4_5** -- `Ernie4_5Model` (Ernie4_5 model)
- **ernie4_5_moe** -- `Ernie4_5_MoeModel` (Ernie4_5_MoE model)
- **ernie4_5_vl_moe** -- `Ernie4_5_VLMoeModel` (Ernie4_5_VLMoE model)
- **esm** -- [EsmModel](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmModel) (ESM model)
- **eurobert** -- `EuroBertModel` (EuroBERT model)
- **evolla** -- `EvollaModel` (Evolla model)
- **exaone4** -- [Exaone4Model](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4Model) (EXAONE-4.0 model)
- **exaone_moe** -- [ExaoneMoeModel](/docs/transformers/v5.5.1/ko/model_doc/exaone_moe#transformers.ExaoneMoeModel) (EXAONE-MoE model)
- **falcon** -- `FalconModel` (Falcon model)
- **falcon_h1** -- `FalconH1Model` (FalconH1 model)
- **falcon_mamba** -- `FalconMambaModel` (FalconMamba model)
- **fast_vlm** -- `FastVlmModel` (FastVlm model)
- **fastspeech2_conformer** -- `FastSpeech2ConformerModel` (FastSpeech2Conformer model)
- **fastspeech2_conformer_with_hifigan** -- `FastSpeech2ConformerWithHifiGan` (FastSpeech2ConformerWithHifiGan model)
- **flaubert** -- `FlaubertModel` (FlauBERT model)
- **flava** -- `FlavaModel` (FLAVA model)
- **flex_olmo** -- `FlexOlmoModel` (FlexOlmo model)
- **florence2** -- `Florence2Model` (Florence2 model)
- **fnet** -- `FNetModel` (FNet model)
- **focalnet** -- `FocalNetModel` (FocalNet model)
- **fsmt** -- `FSMTModel` (FairSeq Machine-Translation model)
- **funnel** -- `FunnelModel` or `FunnelBaseModel` (Funnel Transformer model)
- **fuyu** -- `FuyuModel` (Fuyu model)
- **gemma** -- [GemmaModel](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaModel) (Gemma model)
- **gemma2** -- [Gemma2Model](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2Model) (Gemma2 model)
- **gemma3** -- [Gemma3Model](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3Model) (Gemma3ForConditionalGeneration model)
- **gemma3_text** -- [Gemma3TextModel](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3TextModel) (Gemma3ForCausalLM model)
- **gemma3n** -- [Gemma3nModel](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nModel) (Gemma3nForConditionalGeneration model)
- **gemma3n_audio** -- `Gemma3nAudioEncoder` (Gemma3nAudioEncoder model)
- **gemma3n_text** -- [Gemma3nTextModel](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nTextModel) (Gemma3nForCausalLM model)
- **gemma3n_vision** -- `TimmWrapperModel` (TimmWrapperModel model)
- **gemma4** -- `Gemma4Model` (Gemma4ForConditionalGeneration model)
- **gemma4_audio** -- `Gemma4AudioModel` (Gemma4AudioModel model)
- **gemma4_text** -- `Gemma4TextModel` (Gemma4ForCausalLM model)
- **gemma4_vision** -- `Gemma4VisionModel` (Gemma4VisionModel model)
- **git** -- `GitModel` (GIT model)
- **glm** -- `GlmModel` (GLM model)
- **glm4** -- `Glm4Model` (GLM4 model)
- **glm46v** -- `Glm46VModel` (Glm46V model)
- **glm4_moe** -- `Glm4MoeModel` (Glm4MoE model)
- **glm4_moe_lite** -- `Glm4MoeLiteModel` (Glm4MoELite model)
- **glm4v** -- `Glm4vModel` (GLM4V model)
- **glm4v_moe** -- `Glm4vMoeModel` (GLM4VMOE model)
- **glm4v_moe_text** -- `Glm4vMoeTextModel` (GLM4VMOE model)
- **glm4v_moe_vision** -- `Glm4vMoeVisionModel` (Glm4vMoeVisionModel model)
- **glm4v_text** -- `Glm4vTextModel` (GLM4V model)
- **glm4v_vision** -- `Glm4vVisionModel` (Glm4vVisionModel model)
- **glm_image** -- `GlmImageModel` (GlmImage model)
- **glm_image_text** -- `GlmImageTextModel` (GlmImageText model)
- **glm_image_vision** -- `GlmImageVisionModel` (GlmImageVisionModel model)
- **glm_image_vqmodel** -- `GlmImageVQVAE` (GlmImageVQVAE model)
- **glm_moe_dsa** -- `GlmMoeDsaModel` (GlmMoeDsa model)
- **glm_ocr** -- `GlmOcrModel` (Glmocr model)
- **glm_ocr_text** -- `GlmOcrTextModel` (GlmOcrText model)
- **glm_ocr_vision** -- `GlmOcrVisionModel` (GlmOcrVisionModel model)
- **glmasr** -- `GlmAsrForConditionalGeneration` (GLM-ASR model)
- **glmasr_encoder** -- `GlmAsrEncoder` (GLM-ASR Encoder model)
- **glpn** -- `GLPNModel` (GLPN model)
- **got_ocr2** -- `GotOcr2Model` (GOT-OCR2 model)
- **gpt-sw3** -- [GPT2Model](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Model) (GPT-Sw3 model)
- **gpt2** -- [GPT2Model](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Model) (OpenAI GPT-2 model)
- **gpt_bigcode** -- `GPTBigCodeModel` (GPTBigCode model)
- **gpt_neo** -- `GPTNeoModel` (GPT Neo model)
- **gpt_neox** -- `GPTNeoXModel` (GPT NeoX model)
- **gpt_neox_japanese** -- [GPTNeoXJapaneseModel](/docs/transformers/v5.5.1/ko/model_doc/gpt_neox_japanese#transformers.GPTNeoXJapaneseModel) (GPT NeoX Japanese model)
- **gpt_oss** -- `GptOssModel` (GptOss model)
- **gptj** -- `GPTJModel` (GPT-J model)
- **granite** -- `GraniteModel` (Granite model)
- **granitemoe** -- `GraniteMoeModel` (GraniteMoe model)
- **granitemoehybrid** -- `GraniteMoeHybridModel` (GraniteMoeHybrid model)
- **granitemoeshared** -- `GraniteMoeSharedModel` (GraniteMoeShared model)
- **grounding-dino** -- [GroundingDinoModel](/docs/transformers/v5.5.1/ko/model_doc/grounding-dino#transformers.GroundingDinoModel) (Grounding DINO model)
- **groupvit** -- `GroupViTModel` (GroupViT model)
- **helium** -- `HeliumModel` (Helium model)
- **hgnet_v2** -- `HGNetV2Backbone` (HGNet-V2 model)
- **hiera** -- `HieraModel` (Hiera model)
- **higgs_audio_v2** -- `HiggsAudioV2ForConditionalGeneration` (HiggsAudioV2 model)
- **higgs_audio_v2_tokenizer** -- `HiggsAudioV2TokenizerModel` (HiggsAudioV2Tokenizer model)
- **hubert** -- `HubertModel` (Hubert model)
- **hunyuan_v1_dense** -- `HunYuanDenseV1Model` (HunYuanDenseV1 model)
- **hunyuan_v1_moe** -- `HunYuanMoEV1Model` (HunYuanMoeV1 model)
- **ibert** -- `IBertModel` (I-BERT model)
- **idefics** -- `IdeficsModel` (IDEFICS model)
- **idefics2** -- `Idefics2Model` (Idefics2 model)
- **idefics3** -- `Idefics3Model` (Idefics3 model)
- **idefics3_vision** -- `Idefics3VisionTransformer` (Idefics3VisionTransformer model)
- **ijepa** -- `IJepaModel` (I-JEPA model)
- **imagegpt** -- `ImageGPTModel` (ImageGPT model)
- **informer** -- [InformerModel](/docs/transformers/v5.5.1/ko/model_doc/informer#transformers.InformerModel) (Informer model)
- **instructblip** -- `InstructBlipModel` (InstructBLIP model)
- **instructblipvideo** -- `InstructBlipVideoModel` (InstructBlipVideo model)
- **internvl** -- `InternVLModel` (InternVL model)
- **internvl_vision** -- `InternVLVisionModel` (InternVLVision model)
- **jais2** -- `Jais2Model` (Jais2 model)
- **jamba** -- [JambaModel](/docs/transformers/v5.5.1/ko/model_doc/jamba#transformers.JambaModel) (Jamba model)
- **janus** -- `JanusModel` (Janus model)
- **jetmoe** -- `JetMoeModel` (JetMoe model)
- **jina_embeddings_v3** -- `JinaEmbeddingsV3Model` (JinaEmbeddingsV3 model)
- **kosmos-2** -- `Kosmos2Model` (KOSMOS-2 model)
- **kosmos-2.5** -- `Kosmos2_5Model` (KOSMOS-2.5 model)
- **kyutai_speech_to_text** -- `KyutaiSpeechToTextModel` (KyutaiSpeechToText model)
- **lasr_ctc** -- `LasrForCTC` (Lasr model)
- **lasr_encoder** -- `LasrEncoder` (LasrEncoder model)
- **layoutlm** -- `LayoutLMModel` (LayoutLM model)
- **layoutlmv2** -- `LayoutLMv2Model` (LayoutLMv2 model)
- **layoutlmv3** -- `LayoutLMv3Model` (LayoutLMv3 model)
- **led** -- `LEDModel` (LED model)
- **levit** -- `LevitModel` (LeViT model)
- **lfm2** -- [Lfm2Model](/docs/transformers/v5.5.1/ko/model_doc/lfm2#transformers.Lfm2Model) (Lfm2 model)
- **lfm2_moe** -- `Lfm2MoeModel` (Lfm2Moe model)
- **lfm2_vl** -- `Lfm2VlModel` (Lfm2Vl model)
- **lightglue** -- `LightGlueForKeypointMatching` (LightGlue model)
- **lighton_ocr** -- `LightOnOcrModel` (LightOnOcr model)
- **lilt** -- `LiltModel` (LiLT model)
- **llama** -- [LlamaModel](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaModel) (LLaMA model)
- **llama4** -- [Llama4ForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4ForConditionalGeneration) (Llama4 model)
- **llama4_text** -- [Llama4TextModel](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4TextModel) (Llama4ForCausalLM model)
- **llava** -- `LlavaModel` (LLaVa model)
- **llava_next** -- `LlavaNextModel` (LLaVA-NeXT model)
- **llava_next_video** -- `LlavaNextVideoModel` (LLaVa-NeXT-Video model)
- **llava_onevision** -- `LlavaOnevisionModel` (LLaVA-Onevision model)
- **longcat_flash** -- `LongcatFlashModel` (LongCatFlash model)
- **longformer** -- `LongformerModel` (Longformer model)
- **longt5** -- `LongT5Model` (LongT5 model)
- **luke** -- `LukeModel` (LUKE model)
- **lw_detr** -- `LwDetrModel` (LwDetr model)
- **lxmert** -- `LxmertModel` (LXMERT model)
- **m2m_100** -- `M2M100Model` (M2M100 model)
- **mamba** -- [MambaModel](/docs/transformers/v5.5.1/ko/model_doc/mamba#transformers.MambaModel) (Mamba model)
- **mamba2** -- [Mamba2Model](/docs/transformers/v5.5.1/ko/model_doc/mamba2#transformers.Mamba2Model) (mamba2 model)
- **marian** -- [MarianModel](/docs/transformers/v5.5.1/ko/model_doc/marian#transformers.MarianModel) (Marian model)
- **markuplm** -- `MarkupLMModel` (MarkupLM model)
- **mask2former** -- `Mask2FormerModel` (Mask2Former model)
- **maskformer** -- `MaskFormerModel` (MaskFormer model)
- **maskformer-swin** -- `MaskFormerSwinModel` (MaskFormerSwin model)
- **mbart** -- `MBartModel` (mBART model)
- **megatron-bert** -- `MegatronBertModel` (Megatron-BERT model)
- **metaclip_2** -- `MetaClip2Model` (MetaCLIP 2 model)
- **mgp-str** -- `MgpstrForSceneTextRecognition` (MGP-STR model)
- **mimi** -- `MimiModel` (Mimi model)
- **minimax** -- `MiniMaxModel` (MiniMax model)
- **minimax_m2** -- `MiniMaxM2Model` (MiniMax-M2 model)
- **ministral** -- `MinistralModel` (Ministral model)
- **ministral3** -- `Ministral3Model` (Ministral3 model)
- **mistral** -- [MistralModel](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralModel) (Mistral model)
- **mistral3** -- `Mistral3Model` (Mistral3 model)
- **mistral4** -- `Mistral4Model` (Mistral4 model)
- **mixtral** -- `MixtralModel` (Mixtral model)
- **mlcd** -- `MLCDVisionModel` (MLCD model)
- **mlcd_vision_model** -- `MLCDVisionModel` (MLCD model)
- **mllama** -- `MllamaModel` (Mllama model)
- **mm-grounding-dino** -- `MMGroundingDinoModel` (MM Grounding DINO model)
- **mobilebert** -- `MobileBertModel` (MobileBERT model)
- **mobilenet_v1** -- `MobileNetV1Model` (MobileNetV1 model)
- **mobilenet_v2** -- `MobileNetV2Model` (MobileNetV2 model)
- **mobilevit** -- `MobileViTModel` (MobileViT model)
- **mobilevitv2** -- `MobileViTV2Model` (MobileViTV2 model)
- **modernbert** -- `ModernBertModel` (ModernBERT model)
- **modernbert-decoder** -- `ModernBertDecoderModel` (ModernBertDecoder model)
- **modernvbert** -- `ModernVBertModel` (ModernVBert model)
- **moonshine** -- `MoonshineModel` (Moonshine model)
- **moonshine_streaming** -- `MoonshineStreamingModel` (MoonshineStreaming model)
- **moshi** -- `MoshiModel` (Moshi model)
- **mpnet** -- `MPNetModel` (MPNet model)
- **mpt** -- `MptModel` (MPT model)
- **mra** -- `MraModel` (MRA model)
- **mt5** -- `MT5Model` (MT5 model)
- **musicflamingo** -- `MusicFlamingoForConditionalGeneration` (MusicFlamingo model)
- **musicflamingo_encoder** -- `AudioFlamingo3Encoder` (AudioFlamingo3Encoder model)
- **musicgen** -- `MusicgenModel` (MusicGen model)
- **musicgen_melody** -- `MusicgenMelodyModel` (MusicGen Melody model)
- **mvp** -- `MvpModel` (MVP model)
- **nanochat** -- `NanoChatModel` (NanoChat model)
- **nemotron** -- `NemotronModel` (Nemotron model)
- **nemotron_h** -- `NemotronHModel` (NemotronH model)
- **nllb-moe** -- `NllbMoeModel` (NLLB-MOE model)
- **nomic_bert** -- `NomicBertModel` (NomicBERT model)
- **nystromformer** -- `NystromformerModel` (Nyströmformer model)
- **olmo** -- `OlmoModel` (OLMo model)
- **olmo2** -- `Olmo2Model` (OLMo2 model)
- **olmo3** -- `Olmo3Model` (Olmo3 model)
- **olmo_hybrid** -- `OlmoHybridModel` (OlmoHybrid model)
- **olmoe** -- `OlmoeModel` (OLMoE model)
- **omdet-turbo** -- `OmDetTurboForObjectDetection` (OmDet-Turbo model)
- **oneformer** -- `OneFormerModel` (OneFormer model)
- **openai-gpt** -- [OpenAIGPTModel](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTModel) (OpenAI GPT model)
- **opt** -- `OPTModel` (OPT model)
- **ovis2** -- `Ovis2Model` (Ovis2 model)
- **owlv2** -- `Owlv2Model` (OWLv2 model)
- **owlvit** -- `OwlViTModel` (OWL-ViT model)
- **paligemma** -- `PaliGemmaModel` (PaliGemma model)
- **parakeet_ctc** -- `ParakeetForCTC` (Parakeet model)
- **parakeet_encoder** -- `ParakeetEncoder` (ParakeetEncoder model)
- **patchtsmixer** -- [PatchTSMixerModel](/docs/transformers/v5.5.1/ko/model_doc/patchtsmixer#transformers.PatchTSMixerModel) (PatchTSMixer model)
- **patchtst** -- [PatchTSTModel](/docs/transformers/v5.5.1/ko/model_doc/patchtst#transformers.PatchTSTModel) (PatchTST model)
- **pe_audio** -- `PeAudioModel` (PeAudio model)
- **pe_audio_encoder** -- `PeAudioEncoder` (PeAudioEncoder model)
- **pe_audio_video** -- `PeAudioVideoModel` (PeAudioVideo model)
- **pe_audio_video_encoder** -- `PeAudioVideoEncoder` (PeAudioVideoEncoder model)
- **pe_video** -- `PeVideoModel` (PeVideo model)
- **pe_video_encoder** -- `PeVideoEncoder` (PeVideoEncoder model)
- **pegasus** -- `PegasusModel` (Pegasus model)
- **pegasus_x** -- `PegasusXModel` (PEGASUS-X model)
- **perceiver** -- `PerceiverModel` (Perceiver model)
- **perception_lm** -- `PerceptionLMModel` (PerceptionLM model)
- **persimmon** -- `PersimmonModel` (Persimmon model)
- **phi** -- `PhiModel` (Phi model)
- **phi3** -- `Phi3Model` (Phi3 model)
- **phi4_multimodal** -- `Phi4MultimodalModel` (Phi4Multimodal model)
- **phimoe** -- `PhimoeModel` (Phimoe model)
- **pi0** -- `PI0Model` (PI0 model)
- **pixio** -- `PixioModel` (Pixio model)
- **pixtral** -- `PixtralVisionModel` (Pixtral model)
- **plbart** -- `PLBartModel` (PLBart model)
- **poolformer** -- `PoolFormerModel` (PoolFormer model)
- **pp_doclayout_v3** -- `PPDocLayoutV3Model` (PPDocLayoutV3 model)
- **pp_ocrv5_mobile_rec** -- `PPOCRV5MobileRecModel` (PPOCRV5MobileRec model)
- **pp_ocrv5_server_rec** -- `PPOCRV5ServerRecModel` (PPOCRV5ServerRec model)
- **prophetnet** -- `ProphetNetModel` (ProphetNet model)
- **pvt** -- `PvtModel` (PVT model)
- **pvt_v2** -- `PvtV2Model` (PVTv2 model)
- **qwen2** -- `Qwen2Model` (Qwen2 model)
- **qwen2_5_vl** -- `Qwen2_5_VLModel` (Qwen2_5_VL model)
- **qwen2_5_vl_text** -- `Qwen2_5_VLTextModel` (Qwen2_5_VL model)
- **qwen2_audio_encoder** -- `Qwen2AudioEncoder` (Qwen2AudioEncoder model)
- **qwen2_moe** -- `Qwen2MoeModel` (Qwen2MoE model)
- **qwen2_vl** -- [Qwen2VLModel](/docs/transformers/v5.5.1/ko/model_doc/qwen2_vl#transformers.Qwen2VLModel) (Qwen2VL model)
- **qwen2_vl_text** -- `Qwen2VLTextModel` (Qwen2VL model)
- **qwen3** -- `Qwen3Model` (Qwen3 model)
- **qwen3_5** -- `Qwen3_5Model` (Qwen3_5 model)
- **qwen3_5_moe** -- `Qwen3_5MoeModel` (Qwen3_5Moe model)
- **qwen3_5_moe_text** -- `Qwen3_5MoeTextModel` (Qwen3_5MoeText model)
- **qwen3_5_text** -- `Qwen3_5TextModel` (Qwen3_5Text model)
- **qwen3_moe** -- `Qwen3MoeModel` (Qwen3MoE model)
- **qwen3_next** -- `Qwen3NextModel` (Qwen3Next model)
- **qwen3_vl** -- `Qwen3VLModel` (Qwen3VL model)
- **qwen3_vl_moe** -- `Qwen3VLMoeModel` (Qwen3VLMoe model)
- **qwen3_vl_moe_text** -- `Qwen3VLMoeTextModel` (Qwen3VLMoe model)
- **qwen3_vl_text** -- `Qwen3VLTextModel` (Qwen3VL model)
- **recurrent_gemma** -- `RecurrentGemmaModel` (RecurrentGemma model)
- **reformer** -- `ReformerModel` (Reformer model)
- **regnet** -- `RegNetModel` (RegNet model)
- **rembert** -- `RemBertModel` (RemBERT model)
- **resnet** -- `ResNetModel` (ResNet model)
- **roberta** -- [RobertaModel](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaModel) (RoBERTa model)
- **roberta-prelayernorm** -- `RobertaPreLayerNormModel` (RoBERTa-PreLayerNorm model)
- **roc_bert** -- `RoCBertModel` (RoCBert model)
- **roformer** -- `RoFormerModel` (RoFormer model)
- **rt_detr** -- `RTDetrModel` (RT-DETR model)
- **rt_detr_v2** -- `RTDetrV2Model` (RT-DETRv2 model)
- **rwkv** -- `RwkvModel` (RWKV model)
- **sam** -- `SamModel` (SAM model)
- **sam2** -- `Sam2Model` (SAM2 model)
- **sam2_hiera_det_model** -- `Sam2HieraDetModel` (Sam2HieraDetModel model)
- **sam2_video** -- `Sam2VideoModel` (Sam2VideoModel model)
- **sam2_vision_model** -- `Sam2VisionModel` (Sam2VisionModel model)
- **sam3** -- `Sam3Model` (SAM3 model)
- **sam3_tracker** -- `Sam3TrackerModel` (Sam3Tracker model)
- **sam3_tracker_video** -- `Sam3TrackerVideoModel` (Sam3TrackerVideo model)
- **sam3_video** -- `Sam3VideoModel` (Sam3VideoModel model)
- **sam3_vision_model** -- `Sam3VisionModel` (Sam3VisionModel model)
- **sam3_vit_model** -- `Sam3ViTModel` (Sam3ViTModel model)
- **sam_hq** -- [SamHQModel](/docs/transformers/v5.5.1/ko/model_doc/sam_hq#transformers.SamHQModel) (SAM-HQ model)
- **sam_hq_vision_model** -- [SamHQVisionModel](/docs/transformers/v5.5.1/ko/model_doc/sam_hq#transformers.SamHQVisionModel) (SamHQVisionModel model)
- **sam_vision_model** -- `SamVisionModel` (SamVisionModel model)
- **seamless_m4t** -- `SeamlessM4TModel` (SeamlessM4T model)
- **seamless_m4t_v2** -- `SeamlessM4Tv2Model` (SeamlessM4Tv2 model)
- **seed_oss** -- `SeedOssModel` (SeedOss model)
- **segformer** -- `SegformerModel` (SegFormer model)
- **seggpt** -- `SegGptModel` (SegGPT model)
- **sew** -- `SEWModel` (SEW model)
- **sew-d** -- `SEWDModel` (SEW-D model)
- **siglip** -- [SiglipModel](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipModel) (SigLIP model)
- **siglip2** -- `Siglip2Model` (SigLIP2 model)
- **siglip2_vision_model** -- `Siglip2VisionModel` (Siglip2VisionModel model)
- **siglip_vision_model** -- [SiglipVisionModel](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipVisionModel) (SiglipVisionModel model)
- **smollm3** -- `SmolLM3Model` (SmolLM3 model)
- **smolvlm** -- [SmolVLMModel](/docs/transformers/v5.5.1/ko/model_doc/smolvlm#transformers.SmolVLMModel) (SmolVLM model)
- **smolvlm_vision** -- [SmolVLMVisionTransformer](/docs/transformers/v5.5.1/ko/model_doc/smolvlm#transformers.SmolVLMVisionTransformer) (SmolVLMVisionTransformer model)
- **solar_open** -- `SolarOpenModel` (SolarOpen model)
- **speech_to_text** -- `Speech2TextModel` (Speech2Text model)
- **speecht5** -- `SpeechT5Model` (SpeechT5 model)
- **splinter** -- `SplinterModel` (Splinter model)
- **squeezebert** -- `SqueezeBertModel` (SqueezeBERT model)
- **stablelm** -- `StableLmModel` (StableLm model)
- **starcoder2** -- `Starcoder2Model` (Starcoder2 model)
- **swiftformer** -- `SwiftFormerModel` (SwiftFormer model)
- **swin** -- [SwinModel](/docs/transformers/v5.5.1/ko/model_doc/swin#transformers.SwinModel) (Swin Transformer model)
- **swin2sr** -- [Swin2SRModel](/docs/transformers/v5.5.1/ko/model_doc/swin2sr#transformers.Swin2SRModel) (Swin2SR model)
- **swinv2** -- [Swinv2Model](/docs/transformers/v5.5.1/ko/model_doc/swinv2#transformers.Swinv2Model) (Swin Transformer V2 model)
- **switch_transformers** -- `SwitchTransformersModel` (SwitchTransformers model)
- **t5** -- `T5Model` (T5 model)
- **t5gemma** -- `T5GemmaModel` (T5Gemma model)
- **t5gemma2** -- `T5Gemma2Model` (T5Gemma2 model)
- **t5gemma2_encoder** -- `T5Gemma2Encoder` (T5Gemma2Encoder model)
- **table-transformer** -- `TableTransformerModel` (Table Transformer model)
- **tapas** -- `TapasModel` (TAPAS model)
- **textnet** -- `TextNetModel` (TextNet model)
- **time_series_transformer** -- [TimeSeriesTransformerModel](/docs/transformers/v5.5.1/ko/model_doc/time_series_transformer#transformers.TimeSeriesTransformerModel) (Time Series Transformer model)
- **timesfm** -- `TimesFmModel` (TimesFm model)
- **timesfm2_5** -- `TimesFm2_5Model` (TimesFm2p5 model)
- **timesformer** -- [TimesformerModel](/docs/transformers/v5.5.1/ko/model_doc/timesformer#transformers.TimesformerModel) (TimeSformer model)
- **timm_backbone** -- `TimmBackbone` (TimmBackbone model)
- **timm_wrapper** -- `TimmWrapperModel` (TimmWrapperModel model)
- **tvp** -- [TvpModel](/docs/transformers/v5.5.1/ko/model_doc/tvp#transformers.TvpModel) (TVP model)
- **udop** -- `UdopModel` (UDOP model)
- **umt5** -- `UMT5Model` (UMT5 model)
- **unispeech** -- `UniSpeechModel` (UniSpeech model)
- **unispeech-sat** -- `UniSpeechSatModel` (UniSpeechSat model)
- **univnet** -- `UnivNetModel` (UnivNet model)
- **uvdoc** -- `UVDocModel` (UVDoc model)
- **vaultgemma** -- `VaultGemmaModel` (VaultGemma model)
- **vibevoice_acoustic_tokenizer** -- `VibeVoiceAcousticTokenizerModel` (VibeVoiceAcousticTokenizer model)
- **vibevoice_acoustic_tokenizer_decoder** -- `VibeVoiceAcousticTokenizerDecoderModel` (VibeVoiceAcousticTokenizerDecoderConfig model)
- **vibevoice_acoustic_tokenizer_encoder** -- `VibeVoiceAcousticTokenizerEncoderModel` (VibeVoiceAcousticTokenizerEncoderConfig model)
- **vibevoice_asr** -- `VibeVoiceAsrForConditionalGeneration` (VibeVoiceAsr model)
- **video_llama_3** -- `VideoLlama3Model` (VideoLlama3 model)
- **video_llama_3_vision** -- `VideoLlama3VisionModel` (VideoLlama3Vision model)
- **video_llava** -- `VideoLlavaModel` (VideoLlava model)
- **videomae** -- `VideoMAEModel` (VideoMAE model)
- **vilt** -- `ViltModel` (ViLT model)
- **vipllava** -- `VipLlavaModel` (VipLlava model)
- **vision-text-dual-encoder** -- `VisionTextDualEncoderModel` (VisionTextDualEncoder model)
- **visual_bert** -- `VisualBertModel` (VisualBERT model)
- **vit** -- [ViTModel](/docs/transformers/v5.5.1/ko/model_doc/vit#transformers.ViTModel) (ViT model)
- **vit_mae** -- `ViTMAEModel` (ViTMAE model)
- **vit_msn** -- `ViTMSNModel` (ViTMSN model)
- **vitdet** -- `VitDetModel` (VitDet model)
- **vits** -- `VitsModel` (VITS model)
- **vivit** -- [VivitModel](/docs/transformers/v5.5.1/ko/model_doc/vivit#transformers.VivitModel) (ViViT model)
- **vjepa2** -- `VJEPA2Model` (VJEPA2Model model)
- **voxtral** -- `VoxtralForConditionalGeneration` (Voxtral model)
- **voxtral_encoder** -- `VoxtralEncoder` (Voxtral Encoder model)
- **voxtral_realtime** -- `VoxtralRealtimeForConditionalGeneration` (VoxtralRealtime model)
- **voxtral_realtime_encoder** -- `VoxtralRealtimeEncoder` (VoxtralRealtime Encoder model)
- **voxtral_realtime_text** -- `VoxtralRealtimeTextModel` (VoxtralRealtime Text Model model)
- **wav2vec2** -- `Wav2Vec2Model` (Wav2Vec2 model)
- **wav2vec2-bert** -- `Wav2Vec2BertModel` (Wav2Vec2-BERT model)
- **wav2vec2-conformer** -- `Wav2Vec2ConformerModel` (Wav2Vec2-Conformer model)
- **wavlm** -- `WavLMModel` (WavLM model)
- **whisper** -- [WhisperModel](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperModel) (Whisper model)
- **xclip** -- [XCLIPModel](/docs/transformers/v5.5.1/ko/model_doc/xclip#transformers.XCLIPModel) (X-CLIP model)
- **xcodec** -- `XcodecModel` (X-CODEC model)
- **xglm** -- `XGLMModel` (XGLM model)
- **xlm** -- `XLMModel` (XLM model)
- **xlm-roberta** -- `XLMRobertaModel` (XLM-RoBERTa model)
- **xlm-roberta-xl** -- `XLMRobertaXLModel` (XLM-RoBERTa-XL model)
- **xlnet** -- `XLNetModel` (XLNet model)
- **xlstm** -- `xLSTMModel` (xLSTM model)
- **xmod** -- `XmodModel` (X-MOD model)
- **yolos** -- `YolosModel` (YOLOS model)
- **yoso** -- `YosoModel` (YOSO model)
- **youtu** -- `YoutuModel` (Youtu model)
- **zamba** -- `ZambaModel` (Zamba model)
- **zamba2** -- `Zamba2Model` (Zamba2 model)

The model is set in evaluation mode by default using `model.eval()` (so, for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModel

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModel.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModel.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
```
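
As a brief follow-up sketch (illustrative only): the returned object is an instance of the concrete architecture class resolved from the checkpoint's configuration, and it starts out in evaluation mode as noted above:

```python
>>> type(model).__name__
'BertModel'

>>> model.training  # from_pretrained() calls model.eval() by default
False
>>> _ = model.train()  # switch back to training mode before fine-tuning
>>> model.training
True
```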

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (`dict[str, torch.Tensor]`, *optional*) : A state dictionary to use instead of a state dictionary loaded from the saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check whether using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
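
To illustrate the *directory* form of `pretrained_model_name_or_path` described above, a minimal save-and-reload sketch (the local path is hypothetical):

```python
>>> from transformers import AutoModel

>>> model = AutoModel.from_pretrained("google-bert/bert-base-cased")
>>> model.save_pretrained("./my_model_directory/")

>>> # The config.json written next to the weights lets AutoModel resolve
>>> # the right architecture class when reloading from the local directory.
>>> reloaded = AutoModel.from_pretrained("./my_model_directory/")
```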

## Generic pretraining classes[[generic-pretraining-classes]]

The following auto classes can be used to instantiate a model with a pretraining head.

### AutoModelForPreTraining[[transformers.AutoModelForPreTraining]]

#### transformers.AutoModelForPreTraining[[transformers.AutoModelForPreTraining]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L1976)

This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForPreTraining.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - [AlbertConfig](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertConfig) configuration class: [AlbertForPreTraining](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForPreTraining) (ALBERT model)
  - `AudioFlamingo3Config` configuration class: `AudioFlamingo3ForConditionalGeneration` (AudioFlamingo3 model)
  - [BartConfig](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartConfig) configuration class: [BartForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForConditionalGeneration) (BART model)
  - [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) configuration class: [BertForPreTraining](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForPreTraining) (BERT model)
  - [BigBirdConfig](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdConfig) configuration class: [BigBirdForPreTraining](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForPreTraining) (BigBird model)
  - `BloomConfig` configuration class: `BloomForCausalLM` (BLOOM model)
  - `CTRLConfig` configuration class: `CTRLLMHeadModel` (CTRL model)
  - `CamembertConfig` configuration class: `CamembertForMaskedLM` (CamemBERT model)
  - `ColModernVBertConfig` configuration class: `ColModernVBertForRetrieval` (ColModernVBert model)
  - `ColPaliConfig` configuration class: `ColPaliForRetrieval` (ColPali model)
  - `ColQwen2Config` configuration class: `ColQwen2ForRetrieval` (ColQwen2 model)
  - `Data2VecTextConfig` configuration class: `Data2VecTextForMaskedLM` (Data2VecText model)
  - [DebertaConfig](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaConfig) configuration class: [DebertaForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaForMaskedLM) (DeBERTa model)
  - [DebertaV2Config](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Config) configuration class: [DebertaV2ForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForMaskedLM) (DeBERTa-v2 model)
  - `DistilBertConfig` configuration class: `DistilBertForMaskedLM` (DistilBERT model)
  - [ElectraConfig](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraConfig) configuration class: [ElectraForPreTraining](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForPreTraining) (ELECTRA model)
  - `ErnieConfig` configuration class: `ErnieForPreTraining` (ERNIE model)
  - `EvollaConfig` configuration class: `EvollaForProteinText2Text` (Evolla model)
  - [Exaone4Config](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4Config) configuration class: [Exaone4ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4ForCausalLM) (EXAONE-4.0 model)
  - [ExaoneMoeConfig](/docs/transformers/v5.5.1/ko/model_doc/exaone_moe#transformers.ExaoneMoeConfig) configuration class: [ExaoneMoeForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/exaone_moe#transformers.ExaoneMoeForCausalLM) (EXAONE-MoE model)
  - `FNetConfig` configuration class: `FNetForPreTraining` (FNet model)
  - `FSMTConfig` configuration class: `FSMTForConditionalGeneration` (FairSeq Machine-Translation model)
  - `FalconMambaConfig` configuration class: `FalconMambaForCausalLM` (FalconMamba model)
  - `FlaubertConfig` configuration class: `FlaubertWithLMHeadModel` (FlauBERT model)
  - `FlavaConfig` configuration class: `FlavaForPreTraining` (FLAVA model)
  - `Florence2Config` configuration class: `Florence2ForConditionalGeneration` (Florence2 model)
  - `FunnelConfig` configuration class: `FunnelForPreTraining` (Funnel Transformer model)
  - [GPT2Config](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Config) configuration class: [GPT2LMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2LMHeadModel) (OpenAI GPT-2 model)
  - `GPTBigCodeConfig` configuration class: `GPTBigCodeForCausalLM` (GPTBigCode model)
  - [Gemma3Config](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3Config) configuration class: [Gemma3ForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3ForConditionalGeneration) (Gemma3ForConditionalGeneration model)
  - `Gemma4Config` configuration class: `Gemma4ForConditionalGeneration` (Gemma4ForConditionalGeneration model)
  - `GlmAsrConfig` configuration class: `GlmAsrForConditionalGeneration` (GLM-ASR model)
  - `HieraConfig` configuration class: `HieraForPreTraining` (Hiera model)
  - `IBertConfig` configuration class: `IBertForMaskedLM` (I-BERT model)
  - `Idefics2Config` configuration class: `Idefics2ForConditionalGeneration` (Idefics2 model)
  - `Idefics3Config` configuration class: `Idefics3ForConditionalGeneration` (Idefics3 model)
  - `IdeficsConfig` configuration class: `IdeficsForVisionText2Text` (IDEFICS model)
  - `JanusConfig` configuration class: `JanusForConditionalGeneration` (Janus model)
  - `LayoutLMConfig` configuration class: `LayoutLMForMaskedLM` (LayoutLM model)
  - `LlavaConfig` configuration class: `LlavaForConditionalGeneration` (LLaVa model)
  - `LlavaNextConfig` configuration class: `LlavaNextForConditionalGeneration` (LLaVA-NeXT model)
  - `LlavaNextVideoConfig` configuration class: `LlavaNextVideoForConditionalGeneration` (LLaVa-NeXT-Video model)
  - `LlavaOnevisionConfig` configuration class: `LlavaOnevisionForConditionalGeneration` (LLaVA-Onevision model)
  - `LongformerConfig` configuration class: `LongformerForMaskedLM` (Longformer model)
  - `LukeConfig` configuration class: `LukeForMaskedLM` (LUKE model)
  - `LxmertConfig` configuration class: `LxmertForPreTraining` (LXMERT model)
  - `MPNetConfig` configuration class: `MPNetForMaskedLM` (MPNet model)
  - [Mamba2Config](/docs/transformers/v5.5.1/ko/model_doc/mamba2#transformers.Mamba2Config) configuration class: [Mamba2ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/mamba2#transformers.Mamba2ForCausalLM) (mamba2 model)
  - [MambaConfig](/docs/transformers/v5.5.1/ko/model_doc/mamba#transformers.MambaConfig) configuration class: [MambaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/mamba#transformers.MambaForCausalLM) (Mamba model)
  - `MegatronBertConfig` configuration class: `MegatronBertForPreTraining` (Megatron-BERT model)
  - `Mistral3Config` configuration class: `Mistral3ForConditionalGeneration` (Mistral3 model)
  - `Mistral4Config` configuration class: `Mistral4ForCausalLM` (Mistral4 model)
  - `MllamaConfig` configuration class: `MllamaForConditionalGeneration` (Mllama model)
  - `MobileBertConfig` configuration class: `MobileBertForPreTraining` (MobileBERT model)
  - `MptConfig` configuration class: `MptForCausalLM` (MPT model)
  - `MraConfig` configuration class: `MraForMaskedLM` (MRA model)
  - `MusicFlamingoConfig` configuration class: `MusicFlamingoForConditionalGeneration` (MusicFlamingo model)
  - `MvpConfig` configuration class: `MvpForConditionalGeneration` (MVP model)
  - `NanoChatConfig` configuration class: `NanoChatForCausalLM` (NanoChat model)
  - `NllbMoeConfig` configuration class: `NllbMoeForConditionalGeneration` (NLLB-MOE model)
  - [OpenAIGPTConfig](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTConfig) configuration class: [OpenAIGPTLMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTLMHeadModel) (OpenAI GPT model)
  - [PaliGemmaConfig](/docs/transformers/v5.5.1/ko/model_doc/paligemma#transformers.PaliGemmaConfig) configuration class: [PaliGemmaForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/paligemma#transformers.PaliGemmaForConditionalGeneration) (PaliGemma model)
  - `Qwen2AudioConfig` configuration class: `Qwen2AudioForConditionalGeneration` (Qwen2Audio model)
  - `RoCBertConfig` configuration class: `RoCBertForPreTraining` (RoCBert model)
  - [RobertaConfig](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaConfig) configuration class: [RobertaForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForMaskedLM) (RoBERTa model)
  - `RobertaPreLayerNormConfig` configuration class: `RobertaPreLayerNormForMaskedLM` (RoBERTa-PreLayerNorm model)
  - `RwkvConfig` configuration class: `RwkvForCausalLM` (RWKV model)
  - `SplinterConfig` configuration class: `SplinterForPreTraining` (Splinter model)
  - `SqueezeBertConfig` configuration class: `SqueezeBertForMaskedLM` (SqueezeBERT model)
  - `SwitchTransformersConfig` configuration class: `SwitchTransformersForConditionalGeneration` (SwitchTransformers model)
  - `T5Config` configuration class: `T5ForConditionalGeneration` (T5 model)
  - `T5Gemma2Config` configuration class: `T5Gemma2ForConditionalGeneration` (T5Gemma2 model)
  - `T5GemmaConfig` configuration class: `T5GemmaForConditionalGeneration` (T5Gemma model)
  - `TapasConfig` configuration class: `TapasForMaskedLM` (TAPAS model)
  - `UniSpeechConfig` configuration class: `UniSpeechForPreTraining` (UniSpeech model)
  - `UniSpeechSatConfig` configuration class: `UniSpeechSatForPreTraining` (UniSpeechSat model)
  - `ViTMAEConfig` configuration class: `ViTMAEForPreTraining` (ViTMAE model)
  - `VibeVoiceAsrConfig` configuration class: `VibeVoiceAsrForConditionalGeneration` (VibeVoiceAsr model)
  - `VideoLlavaConfig` configuration class: `VideoLlavaForConditionalGeneration` (VideoLlava model)
  - `VideoMAEConfig` configuration class: `VideoMAEForPreTraining` (VideoMAE model)
  - `VipLlavaConfig` configuration class: `VipLlavaForConditionalGeneration` (VipLlava model)
  - `VisualBertConfig` configuration class: `VisualBertForPreTraining` (VisualBERT model)
  - `VoxtralConfig` configuration class: `VoxtralForConditionalGeneration` (Voxtral model)
  - `VoxtralRealtimeConfig` configuration class: `VoxtralRealtimeForConditionalGeneration` (VoxtralRealtime model)
  - `Wav2Vec2Config` configuration class: `Wav2Vec2ForPreTraining` (Wav2Vec2 model)
  - `Wav2Vec2ConformerConfig` configuration class: `Wav2Vec2ConformerForPreTraining` (Wav2Vec2-Conformer model)
  - `XLMConfig` configuration class: `XLMWithLMHeadModel` (XLM model)
  - `XLMRobertaConfig` configuration class: `XLMRobertaForMaskedLM` (XLM-RoBERTa model)
  - `XLMRobertaXLConfig` configuration class: `XLMRobertaXLForMaskedLM` (XLM-RoBERTa-XL model)
  - `XLNetConfig` configuration class: `XLNetLMHeadModel` (XLNet model)
  - `XmodConfig` configuration class: `XmodForMaskedLM` (X-MOD model)
  - `xLSTMConfig` configuration class: `xLSTMForCausalLM` (xLSTM model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a pretraining head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForPreTraining

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForPreTraining.from_config(config)
```
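
As a short follow-up sketch (illustrative only): the instantiated class follows the configuration's `model_type`, and its weights are freshly initialized rather than loaded, per the note above:

```python
>>> type(model).__name__
'BertForPreTraining'

>>> # No checkpoint weights were loaded; to get the trained weights instead:
>>> model = AutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased")
```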

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - [AlbertConfig](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertConfig) configuration class: [AlbertForPreTraining](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForPreTraining) (ALBERT model) - `AudioFlamingo3Config` configuration class: `AudioFlamingo3ForConditionalGeneration` (AudioFlamingo3 model) - [BartConfig](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartConfig) configuration class: [BartForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForConditionalGeneration) (BART model) - [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) configuration class: [BertForPreTraining](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForPreTraining) (BERT model) - [BigBirdConfig](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdConfig) configuration class: [BigBirdForPreTraining](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForPreTraining) (BigBird model) - `BloomConfig` configuration class: `BloomForCausalLM` (BLOOM model) - `CTRLConfig` configuration class: `CTRLLMHeadModel` (CTRL model) - `CamembertConfig` configuration class: `CamembertForMaskedLM` (CamemBERT model) - `ColModernVBertConfig` configuration class: `ColModernVBertForRetrieval` (ColModernVBert model) - `ColPaliConfig` configuration class: `ColPaliForRetrieval` (ColPali model) - `ColQwen2Config` configuration class: `ColQwen2ForRetrieval` (ColQwen2 model) - `Data2VecTextConfig` configuration class: `Data2VecTextForMaskedLM` (Data2VecText model) - [DebertaConfig](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaConfig) configuration class: [DebertaForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaForMaskedLM) (DeBERTa model) - [DebertaV2Config](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Config) configuration class: [DebertaV2ForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForMaskedLM) (DeBERTa-v2 model) - `DistilBertConfig` configuration class: `DistilBertForMaskedLM` (DistilBERT model) - [ElectraConfig](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraConfig) configuration class: [ElectraForPreTraining](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForPreTraining) (ELECTRA model) - `ErnieConfig` configuration class: `ErnieForPreTraining` (ERNIE model) - `EvollaConfig` configuration class: `EvollaForProteinText2Text` (Evolla model) - [Exaone4Config](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4Config) configuration class: [Exaone4ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4ForCausalLM) (EXAONE-4.0 model) - [ExaoneMoeConfig](/docs/transformers/v5.5.1/ko/model_doc/exaone_moe#transformers.ExaoneMoeConfig) configuration class: [ExaoneMoeForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/exaone_moe#transformers.ExaoneMoeForCausalLM) (EXAONE-MoE model) - `FNetConfig` configuration class: `FNetForPreTraining` (FNet model) - `FSMTConfig` configuration class: `FSMTForConditionalGeneration` (FairSeq Machine-Translation model) - `FalconMambaConfig` configuration class: `FalconMambaForCausalLM` (FalconMamba model) - `FlaubertConfig` configuration class: `FlaubertWithLMHeadModel` 
(FlauBERT model) - `FlavaConfig` configuration class: `FlavaForPreTraining` (FLAVA model) - `Florence2Config` configuration class: `Florence2ForConditionalGeneration` (Florence2 model) - `FunnelConfig` configuration class: `FunnelForPreTraining` (Funnel Transformer model) - [GPT2Config](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Config) configuration class: [GPT2LMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2LMHeadModel) (OpenAI GPT-2 model) - `GPTBigCodeConfig` configuration class: `GPTBigCodeForCausalLM` (GPTBigCode model) - [Gemma3Config](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3Config) configuration class: [Gemma3ForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3ForConditionalGeneration) (Gemma3ForConditionalGeneration model) - `Gemma4Config` configuration class: `Gemma4ForConditionalGeneration` (Gemma4ForConditionalGeneration model) - `GlmAsrConfig` configuration class: `GlmAsrForConditionalGeneration` (GLM-ASR model) - `HieraConfig` configuration class: `HieraForPreTraining` (Hiera model) - `IBertConfig` configuration class: `IBertForMaskedLM` (I-BERT model) - `Idefics2Config` configuration class: `Idefics2ForConditionalGeneration` (Idefics2 model) - `Idefics3Config` configuration class: `Idefics3ForConditionalGeneration` (Idefics3 model) - `IdeficsConfig` configuration class: `IdeficsForVisionText2Text` (IDEFICS model) - `JanusConfig` configuration class: `JanusForConditionalGeneration` (Janus model) - `LayoutLMConfig` configuration class: `LayoutLMForMaskedLM` (LayoutLM model) - `LlavaConfig` configuration class: `LlavaForConditionalGeneration` (LLaVa model) - `LlavaNextConfig` configuration class: `LlavaNextForConditionalGeneration` (LLaVA-NeXT model) - `LlavaNextVideoConfig` configuration class: `LlavaNextVideoForConditionalGeneration` (LLaVa-NeXT-Video model) - `LlavaOnevisionConfig` configuration class: `LlavaOnevisionForConditionalGeneration` (LLaVA-Onevision model) - `LongformerConfig` configuration class: `LongformerForMaskedLM` (Longformer model) - `LukeConfig` configuration class: `LukeForMaskedLM` (LUKE model) - `LxmertConfig` configuration class: `LxmertForPreTraining` (LXMERT model) - `MPNetConfig` configuration class: `MPNetForMaskedLM` (MPNet model) - [Mamba2Config](/docs/transformers/v5.5.1/ko/model_doc/mamba2#transformers.Mamba2Config) configuration class: [Mamba2ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/mamba2#transformers.Mamba2ForCausalLM) (mamba2 model) - [MambaConfig](/docs/transformers/v5.5.1/ko/model_doc/mamba#transformers.MambaConfig) configuration class: [MambaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/mamba#transformers.MambaForCausalLM) (Mamba model) - `MegatronBertConfig` configuration class: `MegatronBertForPreTraining` (Megatron-BERT model) - `Mistral3Config` configuration class: `Mistral3ForConditionalGeneration` (Mistral3 model) - `Mistral4Config` configuration class: `Mistral4ForCausalLM` (Mistral4 model) - `MllamaConfig` configuration class: `MllamaForConditionalGeneration` (Mllama model) - `MobileBertConfig` configuration class: `MobileBertForPreTraining` (MobileBERT model) - `MptConfig` configuration class: `MptForCausalLM` (MPT model) - `MraConfig` configuration class: `MraForMaskedLM` (MRA model) - `MusicFlamingoConfig` configuration class: `MusicFlamingoForConditionalGeneration` (MusicFlamingo model) - `MvpConfig` configuration class: `MvpForConditionalGeneration` (MVP model) - `NanoChatConfig` configuration class: 
`NanoChatForCausalLM` (NanoChat model) - `NllbMoeConfig` configuration class: `NllbMoeForConditionalGeneration` (NLLB-MOE model) - [OpenAIGPTConfig](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTConfig) configuration class: [OpenAIGPTLMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTLMHeadModel) (OpenAI GPT model) - [PaliGemmaConfig](/docs/transformers/v5.5.1/ko/model_doc/paligemma#transformers.PaliGemmaConfig) configuration class: [PaliGemmaForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/paligemma#transformers.PaliGemmaForConditionalGeneration) (PaliGemma model) - `Qwen2AudioConfig` configuration class: `Qwen2AudioForConditionalGeneration` (Qwen2Audio model) - `RoCBertConfig` configuration class: `RoCBertForPreTraining` (RoCBert model) - [RobertaConfig](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaConfig) configuration class: [RobertaForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForMaskedLM) (RoBERTa model) - `RobertaPreLayerNormConfig` configuration class: `RobertaPreLayerNormForMaskedLM` (RoBERTa-PreLayerNorm model) - `RwkvConfig` configuration class: `RwkvForCausalLM` (RWKV model) - `SplinterConfig` configuration class: `SplinterForPreTraining` (Splinter model) - `SqueezeBertConfig` configuration class: `SqueezeBertForMaskedLM` (SqueezeBERT model) - `SwitchTransformersConfig` configuration class: `SwitchTransformersForConditionalGeneration` (SwitchTransformers model) - `T5Config` configuration class: `T5ForConditionalGeneration` (T5 model) - `T5Gemma2Config` configuration class: `T5Gemma2ForConditionalGeneration` (T5Gemma2 model) - `T5GemmaConfig` configuration class: `T5GemmaForConditionalGeneration` (T5Gemma model) - `TapasConfig` configuration class: `TapasForMaskedLM` (TAPAS model) - `UniSpeechConfig` configuration class: `UniSpeechForPreTraining` (UniSpeech model) - `UniSpeechSatConfig` configuration class: `UniSpeechSatForPreTraining` (UniSpeechSat model) - `ViTMAEConfig` configuration class: `ViTMAEForPreTraining` (ViTMAE model) - `VibeVoiceAsrConfig` configuration class: `VibeVoiceAsrForConditionalGeneration` (VibeVoiceAsr model) - `VideoLlavaConfig` configuration class: `VideoLlavaForConditionalGeneration` (VideoLlava model) - `VideoMAEConfig` configuration class: `VideoMAEForPreTraining` (VideoMAE model) - `VipLlavaConfig` configuration class: `VipLlavaForConditionalGeneration` (VipLlava model) - `VisualBertConfig` configuration class: `VisualBertForPreTraining` (VisualBERT model) - `VoxtralConfig` configuration class: `VoxtralForConditionalGeneration` (Voxtral model) - `VoxtralRealtimeConfig` configuration class: `VoxtralRealtimeForConditionalGeneration` (VoxtralRealtime model) - `Wav2Vec2Config` configuration class: `Wav2Vec2ForPreTraining` (Wav2Vec2 model) - `Wav2Vec2ConformerConfig` configuration class: `Wav2Vec2ConformerForPreTraining` (Wav2Vec2-Conformer model) - `XLMConfig` configuration class: `XLMWithLMHeadModel` (XLM model) - `XLMRobertaConfig` configuration class: `XLMRobertaForMaskedLM` (XLM-RoBERTa model) - `XLMRobertaXLConfig` configuration class: `XLMRobertaXLForMaskedLM` (XLM-RoBERTa-XL model) - `XLNetConfig` configuration class: `XLNetLMHeadModel` (XLNet model) - `XmodConfig` configuration class: `XmodForMaskedLM` (X-MOD model) - `xLSTMConfig` configuration class: `xLSTMForCausalLM` (xLSTM model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
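A hedged illustration of selecting the attention backend at instantiation time (explicitly requesting `"sdpa"` assumes a torch build that supports it):

```python
>>> from transformers import AutoConfig, AutoModelForPreTraining

>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> # explicitly request SDPA; omit the argument to let the library pick a default
>>> model = AutoModelForPreTraining.from_config(config, attn_implementation="sdpa")
```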
#### from_pretrained[[transformers.AutoModelForPreTraining.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **albert** -- [AlbertForPreTraining](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForPreTraining) (ALBERT model)
- **audioflamingo3** -- `AudioFlamingo3ForConditionalGeneration` (AudioFlamingo3 model)
- **bart** -- [BartForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForConditionalGeneration) (BART model)
- **bert** -- [BertForPreTraining](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForPreTraining) (BERT model)
- **big_bird** -- [BigBirdForPreTraining](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForPreTraining) (BigBird model)
- **bloom** -- `BloomForCausalLM` (BLOOM model)
- **camembert** -- `CamembertForMaskedLM` (CamemBERT model)
- **colmodernvbert** -- `ColModernVBertForRetrieval` (ColModernVBert model)
- **colpali** -- `ColPaliForRetrieval` (ColPali model)
- **colqwen2** -- `ColQwen2ForRetrieval` (ColQwen2 model)
- **ctrl** -- `CTRLLMHeadModel` (CTRL model)
- **data2vec-text** -- `Data2VecTextForMaskedLM` (Data2VecText model)
- **deberta** -- [DebertaForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaForMaskedLM) (DeBERTa model)
- **deberta-v2** -- [DebertaV2ForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForMaskedLM) (DeBERTa-v2 model)
- **distilbert** -- `DistilBertForMaskedLM` (DistilBERT model)
- **electra** -- [ElectraForPreTraining](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForPreTraining) (ELECTRA model)
- **ernie** -- `ErnieForPreTraining` (ERNIE model)
- **evolla** -- `EvollaForProteinText2Text` (Evolla model)
- **exaone4** -- [Exaone4ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4ForCausalLM) (EXAONE-4.0 model)
- **exaone_moe** -- [ExaoneMoeForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/exaone_moe#transformers.ExaoneMoeForCausalLM) (EXAONE-MoE model)
- **falcon_mamba** -- `FalconMambaForCausalLM` (FalconMamba model)
- **flaubert** -- `FlaubertWithLMHeadModel` (FlauBERT model)
- **flava** -- `FlavaForPreTraining` (FLAVA model)
- **florence2** -- `Florence2ForConditionalGeneration` (Florence2 model)
- **fnet** -- `FNetForPreTraining` (FNet model)
- **fsmt** -- `FSMTForConditionalGeneration` (FairSeq Machine-Translation model)
- **funnel** -- `FunnelForPreTraining` (Funnel Transformer model)
- **gemma3** -- [Gemma3ForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3ForConditionalGeneration) (Gemma3ForConditionalGeneration model)
- **gemma4** -- `Gemma4ForConditionalGeneration` (Gemma4ForConditionalGeneration model)
- **glmasr** -- `GlmAsrForConditionalGeneration` (GLM-ASR model)
- **gpt-sw3** -- [GPT2LMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2LMHeadModel) (GPT-Sw3 model)
- **gpt2** -- [GPT2LMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2LMHeadModel) (OpenAI GPT-2 model)
- **gpt_bigcode** -- `GPTBigCodeForCausalLM` (GPTBigCode model)
- **hiera** -- `HieraForPreTraining` (Hiera model)
- **ibert** -- `IBertForMaskedLM` (I-BERT model)
- **idefics** -- `IdeficsForVisionText2Text` (IDEFICS model)
- **idefics2** -- `Idefics2ForConditionalGeneration` (Idefics2 model)
- **idefics3** -- `Idefics3ForConditionalGeneration` (Idefics3 model)
- **janus** -- `JanusForConditionalGeneration` (Janus model)
- **layoutlm** -- `LayoutLMForMaskedLM` (LayoutLM model)
- **llava** -- `LlavaForConditionalGeneration` (LLaVa model)
- **llava_next** -- `LlavaNextForConditionalGeneration` (LLaVA-NeXT model)
- **llava_next_video** -- `LlavaNextVideoForConditionalGeneration` (LLaVa-NeXT-Video model)
- **llava_onevision** -- `LlavaOnevisionForConditionalGeneration` (LLaVA-Onevision model)
- **longformer** -- `LongformerForMaskedLM` (Longformer model)
- **luke** -- `LukeForMaskedLM` (LUKE model)
- **lxmert** -- `LxmertForPreTraining` (LXMERT model)
- **mamba** -- [MambaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/mamba#transformers.MambaForCausalLM) (Mamba model)
- **mamba2** -- [Mamba2ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/mamba2#transformers.Mamba2ForCausalLM) (mamba2 model)
- **megatron-bert** -- `MegatronBertForPreTraining` (Megatron-BERT model)
- **mistral3** -- `Mistral3ForConditionalGeneration` (Mistral3 model)
- **mistral4** -- `Mistral4ForCausalLM` (Mistral4 model)
- **mllama** -- `MllamaForConditionalGeneration` (Mllama model)
- **mobilebert** -- `MobileBertForPreTraining` (MobileBERT model)
- **mpnet** -- `MPNetForMaskedLM` (MPNet model)
- **mpt** -- `MptForCausalLM` (MPT model)
- **mra** -- `MraForMaskedLM` (MRA model)
- **musicflamingo** -- `MusicFlamingoForConditionalGeneration` (MusicFlamingo model)
- **mvp** -- `MvpForConditionalGeneration` (MVP model)
- **nanochat** -- `NanoChatForCausalLM` (NanoChat model)
- **nllb-moe** -- `NllbMoeForConditionalGeneration` (NLLB-MOE model)
- **openai-gpt** -- [OpenAIGPTLMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTLMHeadModel) (OpenAI GPT model)
- **paligemma** -- [PaliGemmaForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/paligemma#transformers.PaliGemmaForConditionalGeneration) (PaliGemma model)
- **qwen2_audio** -- `Qwen2AudioForConditionalGeneration` (Qwen2Audio model)
- **roberta** -- [RobertaForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForMaskedLM) (RoBERTa model)
- **roberta-prelayernorm** -- `RobertaPreLayerNormForMaskedLM` (RoBERTa-PreLayerNorm model)
- **roc_bert** -- `RoCBertForPreTraining` (RoCBert model)
- **rwkv** -- `RwkvForCausalLM` (RWKV model)
- **splinter** -- `SplinterForPreTraining` (Splinter model)
- **squeezebert** -- `SqueezeBertForMaskedLM` (SqueezeBERT model)
- **switch_transformers** -- `SwitchTransformersForConditionalGeneration` (SwitchTransformers model)
- **t5** -- `T5ForConditionalGeneration` (T5 model)
- **t5gemma** -- `T5GemmaForConditionalGeneration` (T5Gemma model)
- **t5gemma2** -- `T5Gemma2ForConditionalGeneration` (T5Gemma2 model)
- **tapas** -- `TapasForMaskedLM` (TAPAS model)
- **unispeech** -- `UniSpeechForPreTraining` (UniSpeech model)
- **unispeech-sat** -- `UniSpeechSatForPreTraining` (UniSpeechSat model)
- **vibevoice_asr** -- `VibeVoiceAsrForConditionalGeneration` (VibeVoiceAsr model)
- **video_llava** -- `VideoLlavaForConditionalGeneration` (VideoLlava model)
- **videomae** -- `VideoMAEForPreTraining` (VideoMAE model)
- **vipllava** -- `VipLlavaForConditionalGeneration` (VipLlava model)
- **visual_bert** -- `VisualBertForPreTraining` (VisualBERT model)
- **vit_mae** -- `ViTMAEForPreTraining` (ViTMAE model)
- **voxtral** -- `VoxtralForConditionalGeneration` (Voxtral model)
- **voxtral_realtime** -- `VoxtralRealtimeForConditionalGeneration` (VoxtralRealtime model)
- **wav2vec2** -- `Wav2Vec2ForPreTraining` (Wav2Vec2 model)
- **wav2vec2-conformer** -- `Wav2Vec2ConformerForPreTraining` (Wav2Vec2-Conformer model)
- **xlm** -- `XLMWithLMHeadModel` (XLM model)
- **xlm-roberta** -- `XLMRobertaForMaskedLM` (XLM-RoBERTa model)
- **xlm-roberta-xl** -- `XLMRobertaXLForMaskedLM` (XLM-RoBERTa-XL model)
- **xlnet** -- `XLNetLMHeadModel` (XLNet model)
- **xlstm** -- `xLSTMForCausalLM` (xLSTM model)
- **xmod** -- `XmodForMaskedLM` (X-MOD model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.
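
For instance, a minimal sketch of how the `model_type`-driven selection and the eval/train toggle play out (the checkpoint is illustrative):

```python
>>> from transformers import AutoConfig, AutoModelForPreTraining

>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> config.model_type  # this property drives the class selection above
'bert'
>>> model = AutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased")
>>> type(model).__name__
'BertForPreTraining'
>>> model.training  # evaluation mode by default
False
>>> model = model.train()  # switch back before fine-tuning
```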

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForPreTraining

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
```

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:

- A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
- A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

- The model is a model provided by the library (loaded with the *model id* string of a pretrained model).
- The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (`dict[str, torch.Tensor]`, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:

- If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, `kwargs` will be first passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
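To make the two `kwargs` code paths concrete, here is a small sketch (the checkpoint, revision, and cache path are illustrative):

```python
>>> from transformers import AutoConfig, AutoModelForPreTraining

>>> # without an explicit `config`, kwargs first override configuration attributes
>>> model = AutoModelForPreTraining.from_pretrained(
...     "google-bert/bert-base-cased", revision="main", cache_dir="./hf-cache", output_attentions=True
... )
>>> model.config.output_attentions
True

>>> # with an explicit `config`, the configuration is taken as-is and remaining
>>> # kwargs are forwarded to the model's __init__ method
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model = AutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased", config=config)
>>> model.config.output_attentions
True
```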

## Natural Language Processing[[natural-language-processing]]

The following auto classes are available for the natural language processing tasks below.

### AutoModelForCausalLM[[transformers.AutoModelForCausalLM]]

#### transformers.AutoModelForCausalLM[[transformers.AutoModelForCausalLM]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L1983)

This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).
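
A quick sketch of the expected failure mode when constructing the class directly (in current releases the error is raised as an `EnvironmentError`, though treating any exception here as expected is safest):

```python
>>> from transformers import AutoModelForCausalLM

>>> try:
...     AutoModelForCausalLM()  # direct construction is unsupported
... except EnvironmentError:  # assumption: raised as EnvironmentError
...     print("use from_pretrained() or from_config() instead")
use from_pretrained() or from_config() instead
```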

#### from_config[[transformers.AutoModelForCausalLM.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `AfmoeConfig` configuration class: `AfmoeForCausalLM` (AFMoE model)
  - `ApertusConfig` configuration class: `ApertusForCausalLM` (Apertus model)
  - `ArceeConfig` configuration class: `ArceeForCausalLM` (Arcee model)
  - `AriaTextConfig` configuration class: `AriaTextForCausalLM` (AriaText model)
  - `BambaConfig` configuration class: `BambaForCausalLM` (Bamba model)
  - [BartConfig](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartConfig) configuration class: [BartForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForCausalLM) (BART model)
  - [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) configuration class: [BertLMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertLMHeadModel) (BERT model)
  - `BertGenerationConfig` configuration class: `BertGenerationDecoder` (Bert Generation model)
  - [BigBirdConfig](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdConfig) configuration class: [BigBirdForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForCausalLM) (BigBird model)
  - `BigBirdPegasusConfig` configuration class: `BigBirdPegasusForCausalLM` (BigBird-Pegasus model)
  - [BioGptConfig](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptConfig) configuration class: [BioGptForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptForCausalLM) (BioGpt model)
  - `BitNetConfig` configuration class: `BitNetForCausalLM` (BitNet model)
  - `BlenderbotConfig` configuration class: `BlenderbotForCausalLM` (Blenderbot model)
  - `BlenderbotSmallConfig` configuration class: `BlenderbotSmallForCausalLM` (BlenderbotSmall model)
  - `BloomConfig` configuration class: `BloomForCausalLM` (BLOOM model)
  - `BltConfig` configuration class: `BltForCausalLM` (Blt model)
  - `CTRLConfig` configuration class: `CTRLLMHeadModel` (CTRL model)
  - `CamembertConfig` configuration class: `CamembertForCausalLM` (CamemBERT model)
  - [CodeGenConfig](/docs/transformers/v5.5.1/ko/model_doc/codegen#transformers.CodeGenConfig) configuration class: [CodeGenForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/codegen#transformers.CodeGenForCausalLM) (CodeGen model)
  - `Cohere2Config` configuration class: `Cohere2ForCausalLM` (Cohere2 model)
  - [CohereConfig](/docs/transformers/v5.5.1/ko/model_doc/cohere#transformers.CohereConfig) configuration class: [CohereForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/cohere#transformers.CohereForCausalLM) (Cohere model)
  - `CpmAntConfig` configuration class: `CpmAntForCausalLM` (CPM-Ant model)
  - `CwmConfig` configuration class: `CwmForCausalLM` (Code World Model (CWM) model)
  - `Data2VecTextConfig` configuration class: `Data2VecTextForCausalLM` (Data2VecText model)
  - [DbrxConfig](/docs/transformers/v5.5.1/ko/model_doc/dbrx#transformers.DbrxConfig) configuration class: [DbrxForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/dbrx#transformers.DbrxForCausalLM) (DBRX model)
  - `DeepseekV2Config` configuration class: `DeepseekV2ForCausalLM` (DeepSeek-V2 model)
  - [DeepseekV3Config](/docs/transformers/v5.5.1/ko/model_doc/deepseek_v3#transformers.DeepseekV3Config) configuration class: [DeepseekV3ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/deepseek_v3#transformers.DeepseekV3ForCausalLM) (DeepSeek-V3 model)
  - `DiffLlamaConfig` configuration class: `DiffLlamaForCausalLM` (DiffLlama model)
  - `DogeConfig` configuration class: `DogeForCausalLM` (Doge model)
  - `Dots1Config` configuration class: `Dots1ForCausalLM` (dots1 model)
  - [ElectraConfig](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraConfig) configuration class: [ElectraForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForCausalLM) (ELECTRA model)
  - `Emu3Config` configuration class: `Emu3ForCausalLM` (Emu3 model)
  - `Ernie4_5Config` configuration class: `Ernie4_5ForCausalLM` (Ernie4_5 model)
  - `Ernie4_5_MoeConfig` configuration class: `Ernie4_5_MoeForCausalLM` (Ernie4_5_MoE model)
  - `ErnieConfig` configuration class: `ErnieForCausalLM` (ERNIE model)
  - [Exaone4Config](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4Config) configuration class: [Exaone4ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4ForCausalLM) (EXAONE-4.0 model)
  - [ExaoneMoeConfig](/docs/transformers/v5.5.1/ko/model_doc/exaone_moe#transformers.ExaoneMoeConfig) configuration class: [ExaoneMoeForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/exaone_moe#transformers.ExaoneMoeForCausalLM) (EXAONE-MoE model)
  - `FalconConfig` configuration class: `FalconForCausalLM` (Falcon model)
  - `FalconH1Config` configuration class: `FalconH1ForCausalLM` (FalconH1 model)
  - `FalconMambaConfig` configuration class: `FalconMambaForCausalLM` (FalconMamba model)
  - `FlexOlmoConfig` configuration class: `FlexOlmoForCausalLM` (FlexOlmo model)
  - `FuyuConfig` configuration class: `FuyuForCausalLM` (Fuyu model)
  - [GPT2Config](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Config) configuration class: [GPT2LMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2LMHeadModel) (OpenAI GPT-2 model)
  - `GPTBigCodeConfig` configuration class: `GPTBigCodeForCausalLM` (GPTBigCode model)
  - `GPTJConfig` configuration class: `GPTJForCausalLM` (GPT-J model)
  - `GPTNeoConfig` configuration class: `GPTNeoForCausalLM` (GPT Neo model)
  - `GPTNeoXConfig` configuration class: `GPTNeoXForCausalLM` (GPT NeoX model)
  - [GPTNeoXJapaneseConfig](/docs/transformers/v5.5.1/ko/model_doc/gpt_neox_japanese#transformers.GPTNeoXJapaneseConfig) configuration class: [GPTNeoXJapaneseForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/gpt_neox_japanese#transformers.GPTNeoXJapaneseForCausalLM) (GPT NeoX Japanese model)
  - [Gemma2Config](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2Config) configuration class: [Gemma2ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2ForCausalLM) (Gemma2 model)
  - [Gemma3Config](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3Config) configuration class: [Gemma3ForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3ForConditionalGeneration) (Gemma3ForConditionalGeneration model)
  - [Gemma3TextConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3TextConfig) configuration class: [Gemma3ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3ForCausalLM) (Gemma3ForCausalLM model)
  - [Gemma3nConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nConfig) configuration class: [Gemma3nForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nForConditionalGeneration) (Gemma3nForConditionalGeneration model)
  - [Gemma3nTextConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nTextConfig) configuration class: [Gemma3nForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nForCausalLM) (Gemma3nForCausalLM model)
  - `Gemma4Config` configuration class: `Gemma4ForConditionalGeneration` (Gemma4ForConditionalGeneration model)
  - `Gemma4TextConfig` configuration class: `Gemma4ForCausalLM` (Gemma4ForCausalLM model)
  - [GemmaConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaConfig) configuration class: [GemmaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaForCausalLM) (Gemma model)
  - `GitConfig` configuration class: `GitForCausalLM` (GIT model)
  - `Glm4Config` configuration class: `Glm4ForCausalLM` (GLM4 model)
  - `Glm4MoeConfig` configuration class: `Glm4MoeForCausalLM` (Glm4MoE model)
  - `Glm4MoeLiteConfig` configuration class: `Glm4MoeLiteForCausalLM` (Glm4MoELite model)
  - `GlmConfig` configuration class: `GlmForCausalLM` (GLM model)
  - `GlmMoeDsaConfig` configuration class: `GlmMoeDsaForCausalLM` (GlmMoeDsa model)
  - `GotOcr2Config` configuration class: `GotOcr2ForConditionalGeneration` (GOT-OCR2 model)
  - `GptOssConfig` configuration class: `GptOssForCausalLM` (GptOss model)
  - `GraniteConfig` configuration class: `GraniteForCausalLM` (Granite model)
  - `GraniteMoeConfig` configuration class: `GraniteMoeForCausalLM` (GraniteMoeMoe model)
  - `GraniteMoeHybridConfig` configuration class: `GraniteMoeHybridForCausalLM` (GraniteMoeHybrid model)
  - `GraniteMoeSharedConfig` configuration class: `GraniteMoeSharedForCausalLM` (GraniteMoeSharedMoe model)
  - `HeliumConfig` configuration class: `HeliumForCausalLM` (Helium model)
  - `HunYuanDenseV1Config` configuration class: `HunYuanDenseV1ForCausalLM` (HunYuanDenseV1 model)
  - `HunYuanMoEV1Config` configuration class: `HunYuanMoEV1ForCausalLM` (HunYuanMoeV1 model)
  - `Jais2Config` configuration class: `Jais2ForCausalLM` (Jais2 model)
  - [JambaConfig](/docs/transformers/v5.5.1/ko/model_doc/jamba#transformers.JambaConfig) configuration class: [JambaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/jamba#transformers.JambaForCausalLM) (Jamba model)
  - `JetMoeConfig` configuration class: `JetMoeForCausalLM` (JetMoe model)
  - [Lfm2Config](/docs/transformers/v5.5.1/ko/model_doc/lfm2#transformers.Lfm2Config) configuration class: [Lfm2ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/lfm2#transformers.Lfm2ForCausalLM) (Lfm2 model)
  - `Lfm2MoeConfig` configuration class: `Lfm2MoeForCausalLM` (Lfm2Moe model)
  - [Llama4Config](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4Config) configuration class: [Llama4ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4ForCausalLM) (Llama4 model)
  - [Llama4TextConfig](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4TextConfig) configuration class: [Llama4ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4ForCausalLM) (Llama4ForCausalLM model)
  - [LlamaConfig](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaConfig) configuration class: [LlamaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaForCausalLM) (LLaMA model)
  - `LongcatFlashConfig` configuration class: `LongcatFlashForCausalLM` (LongCatFlash model)
  - `MBartConfig` configuration class: `MBartForCausalLM` (mBART model)
  - [Mamba2Config](/docs/transformers/v5.5.1/ko/model_doc/mamba2#transformers.Mamba2Config) configuration class: [Mamba2ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/mamba2#transformers.Mamba2ForCausalLM) (mamba2 model)
  - [MambaConfig](/docs/transformers/v5.5.1/ko/model_doc/mamba#transformers.MambaConfig) configuration class: [MambaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/mamba#transformers.MambaForCausalLM) (Mamba model)
  - [MarianConfig](/docs/transformers/v5.5.1/ko/model_doc/marian#transformers.MarianConfig) configuration class: [MarianForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/marian#transformers.MarianForCausalLM) (Marian model)
  - `MegatronBertConfig` configuration class: `MegatronBertForCausalLM` (Megatron-BERT model)
  - `MiniMaxConfig` configuration class: `MiniMaxForCausalLM` (MiniMax model)
  - `MiniMaxM2Config` configuration class: `MiniMaxM2ForCausalLM` (MiniMax-M2 model)
  - `Ministral3Config` configuration class: `Ministral3ForCausalLM` (Ministral3 model)
  - `MinistralConfig` configuration class: `MinistralForCausalLM` (Ministral model)
  - [MistralConfig](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralConfig) configuration class: [MistralForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralForCausalLM) (Mistral model)
  - `MixtralConfig` configuration class: `MixtralForCausalLM` (Mixtral model)
  - `MllamaConfig` configuration class: `MllamaForCausalLM` (Mllama model)
  - `ModernBertDecoderConfig` configuration class: `ModernBertDecoderForCausalLM` (ModernBertDecoder model)
  - `MoshiConfig` configuration class: `MoshiForCausalLM` (Moshi model)
  - `MptConfig` configuration class: `MptForCausalLM` (MPT model)
  - `MusicgenConfig` configuration class: `MusicgenForCausalLM` (MusicGen model)
  - `MusicgenMelodyConfig` configuration class: `MusicgenMelodyForCausalLM` (MusicGen Melody model)
  - `MvpConfig` configuration class: `MvpForCausalLM` (MVP model)
  - `NanoChatConfig` configuration class: `NanoChatForCausalLM` (NanoChat model)
  - `NemotronConfig` configuration class: `NemotronForCausalLM` (Nemotron model)
  - `NemotronHConfig` configuration class: `NemotronHForCausalLM` (NemotronH model)
  - `OPTConfig` configuration class: `OPTForCausalLM` (OPT model)
  - `Olmo2Config` configuration class: `Olmo2ForCausalLM` (OLMo2 model)
  - `Olmo3Config` configuration class: `Olmo3ForCausalLM` (Olmo3 model)
  - `OlmoConfig` configuration class: `OlmoForCausalLM` (OLMo model)
  - `OlmoHybridConfig` configuration class: `OlmoHybridForCausalLM` (OlmoHybrid model)
  - `OlmoeConfig` configuration class: `OlmoeForCausalLM` (OLMoE model)
  - [OpenAIGPTConfig](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTConfig) configuration class: [OpenAIGPTLMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTLMHeadModel) (OpenAI GPT model)
  - `PLBartConfig` configuration class: `PLBartForCausalLM` (PLBart model)
  - `PegasusConfig` configuration class: `PegasusForCausalLM` (Pegasus model)
  - `PersimmonConfig` configuration class: `PersimmonForCausalLM` (Persimmon model)
  - `Phi3Config` configuration class: `Phi3ForCausalLM` (Phi3 model)
  - `Phi4MultimodalConfig` configuration class: `Phi4MultimodalForCausalLM` (Phi4Multimodal model)
  - `PhiConfig` configuration class: `PhiForCausalLM` (Phi model)
  - `PhimoeConfig` configuration class: `PhimoeForCausalLM` (Phimoe model)
  - `ProphetNetConfig` configuration class: `ProphetNetForCausalLM` (ProphetNet model)
  - `Qwen2Config` configuration class: `Qwen2ForCausalLM` (Qwen2 model)
  - `Qwen2MoeConfig` configuration class: `Qwen2MoeForCausalLM` (Qwen2MoE model)
  - `Qwen3Config` configuration class: `Qwen3ForCausalLM` (Qwen3 model)
  - `Qwen3MoeConfig` configuration class: `Qwen3MoeForCausalLM` (Qwen3MoE model)
  - `Qwen3NextConfig` configuration class: `Qwen3NextForCausalLM` (Qwen3Next model)
  - `Qwen3_5Config` configuration class: `Qwen3_5ForCausalLM` (Qwen3_5 model)
  - `Qwen3_5MoeConfig` configuration class: `Qwen3_5MoeForCausalLM` (Qwen3_5Moe model)
  - `Qwen3_5MoeTextConfig` configuration class: `Qwen3_5MoeForCausalLM` (Qwen3_5MoeText model)
  - `Qwen3_5TextConfig` configuration class: `Qwen3_5ForCausalLM` (Qwen3_5Text model)
  - `RecurrentGemmaConfig` configuration class: `RecurrentGemmaForCausalLM` (RecurrentGemma model)
  - `ReformerConfig` configuration class: `ReformerModelWithLMHead` (Reformer model)
  - `RemBertConfig` configuration class: `RemBertForCausalLM` (RemBERT model)
  - `RoCBertConfig` configuration class: `RoCBertForCausalLM` (RoCBert model)
  - `RoFormerConfig` configuration class: `RoFormerForCausalLM` (RoFormer model)
  - [RobertaConfig](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaConfig) configuration class: [RobertaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForCausalLM) (RoBERTa model)
  - `RobertaPreLayerNormConfig` configuration class: `RobertaPreLayerNormForCausalLM` (RoBERTa-PreLayerNorm model)
  - `RwkvConfig` configuration class: `RwkvForCausalLM` (RWKV model)
  - `SeedOssConfig` configuration class: `SeedOssForCausalLM` (SeedOss model)
  - `SmolLM3Config` configuration class: `SmolLM3ForCausalLM` (SmolLM3 model)
  - `SolarOpenConfig` configuration class: `SolarOpenForCausalLM` (SolarOpen model)
  - `StableLmConfig` configuration class: `StableLmForCausalLM` (StableLm model)
  - `Starcoder2Config` configuration class: `Starcoder2ForCausalLM` (Starcoder2 model)
  - `TrOCRConfig` configuration class: `TrOCRForCausalLM` (TrOCR model)
  - `VaultGemmaConfig` configuration class: `VaultGemmaForCausalLM` (VaultGemma model)
  - [WhisperConfig](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperConfig) configuration class: `WhisperForCausalLM` (Whisper model)
  - `XGLMConfig` configuration class: `XGLMForCausalLM` (XGLM model)
  - `XLMConfig` configuration class: `XLMWithLMHeadModel` (XLM model)
  - `XLMRobertaConfig` configuration class: `XLMRobertaForCausalLM` (XLM-RoBERTa model)
  - `XLMRobertaXLConfig` configuration class: `XLMRobertaXLForCausalLM` (XLM-RoBERTa-XL model)
  - `XLNetConfig` configuration class: `XLNetLMHeadModel` (XLNet model)
  - `XmodConfig` configuration class: `XmodForCausalLM` (X-MOD model)
  - `YoutuConfig` configuration class: `YoutuForCausalLM` (Youtu model)
  - `Zamba2Config` configuration class: `Zamba2ForCausalLM` (Zamba2 model)
  - `ZambaConfig` configuration class: `ZambaForCausalLM` (Zamba model)
  - `xLSTMConfig` configuration class: `xLSTMForCausalLM` (xLSTM model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForCausalLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForCausalLM.from_config(config)
```
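
Since `from_config()` initializes weights randomly, a common pattern is to tweak a pretrained architecture's configuration and train from scratch; a minimal sketch (the checkpoint and attribute override are illustrative and model-specific):

```python
>>> from transformers import AutoConfig, AutoModelForCausalLM

>>> # override config attributes while loading; `n_layer` is a GPT-2-specific field
>>> config = AutoConfig.from_pretrained("gpt2", n_layer=6)
>>> model = AutoModelForCausalLM.from_config(config, attn_implementation="eager")
>>> model.config.n_layer
6
```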

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class; see the complete configuration class to model class mapping listed above.

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
#### from_pretrained[[transformers.AutoModelForCausalLM.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **afmoe** -- `AfmoeForCausalLM` (AFMoE model)
- **apertus** -- `ApertusForCausalLM` (Apertus model)
- **arcee** -- `ArceeForCausalLM` (Arcee model)
- **aria_text** -- `AriaTextForCausalLM` (AriaText model)
- **bamba** -- `BambaForCausalLM` (Bamba model)
- **bart** -- [BartForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForCausalLM) (BART model)
- **bert** -- [BertLMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertLMHeadModel) (BERT model)
- **bert-generation** -- `BertGenerationDecoder` (Bert Generation model)
- **big_bird** -- [BigBirdForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForCausalLM) (BigBird model)
- **bigbird_pegasus** -- `BigBirdPegasusForCausalLM` (BigBird-Pegasus model)
- **biogpt** -- [BioGptForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptForCausalLM) (BioGpt model)
- **bitnet** -- `BitNetForCausalLM` (BitNet model)
- **blenderbot** -- `BlenderbotForCausalLM` (Blenderbot model)
- **blenderbot-small** -- `BlenderbotSmallForCausalLM` (BlenderbotSmall model)
- **bloom** -- `BloomForCausalLM` (BLOOM model)
- **blt** -- `BltForCausalLM` (Blt model)
- **camembert** -- `CamembertForCausalLM` (CamemBERT model)
- **code_llama** -- [LlamaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaForCausalLM) (CodeLlama model)
- **codegen** -- [CodeGenForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/codegen#transformers.CodeGenForCausalLM) (CodeGen model)
- **cohere** -- [CohereForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/cohere#transformers.CohereForCausalLM) (Cohere model)
- **cohere2** -- `Cohere2ForCausalLM` (Cohere2 model)
- **cpmant** -- `CpmAntForCausalLM` (CPM-Ant model)
- **ctrl** -- `CTRLLMHeadModel` (CTRL model)
- **cwm** -- `CwmForCausalLM` (Code World Model (CWM) model)
- **data2vec-text** -- `Data2VecTextForCausalLM` (Data2VecText model)
- **dbrx** -- [DbrxForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/dbrx#transformers.DbrxForCausalLM) (DBRX model)
- **deepseek_v2** -- `DeepseekV2ForCausalLM` (DeepSeek-V2 model)
- **deepseek_v3** -- [DeepseekV3ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/deepseek_v3#transformers.DeepseekV3ForCausalLM) (DeepSeek-V3 model)
- **diffllama** -- `DiffLlamaForCausalLM` (DiffLlama model)
- **doge** -- `DogeForCausalLM` (Doge model)
- **dots1** -- `Dots1ForCausalLM` (dots1 model)
- **electra** -- [ElectraForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForCausalLM) (ELECTRA model)
- **emu3** -- `Emu3ForCausalLM` (Emu3 model)
- **ernie** -- `ErnieForCausalLM` (ERNIE model)
- **ernie4_5** -- `Ernie4_5ForCausalLM` (Ernie4_5 model)
- **ernie4_5_moe** -- `Ernie4_5_MoeForCausalLM` (Ernie4_5_MoE model)
- **exaone4** -- [Exaone4ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4ForCausalLM) (EXAONE-4.0 model)
- **exaone_moe** -- [ExaoneMoeForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/exaone_moe#transformers.ExaoneMoeForCausalLM) (EXAONE-MoE model)
- **falcon** -- `FalconForCausalLM` (Falcon model)
- **falcon_h1** -- `FalconH1ForCausalLM` (FalconH1 model)
- **falcon_mamba** -- `FalconMambaForCausalLM` (FalconMamba model)
- **flex_olmo** -- `FlexOlmoForCausalLM` (FlexOlmo model)
- **fuyu** -- `FuyuForCausalLM` (Fuyu model)
- **gemma** -- [GemmaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaForCausalLM) (Gemma model)
- **gemma2** -- [Gemma2ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2ForCausalLM) (Gemma2 model)
- **gemma3** -- [Gemma3ForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3ForConditionalGeneration) (Gemma3ForConditionalGeneration model)
- **gemma3_text** -- [Gemma3ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3ForCausalLM) (Gemma3ForCausalLM model)
- **gemma3n** -- [Gemma3nForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nForConditionalGeneration) (Gemma3nForConditionalGeneration model)
- **gemma3n_text** -- [Gemma3nForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/gemma3n#transformers.Gemma3nForCausalLM) (Gemma3nForCausalLM model)
- **gemma4** -- `Gemma4ForConditionalGeneration` (Gemma4ForConditionalGeneration model)
- **gemma4_text** -- `Gemma4ForCausalLM` (Gemma4ForCausalLM model)
- **git** -- `GitForCausalLM` (GIT model)
- **glm** -- `GlmForCausalLM` (GLM model)
- **glm4** -- `Glm4ForCausalLM` (GLM4 model)
- **glm4_moe** -- `Glm4MoeForCausalLM` (Glm4MoE model)
- **glm4_moe_lite** -- `Glm4MoeLiteForCausalLM` (Glm4MoELite model)
- **glm_moe_dsa** -- `GlmMoeDsaForCausalLM` (GlmMoeDsa model)
- **got_ocr2** -- `GotOcr2ForConditionalGeneration` (GOT-OCR2 model)
- **gpt-sw3** -- [GPT2LMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2LMHeadModel) (GPT-Sw3 model)
- **gpt2** -- [GPT2LMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2LMHeadModel) (OpenAI GPT-2 model)
- **gpt_bigcode** -- `GPTBigCodeForCausalLM` (GPTBigCode model)
- **gpt_neo** -- `GPTNeoForCausalLM` (GPT Neo model)
- **gpt_neox** -- `GPTNeoXForCausalLM` (GPT NeoX model)
- **gpt_neox_japanese** -- [GPTNeoXJapaneseForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/gpt_neox_japanese#transformers.GPTNeoXJapaneseForCausalLM) (GPT NeoX Japanese model)
- **gpt_oss** -- `GptOssForCausalLM` (GptOss model)
- **gptj** -- `GPTJForCausalLM` (GPT-J model)
- **granite** -- `GraniteForCausalLM` (Granite model)
- **granitemoe** -- `GraniteMoeForCausalLM` (GraniteMoe model)
- **granitemoehybrid** -- `GraniteMoeHybridForCausalLM` (GraniteMoeHybrid model)
- **granitemoeshared** -- `GraniteMoeSharedForCausalLM` (GraniteMoeShared model)
- **helium** -- `HeliumForCausalLM` (Helium model)
- **hunyuan_v1_dense** -- `HunYuanDenseV1ForCausalLM` (HunYuanDenseV1 model)
- **hunyuan_v1_moe** -- `HunYuanMoEV1ForCausalLM` (HunYuanMoeV1 model)
- **jais2** -- `Jais2ForCausalLM` (Jais2 model)
- **jamba** -- [JambaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/jamba#transformers.JambaForCausalLM) (Jamba model)
- **jetmoe** -- `JetMoeForCausalLM` (JetMoe model)
- **lfm2** -- [Lfm2ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/lfm2#transformers.Lfm2ForCausalLM) (Lfm2 model)
- **lfm2_moe** -- `Lfm2MoeForCausalLM` (Lfm2Moe model)
- **llama** -- [LlamaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaForCausalLM) (LLaMA model)
- **llama4** -- [Llama4ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4ForCausalLM) (Llama4 model)
- **llama4_text** -- [Llama4ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/llama4#transformers.Llama4ForCausalLM) (Llama4ForCausalLM model)
- **longcat_flash** -- `LongcatFlashForCausalLM` (LongCatFlash model)
- **mamba** -- [MambaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/mamba#transformers.MambaForCausalLM) (Mamba model)
- **mamba2** -- [Mamba2ForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/mamba2#transformers.Mamba2ForCausalLM) (mamba2 model)
- **marian** -- [MarianForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/marian#transformers.MarianForCausalLM) (Marian model)
- **mbart** -- `MBartForCausalLM` (mBART model)
- **megatron-bert** -- `MegatronBertForCausalLM` (Megatron-BERT model)
- **minimax** -- `MiniMaxForCausalLM` (MiniMax model)
- **minimax_m2** -- `MiniMaxM2ForCausalLM` (MiniMax-M2 model)
- **ministral** -- `MinistralForCausalLM` (Ministral model)
- **ministral3** -- `Ministral3ForCausalLM` (Ministral3 model)
- **mistral** -- [MistralForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralForCausalLM) (Mistral model)
- **mixtral** -- `MixtralForCausalLM` (Mixtral model)
- **mllama** -- `MllamaForCausalLM` (Mllama model)
- **modernbert-decoder** -- `ModernBertDecoderForCausalLM` (ModernBertDecoder model)
- **moshi** -- `MoshiForCausalLM` (Moshi model)
- **mpt** -- `MptForCausalLM` (MPT model)
- **musicgen** -- `MusicgenForCausalLM` (MusicGen model)
- **musicgen_melody** -- `MusicgenMelodyForCausalLM` (MusicGen Melody model)
- **mvp** -- `MvpForCausalLM` (MVP model)
- **nanochat** -- `NanoChatForCausalLM` (NanoChat model)
- **nemotron** -- `NemotronForCausalLM` (Nemotron model)
- **nemotron_h** -- `NemotronHForCausalLM` (NemotronH model)
- **olmo** -- `OlmoForCausalLM` (OLMo model)
- **olmo2** -- `Olmo2ForCausalLM` (OLMo2 model)
- **olmo3** -- `Olmo3ForCausalLM` (Olmo3 model)
- **olmo_hybrid** -- `OlmoHybridForCausalLM` (OlmoHybrid model)
- **olmoe** -- `OlmoeForCausalLM` (OLMoE model)
- **openai-gpt** -- [OpenAIGPTLMHeadModel](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTLMHeadModel) (OpenAI GPT model)
- **opt** -- `OPTForCausalLM` (OPT model)
- **pegasus** -- `PegasusForCausalLM` (Pegasus model)
- **persimmon** -- `PersimmonForCausalLM` (Persimmon model)
- **phi** -- `PhiForCausalLM` (Phi model)
- **phi3** -- `Phi3ForCausalLM` (Phi3 model)
- **phi4_multimodal** -- `Phi4MultimodalForCausalLM` (Phi4Multimodal model)
- **phimoe** -- `PhimoeForCausalLM` (Phimoe model)
- **plbart** -- `PLBartForCausalLM` (PLBart model)
- **prophetnet** -- `ProphetNetForCausalLM` (ProphetNet model)
- **qwen2** -- `Qwen2ForCausalLM` (Qwen2 model)
- **qwen2_moe** -- `Qwen2MoeForCausalLM` (Qwen2MoE model)
- **qwen3** -- `Qwen3ForCausalLM` (Qwen3 model)
- **qwen3_5** -- `Qwen3_5ForCausalLM` (Qwen3_5 model)
- **qwen3_5_moe** -- `Qwen3_5MoeForCausalLM` (Qwen3_5Moe model)
- **qwen3_5_moe_text** -- `Qwen3_5MoeForCausalLM` (Qwen3_5MoeText model)
- **qwen3_5_text** -- `Qwen3_5ForCausalLM` (Qwen3_5Text model)
- **qwen3_moe** -- `Qwen3MoeForCausalLM` (Qwen3MoE model)
- **qwen3_next** -- `Qwen3NextForCausalLM` (Qwen3Next model)
- **recurrent_gemma** -- `RecurrentGemmaForCausalLM` (RecurrentGemma model)
- **reformer** -- `ReformerModelWithLMHead` (Reformer model)
- **rembert** -- `RemBertForCausalLM` (RemBERT model)
- **roberta** -- [RobertaForCausalLM](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForCausalLM) (RoBERTa model)
- **roberta-prelayernorm** -- `RobertaPreLayerNormForCausalLM` (RoBERTa-PreLayerNorm model)
- **roc_bert** -- `RoCBertForCausalLM` (RoCBert model)
- **roformer** -- `RoFormerForCausalLM` (RoFormer model)
- **rwkv** -- `RwkvForCausalLM` (RWKV model)
- **seed_oss** -- `SeedOssForCausalLM` (SeedOss model)
- **smollm3** -- `SmolLM3ForCausalLM` (SmolLM3 model)
- **solar_open** -- `SolarOpenForCausalLM` (SolarOpen model)
- **stablelm** -- `StableLmForCausalLM` (StableLm model)
- **starcoder2** -- `Starcoder2ForCausalLM` (Starcoder2 model)
- **trocr** -- `TrOCRForCausalLM` (TrOCR model)
- **vaultgemma** -- `VaultGemmaForCausalLM` (VaultGemma model)
- **whisper** -- `WhisperForCausalLM` (Whisper model)
- **xglm** -- `XGLMForCausalLM` (XGLM model)
- **xlm** -- `XLMWithLMHeadModel` (XLM model)
- **xlm-roberta** -- `XLMRobertaForCausalLM` (XLM-RoBERTa model)
- **xlm-roberta-xl** -- `XLMRobertaXLForCausalLM` (XLM-RoBERTa-XL model)
- **xlnet** -- `XLNetLMHeadModel` (XLNet model)
- **xlstm** -- `xLSTMForCausalLM` (xLSTM model)
- **xmod** -- `XmodForCausalLM` (X-MOD model)
- **youtu** -- `YoutuForCausalLM` (Youtu model)
- **zamba** -- `ZambaForCausalLM` (Zamba model)
- **zamba2** -- `Zamba2ForCausalLM` (Zamba2 model)

The model is set in evaluation mode by default using `model.eval()` (so, for instance, dropout modules are
deactivated). To train the model, first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForCausalLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
```
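For decoder-style checkpoints, the loaded model can be used directly for text generation. A minimal sketch, assuming `openai-community/gpt2` purely as an illustrative checkpoint:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> # `openai-community/gpt2` is used here only as an illustrative decoder checkpoint.
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

>>> # Tokenize a prompt, generate a continuation, and decode it back to text.
>>> inputs = tokenizer("Hello, my name is", return_tensors="pt")
>>> output_ids = model.generate(**inputs, max_new_tokens=20)
>>> text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
```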

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:

  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
  - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model).
  - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of the state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (i.e., do not try to download the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:

  - If a configuration is provided with `config`, `**kwargs` will be passed directly to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
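The two `kwargs` paths above can be sketched as follows (an illustration only, reusing the checkpoint from the example above):

```python
>>> from transformers import AutoConfig, AutoModelForCausalLM

>>> # No `config` passed: `output_attentions=True` is applied to the auto-loaded
>>> # configuration first; any leftover keys would go to the model's `__init__`.
>>> model = AutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)

>>> # `config` passed explicitly: the configuration is used as-is, so any updates
>>> # must be made on the config object before loading.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> config.output_attentions = True
>>> model = AutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased", config=config)
```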

### AutoModelForMaskedLM[[transformers.AutoModelForMaskedLM]]

#### transformers.AutoModelForMaskedLM[[transformers.AutoModelForMaskedLM]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2000)

This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForMaskedLM.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - [AlbertConfig](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertConfig) configuration class: [AlbertForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForMaskedLM) (ALBERT model)
  - [BartConfig](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartConfig) configuration class: [BartForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForConditionalGeneration) (BART model)
  - [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) configuration class: [BertForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForMaskedLM) (BERT model)
  - [BigBirdConfig](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdConfig) configuration class: [BigBirdForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForMaskedLM) (BigBird model)
  - `CamembertConfig` configuration class: `CamembertForMaskedLM` (CamemBERT model)
  - [ConvBertConfig](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertConfig) configuration class: [ConvBertForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertForMaskedLM) (ConvBERT model)
  - `Data2VecTextConfig` configuration class: `Data2VecTextForMaskedLM` (Data2VecText model)
  - [DebertaConfig](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaConfig) configuration class: [DebertaForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaForMaskedLM) (DeBERTa model)
  - [DebertaV2Config](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Config) configuration class: [DebertaV2ForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForMaskedLM) (DeBERTa-v2 model)
  - `DistilBertConfig` configuration class: `DistilBertForMaskedLM` (DistilBERT model)
  - [ElectraConfig](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraConfig) configuration class: [ElectraForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForMaskedLM) (ELECTRA model)
  - `ErnieConfig` configuration class: `ErnieForMaskedLM` (ERNIE model)
  - [EsmConfig](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmConfig) configuration class: [EsmForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmForMaskedLM) (ESM model)
  - `EuroBertConfig` configuration class: `EuroBertForMaskedLM` (EuroBERT model)
  - `FNetConfig` configuration class: `FNetForMaskedLM` (FNet model)
  - `FlaubertConfig` configuration class: `FlaubertWithLMHeadModel` (FlauBERT model)
  - `FunnelConfig` configuration class: `FunnelForMaskedLM` (Funnel Transformer model)
  - `IBertConfig` configuration class: `IBertForMaskedLM` (I-BERT model)
  - `JinaEmbeddingsV3Config` configuration class: `JinaEmbeddingsV3ForMaskedLM` (JinaEmbeddingsV3 model)
  - `LayoutLMConfig` configuration class: `LayoutLMForMaskedLM` (LayoutLM model)
  - `LongformerConfig` configuration class: `LongformerForMaskedLM` (Longformer model)
  - `LukeConfig` configuration class: `LukeForMaskedLM` (LUKE model)
  - `MBartConfig` configuration class: `MBartForConditionalGeneration` (mBART model)
  - `MPNetConfig` configuration class: `MPNetForMaskedLM` (MPNet model)
  - `MegatronBertConfig` configuration class: `MegatronBertForMaskedLM` (Megatron-BERT model)
  - `MobileBertConfig` configuration class: `MobileBertForMaskedLM` (MobileBERT model)
  - `ModernBertConfig` configuration class: `ModernBertForMaskedLM` (ModernBERT model)
  - `ModernVBertConfig` configuration class: `ModernVBertForMaskedLM` (ModernVBert model)
  - `MraConfig` configuration class: `MraForMaskedLM` (MRA model)
  - `MvpConfig` configuration class: `MvpForConditionalGeneration` (MVP model)
  - `NomicBertConfig` configuration class: `NomicBertForMaskedLM` (NomicBERT model)
  - `NystromformerConfig` configuration class: `NystromformerForMaskedLM` (Nyströmformer model)
  - `PerceiverConfig` configuration class: `PerceiverForMaskedLM` (Perceiver model)
  - `ReformerConfig` configuration class: `ReformerForMaskedLM` (Reformer model)
  - `RemBertConfig` configuration class: `RemBertForMaskedLM` (RemBERT model)
  - `RoCBertConfig` configuration class: `RoCBertForMaskedLM` (RoCBert model)
  - `RoFormerConfig` configuration class: `RoFormerForMaskedLM` (RoFormer model)
  - [RobertaConfig](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaConfig) configuration class: [RobertaForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForMaskedLM) (RoBERTa model)
  - `RobertaPreLayerNormConfig` configuration class: `RobertaPreLayerNormForMaskedLM` (RoBERTa-PreLayerNorm model)
  - `SqueezeBertConfig` configuration class: `SqueezeBertForMaskedLM` (SqueezeBERT model)
  - `TapasConfig` configuration class: `TapasForMaskedLM` (TAPAS model)
  - `XLMConfig` configuration class: `XLMWithLMHeadModel` (XLM model)
  - `XLMRobertaConfig` configuration class: `XLMRobertaForMaskedLM` (XLM-RoBERTa model)
  - `XLMRobertaXLConfig` configuration class: `XLMRobertaXLForMaskedLM` (XLM-RoBERTa-XL model)
  - `XmodConfig` configuration class: `XmodForMaskedLM` (X-MOD model)
  - `YosoConfig` configuration class: `YosoForMaskedLM` (YOSO model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForMaskedLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForMaskedLM.from_config(config)
```
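Since `from_config()` only builds the architecture, the resulting weights are randomly initialized. A minimal sketch contrasting the two loading paths:

```python
>>> from transformers import AutoConfig, AutoModelForMaskedLM

>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")

>>> # `from_config` builds the architecture only: weights are randomly initialized.
>>> scratch_model = AutoModelForMaskedLM.from_config(config)

>>> # `from_pretrained` downloads the checkpoint and loads the trained weights.
>>> pretrained_model = AutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased")
```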

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class; see the full configuration-to-model mapping under `from_config` above.

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

#### from_pretrained[[transformers.AutoModelForMaskedLM.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **albert** -- [AlbertForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForMaskedLM) (ALBERT model)
- **bart** -- [BartForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForConditionalGeneration) (BART model)
- **bert** -- [BertForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForMaskedLM) (BERT model)
- **big_bird** -- [BigBirdForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForMaskedLM) (BigBird model)
- **camembert** -- `CamembertForMaskedLM` (CamemBERT model)
- **convbert** -- [ConvBertForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertForMaskedLM) (ConvBERT model)
- **data2vec-text** -- `Data2VecTextForMaskedLM` (Data2VecText model)
- **deberta** -- [DebertaForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaForMaskedLM) (DeBERTa model)
- **deberta-v2** -- [DebertaV2ForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForMaskedLM) (DeBERTa-v2 model)
- **distilbert** -- `DistilBertForMaskedLM` (DistilBERT model)
- **electra** -- [ElectraForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForMaskedLM) (ELECTRA model)
- **ernie** -- `ErnieForMaskedLM` (ERNIE model)
- **esm** -- [EsmForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmForMaskedLM) (ESM model)
- **eurobert** -- `EuroBertForMaskedLM` (EuroBERT model)
- **flaubert** -- `FlaubertWithLMHeadModel` (FlauBERT model)
- **fnet** -- `FNetForMaskedLM` (FNet model)
- **funnel** -- `FunnelForMaskedLM` (Funnel Transformer model)
- **ibert** -- `IBertForMaskedLM` (I-BERT model)
- **jina_embeddings_v3** -- `JinaEmbeddingsV3ForMaskedLM` (JinaEmbeddingsV3 model)
- **layoutlm** -- `LayoutLMForMaskedLM` (LayoutLM model)
- **longformer** -- `LongformerForMaskedLM` (Longformer model)
- **luke** -- `LukeForMaskedLM` (LUKE model)
- **mbart** -- `MBartForConditionalGeneration` (mBART model)
- **megatron-bert** -- `MegatronBertForMaskedLM` (Megatron-BERT model)
- **mobilebert** -- `MobileBertForMaskedLM` (MobileBERT model)
- **modernbert** -- `ModernBertForMaskedLM` (ModernBERT model)
- **modernvbert** -- `ModernVBertForMaskedLM` (ModernVBert model)
- **mpnet** -- `MPNetForMaskedLM` (MPNet model)
- **mra** -- `MraForMaskedLM` (MRA model)
- **mvp** -- `MvpForConditionalGeneration` (MVP model)
- **nomic_bert** -- `NomicBertForMaskedLM` (NomicBERT model)
- **nystromformer** -- `NystromformerForMaskedLM` (Nyströmformer model)
- **perceiver** -- `PerceiverForMaskedLM` (Perceiver model)
- **reformer** -- `ReformerForMaskedLM` (Reformer model)
- **rembert** -- `RemBertForMaskedLM` (RemBERT model)
- **roberta** -- [RobertaForMaskedLM](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForMaskedLM) (RoBERTa model)
- **roberta-prelayernorm** -- `RobertaPreLayerNormForMaskedLM` (RoBERTa-PreLayerNorm model)
- **roc_bert** -- `RoCBertForMaskedLM` (RoCBert model)
- **roformer** -- `RoFormerForMaskedLM` (RoFormer model)
- **squeezebert** -- `SqueezeBertForMaskedLM` (SqueezeBERT model)
- **tapas** -- `TapasForMaskedLM` (TAPAS model)
- **xlm** -- `XLMWithLMHeadModel` (XLM model)
- **xlm-roberta** -- `XLMRobertaForMaskedLM` (XLM-RoBERTa model)
- **xlm-roberta-xl** -- `XLMRobertaXLForMaskedLM` (XLM-RoBERTa-XL model)
- **xmod** -- `XmodForMaskedLM` (X-MOD model)
- **yoso** -- `YosoForMaskedLM` (YOSO model)

The model is set in evaluation mode by default using `model.eval()` (so, for instance, dropout modules are
deactivated). To train the model, first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForMaskedLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
```
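Once loaded, the masked language modeling head can score candidates for a masked position. A minimal sketch, reusing the checkpoint from the example above:

```python
>>> import torch
>>> from transformers import AutoModelForMaskedLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # Index of the [MASK] token, then the highest-scoring vocabulary entry for it.
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_id = logits[0, mask_index].argmax(dim=-1)
>>> tokenizer.decode(predicted_id)
```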

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:

  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
  - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model).
  - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of the state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (i.e., do not try to download the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:

  - If a configuration is provided with `config`, `**kwargs` will be passed directly to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForMaskGeneration[[transformers.AutoModelForMaskGeneration]]

#### transformers.AutoModelForMaskGeneration[[transformers.AutoModelForMaskGeneration]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L1949)

### AutoModelForSeq2SeqLM[[transformers.AutoModelForSeq2SeqLM]]

#### transformers.AutoModelForSeq2SeqLM[[transformers.AutoModelForSeq2SeqLM]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2007)

This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForSeq2SeqLM.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `AudioFlamingo3Config` configuration class: `AudioFlamingo3ForConditionalGeneration` (AudioFlamingo3 model)
  - [BartConfig](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartConfig) configuration class: [BartForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForConditionalGeneration) (BART model)
  - `BigBirdPegasusConfig` configuration class: `BigBirdPegasusForConditionalGeneration` (BigBird-Pegasus model)
  - `BlenderbotConfig` configuration class: `BlenderbotForConditionalGeneration` (Blenderbot model)
  - `BlenderbotSmallConfig` configuration class: `BlenderbotSmallForConditionalGeneration` (BlenderbotSmall model)
  - [EncoderDecoderConfig](/docs/transformers/v5.5.1/ko/model_doc/encoder-decoder#transformers.EncoderDecoderConfig) configuration class: [EncoderDecoderModel](/docs/transformers/v5.5.1/ko/model_doc/encoder-decoder#transformers.EncoderDecoderModel) (Encoder decoder model)
  - `FSMTConfig` configuration class: `FSMTForConditionalGeneration` (FairSeq Machine-Translation model)
  - `GlmAsrConfig` configuration class: `GlmAsrForConditionalGeneration` (GLM-ASR model)
  - `GraniteSpeechConfig` configuration class: `GraniteSpeechForConditionalGeneration` (GraniteSpeech model)
  - `LEDConfig` configuration class: `LEDForConditionalGeneration` (LED model)
  - `LongT5Config` configuration class: `LongT5ForConditionalGeneration` (LongT5 model)
  - `M2M100Config` configuration class: `M2M100ForConditionalGeneration` (M2M100 model)
  - `MBartConfig` configuration class: `MBartForConditionalGeneration` (mBART model)
  - `MT5Config` configuration class: `MT5ForConditionalGeneration` (MT5 model)
  - [MarianConfig](/docs/transformers/v5.5.1/ko/model_doc/marian#transformers.MarianConfig) configuration class: [MarianMTModel](/docs/transformers/v5.5.1/ko/model_doc/marian#transformers.MarianMTModel) (Marian model)
  - `MusicFlamingoConfig` configuration class: `MusicFlamingoForConditionalGeneration` (MusicFlamingo model)
  - `MvpConfig` configuration class: `MvpForConditionalGeneration` (MVP model)
  - `NllbMoeConfig` configuration class: `NllbMoeForConditionalGeneration` (NLLB-MOE model)
  - `PLBartConfig` configuration class: `PLBartForConditionalGeneration` (PLBart model)
  - `PegasusConfig` configuration class: `PegasusForConditionalGeneration` (Pegasus model)
  - `PegasusXConfig` configuration class: `PegasusXForConditionalGeneration` (PEGASUS-X model)
  - `ProphetNetConfig` configuration class: `ProphetNetForConditionalGeneration` (ProphetNet model)
  - `Qwen2AudioConfig` configuration class: `Qwen2AudioForConditionalGeneration` (Qwen2Audio model)
  - `SeamlessM4TConfig` configuration class: `SeamlessM4TForTextToText` (SeamlessM4T model)
  - `SeamlessM4Tv2Config` configuration class: `SeamlessM4Tv2ForTextToText` (SeamlessM4Tv2 model)
  - `SwitchTransformersConfig` configuration class: `SwitchTransformersForConditionalGeneration` (SwitchTransformers model)
  - `T5Config` configuration class: `T5ForConditionalGeneration` (T5 model)
  - `T5Gemma2Config` configuration class: `T5Gemma2ForConditionalGeneration` (T5Gemma2 model)
  - `T5GemmaConfig` configuration class: `T5GemmaForConditionalGeneration` (T5Gemma model)
  - `UMT5Config` configuration class: `UMT5ForConditionalGeneration` (UMT5 model)
  - `VibeVoiceAsrConfig` configuration class: `VibeVoiceAsrForConditionalGeneration` (VibeVoiceAsr model)
  - `VoxtralConfig` configuration class: `VoxtralForConditionalGeneration` (Voxtral model)
  - `VoxtralRealtimeConfig` configuration class: `VoxtralRealtimeForConditionalGeneration` (VoxtralRealtime model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForSeq2SeqLM

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-t5/t5-base")
>>> model = AutoModelForSeq2SeqLM.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class; see the full configuration-to-model mapping under `from_config` above.

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

#### from_pretrained[[transformers.AutoModelForSeq2SeqLM.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **audioflamingo3** -- `AudioFlamingo3ForConditionalGeneration` (AudioFlamingo3 model)
- **bart** -- [BartForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForConditionalGeneration) (BART model)
- **bigbird_pegasus** -- `BigBirdPegasusForConditionalGeneration` (BigBird-Pegasus model)
- **blenderbot** -- `BlenderbotForConditionalGeneration` (Blenderbot model)
- **blenderbot-small** -- `BlenderbotSmallForConditionalGeneration` (BlenderbotSmall model)
- **encoder-decoder** -- [EncoderDecoderModel](/docs/transformers/v5.5.1/ko/model_doc/encoder-decoder#transformers.EncoderDecoderModel) (Encoder decoder model)
- **fsmt** -- `FSMTForConditionalGeneration` (FairSeq Machine-Translation model)
- **glmasr** -- `GlmAsrForConditionalGeneration` (GLM-ASR model)
- **granite_speech** -- `GraniteSpeechForConditionalGeneration` (GraniteSpeech model)
- **led** -- `LEDForConditionalGeneration` (LED model)
- **longt5** -- `LongT5ForConditionalGeneration` (LongT5 model)
- **m2m_100** -- `M2M100ForConditionalGeneration` (M2M100 model)
- **marian** -- [MarianMTModel](/docs/transformers/v5.5.1/ko/model_doc/marian#transformers.MarianMTModel) (Marian model)
- **mbart** -- `MBartForConditionalGeneration` (mBART model)
- **mt5** -- `MT5ForConditionalGeneration` (MT5 model)
- **musicflamingo** -- `MusicFlamingoForConditionalGeneration` (MusicFlamingo model)
- **mvp** -- `MvpForConditionalGeneration` (MVP model)
- **nllb-moe** -- `NllbMoeForConditionalGeneration` (NLLB-MOE model)
- **pegasus** -- `PegasusForConditionalGeneration` (Pegasus model)
- **pegasus_x** -- `PegasusXForConditionalGeneration` (PEGASUS-X model)
- **plbart** -- `PLBartForConditionalGeneration` (PLBart model)
- **prophetnet** -- `ProphetNetForConditionalGeneration` (ProphetNet model)
- **qwen2_audio** -- `Qwen2AudioForConditionalGeneration` (Qwen2Audio model)
- **seamless_m4t** -- `SeamlessM4TForTextToText` (SeamlessM4T model)
- **seamless_m4t_v2** -- `SeamlessM4Tv2ForTextToText` (SeamlessM4Tv2 model)
- **switch_transformers** -- `SwitchTransformersForConditionalGeneration` (SwitchTransformers model)
- **t5** -- `T5ForConditionalGeneration` (T5 model)
- **t5gemma** -- `T5GemmaForConditionalGeneration` (T5Gemma model)
- **t5gemma2** -- `T5Gemma2ForConditionalGeneration` (T5Gemma2 model)
- **umt5** -- `UMT5ForConditionalGeneration` (UMT5 model)
- **vibevoice_asr** -- `VibeVoiceAsrForConditionalGeneration` (VibeVoiceAsr model)
- **voxtral** -- `VoxtralForConditionalGeneration` (Voxtral model)
- **voxtral_realtime** -- `VoxtralRealtimeForConditionalGeneration` (VoxtralRealtime model)

The model is set in evaluation mode by default using `model.eval()` (so, for instance, dropout modules are
deactivated). To train the model, first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForSeq2SeqLM

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")

>>> # Update configuration during loading
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base", output_attentions=True)
>>> model.config.output_attentions
True
```
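For sequence-to-sequence checkpoints, inference typically goes through `generate()`. A minimal sketch using the T5 checkpoint from the example above (the `translate ...` prefix follows T5's task-prefix convention):

```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")

>>> # Encode the source text, generate with the decoder, and decode the output.
>>> inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
>>> output_ids = model.generate(**inputs, max_new_tokens=40)
>>> tokenizer.decode(output_ids[0], skip_special_tokens=True)
```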

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:

  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
  - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model).
  - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of the state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case, though, you should check whether using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (i.e., do not try to download the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:

  - If a configuration is provided with `config`, `**kwargs` will be passed directly to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForSequenceClassification[[transformers.AutoModelForSequenceClassification]]

#### transformers.AutoModelForSequenceClassification[[transformers.AutoModelForSequenceClassification]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2018)

This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForSequenceClassification.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - [AlbertConfig](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertConfig) configuration class: [AlbertForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForSequenceClassification) (ALBERT model)
  - `ArceeConfig` configuration class: `ArceeForSequenceClassification` (Arcee model)
  - [BartConfig](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartConfig) configuration class: [BartForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForSequenceClassification) (BART model)
  - [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) configuration class: [BertForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForSequenceClassification) (BERT model)
  - [BigBirdConfig](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdConfig) configuration class: [BigBirdForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForSequenceClassification) (BigBird model)
  - `BigBirdPegasusConfig` configuration class: `BigBirdPegasusForSequenceClassification` (BigBird-Pegasus model)
  - [BioGptConfig](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptConfig) configuration class: [BioGptForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptForSequenceClassification) (BioGpt model)
  - `BloomConfig` configuration class: `BloomForSequenceClassification` (BLOOM model)
  - `CTRLConfig` configuration class: `CTRLForSequenceClassification` (CTRL model)
  - `CamembertConfig` configuration class: `CamembertForSequenceClassification` (CamemBERT model)
  - `CanineConfig` configuration class: `CanineForSequenceClassification` (CANINE model)
  - [ConvBertConfig](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertConfig) configuration class: [ConvBertForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertForSequenceClassification) (ConvBERT model)
  - `Data2VecTextConfig` configuration class: `Data2VecTextForSequenceClassification` (Data2VecText model)
  - [DebertaConfig](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaConfig) configuration class: [DebertaForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaForSequenceClassification) (DeBERTa model)
  - [DebertaV2Config](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Config) configuration class: [DebertaV2ForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForSequenceClassification) (DeBERTa-v2 model)
  - `DeepseekV2Config` configuration class: `DeepseekV2ForSequenceClassification` (DeepSeek-V2 model)
  - [DeepseekV3Config](/docs/transformers/v5.5.1/ko/model_doc/deepseek_v3#transformers.DeepseekV3Config) configuration class: `DeepseekV3ForSequenceClassification` (DeepSeek-V3 model)
  - `DiffLlamaConfig` configuration class: `DiffLlamaForSequenceClassification` (DiffLlama model)
  - `DistilBertConfig` configuration class: `DistilBertForSequenceClassification` (DistilBERT model)
  - `DogeConfig` configuration class: `DogeForSequenceClassification` (Doge model)
  - [ElectraConfig](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraConfig) configuration class: [ElectraForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForSequenceClassification) (ELECTRA model)
  - `ErnieConfig` configuration class: `ErnieForSequenceClassification` (ERNIE model)
  - [EsmConfig](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmConfig) configuration class: [EsmForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmForSequenceClassification) (ESM model)
  - `EuroBertConfig` configuration class: `EuroBertForSequenceClassification` (EuroBERT model)
  - [Exaone4Config](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4Config) configuration class: [Exaone4ForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4ForSequenceClassification) (EXAONE-4.0 model)
  - `FNetConfig` configuration class: `FNetForSequenceClassification` (FNet model)
  - `FalconConfig` configuration class: `FalconForSequenceClassification` (Falcon model)
  - `FlaubertConfig` configuration class: `FlaubertForSequenceClassification` (FlauBERT model)
  - `FunnelConfig` configuration class: `FunnelForSequenceClassification` (Funnel Transformer model)
  - [GPT2Config](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Config) configuration class: [GPT2ForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2ForSequenceClassification) (OpenAI GPT-2 model)
  - `GPTBigCodeConfig` configuration class: `GPTBigCodeForSequenceClassification` (GPTBigCode model)
  - `GPTJConfig` configuration class: `GPTJForSequenceClassification` (GPT-J model)
  - `GPTNeoConfig` configuration class: `GPTNeoForSequenceClassification` (GPT Neo model)
  - `GPTNeoXConfig` configuration class: `GPTNeoXForSequenceClassification` (GPT NeoX model)
  - [Gemma2Config](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2Config) configuration class: [Gemma2ForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2ForSequenceClassification) (Gemma2 model)
  - [Gemma3Config](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3Config) configuration class: [Gemma3ForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3ForSequenceClassification) (Gemma3ForConditionalGeneration model)
  - [Gemma3TextConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3TextConfig) configuration class: `Gemma3TextForSequenceClassification` (Gemma3ForCausalLM model)
  - [GemmaConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaConfig) configuration class: [GemmaForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaForSequenceClassification) (Gemma model)
  - `Glm4Config` configuration class: `Glm4ForSequenceClassification` (GLM4 model)
  - `GlmConfig` configuration class: `GlmForSequenceClassification` (GLM model)
  - `GptOssConfig` configuration class: `GptOssForSequenceClassification` (GptOss model)
  - `HeliumConfig` configuration class: `HeliumForSequenceClassification` (Helium model)
  - `HunYuanDenseV1Config` configuration class: `HunYuanDenseV1ForSequenceClassification` (HunYuanDenseV1 model)
  - `HunYuanMoEV1Config` configuration class: `HunYuanMoEV1ForSequenceClassification` (HunYuanMoeV1 model)
  - `IBertConfig` configuration class: `IBertForSequenceClassification` (I-BERT model)
  - [JambaConfig](/docs/transformers/v5.5.1/ko/model_doc/jamba#transformers.JambaConfig) configuration class: [JambaForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/jamba#transformers.JambaForSequenceClassification) (Jamba model)
  - `JetMoeConfig` configuration class: `JetMoeForSequenceClassification` (JetMoe model)
  - `JinaEmbeddingsV3Config` configuration class: `JinaEmbeddingsV3ForSequenceClassification` (JinaEmbeddingsV3 model)
  - `LayoutLMConfig` configuration class: `LayoutLMForSequenceClassification` (LayoutLM model)
  - `LayoutLMv2Config` configuration class: `LayoutLMv2ForSequenceClassification` (LayoutLMv2 model)
  - `LayoutLMv3Config` configuration class: `LayoutLMv3ForSequenceClassification` (LayoutLMv3 model)
  - `LiltConfig` configuration class: `LiltForSequenceClassification` (LiLT model)
  - [LlamaConfig](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaConfig) configuration class: [LlamaForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaForSequenceClassification) (LLaMA model)
  - `LongformerConfig` configuration class: `LongformerForSequenceClassification` (Longformer model)
  - `LukeConfig` configuration class: `LukeForSequenceClassification` (LUKE model)
  - `MBartConfig` configuration class: `MBartForSequenceClassification` (mBART model)
  - `MPNetConfig` configuration class: `MPNetForSequenceClassification` (MPNet model)
  - `MT5Config` configuration class: `MT5ForSequenceClassification` (MT5 model)
  - `MarkupLMConfig` configuration class: `MarkupLMForSequenceClassification` (MarkupLM model)
  - `MegatronBertConfig` configuration class: `MegatronBertForSequenceClassification` (Megatron-BERT model)
  - `MiniMaxConfig` configuration class: `MiniMaxForSequenceClassification` (MiniMax model)
  - `Ministral3Config` configuration class: `Ministral3ForSequenceClassification` (Ministral3 model)
  - `MinistralConfig` configuration class: `MinistralForSequenceClassification` (Ministral model)
  - `Mistral4Config` configuration class: `Mistral4ForSequenceClassification` (Mistral4 model)
  - [MistralConfig](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralConfig) configuration class: [MistralForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralForSequenceClassification) (Mistral model)
  - `MixtralConfig` configuration class: `MixtralForSequenceClassification` (Mixtral model)
  - `MobileBertConfig` configuration class: `MobileBertForSequenceClassification` (MobileBERT model)
  - `ModernBertConfig` configuration class: `ModernBertForSequenceClassification` (ModernBERT model)
  - `ModernBertDecoderConfig` configuration class: `ModernBertDecoderForSequenceClassification` (ModernBertDecoder model)
  - `ModernVBertConfig` configuration class: `ModernVBertForSequenceClassification` (ModernVBert model)
  - `MptConfig` configuration class: `MptForSequenceClassification` (MPT model)
  - `MraConfig` configuration class: `MraForSequenceClassification` (MRA model)
  - `MvpConfig` configuration class: `MvpForSequenceClassification` (MVP model)
  - `NemotronConfig` configuration class: `NemotronForSequenceClassification` (Nemotron model)
  - `NomicBertConfig` configuration class: `NomicBertForSequenceClassification` (NomicBERT model)
  - `NystromformerConfig` configuration class: `NystromformerForSequenceClassification` (Nyströmformer model)
  - `OPTConfig` configuration class: `OPTForSequenceClassification` (OPT model)
  - [OpenAIGPTConfig](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTConfig) configuration class: [OpenAIGPTForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTForSequenceClassification) (OpenAI GPT model)
  - `PLBartConfig` configuration class: `PLBartForSequenceClassification` (PLBart model)
  - `PerceiverConfig` configuration class: `PerceiverForSequenceClassification` (Perceiver model)
  - `PersimmonConfig` configuration class: `PersimmonForSequenceClassification` (Persimmon model)
  - `Phi3Config` configuration class: `Phi3ForSequenceClassification` (Phi3 model)
  - `PhiConfig` configuration class: `PhiForSequenceClassification` (Phi model)
  - `PhimoeConfig` configuration class: `PhimoeForSequenceClassification` (Phimoe model)
  - `Qwen2Config` configuration class: `Qwen2ForSequenceClassification` (Qwen2 model)
  - `Qwen2MoeConfig` configuration class: `Qwen2MoeForSequenceClassification` (Qwen2MoE model)
  - `Qwen3Config` configuration class: `Qwen3ForSequenceClassification` (Qwen3 model)
  - `Qwen3MoeConfig` configuration class: `Qwen3MoeForSequenceClassification` (Qwen3MoE model)
  - `Qwen3NextConfig` configuration class: `Qwen3NextForSequenceClassification` (Qwen3Next model)
  - `Qwen3_5Config` configuration class: `Qwen3_5ForSequenceClassification` (Qwen3_5 model)
  - `Qwen3_5TextConfig` configuration class: `Qwen3_5ForSequenceClassification` (Qwen3_5Text model)
  - `ReformerConfig` configuration class: `ReformerForSequenceClassification` (Reformer model)
  - `RemBertConfig` configuration class: `RemBertForSequenceClassification` (RemBERT model)
  - `RoCBertConfig` configuration class: `RoCBertForSequenceClassification` (RoCBert model)
  - `RoFormerConfig` configuration class: `RoFormerForSequenceClassification` (RoFormer model)
  - [RobertaConfig](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaConfig) configuration class: [RobertaForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForSequenceClassification) (RoBERTa model)
  - `RobertaPreLayerNormConfig` configuration class: `RobertaPreLayerNormForSequenceClassification` (RoBERTa-PreLayerNorm model)
  - `SeedOssConfig` configuration class: `SeedOssForSequenceClassification` (SeedOss model)
  - `SmolLM3Config` configuration class: `SmolLM3ForSequenceClassification` (SmolLM3 model)
  - `SqueezeBertConfig` configuration class: `SqueezeBertForSequenceClassification` (SqueezeBERT model)
  - `StableLmConfig` configuration class: `StableLmForSequenceClassification` (StableLm model)
  - `Starcoder2Config` configuration class: `Starcoder2ForSequenceClassification` (Starcoder2 model)
  - `T5Config` configuration class: `T5ForSequenceClassification` (T5 model)
  - `T5Gemma2Config` configuration class: `T5Gemma2ForSequenceClassification` (T5Gemma2 model)
  - `T5GemmaConfig` configuration class: `T5GemmaForSequenceClassification` (T5Gemma model)
  - `TapasConfig` configuration class: `TapasForSequenceClassification` (TAPAS model)
  - `UMT5Config` configuration class: `UMT5ForSequenceClassification` (UMT5 model)
  - `XLMConfig` configuration class: `XLMForSequenceClassification` (XLM model)
  - `XLMRobertaConfig` configuration class: `XLMRobertaForSequenceClassification` (XLM-RoBERTa model)
  - `XLMRobertaXLConfig` configuration class: `XLMRobertaXLForSequenceClassification` (XLM-RoBERTa-XL model)
  - `XLNetConfig` configuration class: `XLNetForSequenceClassification` (XLNet model)
  - `XmodConfig` configuration class: `XmodForSequenceClassification` (X-MOD model)
  - `YosoConfig` configuration class: `YosoForSequenceClassification` (YOSO model)
  - `Zamba2Config` configuration class: `Zamba2ForSequenceClassification` (Zamba2 model)
  - `ZambaConfig` configuration class: `ZambaForSequenceClassification` (Zamba model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForSequenceClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForSequenceClassification.from_config(config)
```
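
Because `from_config` initializes weights randomly, it is typically paired with a configuration that has been modified first; a minimal sketch (the `num_labels` override is illustrative):

```python
>>> from transformers import AutoConfig, AutoModelForSequenceClassification

>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased", num_labels=5)
>>> model = AutoModelForSequenceClassification.from_config(config)  # weights are randomly initialized
>>> model.config.num_labels
5
```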

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class; see the full mapping in the list above.

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
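
The attention backend can also be selected when building from a configuration; a minimal sketch (`"sdpa"` requires torch>=2.1.1, and the flash-attention backends require the corresponding packages and supported hardware):

```python
>>> from transformers import AutoConfig, AutoModelForSequenceClassification

>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForSequenceClassification.from_config(config, attn_implementation="eager")
```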
#### from_pretrained[[transformers.AutoModelForSequenceClassification.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to pattern matching on `pretrained_model_name_or_path` (a short dispatch check is sketched after the list below):

- **albert** -- [AlbertForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForSequenceClassification) (ALBERT model)
- **arcee** -- `ArceeForSequenceClassification` (Arcee model)
- **bart** -- [BartForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForSequenceClassification) (BART model)
- **bert** -- [BertForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForSequenceClassification) (BERT model)
- **big_bird** -- [BigBirdForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForSequenceClassification) (BigBird model)
- **bigbird_pegasus** -- `BigBirdPegasusForSequenceClassification` (BigBird-Pegasus model)
- **biogpt** -- [BioGptForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptForSequenceClassification) (BioGpt model)
- **bloom** -- `BloomForSequenceClassification` (BLOOM model)
- **camembert** -- `CamembertForSequenceClassification` (CamemBERT model)
- **canine** -- `CanineForSequenceClassification` (CANINE model)
- **code_llama** -- [LlamaForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaForSequenceClassification) (CodeLlama model)
- **convbert** -- [ConvBertForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertForSequenceClassification) (ConvBERT model)
- **ctrl** -- `CTRLForSequenceClassification` (CTRL model)
- **data2vec-text** -- `Data2VecTextForSequenceClassification` (Data2VecText model)
- **deberta** -- [DebertaForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaForSequenceClassification) (DeBERTa model)
- **deberta-v2** -- [DebertaV2ForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForSequenceClassification) (DeBERTa-v2 model)
- **deepseek_v2** -- `DeepseekV2ForSequenceClassification` (DeepSeek-V2 model)
- **deepseek_v3** -- `DeepseekV3ForSequenceClassification` (DeepSeek-V3 model)
- **diffllama** -- `DiffLlamaForSequenceClassification` (DiffLlama model)
- **distilbert** -- `DistilBertForSequenceClassification` (DistilBERT model)
- **doge** -- `DogeForSequenceClassification` (Doge model)
- **electra** -- [ElectraForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForSequenceClassification) (ELECTRA model)
- **ernie** -- `ErnieForSequenceClassification` (ERNIE model)
- **esm** -- [EsmForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmForSequenceClassification) (ESM model)
- **eurobert** -- `EuroBertForSequenceClassification` (EuroBERT model)
- **exaone4** -- [Exaone4ForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4ForSequenceClassification) (EXAONE-4.0 model)
- **falcon** -- `FalconForSequenceClassification` (Falcon model)
- **flaubert** -- `FlaubertForSequenceClassification` (FlauBERT model)
- **fnet** -- `FNetForSequenceClassification` (FNet model)
- **funnel** -- `FunnelForSequenceClassification` (Funnel Transformer model)
- **gemma** -- [GemmaForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaForSequenceClassification) (Gemma model)
- **gemma2** -- [Gemma2ForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2ForSequenceClassification) (Gemma2 model)
- **gemma3** -- [Gemma3ForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/gemma3#transformers.Gemma3ForSequenceClassification) (Gemma3ForConditionalGeneration model)
- **gemma3_text** -- `Gemma3TextForSequenceClassification` (Gemma3ForCausalLM model)
- **glm** -- `GlmForSequenceClassification` (GLM model)
- **glm4** -- `Glm4ForSequenceClassification` (GLM4 model)
- **gpt-sw3** -- [GPT2ForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2ForSequenceClassification) (GPT-Sw3 model)
- **gpt2** -- [GPT2ForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2ForSequenceClassification) (OpenAI GPT-2 model)
- **gpt_bigcode** -- `GPTBigCodeForSequenceClassification` (GPTBigCode model)
- **gpt_neo** -- `GPTNeoForSequenceClassification` (GPT Neo model)
- **gpt_neox** -- `GPTNeoXForSequenceClassification` (GPT NeoX model)
- **gpt_oss** -- `GptOssForSequenceClassification` (GptOss model)
- **gptj** -- `GPTJForSequenceClassification` (GPT-J model)
- **helium** -- `HeliumForSequenceClassification` (Helium model)
- **hunyuan_v1_dense** -- `HunYuanDenseV1ForSequenceClassification` (HunYuanDenseV1 model)
- **hunyuan_v1_moe** -- `HunYuanMoEV1ForSequenceClassification` (HunYuanMoeV1 model)
- **ibert** -- `IBertForSequenceClassification` (I-BERT model)
- **jamba** -- [JambaForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/jamba#transformers.JambaForSequenceClassification) (Jamba model)
- **jetmoe** -- `JetMoeForSequenceClassification` (JetMoe model)
- **jina_embeddings_v3** -- `JinaEmbeddingsV3ForSequenceClassification` (JinaEmbeddingsV3 model)
- **layoutlm** -- `LayoutLMForSequenceClassification` (LayoutLM model)
- **layoutlmv2** -- `LayoutLMv2ForSequenceClassification` (LayoutLMv2 model)
- **layoutlmv3** -- `LayoutLMv3ForSequenceClassification` (LayoutLMv3 model)
- **lilt** -- `LiltForSequenceClassification` (LiLT model)
- **llama** -- [LlamaForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaForSequenceClassification) (LLaMA model)
- **longformer** -- `LongformerForSequenceClassification` (Longformer model)
- **luke** -- `LukeForSequenceClassification` (LUKE model)
- **markuplm** -- `MarkupLMForSequenceClassification` (MarkupLM model)
- **mbart** -- `MBartForSequenceClassification` (mBART model)
- **megatron-bert** -- `MegatronBertForSequenceClassification` (Megatron-BERT model)
- **minimax** -- `MiniMaxForSequenceClassification` (MiniMax model)
- **ministral** -- `MinistralForSequenceClassification` (Ministral model)
- **ministral3** -- `Ministral3ForSequenceClassification` (Ministral3 model)
- **mistral** -- [MistralForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralForSequenceClassification) (Mistral model)
- **mistral4** -- `Mistral4ForSequenceClassification` (Mistral4 model)
- **mixtral** -- `MixtralForSequenceClassification` (Mixtral model)
- **mobilebert** -- `MobileBertForSequenceClassification` (MobileBERT model)
- **modernbert** -- `ModernBertForSequenceClassification` (ModernBERT model)
- **modernbert-decoder** -- `ModernBertDecoderForSequenceClassification` (ModernBertDecoder model)
- **modernvbert** -- `ModernVBertForSequenceClassification` (ModernVBert model)
- **mpnet** -- `MPNetForSequenceClassification` (MPNet model)
- **mpt** -- `MptForSequenceClassification` (MPT model)
- **mra** -- `MraForSequenceClassification` (MRA model)
- **mt5** -- `MT5ForSequenceClassification` (MT5 model)
- **mvp** -- `MvpForSequenceClassification` (MVP model)
- **nemotron** -- `NemotronForSequenceClassification` (Nemotron model)
- **nomic_bert** -- `NomicBertForSequenceClassification` (NomicBERT model)
- **nystromformer** -- `NystromformerForSequenceClassification` (Nyströmformer model)
- **openai-gpt** -- [OpenAIGPTForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/openai-gpt#transformers.OpenAIGPTForSequenceClassification) (OpenAI GPT model)
- **opt** -- `OPTForSequenceClassification` (OPT model)
- **perceiver** -- `PerceiverForSequenceClassification` (Perceiver model)
- **persimmon** -- `PersimmonForSequenceClassification` (Persimmon model)
- **phi** -- `PhiForSequenceClassification` (Phi model)
- **phi3** -- `Phi3ForSequenceClassification` (Phi3 model)
- **phimoe** -- `PhimoeForSequenceClassification` (Phimoe model)
- **plbart** -- `PLBartForSequenceClassification` (PLBart model)
- **qwen2** -- `Qwen2ForSequenceClassification` (Qwen2 model)
- **qwen2_moe** -- `Qwen2MoeForSequenceClassification` (Qwen2MoE model)
- **qwen3** -- `Qwen3ForSequenceClassification` (Qwen3 model)
- **qwen3_5** -- `Qwen3_5ForSequenceClassification` (Qwen3_5 model)
- **qwen3_5_text** -- `Qwen3_5ForSequenceClassification` (Qwen3_5Text model)
- **qwen3_moe** -- `Qwen3MoeForSequenceClassification` (Qwen3MoE model)
- **qwen3_next** -- `Qwen3NextForSequenceClassification` (Qwen3Next model)
- **reformer** -- `ReformerForSequenceClassification` (Reformer model)
- **rembert** -- `RemBertForSequenceClassification` (RemBERT model)
- **roberta** -- [RobertaForSequenceClassification](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForSequenceClassification) (RoBERTa model)
- **roberta-prelayernorm** -- `RobertaPreLayerNormForSequenceClassification` (RoBERTa-PreLayerNorm model)
- **roc_bert** -- `RoCBertForSequenceClassification` (RoCBert model)
- **roformer** -- `RoFormerForSequenceClassification` (RoFormer model)
- **seed_oss** -- `SeedOssForSequenceClassification` (SeedOss model)
- **smollm3** -- `SmolLM3ForSequenceClassification` (SmolLM3 model)
- **squeezebert** -- `SqueezeBertForSequenceClassification` (SqueezeBERT model)
- **stablelm** -- `StableLmForSequenceClassification` (StableLm model)
- **starcoder2** -- `Starcoder2ForSequenceClassification` (Starcoder2 model)
- **t5** -- `T5ForSequenceClassification` (T5 model)
- **t5gemma** -- `T5GemmaForSequenceClassification` (T5Gemma model)
- **t5gemma2** -- `T5Gemma2ForSequenceClassification` (T5Gemma2 model)
- **tapas** -- `TapasForSequenceClassification` (TAPAS model)
- **umt5** -- `UMT5ForSequenceClassification` (UMT5 model)
- **xlm** -- `XLMForSequenceClassification` (XLM model)
- **xlm-roberta** -- `XLMRobertaForSequenceClassification` (XLM-RoBERTa model)
- **xlm-roberta-xl** -- `XLMRobertaXLForSequenceClassification` (XLM-RoBERTa-XL model)
- **xlnet** -- `XLNetForSequenceClassification` (XLNet model)
- **xmod** -- `XmodForSequenceClassification` (X-MOD model)
- **yoso** -- `YosoForSequenceClassification` (YOSO model)
- **zamba** -- `ZambaForSequenceClassification` (Zamba model)
- **zamba2** -- `Zamba2ForSequenceClassification` (Zamba2 model)
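
For instance, a `bert` checkpoint dispatches to the BERT sequence classification head; a minimal check (using the same public checkpoint as the examples below):

```python
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased")
>>> type(model).__name__
'BertForSequenceClassification'
```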

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.
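
For example (a minimal sketch of toggling modes; the checkpoint is the same one used below):

```python
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased")
>>> model.training  # evaluation mode by default
False
>>> _ = model.train()  # re-enable dropout and similar layers before fine-tuning
>>> model.training
True
```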

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForSequenceClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
```

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (`dict[str, torch.Tensor]`, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be passed directly to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
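
As a concrete illustration of the two behaviors (a minimal sketch; `num_labels` is a standard configuration attribute, and the checkpoint is the same one used above):

```python
>>> from transformers import AutoConfig, AutoModelForSequenceClassification

>>> # No `config` passed: `num_labels` is routed to the configuration first
>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=3)
>>> model.config.num_labels
3

>>> # Explicit `config`: remaining kwargs go straight to the model's `__init__`
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased", num_labels=3)
>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", config=config)
```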

### AutoModelForMultipleChoice[[transformers.AutoModelForMultipleChoice]]

#### transformers.AutoModelForMultipleChoice[[transformers.AutoModelForMultipleChoice]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2074)

This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForMultipleChoice.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - [AlbertConfig](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertConfig) configuration class: [AlbertForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForMultipleChoice) (ALBERT model)
  - [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) configuration class: [BertForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForMultipleChoice) (BERT model)
  - [BigBirdConfig](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdConfig) configuration class: [BigBirdForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForMultipleChoice) (BigBird model)
  - `CamembertConfig` configuration class: `CamembertForMultipleChoice` (CamemBERT model)
  - `CanineConfig` configuration class: `CanineForMultipleChoice` (CANINE model)
  - [ConvBertConfig](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertConfig) configuration class: [ConvBertForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertForMultipleChoice) (ConvBERT model)
  - `Data2VecTextConfig` configuration class: `Data2VecTextForMultipleChoice` (Data2VecText model)
  - [DebertaV2Config](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Config) configuration class: [DebertaV2ForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForMultipleChoice) (DeBERTa-v2 model)
  - `DistilBertConfig` configuration class: `DistilBertForMultipleChoice` (DistilBERT model)
  - [ElectraConfig](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraConfig) configuration class: [ElectraForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForMultipleChoice) (ELECTRA model)
  - `ErnieConfig` configuration class: `ErnieForMultipleChoice` (ERNIE model)
  - `FNetConfig` configuration class: `FNetForMultipleChoice` (FNet model)
  - `FlaubertConfig` configuration class: `FlaubertForMultipleChoice` (FlauBERT model)
  - `FunnelConfig` configuration class: `FunnelForMultipleChoice` (Funnel Transformer model)
  - `IBertConfig` configuration class: `IBertForMultipleChoice` (I-BERT model)
  - `LongformerConfig` configuration class: `LongformerForMultipleChoice` (Longformer model)
  - `LukeConfig` configuration class: `LukeForMultipleChoice` (LUKE model)
  - `MPNetConfig` configuration class: `MPNetForMultipleChoice` (MPNet model)
  - `MegatronBertConfig` configuration class: `MegatronBertForMultipleChoice` (Megatron-BERT model)
  - `MobileBertConfig` configuration class: `MobileBertForMultipleChoice` (MobileBERT model)
  - `ModernBertConfig` configuration class: `ModernBertForMultipleChoice` (ModernBERT model)
  - `MraConfig` configuration class: `MraForMultipleChoice` (MRA model)
  - `NystromformerConfig` configuration class: `NystromformerForMultipleChoice` (Nyströmformer model)
  - `RemBertConfig` configuration class: `RemBertForMultipleChoice` (RemBERT model)
  - `RoCBertConfig` configuration class: `RoCBertForMultipleChoice` (RoCBert model)
  - `RoFormerConfig` configuration class: `RoFormerForMultipleChoice` (RoFormer model)
  - [RobertaConfig](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaConfig) configuration class: [RobertaForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForMultipleChoice) (RoBERTa model)
  - `RobertaPreLayerNormConfig` configuration class: `RobertaPreLayerNormForMultipleChoice` (RoBERTa-PreLayerNorm model)
  - `SqueezeBertConfig` configuration class: `SqueezeBertForMultipleChoice` (SqueezeBERT model)
  - `XLMConfig` configuration class: `XLMForMultipleChoice` (XLM model)
  - `XLMRobertaConfig` configuration class: `XLMRobertaForMultipleChoice` (XLM-RoBERTa model)
  - `XLMRobertaXLConfig` configuration class: `XLMRobertaXLForMultipleChoice` (XLM-RoBERTa-XL model)
  - `XLNetConfig` configuration class: `XLNetForMultipleChoice` (XLNet model)
  - `XmodConfig` configuration class: `XmodForMultipleChoice` (X-MOD model)
  - `YosoConfig` configuration class: `YosoForMultipleChoice` (YOSO model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForMultipleChoice

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForMultipleChoice.from_config(config)
```
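
Whichever way the model is built, multiple choice heads expect `input_ids` shaped `(batch_size, num_choices, sequence_length)` and return logits shaped `(batch_size, num_choices)`. Continuing from the model above, a minimal shape check (the prompt and choice strings are illustrative):

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> prompt = "The capital of France is"
>>> choices = ["Paris.", "Berlin."]
>>> enc = tokenizer([prompt, prompt], choices, return_tensors="pt", padding=True)
>>> inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (batch=1, num_choices=2, seq_len)
>>> model(**inputs).logits.shape
torch.Size([1, 2])
```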

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class; see the full mapping in the list above.

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
#### from_pretrained[[transformers.AutoModelForMultipleChoice.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **albert** -- [AlbertForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForMultipleChoice) (ALBERT model)
- **bert** -- [BertForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForMultipleChoice) (BERT model)
- **big_bird** -- [BigBirdForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForMultipleChoice) (BigBird model)
- **camembert** -- `CamembertForMultipleChoice` (CamemBERT model)
- **canine** -- `CanineForMultipleChoice` (CANINE model)
- **convbert** -- [ConvBertForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertForMultipleChoice) (ConvBERT model)
- **data2vec-text** -- `Data2VecTextForMultipleChoice` (Data2VecText model)
- **deberta-v2** -- [DebertaV2ForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForMultipleChoice) (DeBERTa-v2 model)
- **distilbert** -- `DistilBertForMultipleChoice` (DistilBERT model)
- **electra** -- [ElectraForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForMultipleChoice) (ELECTRA model)
- **ernie** -- `ErnieForMultipleChoice` (ERNIE model)
- **flaubert** -- `FlaubertForMultipleChoice` (FlauBERT model)
- **fnet** -- `FNetForMultipleChoice` (FNet model)
- **funnel** -- `FunnelForMultipleChoice` (Funnel Transformer model)
- **ibert** -- `IBertForMultipleChoice` (I-BERT model)
- **longformer** -- `LongformerForMultipleChoice` (Longformer model)
- **luke** -- `LukeForMultipleChoice` (LUKE model)
- **megatron-bert** -- `MegatronBertForMultipleChoice` (Megatron-BERT model)
- **mobilebert** -- `MobileBertForMultipleChoice` (MobileBERT model)
- **modernbert** -- `ModernBertForMultipleChoice` (ModernBERT model)
- **mpnet** -- `MPNetForMultipleChoice` (MPNet model)
- **mra** -- `MraForMultipleChoice` (MRA model)
- **nystromformer** -- `NystromformerForMultipleChoice` (Nyströmformer model)
- **rembert** -- `RemBertForMultipleChoice` (RemBERT model)
- **roberta** -- [RobertaForMultipleChoice](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForMultipleChoice) (RoBERTa model)
- **roberta-prelayernorm** -- `RobertaPreLayerNormForMultipleChoice` (RoBERTa-PreLayerNorm model)
- **roc_bert** -- `RoCBertForMultipleChoice` (RoCBert model)
- **roformer** -- `RoFormerForMultipleChoice` (RoFormer model)
- **squeezebert** -- `SqueezeBertForMultipleChoice` (SqueezeBERT model)
- **xlm** -- `XLMForMultipleChoice` (XLM model)
- **xlm-roberta** -- `XLMRobertaForMultipleChoice` (XLM-RoBERTa model)
- **xlm-roberta-xl** -- `XLMRobertaXLForMultipleChoice` (XLM-RoBERTa-XL model)
- **xlnet** -- `XLNetForMultipleChoice` (XLNet model)
- **xmod** -- `XmodForMultipleChoice` (X-MOD model)
- **yoso** -- `YosoForMultipleChoice` (YOSO model)

The model is set in evaluation mode by default using `model.eval()` (so, for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()` (see the
short sketch after the examples below).

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForMultipleChoice

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
```
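
Since `from_pretrained()` returns the model in evaluation mode, a short sketch of switching back before fine-tuning:

```python
>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased")
>>> model.training  # evaluation mode by default
False
>>> model = model.train()  # re-enable dropout and other training-only behavior
>>> model.training
True
```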

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from the saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In that case, though, you should consider whether using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) would be a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
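
A hedged sketch of the second case above (no explicit `config`): a `kwargs` entry that matches a configuration attribute updates the configuration before the model is built. `hidden_dropout_prob` is a standard BERT configuration attribute:

```python
>>> from transformers import AutoModelForMultipleChoice

>>> # No explicit config: matching kwargs override configuration attributes first.
>>> model = AutoModelForMultipleChoice.from_pretrained(
...     "google-bert/bert-base-cased", hidden_dropout_prob=0.2
... )
>>> model.config.hidden_dropout_prob
0.2
```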

### AutoModelForNextSentencePrediction[[transformers.AutoModelForNextSentencePrediction]]

#### transformers.AutoModelForNextSentencePrediction[[transformers.AutoModelForNextSentencePrediction]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2081)

This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForNextSentencePrediction.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) configuration class: [BertForNextSentencePrediction](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForNextSentencePrediction) (BERT model)
  - `ErnieConfig` configuration class: `ErnieForNextSentencePrediction` (ERNIE model)
  - `FNetConfig` configuration class: `FNetForNextSentencePrediction` (FNet model)
  - `MegatronBertConfig` configuration class: `MegatronBertForNextSentencePrediction` (Megatron-BERT model)
  - `MobileBertConfig` configuration class: `MobileBertForNextSentencePrediction` (MobileBERT model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForNextSentencePrediction.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) configuration class: [BertForNextSentencePrediction](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForNextSentencePrediction) (BERT model) - `ErnieConfig` configuration class: `ErnieForNextSentencePrediction` (ERNIE model) - `FNetConfig` configuration class: `FNetForNextSentencePrediction` (FNet model) - `MegatronBertConfig` configuration class: `MegatronBertForNextSentencePrediction` (Megatron-BERT model) - `MobileBertConfig` configuration class: `MobileBertForNextSentencePrediction` (MobileBERT model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
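To illustrate the class selection described above, a minimal sketch: a BERT configuration yields a `BertForNextSentencePrediction` instance (with randomly initialized weights, as the note above explains):

```python
>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction

>>> # The configuration class determines which concrete model class is built.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForNextSentencePrediction.from_config(config)
>>> type(model).__name__
'BertForNextSentencePrediction'
```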
#### from_pretrained[[transformers.AutoModelForNextSentencePrediction.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **bert** -- [BertForNextSentencePrediction](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForNextSentencePrediction) (BERT model)
- **ernie** -- `ErnieForNextSentencePrediction` (ERNIE model)
- **fnet** -- `FNetForNextSentencePrediction` (FNet model)
- **megatron-bert** -- `MegatronBertForNextSentencePrediction` (Megatron-BERT model)
- **mobilebert** -- `MobileBertForNextSentencePrediction` (MobileBERT model)

The model is set in evaluation mode by default using `model.eval()` (so, for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
```
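
The `output_loading_info` flag described in the parameters below also applies here; a hedged sketch of retrieving the loading diagnostics alongside the model:

```python
>>> from transformers import AutoModelForNextSentencePrediction

>>> # Also return loading diagnostics together with the model.
>>> model, loading_info = AutoModelForNextSentencePrediction.from_pretrained(
...     "google-bert/bert-base-cased", output_loading_info=True
... )
>>> # loading_info is a dict covering missing keys, unexpected keys, and error messages
```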

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from the saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In that case, though, you should consider whether using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) would be a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
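
Separately, the `revision` parameter above can pin the exact checkpoint for reproducibility. A minimal sketch using the default branch name:

```python
>>> from transformers import AutoModelForNextSentencePrediction

>>> # Pin the checkpoint to a specific git revision (branch, tag, or commit id).
>>> model = AutoModelForNextSentencePrediction.from_pretrained(
...     "google-bert/bert-base-cased", revision="main"
... )
```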

### AutoModelForTokenClassification[[transformers.AutoModelForTokenClassification]]

#### transformers.AutoModelForTokenClassification[[transformers.AutoModelForTokenClassification]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2067)

This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForTokenClassification.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - [AlbertConfig](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertConfig) configuration class: [AlbertForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForTokenClassification) (ALBERT model)
  - `ApertusConfig` configuration class: `ApertusForTokenClassification` (Apertus model)
  - `ArceeConfig` configuration class: `ArceeForTokenClassification` (Arcee model)
  - [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) configuration class: [BertForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForTokenClassification) (BERT model)
  - [BigBirdConfig](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdConfig) configuration class: [BigBirdForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForTokenClassification) (BigBird model)
  - [BioGptConfig](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptConfig) configuration class: [BioGptForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptForTokenClassification) (BioGpt model)
  - `BloomConfig` configuration class: `BloomForTokenClassification` (BLOOM model)
  - `BrosConfig` configuration class: `BrosForTokenClassification` (BROS model)
  - `CamembertConfig` configuration class: `CamembertForTokenClassification` (CamemBERT model)
  - `CanineConfig` configuration class: `CanineForTokenClassification` (CANINE model)
  - [ConvBertConfig](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertConfig) configuration class: [ConvBertForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertForTokenClassification) (ConvBERT model)
  - `Data2VecTextConfig` configuration class: `Data2VecTextForTokenClassification` (Data2VecText model)
  - [DebertaConfig](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaConfig) configuration class: [DebertaForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaForTokenClassification) (DeBERTa model)
  - [DebertaV2Config](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Config) configuration class: [DebertaV2ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForTokenClassification) (DeBERTa-v2 model)
  - [DeepseekV3Config](/docs/transformers/v5.5.1/ko/model_doc/deepseek_v3#transformers.DeepseekV3Config) configuration class: `DeepseekV3ForTokenClassification` (DeepSeek-V3 model)
  - `DiffLlamaConfig` configuration class: `DiffLlamaForTokenClassification` (DiffLlama model)
  - `DistilBertConfig` configuration class: `DistilBertForTokenClassification` (DistilBERT model)
  - [ElectraConfig](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraConfig) configuration class: [ElectraForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForTokenClassification) (ELECTRA model)
  - `ErnieConfig` configuration class: `ErnieForTokenClassification` (ERNIE model)
  - [EsmConfig](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmConfig) configuration class: [EsmForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmForTokenClassification) (ESM model)
  - `EuroBertConfig` configuration class: `EuroBertForTokenClassification` (EuroBERT model)
  - [Exaone4Config](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4Config) configuration class: [Exaone4ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4ForTokenClassification) (EXAONE-4.0 model)
  - `FNetConfig` configuration class: `FNetForTokenClassification` (FNet model)
  - `FalconConfig` configuration class: `FalconForTokenClassification` (Falcon model)
  - `FlaubertConfig` configuration class: `FlaubertForTokenClassification` (FlauBERT model)
  - `FunnelConfig` configuration class: `FunnelForTokenClassification` (Funnel Transformer model)
  - [GPT2Config](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Config) configuration class: [GPT2ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2ForTokenClassification) (OpenAI GPT-2 model)
  - `GPTBigCodeConfig` configuration class: `GPTBigCodeForTokenClassification` (GPTBigCode model)
  - `GPTNeoConfig` configuration class: `GPTNeoForTokenClassification` (GPT Neo model)
  - `GPTNeoXConfig` configuration class: `GPTNeoXForTokenClassification` (GPT NeoX model)
  - [Gemma2Config](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2Config) configuration class: [Gemma2ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2ForTokenClassification) (Gemma2 model)
  - [GemmaConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaConfig) configuration class: [GemmaForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaForTokenClassification) (Gemma model)
  - `Glm4Config` configuration class: `Glm4ForTokenClassification` (GLM4 model)
  - `GlmConfig` configuration class: `GlmForTokenClassification` (GLM model)
  - `GptOssConfig` configuration class: `GptOssForTokenClassification` (GptOss model)
  - `HeliumConfig` configuration class: `HeliumForTokenClassification` (Helium model)
  - `IBertConfig` configuration class: `IBertForTokenClassification` (I-BERT model)
  - `JinaEmbeddingsV3Config` configuration class: `JinaEmbeddingsV3ForTokenClassification` (JinaEmbeddingsV3 model)
  - `LayoutLMConfig` configuration class: `LayoutLMForTokenClassification` (LayoutLM model)
  - `LayoutLMv2Config` configuration class: `LayoutLMv2ForTokenClassification` (LayoutLMv2 model)
  - `LayoutLMv3Config` configuration class: `LayoutLMv3ForTokenClassification` (LayoutLMv3 model)
  - `LiltConfig` configuration class: `LiltForTokenClassification` (LiLT model)
  - [LlamaConfig](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaConfig) configuration class: `LlamaForTokenClassification` (LLaMA model)
  - `LongformerConfig` configuration class: `LongformerForTokenClassification` (Longformer model)
  - `LukeConfig` configuration class: `LukeForTokenClassification` (LUKE model)
  - `MPNetConfig` configuration class: `MPNetForTokenClassification` (MPNet model)
  - `MT5Config` configuration class: `MT5ForTokenClassification` (MT5 model)
  - `MarkupLMConfig` configuration class: `MarkupLMForTokenClassification` (MarkupLM model)
  - `MegatronBertConfig` configuration class: `MegatronBertForTokenClassification` (Megatron-BERT model)
  - `MiniMaxConfig` configuration class: `MiniMaxForTokenClassification` (MiniMax model)
  - `Ministral3Config` configuration class: `Ministral3ForTokenClassification` (Ministral3 model)
  - `MinistralConfig` configuration class: `MinistralForTokenClassification` (Ministral model)
  - `Mistral4Config` configuration class: `Mistral4ForTokenClassification` (Mistral4 model)
  - [MistralConfig](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralConfig) configuration class: [MistralForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralForTokenClassification) (Mistral model)
  - `MixtralConfig` configuration class: `MixtralForTokenClassification` (Mixtral model)
  - `MobileBertConfig` configuration class: `MobileBertForTokenClassification` (MobileBERT model)
  - `ModernBertConfig` configuration class: `ModernBertForTokenClassification` (ModernBERT model)
  - `ModernVBertConfig` configuration class: `ModernVBertForTokenClassification` (ModernVBert model)
  - `MptConfig` configuration class: `MptForTokenClassification` (MPT model)
  - `MraConfig` configuration class: `MraForTokenClassification` (MRA model)
  - `NemotronConfig` configuration class: `NemotronForTokenClassification` (Nemotron model)
  - `NomicBertConfig` configuration class: `NomicBertForTokenClassification` (NomicBERT model)
  - `NystromformerConfig` configuration class: `NystromformerForTokenClassification` (Nyströmformer model)
  - `PersimmonConfig` configuration class: `PersimmonForTokenClassification` (Persimmon model)
  - `Phi3Config` configuration class: `Phi3ForTokenClassification` (Phi3 model)
  - `PhiConfig` configuration class: `PhiForTokenClassification` (Phi model)
  - `Qwen2Config` configuration class: `Qwen2ForTokenClassification` (Qwen2 model)
  - `Qwen2MoeConfig` configuration class: `Qwen2MoeForTokenClassification` (Qwen2MoE model)
  - `Qwen3Config` configuration class: `Qwen3ForTokenClassification` (Qwen3 model)
  - `Qwen3MoeConfig` configuration class: `Qwen3MoeForTokenClassification` (Qwen3MoE model)
  - `Qwen3NextConfig` configuration class: `Qwen3NextForTokenClassification` (Qwen3Next model)
  - `RemBertConfig` configuration class: `RemBertForTokenClassification` (RemBERT model)
  - `RoCBertConfig` configuration class: `RoCBertForTokenClassification` (RoCBert model)
  - `RoFormerConfig` configuration class: `RoFormerForTokenClassification` (RoFormer model)
  - [RobertaConfig](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaConfig) configuration class: [RobertaForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForTokenClassification) (RoBERTa model)
  - `RobertaPreLayerNormConfig` configuration class: `RobertaPreLayerNormForTokenClassification` (RoBERTa-PreLayerNorm model)
  - `SeedOssConfig` configuration class: `SeedOssForTokenClassification` (SeedOss model)
  - `SmolLM3Config` configuration class: `SmolLM3ForTokenClassification` (SmolLM3 model)
  - `SqueezeBertConfig` configuration class: `SqueezeBertForTokenClassification` (SqueezeBERT model)
  - `StableLmConfig` configuration class: `StableLmForTokenClassification` (StableLm model)
  - `Starcoder2Config` configuration class: `Starcoder2ForTokenClassification` (Starcoder2 model)
  - `T5Config` configuration class: `T5ForTokenClassification` (T5 model)
  - `T5Gemma2Config` configuration class: `T5Gemma2ForTokenClassification` (T5Gemma2 model)
  - `T5GemmaConfig` configuration class: `T5GemmaForTokenClassification` (T5Gemma model)
  - `UMT5Config` configuration class: `UMT5ForTokenClassification` (UMT5 model)
  - `XLMConfig` configuration class: `XLMForTokenClassification` (XLM model)
  - `XLMRobertaConfig` configuration class: `XLMRobertaForTokenClassification` (XLM-RoBERTa model)
  - `XLMRobertaXLConfig` configuration class: `XLMRobertaXLForTokenClassification` (XLM-RoBERTa-XL model)
  - `XLNetConfig` configuration class: `XLNetForTokenClassification` (XLNet model)
  - `XmodConfig` configuration class: `XmodForTokenClassification` (X-MOD model)
  - `YosoConfig` configuration class: `YosoForTokenClassification` (YOSO model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a token classification head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForTokenClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForTokenClassification.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:

- [AlbertConfig](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertConfig) configuration class: [AlbertForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForTokenClassification) (ALBERT model)
- `ApertusConfig` configuration class: `ApertusForTokenClassification` (Apertus model)
- `ArceeConfig` configuration class: `ArceeForTokenClassification` (Arcee model)
- [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) configuration class: [BertForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForTokenClassification) (BERT model)
- [BigBirdConfig](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdConfig) configuration class: [BigBirdForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForTokenClassification) (BigBird model)
- [BioGptConfig](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptConfig) configuration class: [BioGptForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptForTokenClassification) (BioGpt model)
- `BloomConfig` configuration class: `BloomForTokenClassification` (BLOOM model)
- `BrosConfig` configuration class: `BrosForTokenClassification` (BROS model)
- `CamembertConfig` configuration class: `CamembertForTokenClassification` (CamemBERT model)
- `CanineConfig` configuration class: `CanineForTokenClassification` (CANINE model)
- [ConvBertConfig](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertConfig) configuration class: [ConvBertForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertForTokenClassification) (ConvBERT model)
- `Data2VecTextConfig` configuration class: `Data2VecTextForTokenClassification` (Data2VecText model)
- [DebertaConfig](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaConfig) configuration class: [DebertaForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaForTokenClassification) (DeBERTa model)
- [DebertaV2Config](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Config) configuration class: [DebertaV2ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForTokenClassification) (DeBERTa-v2 model)
- [DeepseekV3Config](/docs/transformers/v5.5.1/ko/model_doc/deepseek_v3#transformers.DeepseekV3Config) configuration class: `DeepseekV3ForTokenClassification` (DeepSeek-V3 model)
- `DiffLlamaConfig` configuration class: `DiffLlamaForTokenClassification` (DiffLlama model)
- `DistilBertConfig` configuration class: `DistilBertForTokenClassification` (DistilBERT model)
- [ElectraConfig](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraConfig) configuration class: [ElectraForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForTokenClassification) (ELECTRA model)
- `ErnieConfig` configuration class: `ErnieForTokenClassification` (ERNIE model)
- [EsmConfig](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmConfig) configuration class: [EsmForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmForTokenClassification) (ESM model)
- `EuroBertConfig` configuration class: `EuroBertForTokenClassification` (EuroBERT model)
- [Exaone4Config](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4Config) configuration class: [Exaone4ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4ForTokenClassification) (EXAONE-4.0 model)
- `FNetConfig` configuration class: `FNetForTokenClassification` (FNet model)
- `FalconConfig` configuration class: `FalconForTokenClassification` (Falcon model)
- `FlaubertConfig` configuration class: `FlaubertForTokenClassification` (FlauBERT model)
- `FunnelConfig` configuration class: `FunnelForTokenClassification` (Funnel Transformer model)
- [GPT2Config](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Config) configuration class: [GPT2ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2ForTokenClassification) (OpenAI GPT-2 model)
- `GPTBigCodeConfig` configuration class: `GPTBigCodeForTokenClassification` (GPTBigCode model)
- `GPTNeoConfig` configuration class: `GPTNeoForTokenClassification` (GPT Neo model)
- `GPTNeoXConfig` configuration class: `GPTNeoXForTokenClassification` (GPT NeoX model)
- [Gemma2Config](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2Config) configuration class: [Gemma2ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2ForTokenClassification) (Gemma2 model)
- [GemmaConfig](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaConfig) configuration class: [GemmaForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaForTokenClassification) (Gemma model)
- `Glm4Config` configuration class: `Glm4ForTokenClassification` (GLM4 model)
- `GlmConfig` configuration class: `GlmForTokenClassification` (GLM model)
- `GptOssConfig` configuration class: `GptOssForTokenClassification` (GptOss model)
- `HeliumConfig` configuration class: `HeliumForTokenClassification` (Helium model)
- `IBertConfig` configuration class: `IBertForTokenClassification` (I-BERT model)
- `JinaEmbeddingsV3Config` configuration class: `JinaEmbeddingsV3ForTokenClassification` (JinaEmbeddingsV3 model)
- `LayoutLMConfig` configuration class: `LayoutLMForTokenClassification` (LayoutLM model)
- `LayoutLMv2Config` configuration class: `LayoutLMv2ForTokenClassification` (LayoutLMv2 model)
- `LayoutLMv3Config` configuration class: `LayoutLMv3ForTokenClassification` (LayoutLMv3 model)
- `LiltConfig` configuration class: `LiltForTokenClassification` (LiLT model)
- [LlamaConfig](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaConfig) configuration class: `LlamaForTokenClassification` (LLaMA model)
- `LongformerConfig` configuration class: `LongformerForTokenClassification` (Longformer model)
- `LukeConfig` configuration class: `LukeForTokenClassification` (LUKE model)
- `MPNetConfig` configuration class: `MPNetForTokenClassification` (MPNet model)
- `MT5Config` configuration class: `MT5ForTokenClassification` (MT5 model)
- `MarkupLMConfig` configuration class: `MarkupLMForTokenClassification` (MarkupLM model)
- `MegatronBertConfig` configuration class: `MegatronBertForTokenClassification` (Megatron-BERT model)
- `MiniMaxConfig` configuration class: `MiniMaxForTokenClassification` (MiniMax model)
- `Ministral3Config` configuration class: `Ministral3ForTokenClassification` (Ministral3 model)
- `MinistralConfig` configuration class: `MinistralForTokenClassification` (Ministral model)
- `Mistral4Config` configuration class: `Mistral4ForTokenClassification` (Mistral4 model)
- [MistralConfig](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralConfig) configuration class: [MistralForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralForTokenClassification) (Mistral model)
- `MixtralConfig` configuration class: `MixtralForTokenClassification` (Mixtral model)
- `MobileBertConfig` configuration class: `MobileBertForTokenClassification` (MobileBERT model)
- `ModernBertConfig` configuration class: `ModernBertForTokenClassification` (ModernBERT model)
- `ModernVBertConfig` configuration class: `ModernVBertForTokenClassification` (ModernVBert model)
- `MptConfig` configuration class: `MptForTokenClassification` (MPT model)
- `MraConfig` configuration class: `MraForTokenClassification` (MRA model)
- `NemotronConfig` configuration class: `NemotronForTokenClassification` (Nemotron model)
- `NomicBertConfig` configuration class: `NomicBertForTokenClassification` (NomicBERT model)
- `NystromformerConfig` configuration class: `NystromformerForTokenClassification` (Nyströmformer model)
- `PersimmonConfig` configuration class: `PersimmonForTokenClassification` (Persimmon model)
- `Phi3Config` configuration class: `Phi3ForTokenClassification` (Phi3 model)
- `PhiConfig` configuration class: `PhiForTokenClassification` (Phi model)
- `Qwen2Config` configuration class: `Qwen2ForTokenClassification` (Qwen2 model)
- `Qwen2MoeConfig` configuration class: `Qwen2MoeForTokenClassification` (Qwen2MoE model)
- `Qwen3Config` configuration class: `Qwen3ForTokenClassification` (Qwen3 model)
- `Qwen3MoeConfig` configuration class: `Qwen3MoeForTokenClassification` (Qwen3MoE model)
- `Qwen3NextConfig` configuration class: `Qwen3NextForTokenClassification` (Qwen3Next model)
- `RemBertConfig` configuration class: `RemBertForTokenClassification` (RemBERT model)
- `RoCBertConfig` configuration class: `RoCBertForTokenClassification` (RoCBert model)
- `RoFormerConfig` configuration class: `RoFormerForTokenClassification` (RoFormer model)
- [RobertaConfig](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaConfig) configuration class: [RobertaForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForTokenClassification) (RoBERTa model)
- `RobertaPreLayerNormConfig` configuration class: `RobertaPreLayerNormForTokenClassification` (RoBERTa-PreLayerNorm model)
- `SeedOssConfig` configuration class: `SeedOssForTokenClassification` (SeedOss model)
- `SmolLM3Config` configuration class: `SmolLM3ForTokenClassification` (SmolLM3 model)
- `SqueezeBertConfig` configuration class: `SqueezeBertForTokenClassification` (SqueezeBERT model)
- `StableLmConfig` configuration class: `StableLmForTokenClassification` (StableLm model)
- `Starcoder2Config` configuration class: `Starcoder2ForTokenClassification` (Starcoder2 model)
- `T5Config` configuration class: `T5ForTokenClassification` (T5 model)
- `T5Gemma2Config` configuration class: `T5Gemma2ForTokenClassification` (T5Gemma2 model)
- `T5GemmaConfig` configuration class: `T5GemmaForTokenClassification` (T5Gemma model)
- `UMT5Config` configuration class: `UMT5ForTokenClassification` (UMT5 model)
- `XLMConfig` configuration class: `XLMForTokenClassification` (XLM model)
- `XLMRobertaConfig` configuration class: `XLMRobertaForTokenClassification` (XLM-RoBERTa model)
- `XLMRobertaXLConfig` configuration class: `XLMRobertaXLForTokenClassification` (XLM-RoBERTa-XL model)
- `XLNetConfig` configuration class: `XLNetForTokenClassification` (XLNet model)
- `XmodConfig` configuration class: `XmodForTokenClassification` (X-MOD model)
- `YosoConfig` configuration class: `YosoForTokenClassification` (YOSO model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
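A common pattern with `from_config()` is adjusting configuration attributes before building the model. A minimal sketch setting the number of token-level labels (the value 9 is an arbitrary example):

```python
>>> from transformers import AutoConfig, AutoModelForTokenClassification

>>> # Override a config attribute, then build the (randomly initialized) model from it.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased", num_labels=9)
>>> model = AutoModelForTokenClassification.from_config(config)
>>> model.config.num_labels
9
```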
#### from_pretrained[[transformers.AutoModelForTokenClassification.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **albert** -- [AlbertForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForTokenClassification) (ALBERT model)
- **apertus** -- `ApertusForTokenClassification` (Apertus model)
- **arcee** -- `ArceeForTokenClassification` (Arcee model)
- **bert** -- [BertForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForTokenClassification) (BERT model)
- **big_bird** -- [BigBirdForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForTokenClassification) (BigBird model)
- **biogpt** -- [BioGptForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/biogpt#transformers.BioGptForTokenClassification) (BioGpt model)
- **bloom** -- `BloomForTokenClassification` (BLOOM model)
- **bros** -- `BrosForTokenClassification` (BROS model)
- **camembert** -- `CamembertForTokenClassification` (CamemBERT model)
- **canine** -- `CanineForTokenClassification` (CANINE model)
- **convbert** -- [ConvBertForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertForTokenClassification) (ConvBERT model)
- **data2vec-text** -- `Data2VecTextForTokenClassification` (Data2VecText model)
- **deberta** -- [DebertaForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaForTokenClassification) (DeBERTa model)
- **deberta-v2** -- [DebertaV2ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForTokenClassification) (DeBERTa-v2 model)
- **deepseek_v3** -- `DeepseekV3ForTokenClassification` (DeepSeek-V3 model)
- **diffllama** -- `DiffLlamaForTokenClassification` (DiffLlama model)
- **distilbert** -- `DistilBertForTokenClassification` (DistilBERT model)
- **electra** -- [ElectraForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForTokenClassification) (ELECTRA model)
- **ernie** -- `ErnieForTokenClassification` (ERNIE model)
- **esm** -- [EsmForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/esm#transformers.EsmForTokenClassification) (ESM model)
- **eurobert** -- `EuroBertForTokenClassification` (EuroBERT model)
- **exaone4** -- [Exaone4ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4ForTokenClassification) (EXAONE-4.0 model)
- **falcon** -- `FalconForTokenClassification` (Falcon model)
- **flaubert** -- `FlaubertForTokenClassification` (FlauBERT model)
- **fnet** -- `FNetForTokenClassification` (FNet model)
- **funnel** -- `FunnelForTokenClassification` (Funnel Transformer model)
- **gemma** -- [GemmaForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/gemma#transformers.GemmaForTokenClassification) (Gemma model)
- **gemma2** -- [Gemma2ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/gemma2#transformers.Gemma2ForTokenClassification) (Gemma2 model)
- **glm** -- `GlmForTokenClassification` (GLM model)
- **glm4** -- `Glm4ForTokenClassification` (GLM4 model)
- **gpt-sw3** -- [GPT2ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2ForTokenClassification) (GPT-Sw3 model)
- **gpt2** -- [GPT2ForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2ForTokenClassification) (OpenAI GPT-2 model)
- **gpt_bigcode** -- `GPTBigCodeForTokenClassification` (GPTBigCode model)
- **gpt_neo** -- `GPTNeoForTokenClassification` (GPT Neo model)
- **gpt_neox** -- `GPTNeoXForTokenClassification` (GPT NeoX model)
- **gpt_oss** -- `GptOssForTokenClassification` (GptOss model)
- **helium** -- `HeliumForTokenClassification` (Helium model)
- **ibert** -- `IBertForTokenClassification` (I-BERT model)
- **jina_embeddings_v3** -- `JinaEmbeddingsV3ForTokenClassification` (JinaEmbeddingsV3 model)
- **layoutlm** -- `LayoutLMForTokenClassification` (LayoutLM model)
- **layoutlmv2** -- `LayoutLMv2ForTokenClassification` (LayoutLMv2 model)
- **layoutlmv3** -- `LayoutLMv3ForTokenClassification` (LayoutLMv3 model)
- **lilt** -- `LiltForTokenClassification` (LiLT model)
- **llama** -- `LlamaForTokenClassification` (LLaMA model)
- **longformer** -- `LongformerForTokenClassification` (Longformer model)
- **luke** -- `LukeForTokenClassification` (LUKE model)
- **markuplm** -- `MarkupLMForTokenClassification` (MarkupLM model)
- **megatron-bert** -- `MegatronBertForTokenClassification` (Megatron-BERT model)
- **minimax** -- `MiniMaxForTokenClassification` (MiniMax model)
- **ministral** -- `MinistralForTokenClassification` (Ministral model)
- **ministral3** -- `Ministral3ForTokenClassification` (Ministral3 model)
- **mistral** -- [MistralForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralForTokenClassification) (Mistral model)
- **mistral4** -- `Mistral4ForTokenClassification` (Mistral4 model)
- **mixtral** -- `MixtralForTokenClassification` (Mixtral model)
- **mobilebert** -- `MobileBertForTokenClassification` (MobileBERT model)
- **modernbert** -- `ModernBertForTokenClassification` (ModernBERT model)
- **modernvbert** -- `ModernVBertForTokenClassification` (ModernVBert model)
- **mpnet** -- `MPNetForTokenClassification` (MPNet model)
- **mpt** -- `MptForTokenClassification` (MPT model)
- **mra** -- `MraForTokenClassification` (MRA model)
- **mt5** -- `MT5ForTokenClassification` (MT5 model)
- **nemotron** -- `NemotronForTokenClassification` (Nemotron model)
- **nomic_bert** -- `NomicBertForTokenClassification` (NomicBERT model)
- **nystromformer** -- `NystromformerForTokenClassification` (Nyströmformer model)
- **persimmon** -- `PersimmonForTokenClassification` (Persimmon model)
- **phi** -- `PhiForTokenClassification` (Phi model)
- **phi3** -- `Phi3ForTokenClassification` (Phi3 model)
- **qwen2** -- `Qwen2ForTokenClassification` (Qwen2 model)
- **qwen2_moe** -- `Qwen2MoeForTokenClassification` (Qwen2MoE model)
- **qwen3** -- `Qwen3ForTokenClassification` (Qwen3 model)
- **qwen3_moe** -- `Qwen3MoeForTokenClassification` (Qwen3MoE model)
- **qwen3_next** -- `Qwen3NextForTokenClassification` (Qwen3Next model)
- **rembert** -- `RemBertForTokenClassification` (RemBERT model)
- **roberta** -- [RobertaForTokenClassification](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForTokenClassification) (RoBERTa model)
- **roberta-prelayernorm** -- `RobertaPreLayerNormForTokenClassification` (RoBERTa-PreLayerNorm model)
- **roc_bert** -- `RoCBertForTokenClassification` (RoCBert model)
- **roformer** -- `RoFormerForTokenClassification` (RoFormer model)
- **seed_oss** -- `SeedOssForTokenClassification` (SeedOss model)
- **smollm3** -- `SmolLM3ForTokenClassification` (SmolLM3 model)
- **squeezebert** -- `SqueezeBertForTokenClassification` (SqueezeBERT model)
- **stablelm** -- `StableLmForTokenClassification` (StableLm model)
- **starcoder2** -- `Starcoder2ForTokenClassification` (Starcoder2 model)
- **t5** -- `T5ForTokenClassification` (T5 model)
- **t5gemma** -- `T5GemmaForTokenClassification` (T5Gemma model)
- **t5gemma2** -- `T5Gemma2ForTokenClassification` (T5Gemma2 model)
- **umt5** -- `UMT5ForTokenClassification` (UMT5 model)
- **xlm** -- `XLMForTokenClassification` (XLM model)
- **xlm-roberta** -- `XLMRobertaForTokenClassification` (XLM-RoBERTa model)
- **xlm-roberta-xl** -- `XLMRobertaXLForTokenClassification` (XLM-RoBERTa-XL model)
- **xlnet** -- `XLNetForTokenClassification` (XLNet model)
- **xmod** -- `XmodForTokenClassification` (X-MOD model)
- **yoso** -- `YosoForTokenClassification` (YOSO model)

The model is set in evaluation mode by default using `model.eval()` (so, for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForTokenClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
```
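
`pretrained_model_name_or_path` also accepts a local directory (see the parameters below). A round-trip sketch using a hypothetical `./my_model_directory/`:

```python
>>> from transformers import AutoModelForTokenClassification

>>> model = AutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Save to a local directory, then reload from that path instead of the Hub.
>>> model.save_pretrained("./my_model_directory/")
>>> model = AutoModelForTokenClassification.from_pretrained("./my_model_directory/")
```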

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from the saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In that case, though, you should consider whether using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) would be a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
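
A hedged sketch of the first case above: with an explicit `config`, all configuration updates are assumed to be done beforehand. The 5-label head below is freshly initialized, so a warning about newly initialized weights is expected:

```python
>>> from transformers import AutoConfig, AutoModelForTokenClassification

>>> # Explicit config: do all configuration changes up front, then pass it in.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased", num_labels=5)
>>> model = AutoModelForTokenClassification.from_pretrained(
...     "google-bert/bert-base-cased", config=config
... )
>>> model.config.num_labels
5
```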

### AutoModelForQuestionAnswering[[transformers.AutoModelForQuestionAnswering]]

#### transformers.AutoModelForQuestionAnswering[[transformers.AutoModelForQuestionAnswering]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2027)

This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForQuestionAnswering.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - [AlbertConfig](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertConfig) configuration class: [AlbertForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForQuestionAnswering) (ALBERT model)
  - `ArceeConfig` configuration class: `ArceeForQuestionAnswering` (Arcee model)
  - [BartConfig](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartConfig) configuration class: [BartForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForQuestionAnswering) (BART model)
  - [BertConfig](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertConfig) configuration class: [BertForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForQuestionAnswering) (BERT model)
  - [BigBirdConfig](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdConfig) configuration class: [BigBirdForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForQuestionAnswering) (BigBird model)
  - `BigBirdPegasusConfig` configuration class: `BigBirdPegasusForQuestionAnswering` (BigBird-Pegasus model)
  - `BloomConfig` configuration class: `BloomForQuestionAnswering` (BLOOM model)
  - `CamembertConfig` configuration class: `CamembertForQuestionAnswering` (CamemBERT model)
  - `CanineConfig` configuration class: `CanineForQuestionAnswering` (CANINE model)
  - [ConvBertConfig](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertConfig) configuration class: [ConvBertForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertForQuestionAnswering) (ConvBERT model)
  - `Data2VecTextConfig` configuration class: `Data2VecTextForQuestionAnswering` (Data2VecText model)
  - [DebertaConfig](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaConfig) configuration class: [DebertaForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaForQuestionAnswering) (DeBERTa model)
  - [DebertaV2Config](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2Config) configuration class: [DebertaV2ForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForQuestionAnswering) (DeBERTa-v2 model)
  - `DiffLlamaConfig` configuration class: `DiffLlamaForQuestionAnswering` (DiffLlama model)
  - `DistilBertConfig` configuration class: `DistilBertForQuestionAnswering` (DistilBERT model)
  - [ElectraConfig](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraConfig) configuration class: [ElectraForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForQuestionAnswering) (ELECTRA model)
  - `ErnieConfig` configuration class: `ErnieForQuestionAnswering` (ERNIE model)
  - [Exaone4Config](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4Config) configuration class: [Exaone4ForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4ForQuestionAnswering) (EXAONE-4.0 model)
  - `FNetConfig` configuration class: `FNetForQuestionAnswering` (FNet model)
  - `FalconConfig` configuration class: `FalconForQuestionAnswering` (Falcon model)
  - `FlaubertConfig` configuration class: `FlaubertForQuestionAnsweringSimple` (FlauBERT model)
  - `FunnelConfig` configuration class: `FunnelForQuestionAnswering` (Funnel Transformer model)
  - [GPT2Config](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2Config) configuration class: [GPT2ForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2ForQuestionAnswering) (OpenAI GPT-2 model)
  - `GPTJConfig` configuration class: `GPTJForQuestionAnswering` (GPT-J model)
  - `GPTNeoConfig` configuration class: `GPTNeoForQuestionAnswering` (GPT Neo model)
  - `GPTNeoXConfig` configuration class: `GPTNeoXForQuestionAnswering` (GPT NeoX model)
  - `IBertConfig` configuration class: `IBertForQuestionAnswering` (I-BERT model)
  - `JinaEmbeddingsV3Config` configuration class: `JinaEmbeddingsV3ForQuestionAnswering` (JinaEmbeddingsV3 model)
  - `LEDConfig` configuration class: `LEDForQuestionAnswering` (LED model)
  - `LayoutLMv2Config` configuration class: `LayoutLMv2ForQuestionAnswering` (LayoutLMv2 model)
  - `LayoutLMv3Config` configuration class: `LayoutLMv3ForQuestionAnswering` (LayoutLMv3 model)
  - `LiltConfig` configuration class: `LiltForQuestionAnswering` (LiLT model)
  - [LlamaConfig](/docs/transformers/v5.5.1/ko/model_doc/llama2#transformers.LlamaConfig) configuration class: `LlamaForQuestionAnswering` (LLaMA model)
  - `LongformerConfig` configuration class: `LongformerForQuestionAnswering` (Longformer model)
  - `LukeConfig` configuration class: `LukeForQuestionAnswering` (LUKE model)
  - `LxmertConfig` configuration class: `LxmertForQuestionAnswering` (LXMERT model)
  - `MBartConfig` configuration class: `MBartForQuestionAnswering` (mBART model)
  - `MPNetConfig` configuration class: `MPNetForQuestionAnswering` (MPNet model)
  - `MT5Config` configuration class: `MT5ForQuestionAnswering` (MT5 model)
  - `MarkupLMConfig` configuration class: `MarkupLMForQuestionAnswering` (MarkupLM model)
  - `MegatronBertConfig` configuration class: `MegatronBertForQuestionAnswering` (Megatron-BERT model)
  - `MiniMaxConfig` configuration class: `MiniMaxForQuestionAnswering` (MiniMax model)
  - `Ministral3Config` configuration class: `Ministral3ForQuestionAnswering` (Ministral3 model)
  - `MinistralConfig` configuration class: `MinistralForQuestionAnswering` (Ministral model)
  - [MistralConfig](/docs/transformers/v5.5.1/ko/model_doc/mistral#transformers.MistralConfig) configuration class: `MistralForQuestionAnswering` (Mistral model)
  - `MixtralConfig` configuration class: `MixtralForQuestionAnswering` (Mixtral model)
  - `MobileBertConfig` configuration class: `MobileBertForQuestionAnswering` (MobileBERT model)
  - `ModernBertConfig` configuration class: `ModernBertForQuestionAnswering` (ModernBERT model)
  - `MptConfig` configuration class: `MptForQuestionAnswering` (MPT model)
  - `MraConfig` configuration class: `MraForQuestionAnswering` (MRA model)
  - `MvpConfig` configuration class: `MvpForQuestionAnswering` (MVP model)
  - `NemotronConfig` configuration class: `NemotronForQuestionAnswering` (Nemotron model)
  - `NystromformerConfig` configuration class: `NystromformerForQuestionAnswering` (Nyströmformer model)
  - `OPTConfig` configuration class: `OPTForQuestionAnswering` (OPT model)
  - `Qwen2Config` configuration class: `Qwen2ForQuestionAnswering` (Qwen2 model)
  - `Qwen2MoeConfig` configuration class: `Qwen2MoeForQuestionAnswering` (Qwen2MoE model)
  - `Qwen3Config` configuration class: `Qwen3ForQuestionAnswering` (Qwen3 model)
  - `Qwen3MoeConfig` configuration class: `Qwen3MoeForQuestionAnswering` (Qwen3MoE model)
  - `Qwen3NextConfig` configuration class: `Qwen3NextForQuestionAnswering` (Qwen3Next model)
  - `ReformerConfig` configuration class: `ReformerForQuestionAnswering` (Reformer model)
  - `RemBertConfig` configuration class: `RemBertForQuestionAnswering` (RemBERT model)
  - `RoCBertConfig` configuration class: `RoCBertForQuestionAnswering` (RoCBert model)
  - `RoFormerConfig` configuration class: `RoFormerForQuestionAnswering` (RoFormer model)
  - [RobertaConfig](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaConfig) configuration class: [RobertaForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForQuestionAnswering) (RoBERTa model)
  - `RobertaPreLayerNormConfig` configuration class: `RobertaPreLayerNormForQuestionAnswering` (RoBERTa-PreLayerNorm model)
  - `SeedOssConfig` configuration class: `SeedOssForQuestionAnswering` (SeedOss model)
  - `SmolLM3Config` configuration class: `SmolLM3ForQuestionAnswering` (SmolLM3 model)
  - `SplinterConfig` configuration class: `SplinterForQuestionAnswering` (Splinter model)
  - `SqueezeBertConfig` configuration class: `SqueezeBertForQuestionAnswering` (SqueezeBERT model)
  - `T5Config` configuration class: `T5ForQuestionAnswering` (T5 model)
  - `UMT5Config` configuration class: `UMT5ForQuestionAnswering` (UMT5 model)
  - `XLMConfig` configuration class: `XLMForQuestionAnsweringSimple` (XLM model)
  - `XLMRobertaConfig` configuration class: `XLMRobertaForQuestionAnswering` (XLM-RoBERTa model)
  - `XLMRobertaXLConfig` configuration class: `XLMRobertaXLForQuestionAnswering` (XLM-RoBERTa-XL model)
  - `XLNetConfig` configuration class: `XLNetForQuestionAnsweringSimple` (XLNet model)
  - `XmodConfig` configuration class: `XmodForQuestionAnswering` (X-MOD model)
  - `YosoConfig` configuration class: `YosoForQuestionAnswering` (YOSO model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, SDPA is used if available (torch>=2.1.1); otherwise the manual `"eager"` implementation is used.

Instantiates one of the model classes of the library (with a question answering head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForQuestionAnswering

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForQuestionAnswering.from_config(config)
```

#### from_pretrained[[transformers.AutoModelForQuestionAnswering.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **albert** -- [AlbertForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/albert#transformers.AlbertForQuestionAnswering) (ALBERT model)
- **arcee** -- `ArceeForQuestionAnswering` (Arcee model)
- **bart** -- [BartForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/bart#transformers.BartForQuestionAnswering) (BART model)
- **bert** -- [BertForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/bert#transformers.BertForQuestionAnswering) (BERT model)
- **big_bird** -- [BigBirdForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/big_bird#transformers.BigBirdForQuestionAnswering) (BigBird model)
- **bigbird_pegasus** -- `BigBirdPegasusForQuestionAnswering` (BigBird-Pegasus model)
- **bloom** -- `BloomForQuestionAnswering` (BLOOM model)
- **camembert** -- `CamembertForQuestionAnswering` (CamemBERT model)
- **canine** -- `CanineForQuestionAnswering` (CANINE model)
- **convbert** -- [ConvBertForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/convbert#transformers.ConvBertForQuestionAnswering) (ConvBERT model)
- **data2vec-text** -- `Data2VecTextForQuestionAnswering` (Data2VecText model)
- **deberta** -- [DebertaForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/deberta#transformers.DebertaForQuestionAnswering) (DeBERTa model)
- **deberta-v2** -- [DebertaV2ForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/deberta-v2#transformers.DebertaV2ForQuestionAnswering) (DeBERTa-v2 model)
- **diffllama** -- `DiffLlamaForQuestionAnswering` (DiffLlama model)
- **distilbert** -- `DistilBertForQuestionAnswering` (DistilBERT model)
- **electra** -- [ElectraForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/electra#transformers.ElectraForQuestionAnswering) (ELECTRA model)
- **ernie** -- `ErnieForQuestionAnswering` (ERNIE model)
- **exaone4** -- [Exaone4ForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/exaone4#transformers.Exaone4ForQuestionAnswering) (EXAONE-4.0 model)
- **falcon** -- `FalconForQuestionAnswering` (Falcon model)
- **flaubert** -- `FlaubertForQuestionAnsweringSimple` (FlauBERT model)
- **fnet** -- `FNetForQuestionAnswering` (FNet model)
- **funnel** -- `FunnelForQuestionAnswering` (Funnel Transformer model)
- **gpt2** -- [GPT2ForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/gpt2#transformers.GPT2ForQuestionAnswering) (OpenAI GPT-2 model)
- **gpt_neo** -- `GPTNeoForQuestionAnswering` (GPT Neo model)
- **gpt_neox** -- `GPTNeoXForQuestionAnswering` (GPT NeoX model)
- **gptj** -- `GPTJForQuestionAnswering` (GPT-J model)
- **ibert** -- `IBertForQuestionAnswering` (I-BERT model)
- **jina_embeddings_v3** -- `JinaEmbeddingsV3ForQuestionAnswering` (JinaEmbeddingsV3 model)
- **layoutlmv2** -- `LayoutLMv2ForQuestionAnswering` (LayoutLMv2 model)
- **layoutlmv3** -- `LayoutLMv3ForQuestionAnswering` (LayoutLMv3 model)
- **led** -- `LEDForQuestionAnswering` (LED model)
- **lilt** -- `LiltForQuestionAnswering` (LiLT model)
- **llama** -- `LlamaForQuestionAnswering` (LLaMA model)
- **longformer** -- `LongformerForQuestionAnswering` (Longformer model)
- **luke** -- `LukeForQuestionAnswering` (LUKE model)
- **lxmert** -- `LxmertForQuestionAnswering` (LXMERT model)
- **markuplm** -- `MarkupLMForQuestionAnswering` (MarkupLM model)
- **mbart** -- `MBartForQuestionAnswering` (mBART model)
- **megatron-bert** -- `MegatronBertForQuestionAnswering` (Megatron-BERT model)
- **minimax** -- `MiniMaxForQuestionAnswering` (MiniMax model)
- **ministral** -- `MinistralForQuestionAnswering` (Ministral model)
- **ministral3** -- `Ministral3ForQuestionAnswering` (Ministral3 model)
- **mistral** -- `MistralForQuestionAnswering` (Mistral model)
- **mixtral** -- `MixtralForQuestionAnswering` (Mixtral model)
- **mobilebert** -- `MobileBertForQuestionAnswering` (MobileBERT model)
- **modernbert** -- `ModernBertForQuestionAnswering` (ModernBERT model)
- **mpnet** -- `MPNetForQuestionAnswering` (MPNet model)
- **mpt** -- `MptForQuestionAnswering` (MPT model)
- **mra** -- `MraForQuestionAnswering` (MRA model)
- **mt5** -- `MT5ForQuestionAnswering` (MT5 model)
- **mvp** -- `MvpForQuestionAnswering` (MVP model)
- **nemotron** -- `NemotronForQuestionAnswering` (Nemotron model)
- **nystromformer** -- `NystromformerForQuestionAnswering` (Nyströmformer model)
- **opt** -- `OPTForQuestionAnswering` (OPT model)
- **qwen2** -- `Qwen2ForQuestionAnswering` (Qwen2 model)
- **qwen2_moe** -- `Qwen2MoeForQuestionAnswering` (Qwen2MoE model)
- **qwen3** -- `Qwen3ForQuestionAnswering` (Qwen3 model)
- **qwen3_moe** -- `Qwen3MoeForQuestionAnswering` (Qwen3MoE model)
- **qwen3_next** -- `Qwen3NextForQuestionAnswering` (Qwen3Next model)
- **reformer** -- `ReformerForQuestionAnswering` (Reformer model)
- **rembert** -- `RemBertForQuestionAnswering` (RemBERT model)
- **roberta** -- [RobertaForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/roberta#transformers.RobertaForQuestionAnswering) (RoBERTa model)
- **roberta-prelayernorm** -- `RobertaPreLayerNormForQuestionAnswering` (RoBERTa-PreLayerNorm model)
- **roc_bert** -- `RoCBertForQuestionAnswering` (RoCBert model)
- **roformer** -- `RoFormerForQuestionAnswering` (RoFormer model)
- **seed_oss** -- `SeedOssForQuestionAnswering` (SeedOss model)
- **smollm3** -- `SmolLM3ForQuestionAnswering` (SmolLM3 model)
- **splinter** -- `SplinterForQuestionAnswering` (Splinter model)
- **squeezebert** -- `SqueezeBertForQuestionAnswering` (SqueezeBERT model)
- **t5** -- `T5ForQuestionAnswering` (T5 model)
- **umt5** -- `UMT5ForQuestionAnswering` (UMT5 model)
- **xlm** -- `XLMForQuestionAnsweringSimple` (XLM model)
- **xlm-roberta** -- `XLMRobertaForQuestionAnswering` (XLM-RoBERTa model)
- **xlm-roberta-xl** -- `XLMRobertaXLForQuestionAnswering` (XLM-RoBERTa-XL model)
- **xlnet** -- `XLNetForQuestionAnsweringSimple` (XLNet model)
- **xmod** -- `XmodForQuestionAnswering` (X-MOD model)
- **yoso** -- `YosoForQuestionAnswering` (YOSO model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForQuestionAnswering

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
```
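
As an end-to-end usage sketch, the answer span can be decoded from the model's start/end logits. This is a minimal example assuming the `deepset/roberta-base-squad2` checkpoint (a RoBERTa model fine-tuned for extractive QA); any checkpoint with a question answering head works the same way:

```python
import torch

from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Assumed checkpoint: a RoBERTa model fine-tuned on SQuAD 2.0.
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")

question = "Where do I live?"
context = "My name is Sarah and I live in London."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Most likely start/end token positions of the answer span.
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)  # "London" (assuming the model resolves the span correctly)
```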

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from the saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In that case, though, check whether using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id. Since we use a git-based system for storing models and other artifacts on huggingface.co, `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id. Since we use a git-based system for storing models and other artifacts on huggingface.co, `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be passed directly to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForTextEncoding[[transformers.AutoModelForTextEncoding]]

#### transformers.AutoModelForTextEncoding[[transformers.AutoModelForTextEncoding]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L1961)

## Computer vision[[computer-vision]]

The following auto classes are available for the computer vision tasks below.

### AutoModelForDepthEstimation[[transformers.AutoModelForDepthEstimation]]

#### transformers.AutoModelForDepthEstimation[[transformers.AutoModelForDepthEstimation]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2165)

This is a generic model class that will be instantiated as one of the model classes of the library (with a depth estimation head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForDepthEstimation.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `CHMv2Config` configuration class: `CHMv2ForDepthEstimation` (CHMv2 model)
  - `DPTConfig` configuration class: `DPTForDepthEstimation` (DPT model)
  - `DepthAnythingConfig` configuration class: `DepthAnythingForDepthEstimation` (Depth Anything model)
  - `DepthProConfig` configuration class: `DepthProForDepthEstimation` (DepthPro model)
  - `GLPNConfig` configuration class: `GLPNForDepthEstimation` (GLPN model)
  - `PromptDepthAnythingConfig` configuration class: `PromptDepthAnythingForDepthEstimation` (PromptDepthAnything model)
  - `ZoeDepthConfig` configuration class: `ZoeDepthForDepthEstimation` (ZoeDepth model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, SDPA is used if available (torch>=2.1.1); otherwise the manual `"eager"` implementation is used.

Instantiates one of the model classes of the library (with a depth estimation head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForDepthEstimation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("Intel/dpt-large")
>>> model = AutoModelForDepthEstimation.from_config(config)
```

#### from_pretrained[[transformers.AutoModelForDepthEstimation.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a depth estimation head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **chmv2** -- `CHMv2ForDepthEstimation` (CHMv2 model)
- **depth_anything** -- `DepthAnythingForDepthEstimation` (Depth Anything model)
- **depth_pro** -- `DepthProForDepthEstimation` (DepthPro model)
- **dpt** -- `DPTForDepthEstimation` (DPT model)
- **glpn** -- `GLPNForDepthEstimation` (GLPN model)
- **prompt_depth_anything** -- `PromptDepthAnythingForDepthEstimation` (PromptDepthAnything model)
- **zoedepth** -- `ZoeDepthForDepthEstimation` (ZoeDepth model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForDepthEstimation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForDepthEstimation.from_pretrained("Intel/dpt-large")

>>> # Update configuration during loading
>>> model = AutoModelForDepthEstimation.from_pretrained("Intel/dpt-large", output_attentions=True)
>>> model.config.output_attentions
True
```
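
Beyond the loading call, a typical inference pass runs an image through the matching image processor and interpolates the predicted depth map back to the input resolution. This is a minimal sketch assuming the `Intel/dpt-large` checkpoint and a sample COCO image URL:

```python
import requests
import torch
from PIL import Image

from transformers import AutoImageProcessor, AutoModelForDepthEstimation

processor = AutoImageProcessor.from_pretrained("Intel/dpt-large")
model = AutoModelForDepthEstimation.from_pretrained("Intel/dpt-large")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed sample image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth  # (batch, height, width)

# Resize the prediction back to the original image size.
depth = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],  # PIL size is (width, height)
    mode="bicubic",
    align_corners=False,
).squeeze()
```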

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from the saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In that case, though, check whether using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id. Since we use a git-based system for storing models and other artifacts on huggingface.co, `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id. Since we use a git-based system for storing models and other artifacts on huggingface.co, `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be passed directly to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForImageClassification[[transformers.AutoModelForImageClassification]]

#### transformers.AutoModelForImageClassification[[transformers.AutoModelForImageClassification]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2090)

This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForImageClassification.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `BeitConfig` configuration class: `BeitForImageClassification` (BEiT model)
  - `BitConfig` configuration class: `BitForImageClassification` (BiT model)
  - [CLIPConfig](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPConfig) configuration class: [CLIPForImageClassification](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPForImageClassification) (CLIP model)
  - `ConvNextConfig` configuration class: `ConvNextForImageClassification` (ConvNeXT model)
  - `ConvNextV2Config` configuration class: `ConvNextV2ForImageClassification` (ConvNeXTV2 model)
  - `CvtConfig` configuration class: `CvtForImageClassification` (CvT model)
  - `Data2VecVisionConfig` configuration class: `Data2VecVisionForImageClassification` (Data2VecVision model)
  - `DeiTConfig` configuration class: `DeiTForImageClassification` or `DeiTForImageClassificationWithTeacher` (DeiT model)
  - `DinatConfig` configuration class: `DinatForImageClassification` (DiNAT model)
  - `Dinov2Config` configuration class: `Dinov2ForImageClassification` (DINOv2 model)
  - `Dinov2WithRegistersConfig` configuration class: `Dinov2WithRegistersForImageClassification` (DINOv2 with Registers model)
  - `DonutSwinConfig` configuration class: `DonutSwinForImageClassification` (DonutSwin model)
  - `EfficientNetConfig` configuration class: `EfficientNetForImageClassification` (EfficientNet model)
  - `FocalNetConfig` configuration class: `FocalNetForImageClassification` (FocalNet model)
  - `HGNetV2Config` configuration class: `HGNetV2ForImageClassification` (HGNet-V2 model)
  - `HieraConfig` configuration class: `HieraForImageClassification` (Hiera model)
  - `IJepaConfig` configuration class: `IJepaForImageClassification` (I-JEPA model)
  - `ImageGPTConfig` configuration class: `ImageGPTForImageClassification` (ImageGPT model)
  - `LevitConfig` configuration class: `LevitForImageClassification` or `LevitForImageClassificationWithTeacher` (LeViT model)
  - `MetaClip2Config` configuration class: `MetaClip2ForImageClassification` (MetaCLIP 2 model)
  - `MobileNetV1Config` configuration class: `MobileNetV1ForImageClassification` (MobileNetV1 model)
  - `MobileNetV2Config` configuration class: `MobileNetV2ForImageClassification` (MobileNetV2 model)
  - `MobileViTConfig` configuration class: `MobileViTForImageClassification` (MobileViT model)
  - `MobileViTV2Config` configuration class: `MobileViTV2ForImageClassification` (MobileViTV2 model)
  - `PPLCNetConfig` configuration class: `PPLCNetForImageClassification` (PPLCNet model)
  - `PerceiverConfig` configuration class: `PerceiverForImageClassificationLearned` or `PerceiverForImageClassificationFourier` or `PerceiverForImageClassificationConvProcessing` (Perceiver model)
  - `PoolFormerConfig` configuration class: `PoolFormerForImageClassification` (PoolFormer model)
  - `PvtConfig` configuration class: `PvtForImageClassification` (PVT model)
  - `PvtV2Config` configuration class: `PvtV2ForImageClassification` (PVTv2 model)
  - `RegNetConfig` configuration class: `RegNetForImageClassification` (RegNet model)
  - `ResNetConfig` configuration class: `ResNetForImageClassification` (ResNet model)
  - `SegformerConfig` configuration class: `SegformerForImageClassification` (SegFormer model)
  - `ShieldGemma2Config` configuration class: `ShieldGemma2ForImageClassification` (Shieldgemma2 model)
  - `Siglip2Config` configuration class: `Siglip2ForImageClassification` (SigLIP2 model)
  - [SiglipConfig](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipConfig) configuration class: [SiglipForImageClassification](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipForImageClassification) (SigLIP model)
  - `SwiftFormerConfig` configuration class: `SwiftFormerForImageClassification` (SwiftFormer model)
  - [SwinConfig](/docs/transformers/v5.5.1/ko/model_doc/swin#transformers.SwinConfig) configuration class: [SwinForImageClassification](/docs/transformers/v5.5.1/ko/model_doc/swin#transformers.SwinForImageClassification) (Swin Transformer model)
  - [Swinv2Config](/docs/transformers/v5.5.1/ko/model_doc/swinv2#transformers.Swinv2Config) configuration class: [Swinv2ForImageClassification](/docs/transformers/v5.5.1/ko/model_doc/swinv2#transformers.Swinv2ForImageClassification) (Swin Transformer V2 model)
  - `TextNetConfig` configuration class: `TextNetForImageClassification` (TextNet model)
  - `TimmWrapperConfig` configuration class: `TimmWrapperForImageClassification` (TimmWrapperModel model)
  - [ViTConfig](/docs/transformers/v5.5.1/ko/model_doc/vit#transformers.ViTConfig) configuration class: [ViTForImageClassification](/docs/transformers/v5.5.1/ko/model_doc/vit#transformers.ViTForImageClassification) (ViT model)
  - `ViTMSNConfig` configuration class: `ViTMSNForImageClassification` (ViTMSN model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, SDPA is used if available (torch>=2.1.1); otherwise the manual `"eager"` implementation is used.

Instantiates one of the model classes of the library (with an image classification head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForImageClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google/vit-base-patch16-224")
>>> model = AutoModelForImageClassification.from_config(config)
```
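
Since `from_config` only builds the architecture (with randomly initialized weights), a common pattern is to tweak the configuration before instantiating, for example to change the size of the classification head. A minimal sketch, assuming the `google/vit-base-patch16-224` checkpoint as the base configuration:

```python
from transformers import AutoConfig, AutoModelForImageClassification

# Override the label count before building the model.
config = AutoConfig.from_pretrained("google/vit-base-patch16-224", num_labels=10)
model = AutoModelForImageClassification.from_config(config)  # weights are randomly initialized

print(model.config.num_labels)  # 10
print(model.num_parameters())  # parameter count of the fresh model
```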

#### from_pretrained[[transformers.AutoModelForImageClassification.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **beit** -- `BeitForImageClassification` (BEiT model)
- **bit** -- `BitForImageClassification` (BiT model)
- **clip** -- [CLIPForImageClassification](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPForImageClassification) (CLIP model)
- **convnext** -- `ConvNextForImageClassification` (ConvNeXT model)
- **convnextv2** -- `ConvNextV2ForImageClassification` (ConvNeXTV2 model)
- **cvt** -- `CvtForImageClassification` (CvT model)
- **data2vec-vision** -- `Data2VecVisionForImageClassification` (Data2VecVision model)
- **deit** -- `DeiTForImageClassification` or `DeiTForImageClassificationWithTeacher` (DeiT model)
- **dinat** -- `DinatForImageClassification` (DiNAT model)
- **dinov2** -- `Dinov2ForImageClassification` (DINOv2 model)
- **dinov2_with_registers** -- `Dinov2WithRegistersForImageClassification` (DINOv2 with Registers model)
- **donut-swin** -- `DonutSwinForImageClassification` (DonutSwin model)
- **efficientnet** -- `EfficientNetForImageClassification` (EfficientNet model)
- **focalnet** -- `FocalNetForImageClassification` (FocalNet model)
- **hgnet_v2** -- `HGNetV2ForImageClassification` (HGNet-V2 model)
- **hiera** -- `HieraForImageClassification` (Hiera model)
- **ijepa** -- `IJepaForImageClassification` (I-JEPA model)
- **imagegpt** -- `ImageGPTForImageClassification` (ImageGPT model)
- **levit** -- `LevitForImageClassification` or `LevitForImageClassificationWithTeacher` (LeViT model)
- **metaclip_2** -- `MetaClip2ForImageClassification` (MetaCLIP 2 model)
- **mobilenet_v1** -- `MobileNetV1ForImageClassification` (MobileNetV1 model)
- **mobilenet_v2** -- `MobileNetV2ForImageClassification` (MobileNetV2 model)
- **mobilevit** -- `MobileViTForImageClassification` (MobileViT model)
- **mobilevitv2** -- `MobileViTV2ForImageClassification` (MobileViTV2 model)
- **perceiver** -- `PerceiverForImageClassificationLearned` or `PerceiverForImageClassificationFourier` or `PerceiverForImageClassificationConvProcessing` (Perceiver model)
- **poolformer** -- `PoolFormerForImageClassification` (PoolFormer model)
- **pp_lcnet** -- `PPLCNetForImageClassification` (PPLCNet model)
- **pvt** -- `PvtForImageClassification` (PVT model)
- **pvt_v2** -- `PvtV2ForImageClassification` (PVTv2 model)
- **regnet** -- `RegNetForImageClassification` (RegNet model)
- **resnet** -- `ResNetForImageClassification` (ResNet model)
- **segformer** -- `SegformerForImageClassification` (SegFormer model)
- **shieldgemma2** -- `ShieldGemma2ForImageClassification` (Shieldgemma2 model)
- **siglip** -- [SiglipForImageClassification](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipForImageClassification) (SigLIP model)
- **siglip2** -- `Siglip2ForImageClassification` (SigLIP2 model)
- **swiftformer** -- `SwiftFormerForImageClassification` (SwiftFormer model)
- **swin** -- [SwinForImageClassification](/docs/transformers/v5.5.1/ko/model_doc/swin#transformers.SwinForImageClassification) (Swin Transformer model)
- **swinv2** -- [Swinv2ForImageClassification](/docs/transformers/v5.5.1/ko/model_doc/swinv2#transformers.Swinv2ForImageClassification) (Swin Transformer V2 model)
- **textnet** -- `TextNetForImageClassification` (TextNet model)
- **timm_wrapper** -- `TimmWrapperForImageClassification` (TimmWrapperModel model)
- **vit** -- [ViTForImageClassification](/docs/transformers/v5.5.1/ko/model_doc/vit#transformers.ViTForImageClassification) (ViT model)
- **vit_msn** -- `ViTMSNForImageClassification` (ViTMSN model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForImageClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")

>>> # Update configuration during loading
>>> model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224", output_attentions=True)
>>> model.config.output_attentions
True
```
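
Once loaded, the model can be used for inference in the usual way. Below is a minimal sketch, assuming the `google/vit-base-patch16-224` checkpoint and a sample COCO image URL (both are illustrative, not part of the original docstring):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed example image
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# index of the highest-scoring class, mapped back to its label string
print(model.config.id2label[logits.argmax(-1).item()])
```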

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
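
To make the two code paths concrete, here is a short sketch (the checkpoint name is an assumed example):

```python
from transformers import AutoConfig, AutoModelForImageClassification

# No explicit config: kwargs first update the automatically loaded configuration.
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224", output_attentions=True  # assumed example checkpoint
)
assert model.config.output_attentions

# Explicit config: apply configuration updates yourself before loading; any
# remaining kwargs passed alongside `config` go straight to the model's __init__.
config = AutoConfig.from_pretrained("google/vit-base-patch16-224", output_attentions=True)
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224", config=config
)
assert model.config.output_attentions
```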

### AutoModelForVideoClassification[[transformers.AutoModelForVideoClassification]]

#### transformers.AutoModelForVideoClassification[[transformers.AutoModelForVideoClassification]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2186)

This is a generic model class that will be instantiated as one of the model classes of the library (with a video classification head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForVideoClassification.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - [TimesformerConfig](/docs/transformers/v5.5.1/ko/model_doc/timesformer#transformers.TimesformerConfig) configuration class: [TimesformerForVideoClassification](/docs/transformers/v5.5.1/ko/model_doc/timesformer#transformers.TimesformerForVideoClassification) (TimeSformer model)
  - `VJEPA2Config` configuration class: `VJEPA2ForVideoClassification` (VJEPA2Model model)
  - `VideoMAEConfig` configuration class: `VideoMAEForVideoClassification` (VideoMAE model)
  - [VivitConfig](/docs/transformers/v5.5.1/ko/model_doc/vivit#transformers.VivitConfig) configuration class: [VivitForVideoClassification](/docs/transformers/v5.5.1/ko/model_doc/vivit#transformers.VivitForVideoClassification) (ViViT model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a video classification head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForVideoClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
>>> model = AutoModelForVideoClassification.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - [TimesformerConfig](/docs/transformers/v5.5.1/ko/model_doc/timesformer#transformers.TimesformerConfig) configuration class: [TimesformerForVideoClassification](/docs/transformers/v5.5.1/ko/model_doc/timesformer#transformers.TimesformerForVideoClassification) (TimeSformer model) - `VJEPA2Config` configuration class: `VJEPA2ForVideoClassification` (VJEPA2Model model) - `VideoMAEConfig` configuration class: `VideoMAEForVideoClassification` (VideoMAE model) - [VivitConfig](/docs/transformers/v5.5.1/ko/model_doc/vivit#transformers.VivitConfig) configuration class: [VivitForVideoClassification](/docs/transformers/v5.5.1/ko/model_doc/vivit#transformers.VivitForVideoClassification) (ViViT model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
#### from_pretrained[[transformers.AutoModelForVideoClassification.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a video classification head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **timesformer** -- [TimesformerForVideoClassification](/docs/transformers/v5.5.1/ko/model_doc/timesformer#transformers.TimesformerForVideoClassification) (TimeSformer model)
- **videomae** -- `VideoMAEForVideoClassification` (VideoMAE model)
- **vivit** -- [VivitForVideoClassification](/docs/transformers/v5.5.1/ko/model_doc/vivit#transformers.VivitForVideoClassification) (ViViT model)
- **vjepa2** -- `VJEPA2ForVideoClassification` (VJEPA2Model model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForVideoClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")

>>> # Update configuration during loading
>>> model = AutoModelForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics", output_attentions=True)
>>> model.config.output_attentions
True
```
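
A minimal forward-pass sketch follows, with random pixel values standing in for a real preprocessed clip. The `MCG-NJU/videomae-base-finetuned-kinetics` checkpoint and its 16-frame, 224x224 input shape are assumptions for illustration, not part of the original docstring:

```python
import torch
from transformers import AutoModelForVideoClassification

model = AutoModelForVideoClassification.from_pretrained(
    "MCG-NJU/videomae-base-finetuned-kinetics"  # assumed example checkpoint
)

# (batch, num_frames, channels, height, width); random values stand in for a
# clip preprocessed to the checkpoint's expected 16 frames of 224x224 RGB
pixel_values = torch.randn(1, 16, 3, 224, 224)
with torch.no_grad():
    logits = model(pixel_values=pixel_values).logits
print(model.config.id2label[logits.argmax(-1).item()])
```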

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForKeypointDetection[[transformers.AutoModelForKeypointDetection]]

#### transformers.AutoModelForKeypointDetection[[transformers.AutoModelForKeypointDetection]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L1953)

### AutoModelForMaskedImageModeling[[transformers.AutoModelForMaskedImageModeling]]

#### transformers.AutoModelForMaskedImageModeling[[transformers.AutoModelForMaskedImageModeling]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2268)

This is a generic model class that will be instantiated as one of the model classes of the library (with a masked image modeling head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForMaskedImageModeling.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `DeiTConfig` configuration class: `DeiTForMaskedImageModeling` (DeiT model)
  - `FocalNetConfig` configuration class: `FocalNetForMaskedImageModeling` (FocalNet model)
  - [SwinConfig](/docs/transformers/v5.5.1/ko/model_doc/swin#transformers.SwinConfig) configuration class: [SwinForMaskedImageModeling](/docs/transformers/v5.5.1/ko/model_doc/swin#transformers.SwinForMaskedImageModeling) (Swin Transformer model)
  - [Swinv2Config](/docs/transformers/v5.5.1/ko/model_doc/swinv2#transformers.Swinv2Config) configuration class: [Swinv2ForMaskedImageModeling](/docs/transformers/v5.5.1/ko/model_doc/swinv2#transformers.Swinv2ForMaskedImageModeling) (Swin Transformer V2 model)
  - [ViTConfig](/docs/transformers/v5.5.1/ko/model_doc/vit#transformers.ViTConfig) configuration class: [ViTForMaskedImageModeling](/docs/transformers/v5.5.1/ko/model_doc/vit#transformers.ViTForMaskedImageModeling) (ViT model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a masked image modeling head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForMaskedImageModeling

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google/vit-base-patch16-224-in21k")
>>> model = AutoModelForMaskedImageModeling.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `DeiTConfig` configuration class: `DeiTForMaskedImageModeling` (DeiT model) - `FocalNetConfig` configuration class: `FocalNetForMaskedImageModeling` (FocalNet model) - [SwinConfig](/docs/transformers/v5.5.1/ko/model_doc/swin#transformers.SwinConfig) configuration class: [SwinForMaskedImageModeling](/docs/transformers/v5.5.1/ko/model_doc/swin#transformers.SwinForMaskedImageModeling) (Swin Transformer model) - [Swinv2Config](/docs/transformers/v5.5.1/ko/model_doc/swinv2#transformers.Swinv2Config) configuration class: [Swinv2ForMaskedImageModeling](/docs/transformers/v5.5.1/ko/model_doc/swinv2#transformers.Swinv2ForMaskedImageModeling) (Swin Transformer V2 model) - [ViTConfig](/docs/transformers/v5.5.1/ko/model_doc/vit#transformers.ViTConfig) configuration class: [ViTForMaskedImageModeling](/docs/transformers/v5.5.1/ko/model_doc/vit#transformers.ViTForMaskedImageModeling) (ViT model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
#### from_pretrained[[transformers.AutoModelForMaskedImageModeling.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a masked image modeling head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **deit** -- `DeiTForMaskedImageModeling` (DeiT model)
- **focalnet** -- `FocalNetForMaskedImageModeling` (FocalNet model)
- **swin** -- [SwinForMaskedImageModeling](/docs/transformers/v5.5.1/ko/model_doc/swin#transformers.SwinForMaskedImageModeling) (Swin Transformer model)
- **swinv2** -- [Swinv2ForMaskedImageModeling](/docs/transformers/v5.5.1/ko/model_doc/swinv2#transformers.Swinv2ForMaskedImageModeling) (Swin Transformer V2 model)
- **vit** -- [ViTForMaskedImageModeling](/docs/transformers/v5.5.1/ko/model_doc/vit#transformers.ViTForMaskedImageModeling) (ViT model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForMaskedImageModeling

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMaskedImageModeling.from_pretrained("google/vit-base-patch16-224-in21k")

>>> # Update configuration during loading
>>> model = AutoModelForMaskedImageModeling.from_pretrained("google/vit-base-patch16-224-in21k", output_attentions=True)
>>> model.config.output_attentions
True
```
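
A minimal masked-image-modeling sketch follows, assuming the `google/vit-base-patch16-224-in21k` checkpoint, a sample image URL, and that the output exposes a `reconstruction` field (all illustrative assumptions):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForMaskedImageModeling

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed example image
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = AutoModelForMaskedImageModeling.from_pretrained("google/vit-base-patch16-224-in21k")

pixel_values = processor(images=image, return_tensors="pt").pixel_values

# mask a random subset of patches and let the model reconstruct their pixels
num_patches = (model.config.image_size // model.config.patch_size) ** 2
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.reconstruction.shape)  # reconstructed pixel values
```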

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForObjectDetection[[transformers.AutoModelForObjectDetection]]

#### transformers.AutoModelForObjectDetection[[transformers.AutoModelForObjectDetection]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2149)

This is a generic model class that will be instantiated as one of the model classes of the library (with an object detection head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForObjectDetection.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `ConditionalDetrConfig` configuration class: `ConditionalDetrForObjectDetection` (Conditional DETR model)
  - `DFineConfig` configuration class: `DFineForObjectDetection` (D-FINE model)
  - `DabDetrConfig` configuration class: `DabDetrForObjectDetection` (DAB-DETR model)
  - `DeformableDetrConfig` configuration class: `DeformableDetrForObjectDetection` (Deformable DETR model)
  - `DetrConfig` configuration class: `DetrForObjectDetection` (DETR model)
  - `LwDetrConfig` configuration class: `LwDetrForObjectDetection` (LwDetr model)
  - `PPDocLayoutV2Config` configuration class: `PPDocLayoutV2ForObjectDetection` (PPDocLayoutV2 model)
  - `PPDocLayoutV3Config` configuration class: `PPDocLayoutV3ForObjectDetection` (PPDocLayoutV3 model)
  - `PPOCRV5MobileDetConfig` configuration class: `PPOCRV5MobileDetForObjectDetection` (PPOCRV5MobileDet model)
  - `PPOCRV5ServerDetConfig` configuration class: `PPOCRV5ServerDetForObjectDetection` (PPOCRV5ServerDet model)
  - `RTDetrConfig` configuration class: `RTDetrForObjectDetection` (RT-DETR model)
  - `RTDetrV2Config` configuration class: `RTDetrV2ForObjectDetection` (RT-DETRv2 model)
  - `TableTransformerConfig` configuration class: `TableTransformerForObjectDetection` (Table Transformer model)
  - `YolosConfig` configuration class: `YolosForObjectDetection` (YOLOS model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with an object detection head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForObjectDetection

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("facebook/detr-resnet-50")
>>> model = AutoModelForObjectDetection.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `ConditionalDetrConfig` configuration class: `ConditionalDetrForObjectDetection` (Conditional DETR model) - `DFineConfig` configuration class: `DFineForObjectDetection` (D-FINE model) - `DabDetrConfig` configuration class: `DabDetrForObjectDetection` (DAB-DETR model) - `DeformableDetrConfig` configuration class: `DeformableDetrForObjectDetection` (Deformable DETR model) - `DetrConfig` configuration class: `DetrForObjectDetection` (DETR model) - `LwDetrConfig` configuration class: `LwDetrForObjectDetection` (LwDetr model) - `PPDocLayoutV2Config` configuration class: `PPDocLayoutV2ForObjectDetection` (PPDocLayoutV2 model) - `PPDocLayoutV3Config` configuration class: `PPDocLayoutV3ForObjectDetection` (PPDocLayoutV3 model) - `PPOCRV5MobileDetConfig` configuration class: `PPOCRV5MobileDetForObjectDetection` (PPOCRV5MobileDet model) - `PPOCRV5ServerDetConfig` configuration class: `PPOCRV5ServerDetForObjectDetection` (PPOCRV5ServerDet model) - `RTDetrConfig` configuration class: `RTDetrForObjectDetection` (RT-DETR model) - `RTDetrV2Config` configuration class: `RTDetrV2ForObjectDetection` (RT-DETRv2 model) - `TableTransformerConfig` configuration class: `TableTransformerForObjectDetection` (Table Transformer model) - `YolosConfig` configuration class: `YolosForObjectDetection` (YOLOS model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
#### from_pretrained[[transformers.AutoModelForObjectDetection.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with an object detection head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **conditional_detr** -- `ConditionalDetrForObjectDetection` (Conditional DETR model)
- **d_fine** -- `DFineForObjectDetection` (D-FINE model)
- **dab-detr** -- `DabDetrForObjectDetection` (DAB-DETR model)
- **deformable_detr** -- `DeformableDetrForObjectDetection` (Deformable DETR model)
- **detr** -- `DetrForObjectDetection` (DETR model)
- **lw_detr** -- `LwDetrForObjectDetection` (LwDetr model)
- **pp_doclayout_v2** -- `PPDocLayoutV2ForObjectDetection` (PPDocLayoutV2 model)
- **pp_doclayout_v3** -- `PPDocLayoutV3ForObjectDetection` (PPDocLayoutV3 model)
- **pp_ocrv5_mobile_det** -- `PPOCRV5MobileDetForObjectDetection` (PPOCRV5MobileDet model)
- **pp_ocrv5_server_det** -- `PPOCRV5ServerDetForObjectDetection` (PPOCRV5ServerDet model)
- **rt_detr** -- `RTDetrForObjectDetection` (RT-DETR model)
- **rt_detr_v2** -- `RTDetrV2ForObjectDetection` (RT-DETRv2 model)
- **table-transformer** -- `TableTransformerForObjectDetection` (Table Transformer model)
- **yolos** -- `YolosForObjectDetection` (YOLOS model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForObjectDetection

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")

>>> # Update configuration during loading
>>> model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50", output_attentions=True)
>>> model.config.output_attentions
True
```
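
A minimal detection sketch follows, assuming the `facebook/detr-resnet-50` checkpoint (loaded from its `no_timm` branch to avoid the timm dependency) and a sample image URL; both are illustrative, not part of the original docstring:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed example image
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50", revision="no_timm")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert raw logits and boxes into labeled detections in image coordinates
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```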

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForImageSegmentation[[transformers.AutoModelForImageSegmentation]]

#### transformers.AutoModelForImageSegmentation[[transformers.AutoModelForImageSegmentation]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2106)

This is a generic model class that will be instantiated as one of the model classes of the library (with an image segmentation head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForImageSegmentation.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `DetrConfig` configuration class: `DetrForSegmentation` (DETR model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with an image segmentation head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForImageSegmentation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("facebook/detr-resnet-50-panoptic")
>>> model = AutoModelForImageSegmentation.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `DetrConfig` configuration class: `DetrForSegmentation` (DETR model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
#### from_pretrained[[transformers.AutoModelForImageSegmentation.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with an image segmentation head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **detr** -- `DetrForSegmentation` (DETR model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForImageSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

>>> # Update configuration during loading
>>> model = AutoModelForImageSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic", output_attentions=True)
>>> model.config.output_attentions
True
```
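
A minimal segmentation sketch follows, assuming the `facebook/detr-resnet-50-panoptic` checkpoint, a sample image URL, and the processor's panoptic post-processing helper (all illustrative assumptions):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageSegmentation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed example image
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = AutoModelForImageSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# merge the predicted masks into a single panoptic segmentation map
result = processor.post_process_panoptic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(result["segmentation"].shape, len(result["segments_info"]))
```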

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForImageToImage[[transformers.AutoModelForImageToImage]]

#### transformers.AutoModelForImageToImage[[transformers.AutoModelForImageToImage]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L1965)

### AutoModelForSemanticSegmentation[[transformers.AutoModelForSemanticSegmentation]]

#### transformers.AutoModelForSemanticSegmentation[[transformers.AutoModelForSemanticSegmentation]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2113)

This is a generic model class that will be instantiated as one of the model classes of the library (with a semantic segmentation head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForSemanticSegmentation.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `BeitConfig` configuration class: `BeitForSemanticSegmentation` (BEiT model)
  - `DPTConfig` configuration class: `DPTForSemanticSegmentation` (DPT model)
  - `Data2VecVisionConfig` configuration class: `Data2VecVisionForSemanticSegmentation` (Data2VecVision model)
  - `MobileNetV2Config` configuration class: `MobileNetV2ForSemanticSegmentation` (MobileNetV2 model)
  - `MobileViTConfig` configuration class: `MobileViTForSemanticSegmentation` (MobileViT model)
  - `MobileViTV2Config` configuration class: `MobileViTV2ForSemanticSegmentation` (MobileViTV2 model)
  - `SegformerConfig` configuration class: `SegformerForSemanticSegmentation` (SegFormer model)
  - `UperNetConfig` configuration class: `UperNetForSemanticSegmentation` (UPerNet model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a semantic segmentation head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForSemanticSegmentation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
>>> model = AutoModelForSemanticSegmentation.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `BeitConfig` configuration class: `BeitForSemanticSegmentation` (BEiT model) - `DPTConfig` configuration class: `DPTForSemanticSegmentation` (DPT model) - `Data2VecVisionConfig` configuration class: `Data2VecVisionForSemanticSegmentation` (Data2VecVision model) - `MobileNetV2Config` configuration class: `MobileNetV2ForSemanticSegmentation` (MobileNetV2 model) - `MobileViTConfig` configuration class: `MobileViTForSemanticSegmentation` (MobileViT model) - `MobileViTV2Config` configuration class: `MobileViTV2ForSemanticSegmentation` (MobileViTV2 model) - `SegformerConfig` configuration class: `SegformerForSemanticSegmentation` (SegFormer model) - `UperNetConfig` configuration class: `UperNetForSemanticSegmentation` (UPerNet model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

#### from_pretrained[[transformers.AutoModelForSemanticSegmentation.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a semantic segmentation head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **beit** -- `BeitForSemanticSegmentation` (BEiT model)
- **data2vec-vision** -- `Data2VecVisionForSemanticSegmentation` (Data2VecVision model)
- **dpt** -- `DPTForSemanticSegmentation` (DPT model)
- **mobilenet_v2** -- `MobileNetV2ForSemanticSegmentation` (MobileNetV2 model)
- **mobilevit** -- `MobileViTForSemanticSegmentation` (MobileViT model)
- **mobilevitv2** -- `MobileViTV2ForSemanticSegmentation` (MobileViTV2 model)
- **segformer** -- `SegformerForSemanticSegmentation` (SegFormer model)
- **upernet** -- `UperNetForSemanticSegmentation` (UPerNet model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForSemanticSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

>>> # Update configuration during loading
>>> model = AutoModelForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512", output_attentions=True)
>>> model.config.output_attentions
True
```
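
In practice, the loaded model is paired with its image processor for preprocessing. The following is a minimal inference sketch; it assumes the publicly available `nvidia/segformer-b0-finetuned-ade-512-512` checkpoint and uses a blank dummy image purely for illustration:

```python
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForSemanticSegmentation

>>> checkpoint = "nvidia/segformer-b0-finetuned-ade-512-512"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint)

>>> image = Image.new("RGB", (512, 512))  # dummy input for illustration
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # (batch, num_labels, height / 4, width / 4)
```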

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (i.e., not try to download the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForInstanceSegmentation[[transformers.AutoModelForInstanceSegmentation]]

#### transformers.AutoModelForInstanceSegmentation[[transformers.AutoModelForInstanceSegmentation]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2140)

This is a generic model class that will be instantiated as one of the model classes of the library (with an instance segmentation head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForInstanceSegmentation.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `MaskFormerConfig` configuration class: `MaskFormerForInstanceSegmentation` (MaskFormer model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with an instance segmentation head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForInstanceSegmentation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("facebook/maskformer-swin-base-coco")
>>> model = AutoModelForInstanceSegmentation.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `MaskFormerConfig` configuration class: `MaskFormerForInstanceSegmentation` (MaskFormer model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

#### from_pretrained[[transformers.AutoModelForInstanceSegmentation.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with an instance segmentation head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **maskformer** -- `MaskFormerForInstanceSegmentation` (MaskFormer model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForInstanceSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco")

>>> # Update configuration during loading
>>> model = AutoModelForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco", output_attentions=True)
>>> model.config.output_attentions
True
```
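
As with the other vision auto classes, post-processing lives on the image processor. A minimal sketch, assuming the publicly available `facebook/maskformer-swin-base-coco` checkpoint and a dummy image:

```python
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForInstanceSegmentation

>>> checkpoint = "facebook/maskformer-swin-base-coco"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForInstanceSegmentation.from_pretrained(checkpoint)

>>> image = Image.new("RGB", (640, 480))  # dummy input for illustration
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Convert the raw mask/class queries into a per-pixel instance map.
>>> result = image_processor.post_process_instance_segmentation(
...     outputs, target_sizes=[image.size[::-1]]  # (height, width)
... )[0]
>>> segmentation, segments = result["segmentation"], result["segments_info"]
```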

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (i.e., not try to download the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForUniversalSegmentation[[transformers.AutoModelForUniversalSegmentation]]

#### transformers.AutoModelForUniversalSegmentation[[transformers.AutoModelForUniversalSegmentation]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2131)

This is a generic model class that will be instantiated as one of the model classes of the library (with a universal image segmentation head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForUniversalSegmentation.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `DetrConfig` configuration class: `DetrForSegmentation` (DETR model)
  - `EomtConfig` configuration class: `EomtForUniversalSegmentation` (EoMT model)
  - `EomtDinov3Config` configuration class: `EomtDinov3ForUniversalSegmentation` (EoMT-DINOv3 model)
  - `Mask2FormerConfig` configuration class: `Mask2FormerForUniversalSegmentation` (Mask2Former model)
  - `MaskFormerConfig` configuration class: `MaskFormerForInstanceSegmentation` (MaskFormer model)
  - `OneFormerConfig` configuration class: `OneFormerForUniversalSegmentation` (OneFormer model)
  - `VideomtConfig` configuration class: `VideomtForUniversalSegmentation` (VidEoMT model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a universal image segmentation head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForUniversalSegmentation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("facebook/mask2former-swin-small-coco-instance")
>>> model = AutoModelForUniversalSegmentation.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `DetrConfig` configuration class: `DetrForSegmentation` (DETR model) - `EomtConfig` configuration class: `EomtForUniversalSegmentation` (EoMT model) - `EomtDinov3Config` configuration class: `EomtDinov3ForUniversalSegmentation` (EoMT-DINOv3 model) - `Mask2FormerConfig` configuration class: `Mask2FormerForUniversalSegmentation` (Mask2Former model) - `MaskFormerConfig` configuration class: `MaskFormerForInstanceSegmentation` (MaskFormer model) - `OneFormerConfig` configuration class: `OneFormerForUniversalSegmentation` (OneFormer model) - `VideomtConfig` configuration class: `VideomtForUniversalSegmentation` (VidEoMT model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

#### from_pretrained[[transformers.AutoModelForUniversalSegmentation.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a universal image segmentation head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **detr** -- `DetrForSegmentation` (DETR model)
- **eomt** -- `EomtForUniversalSegmentation` (EoMT model)
- **eomt_dinov3** -- `EomtDinov3ForUniversalSegmentation` (EoMT-DINOv3 model)
- **mask2former** -- `Mask2FormerForUniversalSegmentation` (Mask2Former model)
- **maskformer** -- `MaskFormerForInstanceSegmentation` (MaskFormer model)
- **oneformer** -- `OneFormerForUniversalSegmentation` (OneFormer model)
- **videomt** -- `VideomtForUniversalSegmentation` (VidEoMT model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForUniversalSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-coco-instance")

>>> # Update configuration during loading
>>> model = AutoModelForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-coco-instance", output_attentions=True)
>>> model.config.output_attentions
True
```
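
The same pattern works for universal segmentation; which post-processing method to call (`post_process_semantic_segmentation`, `post_process_instance_segmentation`, or `post_process_panoptic_segmentation`) depends on the task the checkpoint was trained for. A minimal sketch, assuming the publicly available `facebook/mask2former-swin-small-coco-instance` checkpoint and a dummy image:

```python
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForUniversalSegmentation

>>> checkpoint = "facebook/mask2former-swin-small-coco-instance"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForUniversalSegmentation.from_pretrained(checkpoint)

>>> image = Image.new("RGB", (640, 480))  # dummy input for illustration
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> result = image_processor.post_process_instance_segmentation(
...     outputs, target_sizes=[image.size[::-1]]  # (height, width)
... )[0]
```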

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (i.e., not try to download the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForZeroShotImageClassification[[transformers.AutoModelForZeroShotImageClassification]]

#### transformers.AutoModelForZeroShotImageClassification[[transformers.AutoModelForZeroShotImageClassification]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2097)

This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot image classification head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForZeroShotImageClassification.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `AlignConfig` configuration class: `AlignModel` (ALIGN model)
  - [AltCLIPConfig](/docs/transformers/v5.5.1/ko/model_doc/altclip#transformers.AltCLIPConfig) configuration class: [AltCLIPModel](/docs/transformers/v5.5.1/ko/model_doc/altclip#transformers.AltCLIPModel) (AltCLIP model)
  - [Blip2Config](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2Config) configuration class: [Blip2ForImageTextRetrieval](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2ForImageTextRetrieval) (BLIP-2 model)
  - [BlipConfig](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipConfig) configuration class: [BlipModel](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipModel) (BLIP model)
  - [CLIPConfig](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPConfig) configuration class: [CLIPModel](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPModel) (CLIP model)
  - [CLIPSegConfig](/docs/transformers/v5.5.1/ko/model_doc/clipseg#transformers.CLIPSegConfig) configuration class: [CLIPSegModel](/docs/transformers/v5.5.1/ko/model_doc/clipseg#transformers.CLIPSegModel) (CLIPSeg model)
  - `ChineseCLIPConfig` configuration class: `ChineseCLIPModel` (Chinese-CLIP model)
  - `MetaClip2Config` configuration class: `MetaClip2Model` (MetaCLIP 2 model)
  - `Siglip2Config` configuration class: `Siglip2Model` (SigLIP2 model)
  - [SiglipConfig](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipConfig) configuration class: [SiglipModel](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipModel) (SigLIP model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a zero-shot image classification head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForZeroShotImageClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("openai/clip-vit-base-patch32")
>>> model = AutoModelForZeroShotImageClassification.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `AlignConfig` configuration class: `AlignModel` (ALIGN model) - [AltCLIPConfig](/docs/transformers/v5.5.1/ko/model_doc/altclip#transformers.AltCLIPConfig) configuration class: [AltCLIPModel](/docs/transformers/v5.5.1/ko/model_doc/altclip#transformers.AltCLIPModel) (AltCLIP model) - [Blip2Config](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2Config) configuration class: [Blip2ForImageTextRetrieval](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2ForImageTextRetrieval) (BLIP-2 model) - [BlipConfig](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipConfig) configuration class: [BlipModel](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipModel) (BLIP model) - [CLIPConfig](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPConfig) configuration class: [CLIPModel](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPModel) (CLIP model) - [CLIPSegConfig](/docs/transformers/v5.5.1/ko/model_doc/clipseg#transformers.CLIPSegConfig) configuration class: [CLIPSegModel](/docs/transformers/v5.5.1/ko/model_doc/clipseg#transformers.CLIPSegModel) (CLIPSeg model) - `ChineseCLIPConfig` configuration class: `ChineseCLIPModel` (Chinese-CLIP model) - `MetaClip2Config` configuration class: `MetaClip2Model` (MetaCLIP 2 model) - `Siglip2Config` configuration class: `Siglip2Model` (SigLIP2 model) - [SiglipConfig](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipConfig) configuration class: [SiglipModel](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipModel) (SigLIP model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

#### from_pretrained[[transformers.AutoModelForZeroShotImageClassification.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a zero-shot image classification head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **align** -- `AlignModel` (ALIGN model)
- **altclip** -- [AltCLIPModel](/docs/transformers/v5.5.1/ko/model_doc/altclip#transformers.AltCLIPModel) (AltCLIP model)
- **blip** -- [BlipModel](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipModel) (BLIP model)
- **blip-2** -- [Blip2ForImageTextRetrieval](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2ForImageTextRetrieval) (BLIP-2 model)
- **chinese_clip** -- `ChineseCLIPModel` (Chinese-CLIP model)
- **clip** -- [CLIPModel](/docs/transformers/v5.5.1/ko/model_doc/clip#transformers.CLIPModel) (CLIP model)
- **clipseg** -- [CLIPSegModel](/docs/transformers/v5.5.1/ko/model_doc/clipseg#transformers.CLIPSegModel) (CLIPSeg model)
- **metaclip_2** -- `MetaClip2Model` (MetaCLIP 2 model)
- **siglip** -- [SiglipModel](/docs/transformers/v5.5.1/ko/model_doc/siglip#transformers.SiglipModel) (SigLIP model)
- **siglip2** -- `Siglip2Model` (SigLIP2 model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForZeroShotImageClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-base-patch32")

>>> # Update configuration during loading
>>> model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-base-patch32", output_attentions=True)
>>> model.config.output_attentions
True
```
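
Zero-shot image classification models score an image against free-form candidate labels. A minimal sketch, assuming the publicly available `openai/clip-vit-base-patch32` checkpoint and a dummy image:

```python
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

>>> checkpoint = "openai/clip-vit-base-patch32"
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint)

>>> image = Image.new("RGB", (224, 224))  # dummy input for illustration
>>> labels = ["a photo of a cat", "a photo of a dog"]
>>> inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     logits = model(**inputs).logits_per_image  # (num_images, num_labels)
>>> probs = logits.softmax(dim=-1)  # probability per candidate label
```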

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (i.e., not try to download the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForZeroShotObjectDetection[[transformers.AutoModelForZeroShotObjectDetection]]

#### transformers.AutoModelForZeroShotObjectDetection[[transformers.AutoModelForZeroShotObjectDetection]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2156)

This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot object detection head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForZeroShotObjectDetection.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - [GroundingDinoConfig](/docs/transformers/v5.5.1/ko/model_doc/grounding-dino#transformers.GroundingDinoConfig) configuration class: [GroundingDinoForObjectDetection](/docs/transformers/v5.5.1/ko/model_doc/grounding-dino#transformers.GroundingDinoForObjectDetection) (Grounding DINO model)
  - `MMGroundingDinoConfig` configuration class: `MMGroundingDinoForObjectDetection` (MM Grounding DINO model)
  - `OmDetTurboConfig` configuration class: `OmDetTurboForObjectDetection` (OmDet-Turbo model)
  - `OwlViTConfig` configuration class: `OwlViTForObjectDetection` (OWL-ViT model)
  - `Owlv2Config` configuration class: `Owlv2ForObjectDetection` (OWLv2 model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a zero-shot object detection head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForZeroShotObjectDetection

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google/owlvit-base-patch32")
>>> model = AutoModelForZeroShotObjectDetection.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - [GroundingDinoConfig](/docs/transformers/v5.5.1/ko/model_doc/grounding-dino#transformers.GroundingDinoConfig) configuration class: [GroundingDinoForObjectDetection](/docs/transformers/v5.5.1/ko/model_doc/grounding-dino#transformers.GroundingDinoForObjectDetection) (Grounding DINO model) - `MMGroundingDinoConfig` configuration class: `MMGroundingDinoForObjectDetection` (MM Grounding DINO model) - `OmDetTurboConfig` configuration class: `OmDetTurboForObjectDetection` (OmDet-Turbo model) - `OwlViTConfig` configuration class: `OwlViTForObjectDetection` (OWL-ViT model) - `Owlv2Config` configuration class: `Owlv2ForObjectDetection` (OWLv2 model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

#### from_pretrained[[transformers.AutoModelForZeroShotObjectDetection.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a zero-shot object detection head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **grounding-dino** -- [GroundingDinoForObjectDetection](/docs/transformers/v5.5.1/ko/model_doc/grounding-dino#transformers.GroundingDinoForObjectDetection) (Grounding DINO model)
- **mm-grounding-dino** -- `MMGroundingDinoForObjectDetection` (MM Grounding DINO model)
- **omdet-turbo** -- `OmDetTurboForObjectDetection` (OmDet-Turbo model)
- **owlv2** -- `Owlv2ForObjectDetection` (OWLv2 model)
- **owlvit** -- `OwlViTForObjectDetection` (OWL-ViT model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForZeroShotObjectDetection

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained("google/owlvit-base-patch32")

>>> # Update configuration during loading
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained("google/owlvit-base-patch32", output_attentions=True)
>>> model.config.output_attentions
True
```
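
Zero-shot object detection works the same way, with text queries describing the objects to locate. A minimal sketch, assuming the publicly available `google/owlvit-base-patch32` checkpoint and a dummy image:

```python
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

>>> checkpoint = "google/owlvit-base-patch32"
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)

>>> image = Image.new("RGB", (768, 768))  # dummy input for illustration
>>> queries = ["a photo of a cat", "a photo of a remote control"]
>>> inputs = processor(text=queries, images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> outputs.logits.shape, outputs.pred_boxes.shape  # per-box query scores and boxes
```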

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (i.e., not try to download the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

## Audio[[audio]]

The following auto classes are available for the audio tasks below.

### AutoModelForAudioClassification[[transformers.AutoModelForAudioClassification]]

#### transformers.AutoModelForAudioClassification[[transformers.AutoModelForAudioClassification]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2217)

This is a generic model class that will be instantiated as one of the model classes of the library (with an audio classification head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForAudioClassification.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `ASTConfig` configuration class: `ASTForAudioClassification` (Audio Spectrogram Transformer model)
  - `Data2VecAudioConfig` configuration class: `Data2VecAudioForSequenceClassification` (Data2VecAudio model)
  - `HubertConfig` configuration class: `HubertForSequenceClassification` (Hubert model)
  - `SEWConfig` configuration class: `SEWForSequenceClassification` (SEW model)
  - `SEWDConfig` configuration class: `SEWDForSequenceClassification` (SEW-D model)
  - `UniSpeechConfig` configuration class: `UniSpeechForSequenceClassification` (UniSpeech model)
  - `UniSpeechSatConfig` configuration class: `UniSpeechSatForSequenceClassification` (UniSpeechSat model)
  - `Wav2Vec2BertConfig` configuration class: `Wav2Vec2BertForSequenceClassification` (Wav2Vec2-BERT model)
  - `Wav2Vec2Config` configuration class: `Wav2Vec2ForSequenceClassification` (Wav2Vec2 model)
  - `Wav2Vec2ConformerConfig` configuration class: `Wav2Vec2ConformerForSequenceClassification` (Wav2Vec2-Conformer model)
  - `WavLMConfig` configuration class: `WavLMForSequenceClassification` (WavLM model)
  - [WhisperConfig](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperConfig) configuration class: [WhisperForAudioClassification](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperForAudioClassification) (Whisper model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with an audio classification head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForAudioClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = AutoModelForAudioClassification.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `ASTConfig` configuration class: `ASTForAudioClassification` (Audio Spectrogram Transformer model) - `Data2VecAudioConfig` configuration class: `Data2VecAudioForSequenceClassification` (Data2VecAudio model) - `HubertConfig` configuration class: `HubertForSequenceClassification` (Hubert model) - `SEWConfig` configuration class: `SEWForSequenceClassification` (SEW model) - `SEWDConfig` configuration class: `SEWDForSequenceClassification` (SEW-D model) - `UniSpeechConfig` configuration class: `UniSpeechForSequenceClassification` (UniSpeech model) - `UniSpeechSatConfig` configuration class: `UniSpeechSatForSequenceClassification` (UniSpeechSat model) - `Wav2Vec2BertConfig` configuration class: `Wav2Vec2BertForSequenceClassification` (Wav2Vec2-BERT model) - `Wav2Vec2Config` configuration class: `Wav2Vec2ForSequenceClassification` (Wav2Vec2 model) - `Wav2Vec2ConformerConfig` configuration class: `Wav2Vec2ConformerForSequenceClassification` (Wav2Vec2-Conformer model) - `WavLMConfig` configuration class: `WavLMForSequenceClassification` (WavLM model) - [WhisperConfig](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperConfig) configuration class: [WhisperForAudioClassification](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperForAudioClassification) (Whisper model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

#### from_pretrained[[transformers.AutoModelForAudioClassification.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with an audio classification head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **audio-spectrogram-transformer** -- `ASTForAudioClassification` (Audio Spectrogram Transformer model)
- **data2vec-audio** -- `Data2VecAudioForSequenceClassification` (Data2VecAudio model)
- **hubert** -- `HubertForSequenceClassification` (Hubert model)
- **sew** -- `SEWForSequenceClassification` (SEW model)
- **sew-d** -- `SEWDForSequenceClassification` (SEW-D model)
- **unispeech** -- `UniSpeechForSequenceClassification` (UniSpeech model)
- **unispeech-sat** -- `UniSpeechSatForSequenceClassification` (UniSpeechSat model)
- **wav2vec2** -- `Wav2Vec2ForSequenceClassification` (Wav2Vec2 model)
- **wav2vec2-bert** -- `Wav2Vec2BertForSequenceClassification` (Wav2Vec2-BERT model)
- **wav2vec2-conformer** -- `Wav2Vec2ConformerForSequenceClassification` (Wav2Vec2-Conformer model)
- **wavlm** -- `WavLMForSequenceClassification` (WavLM model)
- **whisper** -- [WhisperForAudioClassification](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperForAudioClassification) (Whisper model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForAudioClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioClassification.from_pretrained("facebook/wav2vec2-base-960h")

>>> # Update configuration during loading
>>> model = AutoModelForAudioClassification.from_pretrained("facebook/wav2vec2-base-960h", output_attentions=True)
>>> model.config.output_attentions
True
```
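
For a full forward pass, pair the model with its preprocessor. A minimal sketch, assuming the `superb/wav2vec2-base-superb-ks` keyword-spotting checkpoint and 16 kHz mono audio:

```python
>>> import torch
>>> from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

>>> # The checkpoint below is an assumption; any audio-classification
>>> # checkpoint with a trained head can be substituted.
>>> name = "superb/wav2vec2-base-superb-ks"
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(name)
>>> model = AutoModelForAudioClassification.from_pretrained(name)

>>> # One second of silence at 16 kHz stands in for a real waveform.
>>> inputs = feature_extractor(torch.zeros(16000).numpy(), sampling_rate=16000, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_label = model.config.id2label[logits.argmax(-1).item()]
```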

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (`dict[str, torch.Tensor]`, *optional*) : A state dictionary to use instead of a state dictionary loaded from the saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForAudioFrameClassification[[transformers.AutoModelForAudioFrameClassification]]

#### transformers.AutoModelForAudioFrameClassification[[transformers.AutoModelForAudioFrameClassification]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2240)

This is a generic model class that will be instantiated as one of the model classes of the library (with an audio frame (token) classification head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForAudioFrameClassification.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `Data2VecAudioConfig` configuration class: `Data2VecAudioForAudioFrameClassification` (Data2VecAudio model)
  - `UniSpeechSatConfig` configuration class: `UniSpeechSatForAudioFrameClassification` (UniSpeechSat model)
  - `Wav2Vec2BertConfig` configuration class: `Wav2Vec2BertForAudioFrameClassification` (Wav2Vec2-BERT model)
  - `Wav2Vec2Config` configuration class: `Wav2Vec2ForAudioFrameClassification` (Wav2Vec2 model)
  - `Wav2Vec2ConformerConfig` configuration class: `Wav2Vec2ConformerForAudioFrameClassification` (Wav2Vec2-Conformer model)
  - `WavLMConfig` configuration class: `WavLMForAudioFrameClassification` (WavLM model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with an audio frame (token) classification head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForAudioFrameClassification

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = AutoModelForAudioFrameClassification.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `Data2VecAudioConfig` configuration class: `Data2VecAudioForAudioFrameClassification` (Data2VecAudio model) - `UniSpeechSatConfig` configuration class: `UniSpeechSatForAudioFrameClassification` (UniSpeechSat model) - `Wav2Vec2BertConfig` configuration class: `Wav2Vec2BertForAudioFrameClassification` (Wav2Vec2-BERT model) - `Wav2Vec2Config` configuration class: `Wav2Vec2ForAudioFrameClassification` (Wav2Vec2 model) - `Wav2Vec2ConformerConfig` configuration class: `Wav2Vec2ConformerForAudioFrameClassification` (Wav2Vec2-Conformer model) - `WavLMConfig` configuration class: `WavLMForAudioFrameClassification` (WavLM model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

#### from_pretrained[[transformers.AutoModelForAudioFrameClassification.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with an audio frame (token) classification head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **data2vec-audio** -- `Data2VecAudioForAudioFrameClassification` (Data2VecAudio model)
- **unispeech-sat** -- `UniSpeechSatForAudioFrameClassification` (UniSpeechSat model)
- **wav2vec2** -- `Wav2Vec2ForAudioFrameClassification` (Wav2Vec2 model)
- **wav2vec2-bert** -- `Wav2Vec2BertForAudioFrameClassification` (Wav2Vec2-BERT model)
- **wav2vec2-conformer** -- `Wav2Vec2ConformerForAudioFrameClassification` (Wav2Vec2-Conformer model)
- **wavlm** -- `WavLMForAudioFrameClassification` (WavLM model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForAudioFrameClassification

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioFrameClassification.from_pretrained("facebook/wav2vec2-base-960h")

>>> # Update configuration during loading
>>> model = AutoModelForAudioFrameClassification.from_pretrained("facebook/wav2vec2-base-960h", output_attentions=True)
>>> model.config.output_attentions
True
```
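
Frame-classification models return one logit vector per audio frame rather than per utterance. A minimal sketch, assuming a speaker-diarization checkpoint such as `anton-l/wav2vec2-base-superb-sd` (name assumed) and 16 kHz audio:

```python
>>> import torch
>>> from transformers import AutoFeatureExtractor, AutoModelForAudioFrameClassification

>>> # Checkpoint name is an assumption; substitute the frame-classification
>>> # checkpoint you actually use.
>>> name = "anton-l/wav2vec2-base-superb-sd"
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(name)
>>> model = AutoModelForAudioFrameClassification.from_pretrained(name)

>>> inputs = feature_extractor(torch.zeros(16000).numpy(), sampling_rate=16000, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # shape: (batch, num_frames, num_labels)
>>> frame_labels = logits.argmax(-1)  # one predicted class per frame
```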

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (`dict[str, torch.Tensor]`, *optional*) : A state dictionary to use instead of a state dictionary loaded from the saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForCTC[[transformers.AutoModelForCTC]]

#### transformers.AutoModelForCTC[[transformers.AutoModelForCTC]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2224)

This is a generic model class that will be instantiated as one of the model classes of the library (with a connectionist temporal classification head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForCTC.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `Data2VecAudioConfig` configuration class: `Data2VecAudioForCTC` (Data2VecAudio model)
  - `HubertConfig` configuration class: `HubertForCTC` (Hubert model)
  - `LasrCTCConfig` configuration class: `LasrForCTC` (Lasr model)
  - `ParakeetCTCConfig` configuration class: `ParakeetForCTC` (Parakeet model)
  - `SEWConfig` configuration class: `SEWForCTC` (SEW model)
  - `SEWDConfig` configuration class: `SEWDForCTC` (SEW-D model)
  - `UniSpeechConfig` configuration class: `UniSpeechForCTC` (UniSpeech model)
  - `UniSpeechSatConfig` configuration class: `UniSpeechSatForCTC` (UniSpeechSat model)
  - `Wav2Vec2BertConfig` configuration class: `Wav2Vec2BertForCTC` (Wav2Vec2-BERT model)
  - `Wav2Vec2Config` configuration class: `Wav2Vec2ForCTC` (Wav2Vec2 model)
  - `Wav2Vec2ConformerConfig` configuration class: `Wav2Vec2ConformerForCTC` (Wav2Vec2-Conformer model)
  - `WavLMConfig` configuration class: `WavLMForCTC` (WavLM model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a connectionist temporal classification head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForCTC

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = AutoModelForCTC.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `Data2VecAudioConfig` configuration class: `Data2VecAudioForCTC` (Data2VecAudio model) - `HubertConfig` configuration class: `HubertForCTC` (Hubert model) - `LasrCTCConfig` configuration class: `LasrForCTC` (Lasr model) - `ParakeetCTCConfig` configuration class: `ParakeetForCTC` (Parakeet model) - `SEWConfig` configuration class: `SEWForCTC` (SEW model) - `SEWDConfig` configuration class: `SEWDForCTC` (SEW-D model) - `UniSpeechConfig` configuration class: `UniSpeechForCTC` (UniSpeech model) - `UniSpeechSatConfig` configuration class: `UniSpeechSatForCTC` (UniSpeechSat model) - `Wav2Vec2BertConfig` configuration class: `Wav2Vec2BertForCTC` (Wav2Vec2-BERT model) - `Wav2Vec2Config` configuration class: `Wav2Vec2ForCTC` (Wav2Vec2 model) - `Wav2Vec2ConformerConfig` configuration class: `Wav2Vec2ConformerForCTC` (Wav2Vec2-Conformer model) - `WavLMConfig` configuration class: `WavLMForCTC` (WavLM model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

#### from_pretrained[[transformers.AutoModelForCTC.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a connectionist temporal classification head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **data2vec-audio** -- `Data2VecAudioForCTC` (Data2VecAudio model)
- **hubert** -- `HubertForCTC` (Hubert model)
- **lasr_ctc** -- `LasrForCTC` (Lasr model)
- **parakeet_ctc** -- `ParakeetForCTC` (Parakeet model)
- **sew** -- `SEWForCTC` (SEW model)
- **sew-d** -- `SEWDForCTC` (SEW-D model)
- **unispeech** -- `UniSpeechForCTC` (UniSpeech model)
- **unispeech-sat** -- `UniSpeechSatForCTC` (UniSpeechSat model)
- **wav2vec2** -- `Wav2Vec2ForCTC` (Wav2Vec2 model)
- **wav2vec2-bert** -- `Wav2Vec2BertForCTC` (Wav2Vec2-BERT model)
- **wav2vec2-conformer** -- `Wav2Vec2ConformerForCTC` (Wav2Vec2-Conformer model)
- **wavlm** -- `WavLMForCTC` (WavLM model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForCTC

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> # Update configuration during loading
>>> model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h", output_attentions=True)
>>> model.config.output_attentions
True
```
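
A CTC head emits one token distribution per audio frame; greedy decoding then collapses repeats and blank tokens, which the processor's `batch_decode` handles. A minimal transcription sketch with the `facebook/wav2vec2-base-960h` checkpoint:

```python
>>> import torch
>>> from transformers import AutoModelForCTC, AutoProcessor

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")

>>> # One second of silence at 16 kHz stands in for a real utterance.
>>> inputs = processor(torch.zeros(16000).numpy(), sampling_rate=16000, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = logits.argmax(-1)
>>> transcription = processor.batch_decode(predicted_ids)
```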

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (`dict[str, torch.Tensor]`, *optional*) : A state dictionary to use instead of a state dictionary loaded from the saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForSpeechSeq2Seq[[transformers.AutoModelForSpeechSeq2Seq]]

#### transformers.AutoModelForSpeechSeq2Seq[[transformers.AutoModelForSpeechSeq2Seq]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2231)

This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForSpeechSeq2Seq.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `CohereAsrConfig` configuration class: `CohereAsrForConditionalGeneration` (CohereASR model)
  - `DiaConfig` configuration class: `DiaForConditionalGeneration` (Dia model)
  - `GraniteSpeechConfig` configuration class: `GraniteSpeechForConditionalGeneration` (GraniteSpeech model)
  - `KyutaiSpeechToTextConfig` configuration class: `KyutaiSpeechToTextForConditionalGeneration` (KyutaiSpeechToText model)
  - `MoonshineConfig` configuration class: `MoonshineForConditionalGeneration` (Moonshine model)
  - `MoonshineStreamingConfig` configuration class: `MoonshineStreamingForConditionalGeneration` (MoonshineStreaming model)
  - `Pop2PianoConfig` configuration class: `Pop2PianoForConditionalGeneration` (Pop2Piano model)
  - `SeamlessM4TConfig` configuration class: `SeamlessM4TForSpeechToText` (SeamlessM4T model)
  - `SeamlessM4Tv2Config` configuration class: `SeamlessM4Tv2ForSpeechToText` (SeamlessM4Tv2 model)
  - `Speech2TextConfig` configuration class: `Speech2TextForConditionalGeneration` (Speech2Text model)
  - `SpeechEncoderDecoderConfig` configuration class: `SpeechEncoderDecoderModel` (Speech Encoder decoder model)
  - `SpeechT5Config` configuration class: `SpeechT5ForSpeechToText` (SpeechT5 model)
  - `VibeVoiceAsrConfig` configuration class: `VibeVoiceAsrForConditionalGeneration` (VibeVoiceAsr model)
  - `VoxtralConfig` configuration class: `VoxtralForConditionalGeneration` (Voxtral model)
  - `VoxtralRealtimeConfig` configuration class: `VoxtralRealtimeForConditionalGeneration` (VoxtralRealtime model)
  - [WhisperConfig](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperConfig) configuration class: [WhisperForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperForConditionalGeneration) (Whisper model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("openai/whisper-tiny")
>>> model = AutoModelForSpeechSeq2Seq.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `CohereAsrConfig` configuration class: `CohereAsrForConditionalGeneration` (CohereASR model) - `DiaConfig` configuration class: `DiaForConditionalGeneration` (Dia model) - `GraniteSpeechConfig` configuration class: `GraniteSpeechForConditionalGeneration` (GraniteSpeech model) - `KyutaiSpeechToTextConfig` configuration class: `KyutaiSpeechToTextForConditionalGeneration` (KyutaiSpeechToText model) - `MoonshineConfig` configuration class: `MoonshineForConditionalGeneration` (Moonshine model) - `MoonshineStreamingConfig` configuration class: `MoonshineStreamingForConditionalGeneration` (MoonshineStreaming model) - `Pop2PianoConfig` configuration class: `Pop2PianoForConditionalGeneration` (Pop2Piano model) - `SeamlessM4TConfig` configuration class: `SeamlessM4TForSpeechToText` (SeamlessM4T model) - `SeamlessM4Tv2Config` configuration class: `SeamlessM4Tv2ForSpeechToText` (SeamlessM4Tv2 model) - `Speech2TextConfig` configuration class: `Speech2TextForConditionalGeneration` (Speech2Text model) - `SpeechEncoderDecoderConfig` configuration class: `SpeechEncoderDecoderModel` (Speech Encoder decoder model) - `SpeechT5Config` configuration class: `SpeechT5ForSpeechToText` (SpeechT5 model) - `VibeVoiceAsrConfig` configuration class: `VibeVoiceAsrForConditionalGeneration` (VibeVoiceAsr model) - `VoxtralConfig` configuration class: `VoxtralForConditionalGeneration` (Voxtral model) - `VoxtralRealtimeConfig` configuration class: `VoxtralRealtimeForConditionalGeneration` (VoxtralRealtime model) - [WhisperConfig](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperConfig) configuration class: [WhisperForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperForConditionalGeneration) (Whisper model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

#### from_pretrained[[transformers.AutoModelForSpeechSeq2Seq.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **cohere_asr** -- `CohereAsrForConditionalGeneration` (CohereASR model)
- **dia** -- `DiaForConditionalGeneration` (Dia model)
- **granite_speech** -- `GraniteSpeechForConditionalGeneration` (GraniteSpeech model)
- **kyutai_speech_to_text** -- `KyutaiSpeechToTextForConditionalGeneration` (KyutaiSpeechToText model)
- **moonshine** -- `MoonshineForConditionalGeneration` (Moonshine model)
- **moonshine_streaming** -- `MoonshineStreamingForConditionalGeneration` (MoonshineStreaming model)
- **pop2piano** -- `Pop2PianoForConditionalGeneration` (Pop2Piano model)
- **seamless_m4t** -- `SeamlessM4TForSpeechToText` (SeamlessM4T model)
- **seamless_m4t_v2** -- `SeamlessM4Tv2ForSpeechToText` (SeamlessM4Tv2 model)
- **speech-encoder-decoder** -- `SpeechEncoderDecoderModel` (Speech Encoder decoder model)
- **speech_to_text** -- `Speech2TextForConditionalGeneration` (Speech2Text model)
- **speecht5** -- `SpeechT5ForSpeechToText` (SpeechT5 model)
- **vibevoice_asr** -- `VibeVoiceAsrForConditionalGeneration` (VibeVoiceAsr model)
- **voxtral** -- `VoxtralForConditionalGeneration` (Voxtral model)
- **voxtral_realtime** -- `VoxtralRealtimeForConditionalGeneration` (VoxtralRealtime model)
- **whisper** -- [WhisperForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/whisper#transformers.WhisperForConditionalGeneration) (Whisper model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny")

>>> # Update configuration during loading
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", output_attentions=True)
>>> model.config.output_attentions
True
```
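
Sequence-to-sequence speech models transcribe (or translate) with `generate()` rather than a single forward pass. A minimal sketch with the `openai/whisper-tiny` checkpoint:

```python
>>> import torch
>>> from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

>>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny")
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny")

>>> # Whisper expects log-mel features computed from 16 kHz audio;
>>> # one second of silence stands in for a real recording.
>>> inputs = processor(torch.zeros(16000).numpy(), sampling_rate=16000, return_tensors="pt")
>>> generated_ids = model.generate(inputs.input_features)
>>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
```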

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (`dict[str, torch.Tensor]`, *optional*) : A state dictionary to use instead of a state dictionary loaded from the saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done) - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForAudioXVector[[transformers.AutoModelForAudioXVector]]

#### transformers.AutoModelForAudioXVector[[transformers.AutoModelForAudioXVector]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2249)

This is a generic model class that will be instantiated as one of the model classes of the library (with an audio retrieval via x-vector head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForAudioXVector.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `Data2VecAudioConfig` configuration class: `Data2VecAudioForXVector` (Data2VecAudio model)
  - `UniSpeechSatConfig` configuration class: `UniSpeechSatForXVector` (UniSpeechSat model)
  - `Wav2Vec2BertConfig` configuration class: `Wav2Vec2BertForXVector` (Wav2Vec2-BERT model)
  - `Wav2Vec2Config` configuration class: `Wav2Vec2ForXVector` (Wav2Vec2 model)
  - `Wav2Vec2ConformerConfig` configuration class: `Wav2Vec2ConformerForXVector` (Wav2Vec2-Conformer model)
  - `WavLMConfig` configuration class: `WavLMForXVector` (WavLM model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with an audio retrieval via x-vector head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForAudioXVector

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = AutoModelForAudioXVector.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `Data2VecAudioConfig` configuration class: `Data2VecAudioForXVector` (Data2VecAudio model) - `UniSpeechSatConfig` configuration class: `UniSpeechSatForXVector` (UniSpeechSat model) - `Wav2Vec2BertConfig` configuration class: `Wav2Vec2BertForXVector` (Wav2Vec2-BERT model) - `Wav2Vec2Config` configuration class: `Wav2Vec2ForXVector` (Wav2Vec2 model) - `Wav2Vec2ConformerConfig` configuration class: `Wav2Vec2ConformerForXVector` (Wav2Vec2-Conformer model) - `WavLMConfig` configuration class: `WavLMForXVector` (WavLM model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

#### from_pretrained[[transformers.AutoModelForAudioXVector.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with an audio retrieval via x-vector head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **data2vec-audio** -- `Data2VecAudioForXVector` (Data2VecAudio model)
- **unispeech-sat** -- `UniSpeechSatForXVector` (UniSpeechSat model)
- **wav2vec2** -- `Wav2Vec2ForXVector` (Wav2Vec2 model)
- **wav2vec2-bert** -- `Wav2Vec2BertForXVector` (Wav2Vec2-BERT model)
- **wav2vec2-conformer** -- `Wav2Vec2ConformerForXVector` (Wav2Vec2-Conformer model)
- **wavlm** -- `WavLMForXVector` (WavLM model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForAudioXVector

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioXVector.from_pretrained("facebook/wav2vec2-base-960h")

>>> # Update configuration during loading
>>> model = AutoModelForAudioXVector.from_pretrained("facebook/wav2vec2-base-960h", output_attentions=True)
>>> model.config.output_attentions
True
```
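
X-vector models map an utterance to a fixed-size speaker embedding, so speaker verification reduces to comparing embeddings, e.g., with cosine similarity. A minimal sketch, assuming a speaker-verification checkpoint such as `anton-l/wav2vec2-base-superb-sv` (name assumed):

```python
>>> import torch
>>> from transformers import AutoFeatureExtractor, AutoModelForAudioXVector

>>> # Checkpoint name is an assumption; substitute the x-vector checkpoint you use.
>>> name = "anton-l/wav2vec2-base-superb-sv"
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(name)
>>> model = AutoModelForAudioXVector.from_pretrained(name)

>>> # Two dummy 16 kHz utterances; replace with real waveforms.
>>> audio = [torch.zeros(16000).numpy(), (0.01 * torch.randn(16000)).numpy()]
>>> inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     embeddings = model(**inputs).embeddings
>>> # Cosine similarity between the two utterance embeddings.
>>> similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
```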

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
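
As a concrete illustration of the two code paths described for `kwargs` (a sketch, assuming the WavLM speaker-verification checkpoint `microsoft/wavlm-base-plus-sv` used above):

```python
>>> from transformers import AutoConfig, AutoModelForAudioXVector

>>> # no explicit config: `output_attentions` first updates the auto-loaded configuration
>>> model = AutoModelForAudioXVector.from_pretrained("microsoft/wavlm-base-plus-sv", output_attentions=True)

>>> # explicit config: remaining kwargs would go straight to the model's __init__
>>> config = AutoConfig.from_pretrained("microsoft/wavlm-base-plus-sv", output_attentions=True)
>>> model = AutoModelForAudioXVector.from_pretrained("microsoft/wavlm-base-plus-sv", config=config)
```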

### AutoModelForTextToSpectrogram[[transformers.AutoModelForTextToSpectrogram]]

#### transformers.AutoModelForTextToSpectrogram[[transformers.AutoModelForTextToSpectrogram]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2253)

### AutoModelForTextToWaveform[[transformers.AutoModelForTextToWaveform]]

#### transformers.AutoModelForTextToWaveform[[transformers.AutoModelForTextToWaveform]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2257)
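
Neither entry above ships with a generated description, so here is a minimal, unverified sketch (assuming the `microsoft/speecht5_tts` and `suno/bark-small` checkpoints, which belong to model families these mappings cover):

```python
>>> from transformers import AutoModelForTextToSpectrogram, AutoModelForTextToWaveform

>>> # SpeechT5 produces a spectrogram that a separate vocoder turns into audio
>>> spec_model = AutoModelForTextToSpectrogram.from_pretrained("microsoft/speecht5_tts")

>>> # Bark generates waveforms directly
>>> wav_model = AutoModelForTextToWaveform.from_pretrained("suno/bark-small")
```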

## Multimodal[[multimodal]]

The following auto classes are available for the multimodal tasks listed below.

### AutoModelForTableQuestionAnswering[[transformers.AutoModelForTableQuestionAnswering]]

#### transformers.AutoModelForTableQuestionAnswering[[transformers.AutoModelForTableQuestionAnswering]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2034)

This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForTableQuestionAnswering.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `TapasConfig` configuration class: `TapasForQuestionAnswering` (TAPAS model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a table question answering head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google/tapas-base-finetuned-wtq")
>>> model = AutoModelForTableQuestionAnswering.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `TapasConfig` configuration class: `TapasForQuestionAnswering` (TAPAS model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
#### from_pretrained[[transformers.AutoModelForTableQuestionAnswering.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **tapas** -- `TapasForQuestionAnswering` (TAPAS model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")

>>> # Update configuration during loading
>>> model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq", output_attentions=True)
>>> model.config.output_attentions
True
```
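
For end-to-end usage of the table question answering head, the corresponding pipeline is usually the quickest route (a sketch; TAPAS expects all table cells as strings):

```python
>>> from transformers import pipeline

>>> table = {
...     "Repository": ["Transformers", "Datasets", "Tokenizers"],
...     "Stars": ["36542", "4512", "3934"],
... }
>>> table_qa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")
>>> answer = table_qa(table=table, query="How many stars does the transformers repository have?")["answer"]
```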

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForDocumentQuestionAnswering[[transformers.AutoModelForDocumentQuestionAnswering]]

#### transformers.AutoModelForDocumentQuestionAnswering[[transformers.AutoModelForDocumentQuestionAnswering]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2056)

This is a generic model class that will be instantiated as one of the model classes of the library (with a document question answering head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForDocumentQuestionAnswering.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `LayoutLMConfig` configuration class: `LayoutLMForQuestionAnswering` (LayoutLM model)
  - `LayoutLMv2Config` configuration class: `LayoutLMv2ForQuestionAnswering` (LayoutLMv2 model)
  - `LayoutLMv3Config` configuration class: `LayoutLMv3ForQuestionAnswering` (LayoutLMv3 model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a document question answering head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForDocumentQuestionAnswering

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")
>>> model = AutoModelForDocumentQuestionAnswering.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `LayoutLMConfig` configuration class: `LayoutLMForQuestionAnswering` (LayoutLM model) - `LayoutLMv2Config` configuration class: `LayoutLMv2ForQuestionAnswering` (LayoutLMv2 model) - `LayoutLMv3Config` configuration class: `LayoutLMv3ForQuestionAnswering` (LayoutLMv3 model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
#### from_pretrained[[transformers.AutoModelForDocumentQuestionAnswering.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a document question answering head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **layoutlm** -- `LayoutLMForQuestionAnswering` (LayoutLM model)
- **layoutlmv2** -- `LayoutLMv2ForQuestionAnswering` (LayoutLMv2 model)
- **layoutlmv3** -- `LayoutLMv3ForQuestionAnswering` (LayoutLMv3 model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForDocumentQuestionAnswering

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")

>>> # Update configuration during loading
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3", output_attentions=True)
>>> model.config.output_attentions
True
```
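
Similarly, the document question answering pipeline wires the model to an OCR step (a sketch; `pytesseract` must be installed for LayoutLM-style checkpoints, and `invoice.png` is a hypothetical local file):

```python
>>> from transformers import pipeline

>>> doc_qa = pipeline("document-question-answering", model="impira/layoutlm-document-qa", revision="52e01b3")
>>> answers = doc_qa(image="invoice.png", question="What is the invoice number?")  # hypothetical input image
```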

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

### AutoModelForVisualQuestionAnswering[[transformers.AutoModelForVisualQuestionAnswering]]

#### transformers.AutoModelForVisualQuestionAnswering[[transformers.AutoModelForVisualQuestionAnswering]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2045)

This is a generic model class that will be instantiated as one of the model classes of the library (with a visual question answering head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForVisualQuestionAnswering.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - [Blip2Config](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2Config) configuration class: [Blip2ForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2ForConditionalGeneration) (BLIP-2 model)
  - [BlipConfig](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipConfig) configuration class: [BlipForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipForQuestionAnswering) (BLIP model)
  - `ViltConfig` configuration class: `ViltForQuestionAnswering` (ViLT model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a visual question answering head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForVisualQuestionAnswering

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
>>> model = AutoModelForVisualQuestionAnswering.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - [Blip2Config](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2Config) configuration class: [Blip2ForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2ForConditionalGeneration) (BLIP-2 model) - [BlipConfig](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipConfig) configuration class: [BlipForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipForQuestionAnswering) (BLIP model) - `ViltConfig` configuration class: `ViltForQuestionAnswering` (ViLT model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
#### from_pretrained[[transformers.AutoModelForVisualQuestionAnswering.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a visual question answering head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **blip** -- [BlipForQuestionAnswering](/docs/transformers/v5.5.1/ko/model_doc/blip#transformers.BlipForQuestionAnswering) (BLIP model)
- **blip-2** -- [Blip2ForConditionalGeneration](/docs/transformers/v5.5.1/ko/model_doc/blip-2#transformers.Blip2ForConditionalGeneration) (BLIP-2 model)
- **vilt** -- `ViltForQuestionAnswering` (ViLT model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForVisualQuestionAnswering

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForVisualQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

>>> # Update configuration during loading
>>> model = AutoModelForVisualQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa", output_attentions=True)
>>> model.config.output_attentions
True
```
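
The matching pipeline exercises the full preprocessing and inference loop (a sketch; `cats.jpg` is a hypothetical local image):

```python
>>> from transformers import pipeline

>>> vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")
>>> answers = vqa(image="cats.jpg", question="How many cats are there?")  # list of {'score', 'answer'} dicts
```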

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

## Time Series[[time-series]]

### AutoModelForTimeSeriesPrediction[[transformers.AutoModelForTimeSeriesPrediction]]

#### transformers.AutoModelForTimeSeriesPrediction[[transformers.AutoModelForTimeSeriesPrediction]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/modeling_auto.py#L2122)

This is a generic model class that will be instantiated as one of the model classes of the library (with a time-series prediction head) when created
with the [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) class method or the [from_config()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_config) class
method.

This class cannot be instantiated directly using `__init__()` (throws an error).

#### from_config[[transformers.AutoModelForTimeSeriesPrediction.from_config]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L206)

- **config** ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) --
  The model class to instantiate is selected based on the configuration class:

  - `TimesFm2_5Config` configuration class: `TimesFm2_5ModelForPrediction` (TimesFm2p5 model)
  - `TimesFmConfig` configuration class: `TimesFmModelForPrediction` (TimesFm model)
- **attn_implementation** (`str`, *optional*) --
  The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.

Instantiates one of the model classes of the library (with a time-series prediction head) from a configuration.

Note:
Loading a model from its configuration file does **not** load the model weights. It only affects the
model's configuration. Use [from_pretrained()](/docs/transformers/v5.5.1/ko/model_doc/auto#transformers.AutoModel.from_pretrained) to load the model weights.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForTimeSeriesPrediction

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google/timesfm-2.0-500m-pytorch")
>>> model = AutoModelForTimeSeriesPrediction.from_config(config)
```

**Parameters:**

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig)) : The model class to instantiate is selected based on the configuration class:  - `TimesFm2_5Config` configuration class: `TimesFm2_5ModelForPrediction` (TimesFm2p5 model) - `TimesFmConfig` configuration class: `TimesFmModelForPrediction` (TimesFm model)

attn_implementation (`str`, *optional*) : The attention implementation to use in the model (if relevant). Can be any of `"eager"` (manual implementation of the attention), `"sdpa"` (using [`F.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html)), `"flash_attention_2"` (using [Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention)), or `"flash_attention_3"` (using [Dao-AILab/flash-attention/hopper](https://github.com/Dao-AILab/flash-attention/tree/main/hopper)). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual `"eager"` implementation.
#### from_pretrained[[transformers.AutoModelForTimeSeriesPrediction.from_pretrained]]

[Source](https://github.com/huggingface/transformers/blob/v5.5.1/src/transformers/models/auto/auto_factory.py#L253)

Instantiate one of the model classes of the library (with a time-series prediction head) from a pretrained model.

The model class to instantiate is selected based on the `model_type` property of the config object (either
passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by
falling back to using pattern matching on `pretrained_model_name_or_path`:

- **timesfm** -- `TimesFmModelForPrediction` (TimesFm model)
- **timesfm2_5** -- `TimesFm2_5ModelForPrediction` (TimesFm2p5 model)

The model is set in evaluation mode by default using `model.eval()` (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with `model.train()`.

Examples:

```python
>>> from transformers import AutoConfig, AutoModelForTimeSeriesPrediction

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTimeSeriesPrediction.from_pretrained("google/timesfm-2.0-500m-pytorch")

>>> # Update configuration during loading
>>> model = AutoModelForTimeSeriesPrediction.from_pretrained("google/timesfm-2.0-500m-pytorch", output_attentions=True)
>>> model.config.output_attentions
True
```
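
Beyond loading, a forecasting call might look as follows. This is a sketch under explicit assumptions: that the TimesFM checkpoint above is available, and that `past_values` is the input keyword as in the library's other time-series models; verify both against the TimesFM model card before relying on it:

```python
>>> import torch
>>> from transformers import AutoModelForTimeSeriesPrediction

>>> model = AutoModelForTimeSeriesPrediction.from_pretrained("google/timesfm-2.0-500m-pytorch")

>>> # assumption: a (batch, context_length) float tensor of past observations
>>> past_values = torch.randn(1, 512)
>>> with torch.no_grad():
...     outputs = model(past_values=past_values)
```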

**Parameters:**

pretrained_model_name_or_path (`str` or `os.PathLike`) : Can be either:  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - A path to a *directory* containing model weights saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained), e.g., `./my_model_directory/`.

model_args (additional positional arguments, *optional*) : Will be passed along to the underlying model `__init__()` method.

config ([PreTrainedConfig](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig), *optional*) : Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:  - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). - The model was saved using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and is reloaded by supplying the save directory. - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory.

state_dict (*dict[str, torch.Tensor]*, *optional*) : A state dictionary to use instead of a state dictionary loaded from saved weights file.  This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using [save_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.save_pretrained) and [from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) is not a simpler option.

cache_dir (`str` or `os.PathLike`, *optional*) : Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (`bool`, *optional*, defaults to `False`) : Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (`dict[str, str]`, *optional*) : A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.

output_loading_info (`bool`, *optional*, defaults to `False`) : Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (`bool`, *optional*, defaults to `False`) : Whether or not to only look at local files (e.g., not try downloading the model).

revision (`str`, *optional*, defaults to `"main"`) : The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.

trust_remote_code (`bool`, *optional*, defaults to `False`) : Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (`str`, *optional*, defaults to `"main"`) : The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `code_revision` can be any identifier allowed by git.

kwargs (additional keyword arguments, *optional*) : Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done). - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function ([from_pretrained()](/docs/transformers/v5.5.1/ko/main_classes/configuration#transformers.PreTrainedConfig.from_pretrained)). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.

