Taiwan Patent QA - Gemma-2-9B-IT (Practice Fine-Tuned Model)
Model Author: Simon Liu, Google GenAI GDE
Original Model: google/gemma-2-9b-it
Purpose: This model is intended solely for fine-tuning practice; accuracy is not guaranteed. It has been fine-tuned for question-answering practice on Taiwan patent law but has not undergone comprehensive accuracy testing.
Model Summary
The model taiwan-patent-qa-gemma-2-9b-it is fine-tuned from Google's Gemma-2-9B-IT and was trained specifically on Taiwan patent law and related topics. However, it serves only as a practice exercise in fine-tuning and does not guarantee the accuracy or reliability of its responses. Users should be aware that its answers may be inaccurate in real applications.
Key Features
Enhanced QA Performance: Fine-tuned on questions about Taiwanese patent law.
Based on a State-of-the-Art Architecture: Builds on Google Gemma-2-9B-IT for strong language comprehension.
Suited to Taiwanese Patent Legal Terminology: Trained on patent terminology, but its accuracy has not been fully validated.
How to Use the Model
This model can be loaded with the Hugging Face transformers library. Below is a usage example. Note: the model is intended only for fine-tuning practice, and the accuracy of its responses is not guaranteed.
Installation
First install the required packages (Gemma 2 support requires transformers v4.42 or later):
pip install -U transformers
Loading the Model
Below is an example of loading and querying the model:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the fine-tuned model and tokenizer
model_name = "Simon-Liu/taiwan-patent-qa-gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # the 9B model needs roughly 18 GB of memory in bf16
    device_map="auto",
)

# Example question; Gemma-2-IT models expect the chat template, not a raw string
question = "智慧局取得電子交換的優先權證明文件後,會通知申請人嗎?"
messages = [{"role": "user", "content": question}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate the answer and decode only the newly generated tokens
outputs = model.generate(inputs, max_new_tokens=100)
answer = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print("Answer:", answer)
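Gemma-2-IT models were instruction-tuned on a specific chat turn format; tokenizer.apply_chat_template produces it for you, but it is useful to know what the string looks like if you ever build prompts by hand. A minimal sketch of the single-turn format (the <bos> token is normally prepended by the tokenizer, so it is omitted here):

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a single user message in the Gemma-2 chat turn format."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("智慧局取得電子交換的優先權證明文件後,會通知申請人嗎?")
# Pass `prompt` to tokenizer(...) instead of the raw question.
```

The trailing "<start_of_turn>model\n" cues the model to produce the assistant turn; omitting it is a common cause of degraded answers.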
Example Usage
This model is designed as a practice tool for answering questions about Taiwan patent law. It has not undergone accuracy testing, so its output should be treated as reference only.
Example Input:
"智慧局取得電子交換的優先權證明文件後,會通知申請人嗎?" (After TIPO obtains a priority document through electronic exchange, will it notify the applicant?)
Example Output:
"如果電子交換成功的話,智慧局不會再另外通知申請人。" (If the electronic exchange succeeds, TIPO will not separately notify the applicant.)
Model Details
Base Model: google/gemma-2-9b-it
Fine-Tuned by: Simon Liu, Google GenAI GDE
Fine-Tuning Data: Dataset related to Taiwan patent law
Purpose: Fine-tuning practice for question answering on Taiwan patent law; accuracy is not guaranteed.
Model Training
Model Configuration
LoRA Configuration:
LoraConfig(
    r=6,
    lora_alpha=8,
    target_modules=['o_proj', 'q_proj', 'up_proj', 'v_proj', 'k_proj', 'down_proj', 'gate_proj'],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
Training Hyperparameters:
- Learning Rate: 5e-6
- Epochs: 20
- GPU: A100 * 1
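For readers unfamiliar with LoRA, the configuration above means each targeted Linear layer keeps its frozen weight W and learns a rank-r update scaled by lora_alpha / r, i.e. y = W x + (alpha / r) · B A x, where A has shape r × d_in and B has shape d_out × r. A toy pure-Python sketch of that forward pass (illustrative dimensions only, not the actual Gemma weights):

```python
def matvec(M, x):
    """Matrix-vector product over plain Python lists."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, r, alpha):
    """y = W x + (alpha / r) * B (A x): frozen base plus scaled rank-r update."""
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))   # rank-r correction
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Toy example: d_in = 3, d_out = 2, rank r = 1
W = [[1, 0, 0], [0, 1, 0]]   # frozen base weight
A = [[1, 1, 1]]              # r x d_in (trainable)
B = [[1], [2]]               # d_out x r (trainable)
print(lora_forward(W, A, B, [1, 2, 3], r=1, alpha=2))  # [13.0, 26.0]
```

In the actual fine-tune, this update is applied to all seven listed projection modules in every transformer block, which keeps the number of trainable parameters small relative to the 9B frozen base.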
Author
Simon Liu
Google GenAI GDE (Google Developer Expert)
If you have any questions or feedback, feel free to reach out on the Hugging Face platform or connect with me on LinkedIn or GitHub.
Citation
If you use this model in your research, please cite it as follows:
@misc{Liu2024TaiwanPatentQA,
author = {Simon Liu},
title = {Taiwan Patent QA - Gemma-2-9B-IT (Fine-Tune Practice Edition)},
year = {2024},
url = {https://huggingface.co/Simon-Liu/taiwan-patent-qa-gemma-2-9b-it},
note = {Fine-tuned model for Taiwan patent question-answering practice; accuracy not guaranteed}
}