Model Card for Meta-Llama-3.1-8B-climate-expert

Model Details

  • Model Name: Meta-Llama-3.1-8B-climate-expert
  • Developer: J R
  • Base Model: unsloth/Meta-Llama-3.1-8B-Instruct
  • Quantization: 8-bit (using bitsandbytes)
  • Fine-tuning Method: LoRA (Low-Rank Adaptation)
  • LoRA Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
  • LoRA Rank (r): 8
  • LoRA Alpha: 16
  • LoRA Dropout: 0.0
  • Training Libraries: Unsloth, TRL, PEFT, Transformers
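
The hyperparameters above imply a small adapter relative to the 8B base model. A rough back-of-the-envelope sketch (the layer dimensions below are assumptions based on the standard Llama-3.1-8B architecture, not taken from this card; verify against the actual model config):

```python
# Rough estimate of LoRA adapter size for the configuration above.
# Assumed Llama-3.1-8B dimensions: 32 layers, hidden size 4096,
# 8 KV heads (KV projection width 1024), MLP intermediate size 14336.
r = 8  # LoRA rank

# (in_features, out_features) for each targeted projection in one layer
modules = {
    "q_proj":    (4096, 4096),
    "k_proj":    (4096, 1024),
    "v_proj":    (4096, 1024),
    "o_proj":    (4096, 4096),
    "gate_proj": (4096, 14336),
    "up_proj":   (4096, 14336),
    "down_proj": (14336, 4096),
}

# A LoRA adapter on a d_in x d_out weight adds r * (d_in + d_out) parameters
per_layer = sum(r * (d_in + d_out) for d_in, d_out in modules.values())
total = per_layer * 32  # 32 transformer layers

print(f"~{total:,} trainable parameters ({total / 8e9:.2%} of 8B)")
```

Under these assumptions, LoRA at rank 8 trains roughly 21M parameters, a fraction of a percent of the base model.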


Intended Use

This model is fine-tuned to act as a climate expert, specialising in classifying and responding to claims about climate change. It is designed for:

  • Instruction-following in climate science contexts.
  • Generating informative, evidence-based responses to user queries about climate change.
  • Educational and research purposes, such as analysing climate-related arguments or claims.

Training Details

Data

  • Training Data:
    • climate_argumentation_patterns.jsonl (custom dataset of climate-related claims and responses, derived from the ClimateFever dataset).
    • ClimateFever: A dataset of 1,535 real-world claims about climate change, each annotated with evidence from Wikipedia. This dataset was used as a foundation for identifying argumentation patterns via AI classification, which were then incorporated into the training data.
    • evaluation_claims.jsonl (custom evaluation set).
  • Data Format: Instruction-tuning format with system, user, and assistant roles.
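
A minimal sketch of what one record in this chat format might look like (the prompt wording and claim below are illustrative assumptions, not drawn from the actual dataset):

```python
import json

# Hypothetical instruction-tuning record in the system/user/assistant
# chat format; the exact field names and text are illustrative only.
record = {
    "messages": [
        {"role": "system",
         "content": "You are a climate expert. Classify the claim and respond with evidence."},
        {"role": "user",
         "content": "Global warming stopped in 1998."},
        {"role": "assistant",
         "content": "This claim cherry-picks a single warm year; long-term "
                    "temperature records show continued warming."},
    ]
}

# One JSON object per line, as in a .jsonl training file
line = json.dumps(record)
parsed = json.loads(line)
print(parsed["messages"][0]["role"])
```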

Limitations

  • Bias: The model may reflect biases present in the training data. It is fine-tuned on a specific dataset and may not generalise well to all climate-related topics.
  • Knowledge Cutoff: Limited to the knowledge of the base model (Meta-Llama-3.1-8B) and the fine-tuning data (2026).
  • Quantization Artifacts: 8-bit quantization may introduce minor performance trade-offs compared to full precision.
  • Context Window: Limited to 256 tokens, which may truncate longer conversations or complex queries.
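
The 8-bit loading and 256-token limit above can be reflected at inference time. A sketch using the standard Transformers and bitsandbytes APIs (loading the base model here is an assumption for illustration; it requires a CUDA GPU and downloads the full weights):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the base model in 8-bit, matching the card's quantization setting
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Meta-Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B-Instruct")

# Keep inputs within the fine-tuned context window (256 tokens)
inputs = tokenizer(
    "Is Arctic sea ice declining?",
    truncation=True,
    max_length=256,
    return_tensors="pt",
)
```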

Ethical Considerations

  • Misinformation Risk: Always verify the model’s outputs with authoritative sources.
  • Bias and Fairness: The model’s responses should be critically evaluated for fairness and accuracy, especially in sensitive contexts.
  • Environmental Impact: Fine-tuning large models consumes significant computational resources.