SAM3 Vision Scripts

Detect and segment objects in images using Meta's SAM3 (Segment Anything Model 3) with text prompts. Process HuggingFace datasets with zero-shot detection and segmentation using natural language descriptions.

Script              What it does                           Output
detect-objects.py   Object detection with bounding boxes   objects column with bbox, category, score
segment-objects.py  Pixel-level segmentation masks         Segmentation maps or per-instance masks

Browse results interactively: SAM3 Results Browser


Object Detection (detect-objects.py)

Detect objects and output bounding boxes in HuggingFace object detection format.

Quick Start

Requires GPU. Use HuggingFace Jobs for cloud execution:

hf jobs uv run --flavor a100-large \
    -s HF_TOKEN=HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
    input-dataset \
    output-dataset \
    --class-name photograph

Example Output

Example Detection

Photograph detected in a historical newspaper with bounding box and confidence score. Generated from davanstrien/newspapers-image-predictions.

Arguments

Required:

  • input_dataset - Input HF dataset ID
  • output_dataset - Output HF dataset ID
  • --class-name - Object class to detect (e.g., "photograph", "animal", "table")

Common options:

  • --confidence-threshold FLOAT - Min confidence (default: 0.5)
  • --batch-size INT - Batch size (default: 4)
  • --max-samples INT - Limit samples for testing
  • --image-column STR - Image column name (default: "image")
  • --private - Make output private

All options:
--mask-threshold FLOAT       Mask generation threshold (default: 0.5)
--split STR                  Dataset split (default: "train")
--shuffle                    Shuffle before processing
--model STR                  Model ID (default: "facebook/sam3")
--dtype STR                  Precision: float32|float16|bfloat16
--hf-token STR               HF token (or use HF_TOKEN env var)

Output Format

Adds objects column with ClassLabel-based detections:

{
    "objects": [
        {
            "bbox": [x, y, width, height],
            "category": 0,  # Always 0 for single class
            "score": 0.87
        }
    ]
}
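The detections in the objects column are plain lists of dicts, so they can be post-processed with ordinary Python. A minimal sketch below filters by score and converts each bbox from [x, y, width, height] to corner format; the sample dict is a hypothetical row shaped like the output above, not real script output:

```python
# Hypothetical row in the format the script emits: bboxes are
# [x, y, width, height] in pixels; category is always 0 for single-class runs.
sample = {
    "objects": [
        {"bbox": [10, 20, 100, 50], "category": 0, "score": 0.87},
        {"bbox": [5, 5, 30, 30], "category": 0, "score": 0.42},
    ]
}

def filter_and_convert(objects, min_score=0.5):
    """Keep confident detections and convert bboxes to [x1, y1, x2, y2]."""
    kept = []
    for obj in objects:
        if obj["score"] < min_score:
            continue
        x, y, w, h = obj["bbox"]
        kept.append({"bbox_xyxy": [x, y, x + w, y + h], "score": obj["score"]})
    return kept

# Only the 0.87 detection survives the default 0.5 threshold.
print(filter_and_convert(sample["objects"]))
```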

Image Segmentation (segment-objects.py)

Produce pixel-level segmentation masks for objects matching a text prompt. Two output formats are available.

Quick Start

hf jobs uv run --flavor a100-large \
    -s HF_TOKEN=HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/sam3/raw/main/segment-objects.py \
    input-dataset \
    output-dataset \
    --class-name deer

Example Output

Example Segmentation

Deer segmented in a wildlife camera trap image with pixel-level mask and bounding box. Generated from davanstrien/ena24-detection.

Arguments

Required:

  • input_dataset - Input HF dataset ID
  • output_dataset - Output HF dataset ID
  • --class-name - Object class to segment (e.g., "deer", "animal", "table")

Common options:

  • --output-format - semantic-mask (default) or instance-masks
  • --confidence-threshold FLOAT - Min confidence (default: 0.5)
  • --include-boxes - Also output bounding boxes
  • --batch-size INT - Batch size (default: 4)
  • --max-samples INT - Limit samples for testing
  • --private - Make output private

All options:
--mask-threshold FLOAT       Mask binarization threshold (default: 0.5)
--image-column STR           Image column name (default: "image")
--split STR                  Dataset split (default: "train")
--shuffle                    Shuffle before processing
--model STR                  Model ID (default: "facebook/sam3")
--dtype STR                  Precision: float32|float16|bfloat16
--hf-token STR               HF token (or use HF_TOKEN env var)

Output Formats

Semantic mask (--output-format semantic-mask, default):

  • Adds segmentation_map column: single image per sample where pixel value = instance ID (0 = background)
  • More compact, viewable in the HF dataset viewer
  • Also adds num_instances and scores columns

Instance masks (--output-format instance-masks):

  • Adds segmentation_masks column: list of binary mask images (one per detected instance)
  • Also adds scores and category columns
  • Best for extracting individual objects or creating training data
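The two formats are interconvertible: a semantic map, where each pixel holds an instance ID, can be split into per-instance binary masks with NumPy. A minimal sketch using a toy array (a real map would come from the segmentation_map column as an image):

```python
import numpy as np

# Toy semantic map: pixel value = instance ID, 0 = background.
seg_map = np.array([
    [0, 1, 1],
    [0, 2, 1],
    [2, 2, 0],
])

def split_instances(seg_map):
    """Split a semantic map into one boolean mask per instance ID."""
    instance_ids = [i for i in np.unique(seg_map) if i != 0]
    return {int(i): (seg_map == i) for i in instance_ids}

masks = split_instances(seg_map)
print(sorted(masks))   # instance IDs present: [1, 2]
print(int(masks[1].sum()))  # pixel count for instance 1: 3
```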

Example

Segment deer in wildlife camera trap images:

hf jobs uv run --flavor a100-large \
    -s HF_TOKEN=HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/sam3/raw/main/segment-objects.py \
    davanstrien/ena24-detection \
    my-username/wildlife-segmented \
    --class-name deer \
    --include-boxes

HuggingFace Jobs Examples

Historical Newspapers

hf jobs uv run --flavor a100-large \
    -s HF_TOKEN=HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
    davanstrien/newspapers-with-images-after-photography \
    my-username/newspapers-detected \
    --class-name photograph \
    --confidence-threshold 0.6 \
    --batch-size 8

Wildlife Camera Traps

hf jobs uv run --flavor a100-large \
    -s HF_TOKEN=HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/sam3/raw/main/segment-objects.py \
    wildlife-images \
    wildlife-segmented \
    --class-name animal \
    --include-boxes

Quick Testing

Test on a small subset before full run:

hf jobs uv run --flavor a100-large \
    -s HF_TOKEN=HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/sam3/raw/main/segment-objects.py \
    large-dataset \
    test-output \
    --class-name object \
    --max-samples 20

GPU Flavors

# L4 (cost-effective)
--flavor l4x1

# A100 (fastest)
--flavor a100

See HF Jobs pricing.

Local Execution

If you have a CUDA GPU locally:

# Detection
uv run detect-objects.py INPUT OUTPUT --class-name CLASSNAME

# Segmentation
uv run segment-objects.py INPUT OUTPUT --class-name CLASSNAME

Multiple Object Types

Run the script multiple times with different --class-name values:

hf jobs uv run ... --class-name photograph
hf jobs uv run ... --class-name illustration

Performance

GPU   Batch Size   ~Images/sec
L4    4-8          2-4
A10   8-16         4-6

Throughput varies with image size and detection complexity.

Common Use Cases

  • Documents: --class-name table or --class-name figure
  • Newspapers: --class-name photograph or --class-name illustration
  • Wildlife: --class-name animal or --class-name bird
  • Products: --class-name product or --class-name label

Troubleshooting

  • No CUDA: Use HF Jobs (see examples above)
  • OOM errors: Reduce --batch-size
  • Few detections: Lower --confidence-threshold or try different class descriptions
  • Wrong column: Use --image-column your_column_name

About SAM3

SAM3 is Meta's zero-shot vision model. Describe any object in natural language and it will detect and segment it — no training required.

License

Apache 2.0
