# MedGemma Fine-tuned on Dermnet (LoRA Adapters)
This repository contains the LoRA adapters for google/medgemma-4b-it fine-tuned on the Dermnet dataset.
## Model Details
- Dataset: Dermnet (~15k images)
- Classes: 23 dermatology conditions
- Method: QLoRA (4-bit quantization)
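The exact training configuration is not published here. As a rough sketch, a QLoRA setup with `transformers` and `peft` typically quantizes the frozen base model to 4-bit and trains small LoRA adapters on top (all hyperparameters below are illustrative assumptions, not the values used for these adapters):

```python
import torch
from transformers import AutoModelForImageTextToText, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

base_model = AutoModelForImageTextToText.from_pretrained(
    "google/medgemma-4b-it",
    quantization_config=bnb_config,
    device_map="auto",
)

# Small trainable LoRA adapters on the attention projections.
lora_config = LoraConfig(
    r=16,                    # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```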
## Supported Diagnoses
The 23 class labels, as they appear in the Dermnet dataset:
- Acne and Rosacea Photos
- Actinic Keratosis Basal Cell Carcinoma and other Malignant Lesions
- Atopic Dermatitis Photos
- Bullous Disease Photos
- Cellulitis Impetigo and other Bacterial Infections
- Eczema Photos
- Exanthems and Drug Eruptions
- Hair Loss Photos Alopecia and other Hair Diseases
- Herpes HPV and other STDs Photos
- Light Diseases and Disorders of Pigmentation
- Lupus and other Connective Tissue diseases
- Melanoma Skin Cancer Nevi and Moles
- Nail Fungus and other Nail Disease
- Poison Ivy Photos and other Contact Dermatitis
- Psoriasis pictures Lichen Planus and related diseases
- Scabies Lyme Disease and other Infestations and Bites
- Seborrheic Keratoses and other Benign Tumors
- Systemic Disease
- Tinea Ringworm Candidiasis and other Fungal Infections
- Urticaria Hives
- Vascular Tumors
- Vasculitis Photos
- Warts Molluscum and other Viral Infections
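If you want the model to answer with one of these labels rather than a free-form description, one option is to enumerate the classes in the prompt. The helper below is a hypothetical convenience, not part of the released adapters (the class list is truncated here; fill in the remaining names from the list above):

```python
from typing import List

# Dermnet class names, verbatim (truncated for brevity).
DERMNET_CLASSES: List[str] = [
    "Acne and Rosacea Photos",
    "Atopic Dermatitis Photos",
    "Urticaria Hives",
    # ... the remaining class names from the list above
]

def build_classification_prompt(classes: List[str]) -> str:
    """Build a prompt that constrains the answer to one known label."""
    options = "\n".join(f"- {c}" for c in classes)
    return (
        "Classify the skin condition shown in this image. "
        "Reply with exactly one of the following labels:\n" + options
    )

print(build_classification_prompt(DERMNET_CLASSES))
```

The returned string can be used as the `"text"` entry of the user turn in the Usage snippet below.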
## Usage
To use this model, load the base model and then attach these adapters:
```python
from transformers import AutoModelForImageTextToText, AutoProcessor
from peft import PeftModel
import torch
from PIL import Image

# 1. Load the base model
base_model_id = "google/medgemma-4b-it"
base_model = AutoModelForImageTextToText.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# 2. Attach the LoRA adapters
repo_id = "ayyuce/medgemma-dermatology-dermnet-adapters"
model = PeftModel.from_pretrained(base_model, repo_id)
processor = AutoProcessor.from_pretrained(repo_id)

# 3. Run inference
image = Image.open("your_image.jpg").convert("RGB")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this skin condition clinically."},
        ],
    }
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Cast floating-point inputs to the model's dtype (bfloat16).
inputs = processor(text=prompt, images=image, return_tensors="pt").to(
    model.device, dtype=torch.bfloat16
)

with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=500)

# Decode only the newly generated tokens, not the prompt.
generated = outputs[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(generated, skip_special_tokens=True))
```
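For deployment without `peft` at inference time, the adapters can optionally be merged into the base weights. This fragment continues from the Usage snippet above; the output directory name is illustrative, and merging requires the base model to be loaded in a non-quantized dtype (as it is above):

```python
# Merge the LoRA weights into the base model and save a standalone checkpoint.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("medgemma-dermnet-merged")
processor.save_pretrained("medgemma-dermnet-merged")
```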