---
license: mit
---
## [miniCTX: Neural Theorem Proving with (Long-)Contexts](https://arxiv.org/abs/2408.03350)

File-tuned context model from [miniCTX: Neural Theorem Proving with (Long-)Contexts](https://arxiv.org/abs/2408.03350).
|
|
- Base language model: `deepseek-ai/deepseek-coder-1.3b-base`
- Data: [ntp-mathlib-instruct-context](https://huggingface.co/datasets/l3lab/ntp-mathlib-instruct-context)
|
|
The model is finetuned specifically for Lean 4 tactic prediction given a proof state and optional file context.
|
|
#### Performance

Please see our paper for evaluation results.
|
|
#### Example input
```
/- You are proving a theorem in Lean 4.
You are given the following information:
- The file contents up to the current tactic, inside [CTX]...[/CTX]
- The current proof state, inside [STATE]...[/STATE]

Your task is to generate the next tactic in the proof.
Put the next tactic inside [TAC]...[/TAC]
-/
[CTX]
import Mathlib.Data.Nat.Prime

theorem test_thm (m n : Nat) (h : m.Coprime n) : m.gcd n = 1 := by

[/CTX]
[STATE]
m n : ℕ
h : Nat.Coprime m n
⊢ Nat.gcd m n = 1
[/STATE]
[TAC]
```
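For programmatic use, the instruction header and the `[CTX]`/`[STATE]`/`[TAC]` markers shown above can be assembled with a small helper. This is a minimal sketch; the function name and its arguments are our own illustration, not part of any released code.

```python
def build_prompt(file_context: str, proof_state: str) -> str:
    """Assemble a miniCTX-style prompt: an instruction header, the file
    contents in [CTX]...[/CTX], the goal in [STATE]...[/STATE], and an
    open [TAC] tag for the model to complete."""
    instructions = (
        "/- You are proving a theorem in Lean 4.\n"
        "You are given the following information:\n"
        "- The file contents up to the current tactic, inside [CTX]...[/CTX]\n"
        "- The current proof state, inside [STATE]...[/STATE]\n\n"
        "Your task is to generate the next tactic in the proof.\n"
        "Put the next tactic inside [TAC]...[/TAC]\n"
        "-/\n"
    )
    return (
        f"{instructions}"
        f"[CTX]\n{file_context}\n[/CTX]\n"
        f"[STATE]\n{proof_state}\n[/STATE]\n"
        "[TAC]\n"
    )

# Reproduce the example input above.
context = (
    "import Mathlib.Data.Nat.Prime\n\n"
    "theorem test_thm (m n : Nat) (h : m.Coprime n) : m.gcd n = 1 := by\n"
)
state = "m n : ℕ\nh : Nat.Coprime m n\n⊢ Nat.gcd m n = 1"
prompt = build_prompt(context, state)
```

The resulting string can be tokenized and passed to the model as-is.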
|
|
#### Example output
```
rw [Nat.Coprime] at h
[/TAC]
```
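Since the model emits the tactic followed by a closing `[/TAC]` tag, decoding can either use `[/TAC]` as a stop string or trim it afterwards. The extractor below is our own sketch of the trimming approach.

```python
def extract_tactic(completion: str) -> str:
    """Return the tactic text from a model completion, dropping a
    leading [TAC] tag and everything from the closing [/TAC] tag on."""
    text = completion.strip()
    if text.startswith("[TAC]"):
        text = text[len("[TAC]"):]
    end = text.find("[/TAC]")
    if end != -1:
        text = text[:end]
    return text.strip()

# Applied to the example output above:
tactic = extract_tactic("rw [Nat.Coprime] at h\n[/TAC]")
# tactic == "rw [Nat.Coprime] at h"
```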
|
|
#### Citation

Please cite:
```
@misc{hu2024minictx,
  title={miniCTX: Neural Theorem Proving with (Long-)Contexts},
  author={Jiewen Hu and Thomas Zhu and Sean Welleck},
  year={2024},
  eprint={2408.03350},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2408.03350},
}
```