# NKIBench
NKIBench is a benchmark of AWS Neuron Kernel Interface (NKI) kernels paired with NumPy reference implementations. Each task provides a specification, a ground-truth NumPy forward pass, and an optimized NKI kernel targeting AWS Trainium devices, together with tooling to compile, check numerical correctness, and measure on-device latency.
## Dataset structure

```
NKIBench/
├── seeds/            # YAML task specifications (shape-agnostic templates)
├── reference/        # NumPy reference implementations with concrete shapes
├── kernels/          # Initial NKI kernels (one per case)
├── summary.json      # Index mapping task → case → {seed, reference, kernel}
├── save_fields.json  # Fields to record from the Neuron profiler output
└── kernel_wrapper.py # Profiler: compile, correctness check, latency benchmark
```
### `summary.json`

The canonical index. Each entry maps a task name to one or more parameter cases and the files that implement them:
```json
{
  "matmul": {
    "seed": "./seeds/matmul.yaml",
    "cases": {
      "3": {
        "values": {"K": 5120, "M": 4096, "N": 12288},
        "impls": [{
          "task": "./reference/matmul_M4096_N12288_K5120_numpy_2.py",
          "kernel": "./kernels/matmul_M4096_N12288_K5120_0.py"
        }]
      }
    }
  }
}
```
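For illustration, the nested index can be flattened into per-case rows like this (a minimal sketch using an in-memory copy of the entry above instead of reading `summary.json` from disk):

```python
# A minimal in-memory copy of one summary.json entry.
summary = {
    "matmul": {
        "seed": "./seeds/matmul.yaml",
        "cases": {
            "3": {
                "values": {"K": 5120, "M": 4096, "N": 12288},
                "impls": [{
                    "task": "./reference/matmul_M4096_N12288_K5120_numpy_2.py",
                    "kernel": "./kernels/matmul_M4096_N12288_K5120_0.py",
                }],
            }
        },
    }
}

# Flatten the index into (task, case_id, values, reference_path, kernel_path) rows.
rows = [
    (task, case_id, case["values"], impl["task"], impl["kernel"])
    for task, entry in summary.items()
    for case_id, case in entry["cases"].items()
    for impl in case["impls"]
]
print(rows[0][0], rows[0][1], rows[0][2]["M"])  # matmul 3 4096
```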
### `seeds/*.yaml`

A shape-agnostic specification: the task name, its symbolic parameters, an input generator, and a NumPy forward implementation.
```yaml
test_name: matmul
parameters: [M, N, K]
input: |
  lhs = np.random.normal(loc=0, scale=1.0, size=(M, K)).astype(np.float32)
  rhs = np.random.normal(loc=0, scale=1.0, size=(K, N)).astype(np.float32)
  return [lhs, rhs]
impl: |
  def forward(lhs, rhs):
    return np.matmul(lhs, rhs)
```
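To turn the shape-agnostic seed into a concrete case, a harness can wrap the seed's `input` block in a function whose arguments bind the symbolic parameters (the bare `return` is only legal inside a function). The `materialize` helper below is a hypothetical sketch, not part of the dataset tooling, and assumes NumPy is installed:

```python
import textwrap

import numpy as np

# The `input` block from the matmul seed above.
seed_input = """\
lhs = np.random.normal(loc=0, scale=1.0, size=(M, K)).astype(np.float32)
rhs = np.random.normal(loc=0, scale=1.0, size=(K, N)).astype(np.float32)
return [lhs, rhs]
"""

def materialize(snippet, params):
    # Hypothetical helper: wrap the seed snippet in a function so its bare
    # `return` is legal, binding concrete values via default arguments.
    args = ", ".join(f"{k}={v}" for k, v in params.items())
    src = f"def get_inputs({args}):\n" + textwrap.indent(snippet, "    ")
    ns = {"np": np}
    exec(src, ns)
    return ns["get_inputs"]

get_inputs = materialize(seed_input, {"M": 8, "N": 16, "K": 4})
lhs, rhs = get_inputs()
print(lhs.shape, rhs.shape)  # (8, 4) (4, 16)
```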
### `reference/*.py`

A shape-concrete NumPy reference. Each file exposes:

- `get_inputs()` — produces randomized NumPy input tensors.
- `forward(*inputs)` — the ground-truth computation.
- `transform_to_nki_inputs(inputs)` — reshapes NumPy inputs into the tile layout the NKI kernel expects.
- `transform_nki_outputs(k_res, ref)` — reshapes kernel outputs back to the reference layout.
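These four functions compose into a correctness round trip. The sketch below uses trivial stand-in bodies and NumPy itself in place of a real NKI kernel, purely to show the call order; real reference modules retile inputs for Trainium:

```python
import numpy as np

# Hypothetical stand-ins for one reference module's interface.
def get_inputs():
    rng = np.random.default_rng(0)
    return [rng.standard_normal((8, 4), dtype=np.float32),
            rng.standard_normal((4, 16), dtype=np.float32)]

def forward(lhs, rhs):
    return np.matmul(lhs, rhs)

def transform_to_nki_inputs(inputs):
    return inputs  # a real reference would retile into the kernel's layout

def transform_nki_outputs(k_res, ref):
    return k_res.reshape(ref.shape)

inputs = get_inputs()
ref = forward(*inputs)                   # ground truth
nki_inputs = transform_to_nki_inputs(inputs)
k_res = np.matmul(*nki_inputs)           # the NKI kernel would run here
out = transform_nki_outputs(k_res, ref)  # back to the reference layout
print(np.allclose(out, ref))  # True
```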
### `kernels/*.py`

Initial NKI kernels written with `neuronxcc.nki`. Each file defines a kernel function decorated with `@nki.jit`.
## Usage

```shell
# Clone the dataset
hf download Genghan/NKIBench --repo-type dataset --local-dir NKIBench
cd NKIBench
```

Profile one kernel on an AWS Neuron-enabled instance (e.g. `trn1` / `inf2`). Requires `neuronx-cc`, `neuronx-runtime`, and the `neuron-profile` CLI:

```python
import json

from kernel_wrapper import NKIKernel

with open("summary.json") as f:
    summary = json.load(f)
with open("save_fields.json") as f:
    save_fields = json.load(f)

case = summary["matmul"]["cases"]["3"]["impls"][0]
k = NKIKernel(program_path=case["kernel"], base_numpy_path=case["task"])
result = k.profile(save_fields=save_fields)

print("compiled:", result.compiled)
print("correct :", result.correct)
print("latency :", result.metadata.get("latency"), "ms")
```
`NKIKernel.profile()` compiles the kernel, validates numerical correctness against the NumPy reference over multiple random seeds (L2-norm relative tolerance `2e-5`), and benchmarks latency via `neuron-profile`. Kernels that use `float16` internally are rejected to avoid silent precision loss.
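The tolerance criterion can be sketched as follows (a minimal reimplementation for illustration, not the wrapper's actual code):

```python
import numpy as np

def rel_l2_error(kernel_out, ref_out):
    # L2-norm relative error between kernel output and NumPy reference.
    diff = np.linalg.norm(kernel_out.astype(np.float64) - ref_out.astype(np.float64))
    denom = np.linalg.norm(ref_out.astype(np.float64))
    return diff / max(denom, 1e-30)  # guard against an all-zero reference

ref = np.ones((4, 4), dtype=np.float32)
close = ref + 1e-7   # tiny perturbation, within tolerance
far = ref * 2.0      # large deviation, rejected

print(rel_l2_error(close, ref) < 2e-5)  # True
print(rel_l2_error(far, ref) < 2e-5)    # False
```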
## Citation

If you use NKIBench in your work, please cite:

```bibtex
@article{zhang2026accelopt,
  title={AccelOpt: A Self-Improving LLM Agentic System for AI Accelerator Kernel Optimization},
  author={Zhang, Genghan and Zhu, Shaowei and Wei, Anjiang and Song, Zhenyu and Nie, Allen and Jia, Zhen and Vijaykumar, Nandita and Wang, Yida and Olukotun, Kunle},
  journal={Proceedings of Machine Learning and Systems},
  volume={9},
  year={2026}
}
```