# RoPE (Rotary Position Embedding) Input/Output Dump

Dump of the `apply_rope_inplace` kernel from SGLang, captured during inference with DeepSeek-V3.1-Base.
## Setup

- Model: `deepseek-ai/DeepSeek-V3.1-Base` (FP8, MLA architecture)
- Framework: SGLang (`sglang-private`, commit `01e9b55` — "Add LoRA MoE support with fused Triton kernels and TP slicing")
- Attention Backend: `trtllm_mla` (default on sm100 / GB300)
- TP: 4 (4x NVIDIA GB300)
- CUDA Graph: disabled
- Input Prompt: `"hello, how are you?"` (7 tokens after tokenization)
## Server Startup

```bash
bash start_server_deepseek.sh
```

See `start_server_deepseek.sh` for the full launch command.
## Function Signature

```python
# sglang-private/python/sglang/jit_kernel/rope.py
@register_custom_op(mutates_args=["q", "k"])
def apply_rope_inplace(
    q: torch.Tensor,              # [num_tokens, num_qo_heads, rope_dim] — modified in-place
    k: torch.Tensor,              # [num_tokens, num_kv_heads, rope_dim] — modified in-place
    cos_sin_cache: torch.Tensor,  # [max_position, rope_dim], float32
    positions: torch.Tensor,      # [num_tokens], int64
    *,
    is_neox: bool,                # False for DeepSeek (GPT-J interleaved style)
    rope_dim: int = 0,            # 64
) -> None: ...
```
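As a rough reference for what the kernel computes, here is an out-of-place sketch of GPT-J interleaved rotation (`is_neox=False`) under the cache layout described below (first half of each row is cos, second half is sin). This is an assumption-laden reimplementation, not the actual fused kernel, which mutates `q` and `k` in place; in particular, the adjacent-lane pairing (`0::2` / `1::2`) is the conventional GPT-J scheme and should be validated against the dump.

```python
import torch

def rope_interleaved_reference(x, cos_sin_cache, positions):
    """Sketch of GPT-J interleaved RoPE (is_neox=False), out of place.

    x:             [num_tokens, num_heads, rope_dim]
    cos_sin_cache: [max_position, rope_dim], first half cos, second half sin
    positions:     [num_tokens] int64
    """
    half = x.shape[-1] // 2
    cs = cos_sin_cache[positions].to(torch.float32)  # [num_tokens, rope_dim]
    cos = cs[:, :half].unsqueeze(1)                  # [num_tokens, 1, half]
    sin = cs[:, half:].unsqueeze(1)
    x1 = x[..., 0::2].to(torch.float32)              # even lane of each pair
    x2 = x[..., 1::2].to(torch.float32)              # odd lane of each pair
    out = torch.empty_like(x, dtype=torch.float32)
    out[..., 0::2] = x1 * cos - x2 * sin             # standard 2D rotation per pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out.to(x.dtype)
```

Comparing this function's output on `q_input`/`k_input` against `q_output`/`k_output` from the dump is the intended way to confirm (or refute) the pairing convention.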
## Dump Files

Tensors are stored in safetensors format (`rope_dump.safetensors`).

### Tensor Keys in `rope_dump.safetensors`

| Key | Shape | Dtype | Description |
|---|---|---|---|
| `q_input` | `[7, 32, 64]` | bfloat16 | Query tensor before RoPE (7 tokens, 32 qo_heads on TP0, rope_dim=64) |
| `q_output` | `[7, 32, 64]` | bfloat16 | Query tensor after RoPE |
| `k_input` | `[7, 1, 64]` | bfloat16 | Key tensor before RoPE (7 tokens, 1 kv_head, rope_dim=64) |
| `k_output` | `[7, 1, 64]` | bfloat16 | Key tensor after RoPE |
| `cos_sin_cache` | `[164096, 64]` | float32 | Precomputed cos/sin cache. First 32 cols = cos, last 32 cols = sin |
| `positions` | `[7]` | int64 | Position indices: `[0, 1, 2, 3, 4, 5, 6]` |
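The cache layout (cos in the first 32 columns, sin in the last 32, one entry per 2-dim rotation pair) matches a standard RoPE table. A minimal construction sketch, assuming a vanilla base of 10000; the dump's actual cache uses DeepSeek's scaled frequencies and a 164096-row position range, so only the layout, not the values, should be expected to match:

```python
import torch

def build_cos_sin_cache(max_position, rope_dim, base=10000.0):
    # One frequency per 2-dim rotation pair: rope_dim=64 -> 32 frequencies.
    half = rope_dim // 2
    inv_freq = 1.0 / (base ** (torch.arange(half, dtype=torch.float32) / half))
    t = torch.arange(max_position, dtype=torch.float32)
    freqs = torch.outer(t, inv_freq)                      # [max_position, half]
    # First half of each row = cos, second half = sin.
    return torch.cat([freqs.cos(), freqs.sin()], dim=-1)  # [max_position, rope_dim]

cache = build_cos_sin_cache(max_position=128, rope_dim=64)
```

Note that row 0 is all cos=1 / sin=0 under any frequency scaling, which is what makes position 0 an identity rotation (see Notes below).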
## Other Files

| File | Description |
|---|---|
| `call_0_meta.json` | Shapes, dtypes, `is_neox`, `rope_dim` |
| `start_server_deepseek.sh` | SGLang server startup script |
## Parameters

- `is_neox = False` (GPT-J interleaved style)
- `rope_dim = 64`
- `positions = [0, 1, 2, 3, 4, 5, 6]`
## Quick Load Example

```python
from safetensors.torch import load_file

tensors = load_file("rope_dump.safetensors")

q_in = tensors["q_input"]                 # [7, 32, 64] bfloat16
q_out = tensors["q_output"]               # [7, 32, 64] bfloat16
k_in = tensors["k_input"]                 # [7, 1, 64] bfloat16
k_out = tensors["k_output"]               # [7, 1, 64] bfloat16
cos_sin_cache = tensors["cos_sin_cache"]  # [164096, 64] float32
positions = tensors["positions"]          # [7] int64

print(f"Q: {q_in.shape} -> {q_out.shape}")    # [7, 32, 64]
print(f"K: {k_in.shape} -> {k_out.shape}")    # [7, 1, 64]
print(f"positions: {positions.tolist()}")     # [0, 1, 2, 3, 4, 5, 6]
```
## Load from HuggingFace Hub

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

path = hf_hub_download(
    repo_id="yushengsu/rope-dump-DeepSeek-V3.1-Base",
    filename="rope_dump.safetensors",
    repo_type="dataset",
)
tensors = load_file(path)
print(list(tensors.keys()))
# ['cos_sin_cache', 'k_input', 'k_output', 'positions', 'q_input', 'q_output']
```
## Notes

- At `position=0`, cos=1 and sin=0, so `k_output == k_input` (identity rotation).
- Q changes even at position 0 because MLA's `q_b_proj` mixes head information before RoPE.
- This dump captures the first layer's RoPE call during prefill of the prompt.
- During decode steps, DeepSeek's `trtllm_mla` backend fuses RoPE into the attention kernel, so `apply_rope_inplace` is only called during prefill.
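The identity-rotation note can be demonstrated with a toy cache. Row 0 of any cos/sin cache is cos=1 / sin=0 regardless of frequency scaling, so a key vector passes through position 0 unchanged while later positions rotate (this uses a vanilla base-10000 cache, not the dump's actual frequencies):

```python
import torch

# Toy cache for positions 0..3, same layout as the dump: [pos, cos|sin].
half = 32
inv_freq = 1.0 / (10000.0 ** (torch.arange(half, dtype=torch.float32) / half))
freqs = torch.outer(torch.arange(4, dtype=torch.float32), inv_freq)
cache = torch.cat([freqs.cos(), freqs.sin()], dim=-1)   # [4, 64]

# Interleaved (GPT-J style) rotation of a random "key" tensor at positions 0..3.
k = torch.randn(4, 1, 64)
cos = cache[:, :half].unsqueeze(1)
sin = cache[:, half:].unsqueeze(1)
k1, k2 = k[..., 0::2], k[..., 1::2]
out = torch.empty_like(k)
out[..., 0::2] = k1 * cos - k2 * sin
out[..., 1::2] = k1 * sin + k2 * cos

print(torch.allclose(out[0], k[0]))  # True: identity at position 0
print(torch.allclose(out[1], k[1]))  # False: rotated at position 1
```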