Legend:
✅ = Passed
❌ = Failed
➖ = No testcase yet
Component | Description | Testcases | Since |
---|---|---|---|
MlpExample | A simple Equinox MLP (converter pipeline). | mlp_training_mode ✅<br>mlp_training_mode_f64 ✅<br>mlp_inference_mode ✅<br>mlp_inference_mode_f64 ✅<br>mlp_batched_training_mode ✅<br>mlp_batched_training_mode_f64 ✅ | v0.8.0 |
SimpleLinearExample | A simple linear layer example using Equinox (converter). | simple_linear ✅<br>simple_linear_f64 ✅<br>nn_linear ✅<br>nn_linear_f64 ✅ | v0.7.1 |
Attention | Multi-Head Self-Attention using Equinox modules. | attention_dynamic ✅<br>attention ✅ | v0.10.0 |
AttentionCore | Multi-Head Self-Attention without rotary processing. | attention_core_dynamic ✅<br>attention_core ✅ | v0.10.0 |
Block | Transformer Block. | transformer_block_dynamic ✅<br>transformer_block ✅ | v0.10.0 |
DINOv3VisionTransformer | DINOv3 Vision Transformer. | eqx_dinov3_vit_Ti14_dynamic ✅<br>eqx_dinov3_vit_Ti14 ✅<br>eqx_dinov3_vit_S14_dynamic ✅<br>eqx_dinov3_vit_S14 ✅<br>eqx_dinov3_vit_B14_dynamic ✅<br>eqx_dinov3_vit_B14 ✅<br>eqx_dinov3_vit_S16_dynamic ✅<br>eqx_dinov3_vit_S16 ✅ | v0.10.0 |
PatchEmbed | Image to Patch Embedding. | patch_embed ✅ | v0.10.0 |
GPT | A simple GPT model that reuses nnx.MultiHeadAttention. | gpt_dynamic ✅<br>gpt ✅ | v0.7.0 |
GPT_Attention | A multi-head attention layer. | gpt_attention ✅ | v0.7.1 |
GPT_CausalSelfAttention | A causal self-attention module. | gpt_causal_self_attention_dynamic ✅<br>gpt_causal_self_attention ✅ | v0.7.0 |
GPT_Embeddings | Combines token and position embeddings with dropout. | gpt_embeddings_dynamic ✅<br>gpt_embeddings ✅ | v0.7.0 |
GPT_Head | The head of the GPT model. | gpt_head_dynamic ✅<br>gpt_head ✅ | v0.7.0 |
GPT_MLP | An MLP block with GELU activation from nanoGPT. | gpt_mlp_dynamic ✅<br>gpt_mlp ✅ | v0.7.0 |
GPT_PositionEmbedding | A positional embedding layer using nnx.Embed. | gpt_position_embedding ✅ | v0.7.0 |
GPT_TokenEmbedding | A token embedding layer using nnx.Embed. | gpt_token_embedding_dynamic ✅<br>gpt_token_embedding ✅ | v0.7.0 |
GPT_TransformerBlock | A transformer block combining attention and MLP. | gpt_block_dynamic ✅<br>gpt_block ✅ | v0.7.0 |
GPT_TransformerStack | A stack of transformer blocks. | gpt_transformer_stack_dynamic ✅<br>gpt_transformer_stack ✅ | v0.7.0 |
broadcast_add | Simple dynamic broadcast + add. | gpt_broadcast_add_dynamic_dynamic ✅<br>gpt_broadcast_add_dynamic_dynamic_f64 ✅<br>gpt_broadcast_add_dynamic ✅<br>gpt_broadcast_add_dynamic_f64 ✅ | v0.7.0 |
cfl_timestep | Tests the CFL condition timestep calculation. | cfl_timestep_f64 ✅ | v0.6.5 |
weno_reconstruction | Tests the complex arithmetic pattern found in WENO schemes. | weno_reconstruction_f64 ✅ | v0.6.5 |
fori_loop_test | Demonstrates jax.lax.fori_loop with a simple loop (see the sketch after this table). | fori_loop_test ✅<br>fori_loop_test_f64 ✅ | v0.6.3 |
issue18_abs | Tests jnp.abs from issue 18. | abs_fn ✅<br>abs_fn_f64 ✅ | v0.6.3 |
issue18_arange | Tests jnp.arange from issue 18. | arange_fn ✅ | v0.6.3 |
issue18_fori_loop | Tests jax.lax.fori_loop from issue 18. | fori_loop_fn ✅<br>fori_loop_fn_f64 ✅ | v0.6.3 |
issue18_linspace | Tests jnp.linspace from issue 18. | linspace_fn ✅ | v0.6.3 |
issue18_scan | Tests jax.lax.scan from issue 18 (no xs). | scan_fn ✅ | v0.6.3 |
issue18_sign | Tests jnp.sign from issue 18. | sign_fn ✅<br>sign_fn_f64 ✅ | v0.6.3 |
issue18_where | Tests jnp.where from issue 18. | where_fn ✅<br>where_fn_f64 ✅ | v0.6.3 |
issue18_while_loop | Tests jax.lax.while_loop from issue 18. | while_loop_fn ✅ | v0.9.0 |
select_test | Demonstrates jnp.select with scalar and tensor predicates (see the sketch after this table). | select_test_all_options ✅<br>select_test_scalar_select_option_0 ✅<br>select_test_scalar_select_option_1 ✅<br>select_test_scalar_select_option_2 ✅<br>select_test_default_case ✅ | v0.9.0 |
sort_test | Demonstrates jnp.sort on slices of an input array. | sort_test_basic ✅ | v0.9.0 |
cond_scatter_add_mul | Scatter add/mul inside conditional branches (converter). | cond_scatter_add_mul_f64_a ✅<br>cond_scatter_add_mul_f64_b ✅ | v0.8.0 |
cond_scatter_repro | Reproduces a bug where lax.cond subgraphs do not inherit parent initializers. | cond_scatter_repro_f64 ✅ | v0.6.4 |
remat2 | Tests a simple case of jax.checkpoint (also known as jax.remat). | checkpoint_scalar_f32 ✅<br>checkpoint_scalar_f32_f64 ✅ | v0.6.5 |
scatter_window | Window-scatter (H×W patch) with implicit batch (depth-3 path). Exercises GatherScatterMode.FILL_OR_DROP and double precision. Regression test for a prior conversion failure. | scatter_window_update_f64_example ✅ | v0.7.4 |
AutoEncoder | A simple autoencoder example (converter pipeline). | simple_autoencoder ✅<br>simple_autoencoder_f64 ✅ | v0.2.0 |
CNN | A simple convolutional neural network (CNN). | simple_cnn_static ✅<br>simple_cnn_dynamic ✅ | v0.2.0 |
ForiLoop | fori_loop example using nnx-compatible primitives (converter). | fori_loop_counter ✅<br>fori_loop_counter_f64 ✅ | v0.5.1 |
GRUCell | Flax/nnx GRUCell lowered through converter primitives. | gru_cell_basic ✅ | v0.7.2 |
MLP | A simple Multi-Layer Perceptron (MLP) with BatchNorm, Dropout, and GELU activation. | simple_mlp_static ✅<br>simple_mlp_static_f64 ✅<br>simple_mlp_dynamic ✅<br>simple_mlp_dynamic_f64 ✅<br>simple_mlp_with_call_params_dynamic ✅<br>simple_mlp_with_call_params_dynamic_f64 ✅<br>simple_mlp_with_call_params ✅<br>simple_mlp_with_call_params_f64 ✅ | v0.1.0 |
MultiHeadAttention | nnx.MultiHeadAttention exercised in several configurations, including custom attention_fn and symbolic batch variants. | multihead_attention_nn_dynamic ✅<br>multihead_attention_nn ✅<br>multihead_attention_nnx_dynamic ✅<br>multihead_attention_nnx ✅<br>multihead_attention_2_nnx_dynamic ✅<br>multihead_attention_2_nnx ✅ | v0.2.0 |
SequentialReLU | Two stateless nnx.relu activations chained via nnx.Sequential. | sequential_double_relu ✅<br>sequential_double_relu_f64 ✅ | v0.7.1 |
SequentialWithResidual | nnx.Sequential nested within a residual block, as a regression test for earlier bugs. | sequential_nested_with_residual ✅ | v0.7.1 |
TransformerDecoderWithSequential | Tiny nnx Transformer decoder using nnx.Sequential in the FFN block. | tiny_decoder_with_sequential ✅<br>tiny_decoder_with_sequential_and_full_dynamic_shapes_dynamic ✅ | v0.7.1 |
TransformerDecoderWithoutSequential | Tiny nnx Transformer decoder with explicit FFN layers (no Sequential). | tiny_decoder_without_sequential ✅ | v0.7.1 |
onnx_functions_000 | One function boundary on an outer NNX module (new-world). | 000_one_function_on_outer_layer_dynamic ✅<br>000_one_function_on_outer_layer ✅ | v0.4.0 |
onnx_functions_001 | One function on an inner layer. | 001_one_function_inner_dynamic ✅<br>001_one_function_inner ✅ | v0.4.0 |
onnx_functions_002 | Two nested functions. | 002_two_nested_functions_dynamic ✅<br>002_two_nested_functions ✅ | v0.4.0 |
onnx_functions_003 | Two simple nested functions. | 003_two_simple_nested_functions_dynamic ✅<br>003_two_simple_nested_functions ✅ | v0.4.0 |
onnx_functions_004 | Nested function plus component. | 004_nested_function_plus_component_dynamic ✅<br>004_nested_function_plus_component ✅ | v0.4.0 |
onnx_functions_005 | Nested function plus more components. | 005_nested_function_plus_component_dynamic ✅<br>005_nested_function_plus_component ✅ | v0.4.0 |
onnx_functions_006 | One function on an outer layer. | 006_one_function_outer_dynamic ✅<br>006_one_function_outer ✅ | v0.4.0 |
onnx_functions_007 | Transformer block with a nested MLP block, with call parameter. | 007_transformer_block_dynamic ✅<br>007_transformer_block ✅ | v0.4.0 |
onnx_functions_008 | Transformer block with a nested MLP block, no call parameter. | 008_transformer_block_dynamic ✅<br>008_transformer_block ✅ | v0.4.0 |
onnx_functions_009 | Transformer block using the decorator on both class and function. | 009_transformer_block_dynamic ✅<br>009_transformer_block ✅ | v0.4.0 |
onnx_functions_010 | Transformer stack. | 010_transformer_stack_dynamic ✅<br>010_transformer_stack ✅ | v0.4.0 |
onnx_functions_012 | Vision Transformer (ViT) convolutional embedding. | 012_vit_conv_embedding_dynamic ✅<br>012_vit_conv_embedding ✅ | v0.4.0 |
onnx_functions_013 | Vision Transformer (ViT) convolutional embedding with call parameters. | 013_vit_conv_embedding_with_call_params_dynamic ✅<br>013_vit_conv_embedding_with_call_params ✅<br>013_vit_conv_embedding_with_internal_call_params_dynamic ✅<br>013_vit_conv_embedding_with_internal_call_params ✅ | v0.4.0 |
onnx_functions_014 | One function on an outer layer. | 014_one_function_with_input_param_with_default_value ✅<br>014_one_function_without_input_param_with_default_value_dynamic ✅<br>014_one_function_without_input_param_with_default_value ✅ | v0.4.0 |
onnx_functions_015 | One function on an outer layer. | 015_one_function_with_input_param_without_default_value_dynamic ✅<br>015_one_function_with_input_param_without_default_value ✅ | v0.4.0 |
onnx_functions_016 | Nested function plus more components. | 016_internal_function_with_input_param_with_default_value_dynamic ✅<br>016_internal_function_with_input_param_with_default_value ✅ | v0.4.0 |
onnx_functions_017 | Demonstrates @onnx_function(unique=True) reuse across call sites. | 017_unique_function_reuse ✅ | v0.10.0 |
ClassificationHead | Classification head for Vision Transformer. | vit_classification_head_dynamic ✅<br>vit_classification_head ✅ | v0.4.0 |
ClassificationHeadFlatten | Classification head for Vision Transformer. | vit_classification_head_flat_dynamic ✅<br>vit_classification_head_flat ✅ | v0.4.0 |
ConcatClsToken | Concatenate CLS token to the input embedding. | vit_concat_cls_token_dynamic ✅<br>vit_concat_cls_token ✅ | v0.4.0 |
ConcatClsTokenFlatten | Concatenate CLS token to the input embedding. | vit_concat_cls_token_flat_dynamic ✅<br>vit_concat_cls_token_flat ✅ | v0.4.0 |
ConvEmbedding | Convolutional Token Embedding for MNIST with hierarchical downsampling. | vit_mnist_conv_embedding_dynamic ✅<br>vit_mnist_conv_embedding ✅ | v0.1.0 |
ConvEmbeddingFlatten | Convolutional Token Embedding for MNIST with hierarchical downsampling. | vit_mnist_conv_embedding_flat_dynamic ✅<br>vit_mnist_conv_embedding_flat ✅ | v0.1.0 |
FeedForward | MLP in Transformer. | vit_feed_forward_dynamic ✅<br>vit_feed_forward ✅ | v0.1.0 |
FeedForwardFlatten | MLP in Transformer. | vit_feed_forward_flat_dynamic ✅<br>vit_feed_forward_flat ✅ | v0.1.0 |
GetToken | Get the CLS token from the input embedding. | vit_get_token_dynamic ✅<br>vit_get_token ✅ | v0.4.0 |
GetTokenFlatten | Get the CLS token from the input embedding. | vit_get_token_flat_dynamic ✅<br>vit_get_token_flat ✅ | v0.4.0 |
PatchEmbedding | Cutting the image into patches and linearly embedding them. | vit_patch_embedding_dynamic ✅<br>vit_patch_embedding ✅ | v0.1.0 |
PatchEmbeddingFlatten | Cutting the image into patches and linearly embedding them. | vit_patch_embedding_flat_dynamic ✅<br>vit_patch_embedding_flat ✅ | v0.1.0 |
PositionalEmbedding | Add positional embedding to the input embedding. | vit_positional_embedding_dynamic ✅<br>vit_positional_embedding ✅ | v0.4.0 |
PositionalEmbeddingFlatten | Add positional embedding to the input embedding. | vit_positional_embedding_flat_dynamic ✅<br>vit_positional_embedding_flat ✅ | v0.4.0 |
TransformerBlock | Transformer from ‘Attention Is All You Need’. | vit_transformer_block_dynamic ✅<br>vit_transformer_block ✅ | v0.1.0 |
TransformerBlockFlatten | Transformer from ‘Attention Is All You Need’. | vit_transformer_block_flat_dynamic ✅<br>vit_transformer_block_flat ✅ | v0.1.0 |
TransformerStack | Stack of Transformer blocks. | vit_transformer_stack_dynamic ✅<br>vit_transformer_stack ✅ | v0.1.0 |
TransformerStackFlatten | Stack of Transformer blocks. | vit_transformer_stack_flat_dynamic ✅<br>vit_transformer_stack_flat ✅ | v0.1.0 |
VisionTransformer | A Vision Transformer (ViT) model for MNIST with configurable embedding type. | vit_conv_embedding_dynamic ✅<br>vit_conv_embedding ✅<br>vit_patch_embedding ✅ | v0.2.0 |
VisionTransformerFlatten | A Vision Transformer (ViT) model for MNIST with configurable embedding type. | vit_conv_embedding_flat_dynamic ✅<br>vit_conv_embedding_flat ✅<br>vit_patch_embedding_flat_dynamic ✅<br>vit_patch_embedding_flat ✅ | v0.2.0 |
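
For readers unfamiliar with the loop-style testcases above (fori_loop_test, issue18_fori_loop), the following is a minimal, illustrative sketch of the kind of jax.lax.fori_loop function such tests exercise. The function name and loop body here are made up for illustration and are not copied from the test suite.

```python
import jax
import jax.numpy as jnp


def cumulative_sum_of_squares(n, x0):
    """Illustrative fori_loop: accumulates i**2 * x0 for i in [0, n)."""

    def body(i, acc):
        # The loop carry must keep the same shape and dtype on every iteration.
        return acc + (i.astype(x0.dtype) ** 2) * x0

    return jax.lax.fori_loop(0, n, body, jnp.zeros_like(x0))


print(jax.jit(cumulative_sum_of_squares, static_argnums=0)(5, jnp.ones(3)))
# [30. 30. 30.]  (0 + 1 + 4 + 9 + 16 = 30)
```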
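
Similarly, the select_test entries exercise jnp.select with a mix of scalar and tensor predicates. A minimal sketch of that pattern follows; the thresholds and branch expressions are invented for illustration, not taken from the actual tests.

```python
import jax.numpy as jnp


def select_example(x, mode):
    """Illustrative jnp.select: scalar predicates pick a branch, a tensor predicate and a default cover the rest."""
    conditions = [
        mode == 0,   # scalar predicate, broadcast against x
        mode == 1,   # scalar predicate
        x > 0.5,     # tensor predicate, evaluated elementwise
    ]
    choices = [x * 2.0, x - 1.0, jnp.sqrt(x)]
    # jnp.select takes the choice of the first true condition; the default fills the rest.
    return jnp.select(conditions, choices, default=0.0)


print(select_example(jnp.array([0.25, 0.75]), jnp.asarray(2)))
# [0.         0.8660254]  (neither scalar predicate fires, so only x > 0.5 selects sqrt(x))
```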