Jamba Reasoning 3B is AI21’s compact, hybrid Transformer–Mamba model built for efficient reasoning on modest hardware. With just ~3B params (26 Mamba layers + 2 attention layers), it achieves strong scores on reasoning benchmarks, supports very long context windows (up to 256K), and runs smoothly with vLLM or Transformers. The Mamba layers drastically cut cache overhead, so you get long-context throughput without the usual KV-cache blow-up—great for laptops, single-GPU boxes, and edge deployments.
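If you want a quick local sanity check with the Transformers path mentioned above (the full vLLM walkthrough follows later), a minimal sketch looks like the snippet below. It assumes transformers >= 4.54, the accelerate package for device_map="auto", and enough GPU memory for the BF16 weights; the prompt and sampling settings are illustrative only.
# quick_check.py -- minimal Transformers sketch (assumes transformers>=4.54 and accelerate installed)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/AI21-Jamba-Reasoning-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build a chat-formatted prompt from the model's own chat template
messages = [{"role": "user", "content": "In two sentences, why do state-space layers reduce cache memory?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generate a short reply (sampling settings are illustrative)
outputs = model.generate(inputs, max_new_tokens=256, temperature=0.2, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))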
Intelligence Benchmark Results
| Model | MMLU-Pro | Humanity’s Last Exam | IFBench |
|---|---|---|---|
| DeepSeek R1 Distill Qwen 1.5B | 27.0% | 3.3% | 13.0% |
| Phi-4 mini | 47.0% | 4.2% | 21.0% |
| Granite 4.0 Micro | 44.7% | 5.1% | 24.8% |
| Llama 3.2 3B | 35.0% | 5.2% | 26.0% |
| Gemma 3 4B | 42.0% | 5.2% | 28.0% |
| Qwen 3 1.7B | 57.0% | 4.8% | 27.0% |
| Qwen 3 4B | 70.0% | 5.1% | 33.0% |
| Jamba Reasoning 3B | 61.0% | 6.0% | 52.0% |
Intelligence vs Speed – Jamba Reasoning 3B
| Model | Developer | Combined Intelligence Score (%) | Speed (Output Tokens per Second) | Notes |
|---|---|---|---|---|
| AI21 Jamba Reasoning 3B | AI21 Labs | ≈38–39% | ≈43–44 tok/s | Highest speed and strong reasoning efficiency. |
| Qwen 3 4B | Alibaba | ≈36–37% | ≈13 tok/s | High intelligence, slower due to larger model size. |
| Qwen 3 1.7B | Alibaba | ≈32–33% | ≈33 tok/s | Balanced in speed and reasoning. |
| Gemma 3 4B | Google DeepMind | ≈25% | ≈30 tok/s | Competitive mid-tier performance. |
| Granite 4.0 Micro | IBM | ≈24–25% | ≈18 tok/s | Good compact reasoning model. |
| Phi-4 Mini | Microsoft | ≈21–22% | ≈15 tok/s | Lightweight and fast for small tasks. |
| Llama 3.2 3B | Meta | ≈19–20% | ≈23 tok/s | Average reasoning and speed for its scale. |
| DeepSeek Distill Qwen 1.5B | DeepSeek | ≈12–13% | ≈36 tok/s | Very fast but lower intelligence score. |
Benchmark Performance – Jamba Reasoning 3B
| Model | IFBench (%) | Humanity’s Last Exam (%) | MMLU-Pro (%) |
|---|---|---|---|
| AI21 Jamba Reasoning 3B | 52.0 | 6.0 | 61.0 |
| Qwen 3 4B | 33.0 | 5.2 | 70.0 |
| Gemma 3 4B | 28.0 | 5.2 | 41.5 |
| Llama 3.2 3B | 26.0 | 5.1 | 41.0 |
| Granite 4.0 Micro | 24.8 | 5.1 | 43.0 |
| Phi-4 Mini | 21.0 | 4.2 | 37.0 |
Observations
- Jamba Reasoning 3B dominates IFBench and Humanity’s Last Exam, showing stronger logical and situational reasoning.
- Qwen 3 4B leads MMLU-Pro, showing higher general knowledge retention, but at the cost of inference speed.
- Gemma 3 4B and Llama 3.2 3B perform comparably across all three, while Phi-4 Mini trails but remains extremely efficient.
On-Device Speed as Context Scales
| Model | 16K Tokens | 32K Tokens | 64K Tokens | 128K Tokens | 256K Tokens | Remarks |
|---|---|---|---|---|---|---|
| AI21 Jamba Reasoning 3B | ≈43 tok/s | ≈41 tok/s | ≈38 tok/s | ≈33 tok/s | ≈26 tok/s | Maintains stable throughput; only model feasible above 128K tokens. |
| Qwen 3 4B | ≈39 tok/s | ≈28 tok/s | ≈18 tok/s | ≈12 tok/s | — | Degrades quickly beyond 64K; not ideal for ultra-long context. |
| Gemma 3 4B | ≈35 tok/s | ≈22 tok/s | ≈13 tok/s | ≈8 tok/s | — | Significant drop after 32K; context scaling limited. |
| Llama 3.2 3B | ≈33 tok/s | ≈20 tok/s | ≈11 tok/s | ≈6 tok/s | — | Moderate speed but falls sharply as context grows. |
| Granite 4.0 Micro | ≈30 tok/s | ≈18 tok/s | ≈10 tok/s | ≈5 tok/s | — | Compact but slower scaling; unsuitable for long context. |
| Phi-4 Mini | ≈27 tok/s | ≈15 tok/s | ≈8 tok/s | ≈3 tok/s | — | Fast at small windows, but nearly stalls past 64K. |
Insights
- Jamba Reasoning 3B is the only model sustaining practical inference beyond 128K context windows, preserving 60–70% of its base speed.
- Other compact models experience sharp quadratic slowdowns due to full-attention scaling, while Jamba’s Mamba-based state-space layers keep compute linear in sequence length (see the cache-size sketch below).
- Tests were performed on an Apple M3 MacBook Pro, confirming on-device feasibility without GPU acceleration.
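To make the cache comparison concrete, here is a rough back-of-envelope sketch. The 26 Mamba + 2 attention layer split comes from the model description above; the head counts, head dimension, state size, and the 28-layer all-attention baseline are hypothetical placeholders, so read the output as an illustration of scaling behaviour, not published memory figures.
# kv_cache_sketch.py -- illustrative only; dimensions below are assumptions, not official specs
def kv_cache_bytes(context_len, n_attn_layers, n_kv_heads=8, head_dim=128, dtype_bytes=2):
    # A full-attention KV cache grows linearly with context length, per attention layer
    return context_len * n_attn_layers * 2 * n_kv_heads * head_dim * dtype_bytes

def ssm_state_bytes(n_mamba_layers, d_inner=4096, d_state=16, dtype_bytes=4):
    # Mamba keeps a fixed-size recurrent state per layer, independent of context length
    return n_mamba_layers * d_inner * d_state * dtype_bytes

for ctx in (16_000, 128_000, 256_000):
    full_attn = kv_cache_bytes(ctx, n_attn_layers=28)                     # hypothetical all-attention 3B model
    hybrid = kv_cache_bytes(ctx, n_attn_layers=2) + ssm_state_bytes(26)   # Jamba-style 26 + 2 split
    print(f"{ctx:>7} tokens: full-attention cache ≈ {full_attn/1e9:.2f} GB, hybrid cache ≈ {hybrid/1e9:.2f} GB")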
GPU Configuration (Rule-of-Thumb)
| Scenario | Precision / Load | Min VRAM that works* | Comfortable VRAM | Typical Setup | Notes / Tips |
|---|---|---|---|---|---|
| CPU-only (quantized) | 4-bit / 5-bit GGUF | — | — | Modern 8-core+ CPU | Good for testing/offline; lower throughput; use llama.cpp or similar. |
| Single-GPU (light prompts, short ctx ≤8–16K) | BF16/FP16 | ~6–8 GB | 10–12 GB | RTX 3060 12GB / 4060 8GB / T4 16GB | 3B weights ≈6 GB in BF16; overhead small due to few attention layers. |
| Single-GPU (general use, ctx 32–64K) | BF16/FP16 | 8–10 GB | 12–16 GB | RTX 4070/4070 Ti, A10 24GB | Prefer vLLM with --mamba-ssm-cache-dtype float32 for stability/throughput. |
| Single-GPU (heavy ctx 128K) | BF16/FP16 | 12–16 GB | 16–24 GB | L4 24GB / A5000 24GB | Mamba SSM cache grows with length; keep max_model_len sane. |
| Ultra-long ctx (≤256K) | BF16/FP16 | 20–24 GB | 24–40 GB | L40S 48GB / A100 40GB | Use streaming/pagination; constrain parallel requests; tune max_num_seqs. |
| High-throughput serving | BF16/FP16 | 16 GB | 24–48 GB | L40S 48GB / A100 40–80GB | vLLM --tensor-parallel-size as needed; raise --max-num-seqs carefully. |
| Memory-tight GPU | 4-bit weight-only | 4–6 GB | 6–8 GB | 4060 8GB / 3050 6GB | Lower quality but fine for quick dev; keep ctx small and temp modest. |
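As a quick cross-check on the table, the weight footprint follows directly from parameter count and precision. The sketch below ignores activation memory, the CUDA context, and the SSM/KV caches, which is why the "comfortable" columns sit higher than the raw weight size.
# vram_estimate.py -- rough weight-memory estimate; runtime overhead not included
PARAMS = 3e9  # ~3B parameters

def weight_gb(bits_per_param):
    return PARAMS * bits_per_param / 8 / 1e9

print(f"BF16/FP16 weights : ~{weight_gb(16):.1f} GB")  # ≈ 6 GB, matching the table
print(f"8-bit weights     : ~{weight_gb(8):.1f} GB")
print(f"4-bit weights     : ~{weight_gb(4):.1f} GB")   # plus quantization overhead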
Resources
Link: https://huggingface.co/ai21labs/AI21-Jamba-Reasoning-3B
Step-by-Step Process to Install & Run AI21-Jamba-Reasoning-3B Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H200s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button on the Dashboard, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x H100 SXM GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
In our previous blogs, we used pre-built images from the Templates tab when creating a Virtual Machine. However, for running AI21-Jamba-Reasoning-3B, we need a more customized environment with full CUDA development capabilities. That’s why, in this case, we switched to the Custom Image tab and selected a specific Docker image that meets all runtime and compatibility requirements.
We chose the following image:
nvidia/cuda:12.1.1-devel-ubuntu22.04
This image is essential because it includes:
- Full CUDA toolkit (including nvcc)
- Proper support for building and running GPU-based models like AI21-Jamba-Reasoning-3B
- Compatibility with CUDA 12.1.1, which is required by certain model operations
Launch Mode
We selected:
Interactive shell server
This gives us SSH access and full control over terminal operations — perfect for installing dependencies, running benchmarks, and launching models like AI21-Jamba-Reasoning-3B.
Docker Repository Authentication
We left all fields empty here.
Since the Docker image is publicly available on Docker Hub, no login credentials are required.
Identification
nvidia/cuda:12.1.1-devel-ubuntu22.04
CUDA and cuDNN images from gitlab.com/nvidia/cuda. Devel version contains full cuda toolkit with nvcc.
This setup ensures that the AI21-Jamba-Reasoning-3B runs in a GPU-enabled environment with proper CUDA access and high compute performance.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Install Python 3.11 and Pip (the VM ships with Python 3.10, so we upgrade it)
First, check the Python version available on the VM: the system has Python 3.10.12 by default. To install a higher version of Python, you’ll need to use the deadsnakes PPA.
Run the following commands to add the deadsnakes PPA:
apt update && apt install -y software-properties-common curl ca-certificates
add-apt-repository -y ppa:deadsnakes/ppa
apt update
Now, run the following commands to install Python 3.11, Pip and Wheel:
apt install -y python3.11 python3.11-venv python3.11-dev
python3.11 -m ensurepip --upgrade
python3.11 -m pip install --upgrade pip setuptools wheel
python3.11 --version
python3.11 -m pip --version
Step 9: Create and Activate a Python 3.11 Virtual Environment
Run the following commands to create and activate a Python 3.11 virtual environment:
python3.11 -m venv ~/.venvs/py311
source ~/.venvs/py311/bin/activate
python --version
pip --version
Step 10: Install PyTorch for CUDA
Run the following command to install PyTorch:
pip install --index-url https://download.pytorch.org/whl/cu121 torch torchvision torchaudio
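Optionally, confirm that PyTorch can see the GPU from the activated environment before moving on; if the last value prints True, the CUDA build is working:
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"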
Step 11: Install the Utilities
Run the following command to install utilities:
pip install "vllm>=0.11.0" transformers>=4.54.0
Step 12: Install Mamba & No-Build-Isolation Dependencies
Run the following command to install the mamba & no-build-isolation dependencies:
pip install "causal-conv1d>=1.2.0"
pip install mamba-ssm --no-build-isolation
Step 13: Install FlashAttention 2
Run the following command to install flashattention 2:
pip install flash-attn --no-build-isolation
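Because causal-conv1d, mamba-ssm, and flash-attn all build CUDA extensions, it is worth checking that they import cleanly before launching the server (the module names below are the ones these packages install):
python -c "import causal_conv1d, mamba_ssm, flash_attn; print('custom kernels import OK')"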
Step 14: Run as an OpenAI-Compatible Server (Port 8000)
# Start the server
vllm serve "ai21labs/AI21-Jamba-Reasoning-3B" \
--host 0.0.0.0 --port 8000 \
--max-model-len 256000 \
--reasoning-parser deepseek_r1 \
--enable-auto-tool-choice \
--tool-call-parser hermes \
--mamba-ssm-cache-dtype float32
- --max-model-len 256000 aligns with Jamba’s 256K context. (Long-context is a key Jamba feature.)
- --mamba-ssm-cache-dtype float32 is specifically suggested for this model.
- Heads-up (SSM limits in vLLM): no prefix caching / KV offloading for SSM state yet; chunked prefill can be slower.
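Before adding any UI, you can smoke-test the endpoint from a second terminal. Below is a minimal sketch using only the Python standard library; the file name and prompt are arbitrary, and the request follows the standard OpenAI /v1/chat/completions schema that vLLM exposes.
# smoke_test.py -- minimal check against the vLLM OpenAI-compatible server
import json
import urllib.request

payload = {
    "model": "ai21labs/AI21-Jamba-Reasoning-3B",
    "messages": [{"role": "user", "content": "In one sentence, what is a state-space model?"}],
    "max_tokens": 128,
    "temperature": 0.2,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": "Bearer sk-no-key-needed"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])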
Step 15: Install Streamlit and OpenAI SDK (For the Web UI)
Now that your vLLM server for AI21 Jamba Reasoning 3B is running, install the packages that let you build a lightweight web interface:
pip install streamlit openai
Explanation:
- streamlit → creates the interactive browser-based chat interface (http://<VM-IP>:8501).
- openai → lets the UI talk to your vLLM server using the OpenAI-compatible API.
Step 16: Connect to Your GPU VM with a Code Editor
Before you start running the model script with AI21-Jamba-Reasoning-3B, it’s a good idea to connect your GPU virtual machine (VM) to a code editor of your choice. This makes writing, editing, and running code much easier.
- You can use popular editors like VS Code, Cursor, or any other IDE that supports SSH remote connections.
- In this example, we’re using the Cursor code editor.
- Once connected, you’ll be able to browse files, edit scripts, and run commands directly on your remote server, just like working locally.
Why do this?
Connecting your VM to a code editor gives you a powerful, streamlined workflow for Python development, allowing you to easily manage your code, install dependencies, and experiment with large models.
Step 17: Create the Script
Create a file (e.g., jamba_ui.py) and add the following code:
# jamba_ui.py
import os, time
import streamlit as st
from openai import OpenAI

# ---- CONFIG ----
# Point to your vLLM server (change host if remote)
BASE_URL = os.environ.get("OPENAI_API_BASE_URL", "http://localhost:8000/v1")
API_KEY = os.environ.get("OPENAI_API_KEY", "sk-no-key-needed")
MODEL_ID = os.environ.get("OPENAI_MODEL", "ai21labs/AI21-Jamba-Reasoning-3B")

client = OpenAI(base_url=BASE_URL, api_key=API_KEY)

st.set_page_config(page_title="Jamba 3B UI", page_icon="🫘", layout="centered")
st.title("🫘 Jamba Reasoning 3B — Edge UI")

# Sidebar
with st.sidebar:
    st.subheader("Server")
    st.write(f"**Base URL:** {BASE_URL}")
    st.write(f"**Model:** `{MODEL_ID}`")
    temp = st.slider("Temperature", 0.0, 1.0, 0.2, 0.1)
    max_tokens = st.number_input("Max Tokens", min_value=32, max_value=8192, value=512, step=32)
    st.caption("Tip: If you see meta-thoughts, ask for concise answers in your prompt.")

# Chat state
if "messages" not in st.session_state:
    st.session_state.messages = [{"role":"system","content":"You are concise. Avoid meta commentary; answer directly."}]

# Display history
for m in st.session_state.messages:
    if m["role"] != "system":
        with st.chat_message(m["role"]):
            st.markdown(m["content"])

# Input
user_input = st.chat_input("Ask Jamba 3B…")
if user_input:
    st.session_state.messages.append({"role":"user","content":user_input})
    with st.chat_message("user"):
        st.markdown(user_input)

    with st.chat_message("assistant"):
        placeholder = st.empty()
        full_text = ""

        # Call OpenAI-compatible Chat Completions on vLLM
        resp = client.chat.completions.create(
            model=MODEL_ID,
            messages=[m for m in st.session_state.messages if m["role"] in ("system","user","assistant")],
            temperature=temp,
            max_tokens=max_tokens,
        )
        content = resp.choices[0].message.content or ""

        # Stream fake typing effect (optional)
        for chunk in content.split():
            full_text += chunk + " "
            placeholder.markdown(full_text)
            time.sleep(0.01)

    st.session_state.messages.append({"role":"assistant","content":content})
What This Script Does
- Initializes a Streamlit chat app and reads config from env vars (OPENAI_API_BASE_URL, OPENAI_API_KEY, OPENAI_MODEL) to create an OpenAI-compatible client.
- Renders a clean UI with a title and a sidebar showing server info plus controls for temperature and max tokens.
- Keeps the full chat history in st.session_state (starts with a concise system prompt).
- On user input, sends the conversation to your vLLM server via client.chat.completions.create(...) using the selected model.
- Displays the assistant’s reply with a small “typing” effect, and stores it back into history for the next turn.
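The script fetches the full completion and then simulates typing. If you prefer true token-by-token streaming, a possible variant (assuming a Streamlit version that ships st.write_stream, roughly 1.31 or newer) replaces the completion call and the typing loop with something like the sketch below.
# Streaming variant (sketch) -- drops into the assistant block of jamba_ui.py above
stream = client.chat.completions.create(
    model=MODEL_ID,
    messages=[m for m in st.session_state.messages if m["role"] in ("system", "user", "assistant")],
    temperature=temp,
    max_tokens=max_tokens,
    stream=True,  # ask vLLM to stream chunks instead of one final message
)

def token_gen():
    # Yield text deltas as they arrive from the server
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            yield chunk.choices[0].delta.content

content = st.write_stream(token_gen())  # renders incrementally and returns the full text
st.session_state.messages.append({"role": "assistant", "content": content})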
Step 18: Launch the Streamlit UI
Run Streamlit:
export OPENAI_API_BASE_URL=http://localhost:8000/v1
export OPENAI_API_KEY=sk-xxx # any non-empty string works for vLLM
# (optional) export OPENAI_MODEL=ai21labs/AI21-Jamba-Reasoning-3B
streamlit run jamba_ui.py --server.address 0.0.0.0 --server.port 8501
Step 19: Access the Streamlit App
Access the Streamlit app in your browser at:
http://<VM-IP>:8501/
Play with the Model
Conclusion
AI21’s Jamba Reasoning 3B shows how far compact reasoning models have evolved — delivering impressive intelligence and long-context capability without demanding high-end infrastructure. Its hybrid Transformer-Mamba architecture combines speed, efficiency, and scalability, sustaining strong performance even at 256K tokens. With vLLM and a simple Streamlit UI, you can now deploy, test, and interact with this model seamlessly on a single GPU VM, making it ideal for developers, researchers, and edge deployments that need powerful reasoning on modest hardware.