vLLM Quickstart: High-Throughput Model Serving
In this notebook, we initialize a vLLM engine and demonstrate how it batches requests and uses PagedAttention to achieve substantially higher throughput than standard HuggingFace Transformers generation.
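Before running the engine, it helps to see the core idea behind PagedAttention. The sketch below is illustrative only (it is not vLLM's actual implementation, and `BLOCK_SIZE`, `BlockAllocator`, and `Sequence` are names invented for this example): each sequence's KV cache is stored in fixed-size blocks allocated on demand from a shared pool, instead of reserving one contiguous region sized for the maximum possible length.

```python
# Illustrative sketch of PagedAttention-style block allocation
# (NOT vLLM internals; all names here are invented for the example).
BLOCK_SIZE = 16  # tokens per KV-cache block (assumed value)

class BlockAllocator:
    """Shared pool of physical KV-cache blocks."""
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))  # free physical block ids

    def allocate(self):
        return self.free.pop()

class Sequence:
    """One request; maps logical token positions to physical blocks."""
    def __init__(self, allocator):
        self.allocator = allocator
        self.block_table = []  # logical block index -> physical block id
        self.num_tokens = 0

    def append_token(self):
        # A new block is needed only when the current one is full,
        # so memory grows with the sequence instead of being
        # pre-reserved for the maximum length.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.allocate())
        self.num_tokens += 1

allocator = BlockAllocator(num_blocks=64)
seq = Sequence(allocator)
for _ in range(40):  # generate 40 tokens
    seq.append_token()

# 40 tokens occupy ceil(40 / 16) = 3 blocks; at most BLOCK_SIZE - 1
# slots in the last block are wasted.
print(len(seq.block_table))  # -> 3
```

Because waste is bounded per sequence rather than per maximum length, many more sequences fit in the same VRAM, which is what enables the large batches measured below.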
# Install vLLM if you haven't already
# !uv pip install vllm
from vllm import LLM, SamplingParams
# Define your prompts
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI inference lies in",
]
# Set generation parameters
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
# Initialize the vLLM engine (this will load model weights into VRAM)
# We'll use a tiny model for demonstration purposes
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
# Generate outputs (vLLM batches the prompts automatically)
outputs = llm.generate(prompts, sampling_params)
# Print outputs
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
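The automatic batching above is worth a closer look. vLLM uses continuous batching: finished sequences leave the batch and queued requests join at every decoding step, rather than waiting for the whole batch to complete. The toy scheduler below is a sketch of that idea only (it is not vLLM's scheduler; `continuous_batching` and its parameters are invented for this example).

```python
from collections import deque

# Toy sketch of continuous batching (NOT vLLM's actual scheduler).
# Each request needs a fixed number of decoding steps; at every step,
# finished requests leave and waiting requests fill the freed slots.
def continuous_batching(request_lengths, max_batch_size):
    waiting = deque(enumerate(request_lengths))
    running = {}  # request id -> tokens still to generate
    steps = 0
    completed = []
    while waiting or running:
        # Admit queued requests into any free batch slots.
        while waiting and len(running) < max_batch_size:
            rid, length = waiting.popleft()
            running[rid] = length
        # One decoding step generates one token per running request.
        steps += 1
        for rid in list(running):
            running[rid] -= 1
            if running[rid] == 0:
                del running[rid]
                completed.append(rid)
    return steps, completed

steps, order = continuous_batching([3, 5, 2, 4], max_batch_size=2)
print(steps, order)  # -> 9 [0, 1, 2, 3]
```

Note that request 2 starts decoding as soon as request 0 finishes, at step 4, instead of waiting for the entire first batch to drain; with real models this keeps GPU utilization high under mixed-length workloads.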