RustAPI Cookbook
Welcome to the RustAPI Architecture Cookbook. This documentation is designed to be the single source of truth for the project’s philosophy, patterns, and practical implementation details.
Note
This is a living document. As our architecture evolves, so will this cookbook.
What is this?
This is not just API documentation. This is a collection of:
- Keynotes: High-level architectural decisions and “why” we made them.
- Patterns: The repeated structures (like `Action` and `Service`) that form the backbone of our code.
- Recipes: Practical, step-by-step guides for adding features, testing, and maintaining cleanliness.
- Learning Paths: Structured progressions with real-world examples.
🚀 New: Examples Repository
Looking for hands-on learning? Check out our Examples Repository with 18 complete projects:
| Category | Examples |
|---|---|
| Getting Started | hello-world, crud-api |
| Authentication | auth-api (JWT), rate-limit-demo |
| Database | sqlx-crud, event-sourcing |
| AI/LLM | toon-api, mcp-server |
| Real-time | websocket, graphql-api |
| Production | microservices, serverless-lambda |
👉 See Learning & Examples for structured learning paths.
Visual Identity
This cookbook is styled with the RustAPI Premium Dark theme, focusing on readability, contrast, and modern “glassmorphism” aesthetics.
Quick Start
- Want to add a feature? Jump to Adding a New Feature.
- Want to understand performance? Read Performance Philosophy.
- Need to check code quality? See Maintenance.
- New to RustAPI? Follow our Learning Paths.
Getting Started
Welcome to RustAPI. This section will guide you from installation to your first running API.
Installation
Note
RustAPI is designed for Rust 1.75 or later.
Prerequisites
Before we begin, ensure you have the Rust toolchain installed. If you haven’t, the best way is via rustup.rs.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Installing the CLI
RustAPI comes with a powerful CLI to scaffold projects. Install it directly from crates.io:
cargo install cargo-rustapi
Verify your installation:
cargo-rustapi --version
Adding to an Existing Project
If you prefer not to use the CLI, you can add RustAPI to your Cargo.toml manually:
cargo add rustapi-rs@0.1.335
Or add this to your Cargo.toml:
[dependencies]
rustapi-rs = "0.1.335"
Editor Setup
For the best experience, we recommend VS Code with the rust-analyzer extension. This provides:
- Real-time error checking
- Intelligent code completion
- In-editor documentation
Quickstart
Tip
From zero to a production-ready API in 60 seconds.
Install the CLI
First, install the RustAPI CLI tool:
cargo install cargo-rustapi
Create a New Project
Use the CLI to generate a new project. We’ll call it my-api.
cargo rustapi new my-api
cd my-api
Note: If `cargo rustapi` doesn't work, you can also run `cargo-rustapi new my-api` directly.
This command sets up a complete project structure with handlers, models, and tests ready to go.
The Code
Open src/main.rs. You’ll see how simple it is:
use rustapi_rs::prelude::*;
#[rustapi_rs::get("/hello")]
async fn hello() -> Json<String> {
Json("Hello from RustAPI!".to_string())
}
#[rustapi_rs::main]
async fn main() -> Result<()> {
// Auto-discovery magic ✨
RustApi::auto()
.run("127.0.0.1:8080")
.await
}
Run the Server
Start your API server:
cargo run
You should see output similar to:
INFO rustapi: 🚀 Server running at http://127.0.0.1:8080
INFO rustapi: 📚 API docs at http://127.0.0.1:8080/docs
Test It Out
Open your browser to http://127.0.0.1:8080/docs.
You’ll see the Swagger UI automatically generated from your code. Try out the endpoint directly from the browser!
What Just Happened?
You just launched a high-performance, async Rust web server with:
- ✅ Automatic OpenAPI documentation
- ✅ Type-safe request validation
- ✅ Distributed tracing
- ✅ Global error handling
Welcome to RustAPI.
Project Structure
RustAPI projects follow a standard, modular structure designed for scalability.
my-api/
├── Cargo.toml // Dependencies and workspace config
├── src/
│ ├── handlers/ // Request handlers (Controllers)
│ │ ├── mod.rs
│ │ └── items.rs // Example resource handler
│ ├── models/ // Data structures and Schema
│ │ └── mod.rs
│ ├── error.rs // Custom error types
│ └── main.rs // Application entry point & Router
└── .env.example // Environment variables template
Key Files
src/main.rs
The heart of your application. This is where you configure the RustApi builder, register routes, and set up state.
src/handlers/
Where your business logic lives. Handlers are async functions that take extractors (like Json, Path, State) and return responses.
src/models/
Your data types. By deriving Schema, they automatically appear in your OpenAPI documentation.
src/error.rs
Centralized error handling. Mapping your AppError to ApiError allows you to simply return Result<T, AppError> in your handlers.
Core Concepts
Documentation of the fundamental architectural decisions and patterns in RustAPI.
Handlers & Extractors
The Handler is the fundamental unit of work in RustAPI. It transforms an incoming HTTP request into an outgoing HTTP response.
Unlike many web frameworks that enforce a strict method signature (e.g., fn(req: Request, res: Response)), RustAPI embraces a flexible, type-safe approach powered by Rust’s trait system.
The Philosophy: “Ask for what you need”
In RustAPI, you don’t manually parse the request object inside your business logic. Instead, you declare the data you need as function arguments, and the framework’s Extractors handle the plumbing for you.
If the data cannot be extracted (e.g., missing header, invalid JSON), the request is rejected before your handler is ever called. This means your handler logic is guaranteed to operate on valid, type-safe data.
Anatomy of a Handler
A handler is simply an asynchronous function that takes zero or more Extractors as arguments and returns something that implements IntoResponse.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
async fn create_user(
State(db): State<DbPool>, // 1. Dependency Injection
Path(user_id): Path<Uuid>, // 2. URL Path Parameter
Json(payload): Json<CreateUser>, // 3. JSON Request Body
) -> Result<impl IntoResponse, ApiError> {
let user = db.create_user(user_id, payload).await?;
Ok((StatusCode::CREATED, Json(user)))
}
}
Key Rules
- Order Matters (Slightly): Extractors that consume the request body (like `Json<T>` or `Multipart`) must be the last argument. This is because the request body is a stream that can only be read once.
- Async by Default: Handlers are `async fn`. This allows non-blocking I/O operations (DB calls, external API requests).
- Debuggable: Handlers are just functions. You can unit test them easily.
Extractors: The FromRequest Trait
Extractors are types that implement FromRequest (or FromRequestParts for headers/query params). They isolate the “HTTP parsing” logic from your “Business” logic.
Common Built-in Extractors
| Extractor | Source | Example Usage |
|---|---|---|
| `Path<T>` | URL Path Segments | `fn get_user(Path(id): Path<u32>)` |
| `Query<T>` | Query String | `fn search(Query(params): Query<SearchFn>)` |
| `Json<T>` | Request Body | `fn update(Json(data): Json<UpdateDto>)` |
| `HeaderMap` | HTTP Headers | `fn headers(headers: HeaderMap)` |
| `State<T>` | Application State | `fn db_op(State(pool): State<PgPool>)` |
| `Extension<T>` | Request-local extensions | `fn logic(Extension(user): Extension<User>)` |
Custom Extractors
You can create your own extractors to encapsulate repetitive validation or parsing logic. For example, extracting a user ID from a verified JWT:
#![allow(unused)]
fn main() {
pub struct AuthenticatedUser(pub Uuid);
#[async_trait]
impl<S> FromRequestParts<S> for AuthenticatedUser
where
S: Send + Sync,
{
type Rejection = ApiError;
async fn from_request_parts(parts: &mut Parts, state: &S) -> Result<Self, Self::Rejection> {
let auth_header = parts.headers.get("Authorization")
.ok_or(ApiError::Unauthorized("Missing token"))?;
let token = auth_header.to_str().map_err(|_| ApiError::Unauthorized("Invalid token"))?;
let user_id = verify_jwt(token)?; // Your verification logic
Ok(AuthenticatedUser(user_id))
}
}
// Usage in handler: cleaner and reusable!
async fn profile(AuthenticatedUser(uid): AuthenticatedUser) -> impl IntoResponse {
format!("User ID: {}", uid)
}
}
Responses: The IntoResponse Trait
A handler can return any type that implements IntoResponse. RustAPI provides implementations for many common types:
- `StatusCode` (e.g., return `200 OK` or `404 Not Found`)
- `Json<T>` (serializes struct to JSON)
- `String` / `&str` (plain text response)
- `Vec<u8>` / `Bytes` (binary data)
- `HeaderMap` (response headers)
- `Html<String>` (HTML content)
Tuple Responses
You can combine types using tuples to set status codes and headers along with the body:
#![allow(unused)]
fn main() {
// Returns 201 Created + JSON Body
async fn create() -> (StatusCode, Json<User>) {
(StatusCode::CREATED, Json(user))
}
// Returns Custom Header + Plain Text
async fn custom() -> (HeaderMap, &'static str) {
let mut headers = HeaderMap::new();
headers.insert("X-Custom", "Value".parse().unwrap());
(headers, "Response with headers")
}
}
Error Handling
Handlers often return Result<T, E>. If the handler returns Ok(T), the T is converted to a response. If it returns Err(E), the E is converted to a response.
This effectively means your Error type must implement IntoResponse.
#![allow(unused)]
fn main() {
// Recommended pattern: Centralized API Error enum
pub enum ApiError {
NotFound(String),
InternalServerError,
}
impl IntoResponse for ApiError {
fn into_response(self) -> Response {
let (status, message) = match self {
ApiError::NotFound(msg) => (StatusCode::NOT_FOUND, msg),
ApiError::InternalServerError => (StatusCode::INTERNAL_SERVER_ERROR, "Something went wrong".to_string()),
};
(status, Json(json!({ "error": message }))).into_response()
}
}
}
Best Practices
- Keep Handlers Thin: Move complex business logic to “Service” structs or domain modules. Handlers should focus on HTTP translation (decoding request -> calling service -> encoding response).
- Use `State` for Dependencies: Avoid global variables. Pass DB pools and config via `State`.
- Parse Early: Use specific types in `Json<T>` structs rather than `serde_json::Value` to leverage the type system for validation.
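The "thin handler" idea can be sketched without any framework types at all. Below is a minimal, hypothetical service (the names `OrderService` and `total_with_discount` are illustrative, not part of RustAPI): the business rules live in a plain struct that is trivially unit-testable, while a handler would only decode the request, call it, and encode the response.

```rust
// Hypothetical service: pure business logic, no HTTP types anywhere.
pub struct OrderService;

impl OrderService {
    // All the "business rules" live here, easy to unit test in isolation.
    pub fn total_with_discount(&self, subtotal_cents: u64, discount_pct: u64) -> u64 {
        subtotal_cents - (subtotal_cents * discount_pct.min(100)) / 100
    }
}

fn main() {
    let svc = OrderService;
    // A handler would call this after decoding the request body.
    assert_eq!(svc.total_with_discount(10_000, 15), 8_500);
}
```

Because the service never touches `Request` or `Response`, it can be tested without spinning up any HTTP machinery.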
System Architecture
RustAPI follows a Facade Architecture — a stable public API that shields you from internal complexity and breaking changes.
System Overview
graph TB
subgraph Client["🌐 Client Layer"]
HTTP[HTTP Request]
LLM[LLM/AI Agent]
MCP[MCP Client]
end
subgraph Public["📦 rustapi-rs (Public Facade)"]
direction TB
Prelude[prelude::*]
Macros["#[rustapi_rs::get/post]<br>#[rustapi_rs::main]"]
Types[Json, Query, Path, Form]
end
subgraph Core["⚙️ rustapi-core (Engine)"]
direction TB
Router[Radix Router<br>matchit]
Extract[Extractors<br>FromRequest trait]
MW[Middleware Stack<br>Tower-like layers]
Resp[Response Builder<br>IntoResponse trait]
end
subgraph Extensions["🔌 Extension Crates"]
direction LR
OpenAPI["rustapi-openapi<br>OpenAPI 3.1 + Docs"]
Validate["rustapi-validate<br>Validation (v2 native)"]
Toon["rustapi-toon<br>LLM Optimization"]
Extras["rustapi-extras<br>JWT/CORS/RateLimit"]
WsCrate["rustapi-ws<br>WebSocket Support"]
ViewCrate["rustapi-view<br>Template Engine"]
end
subgraph Foundation["🏗️ Foundation Layer"]
direction LR
Tokio[tokio<br>Async Runtime]
Hyper[hyper 1.0<br>HTTP Protocol]
Serde[serde<br>Serialization]
end
HTTP --> Public
LLM --> Public
MCP --> Public
Public --> Core
Core --> Extensions
Extensions --> Foundation
Core --> Foundation
Request Flow
sequenceDiagram
participant C as Client
participant R as Router
participant M as Middleware
participant E as Extractors
participant H as Handler
participant S as Serializer
C->>R: HTTP Request
R->>R: Match route (radix tree)
R->>M: Pass to middleware stack
loop Each Middleware
M->>M: Process (JWT, CORS, RateLimit)
end
M->>E: Extract parameters
E->>E: Json<T>, Path<T>, Query<T>
E->>E: Validate (v2 native / optional legacy)
alt Validation Failed
E-->>C: 422 Unprocessable Entity
else Validation OK
E->>H: Call async handler
H->>S: Return response type
alt TOON Enabled
S->>S: Check Accept header
S->>S: Serialize as TOON/JSON
S->>S: Add token count headers
else Standard
S->>S: Serialize as JSON
end
S-->>C: HTTP Response
end
Crate Dependency Graph
graph BT
subgraph User["Your Application"]
App[main.rs]
end
subgraph Facade["Single Import"]
RS[rustapi-rs]
end
subgraph Internal["Internal Crates"]
Core[rustapi-core]
Macros[rustapi-macros]
OpenAPI[rustapi-openapi]
Validate[rustapi-validate]
Toon[rustapi-toon]
Extras[rustapi-extras]
WS[rustapi-ws]
View[rustapi-view]
end
subgraph External["External Dependencies"]
Tokio[tokio]
Hyper[hyper]
Serde[serde]
Validator[validator]
Tungstenite[tungstenite]
Tera[tera]
end
App --> RS
RS --> Core
RS --> Macros
RS --> OpenAPI
RS --> Validate
RS -.->|optional| Toon
RS -.->|optional| Extras
RS -.->|optional| WS
RS -.->|optional| View
Core --> Tokio
Core --> Hyper
Core --> Serde
OpenAPI --> Serde
Validate -.->|legacy optional| Validator
Toon --> Serde
WS --> Tungstenite
View --> Tera
style RS fill:#e1f5fe
style App fill:#c8e6c9
Design Principles
| Principle | Implementation |
|---|---|
| Single Entry Point | use rustapi_rs::prelude::* imports everything you need |
| Zero Boilerplate | Macros generate routing, OpenAPI specs, and validation |
| Compile-Time Safety | Generic extractors catch type errors at compile time |
| Opt-in Complexity | Features like JWT, TOON are behind feature flags |
| Engine Abstraction | Internal hyper/tokio upgrades don’t break your code |
Crate Responsibilities
| Crate | Role |
|---|---|
| `rustapi-rs` | Public facade — single `use` for everything |
| `rustapi-core` | HTTP engine, routing, extractors, response handling |
| `rustapi-macros` | Procedural macros: `#[rustapi_rs::get]`, `#[rustapi_rs::main]` |
| `rustapi-openapi` | Native OpenAPI 3.1 model, schema registry, and docs endpoints |
| `rustapi-validate` | Validation runtime (v2 native default, legacy `validator` optional) |
| `rustapi-toon` | TOON format serializer, content negotiation, LLM headers |
| `rustapi-extras` | JWT auth, CORS, rate limiting, audit logging |
| `rustapi-ws` | WebSocket support with broadcast channels |
| `rustapi-view` | Template engine (Tera) for server-side rendering |
| `rustapi-jobs` | Background job processing (Redis/Postgres) |
| `rustapi-testing` | Test utilities, matchers, expectations |
Performance Philosophy
RustAPI is built on a simple premise: Abstractions shouldn’t cost you runtime performance.
We leverage Rust’s unique ownership system and modern async ecosystem (Tokio, Hyper) to deliver performance that rivals C++ servers, while maintaining developer safeguards.
The Pillars of Speed
1. Zero-Copy Networking
Where possible, RustAPI avoids copying memory. When you receive a large JSON payload or file upload, we aim to pass pointers to the underlying memory buffer rather than cloning the data.
- `Bytes` over `Vec<u8>`: We use the `bytes` crate extensively. Passing a `Bytes` object around is O(1) (it’s just a reference-counted pointer and length), whereas cloning a `Vec<u8>` is O(n).
- String Views: Extractors like `Path` and `Query` often leverage `Cow<'_, str>` (clone-on-write) to avoid allocations if the data doesn’t need to be modified.
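The O(1)-versus-O(n) distinction is easy to see with the standard library alone. In this sketch, `Arc<[u8]>` stands in for a simplified `Bytes` (this is an illustration, not the `bytes` crate itself): cloning copies a pointer and bumps a reference count, while the byte payload stays put.

```rust
use std::sync::Arc;

// Cloning an Arc<[u8]> copies only the pointer, much like Bytes::clone.
fn share(buf: &Arc<[u8]>) -> Arc<[u8]> {
    Arc::clone(buf)
}

fn main() {
    let payload: Arc<[u8]> = Arc::from(vec![7u8; 1_000_000]);

    let view = share(&payload); // O(1): no bytes copied
    assert!(Arc::ptr_eq(&payload, &view)); // same allocation
    assert_eq!(Arc::strong_count(&payload), 2);

    // By contrast, a Vec<u8> clone would duplicate all 1,000,000 bytes.
}
```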
2. Multi-Core Async Runtime
RustAPI runs on Tokio, a work-stealing, multi-threaded runtime.
- Non-blocking I/O: A single thread can handle thousands of concurrent idle connections (e.g., WebSockets waiting for messages) with minimal memory overhead.
- Work Stealing: If one CPU core is overloaded with tasks, other idle cores will “steal” work from its queue, ensuring balanced utilization of your hardware.
3. Compile-Time Router
Our router (matchit) is based on a Radix Trie structure.
- O(k) Lookup: Route matching cost is proportional to the length of the URL path (k), not the number of routes defined. Having 10 routes or 10,000 routes has negligible impact on routing latency.
- Allocation-Free Matching: For standard paths, routing decisions happen without heap allocations.
Memory Management
Stack vs. Heap
RustAPI encourages stack allocation for small, short-lived data.
- Extractors are often allocated on the stack.
- Response bodies are streamed, meaning a 1GB file download doesn’t require 1GB of RAM. It flows through a small, constant-sized buffer.
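The constant-buffer claim can be demonstrated with std's own streaming copy, which moves data through a small fixed-size buffer regardless of payload size (a plain-std illustration, not RustAPI's body type):

```rust
use std::io::{self, Cursor};

fn main() {
    // 16 MB source, but io::copy streams it through a small internal
    // buffer -- memory use stays constant no matter the payload size.
    let data = vec![0u8; 16 * 1024 * 1024];
    let mut reader = Cursor::new(data);
    let mut writer = io::sink();
    let copied = io::copy(&mut reader, &mut writer).expect("copy failed");
    assert_eq!(copied, 16 * 1024 * 1024);
}
```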
Connection Pooling
For database performance, we strongly recommend using connection pooling (e.g., sqlx::Pool).
- Reuse: Establishing a TCP connection and performing a TLS handshake for every request is slow. Pooling keeps connections open and ready.
- Multiplexing: Some drivers allow multiple queries to be in-flight on a single connection simultaneously.
Optimizing Your App
To get the most out of RustAPI, follow these guidelines:
- Avoid Blocking the Async Executor: Never run CPU-intensive tasks (cryptography, image processing) or blocking I/O (`std::fs::read`) directly in an async handler.
  - Solution: Use `tokio::task::spawn_blocking` to offload these to a dedicated thread pool.

```rust
// BAD: Blocks the thread, potentially stalling other requests
async fn handler() {
    let digest = tough_crypto_hash(data);
}

// GOOD: Runs on a thread meant for blocking work
async fn handler() {
    let digest = tokio::task::spawn_blocking(move || {
        tough_crypto_hash(data)
    }).await.unwrap();
}
```
- JSON Serialization: While `serde` is fast, JSON text processing is CPU heavy. For extremely high-throughput endpoints, consider binary formats like Protobuf or MessagePack if the client supports it.
- Keep `State` Light: Your `State` struct is cloned for every request. Wrap large shared data in `Arc<T>` so only the pointer is cloned, not the data itself.
#![allow(unused)]
fn main() {
// Fast
#[derive(Clone)]
struct AppState {
db: PgPool, // Internally uses Arc
config: Arc<Config>, // Wrapped in Arc manually
}
}
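A quick check that the `Arc`-wrapped field really is shared rather than copied on clone. This is plain std, no framework types; `Config` and `AppState` here are stand-ins for the pattern above:

```rust
use std::sync::Arc;

// Stand-in for a large, read-only config loaded at startup.
struct Config {
    banner: String,
}

#[derive(Clone)]
struct AppState {
    config: Arc<Config>, // cloning AppState clones only this pointer
}

fn main() {
    let state = AppState {
        config: Arc::new(Config { banner: "hello".into() }),
    };
    let per_request = state.clone(); // what a framework does per request
    // Both clones point at the exact same Config allocation.
    assert!(Arc::ptr_eq(&state.config, &per_request.config));
    assert_eq!(per_request.config.banner, "hello");
}
```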
Benchmarking
Performance is not a guessing game. Below are results from our internal benchmarks on reference hardware.
Comparative Benchmarks
| Framework | Requests/sec | Latency (avg) | Memory |
|---|---|---|---|
| RustAPI | ~185,000 | ~0.54ms | ~8MB |
| RustAPI + core-simd-json | ~220,000 | ~0.45ms | ~8MB |
| Actix-web | ~178,000 | ~0.56ms | ~10MB |
| Axum | ~165,000 | ~0.61ms | ~12MB |
| Rocket | ~95,000 | ~1.05ms | ~15MB |
| FastAPI (Python) | ~12,000 | ~8.3ms | ~45MB |
🔬 Test Configuration
- Hardware: Intel i7-12700K, 32GB RAM
- Method: `wrk -t12 -c400 -d30s http://127.0.0.1:8080/api/users`
- Scenario: JSON serialization of 100 user objects
- Build: `cargo build --release`
Results may vary based on hardware and workload. Run your own benchmarks:
cd benches
./run_benchmarks.ps1
Why So Fast?
| Optimization | Description |
|---|---|
| ⚡ SIMD-JSON | 2-4x faster JSON parsing with core-simd-json feature |
| 🔄 Zero-copy parsing | Direct memory access for path/query params |
| 📦 SmallVec PathParams | Stack-optimized path parameters |
| 🎯 Compile-time dispatch | All extractors resolved at compile time |
| 🌊 Streaming bodies | Handle large uploads without memory bloat |
Remember: RustAPI provides the capability for high performance, but your application logic ultimately dictates the speed. Use tools like wrk, k6, or drill to stress-test your specific endpoints.
Testing Strategy
Reliable software requires a robust testing strategy. RustAPI is designed to be testable at every level, from individual functions to full end-to-end scenarios.
The Testing Pyramid
We recommend a balanced approach:
- Unit Tests (70%): Fast, isolated tests for individual logic pieces.
- Integration Tests (20%): Testing handlers and extractors wired together.
- End-to-End (E2E) Tests (10%): Testing the running server from the outside.
1. Unit Testing Handlers
Since handlers are just regular functions, you can unit test them by invoking them directly. However, dealing with Extractors directly in tests can sometimes be verbose.
Often, it is better to extract your “Business Logic” into a separate function or trait, test that thoroughly, and keep the Handler layer thin.
#![allow(unused)]
fn main() {
// Domain Logic (Easy to test)
fn calculate_total(items: &[Item]) -> u32 {
items.iter().map(|i| i.price).sum()
}
// Handler (Just plumbing)
async fn checkout(Json(cart): Json<Cart>) -> Json<Receipt> {
let total = calculate_total(&cart.items);
Json(Receipt { total })
}
}
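With the logic split out like this, the unit test needs no HTTP machinery at all. A self-contained version of the pattern (assuming `Item` is a simple struct with a `price` field, as the snippet suggests):

```rust
// Pure domain logic plus a pure test: no router, no extractors.
struct Item {
    price: u32,
}

fn calculate_total(items: &[Item]) -> u32 {
    items.iter().map(|i| i.price).sum()
}

fn main() {
    let cart = vec![Item { price: 300 }, Item { price: 450 }];
    assert_eq!(calculate_total(&cart), 750);
    assert_eq!(calculate_total(&[]), 0); // empty cart edge case
}
```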
2. Integration Testing with Tower
RustAPI routers implement tower::Service. This means you can send requests to your router directly in memory without spawning a TCP server or using localhost. This is extremely fast.
We rely on tower::util::ServiceExt to call the router.
Setup
Add tower and http-body-util for testing utilities:
[dev-dependencies]
tower = { version = "0.4", features = ["util"] }
http-body-util = "0.1"
tokio = { version = "1", features = ["full"] }
Example Test
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_create_user() {
// 1. Build the app (same as in main.rs)
let app = app();
// 2. Construct a Request
let response = app
.oneshot(
Request::builder()
.method(http::Method::POST)
.uri("/users")
.header(http::header::CONTENT_TYPE, "application/json")
.body(Body::from(r#"{"username": "alice"}"#))
.unwrap(),
)
.await
.unwrap();
// 3. Assert Status
assert_eq!(response.status(), StatusCode::CREATED);
// 4. Assert Body
let body_bytes = response.into_body().collect().await.unwrap().to_bytes();
let body: User = serde_json::from_slice(&body_bytes).unwrap();
assert_eq!(body.username, "alice");
}
}
3. Mocking Dependencies with State
To test handlers that rely on databases or external APIs, you should mock those dependencies.
Use Traits to define the capabilities, and use generics or dynamic dispatch in your State.
#![allow(unused)]
fn main() {
// 1. Define the interface
#[async_trait]
trait UserRepository: Send + Sync {
async fn get_user(&self, id: u32) -> Option<User>;
}
// 2. Real Implementation
struct PostgresRepo { pool: PgPool }
// 3. Mock Implementation
struct MockRepo;
#[async_trait]
impl UserRepository for MockRepo {
async fn get_user(&self, _id: u32) -> Option<User> {
Some(User { username: "mock_user".into() })
}
}
// 4. Use in Handler
async fn get_user(
State(repo): State<Arc<dyn UserRepository>>, // Accepts any impl
Path(id): Path<u32>
) -> Json<User> {
// ...
}
}
In your tests, inject Arc::new(MockRepo) into the State.
4. End-to-End Testing
For E2E tests, you can spawn the actual server on a random port and use a real HTTP client (like reqwest) to hit it.
#![allow(unused)]
fn main() {
#[tokio::test]
async fn e2e_test() {
// Binding to port 0 lets the OS choose a random available port
let listener = std::net::TcpListener::bind("127.0.0.1:0").unwrap();
let addr = listener.local_addr().unwrap();
// Spawn server in background
tokio::spawn(async move {
RustApi::serve(listener, app()).await.unwrap();
});
// Make real requests
let client = reqwest::Client::new();
let resp = client.get(format!("http://{}/health", addr))
.send()
.await
.unwrap();
assert!(resp.status().is_success());
}
}
This approach is slower but validates everything end to end, including network serialization and actual TCP behavior.
Crate Deep Dives
Warning
This section is for those who want to understand the framework’s internal organs. You don’t need to know this to use RustAPI, but it helps if you want to master it.
RustAPI is a collection of focused, interoperable crates. Each crate has a specific philosophy and “Lens” through which it views the world.
- rustapi-core: The Engine
- rustapi-macros: The Magic
- rustapi-validate: The Gatekeeper
- rustapi-grpc: The Bridge
rustapi-core: The Engine
rustapi-core is the foundational crate of the framework. It provides the essential types and traits that glue everything together, although application developers typically interact with the facade crate rustapi-rs.
Core Responsibilities
- Routing: Mapping HTTP requests to Handlers.
- Extraction: The `FromRequest` trait definition.
- Response: The `IntoResponse` trait definition.
- Middleware: The `Layer` and `Service` integration with Tower.
- HTTP/3: Built-in QUIC support via `h3` and `quinn` (optional feature).
The Router Internals
We use matchit, a high-performance Radix Tree implementation for routing.
Why Radix Trees?
- Speed: Lookup time is proportional to the length of the path, not the number of routes.
- Priority: Specific paths (`/users/profile`) always take precedence over wildcards (`/users/:id`), regardless of definition order.
- Parameters: Efficiently parses named parameters like `:id` or `*path` without regular expressions.
HTTP/3 & QUIC
rustapi-core includes optional support for HTTP/3 (QUIC). This is enabled via the `http3` feature flag and powered by `quinn` and `h3`. It adds specialized methods on `RustApi` such as `.run_http3()` and `.run_dual_stack()`.
The Handler Trait Magic
The Handler trait is what allows you to write functions with arbitrary arguments.
#![allow(unused)]
fn main() {
// This looks simple...
async fn my_handler(state: State<Db>, json: Json<Data>) { ... }
// ...but under the hood, it compiles to something like:
impl Handler for my_handler {
fn call(req: Request) -> Future<Output=Response> {
// 1. Extract State
// 2. Extract Json
// 3. Call original function
// 4. Convert return to Response
}
}
}
This is achieved through recursive trait implementations on tuples. RustAPI supports handlers with up to 16 arguments.
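A toy, synchronous version of that tuple trick (hypothetical names, nothing from rustapi-core): one trait parameterized over the argument tuple, one impl per arity, and the compiler picks the right impl from the closure's signature.

```rust
// Toy Handler trait: Args is the tuple of extracted arguments.
trait Handler<Args> {
    fn call(self, args: Args) -> String;
}

// Arity 0: any closure taking no arguments.
impl<F: Fn() -> String> Handler<()> for F {
    fn call(self, _: ()) -> String {
        self()
    }
}

// Arity 1 -- real frameworks repeat this pattern up to 16 arguments.
impl<F: Fn(A) -> String, A> Handler<(A,)> for F {
    fn call(self, (a,): (A,)) -> String {
        self(a)
    }
}

// The "router" only needs the trait, not the concrete function type.
fn dispatch<Args, H: Handler<Args>>(handler: H, args: Args) -> String {
    handler.call(args)
}

fn main() {
    assert_eq!(dispatch(|| "no args".to_string(), ()), "no args");
    assert_eq!(dispatch(|id: u32| format!("user {}", id), (7,)), "user 7");
}
```

The two impls never overlap because `()` and `(A,)` are distinct tuple types, which is what lets one trait cover every arity.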
Middleware Architecture
rustapi-core is built on top of tower. This means any standard Tower middleware works out of the box.
#![allow(unused)]
fn main() {
// The Service stack looks like an onion:
// Outer Layer (Timeout)
// -> Middle Layer (Trace)
// -> Inner Layer (Router)
// -> Handler
}
When you call .layer(), you are wrapping the inner service with a new outer layer.
The BoxRoute
To keep compilation times fast and types manageable, the Router eventually “erases” the specific types of your handlers into a BoxRoute (a boxed tower::Service). This is a dynamic dispatch boundary that trades a tiny amount of runtime performance (nanoseconds) for significantly faster compile times and usability.
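The same erasure trick in miniature (std only; the real `BoxRoute` wraps a `tower::Service`, not a bare closure): two closures with distinct concrete types are stored behind one boxed trait object, so the map's type no longer mentions either of them.

```rust
use std::collections::HashMap;

// Each closure below has a unique, unnameable type; boxing erases
// them all into this single type at a dynamic-dispatch boundary.
type ErasedHandler = Box<dyn Fn() -> String>;

fn build_routes() -> HashMap<&'static str, ErasedHandler> {
    let mut routes: HashMap<&'static str, ErasedHandler> = HashMap::new();
    routes.insert("/hello", Box::new(|| "Hello!".to_string()));
    routes.insert("/health", Box::new(|| "OK".to_string()));
    routes
}

fn main() {
    let routes = build_routes();
    // One vtable call per lookup (nanoseconds), traded for simpler
    // types and faster compiles.
    assert_eq!(routes["/health"](), "OK");
}
```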
rustapi-macros: The Magic
rustapi-macros reduces boilerplate by generating code at compile time.
#[debug_handler]
The most important macro for beginners. Rust’s error messages for complex generic traits (like Handler) can be notoriously difficult to understand.
If your handler doesn’t implement the Handler trait (e.g., because you used an argument that isn’t a valid Extractor), the compiler might give you an error spanning the entire RustApi::new() chain, miles away from the actual problem.
#[debug_handler] fixes this.
It verifies the handler function in isolation and produces clear error messages pointing exactly to the invalid argument.
#![allow(unused)]
fn main() {
#[debug_handler]
async fn handler(
// Compile Error: "String" does not implement FromRequest.
// Did you mean "Json<String>" or "Body"?
body: String
) { ... }
}
#[derive(FromRequest)]
Automatically implement FromRequest for your structs.
#![allow(unused)]
fn main() {
#[derive(FromRequest)]
struct MyExtractor {
// These fields must themselves be Extractors
header: HeaderMap,
body: Json<MyData>,
}
// Now you can use it in a handler
async fn handler(input: MyExtractor) {
println!("{:?}", input.header);
}
}
This is heavily used to group multiple extractors into a single struct (often called the “Parameter Object” pattern), keeping function signatures clean.
Route Metadata Macros
RustAPI provides several attribute macros for enriching OpenAPI documentation:
#[rustapi_rs::tag]
Groups endpoints under a common tag in Swagger UI:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/users")]
#[rustapi_rs::tag("Users")]
async fn list_users() -> Json<Vec<User>> { ... }
}
#[rustapi_rs::summary] & #[rustapi_rs::description]
Adds human-readable documentation:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/users/{id}")]
#[rustapi_rs::summary("Get user by ID")]
#[rustapi_rs::description("Returns a single user by their unique identifier.")]
async fn get_user(Path(id): Path<i64>) -> Json<User> { ... }
}
#[rustapi_rs::param]
Customizes the OpenAPI schema type for path parameters. This is essential when the auto-inferred type is incorrect:
#![allow(unused)]
fn main() {
use uuid::Uuid;
// Without #[param], the `id` parameter would be documented as "integer"
// because of the naming convention. With #[param], it's correctly documented as UUID.
#[rustapi_rs::get("/items/{id}")]
#[rustapi_rs::param(id, schema = "uuid")]
async fn get_item(Path(id): Path<Uuid>) -> Json<Item> {
find_item(id).await
}
}
Supported schema types: "uuid", "integer", "int32", "string", "number", "boolean"
Alternative syntax:
#![allow(unused)]
fn main() {
#[rustapi_rs::param(id = "uuid")] // Shorter form
}
rustapi-validate: The Gatekeeper
Data validation should happen at the edges of your system, before invalid data ever reaches your business logic. rustapi-validate provides a robust, unified validation engine supporting both synchronous and asynchronous rules.
The Unified Validation System
RustAPI (v0.1.15+) introduces a unified validation system that supports:
- Legacy Validator: The classic `validator` crate (via `#[derive(validator::Validate)]`).
- V2 Engine: The new native engine (via `#[derive(rustapi_macros::Validate)]`), which properly supports async usage.
- Async Validation: Database checks, API calls, and other IO-bound validation rules.
Synchronous Validation
For standard validation rules (length, email, range, regex), use the Validate macro.
Tip
Use `rustapi_macros::Validate` for new code to unlock async features.
#![allow(unused)]
fn main() {
use rustapi_macros::Validate; // Logic from V2 engine
use serde::Deserialize;
#[derive(Debug, Deserialize, Validate)]
pub struct SignupRequest {
#[validate(length(min = 3, message = "Username too short"))]
pub username: String,
#[validate(email(message = "Invalid email format"))]
pub email: String,
#[validate(range(min = 18, max = 150))]
pub age: u8,
}
}
The ValidatedJson Extractor
For synchronous validation, use the ValidatedJson<T> extractor.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
async fn signup(
ValidatedJson(payload): ValidatedJson<SignupRequest>
) -> impl IntoResponse {
// payload is guaranteed to be valid here
process_signup(payload)
}
}
Asynchronous Validation
When you need to check data against a database (e.g., “is this email unique?”) or an external service, use Async Validation.
Async Rules
The V2 engine supports async rules directly in the struct definition.
#![allow(unused)]
fn main() {
use rustapi_macros::Validate;
use rustapi_validate::v2::{ValidationContext, RuleError};
#[derive(Debug, Deserialize, Validate)]
pub struct CreateUserRequest {
// Built-in async rule (requires database integration)
#[validate(async_unique(table = "users", column = "email"))]
pub email: String,
// Custom async function
#[validate(custom_async = "check_username_availability")]
pub username: String,
}
// Custom async validator function
async fn check_username_availability(
username: &String,
_ctx: &ValidationContext
) -> Result<(), RuleError> {
if username == "admin" {
return Err(RuleError::new("reserved", "This username is reserved"));
}
// Perform DB check...
Ok(())
}
}
The AsyncValidatedJson Extractor
For types with async rules, you must use AsyncValidatedJson.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
async fn create_user(
AsyncValidatedJson(payload): AsyncValidatedJson<CreateUserRequest>
) -> impl IntoResponse {
// payload is valid AND unique in database
create_user_in_db(payload).await
}
}
Error Handling
Whether you use synchronous or asynchronous validation, errors are normalized into a standard ApiError format (HTTP 422 Unprocessable Entity).
{
"error": {
"type": "validation_error",
"message": "Request validation failed",
"fields": [
{
"field": "email",
"code": "email",
"message": "Invalid email format"
},
{
"field": "username",
"code": "reserved",
"message": "This username is reserved"
}
]
},
"error_id": "err_a1b2..."
}
Backward Compatibility
The system is fully backward compatible. You can continue using validator::Validate on your structs, and ValidatedJson will accept them automatically via the unified Validatable trait.
#![allow(unused)]
fn main() {
// Legacy code still works!
#[derive(validator::Validate)]
struct OldStruct { ... }
async fn handler(ValidatedJson(body): ValidatedJson<OldStruct>) { ... }
}
rustapi-openapi: The Cartographer
Lens: “The Cartographer” Philosophy: “Documentation as Code.”
Automatic Spec Generation
We believe that if documentation is manual, it is wrong. RustAPI uses a native OpenAPI generator to build the specification directly from your code.
The Schema Trait
Any type that is part of your API (request or response) must implement Schema.
#![allow(unused)]
fn main() {
#[derive(Schema)]
struct Metric {
/// The name of the metric
name: String,
/// Value (0-100)
#[schema(minimum = 0, maximum = 100)]
value: i32,
}
}
Operation Metadata
Use macros to enrich endpoints:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/metrics")]
#[rustapi_rs::tag("Metrics")]
#[rustapi_rs::summary("List all metrics")]
#[rustapi_rs::response(200, Json<Vec<Metric>>)]
async fn list_metrics() -> Json<Vec<Metric>> { ... }
}
Swagger UI
The RustApi builder automatically mounts a Swagger UI at the path you specify:
#![allow(unused)]
fn main() {
RustApi::new()
.docs("/docs") // Mounts Swagger UI at /docs
// ...
}
Path Parameter Schema Types
By default, RustAPI infers the OpenAPI schema type for path parameters based on naming conventions:
- Parameters named `id`, `user_id`, `postId`, etc. → `integer`
- Parameters named `uuid`, `user_uuid`, etc. → `string` with `uuid` format
- Other parameters → `string`
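The naming-convention inference described above can be sketched as a simple function over the parameter name. This is an illustrative helper, not the framework's actual code:

```rust
/// Sketch of path-parameter schema inference by naming convention.
/// Returns the OpenAPI (type, optional format) pair.
fn infer_param_schema(name: &str) -> (&'static str, Option<&'static str>) {
    let lower = name.to_ascii_lowercase();
    if lower.ends_with("uuid") {
        // e.g. "uuid", "user_uuid"
        ("string", Some("uuid"))
    } else if lower.ends_with("id") {
        // e.g. "id", "user_id", "postId" (checked after "uuid",
        // since "uuid" also ends with "id")
        ("integer", None)
    } else {
        ("string", None)
    }
}
```

Note the check order: `uuid` must be tested before `id`, which is exactly why an `id` parameter that holds a UUID needs the `#[rustapi_rs::param]` override.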
However, sometimes auto-inference is incorrect. For example, you might have a parameter named id that is actually a UUID. Use the #[rustapi_rs::param] attribute to override the inferred type:
#![allow(unused)]
fn main() {
use uuid::Uuid;
#[rustapi_rs::get("/users/{id}")]
#[rustapi_rs::param(id, schema = "uuid")]
#[rustapi_rs::tag("Users")]
async fn get_user(Path(id): Path<Uuid>) -> Json<User> {
// The OpenAPI spec will now correctly show:
// { "type": "string", "format": "uuid" }
// instead of the default { "type": "integer", "format": "int64" }
get_user_by_id(id).await
}
}
Supported Schema Types
| Schema Type | OpenAPI Schema |
|---|---|
| `"uuid"` | `{ "type": "string", "format": "uuid" }` |
| `"integer"`, `"int"`, `"int64"` | `{ "type": "integer", "format": "int64" }` |
| `"int32"` | `{ "type": "integer", "format": "int32" }` |
| `"string"` | `{ "type": "string" }` |
| `"number"`, `"float"` | `{ "type": "number" }` |
| `"boolean"`, `"bool"` | `{ "type": "boolean" }` |
Alternative Syntax
You can also use a shorter syntax:
#![allow(unused)]
fn main() {
// Shorter syntax: param_name = "schema_type"
#[rustapi_rs::get("/posts/{post_id}")]
#[rustapi_rs::param(post_id = "uuid")]
async fn get_post(Path(post_id): Path<Uuid>) -> Json<Post> { ... }
}
Programmatic API
When building routes programmatically, you can use the .param() method:
#![allow(unused)]
fn main() {
use rustapi_rs::handler::get_route;
// Using the Route builder
let route = get_route("/items/{id}", get_item)
.param("id", "uuid")
.tag("Items")
.summary("Get item by UUID");
app.mount_route(route);
}
rustapi-extras: The Toolbox
Lens: “The Toolbox” Philosophy: “Batteries included, but swappable.”
Feature Flags
This crate is a collection of production-ready middleware. Everything is behind a feature flag so you don’t pay for what you don’t use.
| Feature | Component |
|---|---|
| `jwt` | `JwtLayer`, `AuthUser` extractor |
| `cors` | `CorsLayer` |
| `csrf` | `CsrfLayer`, `CsrfToken` extractor |
| `audit` | `AuditStore`, `AuditLogger` |
| `insight` | `InsightLayer`, `InsightStore` |
| `rate-limit` | `RateLimitLayer` |
| `replay` | `ReplayLayer` (Time-Travel Debugging) |
| `timeout` | `TimeoutLayer` |
| `guard` | `PermissionGuard` |
| `sanitization` | Input sanitization utilities |
Middleware Usage
Middleware wraps your entire API or specific routes.
#![allow(unused)]
fn main() {
let app = RustApi::new()
.layer(CorsLayer::permissive())
.layer(CompressionLayer::new())
.route("/", get(handler));
}
CSRF Protection
Cross-Site Request Forgery protection using the Double-Submit Cookie pattern.
#![allow(unused)]
fn main() {
use rustapi_extras::csrf::{CsrfConfig, CsrfLayer, CsrfToken};
// Configure CSRF middleware
let csrf_config = CsrfConfig::new()
.cookie_name("csrf_token")
.header_name("X-CSRF-Token")
.cookie_secure(true); // HTTPS only
let app = RustApi::new()
.layer(CsrfLayer::new(csrf_config))
.route("/form", get(show_form))
.route("/submit", post(handle_submit));
}
Extracting the Token
Use the CsrfToken extractor to access the token in handlers:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/form")]
async fn show_form(token: CsrfToken) -> Html<String> {
Html(format!(r#"
<input type="hidden" name="_csrf" value="{}" />
"#, token.as_str()))
}
}
How It Works
- Safe methods (`GET`, `HEAD`) generate and set the token cookie
- Unsafe methods (`POST`, `PUT`, `DELETE`) require the token in the `X-CSRF-Token` header
- If the header doesn't match the cookie → `403 Forbidden`
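The core double-submit comparison can be sketched in plain Rust. This is illustrative only; the real layer also generates tokens and, like any production implementation, should compare tokens in constant time:

```rust
/// Double-submit check: the token echoed in the request header must
/// match the token stored in the cookie. Returns the HTTP status the
/// middleware would respond with.
fn csrf_check(method: &str, cookie_token: Option<&str>, header_token: Option<&str>) -> u16 {
    match method {
        // Safe methods pass through (and would set the cookie).
        "GET" | "HEAD" | "OPTIONS" => 200,
        // Unsafe methods require header == cookie.
        _ => match (cookie_token, header_token) {
            (Some(c), Some(h)) if c == h => 200,
            _ => 403,
        },
    }
}
```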
See CSRF Protection Recipe for a complete guide.
Audit Logging
For enterprise compliance (GDPR/SOC2), the audit feature provides a structured way to record sensitive actions.
#![allow(unused)]
fn main() {
async fn delete_user(
AuthUser(user): AuthUser,
State(audit): State<AuditLogger>
) {
audit.log(AuditEvent::new("user.deleted")
.actor(user.id)
.target("user_123")
);
}
}
Traffic Insight
The insight feature provides powerful real-time traffic analysis and debugging capabilities without external dependencies. It is designed to be low-overhead and privacy-conscious.
[dependencies]
rustapi-extras = { version = "0.1.335", features = ["insight"] }
Setup
#![allow(unused)]
fn main() {
use rustapi_extras::insight::{InsightLayer, InMemoryInsightStore, InsightConfig};
use std::sync::Arc;
let store = Arc::new(InMemoryInsightStore::new());
let config = InsightConfig::default();
let app = RustApi::new()
.layer(InsightLayer::new(config, store.clone()));
}
Accessing Data
You can inspect the collected data (e.g., via an admin dashboard):
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/admin/insights")]
async fn get_insights(State(store): State<Arc<InMemoryInsightStore>>) -> Json<InsightStats> {
// Returns aggregated stats like req/sec, error rates, p99 latency
Json(store.get_stats().await)
}
}
The InsightStore trait allows you to implement custom backends (e.g., ClickHouse or Elasticsearch) if you need long-term retention.
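If you build a custom backend, percentile stats such as p99 latency reduce to simple math over recorded samples. A nearest-rank sketch (not necessarily how `InMemoryInsightStore` computes it):

```rust
/// Nearest-rank percentile over latency samples in milliseconds.
/// `p` is in (0, 100]; returns None for an empty sample set.
fn percentile(samples: &[u64], p: f64) -> Option<u64> {
    if samples.is_empty() {
        return None;
    }
    let mut sorted = samples.to_vec();
    sorted.sort_unstable();
    // Nearest-rank: ceil(p/100 * n), 1-based, clamped to the last element.
    let rank = ((p / 100.0) * sorted.len() as f64).ceil() as usize;
    Some(sorted[rank.saturating_sub(1).min(sorted.len() - 1)])
}
```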
Observability
The otel and structured-logging features bring enterprise-grade observability.
OpenTelemetry
#![allow(unused)]
fn main() {
use rustapi_extras::otel::{OtelLayer, OtelConfig};
let config = OtelConfig::default().service_name("my-service");
let app = RustApi::new()
.layer(OtelLayer::new(config));
}
Structured Logging
Emit logs as JSON for aggregators like Datadog or Splunk. This is different from request logging; it formats your application logs.
#![allow(unused)]
fn main() {
use rustapi_extras::structured_logging::{StructuredLoggingLayer, JsonFormatter};
let app = RustApi::new()
.layer(StructuredLoggingLayer::new(JsonFormatter::default()));
}
Advanced Security
OAuth2 Client
The oauth2-client feature provides a complete client implementation.
#![allow(unused)]
fn main() {
use rustapi_extras::oauth2::{OAuth2Client, OAuth2Config, Provider};
let config = OAuth2Config::new(
Provider::Google,
"client_id",
"client_secret",
"http://localhost:8080/callback"
);
let client = OAuth2Client::new(config);
}
Security Headers
Add standard security headers (HSTS, X-Frame-Options, etc.).
#![allow(unused)]
fn main() {
use rustapi_extras::security_headers::SecurityHeadersLayer;
let app = RustApi::new()
.layer(SecurityHeadersLayer::default());
}
API Keys
Simple API Key authentication strategy.
#![allow(unused)]
fn main() {
use rustapi_extras::api_key::ApiKeyLayer;
let app = RustApi::new()
.layer(ApiKeyLayer::new("my-secret-key"));
}
Permission Guards
The guard feature provides role-based access control (RBAC) helpers.
#![allow(unused)]
fn main() {
use rustapi_extras::guard::PermissionGuard;
// Only allows users with "admin" role
#[rustapi_rs::get("/admin")]
async fn admin_panel(
_guard: PermissionGuard
) -> &'static str {
"Welcome Admin"
}
}
Input Sanitization
The sanitization feature helps prevent XSS by cleaning user input.
#![allow(unused)]
fn main() {
use rustapi_extras::sanitization::sanitize_html;
let safe_html = sanitize_html("<script>alert(1)</script>Hello");
// Result: "Hello" — the dangerous <script> tag has been removed
}
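If you prefer escaping over stripping (so the text survives but is rendered inert), a minimal escaper can be written with the standard library. This is an illustrative sketch, not the crate's implementation:

```rust
/// Escape the characters that are dangerous in an HTML context.
fn escape_html(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    for c in input.chars() {
        match c {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&#x27;"),
            _ => out.push(c),
        }
    }
    out
}
```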
Resilience
Circuit Breaker
Prevent cascading failures by stopping requests to failing upstreams.
#![allow(unused)]
fn main() {
use rustapi_extras::circuit_breaker::CircuitBreakerLayer;
let app = RustApi::new()
.layer(CircuitBreakerLayer::new());
}
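Conceptually, a circuit breaker is a small state machine: it trips open after a run of consecutive failures and stops forwarding traffic. A pure-Rust sketch of that counting logic (the real layer is configurable and also has a time-based half-open state for probing recovery):

```rust
#[derive(Debug, PartialEq)]
enum BreakerState { Closed, Open }

struct CircuitBreaker {
    state: BreakerState,
    consecutive_failures: u32,
    threshold: u32,
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        Self { state: BreakerState::Closed, consecutive_failures: 0, threshold }
    }

    /// Returns true if a request may pass through to the upstream.
    fn allow(&self) -> bool {
        self.state == BreakerState::Closed
    }

    /// Record the outcome of an upstream call.
    fn record(&mut self, success: bool) {
        if success {
            // A success resets the breaker (real breakers only probe
            // again after a cooldown).
            self.consecutive_failures = 0;
            self.state = BreakerState::Closed;
        } else {
            self.consecutive_failures += 1;
            if self.consecutive_failures >= self.threshold {
                self.state = BreakerState::Open; // stop sending traffic
            }
        }
    }
}
```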
Retry
Automatically retry failed requests with backoff.
#![allow(unused)]
fn main() {
use rustapi_extras::retry::RetryLayer;
let app = RustApi::new()
.layer(RetryLayer::default());
}
Timeout
Ensure requests don’t hang indefinitely.
#![allow(unused)]
fn main() {
use rustapi_extras::timeout::TimeoutLayer;
use std::time::Duration;
let app = RustApi::new()
.layer(TimeoutLayer::new(Duration::from_secs(30)));
}
Optimization
Caching
Cache responses based on headers or path.
#![allow(unused)]
fn main() {
use rustapi_extras::cache::CacheLayer;
let app = RustApi::new()
.layer(CacheLayer::new());
}
Request Deduplication
Prevent duplicate requests (e.g., from double clicks) from processing twice.
#![allow(unused)]
fn main() {
use rustapi_extras::dedup::DedupLayer;
let app = RustApi::new()
.layer(DedupLayer::new());
}
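The idea behind deduplication is to track in-flight request keys and reject a second arrival of the same key until the first completes. A minimal sketch (illustrative; the real layer derives the key from the request and handles expiry):

```rust
use std::collections::HashSet;

/// Tracks in-flight request keys so duplicates are rejected until the
/// first request finishes.
struct Deduplicator {
    in_flight: HashSet<String>,
}

impl Deduplicator {
    fn new() -> Self {
        Self { in_flight: HashSet::new() }
    }

    /// Returns true if this key should be processed, false if it is a
    /// duplicate of a request that is still running.
    fn try_begin(&mut self, key: &str) -> bool {
        self.in_flight.insert(key.to_string())
    }

    /// Call when the request finishes so the key can be used again.
    fn finish(&mut self, key: &str) {
        self.in_flight.remove(key);
    }
}
```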
Debugging
Time-Travel Debugging (Replay)
The replay feature allows you to record production traffic and replay it locally for debugging.
See the Time-Travel Debugging Recipe for full details.
#![allow(unused)]
fn main() {
use rustapi_extras::replay::{ReplayLayer, ReplayConfig, InMemoryReplayStore};
let replay_config = ReplayConfig::default();
let store = InMemoryReplayStore::new(1_000);
let app = RustApi::new()
.layer(ReplayLayer::new(replay_config).with_store(store));
}
rustapi-toon: The Diplomat
Lens: “The Diplomat” Philosophy: “Optimizing for Silicon Intelligence.”
What is TOON?
Token-Oriented Object Notation is a format designed to be consumed by Large Language Models (LLMs). It reduces token usage by stripping unnecessary syntax (braces, quotes) while maintaining semantic structure.
Content Negotiation
The LlmResponse<T> type automatically negotiates the response format based on the Accept header.
#![allow(unused)]
fn main() {
async fn agent_data() -> LlmResponse<Data> {
// Returns JSON for browsers
// Returns TOON for AI Agents (using fewer tokens)
}
}
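The negotiation itself boils down to inspecting the `Accept` header. A simplified sketch, assuming a hypothetical `application/toon` media type (the real extractor also handles quality values and defaults):

```rust
#[derive(Debug, PartialEq)]
enum Format { Json, Toon }

/// Pick a response format from the Accept header.
/// "application/toon" is an assumed media type for illustration.
fn negotiate(accept: Option<&str>) -> Format {
    match accept {
        Some(value) if value.contains("application/toon") => Format::Toon,
        // Browsers and unknown clients get JSON.
        _ => Format::Json,
    }
}
```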
Token Savings
TOON often reduces token count by 30-50% compared to JSON, saving significant costs and context window space when communicating with models like GPT-4 or Gemini.
rustapi-ws: The Live Wire
Lens: “The Live Wire” Philosophy: “Real-time, persistent connections made simple.”
The WebSocket Extractor
Upgrading an HTTP connection to a WebSocket uses the standard extractor pattern:
#![allow(unused)]
fn main() {
async fn ws_handler(
ws: WebSocket,
) -> impl IntoResponse {
ws.on_upgrade(handle_socket)
}
}
Architecture
We recommend an Actor Model for WebSocket state.
- Each connection spawns a new async task (the actor).
- Use `tokio::sync::broadcast` channels for global events (like chat rooms).
- Use `mpsc` channels for direct messaging.
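The fan-out pattern can be illustrated with std channels; each receiver stands in for one per-connection actor task. In the framework you would use `tokio::sync::broadcast` inside async tasks instead:

```rust
use std::sync::mpsc;

/// A chat room that fans a message out to every connected client.
/// Each client owns the receiving end of its own channel, mimicking
/// one actor task per connection.
struct Room {
    clients: Vec<mpsc::Sender<String>>,
}

impl Room {
    fn new() -> Self {
        Self { clients: Vec::new() }
    }

    /// Register a new connection and hand back its inbox.
    fn join(&mut self) -> mpsc::Receiver<String> {
        let (tx, rx) = mpsc::channel();
        self.clients.push(tx);
        rx
    }

    /// Send a message to every connected client.
    fn broadcast(&self, msg: &str) {
        for client in &self.clients {
            // A real actor loop would evict clients whose channel closed.
            let _ = client.send(msg.to_string());
        }
    }
}
```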
rustapi-grpc: The Bridge
Lens: “The Bridge”
Philosophy: “HTTP and gRPC, one runtime.”
rustapi-grpc is an optional crate that helps you run a RustAPI HTTP server and a Tonic gRPC server in the same process.
What You Get
- `run_concurrently(http, grpc)` for running two server futures side-by-side.
- `run_rustapi_and_grpc(app, http_addr, grpc)` convenience helper.
- `run_rustapi_and_grpc_with_shutdown(app, http_addr, signal, grpc_with_shutdown)` for graceful shared shutdown.
- Re-exports of `tonic` and `prost`.
Enable It
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["grpc"] }
Basic Usage
use rustapi_rs::grpc::{run_rustapi_and_grpc, tonic};
use rustapi_rs::prelude::*;
#[rustapi_rs::get("/health")]
async fn health() -> &'static str { "ok" }
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let http_app = RustApi::new().route("/health", get(health));
let grpc_addr = "127.0.0.1:50051".parse()?;
let grpc_server = tonic::transport::Server::builder()
.add_service(MyGreeterServer::new(MyGreeter::default()))
.serve(grpc_addr);
run_rustapi_and_grpc(http_app, "127.0.0.1:8080", grpc_server).await?;
Ok(())
}
Graceful Shutdown
use rustapi_rs::grpc::{run_rustapi_and_grpc_with_shutdown, tonic};
run_rustapi_and_grpc_with_shutdown(
http_app,
"127.0.0.1:8080",
tokio::signal::ctrl_c(),
move |shutdown| {
tonic::transport::Server::builder()
.add_service(MyGreeterServer::new(MyGreeter::default()))
.serve_with_shutdown("127.0.0.1:50051".parse().unwrap(), shutdown)
},
).await?;
rustapi-view: The Artist
Lens: “The Artist” Philosophy: “Server-side rendering with modern tools.”
Tera Integration
We use Tera, a Jinja2-like template engine, for rendering HTML on the server.
#![allow(unused)]
fn main() {
async fn home(
State(templates): State<Templates>
) -> View {
let mut ctx = Context::new();
ctx.insert("user", "Alice");
View::new("home.html", ctx)
}
}
Layouts and Inheritance
Tera supports template inheritance, allowing you to define a base layout (base.html) and extend it in child templates (index.html), keeping your frontend DRY.
rustapi-jobs: The Workhorse
Lens: “The Workhorse” Philosophy: “Fire and forget, with reliability guarantees.”
Background Processing
Long-running tasks shouldn’t block HTTP requests. rustapi-jobs provides a robust queue system that can run in-memory or be backed by Redis/Postgres.
Usage Example
Here is how to set up a simple background job queue using the in-memory backend.
1. Define the Job and Data
Jobs are separated into two parts:
- The Data struct (the payload), which must be serializable.
- The Job struct (the handler), which contains the logic.
#![allow(unused)]
fn main() {
use serde::{Deserialize, Serialize};
use rustapi_jobs::{Job, JobContext, Result};
use async_trait::async_trait;
// 1. The payload data
#[derive(Serialize, Deserialize, Debug, Clone)]
struct EmailJobData {
to: String,
subject: String,
body: String,
}
// 2. The handler struct (usually stateless)
#[derive(Clone)]
struct EmailJob;
#[async_trait]
impl Job for EmailJob {
const NAME: &'static str = "email_job";
type Data = EmailJobData;
async fn execute(&self, _ctx: JobContext, data: Self::Data) -> Result<()> {
println!("Sending email to {} with subject: {}", data.to, data.subject);
// Simulate work
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
Ok(())
}
}
}
2. Configure the Queue
In your main function, initialize the queue and start the worker.
use rustapi_jobs::{JobQueue, InMemoryBackend};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// 1. Create the backend
let backend = InMemoryBackend::new();
// 2. Create the queue
let queue = JobQueue::new(backend);
// 3. Register the job handler
queue.register_job(EmailJob).await;
// 4. Start the worker in the background
let worker_queue = queue.clone();
tokio::spawn(async move {
if let Err(e) = worker_queue.start_worker().await {
eprintln!("Worker failed: {:?}", e);
}
});
// 5. Enqueue a job (pass the DATA, not the handler)
queue.enqueue::<EmailJob>(EmailJobData {
to: "user@example.com".into(),
subject: "Welcome!".into(),
body: "Thanks for joining.".into(),
}).await?;
Ok(())
}
Backends
- Memory: Great for development and testing. Zero infrastructure required.
- Redis: High throughput persistence. Recommended for production.
- Postgres: Transactional reliability (ACID). Best if you cannot lose jobs.
Redis Backend
Enable the redis feature in Cargo.toml:
[dependencies]
rustapi-jobs = { version = "0.1.335", features = ["redis"] }
#![allow(unused)]
fn main() {
use rustapi_jobs::backend::redis::RedisBackend;
let backend = RedisBackend::new("redis://127.0.0.1:6379").await?;
let queue = JobQueue::new(backend);
}
Postgres Backend
Enable the postgres feature in Cargo.toml. This uses sqlx.
[dependencies]
rustapi-jobs = { version = "0.1.335", features = ["postgres"] }
#![allow(unused)]
fn main() {
use rustapi_jobs::backend::postgres::PostgresBackend;
use sqlx::postgres::PgPoolOptions;
let pool = PgPoolOptions::new().connect("postgres://user:pass@localhost/db").await?;
let backend = PostgresBackend::new(pool);
// Ensure the jobs table exists
backend.migrate().await?;
let queue = JobQueue::new(backend);
}
Reliability Features
The worker system includes built-in reliability features:
- Exponential Backoff: Automatically retries failing jobs with increasing delays.
- Dead Letter Queue (DLQ): “Poison” jobs that fail repeatedly are isolated for manual inspection.
- Concurrency Control: Limit the number of concurrent workers to prevent overloading your system.
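The backoff schedule is typically "base delay doubles per attempt, capped at a maximum". A sketch of that arithmetic (the actual base and cap used by the worker are configurable and not specified here):

```rust
/// Exponential backoff: the delay doubles per attempt, capped at `max_ms`.
/// Attempt numbering starts at 0 for the first retry.
fn backoff_delay_ms(attempt: u32, base_ms: u64, max_ms: u64) -> u64 {
    // 2^attempt, saturating instead of overflowing for large attempts.
    let factor = 1u64.checked_shl(attempt).unwrap_or(u64::MAX);
    base_ms.saturating_mul(factor).min(max_ms)
}
```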
rustapi-testing: The Auditor
Lens: “The Auditor” Philosophy: “Trust, but verify.”
rustapi-testing provides a comprehensive suite of tools for integration testing your RustAPI applications. It focuses on two main areas:
- In-process API testing: Testing your endpoints without binding to a real TCP port.
- External service mocking: Mocking downstream services (like payment gateways or auth providers) that your API calls.
Installation
Add the crate to your dev-dependencies:
[dev-dependencies]
rustapi-testing = { version = "0.1.335" }
The TestClient
Integration testing is often slow and painful because it involves spinning up a server, waiting for ports, and managing child processes. TestClient solves this by wrapping your RustApi application and executing requests directly against the service layer.
Basic Usage
use rustapi_rs::prelude::*;
use rustapi_testing::TestClient;
#[tokio::test]
async fn test_hello_world() {
let app = RustApi::new().route("/", get(|| async { "Hello!" }));
let client = TestClient::new(app);
let response = client.get("/").await;
response
.assert_status(200)
.assert_body_contains("Hello!");
}
Testing JSON APIs
The client provides fluent helpers for JSON APIs.
#[derive(Serialize)]
struct CreateUser {
username: String,
}
#[tokio::test]
async fn test_create_user() {
let app = RustApi::new().route("/users", post(create_user_handler));
let client = TestClient::new(app);
let response = client.post_json("/users", &CreateUser {
username: "alice".into()
}).await;
response
.assert_status(201)
.assert_json(&serde_json::json!({
"id": 1,
"username": "alice"
}));
}
Mocking Services with MockServer
Real-world applications usually talk to other services. MockServer allows you to spin up a lightweight HTTP server that responds to requests based on pre-defined expectations.
Setting up a Mock Server
use rustapi_testing::{MockServer, MockResponse, RequestMatcher};
#[tokio::test]
async fn test_external_integration() {
// 1. Start the mock server
let server = MockServer::start().await;
// 2. Define an expectation
server.expect(RequestMatcher::new(Method::GET, "/external-api/data"))
.respond_with(MockResponse::new()
.status(StatusCode::OK)
.json(serde_json::json!({ "result": "success" })))
.times(1);
// 3. Configure your app to use the mock server's URL
let app = create_app_with_config(Config {
external_api_url: server.base_url(),
});
let client = TestClient::new(app);
// 4. Run your test
client.get("/my-endpoint-calling-external").await.assert_status(200);
}
Expectations
You can define strict expectations on how your application interacts with the mock server.
Matching Requests
RequestMatcher allows matching by method, path, headers, and body.
// Match a POST request with specific body
server.expect(RequestMatcher::new(Method::POST, "/webhook")
.body_string("event_type=payment_success".into()))
.respond_with(MockResponse::new().status(StatusCode::OK));
Verification
The MockServer automatically verifies that all expectations were met when it is dropped (at the end of the test scope). If an expectation was set to be called once but was never called, the test will panic.
- `.once()`: Must be called exactly once (default).
- `.times(n)`: Must be called exactly `n` times.
- `.at_least_once()`: Must be called 1 or more times.
- `.never()`: Must not be called.
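These verification semantics can be modeled as a call counter checked against an expected range. A sketch of the idea (not `MockServer` internals):

```rust
/// Expected call-count range for a mock expectation.
struct Expectation {
    min: usize,
    max: Option<usize>, // None = unbounded
    calls: usize,
}

impl Expectation {
    fn times(n: usize) -> Self {
        Self { min: n, max: Some(n), calls: 0 }
    }
    fn at_least_once() -> Self {
        Self { min: 1, max: None, calls: 0 }
    }
    fn never() -> Self {
        Self::times(0)
    }
    /// Record one matching request.
    fn record_call(&mut self) {
        self.calls += 1;
    }
    /// The check performed when the server is dropped:
    /// was the expectation satisfied?
    fn satisfied(&self) -> bool {
        self.calls >= self.min && self.max.map_or(true, |m| self.calls <= m)
    }
}
```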
// Ensure we don't call the billing API if validation fails
server.expect(RequestMatcher::new(Method::POST, "/charge"))
.never();
Best Practices
- Dependency Injection: Design your application `State` to accept base URLs for external services so you can inject the `MockServer` URL during tests.
- Isolation: Create a new `MockServer` for each test case to ensure no shared state or interference.
- Fluent Assertions: Use the chainable assertion methods on `TestResponse` to keep tests readable.
cargo-rustapi: The Architect
Lens: “The Architect” Philosophy: “Scaffolding best practices from day one.”
The CLI
The RustAPI CLI isn’t just a project generator; it’s a productivity multiplier.
Commands
- `cargo rustapi new <name>`: Create a new project with the perfect directory structure.
- `cargo rustapi run`: Run the development server.
- `cargo rustapi run --reload`: Run with hot-reload (auto-rebuild on file changes).
- `cargo rustapi generate resource <name>`: Scaffold a new API resource (Model + Handlers + Tests).
- `cargo rustapi client --spec <path> --language <lang>`: Generate a client library (Rust, TS, Python) from an OpenAPI spec.
- `cargo rustapi deploy <platform>`: Generate deployment configs for Docker, Fly.io, Railway, or Shuttle.
- `cargo rustapi migrate <action>`: Database migration commands (create, run, revert, status, reset).
Templates
The templates used by the CLI are opinionated but flexible. They enforce:
- Modular folder structure.
- Implementation of the `State` pattern.
- Separation of `Error` types.
Recipes
Recipes are practical, focused guides to solving specific problems with RustAPI.
Format
Each recipe follows a simple structure:
- Problem: What are we trying to solve?
- Solution: The code.
- Discussion: Why it works and what to watch out for.
Table of Contents
- Creating Resources
- Pagination & HATEOAS
- OpenAPI & Schemas
- JWT Authentication
- CSRF Protection
- Database Integration
- Testing & Mocking
- File Uploads
- Background Jobs
- Custom Middleware
- Real-time Chat
- Server-Side Rendering (SSR)
- AI Integration (TOON)
- Production Tuning
- Response Compression
- Resilience Patterns
- Graceful Shutdown
- Time-Travel Debugging (Replay)
- Deployment
- HTTP/3 (QUIC)
- gRPC Integration
- Automatic Status Page
Creating Resources
Problem: You need to add a new “Resource” (like Users, Products, or Posts) to your API with standard CRUD operations.
Solution
Create a new module src/handlers/users.rs:
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Serialize, Deserialize, Schema, Clone)]
pub struct User {
pub id: u64,
pub name: String,
}
#[derive(Deserialize, Schema)]
pub struct CreateUser {
pub name: String,
}
#[rustapi_rs::get("/users")]
pub async fn list() -> Json<Vec<User>> {
Json(vec![]) // Fetch from DB in real app
}
#[rustapi_rs::post("/users")]
pub async fn create(Json(payload): Json<CreateUser>) -> impl IntoResponse {
let user = User { id: 1, name: payload.name };
(StatusCode::CREATED, Json(user))
}
}
Then in main.rs, simply use RustApi::auto():
use rustapi_rs::prelude::*;
mod handlers; // Make sure the module is part of the compilation unit!
#[rustapi_rs::main]
async fn main() -> Result<()> {
// RustAPI automatically discovers all routes decorated with macros
RustApi::auto()
.run("127.0.0.1:8080")
.await
}
Discussion
RustAPI uses distributed slices (via linkme) to automatically register routes decorated with #[rustapi_rs::get], #[rustapi_rs::post], etc. This means you don’t need to manually import or mount every single handler in your main function.
Just ensure your handler modules are reachable (e.g., via mod handlers;), and the framework handles the rest. This encourages a clean, Domain-Driven Design (DDD) structure where resources are self-contained.
Pagination & HATEOAS
Implementing pagination correctly is crucial for API performance and usability. RustAPI provides built-in support for HATEOAS (Hypermedia As The Engine Of Application State) compliant pagination, which includes navigation links in the response.
Problem
You need to return a list of resources, but there are too many to return in a single request. You want to provide a standard way for clients to navigate through pages of data.
Solution
Use ResourceCollection and PageInfo from rustapi_core::hateoas. These types automatically generate HAL (Hypertext Application Language) compliant responses with _links (self, first, last, next, prev) and _embedded resources.
Example Code
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_rs::{PageInfo, ResourceCollection};
use serde::{Deserialize, Serialize};
// 1. Define your resource
// Note: It must derive Schema for OpenAPI generation
#[derive(Serialize, Clone, Schema)]
struct User {
id: i64,
name: String,
}
// 2. Define query parameters
#[derive(Deserialize, Schema)]
struct Pagination {
page: Option<usize>,
size: Option<usize>,
}
// 3. Create the handler
#[rustapi_rs::get("/users")]
async fn list_users(Query(params): Query<Pagination>) -> Json<ResourceCollection<User>> {
let page = params.page.unwrap_or(0);
let size = params.size.unwrap_or(20).max(1); // Ensure size is at least 1 to prevent division by zero
// In a real app, you would fetch this from a database
// let (users, total_elements) = db.fetch_users(page, size).await?;
let users = vec![
User { id: 1, name: "Alice".to_string() },
User { id: 2, name: "Bob".to_string() },
];
let total_elements = 100;
// 4. Calculate pagination info
let page_info = PageInfo::calculate(total_elements, size, page);
// 5. Build the collection response
// "users" is the key in the _embedded map
// "/users" is the base URL for generating links
let collection = ResourceCollection::new("users", users)
.page_info(page_info)
.with_pagination("/users");
Json(collection)
}
}
Explanation
The response will look like this (HAL format):
{
"_embedded": {
"users": [
{ "id": 1, "name": "Alice" },
{ "id": 2, "name": "Bob" }
]
},
"_links": {
"self": { "href": "/users?page=0&size=20" },
"first": { "href": "/users?page=0&size=20" },
"last": { "href": "/users?page=4&size=20" },
"next": { "href": "/users?page=1&size=20" }
},
"page": {
"size": 20,
"totalElements": 100,
"totalPages": 5,
"number": 0
}
}
Key Components
- `ResourceCollection<T>`: Wraps a list of items. It places them under `_embedded` and adds `_links`.
- `PageInfo`: Holds metadata about the current page (size, total elements, total pages, current number).
- `with_pagination(base_url)`: Automatically generates standard navigation links based on the `PageInfo` and the provided base URL.
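The page metadata behind `PageInfo::calculate` is plain integer math. A sketch of the calculation and of when a `next` link applies (hypothetical standalone functions, mirroring the HAL `page` object above):

```rust
/// Page metadata as it appears in the HAL `page` object.
#[derive(Debug, PartialEq)]
struct PageMeta {
    size: usize,
    total_elements: usize,
    total_pages: usize,
    number: usize, // 0-based current page
}

/// Sketch of the calculation (`size` must be >= 1, as the handler enforces).
fn calculate(total_elements: usize, size: usize, number: usize) -> PageMeta {
    let total_pages = total_elements.div_ceil(size);
    PageMeta { size, total_elements, total_pages, number }
}

/// Whether a `next` link should be emitted for the current page.
fn has_next(meta: &PageMeta) -> bool {
    meta.number + 1 < meta.total_pages
}
```

With 100 elements and a page size of 20, this yields 5 pages, matching the example response.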
Variations
Cursor-based Pagination
If you are using cursor-based pagination (e.g., before_id, after_id), you can manually construct links instead of using with_pagination:
#![allow(unused)]
fn main() {
let collection = ResourceCollection::new("users", users)
.self_link("/users?after=10")
.next_link("/users?after=20");
}
HATEOAS for Single Resources
You can also add links to individual resources using Resource<T>:
#![allow(unused)]
fn main() {
use rustapi_rs::hateoas::Linkable; // Trait for .with_links()
#[rustapi_rs::get("/users/{id}")]
async fn get_user(Path(id): Path<i64>) -> Json<Resource<User>> {
let user = User { id, name: "Alice".to_string() };
let resource = user.with_links()
.self_link(format!("/users/{}", id))
.link("orders", format!("/users/{}/orders", id));
Json(resource)
}
}
Gotchas
- Schema Derive: The type `T` inside `ResourceCollection<T>` or `Resource<T>` MUST implement `RustApiSchema` (via `#[derive(Schema)]`) for OpenAPI generation to work.
- Base URL: The `base_url` passed to `with_pagination` should generally match the route path. If your API is behind a proxy or prefix, ensure this URL is correct from the client's perspective.
OpenAPI Schemas & References
RustAPI’s OpenAPI generation is built around the RustApiSchema trait, which is automatically implemented when you derive Schema. This system seamlessly handles JSON Schema 2020-12 references ($ref) to reduce duplication and support recursive types.
Automatic References
When you use #[derive(Schema)] on a struct or enum, RustAPI generates an implementation that:
- Registers the type in the OpenAPI `components/schemas` section.
- Returns a `$ref` pointing to that component whenever the type is used in another schema.
This means you don’t need to manually configure references – they just work.
#![allow(unused)]
fn main() {
use rustapi_openapi::Schema;
#[derive(Schema)]
struct Address {
street: String,
city: String,
}
#[derive(Schema)]
struct User {
username: String,
// This will generate {"$ref": "#/components/schemas/Address"}
address: Address,
}
}
Recursive Types
Recursive types (like a Comment that replies to another Comment) are supported automatically because the schema is registered before its fields are processed. However, you must use Box<T> or Option<T> for the recursive field to break the infinite size cycle in Rust.
#![allow(unused)]
fn main() {
#[derive(Schema)]
struct Comment {
id: String,
text: String,
// Recursive reference works automatically
replies: Option<Vec<Box<Comment>>>,
}
}
Generics
Generic types are also supported. The schema name will include the concrete type parameters to ensure uniqueness.
#![allow(unused)]
fn main() {
#[derive(Schema)]
struct Page<T> {
items: Vec<T>,
total: u64,
}
#[derive(Schema)]
struct Product {
name: String,
}
// Generates component: "Page_Product"
// Generates usage: {"$ref": "#/components/schemas/Page_Product"}
async fn list_products() -> Json<Page<Product>> { ... }
}
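The naming convention shown in the comment (`Page_Product`) can be sketched as joining the base name with its concrete type parameters. An illustrative helper, not the generator's actual code:

```rust
/// Build a component name like "Page_Product" from a generic base name
/// and its concrete type parameters.
fn component_name(base: &str, params: &[&str]) -> String {
    if params.is_empty() {
        base.to_string()
    } else {
        format!("{}_{}", base, params.join("_"))
    }
}
```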
Renaming & Customization
You can customize how fields appear in the schema using standard Serde attributes, as rustapi-openapi respects #[serde(rename)].
#![allow(unused)]
fn main() {
#[derive(Schema, Serialize)]
struct UserConfig {
#[serde(rename = "userId")]
user_id: String, // In schema: "userId"
}
}
Note: Currently, #[derive(Schema)] does not support specific #[schema(...)] attributes for descriptions or examples directly on fields. You should use doc comments (if supported in future versions) or implement RustApiSchema manually for advanced customization.
Manual Implementation
If you need a schema that cannot be derived (e.g., for a third-party type), you can implement RustApiSchema manually.
#![allow(unused)]
fn main() {
use rustapi_openapi::schema::{RustApiSchema, SchemaCtx, SchemaRef, JsonSchema2020};
struct MyCustomType;
impl RustApiSchema for MyCustomType {
fn schema(ctx: &mut SchemaCtx) -> SchemaRef {
let name = "MyCustomType";
// Register if not exists
if ctx.components.contains_key(name) {
return SchemaRef::Ref { reference: format!("#/components/schemas/{}", name) };
}
// Insert placeholder
ctx.components.insert(name.to_string(), JsonSchema2020::new());
// Build schema
let mut schema = JsonSchema2020::string();
schema.format = Some("custom-format".to_string());
// Update component
ctx.components.insert(name.to_string(), schema);
SchemaRef::Ref { reference: format!("#/components/schemas/{}", name) }
}
fn name() -> std::borrow::Cow<'static, str> {
std::borrow::Cow::Borrowed("MyCustomType")
}
}
}
JWT Authentication
Authentication is critical for almost every API. RustAPI provides a built-in, production-ready JWT authentication system via the extras-jwt feature.
Dependencies
Enable the extras-jwt feature in your Cargo.toml:
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["extras-jwt"] }
serde = { version = "1", features = ["derive"] }
1. Define Claims
Define your custom claims struct. It must be serializable and deserializable.
#![allow(unused)]
fn main() {
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct Claims {
pub sub: String, // Subject (User ID)
pub role: String, // Custom claim: "admin", "user"
pub exp: usize, // Required for JWT expiration validation
}
}
2. Shared State
To avoid hardcoding secrets in multiple places, we’ll store our secret key in the application state.
#![allow(unused)]
fn main() {
#[derive(Clone)]
pub struct AppState {
pub secret: String,
}
}
3. The Handlers
We use the AuthUser<T> extractor to protect routes, and State<T> to access the secret for signing tokens during login.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use std::time::{SystemTime, UNIX_EPOCH};
#[rustapi_rs::get("/profile")]
async fn protected_profile(
// This handler will only be called if a valid token is present
AuthUser(claims): AuthUser<Claims>
) -> Json<String> {
Json(format!("Welcome back, {}! You are a {}.", claims.sub, claims.role))
}
#[rustapi_rs::post("/login")]
async fn login(State(state): State<AppState>) -> Result<Json<String>> {
// In a real app, validate credentials first!
let expiration = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs() + 3600; // Token expires in 1 hour (3600 seconds)
let claims = Claims {
sub: "user_123".to_owned(),
role: "admin".to_owned(),
exp: expiration as usize,
};
// We use the secret from our shared state
let token = create_token(&claims, &state.secret)?;
Ok(Json(token))
}
}
4. Wiring it Up
Register the JwtLayer and the state in your application.
#[rustapi_rs::main]
async fn main() -> Result<()> {
// In production, load this from an environment variable!
let secret = "my_secret_key".to_string();
let state = AppState {
secret: secret.clone(),
};
// Configure JWT validation with the same secret
let jwt_layer = JwtLayer::<Claims>::new(secret);
RustApi::auto()
.state(state) // Register the shared state
.layer(jwt_layer) // Add the middleware
.run("127.0.0.1:8080")
.await
}
Bonus: Role-Based Access Control (RBAC)
Since we have the role in our claims, we can enforce permissions easily within the handler:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/admin")]
async fn admin_only(AuthUser(claims): AuthUser<Claims>) -> Result<String, StatusCode> {
if claims.role != "admin" {
return Err(StatusCode::FORBIDDEN);
}
Ok("Sensitive Admin Data".to_string())
}
}
How It Works
- `JwtLayer` Middleware: Intercepts requests, looks for `Authorization: Bearer <token>`, validates the signature, and stores the decoded claims in the request extensions.
- `AuthUser` Extractor: Retrieves the claims from the request extensions. If the middleware failed or didn’t run, or if the token was missing/invalid, the extractor returns a `401 Unauthorized` error.
This separation allows you to have some public routes (where JwtLayer might just pass through) and some protected routes (where AuthUser enforces presence). Note that JwtLayer by default does not reject requests without tokens; it just doesn’t attach claims. The extractor does the rejection.
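The first step the layer performs — pulling the token out of the `Authorization` header — can be sketched in plain Rust. This stand-alone function is illustrative only; the actual layer also decodes and validates the token:

```rust
// Extract the token from an `Authorization: Bearer <token>` header value.
// Returns None if the scheme is missing or is not "Bearer".
fn extract_bearer_token(header_value: &str) -> Option<&str> {
    let rest = header_value.strip_prefix("Bearer ")?;
    if rest.is_empty() { None } else { Some(rest) }
}

fn main() {
    assert_eq!(extract_bearer_token("Bearer abc.def.ghi"), Some("abc.def.ghi"));
    assert_eq!(extract_bearer_token("Basic dXNlcg=="), None);
}
```

When no valid bearer token is found, the middleware simply attaches nothing, and the `AuthUser` extractor later turns that absence into a `401`.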
OAuth2 Client Integration
Integrating with third-party identity providers (like Google, GitHub) is a common requirement for modern applications. RustAPI provides a streamlined OAuth2 client in rustapi-extras.
This recipe demonstrates how to set up an OAuth2 flow.
Prerequisites
Add rustapi-extras with the oauth2-client feature to your Cargo.toml.
[dependencies]
rustapi-extras = { version = "0.1.335", features = ["oauth2-client"] }
Basic Configuration
You can use presets for popular providers or configure a custom one.
#![allow(unused)]
fn main() {
use rustapi_extras::oauth2::{OAuth2Config, Provider};
// Using a preset (Google)
let config = OAuth2Config::google(
"your-client-id",
"your-client-secret",
"https://your-app.com/auth/callback/google"
);
// Or custom provider
let custom_config = OAuth2Config::new(
"client-id",
"client-secret",
"https://auth.example.com/authorize",
"https://auth.example.com/token",
"https://your-app.com/callback"
);
}
The Authorization Flow
- Redirect User: Generate an authorization URL and redirect the user.
- Handle Callback: Exchange the authorization code for an access token.
Step 1: Redirect User
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_extras::oauth2::{OAuth2Client, OAuth2Config};
async fn login(client: State<OAuth2Client>) -> impl IntoResponse {
// Generate URL with CSRF protection and PKCE
let auth_request = client.authorization_url();
// Store CSRF token and PKCE verifier in session (or cookie)
// In a real app, use secure, http-only cookies
// session.insert("csrf_token", auth_request.csrf_state.secret());
// session.insert("pkce_verifier", auth_request.pkce_verifier.secret());
// Redirect user
Redirect::to(auth_request.url().as_str())
}
}
Step 2: Handle Callback
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_extras::oauth2::{OAuth2Client, OAuth2Config};
#[derive(Deserialize)]
struct AuthCallback {
code: String,
state: String, // CSRF token
}
async fn callback(
Query(params): Query<AuthCallback>,
client: State<OAuth2Client>,
// session: Session, // Assuming session management
) -> impl IntoResponse {
// 1. Verify CSRF token from session matches params.state
// 2. Exchange code for token
// let pkce_verifier = session.get("pkce_verifier").unwrap();
match client.exchange_code(&params.code, /* pkce_verifier */).await {
Ok(token_response) => {
// Success! You have an access token.
// Use it to fetch user info or store it.
println!("Access Token: {}", token_response.access_token());
// Redirect to dashboard or home
Redirect::to("/dashboard")
}
Err(e) => {
// Handle error (e.g., invalid code)
(StatusCode::BAD_REQUEST, format!("Auth failed: {}", e)).into_response()
}
}
}
}
User Information
Once you have an access token, you can fetch user details. Most providers offer a /userinfo endpoint.
#![allow(unused)]
fn main() {
// Example using reqwest (feature required)
async fn get_user_info(token: &str) -> Result<serde_json::Value, reqwest::Error> {
let client = reqwest::Client::new();
client
.get("https://www.googleapis.com/oauth2/v3/userinfo")
.bearer_auth(token)
.send()
.await?
.json()
.await
}
}
Best Practices
- State Parameter: Always use the `state` parameter to prevent CSRF attacks. RustAPI’s `authorization_url()` generates one for you.
- PKCE: Proof Key for Code Exchange (PKCE) is recommended for all OAuth2 flows, especially for public clients. RustAPI handles PKCE generation.
- Secure Storage: Store tokens securely (e.g., encrypted cookies, secure session storage). Never expose access tokens in URLs or logs.
- HTTPS: OAuth2 requires HTTPS callbacks in production.
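To make the role of the `state` parameter concrete, here is a stand-alone sketch of the kind of URL an authorization request carries. It is deliberately simplified: a real client URL-encodes every value and appends PKCE parameters as well:

```rust
// Assemble an authorization URL carrying the anti-CSRF `state` parameter.
// Illustrative sketch only — values here are assumed to be pre-encoded.
fn build_auth_url(
    authorize_endpoint: &str,
    client_id: &str,
    redirect_uri: &str,
    state: &str,
) -> String {
    format!(
        "{}?response_type=code&client_id={}&redirect_uri={}&state={}",
        authorize_endpoint, client_id, redirect_uri, state
    )
}

fn main() {
    let url = build_auth_url(
        "https://auth.example.com/authorize",
        "client-id",
        "https%3A%2F%2Fyour-app.com%2Fcallback",
        "random-state-123",
    );
    assert!(url.contains("state=random-state-123"));
}
```

On the callback, the provider echoes `state` back, and your handler must verify it matches the value stored in the user's session before exchanging the code.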
CSRF Protection
Cross-Site Request Forgery (CSRF) protection for your RustAPI applications using the Double-Submit Cookie pattern.
What is CSRF?
CSRF is an attack that tricks users into submitting unintended requests. For example, a malicious website could submit a form to your API while users are logged in, performing actions without their consent.
RustAPI’s CSRF protection works by:
- Generating a cryptographic token stored in a cookie
- Requiring the same token in a request header for state-changing requests
- Rejecting requests where the cookie and header don’t match
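The core of the double-submit check is an equality test performed in constant time. Here is a stand-alone sketch of that comparison (illustrative only; the middleware’s real implementation may differ):

```rust
// Double-submit check: the token from the cookie must equal the token from
// the header. Compare in constant time to avoid timing side channels.
fn tokens_match(cookie_token: &str, header_token: &str) -> bool {
    let (a, b) = (cookie_token.as_bytes(), header_token.as_bytes());
    if a.len() != b.len() {
        return false;
    }
    // XOR-accumulate so the loop does not short-circuit on the first mismatch
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(tokens_match("abc123", "abc123"));
    assert!(!tokens_match("abc123", "abc124"));
    assert!(!tokens_match("abc123", "abc12"));
}
```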
Quick Start
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["csrf"] }
use rustapi_rs::prelude::*;
use rustapi_extras::csrf::{CsrfConfig, CsrfLayer, CsrfToken};
#[rustapi_rs::get("/form")]
async fn show_form(token: CsrfToken) -> Html<String> {
Html(format!(r#"
<form method="POST" action="/submit">
<input type="hidden" name="csrf_token" value="{}" />
<button type="submit">Submit</button>
</form>
"#, token.as_str()))
}
#[rustapi_rs::post("/submit")]
async fn handle_submit() -> &'static str {
// If we get here, CSRF validation passed!
"Form submitted successfully"
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let csrf_config = CsrfConfig::new()
.cookie_name("csrf_token")
.header_name("X-CSRF-Token");
RustApi::new()
.layer(CsrfLayer::new(csrf_config))
.mount(show_form)
.mount(handle_submit)
.run("127.0.0.1:8080")
.await
}
Configuration Options
#![allow(unused)]
fn main() {
let config = CsrfConfig::new()
// Cookie settings
.cookie_name("csrf_token") // Default: "csrf_token"
.cookie_path("/") // Default: "/"
.cookie_domain("example.com") // Default: None (same domain)
.cookie_secure(true) // Default: true (HTTPS only)
.cookie_http_only(false) // Default: false (JS needs access)
.cookie_same_site(SameSite::Strict) // Default: Strict
// Token settings
.header_name("X-CSRF-Token") // Default: "X-CSRF-Token"
.token_length(32); // Default: 32 bytes
}
How It Works
Safe Methods (No Validation)
GET, HEAD, OPTIONS, and TRACE requests are considered “safe” and don’t modify state. The CSRF middleware:
- ✅ Generates a new token if none exists
- ✅ Sets the token cookie in the response
- ✅ Does NOT validate the header
Unsafe Methods (Validation Required)
POST, PUT, PATCH, and DELETE requests require CSRF validation:
- 🔍 Reads the token from the cookie
- 🔍 Reads the expected token from the header
- ❌ If missing or mismatched → Returns `403 Forbidden`
- ✅ If valid → Proceeds to handler
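The safe/unsafe split above boils down to a simple predicate applied per request. A stand-alone sketch (illustrative, not the middleware’s actual code):

```rust
// CSRF validation only applies to state-changing ("unsafe") HTTP methods.
fn requires_csrf_validation(method: &str) -> bool {
    !matches!(method, "GET" | "HEAD" | "OPTIONS" | "TRACE")
}

fn main() {
    assert!(!requires_csrf_validation("GET"));
    assert!(requires_csrf_validation("POST"));
    assert!(requires_csrf_validation("DELETE"));
}
```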
Frontend Integration
HTML Forms
For traditional form submissions, include the token as a hidden field:
<form method="POST" action="/api/submit">
<input type="hidden" name="_csrf" value="{{ csrf_token }}" />
<!-- form fields -->
<button type="submit">Submit</button>
</form>
JavaScript / AJAX
For API calls, include the token in the request header:
// Read token from cookie
function getCsrfToken() {
return document.cookie
.split('; ')
.find(row => row.startsWith('csrf_token='))
?.split('=')[1];
}
// Include in fetch requests
fetch('/api/users', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-CSRF-Token': getCsrfToken()
},
body: JSON.stringify({ name: 'John' })
});
Axios Interceptor
import axios from 'axios';
axios.interceptors.request.use(config => {
if (['post', 'put', 'patch', 'delete'].includes(config.method)) {
config.headers['X-CSRF-Token'] = getCsrfToken();
}
return config;
});
Extracting the Token in Handlers
Use the CsrfToken extractor to access the current token in your handlers:
#![allow(unused)]
fn main() {
use rustapi_extras::csrf::CsrfToken;
#[rustapi_rs::get("/api/csrf-token")]
async fn get_csrf_token(token: CsrfToken) -> Json<serde_json::Value> {
Json(serde_json::json!({
"csrf_token": token.as_str()
}))
}
}
Best Practices
1. Always Use HTTPS in Production
#![allow(unused)]
fn main() {
let config = CsrfConfig::new()
.cookie_secure(true); // Cookie only sent over HTTPS
}
2. Use Strict SameSite Policy
#![allow(unused)]
fn main() {
use cookie::SameSite;
let config = CsrfConfig::new()
.cookie_same_site(SameSite::Strict); // Most restrictive
}
3. Combine with Other Security Measures
#![allow(unused)]
fn main() {
RustApi::new()
.layer(CsrfLayer::new(csrf_config))
.layer(SecurityHeadersLayer::strict()) // Add security headers
.layer(CorsLayer::permissive()) // Configure CORS
}
4. Rotate Tokens Periodically
Consider regenerating tokens after sensitive actions:
#![allow(unused)]
fn main() {
#[rustapi_rs::post("/auth/login")]
async fn login(/* ... */) -> impl IntoResponse {
// After successful login, a new CSRF token will be
// generated on the next GET request
// ...
}
}
Testing CSRF Protection
#![allow(unused)]
fn main() {
use rustapi_testing::{TestClient, TestRequest};
#[tokio::test]
async fn test_csrf_protection() {
let app = create_app_with_csrf();
let client = TestClient::new(app);
// GET request should work and set cookie
let res = client.get("/form").await;
assert_eq!(res.status(), StatusCode::OK);
let csrf_cookie = res.headers()
.get("set-cookie")
.unwrap()
.to_str()
.unwrap();
// Extract token value
let token = csrf_cookie
.split(';')
.next()
.unwrap()
.split('=')
.nth(1)
.unwrap();
// POST without token should fail
let res = client.post("/submit").await;
assert_eq!(res.status(), StatusCode::FORBIDDEN);
// POST with correct token should succeed
let res = client.request(
TestRequest::post("/submit")
.header("Cookie", format!("csrf_token={}", token))
.header("X-CSRF-Token", token)
).await;
assert_eq!(res.status(), StatusCode::OK);
}
}
Error Handling
When CSRF validation fails, the middleware returns a JSON error response:
{
"error": {
"code": "csrf_forbidden",
"message": "CSRF token validation failed"
}
}
You can customize this by wrapping the layer with your own error handler.
Security Considerations
| Consideration | Status |
|---|---|
| Token in cookie | ✅ HttpOnly=false (JS needs access) |
| Token validation | ✅ Constant-time comparison |
| SameSite cookie | ✅ Configurable (Strict by default) |
| Secure cookie | ✅ HTTPS-only by default |
| Token entropy | ✅ 32 bytes of cryptographic randomness |
See Also
- JWT Authentication - Token-based authentication
- Security Headers - Additional security layers
- CORS Configuration - Cross-origin request handling
Database Integration
RustAPI is database-agnostic, but SQLx is the recommended driver due to its async-first design and compile-time query verification.
This recipe shows how to integrate PostgreSQL/MySQL/SQLite using a global connection pool with best practices for production.
Dependencies
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["sqlx"] } # Enable SQLx error conversion
sqlx = { version = "0.8", features = ["runtime-tokio", "tls-rustls", "postgres", "uuid"] }
serde = { version = "1", features = ["derive"] }
tokio = { version = "1", features = ["full"] }
dotenvy = "0.15"
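The examples below read the connection string from the environment via `dotenvy`. A typical `.env` entry looks like this (placeholder credentials — adjust host, port, and database name for your setup):

```
DATABASE_URL=postgres://app_user:change_me@localhost:5432/app_db
```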
1. Setup Connection Pool
Create the pool once at startup and share it via State. Configure pool limits appropriately.
use sqlx::postgres::PgPoolOptions;
use std::sync::Arc;
use std::time::Duration;
#[derive(Clone)]
pub struct AppState {
pub db: sqlx::PgPool,
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
dotenvy::dotenv().ok();
let db_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
// Create a connection pool with production settings
let pool = PgPoolOptions::new()
.max_connections(50) // Adjust based on DB limits
.min_connections(5) // Keep some idle connections ready
.acquire_timeout(Duration::from_secs(5)) // Fail fast if DB is overloaded
.idle_timeout(Duration::from_secs(300)) // Close idle connections
.connect(&db_url)
.await
.expect("Failed to connect to DB");
// Run migrations (optional but recommended)
// Note: requires `sqlx-cli` or `sqlx` migrate feature
sqlx::migrate!("./migrations")
.run(&pool)
.await
.expect("Failed to migrate");
let state = AppState { db: pool };
RustApi::new()
.state(state)
.route("/users", post(create_user))
.run("0.0.0.0:3000")
.await
}
2. Using the Database in Handlers
Extract the State to get access to the pool.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Deserialize, Validate)]
struct CreateUser {
#[validate(length(min = 3))]
username: String,
#[validate(email)]
email: String,
}
#[derive(Serialize, Schema)]
struct User {
id: i32,
username: String,
email: String,
}
async fn create_user(
State(state): State<AppState>,
ValidatedJson(payload): ValidatedJson<CreateUser>,
) -> Result<(StatusCode, Json<User>), ApiError> {
// SQLx query macro performs compile-time checking!
// The query is checked against your running database during compilation.
let record = sqlx::query_as!(
User,
"INSERT INTO users (username, email) VALUES ($1, $2) RETURNING id, username, email",
payload.username,
payload.email
)
.fetch_one(&state.db)
.await
// Map sqlx::Error to ApiError (feature = "sqlx" handles this automatically)
.map_err(ApiError::from)?;
Ok((StatusCode::CREATED, Json(record)))
}
}
3. Transactions
For operations involving multiple queries, use a transaction to ensure atomicity.
#![allow(unused)]
fn main() {
async fn transfer_credits(
State(state): State<AppState>,
Json(payload): Json<TransferRequest>,
) -> Result<StatusCode, ApiError> {
// Start a transaction
let mut tx = state.db.begin().await.map_err(ApiError::from)?;
// Deduct from sender
let updated = sqlx::query!(
"UPDATE accounts SET balance = balance - $1 WHERE id = $2 RETURNING balance",
payload.amount,
payload.sender_id
)
.fetch_optional(&mut *tx)
.await
.map_err(ApiError::from)?;
// Check balance
if let Some(record) = updated {
if record.balance < 0 {
// Rollback is automatic on drop, but explicit rollback is clearer
tx.rollback().await.map_err(ApiError::from)?;
return Err(ApiError::bad_request("Insufficient funds"));
}
} else {
return Err(ApiError::not_found("Sender not found"));
}
// Add to receiver
sqlx::query!(
"UPDATE accounts SET balance = balance + $1 WHERE id = $2",
payload.amount,
payload.receiver_id
)
.execute(&mut *tx)
.await
.map_err(ApiError::from)?;
// Commit transaction
tx.commit().await.map_err(ApiError::from)?;
Ok(StatusCode::OK)
}
}
4. Integration Testing with TestContainers
For testing, use testcontainers to spin up a real database instance. This ensures your queries are correct without mocking the database driver.
#![allow(unused)]
fn main() {
#[cfg(test)]
mod tests {
use super::*;
use testcontainers::{clients, images};
use rustapi_testing::TestClient;
#[tokio::test]
async fn test_create_user() {
// Start Postgres container
let docker = clients::Cli::default();
let pg = docker.run(images::postgres::Postgres::default());
let port = pg.get_host_port_ipv4(5432);
let db_url = format!("postgres://postgres:postgres@localhost:{}/postgres", port);
// Setup pool
let pool = PgPoolOptions::new().connect(&db_url).await.unwrap();
sqlx::migrate!("./migrations").run(&pool).await.unwrap();
let state = AppState { db: pool };
// Create app and client
let app = RustApi::new().state(state).route("/users", post(create_user));
let client = TestClient::new(app);
// Test request
let response = client.post("/users")
.json(&serde_json::json!({
"username": "testuser",
"email": "test@example.com"
}))
.await;
assert_eq!(response.status(), StatusCode::CREATED);
let user: User = response.json().await;
assert_eq!(user.username, "testuser");
}
}
}
Error Handling
RustAPI provides automatic conversion from sqlx::Error to ApiError when the sqlx feature is enabled.
- `RowNotFound` -> 404 Not Found
- `PoolTimedOut` -> 503 Service Unavailable
- Unique Constraint Violation -> 409 Conflict
- Check Constraint Violation -> 400 Bad Request
- Other errors -> 500 Internal Server Error (masked in production)
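As a mental model, the conversion behaves like a match from error class to status code. The enum below is a stand-in for illustration — the real conversion inspects `sqlx::Error` variants and database error codes (e.g., `23505` for unique violations in Postgres):

```rust
// Simplified sketch of how database errors map to HTTP status codes.
enum DbError {
    RowNotFound,
    PoolTimedOut,
    UniqueViolation,
    CheckViolation,
    Other,
}

fn status_for(err: &DbError) -> u16 {
    match err {
        DbError::RowNotFound => 404,
        DbError::PoolTimedOut => 503,
        DbError::UniqueViolation => 409,
        DbError::CheckViolation => 400,
        DbError::Other => 500, // details masked in production
    }
}

fn main() {
    assert_eq!(status_for(&DbError::RowNotFound), 404);
    assert_eq!(status_for(&DbError::UniqueViolation), 409);
}
```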
Testing Strategies
RustAPI provides robust tools for testing your application, ensuring reliability from unit tests to full integration scenarios.
Dependencies
Add rustapi-testing to your Cargo.toml. It is usually added as a dev-dependency.
[dev-dependencies]
rustapi-testing = "0.1.335"
tokio = { version = "1", features = ["full"] }
Integration Testing with TestClient
The TestClient allows you to test your API handlers without binding to a network port. It interacts directly with the service layer, making tests fast and deterministic.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_testing::TestClient;
#[rustapi_rs::get("/hello")]
async fn hello() -> &'static str {
"Hello, World!"
}
#[tokio::test]
async fn test_hello_endpoint() {
// 1. Build your application
let app = RustApi::new().route("/hello", get(hello));
// 2. Create a TestClient
let client = TestClient::new(app);
// 3. Send requests
let response = client.get("/hello").send().await;
// 4. Assert response
assert_eq!(response.status(), 200);
assert_eq!(response.text().await, "Hello, World!");
}
}
Testing JSON APIs
TestClient has built-in support for JSON serialization and deserialization.
#![allow(unused)]
fn main() {
#[derive(Serialize, Deserialize, PartialEq, Debug)]
struct User {
id: u64,
name: String,
}
#[rustapi_rs::post("/users")]
async fn create_user(Json(user): Json<User>) -> Json<User> {
Json(user)
}
#[tokio::test]
async fn test_create_user() {
let app = RustApi::new().route("/users", post(create_user));
let client = TestClient::new(app);
let new_user = User { id: 1, name: "Alice".into() };
let response = client.post("/users")
.json(&new_user)
.send()
.await;
assert_eq!(response.status(), 200);
let returned_user: User = response.json().await;
assert_eq!(returned_user, new_user);
}
}
Mocking External Services
When your API calls external services (e.g., payment gateways, third-party APIs), you should mock them in tests to avoid network calls and ensure reproducibility.
rustapi-testing provides MockServer for this purpose.
#![allow(unused)]
fn main() {
use rustapi_testing::{MockServer, MockResponse};
#[tokio::test]
async fn test_external_integration() {
// 1. Start a mock server
let mock_server = MockServer::start().await;
// 2. Define an expectation
mock_server.expect(
rustapi_testing::RequestMatcher::new()
.method("GET")
.path("/external-data")
).respond_with(
MockResponse::new()
.status(200)
.body(r#"{"data": "mocked"}"#)
);
// 3. Use the mock server's URL in your app configuration
let mock_url = format!("{}{}", mock_server.base_url(), "/external-data");
// Simulating your app logic calling the external service
let client = reqwest::Client::new();
let res = client.get(&mock_url).send().await.unwrap();
assert_eq!(res.status(), 200);
let body = res.text().await.unwrap();
assert_eq!(body, r#"{"data": "mocked"}"#);
}
}
Testing Authenticated Routes
You can simulate authenticated requests by setting headers directly on the TestClient request builder.
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_protected_route() {
let app = RustApi::new().route("/protected", get(protected_handler));
let client = TestClient::new(app);
let response = client.get("/protected")
.header("Authorization", "Bearer valid_token")
.send()
.await;
assert_eq!(response.status(), 200);
}
}
Best Practices
- Keep Tests Independent: Each test should set up its own app instance and state. `TestClient` is lightweight enough for this.
- Mock I/O: Use `MockServer` for HTTP, and in-memory implementations for databases (e.g., `sqlite::memory:`) or traits for logic.
- Test Edge Cases: Don’t just test the “happy path”. Test validation errors, 404s, and error handling.
File Uploads
Handling file uploads is a common requirement. RustAPI provides a Multipart extractor to parse multipart/form-data requests.
Dependencies
Add uuid and tokio with fs features to your Cargo.toml.
[dependencies]
rustapi-rs = "0.1.335"
tokio = { version = "1", features = ["fs", "io-util"] }
uuid = { version = "1", features = ["v4"] }
Buffered Upload Example
RustAPI’s Multipart extractor currently buffers the entire request body into memory before parsing. This makes it suitable for small to medium uploads (e.g., images, documents), but take care with very large files to avoid running out of RAM.
use rustapi_rs::prelude::*;
use rustapi_rs::extract::{Multipart, DefaultBodyLimit};
use std::path::Path;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Ensure uploads directory exists
tokio::fs::create_dir_all("./uploads").await?;
println!("Starting Upload Server at http://127.0.0.1:8080");
RustApi::new()
.route("/upload", post(upload_handler))
// Increase the body limit to 50MB (the default is usually 2MB)
// ⚠️ IMPORTANT: Since Multipart buffers the whole body,
// setting this too high can exhaust server memory.
.layer(DefaultBodyLimit::max(50 * 1024 * 1024))
.run("127.0.0.1:8080")
.await
}
#[derive(Serialize, Schema)]
struct UploadResponse {
message: String,
files: Vec<FileResult>,
}
#[derive(Serialize, Schema)]
struct FileResult {
original_name: String,
stored_name: String,
content_type: String,
}
async fn upload_handler(mut multipart: Multipart) -> Result<Json<UploadResponse>> {
let mut uploaded_files = Vec::new();
// Iterate over the fields in the multipart form
while let Some(field) = multipart.next_field().await.map_err(|_| ApiError::bad_request("Invalid multipart"))? {
// Skip fields that are not files
if !field.is_file() {
continue;
}
let file_name = field.file_name().unwrap_or("unknown.bin").to_string();
let content_type = field.content_type().unwrap_or("application/octet-stream").to_string();
// ⚠️ Security: Never trust the user-provided filename directly!
// It could contain paths like "../../../etc/passwd".
// Always generate a safe filename or sanitize inputs.
let safe_filename = format!("{}-{}", uuid::Uuid::new_v4(), file_name);
// Option 1: Use the helper method (sanitizes filename automatically)
// field.save_to("./uploads", Some(&safe_filename)).await.map_err(|e| ApiError::internal(e.to_string()))?;
// Option 2: Manual write (gives you full control)
let data = field.bytes().await.map_err(|e| ApiError::internal(e.to_string()))?;
let path = Path::new("./uploads").join(&safe_filename);
tokio::fs::write(&path, &data).await.map_err(|e| ApiError::internal(e.to_string()))?;
println!("Saved file: {} -> {:?}", file_name, path);
uploaded_files.push(FileResult {
original_name: file_name,
stored_name: safe_filename,
content_type,
});
}
Ok(Json(UploadResponse {
message: "Upload successful".into(),
files: uploaded_files,
}))
}
Key Concepts
1. Buffering
RustAPI loads the entire multipart/form-data body into memory.
- Pros: Simple API, easy to work with.
- Cons: High memory usage for concurrent large uploads.
- Mitigation: Set a reasonable `DefaultBodyLimit` (e.g., 10MB - 100MB) to prevent DoS attacks.
2. Body Limits
The default request body limit is small (2MB) to prevent attacks. You must explicitly increase this limit for file upload routes using .layer(DefaultBodyLimit::max(size_in_bytes)).
3. Security
- Path Traversal: Malicious users can send filenames like `../../system32/cmd.exe`. Always rename files or sanitize filenames strictly.
- Content Type Validation: The `Content-Type` header is client-controlled and can be spoofed. Do not rely on it for security checks (e.g., preventing `.php` execution).
- Executable Permissions: Store uploads in a directory where script execution is disabled.
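The “always rename or sanitize” advice can be sketched as a small helper: keep only the final path component and drop anything outside a conservative allow-list. This is an illustrative sketch, not a drop-in security solution:

```rust
// Strip directory components and any characters outside a conservative
// allow-list. Production code may also enforce length limits and an
// extension allow-list.
fn sanitize_filename(input: &str) -> String {
    // Keep only the part after the last '/' or '\' separator
    let base = input.rsplit(|c| c == '/' || c == '\\').next().unwrap_or(input);
    let cleaned: String = base
        .chars()
        .filter(|c| c.is_ascii_alphanumeric() || matches!(c, '.' | '-' | '_'))
        .collect();
    if cleaned.is_empty() { "unnamed.bin".to_string() } else { cleaned }
}

fn main() {
    assert_eq!(sanitize_filename("../../etc/passwd"), "passwd");
    assert_eq!(sanitize_filename("..\\..\\cmd.exe"), "cmd.exe");
}
```

Combined with the UUID prefix from the handler above, this makes the stored name both unique and path-safe.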
Testing with cURL
You can test this endpoint using curl:
curl -X POST http://localhost:8080/upload \
-F "file1=@./image.png" \
-F "file2=@./document.pdf"
Response:
{
"message": "Upload successful",
"files": [
{
"original_name": "image.png",
"stored_name": "550e8400-e29b-41d4-a716-446655440000-image.png",
"content_type": "image/png"
},
...
]
}
Background Jobs
RustAPI provides a robust background job processing system through the rustapi-jobs crate. This allows you to offload time-consuming tasks (like sending emails, processing images, or generating reports) from the main request/response cycle, keeping your API fast and responsive.
Setup
First, add rustapi-jobs to your Cargo.toml. Since rustapi-jobs is not re-exported by the main crate by default, you must include it explicitly.
[dependencies]
rustapi-rs = "0.1"
rustapi-jobs = "0.1"
serde = { version = "1.0", features = ["derive"] }
async-trait = "0.1"
tokio = { version = "1.0", features = ["full"] }
Defining a Job
A job consists of a data structure (the payload) and an implementation of the Job trait.
#![allow(unused)]
fn main() {
use rustapi_jobs::{Job, JobContext, Result};
use serde::{Deserialize, Serialize};
use async_trait::async_trait;
// 1. Define the job payload
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct WelcomeEmailData {
pub user_id: String,
pub email: String,
}
// 2. Define the job handler struct
#[derive(Clone)]
pub struct WelcomeEmailJob;
// 3. Implement the Job trait
#[async_trait]
impl Job for WelcomeEmailJob {
// Unique name for the job type
const NAME: &'static str = "send_welcome_email";
// The payload type
type Data = WelcomeEmailData;
async fn execute(&self, ctx: JobContext, data: Self::Data) -> Result<()> {
println!("Processing job {} (attempt {})", ctx.job_id, ctx.attempt);
println!("Sending welcome email to {} ({})", data.email, data.user_id);
// Simulate work
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
Ok(())
}
}
}
Registering and Running the Queue
In your main application setup, you need to:
- Initialize the backend (Memory, Redis, or Postgres).
- Create the `JobQueue`.
- Register your job handlers.
- Start the worker loop in a background task.
- Add the `JobQueue` to your application state so handlers can use it.
use rustapi_rs::prelude::*;
use rustapi_jobs::{JobQueue, InMemoryBackend};
// use crate::jobs::{WelcomeEmailJob, WelcomeEmailData}; // Import your job
#[tokio::main]
async fn main() -> std::io::Result<()> {
// 1. Initialize backend
// For production, use Redis or Postgres backend
let backend = InMemoryBackend::new();
// 2. Create queue
let queue = JobQueue::new(backend);
// 3. Register jobs
// You must register an instance of the job handler
queue.register_job(WelcomeEmailJob).await;
// 4. Start worker in background
let queue_for_worker = queue.clone();
tokio::spawn(async move {
if let Err(e) = queue_for_worker.start_worker().await {
eprintln!("Worker failed: {}", e);
}
});
// 5. Build application
RustApi::auto()
.state(queue) // Inject queue into state
.run("127.0.0.1:3000")
.await
}
Enqueueing Jobs
You can now inject the JobQueue into your request handlers using the State extractor and enqueue jobs.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_jobs::JobQueue;
#[rustapi_rs::post("/register")]
async fn register_user(
State(queue): State<JobQueue>,
Json(payload): Json<RegisterRequest>,
) -> Result<impl IntoResponse, ApiError> {
// ... logic to create user in DB ...
let user_id = "user_123".to_string(); // Simulated ID
// Enqueue the background job
// The queue will handle serialization and persistence
queue.enqueue::<WelcomeEmailJob>(WelcomeEmailData {
user_id,
email: payload.email,
}).await.map_err(|e| ApiError::internal(e.to_string()))?;
Ok(Json(json!({
"status": "registered",
"message": "Welcome email will be sent shortly"
})))
}
#[derive(Deserialize)]
struct RegisterRequest {
username: String,
email: String,
}
}
Resilience and Retries
rustapi-jobs handles failures automatically. If your execute method returns an Err, the job will be:
- Marked as failed.
- Optionally scheduled for retry with exponential backoff if retries are enabled.
- Retried up to `max_attempts` when you configure it via `EnqueueOptions`.
By default, `EnqueueOptions::new()` sets `max_attempts` to 0, so a failed job will not be retried; you must explicitly opt in by calling `.max_attempts(...)` with a positive value.
To customize retry behavior, use enqueue_opts:
#![allow(unused)]
fn main() {
use rustapi_jobs::EnqueueOptions;
queue.enqueue_opts::<WelcomeEmailJob>(
data,
EnqueueOptions::new()
.max_attempts(5) // Retry up to 5 times
.delay(std::time::Duration::from_secs(60)) // Initial delay
).await?;
}
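Exponential backoff means the retry delay roughly doubles with each failed attempt, starting from the initial delay. The schedule can be sketched like this (illustrative only — the crate’s exact formula, jitter, and caps may differ):

```rust
use std::time::Duration;

// Delay before the given retry attempt: initial * 2^(attempt - 1).
fn retry_delay(initial: Duration, attempt: u32) -> Duration {
    initial * 2u32.saturating_pow(attempt.saturating_sub(1))
}

fn main() {
    let initial = Duration::from_secs(60);
    assert_eq!(retry_delay(initial, 1), Duration::from_secs(60));
    assert_eq!(retry_delay(initial, 2), Duration::from_secs(120));
    assert_eq!(retry_delay(initial, 3), Duration::from_secs(240));
}
```

With `max_attempts(5)` and a 60-second initial delay, a persistently failing job would under this model be retried after roughly 1, 2, 4, and 8 minutes before being marked as permanently failed.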
Backends
While InMemoryBackend is great for testing and simple apps, production systems should use persistent backends:
- Redis: High performance, good for volatile queues. Enable the `redis` feature in `rustapi-jobs`.
- Postgres: Best for reliability and transactional safety. Enable the `postgres` feature.
# In Cargo.toml
rustapi-jobs = { version = "0.1", features = ["redis"] }
Custom Middleware
Problem: You need to execute code before or after every request (e.g., logging, authentication, metrics) or modify the response.
Solution
In RustAPI, the idiomatic way to implement custom middleware is by implementing the MiddlewareLayer trait. This trait provides a safe, asynchronous interface for inspecting and modifying requests and responses.
The MiddlewareLayer Trait
The trait is defined in rustapi_core::middleware:
pub trait MiddlewareLayer: Send + Sync + 'static {
fn call(
&self,
req: Request,
next: BoxedNext,
) -> Pin<Box<dyn Future<Output = Response> + Send + 'static>>;
fn clone_box(&self) -> Box<dyn MiddlewareLayer>;
}
Basic Example: Logging Middleware
Here is a simple middleware that logs the incoming request method and URI, calls the next handler, and then logs the response status.
#![allow(unused)]
fn main() {
use rustapi_core::middleware::{MiddlewareLayer, BoxedNext};
use rustapi_core::{Request, Response};
use std::pin::Pin;
use std::future::Future;
#[derive(Clone)]
pub struct SimpleLogger;
impl MiddlewareLayer for SimpleLogger {
fn call(
&self,
req: Request,
next: BoxedNext,
) -> Pin<Box<dyn Future<Output = Response> + Send + 'static>> {
// logic before handling request
let method = req.method().clone();
let uri = req.uri().clone();
println!("Incoming: {} {}", method, uri);
Box::pin(async move {
// call the next middleware/handler
let response = next(req).await;
// logic after handling request
println!("Completed: {} {} -> {}", method, uri, response.status());
response
})
}
fn clone_box(&self) -> Box<dyn MiddlewareLayer> {
Box::new(self.clone())
}
}
}
Applying Middleware
You can apply your custom middleware using .layer():
RustApi::new()
.layer(SimpleLogger)
.route("/", get(handler))
.run("127.0.0.1:8080")
.await?;
Advanced Patterns
Configuration
You can pass configuration to your middleware struct.
#![allow(unused)]
fn main() {
#[derive(Clone)]
pub struct RateLimitLayer {
max_requests: u32,
window_secs: u64,
}
impl RateLimitLayer {
pub fn new(max_requests: u32, window_secs: u64) -> Self {
Self { max_requests, window_secs }
}
}
// impl MiddlewareLayer for RateLimitLayer ...
}
Injecting State (Extensions)
Middleware can inject data into the request’s extensions, which can then be retrieved by handlers (e.g., via FromRequest extractors).
#![allow(unused)]
fn main() {
// In your middleware
fn call(&self, mut req: Request, next: BoxedNext) -> ... {
let user_id = "user_123".to_string();
req.extensions_mut().insert(user_id);
next(req)
}
// In your handler
async fn handler(req: Request) -> ... {
let user_id = req.extensions().get::<String>().unwrap();
// ...
}
}
Short-Circuiting (Authentication)
If a request fails validation (e.g., invalid token), you can return a response immediately without calling next(req).
#![allow(unused)]
fn main() {
fn call(&self, req: Request, next: BoxedNext) -> ... {
if !is_authorized(&req) {
return Box::pin(async {
http::Response::builder()
.status(401)
.body("Unauthorized".into())
.unwrap()
});
}
next(req)
}
}
Modifying the Response
You can inspect and modify the response returned by the handler.
#![allow(unused)]
fn main() {
let response = next(req).await;
let (mut parts, body) = response.into_parts();
parts.headers.insert("X-Custom-Header", "Value".parse().unwrap());
Response::from_parts(parts, body)
}
Advanced Middleware: Rate Limiting, Caching, and Deduplication
As your API grows, you’ll need to protect it from abuse and optimize performance. RustAPI provides a suite of advanced middleware in rustapi-extras to handle these concerns efficiently.
These patterns are essential for the “Enterprise Platform” learning path and high-traffic services.
Prerequisites
Add the rustapi-extras crate with the necessary features to your Cargo.toml.
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["full"] }
# OR cherry-pick features
# rustapi-extras = { version = "0.1.335", features = ["rate-limit", "dedup", "cache"] }
Rate Limiting
Rate limiting protects your API from being overwhelmed by too many requests from a single client. It uses a “Token Bucket” or “Fixed Window” algorithm to enforce limits.
How it works
The RateLimitLayer tracks request counts per IP address. When a limit is exceeded, it returns 429 Too Many Requests with a Retry-After header.
Usage
use rustapi_rs::prelude::*;
use rustapi_extras::rate_limit::RateLimitLayer;
use std::time::Duration;
fn main() {
let app = RustApi::new()
.layer(
RateLimitLayer::new(100, Duration::from_secs(60)) // 100 requests per minute
)
.route("/", get(handler));
// ... run app
}
The middleware automatically adds standard headers to responses:
- X-RateLimit-Limit: The maximum number of requests allowed.
- X-RateLimit-Remaining: The number of requests remaining in the current window.
- X-RateLimit-Reset: The timestamp when the window resets.
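The internals of RateLimitLayer are not shown here, but a fixed-window counter can be sketched in a few lines of plain Rust. Names like FixedWindow are illustrative, not part of rustapi-extras; time is passed in explicitly so the logic is easy to test.

```rust
use std::collections::HashMap;
use std::time::Duration;

/// Minimal fixed-window rate limiter, keyed by client IP.
pub struct FixedWindow {
    max_requests: u32,
    window: Duration,
    // ip -> (window start in ms, request count in that window)
    counters: HashMap<String, (u64, u32)>,
}

impl FixedWindow {
    pub fn new(max_requests: u32, window: Duration) -> Self {
        Self { max_requests, window, counters: HashMap::new() }
    }

    /// Returns true if the request is allowed at time `now_ms`.
    pub fn allow(&mut self, ip: &str, now_ms: u64) -> bool {
        let window_ms = self.window.as_millis() as u64;
        let entry = self.counters.entry(ip.to_string()).or_insert((now_ms, 0));
        if now_ms - entry.0 >= window_ms {
            // Window expired: start a fresh one.
            *entry = (now_ms, 0);
        }
        if entry.1 < self.max_requests {
            entry.1 += 1;
            true
        } else {
            false // would map to 429 Too Many Requests
        }
    }
}

fn main() {
    let mut limiter = FixedWindow::new(2, Duration::from_secs(60));
    assert!(limiter.allow("1.2.3.4", 0));
    assert!(limiter.allow("1.2.3.4", 10));
    assert!(!limiter.allow("1.2.3.4", 20)); // limit hit
    assert!(limiter.allow("1.2.3.4", 60_000)); // new window
    println!("rate limiter ok");
}
```

A production implementation would additionally evict idle IPs and guard the map with a lock or sharded counters.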
Request Deduplication
In distributed systems, clients may retry requests that have already been processed (e.g., due to network timeouts). Deduplication ensures that non-idempotent operations (like payments) are processed only once.
How it works
The DedupLayer checks for an Idempotency-Key header. If a request with the same key is seen within the TTL window, it returns 409 Conflict.
Usage
use rustapi_rs::prelude::*;
use rustapi_extras::dedup::DedupLayer;
use std::time::Duration;
fn main() {
let app = RustApi::new()
.layer(
DedupLayer::new()
.header_name("X-Idempotency-Key") // Optional: Custom header name
.ttl(Duration::from_secs(300)) // 5 minutes TTL
)
.route("/payments", post(payment_handler));
// ... run app
}
Clients should generate a unique UUID for each operation and send it in the Idempotency-Key header.
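The key-tracking logic behind DedupLayer can be illustrated with a small in-memory store in plain Rust (names are illustrative, not the rustapi-extras internals):

```rust
use std::collections::HashMap;

/// Minimal idempotency-key store: remembers keys for `ttl_ms`
/// and rejects duplicates seen inside that window.
pub struct DedupStore {
    ttl_ms: u64,
    seen: HashMap<String, u64>, // key -> first-seen timestamp (ms)
}

impl DedupStore {
    pub fn new(ttl_ms: u64) -> Self {
        Self { ttl_ms, seen: HashMap::new() }
    }

    /// Returns true if the key is fresh (process the request),
    /// false if it is a duplicate (respond 409 Conflict).
    pub fn check(&mut self, key: &str, now_ms: u64) -> bool {
        // Drop expired keys first so the map stays bounded.
        let ttl = self.ttl_ms;
        self.seen.retain(|_, first_seen| now_ms - *first_seen < ttl);
        if self.seen.contains_key(key) {
            false
        } else {
            self.seen.insert(key.to_string(), now_ms);
            true
        }
    }
}

fn main() {
    let mut store = DedupStore::new(300_000); // 5-minute TTL
    assert!(store.check("key-1", 0));
    assert!(!store.check("key-1", 1_000));  // retry within TTL -> 409
    assert!(store.check("key-1", 300_000)); // TTL elapsed -> fresh again
    println!("dedup ok");
}
```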
Response Caching
Caching can significantly reduce load on your servers by serving stored responses for identical requests.
How it works
The CacheLayer stores successful responses in memory based on the request method and URI. Subsequent requests are served from the cache until the TTL expires.
Usage
use rustapi_rs::prelude::*;
use rustapi_extras::cache::CacheLayer;
use std::time::Duration;
fn main() {
let app = RustApi::new()
.layer(
CacheLayer::new()
.ttl(Duration::from_secs(60)) // Cache for 60 seconds
.add_method("GET") // Cache GET requests
.add_method("HEAD") // Cache HEAD requests
)
.route("/heavy-computation", get(heavy_handler));
// ... run app
}
Cached responses include an X-Cache: HIT header. Original responses have X-Cache: MISS.
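Conceptually, the HIT/MISS decision is a TTL-bounded lookup keyed by method and URI. A plain-Rust sketch (illustrative only, not the CacheLayer implementation):

```rust
use std::collections::HashMap;

/// Minimal response cache keyed by (method, path) with a TTL.
pub struct ResponseCache {
    ttl_ms: u64,
    entries: HashMap<(String, String), (u64, String)>, // key -> (stored_at, body)
}

impl ResponseCache {
    pub fn new(ttl_ms: u64) -> Self {
        Self { ttl_ms, entries: HashMap::new() }
    }

    /// Returns Some(body) on a cache HIT, None on a MISS.
    pub fn get(&self, method: &str, path: &str, now_ms: u64) -> Option<&String> {
        self.entries
            .get(&(method.to_string(), path.to_string()))
            .filter(|(stored_at, _)| now_ms - stored_at < self.ttl_ms)
            .map(|(_, body)| body)
    }

    pub fn put(&mut self, method: &str, path: &str, body: &str, now_ms: u64) {
        self.entries
            .insert((method.to_string(), path.to_string()), (now_ms, body.to_string()));
    }
}

fn main() {
    let mut cache = ResponseCache::new(60_000);
    assert!(cache.get("GET", "/heavy", 0).is_none()); // X-Cache: MISS
    cache.put("GET", "/heavy", "result", 0);
    assert_eq!(cache.get("GET", "/heavy", 1_000).unwrap(), "result"); // X-Cache: HIT
    assert!(cache.get("GET", "/heavy", 60_000).is_none()); // TTL expired
    println!("cache ok");
}
```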
Combining Middleware
You can combine these layers to create a robust defense-in-depth strategy.
#![allow(unused)]
fn main() {
let app = RustApi::new()
// 1. Rate Limit (Outer): Reject excessive traffic first
.layer(RateLimitLayer::new(1000, Duration::from_secs(60)))
// 2. Deduplication: Prevent double-processing
.layer(DedupLayer::new())
// 3. Cache: Serve static/computed content quickly
.layer(CacheLayer::new().ttl(Duration::from_secs(30)))
.route("/", get(handler));
}
Note: Order matters! Placing Rate Limit first saves resources by rejecting requests before they hit the cache or application logic.
Real-time Chat (WebSockets)
WebSockets allow full-duplex communication between the client and server. RustAPI leverages the rustapi-ws crate (based on tungstenite and tokio) to make this easy.
Dependencies
[dependencies]
rustapi-ws = "0.1.335"
tokio = { version = "1", features = ["sync", "macros"] }
futures = "0.3"
The Upgrade Handler
WebSocket connections start as HTTP requests. We “upgrade” them using the WebSocket extractor.
#![allow(unused)]
fn main() {
use rustapi_ws::{WebSocket, WebSocketStream, Message};
use rustapi_rs::prelude::*;
use std::sync::Arc;
use tokio::sync::broadcast;
use futures::stream::StreamExt;
// Shared state for broadcasting messages to all connected clients
pub struct AppState {
pub tx: broadcast::Sender<String>,
}
async fn ws_handler(
ws: WebSocket,
State(state): State<Arc<AppState>>,
) -> impl IntoResponse {
// Finalize the upgrade and spawn the socket handler
ws.on_upgrade(|socket| handle_socket(socket, state))
}
}
Handling the Connection
#![allow(unused)]
fn main() {
async fn handle_socket(socket: WebSocketStream, state: Arc<AppState>) {
// Split the socket into a sender and receiver
let (mut sender, mut receiver) = socket.split();
// Subscribe to the global broadcast channel
let mut rx = state.tx.subscribe();
// Spawn a task to forward broadcast messages to this client
let mut send_task = tokio::spawn(async move {
while let Ok(msg) = rx.recv().await {
// If the client disconnects, this will fail and we break
if sender.send(Message::text(msg)).await.is_err() {
break;
}
}
});
// Handle incoming messages from THIS client
let mut recv_task = tokio::spawn(async move {
while let Some(Ok(msg)) = receiver.next().await {
match msg {
Message::Text(text) => {
println!("Received message: {}", text);
// Broadcast it to everyone else
let _ = state.tx.send(format!("User says: {}", text));
}
Message::Close(_) => break,
_ => {}
}
}
});
// Wait for either task to finish (disconnection)
tokio::select! {
_ = (&mut send_task) => recv_task.abort(),
_ = (&mut recv_task) => send_task.abort(),
};
}
}
Initialization
#[tokio::main]
async fn main() {
// Create a broadcast channel with capacity of 100 messages
let (tx, _rx) = broadcast::channel(100);
let state = Arc::new(AppState { tx });
let app = RustApi::new()
.state(state)
.route("/ws", get(ws_handler));
app.run("0.0.0.0:3000").await.unwrap();
}
Client-Side Testing
You can test the server directly with JavaScript in the browser console:
let ws = new WebSocket("ws://localhost:3000/ws");
// Wait for the connection to open before sending,
// otherwise send() throws an InvalidStateError.
ws.onopen = () => ws.send("Hello from JS!");
ws.onmessage = (event) => {
console.log("Message from server:", event.data);
};
Advanced Patterns
- User Authentication: Use the same AuthUser extractor in the ws_handler. If authentication fails, return an error before calling ws.on_upgrade.
- Ping/Pong: Browsers and load balancers kill idle connections. Implement a heartbeat mechanism to keep the connection alive. rustapi-ws handles low-level ping/pong frames automatically in many cases, but application-level pings are also robust.
Server-Side Rendering (SSR)
While RustAPI excels at building JSON APIs, it also supports server-side rendering using the rustapi-view crate, which leverages the Tera template engine (inspired by Jinja2).
Dependencies
Add the following to your Cargo.toml:
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["view"] }
serde = { version = "1.0", features = ["derive"] }
Creating Templates
Create a templates directory in your project root.
templates/base.html (The layout):
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>{% block title %}My App{% endblock %}</title>
</head>
<body>
<nav>
<a href="/">Home</a>
<a href="/about">About</a>
</nav>
<main>
{% block content %}{% endblock %}
</main>
<footer>
© 2026 RustAPI
</footer>
</body>
</html>
templates/index.html (The page):
{% extends "base.html" %}
{% block title %}Home - {{ app_name }}{% endblock %}
{% block content %}
<h1>Welcome, {{ user.name }}!</h1>
{% if user.is_admin %}
<p>You have admin privileges.</p>
{% endif %}
<h2>Latest Items</h2>
<ul>
{% for item in items %}
<li>{{ item }}</li>
{% endfor %}
</ul>
{% endblock %}
Handling Requests
In your main.rs, initialize the Templates engine and inject it into the application state. Handlers can then extract it using State<Templates>.
use rustapi_rs::prelude::*;
use rustapi_view::{View, Templates};
use serde::Serialize;
#[derive(Serialize)]
struct User {
name: String,
is_admin: bool,
}
#[derive(Serialize)]
struct HomeContext {
app_name: String,
user: User,
items: Vec<String>,
}
#[rustapi_rs::get("/")]
async fn index(templates: State<Templates>) -> View<HomeContext> {
let context = HomeContext {
app_name: "My Awesome App".to_string(),
user: User {
name: "Alice".to_string(),
is_admin: true,
},
items: vec!["Apple".to_string(), "Banana".to_string(), "Cherry".to_string()],
};
// Render the "index.html" template with the context
View::render(&templates, "index.html", context).await
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// 1. Initialize Template Engine
// Loads all .html files from the "templates" directory
let templates = Templates::new("templates/**/*.html")?;
// 2. Add to State
let app = RustApi::new()
.state(templates)
.route("/", get(index));
println!("Listening on http://localhost:3000");
app.run("0.0.0.0:3000").await.unwrap();
Ok(())
}
Template Reloading
In Debug mode (cargo run), rustapi-view automatically reloads templates from disk on every request. This means you can edit your .html files and refresh the browser to see changes instantly without recompiling.
In Release mode (cargo run --release), templates are compiled and cached for maximum performance.
Asset Serving
To serve CSS, JS, and images, use serve_static on the RustApi builder.
let app = RustApi::new()
.state(templates)
.route("/", get(index))
.serve_static("/assets", "assets"); // Serves files from ./assets at /assets
AI Integration
RustAPI offers native support for building AI-friendly APIs using the rustapi-toon crate. This allows you to serve optimized content for Large Language Models (LLMs) while maintaining standard JSON responses for traditional clients.
The Problem: Token Costs
LLMs like GPT-4, Claude, and Gemini charge by the token. Standard JSON is verbose, containing many structural characters (", :, {, }) that count towards this limit.
JSON (55 tokens):
[
{"id": 1, "role": "admin", "active": true},
{"id": 2, "role": "user", "active": true}
]
TOON (32 tokens):
users[2]{id,role,active}:
1,admin,true
2,user,true
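To see why the tabular form is smaller, here is a simplified, illustrative encoder in plain Rust: one header line declares the field names once, then each record is a comma-separated row. The actual TOON grammar (escaping, nesting, types) is defined by rustapi-toon; this only demonstrates the size advantage.

```rust
/// Encodes rows into a TOON-style table:
///   name[count]{field1,field2,...}:
///   v1,v2,...
fn to_toon(name: &str, fields: &[&str], rows: &[Vec<String>]) -> String {
    let mut out = format!("{}[{}]{{{}}}:\n", name, rows.len(), fields.join(","));
    for row in rows {
        out.push_str(&row.join(","));
        out.push('\n');
    }
    out
}

fn main() {
    let rows = vec![
        vec!["1".to_string(), "admin".to_string(), "true".to_string()],
        vec!["2".to_string(), "user".to_string(), "true".to_string()],
    ];
    let toon = to_toon("users", &["id", "role", "active"], &rows);
    assert!(toon.starts_with("users[2]{id,role,active}:"));

    let json = r#"[{"id":1,"role":"admin","active":true},{"id":2,"role":"user","active":true}]"#;
    // The tabular form avoids repeating key names for every record.
    assert!(toon.len() < json.len());
    println!("{}", toon);
}
```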
The Solution: Content Negotiation
RustAPI uses the Accept header to decide which format to return.
- Accept: application/json -> Returns JSON.
- Accept: application/toon -> Returns TOON.
- Accept: application/llm (custom) -> Returns TOON.
This is handled automatically by the LlmResponse<T> type.
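The negotiation itself reduces to a small match on the header value. A plain-Rust sketch (the real LlmResponse may also handle q-values and multi-valued Accept headers):

```rust
#[derive(Debug, PartialEq)]
enum Format {
    Json,
    Toon,
}

/// Maps an Accept header to an output format.
/// Unknown or missing values fall back to JSON, the safe
/// default for traditional clients.
fn negotiate(accept: Option<&str>) -> Format {
    match accept {
        Some("application/toon") | Some("application/llm") => Format::Toon,
        _ => Format::Json,
    }
}

fn main() {
    assert_eq!(negotiate(Some("application/json")), Format::Json);
    assert_eq!(negotiate(Some("application/toon")), Format::Toon);
    assert_eq!(negotiate(Some("application/llm")), Format::Toon);
    assert_eq!(negotiate(None), Format::Json);
    println!("negotiation ok");
}
```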
Dependencies
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["toon"] }
serde = { version = "1.0", features = ["derive"] }
Implementation
use rustapi_rs::prelude::*;
use rustapi_toon::LlmResponse; // Handles negotiation
use serde::Serialize;
#[derive(Serialize)]
struct User {
id: u32,
username: String,
role: String,
}
// Simple handler returning a list of users
#[rustapi_rs::get("/users")]
async fn get_users() -> LlmResponse<Vec<User>> {
let users = vec![
User { id: 1, username: "Alice".into(), role: "admin".into() },
User { id: 2, username: "Bob".into(), role: "editor".into() },
];
// LlmResponse automatically serializes to JSON or TOON
LlmResponse(users)
}
#[tokio::main]
async fn main() {
let app = RustApi::new().route("/users", get(get_users));
println!("Server running on http://127.0.0.1:3000");
app.run("127.0.0.1:3000").await.unwrap();
}
Testing
Standard Browser / Client:
curl http://localhost:3000/users
# Returns: [{"id":1,"username":"Alice",...}]
AI Agent / LLM:
curl -H "Accept: application/toon" http://localhost:3000/users
# Returns:
# users[2]{id,username,role}:
# 1,Alice,admin
# 2,Bob,editor
Providing Context to AI
When building an MCP (Model Context Protocol) server or simply feeding data to an LLM, use the TOON format to maximize the context window.
// Example: Generating a prompt with TOON data
let data = get_system_status().await;
let toon_string = rustapi_toon::to_string(&data).unwrap();
let prompt = format!(
"Analyze the following system status and report anomalies:\n\n{}",
toon_string
);
// Send `prompt` to OpenAI API...
Production Tuning
Problem: Your API needs to handle extreme load (10k+ requests per second).
Solution
1. Release Profile
Ensure Cargo.toml has optimal settings:
[profile.release]
lto = "fat"
codegen-units = 1
panic = "abort"
strip = true
2. Runtime Config
Configure the Tokio runtime for high throughput in main.rs:
fn main() {
    // #[tokio::main(worker_threads = ...)] only accepts an integer
    // literal, so build the runtime manually to size it dynamically:
    let rt = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(num_cpus::get())
        .enable_all()
        .build()
        .unwrap();
    rt.block_on(async {
        // ...
    });
}
3. File Descriptors (Linux)
Increase the limit before running:
ulimit -n 100000
Discussion
RustAPI is fast by default, but with default settings the OS often becomes the bottleneck. panic = "abort" reduces binary size and slightly improves performance by removing unwinding tables.
Response Compression
RustAPI supports automatic response compression (Gzip, Deflate, Brotli) via the CompressionLayer. This middleware negotiates the best compression algorithm based on the client’s Accept-Encoding header.
Dependencies
To use compression, you must enable the compression feature in rustapi-core (or rustapi-rs). For Brotli support, enable compression-brotli.
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["compression", "compression-brotli"] }
Basic Usage
The simplest way to enable compression is to add the layer to your application:
use rustapi_rs::prelude::*;
use rustapi_core::middleware::CompressionLayer;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
RustApi::new()
.layer(CompressionLayer::new())
.route("/", get(hello))
.run("127.0.0.1:8080")
.await
}
async fn hello() -> &'static str {
"Hello, World! This response will be compressed if the client supports it."
}
Configuration
You can customize the compression behavior using CompressionConfig:
use rustapi_rs::prelude::*;
use rustapi_core::middleware::{CompressionLayer, CompressionConfig};
#[tokio::main]
async fn main() -> Result<()> {
let config = CompressionConfig::new()
.min_size(1024) // Only compress responses larger than 1KB
.level(6) // Compression level (0-9)
.gzip(true) // Enable Gzip
.deflate(false) // Disable Deflate
.brotli(true) // Enable Brotli (if feature enabled)
.add_content_type("application/custom-json"); // Add custom type
RustApi::new()
.layer(CompressionLayer::with_config(config))
.route("/data", get(get_large_data))
.run("127.0.0.1:8080")
.await
}
Default Configuration
By default, CompressionLayer is configured with:
- min_size: 1024 bytes (1KB)
- level: 6
- gzip: enabled
- deflate: enabled
- brotli: enabled (if feature is present)
- content_types: text/*, application/json, application/javascript, application/xml, image/svg+xml
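These defaults imply a content-type check roughly like the following, where text/* acts as a wildcard prefix and the rest are exact matches. This is a sketch of the idea; the exact matching rules live in CompressionConfig.

```rust
/// Returns true if a response with this Content-Type should be compressed.
/// `extra_types` stands in for types added via add_content_type().
fn is_compressible(content_type: &str, extra_types: &[&str]) -> bool {
    // Strip parameters like "; charset=utf-8" before matching.
    let mime = content_type.split(';').next().unwrap_or("").trim();
    mime.starts_with("text/")
        || matches!(
            mime,
            "application/json" | "application/javascript" | "application/xml" | "image/svg+xml"
        )
        || extra_types.contains(&mime)
}

fn main() {
    assert!(is_compressible("text/html; charset=utf-8", &[]));
    assert!(is_compressible("application/json", &[]));
    assert!(!is_compressible("image/png", &[])); // already compressed
    assert!(is_compressible("application/custom-json", &["application/custom-json"]));
    println!("content-type check ok");
}
```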
Best Practices
1. Don’t Compress Already Compressed Data
Images (JPEG, PNG), Videos, and Archives (ZIP) are already compressed. Compressing them again wastes CPU cycles and might even increase the file size. The default configuration excludes most binary formats, but be careful with custom types.
2. Set Minimum Size
Compressing very small responses (e.g., “OK”) can actually make them larger due to framing overhead. The default 1KB threshold is a good starting point.
3. Order of Middleware
Compression should usually be one of the last layers added (outermost), so it compresses the final response after other middleware (like logging or headers) have run.
#![allow(unused)]
fn main() {
RustApi::new()
.layer(CompressionLayer::new()) // Runs last on response (first on request)
.layer(LoggingLayer::new()) // Runs before compression on response
}
Resilience Patterns
Building robust applications requires handling failures gracefully. RustAPI provides a suite of middleware to help your service survive partial outages, latency spikes, and transient errors.
These patterns are essential for the “Enterprise Platform” learning path and microservices architectures.
Prerequisites
Add the resilience features to your Cargo.toml. For example:
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["full"] }
# OR cherry-pick features
# rustapi-extras = { version = "0.1.335", features = ["circuit-breaker", "retry", "timeout"] }
Circuit Breaker
The Circuit Breaker pattern prevents your application from repeatedly trying to execute an operation that’s likely to fail. It gives the failing service time to recover.
How it works
- Closed: Requests flow normally.
- Open: After failure_threshold is reached, requests fail immediately with 503 Service Unavailable.
- Half-Open: After timeout passes, a limited number of test requests are allowed. If they succeed, the circuit closes.
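The three states form a small state machine, which can be sketched in plain Rust. Time handling and concurrency are omitted, and the names are illustrative rather than the rustapi-extras internals:

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum State { Closed, Open, HalfOpen }

/// Minimal circuit-breaker state machine.
struct Breaker {
    state: State,
    failure_threshold: u32,
    success_threshold: u32,
    failures: u32,
    successes: u32,
}

impl Breaker {
    fn new(failure_threshold: u32, success_threshold: u32) -> Self {
        Self { state: State::Closed, failure_threshold, success_threshold, failures: 0, successes: 0 }
    }

    fn on_failure(&mut self) {
        match self.state {
            State::Closed => {
                self.failures += 1;
                if self.failures >= self.failure_threshold {
                    self.state = State::Open; // start rejecting with 503
                }
            }
            // Any failure while probing re-opens the circuit.
            State::HalfOpen => self.state = State::Open,
            State::Open => {}
        }
    }

    fn on_success(&mut self) {
        if self.state == State::HalfOpen {
            self.successes += 1;
            if self.successes >= self.success_threshold {
                self.state = State::Closed;
                self.failures = 0;
                self.successes = 0;
            }
        }
    }

    /// Called when the recovery timeout elapses.
    fn on_timeout(&mut self) {
        if self.state == State::Open {
            self.state = State::HalfOpen;
            self.successes = 0;
        }
    }
}

fn main() {
    let mut b = Breaker::new(5, 2);
    for _ in 0..5 { b.on_failure(); }
    assert_eq!(b.state, State::Open);
    b.on_timeout();
    assert_eq!(b.state, State::HalfOpen);
    b.on_success();
    b.on_success();
    assert_eq!(b.state, State::Closed);
    println!("breaker ok");
}
```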
Usage
use rustapi_rs::prelude::*;
use rustapi_extras::circuit_breaker::CircuitBreakerLayer;
use std::time::Duration;
fn main() {
let app = RustApi::new()
.layer(
CircuitBreakerLayer::new()
.failure_threshold(5) // Open after 5 failures
.timeout(Duration::from_secs(30)) // Wait 30s before retrying
.success_threshold(2) // Require 2 successes to close
)
.route("/", get(handler));
// ... run app
}
Retry with Backoff
Transient failures (network blips, temporary timeouts) can often be resolved by simply retrying the request. The RetryLayer handles this automatically with configurable backoff strategies.
Strategies
- Exponential: base * 2^attempt (recommended for most cases)
- Linear: base * attempt
- Fixed: Constant delay
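Applying these formulas with a cap on the maximum delay looks like this in plain Rust (a sketch of the arithmetic, not the RetryLayer internals; here attempt is 0-based, so Linear uses attempt + 1 to start at base):

```rust
use std::time::Duration;

#[derive(Clone, Copy)]
enum Strategy { Exponential, Linear, Fixed }

/// Computes the delay before retry number `attempt` (0-based),
/// capped at `max`.
fn backoff(strategy: Strategy, base: Duration, max: Duration, attempt: u32) -> Duration {
    let delay = match strategy {
        Strategy::Exponential => base * 2u32.pow(attempt), // base * 2^attempt
        Strategy::Linear => base * (attempt + 1),          // base, 2*base, 3*base, ...
        Strategy::Fixed => base,
    };
    delay.min(max)
}

fn main() {
    let base = Duration::from_millis(100);
    let max = Duration::from_secs(5);
    assert_eq!(backoff(Strategy::Exponential, base, max, 0), Duration::from_millis(100));
    assert_eq!(backoff(Strategy::Exponential, base, max, 3), Duration::from_millis(800));
    assert_eq!(backoff(Strategy::Exponential, base, max, 10), max); // capped at max_backoff
    assert_eq!(backoff(Strategy::Linear, base, max, 2), Duration::from_millis(300));
    println!("backoff ok");
}
```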
Usage
use rustapi_rs::prelude::*;
use rustapi_extras::retry::{RetryLayer, RetryStrategy};
use std::time::Duration;
fn main() {
let app = RustApi::new()
.layer(
RetryLayer::new()
.max_attempts(3)
.initial_backoff(Duration::from_millis(100))
.max_backoff(Duration::from_secs(5))
.strategy(RetryStrategy::Exponential)
.retryable_statuses(vec![500, 502, 503, 504, 429])
)
.route("/", get(handler));
// ... run app
}
Warning: Be careful when combining retries with non-idempotent operations (like POST requests that charge a credit card). The middleware safely handles cloning requests, but your business logic must support being retried.
Timeouts
Never let a request hang indefinitely. The TimeoutLayer enforces a hard limit on request duration, returning 408 Request Timeout if exceeded.
Usage
use rustapi_rs::prelude::*;
use rustapi_extras::timeout::TimeoutLayer;
use std::time::Duration;
fn main() {
let app = RustApi::new()
// Fail if handler takes longer than 5 seconds
.layer(TimeoutLayer::from_secs(5))
.route("/", get(slow_handler));
// ... run app
}
Combining Layers (The Resilience Stack)
Order matters! Timeout should be the “outermost” constraint, followed by Circuit Breaker, then Retry.
In RustAPI (Tower) middleware, layers wrap around each other. The order you call .layer() wraps the previous service.
Recommended Order:
- Retry (Inner): Retries specific failures from the handler.
- Circuit Breaker (Middle): Stops retrying if the system is overloaded.
- Timeout (Outer): Enforces global time limit including all retries.
#![allow(unused)]
fn main() {
let app = RustApi::new()
// 1. Retry (handles transient errors)
.layer(RetryLayer::new())
// 2. Circuit Breaker (protects upstream)
.layer(CircuitBreakerLayer::new())
// 3. Timeout (applies to the whole operation)
.layer(TimeoutLayer::from_secs(10))
.route("/", get(handler));
}
Graceful Shutdown
Graceful shutdown allows your API to stop accepting new connections and finish processing active requests before terminating. This is crucial for avoiding data loss and ensuring a smooth deployment process.
Problem
When you stop a server (e.g., via CTRL+C or SIGTERM), you want to ensure that:
- The server stops listening on the port.
- Ongoing requests are allowed to complete.
- Resources (database connections, background jobs) are cleaned up properly.
Solution
RustAPI provides the run_with_shutdown method, which accepts a Future. When this future completes, the server initiates the shutdown process.
Basic Example (CTRL+C)
use rustapi_rs::prelude::*;
use tokio::signal;
#[tokio::main]
async fn main() -> Result<()> {
// 1. Define your application
let app = RustApi::new().route("/", get(hello));
// 2. Define the shutdown signal
let shutdown_signal = async {
signal::ctrl_c()
.await
.expect("failed to install CTRL+C handler");
};
// 3. Run with shutdown
println!("Server running... Press CTRL+C to stop.");
app.run_with_shutdown("127.0.0.1:3000", shutdown_signal).await?;
println!("Server stopped gracefully.");
Ok(())
}
async fn hello() -> &'static str {
// Simulate some work
tokio::time::sleep(std::time::Duration::from_secs(2)).await;
"Hello, World!"
}
Production Example (Unix Signals)
In a production environment (like Kubernetes or Docker), you need to handle SIGTERM as well as SIGINT.
use rustapi_rs::prelude::*;
use tokio::signal;
#[tokio::main]
async fn main() -> Result<()> {
let app = RustApi::new().route("/", get(hello));
app.run_with_shutdown("0.0.0.0:3000", shutdown_signal()).await?;
Ok(())
}
async fn shutdown_signal() {
let ctrl_c = async {
signal::ctrl_c()
.await
.expect("failed to install Ctrl+C handler");
};
#[cfg(unix)]
let terminate = async {
signal::unix::signal(signal::unix::SignalKind::terminate())
.expect("failed to install signal handler")
.recv()
.await;
};
#[cfg(not(unix))]
let terminate = std::future::pending::<()>();
tokio::select! {
_ = ctrl_c => println!("Received Ctrl+C"),
_ = terminate => println!("Received SIGTERM"),
}
}
Discussion
- Active Requests: RustAPI (via Hyper) will wait for active requests to complete.
- Timeout: You might want to wrap the server execution in a timeout if you want to force shutdown after a certain period (though Hyper usually handles connection draining well).
- Background Tasks: Tasks spawned with tokio::spawn are detached and will be aborted when the runtime shuts down. For critical background work, consider a dedicated job queue (like rustapi-jobs) or a CancellationToken to coordinate shutdown.
Audit Logging & Compliance
In many enterprise applications, maintaining a detailed audit trail is crucial for security, compliance (GDPR, SOC2), and troubleshooting. RustAPI provides a comprehensive audit logging system in rustapi-extras.
This recipe covers how to create, log, and query audit events.
Prerequisites
Add rustapi-extras with the audit feature to your Cargo.toml.
[dependencies]
rustapi-extras = { version = "0.1.335", features = ["audit"] }
Core Concepts
The audit system is built around three main components:
- AuditEvent: Represents a single action performed by a user or system.
- AuditStore: Interface for persisting events (e.g., InMemoryAuditStore, FileAuditStore).
- ComplianceInfo: Additional metadata for regulatory requirements.
Basic Usage
Log a simple event when a user is created.
use rustapi_extras::audit::{AuditEvent, AuditAction, InMemoryAuditStore, AuditStore};
#[tokio::main]
async fn main() {
// Initialize the store (could be FileAuditStore for persistence)
let store = InMemoryAuditStore::new();
// Create an event
let event = AuditEvent::new(AuditAction::Create)
.resource("users", "user-123") // Resource type & ID
.actor("admin@example.com") // Who performed the action
.ip_address("192.168.1.1".parse().unwrap())
.success(true); // Outcome
// Log it asynchronously
store.log(event);
// ... later, query events
let recent_logs = store.query().limit(10).execute().await;
println!("Recent logs: {:?}", recent_logs);
}
Compliance Features (GDPR & SOC2)
RustAPI’s audit system includes dedicated fields for compliance tracking.
GDPR Relevance
Events involving personal data can be flagged with legal basis and retention policies.
#![allow(unused)]
fn main() {
use rustapi_extras::audit::{ComplianceInfo, AuditEvent, AuditAction};
let compliance = ComplianceInfo::new()
.personal_data(true) // Involves PII
.data_subject("user-123") // The person the data belongs to
.legal_basis("consent") // Article 6 basis
.retention("30_days"); // Retention policy
let event = AuditEvent::new(AuditAction::Update)
.compliance(compliance)
.resource("profile", "user-123");
}
SOC2 Controls
Link events to specific security controls.
#![allow(unused)]
fn main() {
let compliance = ComplianceInfo::new()
.soc2_control("CC6.1"); // Access Control
let event = AuditEvent::new(AuditAction::Login)
.compliance(compliance)
.actor("employee@company.com");
}
Tracking Changes
For updates, it’s often useful to record what changed.
#![allow(unused)]
fn main() {
use rustapi_extras::audit::AuditChanges;
let changes = AuditChanges::new()
.field("email", "old@example.com", "new@example.com")
.field("role", "user", "admin");
let event = AuditEvent::new(AuditAction::Update)
.changes(changes)
.resource("users", "user-123");
}
Best Practices
- Log All Security Events: Logins (success/failure), permission changes, and API key management should always be audited.
- Include Context: Add a request_id or session_id to correlate logs with tracing data.
- Use Asynchronous Logging: The AuditStore is designed to be non-blocking. Use it in a background task or tokio::spawn if needed for heavy writes.
- Secure the Logs: Ensure that the storage backend (file, database) is protected from tampering.
Replay: Time-Travel Debugging
Record HTTP request/response pairs and replay them against different environments for debugging and regression testing.
Security Notice: The replay system is designed for development and staging environments only. See Security for details.
Quick Start
Add the replay feature to your Cargo.toml:
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["replay"] }
Add the ReplayLayer middleware to your application:
use rustapi_rs::prelude::*;
use rustapi_rs::replay::{ReplayLayer, InMemoryReplayStore};
use rustapi_core::replay::ReplayConfig;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let replay = ReplayLayer::new(
ReplayConfig::new()
.enabled(true)
.admin_token("my-secret-token")
.ttl_secs(3600)
);
RustApi::new()
.layer(replay)
.route("/api/users", get(list_users))
.run("127.0.0.1:8080")
.await
}
async fn list_users() -> Json<Vec<String>> {
Json(vec!["Alice".into(), "Bob".into()])
}
How It Works
- Record: The ReplayLayer middleware captures HTTP request/response pairs as they flow through your application
- Replay: Re-send a recorded request against any target URL
- Diff: Compare the replayed response against the original to detect regressions
Admin API
All admin endpoints require a bearer token in the Authorization header:
Authorization: Bearer <admin_token>
| Method | Path | Description |
|---|---|---|
| GET | /__rustapi/replays | List recorded entries |
| GET | /__rustapi/replays/{id} | Show a single entry |
| POST | /__rustapi/replays/{id}/run?target=URL | Replay against target |
| POST | /__rustapi/replays/{id}/diff?target=URL | Replay and compute diff |
| DELETE | /__rustapi/replays/{id} | Delete an entry |
Query Parameters for List
- limit - Maximum number of entries to return
- method - Filter by HTTP method (GET, POST, etc.)
- path - Filter by path substring
- status_min - Minimum status code filter
Example: cURL
# List entries
curl -H "Authorization: Bearer my-secret-token" \
http://localhost:8080/__rustapi/replays?limit=10
# Show a specific entry
curl -H "Authorization: Bearer my-secret-token" \
http://localhost:8080/__rustapi/replays/<id>
# Replay against staging
curl -X POST -H "Authorization: Bearer my-secret-token" \
"http://localhost:8080/__rustapi/replays/<id>/run?target=http://staging:8080"
# Replay and diff
curl -X POST -H "Authorization: Bearer my-secret-token" \
"http://localhost:8080/__rustapi/replays/<id>/diff?target=http://staging:8080"
CLI Usage
Install with the replay feature:
cargo install cargo-rustapi --features replay
Commands
# List recorded entries
cargo rustapi replay list -s http://localhost:8080 -t my-secret-token
# List with filters
cargo rustapi replay list -t my-secret-token --method GET --limit 20
# Show entry details
cargo rustapi replay show <id> -t my-secret-token
# Replay against a target URL
cargo rustapi replay run <id> -T http://staging:8080 -t my-secret-token
# Replay and diff
cargo rustapi replay diff <id> -T http://staging:8080 -t my-secret-token
The --token (-t) parameter can also be set via the RUSTAPI_REPLAY_TOKEN environment variable:
export RUSTAPI_REPLAY_TOKEN=my-secret-token
cargo rustapi replay list
Configuration
ReplayConfig
use rustapi_core::replay::ReplayConfig;
let config = ReplayConfig::new()
// Enable recording (default: false)
.enabled(true)
// Required: admin bearer token
.admin_token("my-secret-token")
// Max entries in store (default: 500)
.store_capacity(1000)
// Entry TTL in seconds (default: 3600 = 1 hour)
.ttl_secs(7200)
// Sampling rate 0.0-1.0 (default: 1.0 = all requests)
.sample_rate(0.5)
// Max request body capture size (default: 64KB)
.max_request_body(131_072)
// Max response body capture size (default: 256KB)
.max_response_body(524_288)
// Only record specific paths
.record_path("/api/users")
.record_path("/api/orders")
// Or skip specific paths
.skip_path("/health")
.skip_path("/metrics")
// Add headers to redact
.redact_header("x-custom-secret")
// Add body fields to redact
.redact_body_field("password")
.redact_body_field("ssn")
.redact_body_field("credit_card")
// Custom admin route prefix (default: "/__rustapi/replays")
.admin_route_prefix("/__admin/replays");
Default Redacted Headers
The following headers are redacted by default (values replaced with [REDACTED]):
- authorization
- cookie
- x-api-key
- x-auth-token
Body Field Redaction
JSON body fields are recursively redacted. For example, with .redact_body_field("password"):
// Before redaction
{"user": {"name": "alice", "password": "secret123"}}
// After redaction
{"user": {"name": "alice", "password": "[REDACTED]"}}
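The recursive walk behind this can be sketched with a tiny hand-rolled value type (the real middleware operates on actual JSON bodies; this only shows the recursion):

```rust
use std::collections::BTreeMap;

/// Tiny JSON-like value, just enough to demonstrate redaction.
#[derive(Debug, PartialEq, Clone)]
enum Value {
    Str(String),
    Obj(BTreeMap<String, Value>),
}

/// Replaces every field whose name is in `redacted` with "[REDACTED]",
/// recursing into nested objects.
fn redact(value: &mut Value, redacted: &[&str]) {
    if let Value::Obj(map) = value {
        for (key, child) in map.iter_mut() {
            if redacted.contains(&key.as_str()) {
                *child = Value::Str("[REDACTED]".to_string());
            } else {
                redact(child, redacted);
            }
        }
    }
}

fn main() {
    let mut user = BTreeMap::new();
    user.insert("name".to_string(), Value::Str("alice".to_string()));
    user.insert("password".to_string(), Value::Str("secret123".to_string()));
    let mut body = Value::Obj(BTreeMap::from([("user".to_string(), Value::Obj(user))]));

    redact(&mut body, &["password"]);

    if let Value::Obj(root) = &body {
        if let Some(Value::Obj(user)) = root.get("user") {
            assert_eq!(user.get("name"), Some(&Value::Str("alice".to_string())));
            assert_eq!(user.get("password"), Some(&Value::Str("[REDACTED]".to_string())));
        }
    }
    println!("redaction ok");
}
```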
Custom Store
File-System Store
For persistent storage across restarts:
use rustapi_rs::replay::{ReplayLayer, FsReplayStore, FsReplayStoreConfig};
use rustapi_core::replay::ReplayConfig;
let config = ReplayConfig::new()
.enabled(true)
.admin_token("my-secret-token");
let fs_store = FsReplayStore::new(FsReplayStoreConfig {
directory: "./replay-data".into(),
max_file_size: Some(10 * 1024 * 1024), // 10MB per file
create_if_missing: true,
});
let layer = ReplayLayer::new(config).with_store(fs_store);
Implementing a Custom Store
Implement the ReplayStore trait for custom backends (Redis, database, etc.):
use async_trait::async_trait;
use rustapi_core::replay::{
ReplayEntry, ReplayQuery, ReplayStore, ReplayStoreResult,
};
#[derive(Clone)] // required: clone_store below calls self.clone()
struct MyCustomStore {
    // your fields
}
#[async_trait]
impl ReplayStore for MyCustomStore {
async fn store(&self, entry: ReplayEntry) -> ReplayStoreResult<()> {
// Store the entry
Ok(())
}
async fn get(&self, id: &str) -> ReplayStoreResult<Option<ReplayEntry>> {
// Retrieve by ID
Ok(None)
}
async fn list(&self, query: &ReplayQuery) -> ReplayStoreResult<Vec<ReplayEntry>> {
// List with filtering
Ok(vec![])
}
async fn delete(&self, id: &str) -> ReplayStoreResult<bool> {
// Delete by ID
Ok(false)
}
async fn count(&self) -> ReplayStoreResult<usize> {
Ok(0)
}
async fn clear(&self) -> ReplayStoreResult<()> {
Ok(())
}
async fn delete_before(&self, timestamp_ms: u64) -> ReplayStoreResult<usize> {
// Delete entries older than timestamp
Ok(0)
}
fn clone_store(&self) -> Box<dyn ReplayStore> {
Box::new(self.clone())
}
}
Security
The replay system has multiple security layers built in:
- Disabled by default: Recording is off (`enabled: false`) until explicitly enabled
- Admin token required: All `/__rustapi/replays` endpoints require a valid bearer token. Requests without the token get a `401 Unauthorized` response
- Header redaction: `authorization`, `cookie`, `x-api-key`, and `x-auth-token` values are replaced with `[REDACTED]` before storage
- Body field redaction: Sensitive JSON fields (e.g., `password`, `ssn`) can be configured for redaction
- TTL enforcement: Entries are automatically deleted after the configured TTL (default: 1 hour)
- Body size limits: Request (64KB) and response (256KB) bodies are truncated to prevent memory issues
- Bounded storage: The in-memory store uses a ring buffer with FIFO eviction
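The bounded-storage point above amounts to a capacity-limited queue: once full, the oldest entry is evicted to make room for the newest. A stdlib-only sketch of that FIFO eviction (illustrative only; `BoundedStore` is a made-up type, not the crate's real in-memory store):

```rust
use std::collections::VecDeque;

/// Minimal FIFO-bounded store: at most `capacity` entries,
/// oldest evicted first — the ring-buffer behaviour described above.
struct BoundedStore {
    capacity: usize,
    entries: VecDeque<String>, // stand-in for full replay entry records
}

impl BoundedStore {
    fn new(capacity: usize) -> Self {
        Self { capacity, entries: VecDeque::new() }
    }

    fn push(&mut self, entry: String) {
        if self.entries.len() == self.capacity {
            self.entries.pop_front(); // evict the oldest entry
        }
        self.entries.push_back(entry);
    }
}

fn main() {
    let mut store = BoundedStore::new(2);
    store.push("req-1".into());
    store.push("req-2".into());
    store.push("req-3".into()); // evicts "req-1"
    assert_eq!(store.entries, ["req-2", "req-3"]);
    println!("eviction sketch ok");
}
```

This is also why the last recommendation matters: with a large `store_capacity`, the buffer holds that many full request/response pairs in memory before eviction kicks in.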
Recommendations:
- Use only in development/staging environments
- Use a strong, unique admin token
- Keep TTL short
- Add application-specific sensitive fields to the redaction list
- Monitor memory usage when using the in-memory store with large capacity values
Deployment
RustAPI includes built-in deployment tooling to help you ship your applications to production with ease. The cargo rustapi deploy command generates configuration files and provides instructions for various platforms.
Supported Platforms
- Docker: Generate a production-ready `Dockerfile`.
- Fly.io: Generate `fly.toml` and deploy instructions.
- Railway: Generate `railway.toml` and project setup.
- Shuttle.rs: Generate `Shuttle.toml` and setup instructions.
Usage
Docker
Generate a Dockerfile optimized for RustAPI applications:
cargo rustapi deploy docker
Options:
- `--output <path>`: Output path (default: `./Dockerfile`)
- `--rust-version <ver>`: Rust version (default: 1.78)
- `--port <port>`: Port to expose (default: 8080)
- `--binary <name>`: Binary name (default: package name)
Fly.io
Prepare your application for Fly.io:
cargo rustapi deploy fly
Options:
- `--app <name>`: Application name
- `--region <region>`: Fly.io region (default: iad)
- `--init_only`: Only generate config, don't show deployment steps
Railway
Prepare your application for Railway:
cargo rustapi deploy railway
Options:
- `--project <name>`: Project name
- `--environment <env>`: Environment name (default: production)
Shuttle.rs
Prepare your application for Shuttle.rs serverless deployment:
cargo rustapi deploy shuttle
Options:
- `--project <name>`: Project name
- `--init_only`: Only generate config
Note: Shuttle.rs requires some code changes to use their runtime macro `#[shuttle_runtime::main]`. The deploy command generates the configuration, but you will need to adjust your `main.rs` to use their attributes if you are deploying to their platform.
HTTP/3 (QUIC) Support
RustAPI supports HTTP/3 (QUIC), the next generation of the HTTP protocol, providing lower latency, better performance over unstable networks, and improved security.
Enabling HTTP/3
HTTP/3 support is optional and can be enabled via feature flags in Cargo.toml.
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["http3"] }
# For development with self-signed certificates
rustapi-rs = { version = "0.1.335", features = ["http3", "http3-dev"] }
Running an HTTP/3 Server
Since HTTP/3 requires TLS (even for local development), RustAPI provides helpers to make this easy.
Development (Self-Signed Certs)
For local development, you can use run_http3_dev which automatically generates self-signed certificates.
use rustapi_rs::prelude::*;
#[rustapi_rs::get("/")]
async fn hello() -> &'static str {
"Hello from HTTP/3!"
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Requires "http3-dev" feature
RustApi::auto()
.run_http3_dev("127.0.0.1:8080")
.await
}
Production (QUIC)
For production, you should provide valid certificates.
use rustapi_rs::prelude::*;
use rustapi_core::http3::Http3Config;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let config = Http3Config::new("cert.pem", "key.pem");
RustApi::auto()
.run_http3(config)
.await
}
Dual Stack (HTTP/1.1 + HTTP/3)
You can serve both HTTP/1.1 and HTTP/3 on the same port (via Alt-Svc header promotion) or different ports.
use rustapi_rs::prelude::*;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Run HTTP/1.1 on port 8080 and HTTP/3 on port 4433 (or same port if supported)
RustApi::auto()
.run_dual_stack("127.0.0.1:8080")
.await
}
How It Works
HTTP/3 in RustAPI is built on top of quinn and h3. When enabled:
- UDP Binding: The server binds to a UDP socket (in addition to TCP if dual-stack).
- TLS: QUIC requires TLS 1.3. RustAPI handles the TLS configuration.
- Optimization: Responses are optimized for QUIC streams.
Testing
You can test HTTP/3 support using curl with HTTP/3 support:
curl --http3 -k https://localhost:8080/
Or using online tools like http3check.net.
gRPC Integration
RustAPI allows you to seamlessly integrate gRPC services alongside your HTTP API, running both on the same Tokio runtime or even the same port (with proper multiplexing, though separate ports are simpler). We use the rustapi-grpc crate, which provides helpers for Tonic.
Dependencies
Add the following to your Cargo.toml:
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["grpc"] }
tonic = "0.10"
prost = "0.12"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
[build-dependencies]
tonic-build = "0.10"
Defining the Service (Proto)
Create a proto/helloworld.proto file:
syntax = "proto3";
package helloworld;
service Greeter {
rpc SayHello (HelloRequest) returns (HelloReply);
}
message HelloRequest {
string name = 1;
}
message HelloReply {
string message = 1;
}
The Build Script
In build.rs:
fn main() -> Result<(), Box<dyn std::error::Error>> {
tonic_build::compile_protos("proto/helloworld.proto")?;
Ok(())
}
Implementation
Here is how to run both servers concurrently with shared shutdown.
use rustapi_rs::prelude::*;
use rustapi_rs::grpc::{run_rustapi_and_grpc_with_shutdown, tonic};
use tonic::{Request, Response, Status};
// Import generated proto code (simplified for example)
pub mod hello_world {
tonic::include_proto!("helloworld");
}
use hello_world::greeter_server::{Greeter, GreeterServer};
use hello_world::{HelloReply, HelloRequest};
// --- gRPC Implementation ---
#[derive(Default)]
pub struct MyGreeter {}
#[tonic::async_trait]
impl Greeter for MyGreeter {
async fn say_hello(
&self,
request: Request<HelloRequest>,
) -> Result<Response<HelloReply>, Status> {
let name = request.into_inner().name;
let reply = hello_world::HelloReply {
message: format!("Hello {} from gRPC!", name),
};
Ok(Response::new(reply))
}
}
// --- HTTP Implementation ---
#[rustapi_rs::get("/health")]
async fn health() -> Json<&'static str> {
Json("OK")
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// 1. Define HTTP App
let http_app = RustApi::new().route("/health", get(health));
let http_addr = "0.0.0.0:3000";
// 2. Define gRPC Service
let grpc_addr = "0.0.0.0:50051".parse()?;
let greeter = MyGreeter::default();
println!("HTTP listening on http://{}", http_addr);
println!("gRPC listening on grpc://{}", grpc_addr);
// 3. Run both with shared shutdown (Ctrl+C)
run_rustapi_and_grpc_with_shutdown(
http_app,
http_addr,
tokio::signal::ctrl_c(),
move |shutdown| {
tonic::transport::Server::builder()
.add_service(GreeterServer::new(greeter))
.serve_with_shutdown(grpc_addr, shutdown)
},
).await?;
Ok(())
}
How It Works
- Shared Runtime: Both servers run on the same Tokio runtime, sharing thread pool resources efficiently.
- Graceful Shutdown: When `Ctrl+C` is pressed, `run_rustapi_and_grpc_with_shutdown` signals both the HTTP server and the gRPC server to stop accepting new connections and finish pending requests.
- Simplicity: You don't need to manually spawn tasks or manage channels for shutdown signals.
Advanced: Multiplexing
To run both HTTP and gRPC on the same port, you would typically use a library like tower to inspect the Content-Type header (application/grpc vs others) and route accordingly. However, running on separate ports (e.g., 8080 for HTTP, 50051 for gRPC) is standard practice in Kubernetes and most deployment environments.
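The inspection step described above is a simple prefix check: gRPC requests carry a `Content-Type` that begins with `application/grpc` (e.g. `application/grpc+proto`). A plain-Rust sketch of just the routing decision — the actual wiring through tower services is more involved:

```rust
/// Which backend should handle an incoming request, judged solely
/// by its Content-Type header.
#[derive(Debug, PartialEq)]
enum Backend {
    Grpc,
    Http,
}

fn route(content_type: Option<&str>) -> Backend {
    match content_type {
        // gRPC uses "application/grpc" plus optional suffixes
        // such as "application/grpc+proto".
        Some(ct) if ct.starts_with("application/grpc") => Backend::Grpc,
        _ => Backend::Http,
    }
}

fn main() {
    assert_eq!(route(Some("application/grpc+proto")), Backend::Grpc);
    assert_eq!(route(Some("application/json")), Backend::Http);
    assert_eq!(route(None), Backend::Http);
    println!("routing sketch ok");
}
```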
Automatic Status Page
RustAPI comes with a built-in, zero-configuration status page that gives you instant visibility into your application’s health and performance.
Enabling the Status Page
To enable the status page, simply call .status_page() on your RustApi builder:
use rustapi_rs::prelude::*;
#[rustapi_rs::main]
async fn main() -> Result<()> {
RustApi::auto()
.status_page() // <--- Enable Status Page
.run("127.0.0.1:8080")
.await
}
By default, the status page is available at /status.
Full Example
Here is a complete, runnable example that demonstrates how to set up the status page and generate some traffic to see the metrics in action.
You can find this example in crates/rustapi-rs/examples/status_demo.rs.
use rustapi_rs::prelude::*;
use std::time::Duration;
use tokio::time::sleep;
/// A simple demo to showcase the RustAPI Status Page.
///
/// Run with: `cargo run -p rustapi-rs --example status_demo`
/// Then verify:
/// - Status Page: http://127.0.0.1:3000/status
/// - Generate Traffic: http://127.0.0.1:3000/fast
/// - Generate Latency: http://127.0.0.1:3000/slow
/// - Generate Errors: http://127.0.0.1:3000/flaky
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// 1. Define some handlers to generate metrics
// A fast endpoint
async fn fast_handler() -> &'static str {
"Fast response!"
}
// A slow endpoint with random delay to show latency
async fn slow_handler() -> &'static str {
sleep(Duration::from_millis(500)).await;
"Slow response... sleepy..."
}
// An endpoint that sometimes fails
async fn flaky_handler() -> Result<&'static str, rustapi_rs::Response> {
use std::sync::atomic::{AtomicBool, Ordering};
static FAILURE: AtomicBool = AtomicBool::new(false);
// Toggle failure every call
let fail = FAILURE.fetch_xor(true, Ordering::Relaxed);
if !fail {
Ok("Success!")
} else {
Err(rustapi_rs::StatusCode::INTERNAL_SERVER_ERROR.into_response())
}
}
// 2. Build the app with status page enabled
println!("Starting Status Page Demo...");
println!(" -> Open http://127.0.0.1:3000/status to see the dashboard");
println!(" -> Visit http://127.0.0.1:3000/fast to generate traffic");
println!(" -> Visit http://127.0.0.1:3000/slow to generate latency");
println!(" -> Visit http://127.0.0.1:3000/flaky to generate errors");
RustApi::auto()
.status_page() // <--- Enable Status Page
.route("/fast", get(fast_handler))
.route("/slow", get(slow_handler))
.route("/flaky", get(flaky_handler))
.run("127.0.0.1:3000")
.await
}
Dashboard Overview
The status page provides a comprehensive real-time view of your system.
1. Global System Stats
At the top of the dashboard, you’ll see high-level metrics for the entire application:
- System Uptime: How long the server has been running.
- Total Requests: The aggregate number of requests served across all endpoints.
- Active Endpoints: The number of distinct routes that have received traffic.
- Auto-Refresh: The page automatically updates every 5 seconds, so you can keep it open on a second monitor.
2. Endpoint Metrics Grid
The main section is a detailed table showing granular performance data for every endpoint:
| Metric | Description |
|---|---|
| Endpoint | The path of the route (e.g., /api/users). |
| Requests | Total number of hits this specific route has received. |
| Success Rate | Visual indicator of health. 🟢 Green: ≥95% success 🔴 Red: <95% success |
| Avg Latency | The average time (in milliseconds) it takes to serve a request. |
| Last Access | Timestamp of the most recent request to this endpoint. |
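The table's columns reduce to a handful of counters per endpoint. A stdlib-only sketch of how such metrics can be accumulated (illustrative; the status page's real collector is internal to RustAPI, and `EndpointMetrics` is a made-up name):

```rust
/// Running metrics for one endpoint, mirroring the table's columns.
#[derive(Default)]
struct EndpointMetrics {
    requests: u64,
    successes: u64,
    total_latency_ms: u64,
}

impl EndpointMetrics {
    fn record(&mut self, latency_ms: u64, success: bool) {
        self.requests += 1;
        self.total_latency_ms += latency_ms;
        if success {
            self.successes += 1;
        }
    }

    fn success_rate(&self) -> f64 {
        if self.requests == 0 { return 100.0; }
        self.successes as f64 / self.requests as f64 * 100.0
    }

    fn avg_latency_ms(&self) -> f64 {
        if self.requests == 0 { return 0.0; }
        self.total_latency_ms as f64 / self.requests as f64
    }
}

fn main() {
    let mut m = EndpointMetrics::default();
    m.record(100, true);
    m.record(300, false);
    assert_eq!(m.avg_latency_ms(), 200.0);
    assert_eq!(m.success_rate(), 50.0);
    // 50% < 95%, so the dashboard would colour this endpoint red.
    println!("requests={} success_rate={}%", m.requests, m.success_rate());
}
```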
3. Visual Design
The dashboard is built with a “zero-dependency” philosophy. It renders a single, self-contained HTML page directly from the binary.
- Modern UI: Clean, card-based layout using system fonts.
- Responsive: Adapts perfectly to mobile and desktop screens.
- Lightweight: No external CSS/JS files to manage or load.
Custom Configuration
If you need more control, you can customize the path and title of the status page:
use rustapi_rs::prelude::*;
use rustapi_rs::status::StatusConfig;
#[rustapi_rs::main]
async fn main() -> Result<()> {
// Configure the status page
let config = StatusConfig::new()
.path("/admin/health") // Change URL to /admin/health
.title("Production Node 1"); // Custom title for easy identification
RustApi::auto()
.status_page_with_config(config)
.run("127.0.0.1:8080")
.await
}
Troubleshooting: Common Gotchas
This guide covers frequently encountered issues that can be confusing when working with RustAPI. If you’re stuck on a cryptic error, chances are the solution is here.
1. Missing Schema Derive on Extractor Types
Symptom:
error[E0277]: the trait bound `...: Handler<_>` is not satisfied
Problem:
#![allow(unused)]
fn main() {
#[derive(Debug, Deserialize)]
pub struct ListParams {
pub page: Option<u32>,
}
}
Solution:
Add the Schema derive macro to any struct used with extractors (Query<T>, Path<T>, Json<T>):
#![allow(unused)]
fn main() {
#[derive(Debug, Deserialize, Schema)] // ✅ Schema added
pub struct ListParams {
pub page: Option<u32>,
}
}
Why?
- RustAPI generates OpenAPI documentation automatically
- All extractors require the `T: RustApiSchema` trait bound
- The `Schema` derive macro implements this trait for you
2. Don’t Add External OpenAPI Generators Directly
Wrong:
[dependencies]
utoipa = "4.2" # ❌ Don't add this
Correct:
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["full"] }
# rustapi-openapi is re-exported through rustapi-rs
Why?
- RustAPI has its own OpenAPI implementation (`rustapi-openapi`)
- External OpenAPI derive/macros are not part of RustAPI's public API surface
- The `Schema` derive macro is already in `rustapi_rs::prelude::*`
3. Use rustapi_rs, Not Internal Crates
Symptom:
error[E0432]: unresolved import `rustapi_extras`
error[E0433]: failed to resolve: use of unresolved module `rustapi_core`
error[E0433]: failed to resolve: use of unresolved module `rustapi_macros`
Problem:
#![allow(unused)]
fn main() {
use rustapi_extras::SqlxErrorExt; // ❌ Old module name
use rustapi_core::RustApi; // ❌ Internal crate
use rustapi_macros::get; // ❌ Internal crate
}
Solution:
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*; // ✅ Everything you need
use rustapi_rs::SqlxErrorExt; // ✅ Correct path for extras
}
For macros:
#![allow(unused)]
fn main() {
// ❌ Wrong (doesn't work)
#[rustapi_macros::get("/")]
async fn index() -> &'static str { "Hello" }
// ✅ Correct
#[rustapi_rs::get("/")]
async fn index() -> &'static str { "Hello" }
}
Why?
- `rustapi_core`, `rustapi_macros`, `rustapi_extras` are internal implementation crates
- All public APIs are re-exported through the `rustapi-rs` facade crate
- This follows the Facade Architecture pattern for API stability
4. Don’t Use IntoParams or #[param(...)]
Wrong:
#![allow(unused)]
fn main() {
#[derive(Debug, Deserialize, IntoParams)] // ❌ IntoParams is from utoipa
pub struct ListParams {
#[param(minimum = 1)] // ❌ This attribute doesn't exist
pub page: Option<u32>,
}
}
Correct:
#![allow(unused)]
fn main() {
#[derive(Debug, Deserialize, Schema)] // ✅ Use Schema
pub struct ListParams {
/// Page number (1-indexed) // ✅ Doc comments become OpenAPI descriptions
pub page: Option<u32>,
}
}
For validation, use RustAPI’s built-in system:
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Debug, Deserialize, Validate, Schema)]
pub struct CreateTask {
#[validate(length(min = 1, max = 200))]
pub title: String,
#[validate(email)]
pub email: String,
}
// Use ValidatedJson for automatic validation
async fn create_task(
ValidatedJson(task): ValidatedJson<CreateTask>
) -> Result<Json<Task>> {
// Validation runs automatically, returns 422 on failure
Ok(Json(task))
}
}
5. serde_json::Value Has No Schema
Symptom:
error: the trait `RustApiSchema` is not implemented for `serde_json::Value`
Problem:
#![allow(unused)]
fn main() {
async fn handler() -> Json<serde_json::Value> { // ❌ No schema
Json(json!({ "key": "value" }))
}
}
Solution - Use a typed struct (recommended):
#![allow(unused)]
fn main() {
#[derive(Serialize, Schema)]
struct MyResponse {
key: String,
}
async fn handler() -> Json<MyResponse> { // ✅ Type-safe
Json(MyResponse {
key: "value".to_string(),
})
}
}
Why?
- `serde_json::Value` doesn't implement `RustApiSchema`
- OpenAPI spec requires concrete types for documentation
- Type-safe structs catch errors at compile time
6. DateTime<Utc> Has No Schema
Symptom:
error[E0277]: the trait bound `DateTime<Utc>: RustApiSchema` is not satisfied
Problem:
#![allow(unused)]
fn main() {
#[derive(Debug, Serialize, Schema)]
pub struct BookmarkResponse {
pub id: u64,
pub created_at: DateTime<Utc>, // ❌ No RustApiSchema impl
}
}
Solution - Use String with RFC3339 format:
#![allow(unused)]
fn main() {
#[derive(Debug, Serialize, Schema)]
pub struct BookmarkResponse {
pub id: u64,
pub created_at: String, // ✅ Use String
}
impl From<&Bookmark> for BookmarkResponse {
fn from(b: &Bookmark) -> Self {
Self {
id: b.id,
created_at: b.created_at.to_rfc3339(), // DateTime -> String
}
}
}
}
Alternative - Unix timestamp:
#![allow(unused)]
fn main() {
#[derive(Debug, Serialize, Schema)]
pub struct BookmarkResponse {
pub created_at: i64, // Unix timestamp (seconds)
}
}
Best Practice:
- Use `DateTime<Utc>` in your internal domain models
- Use `String` (RFC3339) in response DTOs
- Convert using `From`/`Into` traits
7. Generic Types Need Schema Trait Bounds
Symptom:
error[E0277]: the trait bound `T: RustApiSchema` is not satisfied
Problem:
#![allow(unused)]
fn main() {
#[derive(Debug, Serialize, Schema)]
pub struct PaginatedResponse<T> { // ❌ Missing trait bound
pub items: Vec<T>,
pub total: usize,
}
}
Solution:
#![allow(unused)]
fn main() {
use rustapi_openapi::schema::RustApiSchema;
#[derive(Debug, Serialize, Schema)]
pub struct PaginatedResponse<T: RustApiSchema> { // ✅ Trait bound added
pub items: Vec<T>,
pub total: usize,
pub page: u32,
pub limit: u32,
}
}
Alternative - Type aliases for concrete types:
#![allow(unused)]
fn main() {
pub type BookmarkList = PaginatedResponse<BookmarkResponse>;
pub type CategoryList = PaginatedResponse<CategoryResponse>;
async fn list_bookmarks() -> Json<BookmarkList> {
// ...
}
}
8. impl IntoResponse Return Type Issues
Problem:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/")]
async fn handler() -> impl IntoResponse { // ❌ May cause Handler trait errors
Html("<h1>Hello</h1>")
}
}
Solution - Use concrete types:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/")]
async fn handler() -> Html<String> { // ✅ Concrete type
Html("<h1>Hello</h1>".to_string())
}
}
Common Response Types:
| Type | Use Case |
|---|---|
| `Html<String>` | HTML content |
| `Json<T>` | JSON response (`T` must impl `Schema`) |
| `String` | Plain text |
| `StatusCode` | Status code only |
| `(StatusCode, Json<T>)` | Status + JSON |
| `Result<T, ApiError>` | Fallible responses |
9. State Not Found at Runtime
Symptom:
panic: State not found in request extensions
Problem:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/users")]
async fn list_users(State(db): State<Database>) -> Json<Vec<User>> {
// ...
}
// main.rs
RustApi::auto()
// ❌ Forgot to add .state(...)
.run("0.0.0.0:8080")
.await
}
Solution:
#![allow(unused)]
fn main() {
RustApi::auto()
.state(database) // ✅ Add the state!
.run("0.0.0.0:8080")
.await
}
10. Extractor Order Matters
Rule: Body-consuming extractors (Json<T>, Body) must come last.
Wrong:
#![allow(unused)]
fn main() {
async fn handler(
Json(body): Json<CreateUser>, // ❌ Body extractor first
State(db): State<Database>,
) -> Result<Json<User>> { ... }
}
Correct:
#![allow(unused)]
fn main() {
async fn handler(
State(db): State<Database>, // ✅ Non-body extractors first
Query(params): Query<Params>,
Json(body): Json<CreateUser>, // ✅ Body extractor last
) -> Result<Json<User>> { ... }
}
Why?
- `State`, `Query`, `Path` extract from request parts (headers, URL)
- `Json`, `Body` consume the request body (which can only be read once)
Quick Checklist: Adding a New Handler
- Add `Schema` derive to all extractor structs (`Query<T>`, `Path<T>`, `Json<T>`)
- Add `Schema` derive to response structs
- Use `#[rustapi_rs::get/post/...]` macros (not `rustapi_macros`)
- Add validation with `Validate` derive if needed
- Register state with `.state(...)` on `RustApi`
- Put body extractors (`Json<T>`) last in the parameter list
- Run `cargo check` to verify
- Test in Swagger UI at `http://localhost:8080/docs`
The Golden Rules
- Add `Schema` derive to any struct used with extractors or responses
- Don't add external OpenAPI crates directly; `rustapi-openapi` is already included
- Import from `rustapi_rs` only; never use internal crates directly
- Use `RustApi::auto()` with handler macros for automatic route discovery
Follow these rules and you’ll have a smooth experience with RustAPI! 🚀
Learning & Examples
Welcome to the RustAPI learning resources! This section provides structured learning paths and links to comprehensive real-world examples to help you master the framework.
🎓 Structured Curriculum
New to RustAPI? Follow our step-by-step Structured Learning Path to go from beginner to production-ready.
📚 Learning Resources
Official Examples Repository
We maintain a comprehensive examples repository with 18 real-world projects demonstrating RustAPI’s full capabilities:
🔗 rustapi-rs-examples - Complete examples from hello-world to production microservices
Cookbook Internal Path
If you prefer reading through documentation first, follow this path through the cookbook:
- Foundations: Start with Handlers & Extractors and System Overview.
- Core Crates: Read about rustapi-core and rustapi-macros.
- Building Blocks: Try the Creating Resources recipe.
- Security: Implement JWT Authentication and CSRF Protection.
- Advanced: Explore Performance Tuning and HTTP/3.
- Background Jobs: Master rustapi-jobs for async processing.
Why Use the Examples Repository?
| Benefit | Description |
|---|---|
| Structured Learning | Progress from beginner → intermediate → advanced |
| Real-world Patterns | Production-ready implementations you can adapt |
| Feature Discovery | Find examples by the features you want to learn |
| AI-Friendly | Module-level docs help AI assistants understand your code |
🎯 Learning Paths
Choose a learning path based on your goals:
🚀 Path 1: REST API Developer
Build production-ready REST APIs with RustAPI.
| Step | Example | Skills Learned |
|---|---|---|
| 1 | hello-world | Basic routing, handlers, server setup |
| 2 | crud-api | CRUD operations, extractors, error handling |
| 3 | auth-api | JWT authentication, protected routes |
| 4 | middleware-chain | Custom middleware, logging, CORS |
| 5 | sqlx-crud | Database integration, async queries |
Related Cookbook Recipes:
🏗️ Path 2: Microservices Architect
Design and build distributed systems with RustAPI.
| Step | Example | Skills Learned |
|---|---|---|
| 1 | crud-api | Service fundamentals |
| 2 | middleware-chain | Cross-cutting concerns |
| 3 | rate-limit-demo | API protection, throttling |
| 4 | microservices | Service communication patterns |
| 5 | microservices-advanced | Service discovery, Consul integration |
| 6 | Service Mocking | Testing microservices with MockServer from rustapi-testing |
| 7 | Background jobs (conceptual) | Background processing with rustapi-jobs, Redis/Postgres backends |
Note: The Background jobs (conceptual) step refers to using the `rustapi-jobs` crate rather than a standalone example project.
Related Cookbook Recipes:
⚡ Path 3: Real-time Applications
Build interactive, real-time features with WebSockets.
| Step | Example | Skills Learned |
|---|---|---|
| 1 | hello-world | Framework basics |
| 2 | websocket | WebSocket connections, message handling |
| 3 | middleware-chain | Connection middleware |
| 4 | graphql-api | Subscriptions, real-time queries |
Related Cookbook Recipes:
🤖 Path 4: AI/LLM Integration
Build AI-friendly APIs with TOON format and MCP support.
| Step | Example | Skills Learned |
|---|---|---|
| 1 | crud-api | API fundamentals |
| 2 | toon-api | TOON format for LLM-friendly responses |
| 3 | mcp-server | Model Context Protocol implementation |
| 4 | proof-of-concept | Combining multiple AI features |
Related Cookbook Recipes:
🏢 Path 5: Enterprise Platform
Build robust, observable, and secure systems.
| Step | Feature | Description |
|---|---|---|
| 1 | Observability | Set up OpenTelemetry and Structured Logging |
| 2 | Resilience | Implement Circuit Breakers and Retries |
| 3 | Advanced Security | Add OAuth2 and Security Headers |
| 4 | Optimization | Configure Caching and Deduplication |
| 5 | Background Jobs | Implement Reliable Job Queues |
| 6 | Debugging | Set up Time-Travel Debugging |
| 7 | Reliable Testing | Master Mocking and Integration Testing |
Related Cookbook Recipes:
- rustapi-testing: The Auditor
- rustapi-extras: The Toolbox
- Time-Travel Debugging
- rustapi-jobs: The Workhorse
- Resilience Patterns
📦 Examples by Category
Getting Started
| Example | Description | Difficulty |
|---|---|---|
| hello-world | Minimal RustAPI server | ⭐ Beginner |
| crud-api | Complete CRUD operations | ⭐ Beginner |
Authentication & Security
| Example | Description | Difficulty |
|---|---|---|
| auth-api | JWT authentication flow | ⭐⭐ Intermediate |
| middleware-chain | Middleware composition | ⭐⭐ Intermediate |
| rate-limit-demo | API rate limiting | ⭐⭐ Intermediate |
Database Integration
| Example | Description | Difficulty |
|---|---|---|
| sqlx-crud | SQLx with PostgreSQL/SQLite | ⭐⭐ Intermediate |
| event-sourcing | Event sourcing patterns | ⭐⭐⭐ Advanced |
AI & LLM
| Example | Description | Difficulty |
|---|---|---|
| toon-api | TOON format responses | ⭐⭐ Intermediate |
| mcp-server | Model Context Protocol | ⭐⭐⭐ Advanced |
Real-time & GraphQL
| Example | Description | Difficulty |
|---|---|---|
| websocket | WebSocket chat example | ⭐⭐ Intermediate |
| graphql-api | GraphQL with async-graphql | ⭐⭐⭐ Advanced |
Production Patterns
| Example | Description | Difficulty |
|---|---|---|
| microservices | Basic service communication | ⭐⭐⭐ Advanced |
| microservices-advanced | Consul service discovery | ⭐⭐⭐ Advanced |
| serverless-lambda | AWS Lambda deployment | ⭐⭐⭐ Advanced |
🔧 Feature Matrix
Find examples by the RustAPI features they demonstrate:
| Feature | Examples |
|---|---|
| `#[get]`, `#[post]` macros | All examples |
| `State<T>` extractor | crud-api, auth-api, sqlx-crud |
| `Json<T>` extractor | crud-api, auth-api, graphql-api |
| `ValidatedJson<T>` | auth-api, crud-api |
| JWT (`extras-jwt` feature) | auth-api, microservices |
| CORS (`extras-cors` feature) | middleware-chain, auth-api |
| Rate Limiting | rate-limit-demo, auth-api |
| WebSockets (`protocol-ws` feature) | websocket, graphql-api |
| TOON (`protocol-toon` feature) | toon-api, mcp-server |
| OAuth2 (`oauth2-client`) | auth-api (extended) |
| Circuit Breaker | microservices |
| Replay (`extras-replay` feature) | microservices (conceptual) |
| OpenTelemetry (`otel`) | microservices-advanced |
| OpenAPI/Swagger | All examples |
🚦 Getting Started with Examples
Clone the Repository
git clone https://github.com/Tuntii/rustapi-rs-examples.git
cd rustapi-rs-examples
Run an Example
cd hello-world
cargo run
Test an Example
# Most examples have tests
cargo test
# Or use the TestClient
cd ../crud-api
cargo test
Explore the Structure
Each example includes:
- `README.md` - Detailed documentation with API endpoints
- `src/main.rs` - Entry point with server setup
- `src/handlers.rs` - Request handlers (where applicable)
- `Cargo.toml` - Dependencies and feature flags
- Tests demonstrating the TestClient
📖 Cross-Reference: Cookbook ↔ Examples
| Cookbook Recipe | Related Examples |
|---|---|
| Creating Resources | crud-api, sqlx-crud |
| JWT Authentication | auth-api |
| CSRF Protection | auth-api, middleware-chain |
| Database Integration | sqlx-crud, event-sourcing |
| File Uploads | file-upload (planned) |
| Custom Middleware | middleware-chain |
| Real-time Chat | websocket |
| Production Tuning | microservices-advanced |
| Resilience Patterns | microservices |
| Time-Travel Debugging | microservices |
| Deployment | serverless-lambda |
💡 Contributing Examples
Have a great example to share? We welcome contributions!
- Fork the rustapi-rs-examples repository
- Create your example following our structure guidelines
- Add comprehensive documentation in README.md
- Submit a pull request
Example Guidelines
- Include a clear README with prerequisites and API endpoints
- Add code comments explaining RustAPI-specific patterns
- Include working tests using `rustapi-testing`
- List the feature flags used
🔗 Additional Resources
- RustAPI GitHub - Framework source code
- API Reference - Generated documentation
- Feature Flags Reference - All available features
- Architecture Guide - How RustAPI works internally
💬 Need help? Open an issue in the examples repository or join our community discussions!
Structured Learning Path
This curriculum is designed to take you from a RustAPI beginner to an advanced user capable of building production-grade microservices.
Phase 1: Foundations
Goal: Build a simple CRUD API and understand the core request/response cycle.
Module 1: Introduction & Setup
- Prerequisites: Rust installed, basic Cargo knowledge.
- Reading: Installation, Project Structure.
- Task: Create a new project using `cargo rustapi new my-api`.
- Expected Output: A running server that responds to `GET /` with "Hello World".
- Pitfalls: Not enabling `tokio` features if setting up manually.
🛠️ Mini Project: “The Echo Server”
Create a new endpoint POST /echo that accepts any text body and returns it back to the client. This verifies your setup handles basic I/O correctly.
🧠 Knowledge Check
- What command scaffolds a new RustAPI project?
- Which feature flag is required for the async runtime?
- Where is the main entry point of the application typically located?
Module 2: Routing & Handlers
- Prerequisites: Module 1.
- Reading: Handlers & Extractors.
- Task: Create routes for `GET /users`, `POST /users`, `GET /users/{id}`.
- Expected Output: Endpoints that return static JSON data.
- Pitfalls: Forgetting to register routes in `main.rs` if not using auto-discovery.
🛠️ Mini Project: “The Calculator”
Create an endpoint GET /add?a=5&b=10 that returns {"result": 15}. This practices query parameter extraction and JSON responses.
🧠 Knowledge Check
- Which macro is used to define a GET handler?
- How do you return a JSON response from a handler?
- What is the return type of a typical handler function?
Module 3: Extractors
- Prerequisites: Module 2.
- Reading: Handlers & Extractors.
- Task: Use `Path`, `Query`, and `Json` extractors to handle dynamic input.
- Expected Output: `GET /users/{id}` returns the ID; `POST /users` echoes the JSON body.
- Pitfalls: Consuming the body twice (e.g., using `Json` and `Body` in the same handler).
🛠️ Mini Project: “The User Registry”
Create a POST /register endpoint that accepts a JSON body {"username": "...", "age": ...} and returns a welcome message using the username. Use the Json extractor.
🧠 Knowledge Check
- Which extractor is used for URL parameters like
/users/:id? - Which extractor parses the request body as JSON?
- Can you use multiple extractors in a single handler?
🏆 Phase 1 Capstone: “The Todo List API”
Objective: Build a simple in-memory Todo List API.
Requirements:
- `GET /todos`: List all todos.
- `POST /todos`: Create a new todo.
- `GET /todos/:id`: Get a specific todo.
- `DELETE /todos/:id`: Delete a todo.
- Use `State` to store the list in a `Mutex<Vec<Todo>>`.
Phase 2: Core Development
Goal: Add real logic, validation, and documentation.
Module 4: State Management
- Prerequisites: Phase 1.
- Reading: State Extractor.
- Task: Create an `AppState` struct with a `Mutex<Vec<User>>`. Inject it into handlers.
- Expected Output: A stateful API where POST adds a user and GET retrieves it (in-memory).
- Pitfalls: Using `std::sync::Mutex` instead of `tokio::sync::Mutex` in async code (though `std` is fine for simple data).
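The `Arc` + `Mutex` pattern from this module can be shown in plain std Rust, with OS threads standing in for concurrent request handlers. The `AppState`/`User` names mirror the task; the demo function is illustrative.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

#[derive(Debug)]
struct User {
    name: String,
}

// `Arc` lets every handler own a cheap clone pointing at the same allocation;
// `Mutex` serializes mutation of the Vec inside it.
#[derive(Clone)]
struct AppState {
    users: Arc<Mutex<Vec<User>>>,
}

fn add_user(state: &AppState, name: &str) {
    state.users.lock().unwrap().push(User { name: name.to_string() });
}

fn user_count(state: &AppState) -> usize {
    state.users.lock().unwrap().len()
}

// Spawn four "handlers" that all write to the shared state, then count.
fn demo_concurrent_writes() -> usize {
    let state = AppState { users: Arc::new(Mutex::new(Vec::new())) };
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let s = state.clone(); // clone the Arc, not the data
            thread::spawn(move || add_user(&s, &format!("user{i}")))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    user_count(&state)
}
```

Without the `Arc`, each thread would need its own copy of the Vec; without the `Mutex`, the compiler would reject the shared mutation outright.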
🧠 Knowledge Check
- How do you inject global state into the application?
- Which extractor retrieves the application state?
- Why should you use `Arc` for shared state?
Module 4.5: Database Integration
- Prerequisites: Module 4.
- Reading: Database Integration.
- Task: Replace the in-memory `Mutex<Vec<User>>` with a PostgreSQL connection pool (`sqlx::PgPool`).
- Expected Output: Data persists across server restarts.
- Pitfalls: Blocking the async runtime with synchronous DB drivers (use `sqlx` or `tokio-postgres`).
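Why pooling matters can be shown with a toy, synchronous pool: connections are created once up front and checked out and back in, instead of paying the TCP + TLS + auth setup cost on every request. `FakeConn` is a stand-in; `sqlx::PgPool` plays this role for real, adding async waiting, health checks, and timeouts.

```rust
use std::sync::{Arc, Mutex};

#[derive(Debug)]
struct FakeConn {
    id: u32,
}

// A minimal checkout/checkin pool over a fixed set of connections.
struct Pool {
    idle: Arc<Mutex<Vec<FakeConn>>>,
}

impl Pool {
    fn new(size: u32) -> Self {
        let conns = (0..size).map(|id| FakeConn { id }).collect();
        Pool { idle: Arc::new(Mutex::new(conns)) }
    }

    // Check a connection out; `None` means the pool is exhausted
    // (a real pool would wait or time out here instead).
    fn acquire(&self) -> Option<FakeConn> {
        self.idle.lock().unwrap().pop()
    }

    // Return a connection so other callers can reuse it.
    fn release(&self, conn: FakeConn) {
        self.idle.lock().unwrap().push(conn);
    }

    fn idle_count(&self) -> usize {
        self.idle.lock().unwrap().len()
    }
}
```

This also answers the "share a DB pool across handlers" question: the pool itself lives in application state behind an `Arc`, and handlers borrow connections from it.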
🧠 Knowledge Check
- Why is connection pooling important?
- How do you share a DB pool across handlers?
- What is the benefit of compile-time query checking in SQLx?
Module 5: Validation
- Prerequisites: Module 4.
- Reading: Validation.
- Task: Add `#[derive(Validate)]` to your `User` struct. Use `ValidatedJson`.
- Expected Output: Requests with an invalid email or short password return `422 Unprocessable Entity`.
- Pitfalls: Forgetting to add `#[validate]` attributes to struct fields.
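Conceptually, the derive generates something like the hand-rolled check below: inspect each field and collect `(field, message)` pairs. An empty list means the payload is valid; a non-empty one is what gets rendered into the `422` body. The email rule here is deliberately loose and only for illustration.

```rust
struct NewUser {
    email: String,
    password: String,
}

// Hand-rolled stand-in for what `#[derive(Validate)]` generates:
// check each field and collect (field, message) pairs.
fn validate_user(user: &NewUser) -> Vec<(&'static str, &'static str)> {
    let mut errors = Vec::new();
    // Extremely loose email check, just for illustration.
    if !user.email.contains('@') {
        errors.push(("email", "must be a valid email address"));
    }
    if user.password.len() < 8 {
        errors.push(("password", "must be at least 8 characters"));
    }
    errors
}
```

The value of the derive is exactly that you never write this boilerplate by hand, and the rules live next to the fields they constrain.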
🧠 Knowledge Check
- Which trait must a struct implement to be validatable?
- What HTTP status code is returned on validation failure?
- How do you combine JSON extraction and validation?
Module 5.5: Error Handling
- Prerequisites: Module 5.
- Reading: Error Handling.
- Task: Create a custom `ApiError` enum and implement `IntoResponse`. Return robust error messages.
- Expected Output: `GET /users/999` returns `404 Not Found` with a structured JSON error body.
- Pitfalls: Exposing internal database errors (like SQL strings) to the client.
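A sketch of the `ApiError` pattern from the task: each variant maps to a status code and a client-safe message, and internal detail (for example a raw SQL error) never reaches the response body. The JSON shape shown is an illustrative assumption, not RustAPI's canonical error format.

```rust
enum ApiError {
    NotFound { resource: &'static str },
    // `detail` is what you would log server-side; clients never see it.
    Internal { detail: String },
}

impl ApiError {
    fn status(&self) -> u16 {
        match self {
            ApiError::NotFound { .. } => 404,
            ApiError::Internal { .. } => 500,
        }
    }

    // The body a real `IntoResponse` implementation would serialize.
    fn body(&self) -> String {
        match self {
            ApiError::NotFound { resource } => format!("{{\"error\": \"{resource} not found\"}}"),
            // Mask the detail: the client only learns that something failed.
            ApiError::Internal { .. } => "{\"error\": \"internal server error\"}".to_string(),
        }
    }
}
```

The masking in the `Internal` arm is the whole point of the pitfall above: the detail string is for your logs, never for the wire.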
🧠 Knowledge Check
- What is the standard error type in RustAPI?
- How do you mask internal errors in production?
- What is the purpose of the `error_id` field?
Module 6: OpenAPI & HATEOAS
- Prerequisites: Module 5.
- Reading: OpenAPI, OpenAPI Refs, Pagination Recipe.
- Task: Add `#[derive(Schema)]` to all DTOs. Use `#[derive(Schema)]` on a shared struct and reference it in multiple places.
- Expected Output: Swagger UI at `/docs` showing the full schema with shared components.
- Pitfalls: Recursive schemas without `Box` or `Option`.
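The pagination half of this module comes down to a little arithmetic plus HATEOAS-style links: given the requested page, the page size, and the total row count, compute the `self`/`prev`/`next` URLs. The query-parameter names below are illustrative, not a fixed RustAPI convention.

```rust
// Build HATEOAS-style pagination links for a collection endpoint.
fn page_links(base: &str, page: u64, per_page: u64, total: u64) -> Vec<(String, String)> {
    // Ceiling division: 45 rows at 10 per page means 5 pages.
    let last_page = if total == 0 { 1 } else { (total + per_page - 1) / per_page };
    let url = |p: u64| format!("{base}?page={p}&per_page={per_page}");
    let mut links = vec![("self".to_string(), url(page))];
    if page > 1 {
        links.push(("prev".to_string(), url(page - 1)));
    }
    if page < last_page {
        links.push(("next".to_string(), url(page + 1)));
    }
    links
}
```

Omitting `next` on the last page (and `prev` on the first) is what lets clients navigate without computing page counts themselves, which is the "why" behind HATEOAS in the knowledge check.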
🧠 Knowledge Check
- What does `#[derive(Schema)]` do?
- How does RustAPI handle shared schema components?
- What is HATEOAS and why is it useful?
Module 6.5: File Uploads & Multipart
- Prerequisites: Module 6.
- Reading: File Uploads.
- Task: Create an endpoint `POST /upload` that accepts a file and saves it to disk.
- Expected Output: `curl -F file=@image.png` uploads the file.
- Pitfalls: Loading large files entirely into memory (use streaming).
🧠 Knowledge Check
- Which extractor is used for file uploads?
- Why should you use `field.chunk()` instead of `field.bytes()`?
- How do you increase the request body size limit?
🏆 Phase 2 Capstone: “The Secure Blog Engine”
Objective: Enhance the Todo API into a Blog Engine. Requirements:
- Add a `Post` resource with title, content, and author.
- Validate that titles are not empty and content is at least 10 chars.
- Add pagination to `GET /posts`.
- Enable Swagger UI to visualize the API.
Phase 3: Advanced Features
Goal: Security, Real-time, and Production readiness.
Module 7: Authentication (JWT & OAuth2)
- Prerequisites: Phase 2.
- Reading: JWT Auth Recipe, OAuth2 Client.
- Task:
  - Implement a login route that returns a JWT.
  - Protect user routes with the `AuthUser` extractor.
  - (Optional) Implement “Login with Google” using `OAuth2Client`.
- Expected Output: Protected routes return `401 Unauthorized` without a valid token.
- Pitfalls: Hardcoding secrets. Not checking token expiration.
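The "not checking token expiration" pitfall, isolated: compare the JWT `exp` claim (seconds since the Unix epoch) against the current time, with optional leeway for clock skew. This is only the expiry check; signature verification is separate, and libraries such as `jsonwebtoken` perform both during validation, so never disable that.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// An expired token is one whose `exp` claim is in the past,
// beyond the allowed clock-skew leeway.
fn token_is_expired(exp: u64, now: u64, leeway_secs: u64) -> bool {
    now > exp + leeway_secs
}

// Current time as a Unix timestamp, the unit `exp` is expressed in.
fn unix_now() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
}
```

Passing `now` in as a parameter (rather than calling `unix_now()` inside) keeps the check deterministic and testable.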
🧠 Knowledge Check
- What is the role of the `AuthUser` extractor?
- How does OAuth2 PKCE improve security?
- Where should you store the JWT secret?
Module 8: Advanced Middleware
- Prerequisites: Module 7.
- Reading: Advanced Middleware.
- Task:
  - Apply `RateLimitLayer` to your login endpoint (10 requests/minute).
  - Add `DedupLayer` to a payment endpoint.
  - Cache the response of a public “stats” endpoint.
- Expected Output: Sending 11 login attempts results in `429 Too Many Requests`.
- Pitfalls: Caching responses that contain user-specific data.
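The bookkeeping inside a rate-limit layer can be sketched as a fixed-window counter: allow at most `limit` requests per client per window, and reject the rest with a `429`. Real middleware also evicts stale entries and often prefers sliding windows or token buckets; this is a conceptual stand-in, not `RateLimitLayer`'s actual implementation.

```rust
use std::collections::HashMap;

struct RateLimiter {
    limit: u32,
    window_secs: u64,
    // client key -> (window index, request count in that window)
    seen: HashMap<String, (u64, u32)>,
}

impl RateLimiter {
    fn new(limit: u32, window_secs: u64) -> Self {
        RateLimiter { limit, window_secs, seen: HashMap::new() }
    }

    // `now` (seconds) is passed in so the logic is deterministic and testable.
    // Returns true if the request is allowed, false if it should get a 429.
    fn allow(&mut self, client: &str, now: u64) -> bool {
        let window = now / self.window_secs;
        let entry = self.seen.entry(client.to_string()).or_insert((window, 0));
        if entry.0 != window {
            *entry = (window, 0); // new window: reset the counter
        }
        if entry.1 < self.limit {
            entry.1 += 1;
            true
        } else {
            false
        }
    }
}
```

With `limit = 10` and a 60-second window this reproduces the expected output above: the 11th login attempt in the same minute is refused.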
🧠 Knowledge Check
- What header indicates when the rate limit resets?
- Why is request deduplication important for payments?
- Which requests are typically safe to cache?
Module 9: WebSockets & Real-time
- Prerequisites: Phase 2.
- Reading: WebSockets Recipe.
- Task: Create a chat endpoint where users can broadcast messages.
- Expected Output: Multiple clients connected via WS receiving messages in real-time.
- Pitfalls: Blocking the WebSocket loop with long-running synchronous tasks.
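The heart of the chat task is fan-out broadcasting: every connected client holds a receiver, and a message sent to the hub is delivered to all of them. The synchronous sketch below uses std channels so it stands alone; in the async version this role is played by `tokio::sync::broadcast`, and the hub lives in shared application state.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

// A minimal broadcast hub: one sender handle per connected client.
struct ChatHub {
    clients: Vec<Sender<String>>,
}

impl ChatHub {
    fn new() -> Self {
        ChatHub { clients: Vec::new() }
    }

    // Register a client and hand back its end of the channel.
    fn join(&mut self) -> Receiver<String> {
        let (tx, rx) = channel();
        self.clients.push(tx);
        rx
    }

    // Send `msg` to every client, dropping ones whose receiver is gone
    // (i.e. disconnected WebSocket connections).
    fn broadcast(&mut self, msg: &str) {
        self.clients.retain(|tx| tx.send(msg.to_string()).is_ok());
    }

    fn client_count(&self) -> usize {
        self.clients.len()
    }
}
```

The `retain` in `broadcast` is the cleanup step: a disconnected client's channel errors on send, and the hub forgets it instead of accumulating dead senders.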
🧠 Knowledge Check
- How do you upgrade an HTTP request to a WebSocket connection?
- Can you share state between HTTP handlers and WebSocket handlers?
- What happens if a WebSocket handler panics?
Module 10: Production Readiness & Deployment
- Prerequisites: Phase 3.
- Reading: Production Tuning, Resilience, Deployment.
- Task:
  - Add `CompressionLayer` and `TimeoutLayer`.
  - Use `cargo rustapi deploy docker` to generate a Dockerfile.
- Expected Output: A resilient API ready for deployment.
- Pitfalls: Setting timeouts too low for slow operations.
🧠 Knowledge Check
- Why is timeout middleware important?
- What command generates a production Dockerfile?
- How do you enable compression for responses?
Module 11: Background Jobs & Testing
- Prerequisites: Phase 3.
- Reading: Background Jobs Recipe, Testing Strategy.
- Task:
  - Implement a job `WelcomeEmailJob` that sends a “Welcome” email (simulated with `tokio::time::sleep`).
  - Enqueue this job inside your `POST /register` handler.
  - Write an integration test using `TestClient` to verify the registration endpoint.
- Expected Output: Registration returns 200 immediately (low latency); console logs show “Sending welcome email to …” shortly after (asynchronous). Tests pass.
- Pitfalls: Forgetting to start the job worker loop (`JobWorker::new(queue).run().await`).
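The enqueue-then-return pattern can be sketched with a std thread standing in for the `JobWorker` loop: the handler pushes a job onto a channel and returns immediately, and the worker drains the channel in the background. The `Job` enum and function names are illustrative, not the `rustapi-jobs` API.

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

#[derive(Debug)]
enum Job {
    WelcomeEmail { to: String },
}

// Spawn the worker and return the queue handle plus a join handle that
// yields the worker's log once all senders are dropped.
fn start_worker() -> (Sender<Job>, thread::JoinHandle<Vec<String>>) {
    let (tx, rx) = channel::<Job>();
    // Worker loop: runs until every Sender is dropped. Forgetting to start
    // this loop is exactly the pitfall called out above.
    let handle = thread::spawn(move || {
        let mut log = Vec::new();
        for job in rx {
            match job {
                Job::WelcomeEmail { to } => log.push(format!("Sending welcome email to {to}")),
            }
        }
        log
    });
    (tx, handle)
}
```

The registration handler's only job-related cost is the `send`, which is why it can return `200` with low latency while the email work happens later.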
🛠️ Mini Project: “The Email Worker”
Create a system where users can request a “Report”.
- `POST /reports`: Enqueues a `GenerateReportJob`. Returns `{"job_id": "..."}` immediately.
- The job simulates 5 seconds of work and then writes “Report Generated” to a file or log.
- (Bonus) Use Redis backend for persistence.
🧠 Knowledge Check
- Why should you offload email sending to a background job?
- Which backend is suitable for local development vs production?
- How do you enqueue a job from a handler?
- How can you test that a job was enqueued without actually running it?
🏆 Phase 3 Capstone: “The Real-Time Collaboration Tool”
Objective: Build a real-time collaborative note-taking app. Requirements:
- Auth: Users must log in (JWT or OAuth2) to edit notes.
- Real-time: Changes to a note are broadcast to all viewers via WebSockets.
- Jobs: When a note is deleted, schedule a background job to archive it (simulate archive).
- Resilience: Rate limit API requests to prevent abuse.
- Deployment: Specify a `Dockerfile` for the application.
Phase 4: Enterprise Scale
Goal: Build observable, resilient, and high-performance distributed systems.
Module 12: Observability & Auditing
- Prerequisites: Phase 3.
- Reading: Observability (Extras), Audit Logging.
- Task:
  - Enable `structured-logging` and `otel`.
  - Configure tracing to export spans.
  - Implement `AuditStore` and log a “User Login” event with IP address.
- Expected Output: Logs are JSON formatted. Audit log contains a new entry for every login.
- Pitfalls: High cardinality in metric labels.
🧠 Knowledge Check
- What is the difference between logging and auditing?
- Which fields are required in an `AuditEvent`?
- How does structured logging aid debugging?
Module 13: Resilience & Security
- Prerequisites: Phase 3.
- Reading: Resilience Patterns, Time-Travel Debugging.
- Task:
  - Wrap an external API call with a `CircuitBreaker`.
  - Implement `RetryLayer` for transient failures.
  - (Optional) Use `ReplayLayer` to record and replay a tricky bug scenario.
- Expected Output: System degrades gracefully when external service is down. Replay file captures the exact request sequence.
- Pitfalls: Infinite retry loops or retrying non-idempotent operations.
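The circuit-breaker idea reduces to a small state machine: after `threshold` consecutive failures the breaker opens and short-circuits calls; a later success closes it again. This is a minimal sketch, not the `CircuitBreaker` from the reading; real implementations add an open-state cooldown timer and a half-open probing phase.

```rust
#[derive(Debug, PartialEq)]
enum State {
    Closed,
    Open,
}

struct CircuitBreaker {
    state: State,
    failures: u32,
    threshold: u32,
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        CircuitBreaker { state: State::Closed, failures: 0, threshold }
    }

    // Should the next call be attempted at all?
    fn allow_request(&self) -> bool {
        self.state == State::Closed
    }

    fn record_success(&mut self) {
        self.failures = 0;
        self.state = State::Closed;
    }

    fn record_failure(&mut self) {
        self.failures += 1;
        if self.failures >= self.threshold {
            self.state = State::Open; // stop sending traffic downstream
        }
    }
}
```

Short-circuiting while open is what produces the graceful degradation in the expected output: the dependency gets time to recover instead of being hammered by retries.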
🧠 Knowledge Check
- What state does a Circuit Breaker have when it stops traffic?
- Why is jitter important in retry strategies?
- How does Time-Travel Debugging help with “Heisenbugs”?
Module 14: High Performance
- Prerequisites: Phase 3.
- Reading: HTTP/3 (QUIC), Performance Tuning, Compression.
- Task:
  - Enable the `http3` feature and generate self-signed certs.
  - Serve traffic over QUIC.
  - Add `CompressionLayer` to compress large responses.
- Expected Output: Browser/Client connects via HTTP/3. Responses have `content-encoding: gzip`.
- Pitfalls: Compressing small responses (a waste of CPU) or already compressed data (images).
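Both compression pitfalls can be captured in a single predicate: skip bodies too small to be worth the gzip header overhead, and skip content types that are already compressed. The 1 KiB threshold and the type list below are illustrative defaults, not RustAPI's actual configuration.

```rust
// Decide whether a response body is worth gzipping.
fn should_compress(content_type: &str, content_length: usize) -> bool {
    const MIN_SIZE: usize = 1024; // below this, gzip overhead outweighs savings
    let already_compressed = matches!(
        content_type,
        "image/jpeg" | "image/png" | "image/webp" | "video/mp4"
            | "application/zip" | "application/gzip"
    );
    content_length >= MIN_SIZE && !already_compressed
}
```

This is also the answer to the JPEG question in the knowledge check: the format is already entropy-coded, so gzip burns CPU for near-zero size reduction.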
🧠 Knowledge Check
- What transport protocol does HTTP/3 use?
- How does `simd-json` improve performance?
- Why shouldn’t you compress JPEG images?
🏆 Phase 4 Capstone: “The High-Scale Event Platform”
Objective: Architect a system capable of handling thousands of events per second. Requirements:
- Ingestion: HTTP/3 endpoint receiving JSON events.
- Processing: Push events to a `rustapi-jobs` queue (Redis backend).
- Storage: Workers process events and store aggregates in a database.
- Observability: Full tracing from ingestion to storage.
- Audit: Log all configuration changes to the system.
- Resilience: Circuit breakers on database writes.
- Testing: Load test the ingestion endpoint (e.g., with k6 or similar) and observe metrics.
Phase 5: Specialized Skills
Goal: Master integration with AI, gRPC, and server-side rendering.
Module 15: Server-Side Rendering (SSR)
- Prerequisites: Phase 2.
- Reading: SSR Recipe.
- Task: Create a dashboard showing system status using `rustapi-view`.
- Expected Output: An HTML page rendered with Tera templates, displaying dynamic data.
- Pitfalls: Forgetting to create the `templates/` directory.
🧠 Knowledge Check
- Which template engine does RustAPI use?
- How do you pass data to a template?
- How does template reloading work in debug mode?
Module 16: gRPC Microservices
- Prerequisites: Phase 3.
- Reading: gRPC Recipe.
- Task: Run a gRPC service alongside your HTTP API that handles internal user lookups.
- Expected Output: Both servers running; HTTP endpoint calls gRPC method (simulated).
- Pitfalls: Port conflicts if not configured correctly.
🧠 Knowledge Check
- Which crate provides gRPC helpers for RustAPI?
- Can HTTP and gRPC share the same Tokio runtime?
- Why might you want to run both in the same process?
Module 17: AI Integration (TOON)
- Prerequisites: Phase 2.
- Reading: AI Integration Recipe.
- Task: Create an endpoint that returns standard JSON for browsers but TOON for `Accept: application/toon`.
- Expected Output: `curl` requests with different headers return different formats.
- Pitfalls: Not checking the `Accept` header in client code.
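The content-negotiation step for this task: inspect the `Accept` header and pick a format. A substring check is enough for the exercise; production code would parse media ranges and q-values. This sketch is not the `LlmResponse` implementation from the reading, just the decision it has to make.

```rust
#[derive(Debug, PartialEq)]
enum Format {
    Json,
    Toon,
}

// Pick the response format from the (optional) Accept header.
fn negotiate(accept_header: Option<&str>) -> Format {
    match accept_header {
        Some(accept) if accept.contains("application/toon") => Format::Toon,
        _ => Format::Json, // browsers and unknown clients get JSON
    }
}
```

Defaulting to JSON keeps the endpoint usable from any browser while still letting AI agents opt in to the compact format.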
🧠 Knowledge Check
- What is TOON and why is it useful for LLMs?
- How does `LlmResponse` decide which format to return?
- How much token usage can TOON save on average?
🏆 Phase 5 Capstone: “The Intelligent Dashboard”
Objective: Combine SSR, gRPC, and AI features. Requirements:
- Backend: Retrieve stats via gRPC from a “worker” service.
- Frontend: Render a dashboard using SSR.
- AI Agent: Expose a TOON endpoint for an AI agent to query the system status.
Next Steps
- Explore the Examples Repository.
- Contribute a new recipe to the Cookbook!