RustAPI Cookbook
Welcome to the RustAPI Architecture Cookbook. This documentation is designed to be the single source of truth for the project’s philosophy, patterns, and practical implementation details.
Note
This is a living document. As our architecture evolves, so will this cookbook.
What is this?
This is not just API documentation. This is a collection of:
- Keynotes: High-level architectural decisions and “why” we made them.
- Patterns: The repeated structures (like Action and Service) that form the backbone of our code.
- Recipes: Practical, step-by-step guides for adding features, testing, and maintaining cleanliness.
- Learning Paths: Structured progressions with real-world examples.
🚀 New: Examples Repository
Looking for hands-on learning? Check out our Examples Repository with 18 complete projects:
| Category | Examples |
|---|---|
| Getting Started | hello-world, crud-api |
| Authentication | auth-api (JWT), rate-limit-demo |
| Database | sqlx-crud, event-sourcing |
| AI/LLM | toon-api, mcp-server |
| Real-time | websocket, graphql-api |
| Production | microservices, serverless-lambda |
👉 See Learning & Examples for structured learning paths.
Visual Identity
This cookbook is styled with the RustAPI Premium Dark theme, focusing on readability, contrast, and modern “glassmorphism” aesthetics.
Quick Start
- Want to add a feature? Jump to Adding a New Feature.
- Want to understand performance? Read Performance Philosophy.
- Need to check code quality? See Maintenance.
- New to RustAPI? Follow our Learning Paths.
Getting Started
Welcome to RustAPI. This section will guide you from installation to your first running API.
Installation
Note
RustAPI is designed for Rust 1.75 or later.
Prerequisites
Before we begin, ensure you have the Rust toolchain installed. If you haven’t, the best way is via rustup.rs.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Installing the CLI
RustAPI comes with a powerful CLI to scaffold projects. Install it directly from crates.io:
cargo install cargo-rustapi
Verify your installation:
cargo-rustapi --version
Adding to an Existing Project
If you prefer not to use the CLI, you can add RustAPI to your Cargo.toml manually:
cargo add rustapi-rs@0.1.335
Or add this to your Cargo.toml:
[dependencies]
rustapi-rs = "0.1.335"
Editor Setup
For the best experience, we recommend VS Code with the rust-analyzer extension. This provides:
- Real-time error checking
- Intelligent code completion
- In-editor documentation
Quickstart
Tip
From zero to a production-ready API in 60 seconds.
Install the CLI
First, install the RustAPI CLI tool:
cargo install cargo-rustapi
Create a New Project
Use the CLI to generate a new project. We’ll call it my-api.
cargo rustapi new my-api
cd my-api
Note: If cargo rustapi doesn’t work, you can also run cargo-rustapi new my-api directly.
This command sets up a complete project structure with handlers, models, and tests ready to go.
The Code
Open src/main.rs. You’ll see how simple it is:
use rustapi_rs::prelude::*;
#[rustapi_rs::get("/hello")]
async fn hello() -> Json<String> {
Json("Hello from RustAPI!".to_string())
}
#[rustapi_rs::main]
async fn main() -> Result<()> {
// Auto-discovery magic ✨
RustApi::auto()
.run("127.0.0.1:8080")
.await
}
Run the Server
Start your API server:
cargo run
You should see output similar to:
INFO rustapi: 🚀 Server running at http://127.0.0.1:8080
INFO rustapi: 📚 API docs at http://127.0.0.1:8080/docs
Test It Out
Open your browser to http://127.0.0.1:8080/docs.
You’ll see the Swagger UI automatically generated from your code. Try out the endpoint directly from the browser!
What Just Happened?
You just launched a high-performance, async Rust web server with:
- ✅ Automatic OpenAPI documentation
- ✅ Type-safe request validation
- ✅ Distributed tracing
- ✅ Global error handling
Welcome to RustAPI.
Project Structure
RustAPI projects follow a standard, modular structure designed for scalability.
my-api/
├── Cargo.toml // Dependencies and workspace config
├── src/
│ ├── handlers/ // Request handlers (Controllers)
│ │ ├── mod.rs
│ │ └── items.rs // Example resource handler
│ ├── models/ // Data structures and Schema
│ │ └── mod.rs
│ ├── error.rs // Custom error types
│ └── main.rs // Application entry point & Router
└── .env.example // Environment variables template
Key Files
src/main.rs
The heart of your application. This is where you configure the RustApi builder, register routes, and set up state.
src/handlers/
Where your business logic lives. Handlers are async functions that take extractors (like Json, Path, State) and return responses.
src/models/
Your data types. By deriving Schema, they automatically appear in your OpenAPI documentation.
src/error.rs
Centralized error handling. Mapping your AppError to ApiError allows you to simply return Result<T, AppError> in your handlers.
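The mapping can be sketched in plain Rust (hypothetical AppError variants; your real enum and the framework’s ApiError type will differ):

```rust
// Hypothetical error enum for illustration; the real AppError/ApiError
// types are defined by your application and the framework.
#[derive(Debug, PartialEq)]
enum AppError {
    NotFound(String),
    Conflict(String),
    Internal,
}

// Map each variant to an HTTP status code and a client-safe message.
fn to_http(err: &AppError) -> (u16, String) {
    match err {
        AppError::NotFound(what) => (404, format!("{} not found", what)),
        AppError::Conflict(msg) => (409, msg.clone()),
        // Never leak internal details to clients.
        AppError::Internal => (500, "Something went wrong".to_string()),
    }
}
```

Because every variant maps to exactly one response, handlers can return Result<T, AppError> and let the error layer do the rest.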
Core Concepts
Documentation of the fundamental architectural decisions and patterns in RustAPI.
Handlers & Extractors
The Handler is the fundamental unit of work in RustAPI. It transforms an incoming HTTP request into an outgoing HTTP response.
Unlike many web frameworks that enforce a strict method signature (e.g., fn(req: Request, res: Response)), RustAPI embraces a flexible, type-safe approach powered by Rust’s trait system.
The Philosophy: “Ask for what you need”
In RustAPI, you don’t manually parse the request object inside your business logic. Instead, you declare the data you need as function arguments, and the framework’s Extractors handle the plumbing for you.
If the data cannot be extracted (e.g., missing header, invalid JSON), the request is rejected before your handler is ever called. This means your handler logic is guaranteed to operate on valid, type-safe data.
Anatomy of a Handler
A handler is simply an asynchronous function that takes zero or more Extractors as arguments and returns something that implements IntoResponse.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
async fn create_user(
State(db): State<DbPool>, // 1. Dependency Injection
Path(user_id): Path<Uuid>, // 2. URL Path Parameter
Json(payload): Json<CreateUser>, // 3. JSON Request Body
) -> Result<impl IntoResponse, ApiError> {
let user = db.create_user(user_id, payload).await?;
Ok((StatusCode::CREATED, Json(user)))
}
}
Key Rules
- Order Matters (Slightly): Extractors that consume the request body (like Json<T> or Multipart) must be the last argument. This is because the request body is a stream that can only be read once.
- Async by Default: Handlers are async fn. This allows non-blocking I/O operations (DB calls, external API requests).
- Debuggable: Handlers are just functions. You can unit test them easily.
Extractors: The FromRequest Trait
Extractors are types that implement FromRequest (or FromRequestParts for headers/query params). They isolate the “HTTP parsing” logic from your “Business” logic.
Common Built-in Extractors
| Extractor | Source | Example Usage |
|---|---|---|
| Path<T> | URL Path Segments | fn get_user(Path(id): Path<u32>) |
| Query<T> | Query String | fn search(Query(params): Query<SearchParams>) |
| Json<T> | Request Body | fn update(Json(data): Json<UpdateDto>) |
| HeaderMap | HTTP Headers | fn headers(headers: HeaderMap) |
| State<T> | Application State | fn db_op(State(pool): State<PgPool>) |
| Extension<T> | Request-local extensions | fn logic(Extension(user): Extension<User>) |
Custom Extractors
You can create your own extractors to encapsulate repetitive validation or parsing logic. For example, extracting a user ID from a verified JWT:
#![allow(unused)]
fn main() {
pub struct AuthenticatedUser(pub Uuid);
#[async_trait]
impl<S> FromRequestParts<S> for AuthenticatedUser
where
S: Send + Sync,
{
type Rejection = ApiError;
async fn from_request_parts(parts: &mut Parts, state: &S) -> Result<Self, Self::Rejection> {
let auth_header = parts.headers.get("Authorization")
.ok_or(ApiError::Unauthorized("Missing token"))?;
let token = auth_header.to_str().map_err(|_| ApiError::Unauthorized("Invalid token"))?;
let user_id = verify_jwt(token)?; // Your verification logic
Ok(AuthenticatedUser(user_id))
}
}
// Usage in handler: cleaner and reusable!
async fn profile(AuthenticatedUser(uid): AuthenticatedUser) -> impl IntoResponse {
format!("User ID: {}", uid)
}
}
Responses: The IntoResponse Trait
A handler can return any type that implements IntoResponse. RustAPI provides implementations for many common types:
- StatusCode (e.g., return 200 OK or 404 Not Found)
- Json<T> (serializes a struct to JSON)
- String / &str (plain text response)
- Vec<u8> / Bytes (binary data)
- HeaderMap (response headers)
- Html<String> (HTML content)
Tuple Responses
You can combine types using tuples to set status codes and headers along with the body:
#![allow(unused)]
fn main() {
// Returns 201 Created + JSON Body
async fn create() -> (StatusCode, Json<User>) {
(StatusCode::CREATED, Json(user))
}
// Returns Custom Header + Plain Text
async fn custom() -> (HeaderMap, &'static str) {
let mut headers = HeaderMap::new();
headers.insert("X-Custom", "Value".parse().unwrap());
(headers, "Response with headers")
}
}
Error Handling
Handlers often return Result<T, E>. If the handler returns Ok(T), the T is converted to a response. If it returns Err(E), the E is converted to a response.
This effectively means your Error type must implement IntoResponse.
#![allow(unused)]
fn main() {
// Recommended pattern: Centralized API Error enum
pub enum ApiError {
NotFound(String),
InternalServerError,
}
impl IntoResponse for ApiError {
fn into_response(self) -> Response {
let (status, message) = match self {
ApiError::NotFound(msg) => (StatusCode::NOT_FOUND, msg),
ApiError::InternalServerError => (StatusCode::INTERNAL_SERVER_ERROR, "Something went wrong".to_string()),
};
(status, Json(json!({ "error": message }))).into_response()
}
}
}
Best Practices
- Keep Handlers Thin: Move complex business logic to “Service” structs or domain modules. Handlers should focus on HTTP translation (decoding request -> calling service -> encoding response).
- Use State for Dependencies: Avoid global variables. Pass DB pools and config via State.
- Parse Early: Use specific types in Json<T> structs rather than serde_json::Value to leverage the type system for validation.
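The “parse early” idea can be illustrated with the standard library alone: a hypothetical UserId newtype that validates once at the system boundary, so downstream code never touches raw strings:

```rust
use std::str::FromStr;

// Hypothetical newtype: parse untrusted input into a validated type once,
// at the edge, so the rest of the code only ever sees valid values.
#[derive(Debug, PartialEq)]
struct UserId(u32);

impl FromStr for UserId {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        s.parse::<u32>()
            .map(UserId)
            .map_err(|_| format!("'{}' is not a valid user id", s))
    }
}
```

In a handler, Path<UserId> would then reject bad input before your business logic runs; "42".parse::<UserId>() succeeds while "abc".parse::<UserId>() fails with a descriptive error.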
System Architecture
RustAPI follows a Facade Architecture — a stable public API that shields you from internal complexity and breaking changes.
System Overview
graph TB
subgraph Client["🌐 Client Layer"]
HTTP[HTTP Request]
LLM[LLM/AI Agent]
MCP[MCP Client]
end
subgraph Public["📦 rustapi-rs (Public Facade)"]
direction TB
Prelude[prelude::*]
Macros["#[rustapi_rs::get/post]<br>#[rustapi_rs::main]"]
Types[Json, Query, Path, Form]
end
subgraph Core["⚙️ rustapi-core (Engine)"]
direction TB
Router[Radix Router<br>matchit]
Extract[Extractors<br>FromRequest trait]
MW[Middleware Stack<br>Tower-like layers]
Resp[Response Builder<br>IntoResponse trait]
end
subgraph Extensions["🔌 Extension Crates"]
direction LR
OpenAPI["rustapi-openapi<br>OpenAPI 3.1 + Docs"]
Validate["rustapi-validate<br>Validation (v2 native)"]
Toon["rustapi-toon<br>LLM Optimization"]
Extras["rustapi-extras<br>JWT/CORS/RateLimit"]
WsCrate["rustapi-ws<br>WebSocket Support"]
ViewCrate["rustapi-view<br>Template Engine"]
end
subgraph Foundation["🏗️ Foundation Layer"]
direction LR
Tokio[tokio<br>Async Runtime]
Hyper[hyper 1.0<br>HTTP Protocol]
Serde[serde<br>Serialization]
end
HTTP --> Public
LLM --> Public
MCP --> Public
Public --> Core
Core --> Extensions
Extensions --> Foundation
Core --> Foundation
Request Flow
sequenceDiagram
participant C as Client
participant R as Router
participant M as Middleware
participant E as Extractors
participant H as Handler
participant S as Serializer
C->>R: HTTP Request
R->>R: Match route (radix tree)
R->>M: Pass to middleware stack
loop Each Middleware
M->>M: Process (JWT, CORS, RateLimit)
end
M->>E: Extract parameters
E->>E: Json<T>, Path<T>, Query<T>
E->>E: Validate (v2 native / optional legacy)
alt Validation Failed
E-->>C: 422 Unprocessable Entity
else Validation OK
E->>H: Call async handler
H->>S: Return response type
alt TOON Enabled
S->>S: Check Accept header
S->>S: Serialize as TOON/JSON
S->>S: Add token count headers
else Standard
S->>S: Serialize as JSON
end
S-->>C: HTTP Response
end
Crate Dependency Graph
graph BT
subgraph User["Your Application"]
App[main.rs]
end
subgraph Facade["Single Import"]
RS[rustapi-rs]
end
subgraph Internal["Internal Crates"]
Core[rustapi-core]
Macros[rustapi-macros]
OpenAPI[rustapi-openapi]
Validate[rustapi-validate]
Toon[rustapi-toon]
Extras[rustapi-extras]
WS[rustapi-ws]
View[rustapi-view]
end
subgraph External["External Dependencies"]
Tokio[tokio]
Hyper[hyper]
Serde[serde]
Validator[validator]
Tungstenite[tungstenite]
Tera[tera]
end
App --> RS
RS --> Core
RS --> Macros
RS --> OpenAPI
RS --> Validate
RS -.->|optional| Toon
RS -.->|optional| Extras
RS -.->|optional| WS
RS -.->|optional| View
Core --> Tokio
Core --> Hyper
Core --> Serde
OpenAPI --> Serde
Validate -.->|legacy optional| Validator
Toon --> Serde
WS --> Tungstenite
View --> Tera
style RS fill:#e1f5fe
style App fill:#c8e6c9
Design Principles
| Principle | Implementation |
|---|---|
| Single Entry Point | use rustapi_rs::prelude::* imports everything you need |
| Zero Boilerplate | Macros generate routing, OpenAPI specs, and validation |
| Compile-Time Safety | Generic extractors catch type errors at compile time |
| Opt-in Complexity | Features like JWT, TOON are behind feature flags |
| Engine Abstraction | Internal hyper/tokio upgrades don’t break your code |
Crate Responsibilities
| Crate | Role |
|---|---|
| rustapi-rs | Public facade — single use for everything |
| rustapi-core | HTTP engine, routing, extractors, response handling |
| rustapi-macros | Procedural macros: #[rustapi_rs::get], #[rustapi_rs::main] |
| rustapi-openapi | Native OpenAPI 3.1 model, schema registry, and docs endpoints |
| rustapi-validate | Validation runtime (v2 native default, legacy validator optional) |
| rustapi-toon | TOON format serializer, content negotiation, LLM headers |
| rustapi-extras | JWT auth, CORS, rate limiting, audit logging |
| rustapi-ws | WebSocket support with broadcast channels |
| rustapi-view | Template engine (Tera) for server-side rendering |
| rustapi-jobs | Background job processing (Redis/Postgres) |
| rustapi-testing | Test utilities, matchers, expectations |
Performance Philosophy
RustAPI is built on a simple premise: Abstractions shouldn’t cost you runtime performance.
We leverage Rust’s unique ownership system and modern async ecosystem (Tokio, Hyper) to deliver performance that rivals C++ servers, while preserving Rust’s safety guarantees.
The Pillars of Speed
1. Zero-Copy Networking
Where possible, RustAPI avoids copying memory. When you receive a large JSON payload or file upload, we aim to pass pointers to the underlying memory buffer rather than cloning the data.
- Bytes over Vec<u8>: We use the bytes crate extensively. Passing a Bytes object around is O(1) (it’s just a reference-counted pointer and length), whereas cloning a Vec<u8> is O(n).
- String Views: Extractors like Path and Query often leverage Cow<'a, str> (clone-on-write) to avoid allocations when the data doesn’t need to be modified.
2. Multi-Core Async Runtime
RustAPI runs on Tokio, a work-stealing, multi-threaded runtime.
- Non-blocking I/O: A single thread can handle thousands of concurrent idle connections (e.g., WebSockets waiting for messages) with minimal memory overhead.
- Work Stealing: If one CPU core is overloaded with tasks, other idle cores will “steal” work from its queue, ensuring balanced utilization of your hardware.
3. Compile-Time Router
Our router (matchit) is based on a Radix Trie structure.
- Path-Length Lookup: Route matching cost is proportional to the length of the URL, not the number of routes defined. Having 10 routes or 10,000 routes has negligible impact on routing latency.
- Allocation-Free Matching: For standard paths, routing decisions happen without heap allocations.
Memory Management
Stack vs. Heap
RustAPI encourages stack allocation for small, short-lived data.
- Extractors are often allocated on the stack.
- Response bodies are streamed, meaning a 1GB file download doesn’t require 1GB of RAM. It flows through a small, constant-sized buffer.
Connection Pooling
For database performance, we strongly recommend using connection pooling (e.g., sqlx::Pool).
- Reuse: Establishing a TCP connection and performing a TLS handshake for every request is slow. Pooling keeps connections open and ready.
- Multiplexing: Some drivers allow multiple queries to be in-flight on a single connection simultaneously.
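The reuse idea behind pooling can be sketched with std primitives (a toy illustration, not sqlx’s implementation, which also handles async acquire, waiting, and health checks):

```rust
use std::sync::Mutex;

// Conceptual connection pool: connections are created once up front,
// checked out by callers, and returned for reuse instead of being
// re-established per request.
struct Pool<T> {
    idle: Mutex<Vec<T>>,
}

impl<T> Pool<T> {
    fn new(conns: Vec<T>) -> Self {
        Pool { idle: Mutex::new(conns) }
    }

    // Check out a connection, or None if the pool is exhausted.
    fn acquire(&self) -> Option<T> {
        self.idle.lock().unwrap().pop()
    }

    // Return the connection so the next caller can reuse it.
    fn release(&self, conn: T) {
        self.idle.lock().unwrap().push(conn);
    }
}
```

A real pool blocks (or awaits) rather than returning None, but the lifecycle — acquire, use, release — is the same.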
Optimizing Your App
To get the most out of RustAPI, follow these guidelines:
- Avoid Blocking the Async Executor: Never run CPU-intensive tasks (cryptography, image processing) or blocking I/O (std::fs::read) directly in an async handler.
  - Solution: Use tokio::task::spawn_blocking to offload these to a dedicated thread pool.
#![allow(unused)]
fn main() {
// BAD: Blocks the thread, potentially stalling other requests
async fn handler() {
    let digest = tough_crypto_hash(data);
}
// GOOD: Runs on a thread meant for blocking work
async fn handler() {
    let digest = tokio::task::spawn_blocking(move || {
        tough_crypto_hash(data)
    }).await.unwrap();
}
}
- JSON Serialization: While serde is fast, JSON text processing is CPU-heavy.
  - For extremely high-throughput endpoints, consider binary formats like Protobuf or MessagePack if the client supports them.
- Keep State Light: Your State struct is cloned for every request. Wrap large shared data in Arc<T> so only the pointer is cloned, not the data itself.
#![allow(unused)]
fn main() {
// Fast
#[derive(Clone)]
struct AppState {
db: PgPool, // Internally uses Arc
config: Arc<Config>, // Wrapped in Arc manually
}
}
Benchmarking
Performance is not a guessing game, but it is very easy to misquote stale numbers.
For that reason, RustAPI keeps its benchmark publication policy and canonical claims in docs/PERFORMANCE_BENCHMARKS.md.
Use that document for:
- the current benchmark source of truth,
- publication rules for new public claims,
- local and CI benchmark entry points, and
- historical-vs-current benchmark context.
Run benchmarks locally
From the repository root:
./scripts/bench.ps1
That currently executes cargo bench --workspace.
CI benchmark path
The repository also includes .github/workflows/benchmark.yml, which runs the same benchmark command and uploads the raw benchmark output as an artifact.
What to publish with benchmark results
Whenever you publish new numbers, include at minimum:
- hardware and OS
- Rust toolchain version
- command and workload description
- enabled feature flags
- throughput plus p50, p95, and p99 latency
- memory usage when available
Why So Fast?
| Optimization | Description |
|---|---|
| ⚡ SIMD-JSON | 2-4x faster JSON parsing with core-simd-json feature |
| 🔄 Zero-copy parsing | Direct memory access for path/query params |
| 📦 SmallVec PathParams | Stack-optimized path parameters |
| 🎯 Compile-time dispatch | All extractors resolved at compile time |
| 🌊 Streaming bodies | Handle large uploads without memory bloat |
Remember: RustAPI provides the capability for high performance, but your application logic ultimately dictates the speed. Use tools like wrk, k6, or drill to stress-test your specific endpoints.
Testing Strategy
Reliable software requires a robust testing strategy. RustAPI is designed to be testable at every level, from individual functions to full end-to-end scenarios.
The Testing Pyramid
We recommend a balanced approach:
- Unit Tests (70%): Fast, isolated tests for individual logic pieces.
- Integration Tests (20%): Testing handlers and extractors wired together.
- End-to-End (E2E) Tests (10%): Testing the running server from the outside.
1. Unit Testing Handlers
Since handlers are just regular functions, you can unit test them by invoking them directly. However, dealing with Extractors directly in tests can sometimes be verbose.
Often, it is better to extract your “Business Logic” into a separate function or trait, test that thoroughly, and keep the Handler layer thin.
#![allow(unused)]
fn main() {
// Domain Logic (Easy to test)
fn calculate_total(items: &[Item]) -> u32 {
items.iter().map(|i| i.price).sum()
}
// Handler (Just plumbing)
async fn checkout(Json(cart): Json<Cart>) -> Json<Receipt> {
let total = calculate_total(&cart.items);
Json(Receipt { total })
}
}
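A unit test for the extracted domain logic needs no framework at all (Item is assumed to be a struct with a price field; in a real project the test would live in a #[cfg(test)] module):

```rust
// Minimal Item type matching the example above (assumed shape: a `price` field).
struct Item {
    price: u32,
}

// The domain function under test, free of any HTTP concerns.
fn calculate_total(items: &[Item]) -> u32 {
    items.iter().map(|i| i.price).sum()
}

#[test]
fn total_sums_all_prices() {
    let items = [Item { price: 300 }, Item { price: 150 }];
    assert_eq!(calculate_total(&items), 450);
}

#[test]
fn empty_cart_is_zero() {
    assert_eq!(calculate_total(&[]), 0);
}
```

Because the function is synchronous and pure, these tests run in microseconds with cargo test.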
2. Integration Testing with Tower
RustAPI routers implement tower::Service. This means you can send requests to your router directly in memory without spawning a TCP server or using localhost. This is extremely fast.
We rely on tower::util::ServiceExt to call the router.
Setup
Add tower and http-body-util for testing utilities:
[dev-dependencies]
tower = { version = "0.4", features = ["util"] }
http-body-util = "0.1"
tokio = { version = "1", features = ["full"] }
Example Test
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_create_user() {
// 1. Build the app (same as in main.rs)
let app = app();
// 2. Construct a Request
let response = app
.oneshot(
Request::builder()
.method(http::Method::POST)
.uri("/users")
.header(http::header::CONTENT_TYPE, "application/json")
.body(Body::from(r#"{"username": "alice"}"#))
.unwrap(),
)
.await
.unwrap();
// 3. Assert Status
assert_eq!(response.status(), StatusCode::CREATED);
// 4. Assert Body
let body_bytes = response.into_body().collect().await.unwrap().to_bytes();
let body: User = serde_json::from_slice(&body_bytes).unwrap();
assert_eq!(body.username, "alice");
}
}
3. Mocking Dependencies with State
To test handlers that rely on databases or external APIs, you should mock those dependencies.
Use Traits to define the capabilities, and use generics or dynamic dispatch in your State.
#![allow(unused)]
fn main() {
// 1. Define the interface
#[async_trait]
trait UserRepository: Send + Sync {
async fn get_user(&self, id: u32) -> Option<User>;
}
// 2. Real Implementation
struct PostgresRepo { pool: PgPool }
// 3. Mock Implementation
struct MockRepo;
#[async_trait]
impl UserRepository for MockRepo {
async fn get_user(&self, _id: u32) -> Option<User> {
Some(User { username: "mock_user".into() })
}
}
// 4. Use in Handler
async fn get_user(
State(repo): State<Arc<dyn UserRepository>>, // Accepts any impl
Path(id): Path<u32>
) -> Json<User> {
// ...
}
}
In your tests, inject Arc::new(MockRepo) into the State.
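A simplified synchronous sketch of this pattern (the real trait is async via #[async_trait]; the handler below is a hypothetical stand-in):

```rust
use std::sync::Arc;

// The interface the handler depends on. Tests inject a mock; production
// injects the real database-backed implementation.
trait UserRepository: Send + Sync {
    fn get_username(&self, id: u32) -> Option<String>;
}

struct MockRepo;

impl UserRepository for MockRepo {
    fn get_username(&self, _id: u32) -> Option<String> {
        Some("mock_user".to_string())
    }
}

// Stand-in for the handler: it only knows about the trait object,
// never the concrete repository type.
fn greet(repo: &Arc<dyn UserRepository>, id: u32) -> String {
    match repo.get_username(id) {
        Some(name) => format!("Hello, {}", name),
        None => "Unknown user".to_string(),
    }
}
```

Swapping MockRepo for PostgresRepo requires no change to the handler, which is the whole point of depending on the trait.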
4. End-to-End Testing
For E2E tests, you can spawn the actual server on a random port and use a real HTTP client (like reqwest) to hit it.
#![allow(unused)]
fn main() {
#[tokio::test]
async fn e2e_test() {
// Binding to port 0 lets the OS choose a random available port
let listener = std::net::TcpListener::bind("127.0.0.1:0").unwrap();
let addr = listener.local_addr().unwrap();
// Spawn server in background
tokio::spawn(async move {
RustApi::serve(listener, app()).await.unwrap();
});
// Make real requests
let client = reqwest::Client::new();
let resp = client.get(format!("http://{}/health", addr))
.send()
.await
.unwrap();
assert!(resp.status().is_success());
}
}
This approach is slower but validates the full stack, including network serialization and actual TCP behavior.
Crate Deep Dives
Warning
This section is for those who want to understand the framework’s internal organs. You don’t need to know this to use RustAPI, but it helps if you want to master it.
RustAPI is a collection of focused, interoperable crates. Each crate has a specific philosophy and “Lens” through which it views the world.
- rustapi-core: The Engine
- rustapi-macros: The Magic
- rustapi-validate: The Gatekeeper
- rustapi-grpc: The Bridge
rustapi-core: The Engine
rustapi-core is the foundational crate of the framework. It provides the essential types and traits that glue everything together, although application developers typically interact with the facade crate rustapi-rs.
Core Responsibilities
- Routing: Mapping HTTP requests to Handlers.
- Extraction: The FromRequest trait definition.
- Response: The IntoResponse trait definition.
- Middleware: The Layer and Service integration with Tower.
- HTTP/3: Built-in QUIC support via h3 and quinn (optional feature).
The Router Internals
We use matchit, a high-performance Radix Tree implementation for routing.
Why Radix Trees?
- Speed: Lookup time is proportional to the length of the path, not the number of routes.
- Priority: Specific paths (/users/profile) always take precedence over wildcards (/users/:id), regardless of definition order.
- Parameters: Efficiently parses named parameters like :id or *path without regular expressions.
HTTP/3 & QUIC
rustapi-core includes optional support for HTTP/3 (QUIC). This is enabled via the http3 feature flag and powered by quinn and h3. It enables specialized methods on RustApi such as .run_http3() and .run_dual_stack().
The Handler Trait Magic
The Handler trait is what allows you to write functions with arbitrary arguments.
#![allow(unused)]
fn main() {
// This looks simple...
async fn my_handler(state: State<Db>, json: Json<Data>) { ... }
// ...but under the hood, it compiles to something like:
impl Handler for my_handler {
fn call(req: Request) -> Future<Output=Response> {
// 1. Extract State
// 2. Extract Json
// 3. Call original function
// 4. Convert return to Response
}
}
}
This is achieved through recursive trait implementations on tuples. RustAPI supports handlers with up to 16 arguments.
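The tuple trick can be sketched in miniature (a toy CallWith trait; the real Handler trait also deals with futures and request extraction):

```rust
// Toy version of the arity trick: one trait, implemented once per tuple size.
// The real framework generates impls like these (via macros) for up to 16
// extractor arguments.
trait CallWith<Args> {
    fn call(self, args: Args) -> String;
}

// Arity 1: any function of one argument
impl<F, A> CallWith<(A,)> for F
where
    F: Fn(A) -> String,
{
    fn call(self, (a,): (A,)) -> String {
        self(a)
    }
}

// Arity 2: any function of two arguments
impl<F, A, B> CallWith<(A, B)> for F
where
    F: Fn(A, B) -> String,
{
    fn call(self, (a, b): (A, B)) -> String {
        self(a, b)
    }
}

// The "router" can now invoke handlers of different shapes uniformly.
fn dispatch<Args, H: CallWith<Args>>(handler: H, args: Args) -> String {
    handler.call(args)
}
```

dispatch accepts a one-argument handler and a two-argument handler through the same generic entry point, which is exactly how handlers with different extractor lists share one Handler trait.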
Middleware Architecture
rustapi-core is built on top of tower. This means any standard Tower middleware works out of the box.
#![allow(unused)]
fn main() {
// The Service stack looks like an onion:
// Outer Layer (Timeout)
// -> Middle Layer (Trace)
// -> Inner Layer (Router)
// -> Handler
}
When you call .layer(), you are wrapping the inner service with a new outer layer.
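The onion can be modeled in a few lines of plain Rust (synchronous toy types; real Tower services are async and generic over request and response types):

```rust
// Conceptual onion: each layer wraps the inner service, Tower-style.
// Simplified to synchronous string request/response for illustration.
trait Service {
    fn call(&self, req: &str) -> String;
}

// The innermost layer: the actual handler.
struct Handler;
impl Service for Handler {
    fn call(&self, req: &str) -> String {
        format!("handled:{}", req)
    }
}

// A middleware is just a service that owns the next (inner) service.
struct Logging<S: Service> {
    inner: S,
}
impl<S: Service> Service for Logging<S> {
    fn call(&self, req: &str) -> String {
        // Pre-processing happens on the way in...
        let resp = self.inner.call(req);
        // ...post-processing on the way out.
        format!("logged({})", resp)
    }
}

// .layer() conceptually performs this wrapping:
fn layered() -> impl Service {
    Logging { inner: Handler }
}
```

Each additional .layer() call adds one more wrapper around the previous stack, so the outermost layer sees the request first and the response last.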
The BoxRoute
To keep compilation times fast and types manageable, the Router eventually “erases” the specific types of your handlers into a BoxRoute (a boxed tower::Service). This is a dynamic dispatch boundary that trades a tiny amount of runtime performance (nanoseconds) for significantly faster compile times and usability.
rustapi-macros: The Magic
rustapi-macros reduces boilerplate by generating code at compile time.
#[debug_handler]
The most important macro for beginners. Rust’s error messages for complex generic traits (like Handler) can be notoriously difficult to understand.
If your handler doesn’t implement the Handler trait (e.g., because you used an argument that isn’t a valid Extractor), the compiler might give you an error spanning the entire RustApi::new() chain, miles away from the actual problem.
#[debug_handler] fixes this.
It verifies the handler function in isolation and produces clear error messages pointing exactly to the invalid argument.
#![allow(unused)]
fn main() {
#[debug_handler]
async fn handler(
// Compile Error: "String" does not implement FromRequest.
// Did you mean "Json<String>" or "Body"?
body: String
) { ... }
}
#[derive(FromRequest)]
Automatically implement FromRequest for your structs.
#![allow(unused)]
fn main() {
#[derive(FromRequest)]
struct MyExtractor {
// These fields must themselves be Extractors
header: HeaderMap,
body: Json<MyData>,
}
// Now you can use it in a handler
async fn handler(input: MyExtractor) {
println!("{:?}", input.header);
}
}
This is heavily used to group multiple extractors into a single struct (often called the “Parameter Object” pattern), keeping function signatures clean.
Route Metadata Macros
RustAPI provides several attribute macros for enriching OpenAPI documentation:
#[rustapi_rs::tag]
Groups endpoints under a common tag in Swagger UI:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/users")]
#[rustapi_rs::tag("Users")]
async fn list_users() -> Json<Vec<User>> { ... }
}
#[rustapi_rs::summary] & #[rustapi_rs::description]
Adds human-readable documentation:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/users/{id}")]
#[rustapi_rs::summary("Get user by ID")]
#[rustapi_rs::description("Returns a single user by their unique identifier.")]
async fn get_user(Path(id): Path<i64>) -> Json<User> { ... }
}
#[rustapi_rs::param]
Customizes the OpenAPI schema type for path parameters. This is essential when the auto-inferred type is incorrect:
#![allow(unused)]
fn main() {
use uuid::Uuid;
// Without #[param], the `id` parameter would be documented as "integer"
// because of the naming convention. With #[param], it's correctly documented as UUID.
#[rustapi_rs::get("/items/{id}")]
#[rustapi_rs::param(id, schema = "uuid")]
async fn get_item(Path(id): Path<Uuid>) -> Json<Item> {
find_item(id).await
}
}
Supported schema types: "uuid", "integer", "int32", "string", "number", "boolean"
Alternative syntax:
#![allow(unused)]
fn main() {
#[rustapi_rs::param(id = "uuid")] // Shorter form
}
rustapi-validate: The Gatekeeper
Data validation should happen at the edges of your system, before invalid data ever reaches your business logic. rustapi-validate provides a robust, unified validation engine supporting both synchronous and asynchronous rules.
The Unified Validation System
RustAPI (v0.1.15+) introduces a unified validation system that supports:
- Legacy Validator: The classic validator crate (via #[derive(validator::Validate)]).
- V2 Engine: The new native engine (via #[derive(rustapi_macros::Validate)]), which properly supports async usage.
- Async Validation: Database checks, API calls, and other IO-bound validation rules.
Synchronous Validation
For standard validation rules (length, email, range, regex), use the Validate macro.
Tip
Use rustapi_macros::Validate for new code to unlock async features.
#![allow(unused)]
fn main() {
use rustapi_macros::Validate; // Logic from V2 engine
use serde::Deserialize;
#[derive(Debug, Deserialize, Validate)]
pub struct SignupRequest {
#[validate(length(min = 3, message = "Username too short"))]
pub username: String,
#[validate(email(message = "Invalid email format"))]
pub email: String,
#[validate(range(min = 18, max = 150))]
pub age: u8,
}
}
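What the derive generates is conceptually equivalent to a hand-written checker like this (a simplified sketch; the real engine’s email rule is stricter than a '@' check, and errors carry structured field/code metadata):

```rust
// Hand-rolled equivalent of the derived validation, for illustration only.
// Mirrors the rules above: username length >= 3, email must look like an
// email, age within 18..=150.
struct SignupRequest {
    username: String,
    email: String,
    age: u8,
}

fn validate(req: &SignupRequest) -> Vec<&'static str> {
    let mut errors = Vec::new();
    if req.username.chars().count() < 3 {
        errors.push("Username too short");
    }
    if !req.email.contains('@') {
        errors.push("Invalid email format");
    }
    if req.age < 18 || req.age > 150 {
        errors.push("age out of range");
    }
    errors
}
```

The derive saves you from writing (and forgetting to update) this boilerplate by keeping the rules next to the fields they constrain.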
The ValidatedJson Extractor
For synchronous validation, use the ValidatedJson<T> extractor.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
async fn signup(
ValidatedJson(payload): ValidatedJson<SignupRequest>
) -> impl IntoResponse {
// payload is guaranteed to be valid here
process_signup(payload)
}
}
Asynchronous Validation
When you need to check data against a database (e.g., “is this email unique?”) or an external service, use Async Validation.
Async Rules
The V2 engine supports async rules directly in the struct definition.
#![allow(unused)]
fn main() {
use rustapi_macros::Validate;
use rustapi_validate::v2::{ValidationContext, RuleError};
#[derive(Debug, Deserialize, Validate)]
pub struct CreateUserRequest {
// Built-in async rule (requires database integration)
#[validate(async_unique(table = "users", column = "email"))]
pub email: String,
// Custom async function
#[validate(custom_async = "check_username_availability")]
pub username: String,
}
// Custom async validator function
async fn check_username_availability(
username: &String,
_ctx: &ValidationContext
) -> Result<(), RuleError> {
if username == "admin" {
return Err(RuleError::new("reserved", "This username is reserved"));
}
// Perform DB check...
Ok(())
}
}
The AsyncValidatedJson Extractor
For types with async rules, you must use AsyncValidatedJson.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
async fn create_user(
AsyncValidatedJson(payload): AsyncValidatedJson<CreateUserRequest>
) -> impl IntoResponse {
// payload is valid AND unique in database
create_user_in_db(payload).await
}
}
Error Handling
Whether you use synchronous or asynchronous validation, errors are normalized into a standard ApiError format (HTTP 422 Unprocessable Entity).
{
"error": {
"type": "validation_error",
"message": "Request validation failed",
"fields": [
{
"field": "email",
"code": "email",
"message": "Invalid email format"
},
{
"field": "username",
"code": "reserved",
"message": "This username is reserved"
}
]
},
"error_id": "err_a1b2..."
}
Backward Compatibility
The system is fully backward compatible. You can continue using validator::Validate on your structs, and ValidatedJson will accept them automatically via the unified Validatable trait.
#![allow(unused)]
fn main() {
// Legacy code still works!
#[derive(validator::Validate)]
struct OldStruct { ... }
async fn handler(ValidatedJson(body): ValidatedJson<OldStruct>) { ... }
}
rustapi-openapi: The Cartographer
Lens: “The Cartographer” Philosophy: “Documentation as Code.”
Automatic Spec Generation
We believe that if documentation is manual, it is wrong. RustAPI uses a native OpenAPI generator to build the specification directly from your code.
The Schema Trait
Any type that is part of your API (request or response) must implement Schema.
#![allow(unused)]
fn main() {
#[derive(Schema)]
struct Metric {
/// The name of the metric
name: String,
/// Value (0-100)
#[schema(minimum = 0, maximum = 100)]
value: i32,
}
}
Operation Metadata
Use macros to enrich endpoints:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/metrics")]
#[rustapi_rs::tag("Metrics")]
#[rustapi_rs::summary("List all metrics")]
#[rustapi_rs::response(200, Json<Vec<Metric>>)]
async fn list_metrics() -> Json<Vec<Metric>> { ... }
}
Swagger UI
The RustApi builder automatically mounts a Swagger UI at the path you specify:
#![allow(unused)]
fn main() {
RustApi::new()
.docs("/docs") // Mounts Swagger UI at /docs
// ...
}
Path Parameter Schema Types
By default, RustAPI infers the OpenAPI schema type for path parameters based on naming conventions:
- Parameters named id, user_id, postId, etc. → integer
- Parameters named uuid, user_uuid, etc. → string with uuid format
- Other parameters → string
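The naming convention above can be sketched as a small lookup. This is illustrative only, not RustAPI’s actual inference code, and the exact rules may differ:

```rust
/// Sketch of the naming-convention inference described above.
/// Returns the OpenAPI (type, optional format) for a path parameter name.
fn infer_param_schema(name: &str) -> (&'static str, Option<&'static str>) {
    let lower = name.to_ascii_lowercase();
    if lower.ends_with("uuid") {
        // "uuid", "user_uuid", "postUuid", ...
        ("string", Some("uuid"))
    } else if lower.ends_with("id") {
        // "id", "user_id", "postId", ...
        ("integer", Some("int64"))
    } else {
        // Everything else falls back to a plain string.
        ("string", None)
    }
}
```

Note that the `uuid` check must run first, since "uuid" itself also ends in "id".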
However, sometimes auto-inference is incorrect. For example, you might have a parameter named id that is actually a UUID. Use the #[rustapi_rs::param] attribute to override the inferred type:
#![allow(unused)]
fn main() {
use uuid::Uuid;
#[rustapi_rs::get("/users/{id}")]
#[rustapi_rs::param(id, schema = "uuid")]
#[rustapi_rs::tag("Users")]
async fn get_user(Path(id): Path<Uuid>) -> Json<User> {
// The OpenAPI spec will now correctly show:
// { "type": "string", "format": "uuid" }
// instead of the default { "type": "integer", "format": "int64" }
get_user_by_id(id).await
}
}
Supported Schema Types
| Schema Type | OpenAPI Schema |
|---|---|
"uuid" | { "type": "string", "format": "uuid" } |
"integer", "int", "int64" | { "type": "integer", "format": "int64" } |
"int32" | { "type": "integer", "format": "int32" } |
"string" | { "type": "string" } |
"number", "float" | { "type": "number" } |
"boolean", "bool" | { "type": "boolean" } |
Alternative Syntax
You can also use a shorter syntax:
#![allow(unused)]
fn main() {
// Shorter syntax: param_name = "schema_type"
#[rustapi_rs::get("/posts/{post_id}")]
#[rustapi_rs::param(post_id = "uuid")]
async fn get_post(Path(post_id): Path<Uuid>) -> Json<Post> { ... }
}
Programmatic API
When building routes programmatically, you can use the .param() method:
#![allow(unused)]
fn main() {
use rustapi_rs::handler::get_route;
// Using the Route builder
let route = get_route("/items/{id}", get_item)
.param("id", "uuid")
.tag("Items")
.summary("Get item by UUID");
app.mount_route(route);
}
rustapi-extras: The Toolbox
Lens: “The Toolbox” Philosophy: “Batteries included, but swappable.”
Feature Flags
This crate is a collection of production-ready middleware. Everything is behind a feature flag so you don’t pay for what you don’t use.
| Feature | Component |
|---|---|
| jwt | JwtLayer, AuthUser extractor |
| cors | CorsLayer |
| csrf | CsrfLayer, CsrfToken extractor |
| audit | AuditStore, AuditLogger |
| insight | InsightLayer, InsightStore |
| rate-limit | RateLimitLayer |
| replay | ReplayLayer (Time-Travel Debugging) |
| timeout | TimeoutLayer |
| guard | PermissionGuard |
| sanitization | Input sanitization utilities |
Middleware Usage
Middleware wraps your entire API or specific routes.
#![allow(unused)]
fn main() {
let app = RustApi::new()
.layer(CorsLayer::permissive())
.layer(CompressionLayer::new())
.route("/", get(handler));
}
CSRF Protection
Cross-Site Request Forgery protection using the Double-Submit Cookie pattern.
#![allow(unused)]
fn main() {
use rustapi_extras::csrf::{CsrfConfig, CsrfLayer, CsrfToken};
// Configure CSRF middleware
let csrf_config = CsrfConfig::new()
.cookie_name("csrf_token")
.header_name("X-CSRF-Token")
.cookie_secure(true); // HTTPS only
let app = RustApi::new()
.layer(CsrfLayer::new(csrf_config))
.route("/form", get(show_form))
.route("/submit", post(handle_submit));
}
Extracting the Token
Use the CsrfToken extractor to access the token in handlers:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/form")]
async fn show_form(token: CsrfToken) -> Html<String> {
Html(format!(r#"
<input type="hidden" name="_csrf" value="{}" />
"#, token.as_str()))
}
}
How It Works
- Safe methods (GET, HEAD) generate and set the token cookie
- Unsafe methods (POST, PUT, DELETE) require the token in the X-CSRF-Token header
- If header doesn’t match cookie → 403 Forbidden
See CSRF Protection Recipe for a complete guide.
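The double-submit check itself boils down to a comparison like the following sketch. It is illustrative only; the real CsrfLayer also handles token generation, rotation, and timing-safe comparison:

```rust
/// Sketch of the double-submit cookie check: unsafe methods must echo
/// the cookie value back in the X-CSRF-Token header.
/// Returns Ok(()) to continue, or Err(status) to reject.
fn csrf_check(method: &str, cookie: Option<&str>, header: Option<&str>) -> Result<(), u16> {
    // Safe methods pass through (this is where the middleware sets the cookie).
    if matches!(method, "GET" | "HEAD") {
        return Ok(());
    }
    // Unsafe methods: header must be present and match the cookie.
    match (cookie, header) {
        (Some(c), Some(h)) if !c.is_empty() && c == h => Ok(()),
        _ => Err(403),
    }
}
```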
Audit Logging
For enterprise compliance (GDPR/SOC2), the audit feature provides a structured way to record sensitive actions.
#![allow(unused)]
fn main() {
async fn delete_user(
AuthUser(user): AuthUser,
State(audit): State<AuditLogger>
) {
audit.log(AuditEvent::new("user.deleted")
.actor(user.id)
.target("user_123")
);
}
}
Traffic Insight
The insight feature provides powerful real-time traffic analysis and debugging capabilities without external dependencies. It is designed to be low-overhead and privacy-conscious.
[dependencies]
rustapi-extras = { version = "0.1.335", features = ["insight"] }
Setup
#![allow(unused)]
fn main() {
use rustapi_extras::insight::{InsightLayer, InMemoryInsightStore, InsightConfig};
use std::sync::Arc;
let store = Arc::new(InMemoryInsightStore::new());
let config = InsightConfig::default();
let app = RustApi::new()
.layer(InsightLayer::new(config, store.clone()));
}
Accessing Data
You can inspect the collected data (e.g., via an admin dashboard):
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/admin/insights")]
async fn get_insights(State(store): State<Arc<InMemoryInsightStore>>) -> Json<InsightStats> {
// Returns aggregated stats like req/sec, error rates, p99 latency
Json(store.get_stats().await)
}
}
The InsightStore trait allows you to implement custom backends (e.g., ClickHouse or Elasticsearch) if you need long-term retention.
Observability
The otel and structured-logging features bring enterprise-grade observability.
OpenTelemetry
#![allow(unused)]
fn main() {
use rustapi_extras::otel::{OtelLayer, OtelConfig};
let config = OtelConfig::default().service_name("my-service");
let app = RustApi::new()
.layer(OtelLayer::new(config));
}
Structured Logging
Emit logs as JSON for aggregators like Datadog or Splunk. This is different from request logging; it formats your application logs.
#![allow(unused)]
fn main() {
use rustapi_extras::structured_logging::{StructuredLoggingLayer, JsonFormatter};
let app = RustApi::new()
.layer(StructuredLoggingLayer::new(JsonFormatter::default()));
}
Advanced Security
OAuth2 Client
The oauth2-client feature provides a complete client implementation.
#![allow(unused)]
fn main() {
use rustapi_extras::oauth2::{OAuth2Client, OAuth2Config, Provider};
let config = OAuth2Config::new(
Provider::Google,
"client_id",
"client_secret",
"http://localhost:8080/callback"
);
let client = OAuth2Client::new(config);
}
Security Headers
Add standard security headers (HSTS, X-Frame-Options, etc.).
#![allow(unused)]
fn main() {
use rustapi_extras::security_headers::SecurityHeadersLayer;
let app = RustApi::new()
.layer(SecurityHeadersLayer::default());
}
API Keys
Simple API Key authentication strategy.
#![allow(unused)]
fn main() {
use rustapi_extras::api_key::ApiKeyLayer;
let app = RustApi::new()
.layer(ApiKeyLayer::new("my-secret-key"));
}
Permission Guards
The guard feature provides role-based access control (RBAC) helpers.
#![allow(unused)]
fn main() {
use rustapi_extras::guard::PermissionGuard;
// Only allows users with "admin" role
#[rustapi_rs::get("/admin")]
async fn admin_panel(
_guard: PermissionGuard
) -> &'static str {
"Welcome Admin"
}
}
Input Sanitization
The sanitization feature helps prevent XSS by cleaning user input.
#![allow(unused)]
fn main() {
use rustapi_extras::sanitization::sanitize_html;
let safe_html = sanitize_html("<script>alert(1)</script>Hello");
// Result: "<script>alert(1)</script>Hello"
}
Resilience
Circuit Breaker
Prevent cascading failures by stopping requests to failing upstreams.
#![allow(unused)]
fn main() {
use rustapi_extras::circuit_breaker::CircuitBreakerLayer;
let app = RustApi::new()
.layer(CircuitBreakerLayer::new());
}
Retry
Automatically retry failed requests with backoff.
#![allow(unused)]
fn main() {
use rustapi_extras::retry::RetryLayer;
let app = RustApi::new()
.layer(RetryLayer::default());
}
Timeout
Ensure requests don’t hang indefinitely.
#![allow(unused)]
fn main() {
use rustapi_extras::timeout::TimeoutLayer;
use std::time::Duration;
let app = RustApi::new()
.layer(TimeoutLayer::new(Duration::from_secs(30)));
}
Optimization
Caching
Cache responses based on headers or path.
#![allow(unused)]
fn main() {
use rustapi_extras::cache::CacheLayer;
let app = RustApi::new()
.layer(CacheLayer::new());
}
Request Deduplication
Prevent duplicate requests (e.g., from double clicks) from processing twice.
#![allow(unused)]
fn main() {
use rustapi_extras::dedup::DedupLayer;
let app = RustApi::new()
.layer(DedupLayer::new());
}
Debugging
Time-Travel Debugging (Replay)
The replay feature allows you to record production traffic and replay it locally for debugging.
See the Time-Travel Debugging Recipe for full details.
#![allow(unused)]
fn main() {
use rustapi_extras::replay::{ReplayLayer, ReplayConfig, InMemoryReplayStore};
let replay_config = ReplayConfig::default();
let store = InMemoryReplayStore::new(1_000);
let app = RustApi::new()
.layer(ReplayLayer::new(replay_config).with_store(store));
}
rustapi-toon: The Diplomat
Lens: “The Diplomat” Philosophy: “Optimizing for Silicon Intelligence.”
What is TOON?
Token-Oriented Object Notation is a format designed to be consumed by Large Language Models (LLMs). It reduces token usage by stripping unnecessary syntax (braces, quotes) while maintaining semantic structure.
Content Negotiation
The LlmResponse<T> type automatically negotiates the response format based on the Accept header.
#![allow(unused)]
fn main() {
async fn agent_data() -> LlmResponse<Data> {
// Returns JSON for browsers
// Returns TOON for AI Agents (using fewer tokens)
}
}
Token Savings
TOON often reduces token count by 30-50% compared to JSON, saving significant costs and context window space when communicating with models like GPT-4 or Gemini.
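To see where the savings come from, here is a purely illustrative comparison of JSON punctuation overhead against a hypothetical "key: value" line format. The actual TOON syntax is defined by rustapi-toon and may differ from this sketch:

```rust
/// Hypothetical TOON-like rendering: no braces, no quotes.
fn compact_lines(pairs: &[(&str, &str)]) -> String {
    pairs.iter()
        .map(|(k, v)| format!("{k}: {v}"))
        .collect::<Vec<_>>()
        .join("\n")
}

/// The equivalent minified JSON object, for comparison.
fn json_object(pairs: &[(&str, &str)]) -> String {
    let body = pairs.iter()
        .map(|(k, v)| format!("\"{k}\":\"{v}\""))
        .collect::<Vec<_>>()
        .join(",");
    format!("{{{body}}}")
}
```

Every field in the JSON version pays for two quote pairs, a colon, and a comma; over large payloads that punctuation is a meaningful share of the tokens an LLM must consume.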
rustapi-ws: The Live Wire
Lens: “The Live Wire” Philosophy: “Real-time, persistent connections made simple.”
The WebSocket Extractor
Upgrading an HTTP connection to a WebSocket uses the standard extractor pattern:
#![allow(unused)]
fn main() {
async fn ws_handler(
ws: WebSocket,
) -> impl IntoResponse {
ws.on_upgrade(handle_socket)
}
}
Architecture
We recommend an Actor Model for WebSocket state.
- Each connection spawns a new async task (the actor).
- Use tokio::sync::broadcast channels for global events (like chat rooms).
- Use mpsc channels for direct messaging.
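The chat-room fan-out can be sketched with plain std channels. In a real handler you would use tokio tasks and tokio::sync::broadcast, which provides this cloning fan-out natively; this dependency-free sketch only illustrates the shape:

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

/// Minimal chat-room hub: every subscriber (connection actor)
/// receives a copy of every published message.
struct Room {
    subscribers: Vec<Sender<String>>,
}

impl Room {
    fn new() -> Self {
        Room { subscribers: Vec::new() }
    }

    /// Each new connection gets its own receiving end.
    fn subscribe(&mut self) -> Receiver<String> {
        let (tx, rx) = channel();
        self.subscribers.push(tx);
        rx
    }

    /// Publish a message to every connected actor.
    fn publish(&self, msg: &str) {
        for tx in &self.subscribers {
            // Ignore actors that have already disconnected.
            let _ = tx.send(msg.to_string());
        }
    }
}
```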
rustapi-grpc: The Bridge
Lens: “The Bridge”
Philosophy: “HTTP and gRPC, one runtime.”
rustapi-grpc is an optional crate that helps you run a RustAPI HTTP server and a Tonic gRPC server in the same process.
What You Get
- run_concurrently(http, grpc) for running two server futures side-by-side.
- run_rustapi_and_grpc(app, http_addr, grpc) convenience helper.
- run_rustapi_and_grpc_with_shutdown(app, http_addr, signal, grpc_with_shutdown) for graceful shared shutdown.
- Re-exports of tonic and prost.
Enable It
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["grpc"] }
Basic Usage
use rustapi_rs::grpc::{run_rustapi_and_grpc, tonic};
use rustapi_rs::prelude::*;
#[rustapi_rs::get("/health")]
async fn health() -> &'static str { "ok" }
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let http_app = RustApi::new().route("/health", get(health));
let grpc_addr = "127.0.0.1:50051".parse()?;
let grpc_server = tonic::transport::Server::builder()
.add_service(MyGreeterServer::new(MyGreeter::default()))
.serve(grpc_addr);
run_rustapi_and_grpc(http_app, "127.0.0.1:8080", grpc_server).await?;
Ok(())
}
Graceful Shutdown
use rustapi_rs::grpc::{run_rustapi_and_grpc_with_shutdown, tonic};
run_rustapi_and_grpc_with_shutdown(
http_app,
"127.0.0.1:8080",
tokio::signal::ctrl_c(),
move |shutdown| {
tonic::transport::Server::builder()
.add_service(MyGreeterServer::new(MyGreeter::default()))
.serve_with_shutdown("127.0.0.1:50051".parse().unwrap(), shutdown)
},
).await?;
rustapi-view: The Artist
Lens: “The Artist” Philosophy: “Server-side rendering with modern tools.”
Tera Integration
We use Tera, a Jinja2-like template engine, for rendering HTML on the server.
#![allow(unused)]
fn main() {
async fn home(
State(templates): State<Templates>
) -> View {
let mut ctx = Context::new();
ctx.insert("user", "Alice");
View::new("home.html", ctx)
}
}
Layouts and Inheritance
Tera supports template inheritance, allowing you to define a base layout (base.html) and extend it in child templates (index.html), keeping your frontend DRY.
rustapi-jobs: The Workhorse
Lens: “The Workhorse” Philosophy: “Fire and forget, with reliability guarantees.”
Background Processing
Long-running tasks shouldn’t block HTTP requests. rustapi-jobs provides a robust queue system that can run in-memory or be backed by Redis/Postgres.
Usage Example
Here is how to set up a simple background job queue using the in-memory backend.
1. Define the Job and Data
Jobs are separated into two parts:
- The Data struct (the payload), which must be serializable.
- The Job struct (the handler), which contains the logic.
#![allow(unused)]
fn main() {
use serde::{Deserialize, Serialize};
use rustapi_jobs::{Job, JobContext, Result};
use async_trait::async_trait;
// 1. The payload data
#[derive(Serialize, Deserialize, Debug, Clone)]
struct EmailJobData {
to: String,
subject: String,
body: String,
}
// 2. The handler struct (usually stateless)
#[derive(Clone)]
struct EmailJob;
#[async_trait]
impl Job for EmailJob {
const NAME: &'static str = "email_job";
type Data = EmailJobData;
async fn execute(&self, _ctx: JobContext, data: Self::Data) -> Result<()> {
println!("Sending email to {} with subject: {}", data.to, data.subject);
// Simulate work
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
Ok(())
}
}
}
2. Configure the Queue
In your main function, initialize the queue and start the worker.
use rustapi_jobs::{JobQueue, InMemoryBackend};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// 1. Create the backend
let backend = InMemoryBackend::new();
// 2. Create the queue
let queue = JobQueue::new(backend);
// 3. Register the job handler
queue.register_job(EmailJob).await;
// 4. Start the worker in the background
let worker_queue = queue.clone();
tokio::spawn(async move {
if let Err(e) = worker_queue.start_worker().await {
eprintln!("Worker failed: {:?}", e);
}
});
// 5. Enqueue a job (pass the DATA, not the handler)
queue.enqueue::<EmailJob>(EmailJobData {
to: "user@example.com".into(),
subject: "Welcome!".into(),
body: "Thanks for joining.".into(),
}).await?;
Ok(())
}
Backends
- Memory: Great for development and testing. Zero infrastructure required.
- Redis: High throughput persistence. Recommended for production.
- Postgres: Transactional reliability (ACID). Best if you cannot lose jobs.
Redis Backend
Enable the redis feature in Cargo.toml:
[dependencies]
rustapi-jobs = { version = "0.1.335", features = ["redis"] }
#![allow(unused)]
fn main() {
use rustapi_jobs::backend::redis::RedisBackend;
let backend = RedisBackend::new("redis://127.0.0.1:6379").await?;
let queue = JobQueue::new(backend);
}
Postgres Backend
Enable the postgres feature in Cargo.toml. This uses sqlx.
[dependencies]
rustapi-jobs = { version = "0.1.335", features = ["postgres"] }
#![allow(unused)]
fn main() {
use rustapi_jobs::backend::postgres::PostgresBackend;
use sqlx::postgres::PgPoolOptions;
let pool = PgPoolOptions::new().connect("postgres://user:pass@localhost/db").await?;
let backend = PostgresBackend::new(pool);
// Ensure the jobs table exists
backend.migrate().await?;
let queue = JobQueue::new(backend);
}
Reliability Features
The worker system includes built-in reliability features:
- Exponential Backoff: Automatically retries failing jobs with increasing delays.
- Dead Letter Queue (DLQ): “Poison” jobs that fail repeatedly are isolated for manual inspection.
- Concurrency Control: Limit the number of concurrent workers to prevent overloading your system.
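A typical exponential-backoff schedule looks like the following sketch. It is illustrative only; the crate’s actual delay formula, jitter, and defaults may differ:

```rust
use std::time::Duration;

/// Delay before retry number `attempt` (0-based): the base delay
/// doubles on each attempt and is capped at `max`.
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    let delay = base.saturating_mul(2u32.saturating_pow(attempt));
    delay.min(max)
}
```

With a 1-second base and a 60-second cap, the schedule is 1s, 2s, 4s, 8s, ... until it plateaus at 60s; jobs that keep failing past the retry limit are then moved to the dead letter queue.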
rustapi-testing: The Auditor
Lens: “The Auditor” Philosophy: “Trust, but verify.”
rustapi-testing provides a comprehensive suite of tools for integration testing your RustAPI applications. It focuses on two main areas:
- In-process API testing: Testing your endpoints without binding to a real TCP port.
- External service mocking: Mocking downstream services (like payment gateways or auth providers) that your API calls.
Installation
Add the crate to your dev-dependencies:
[dev-dependencies]
rustapi-testing = { version = "0.1.335" }
The TestClient
Integration testing is often slow and painful because it involves spinning up a server, waiting for ports, and managing child processes. TestClient solves this by wrapping your RustApi application and executing requests directly against the service layer.
Basic Usage
use rustapi_rs::prelude::*;
use rustapi_testing::TestClient;
#[tokio::test]
async fn test_hello_world() {
let app = RustApi::new().route("/", get(|| async { "Hello!" }));
let client = TestClient::new(app);
let response = client.get("/").await;
response
.assert_status(200)
.assert_body_contains("Hello!");
}
Testing JSON APIs
The client provides fluent helpers for JSON APIs.
#[derive(Serialize)]
struct CreateUser {
username: String,
}
#[tokio::test]
async fn test_create_user() {
let app = RustApi::new().route("/users", post(create_user_handler));
let client = TestClient::new(app);
let response = client.post_json("/users", &CreateUser {
username: "alice".into()
}).await;
response
.assert_status(201)
.assert_json(&serde_json::json!({
"id": 1,
"username": "alice"
}));
}
Mocking Services with MockServer
Real-world applications usually talk to other services. MockServer allows you to spin up a lightweight HTTP server that responds to requests based on pre-defined expectations.
Setting up a Mock Server
use rustapi_testing::{MockServer, MockResponse, RequestMatcher};
#[tokio::test]
async fn test_external_integration() {
// 1. Start the mock server
let server = MockServer::start().await;
// 2. Define an expectation
server.expect(RequestMatcher::new(Method::GET, "/external-api/data"))
.respond_with(MockResponse::new()
.status(StatusCode::OK)
.json(serde_json::json!({ "result": "success" })))
.times(1);
// 3. Configure your app to use the mock server's URL
let app = create_app_with_config(Config {
external_api_url: server.base_url(),
});
let client = TestClient::new(app);
// 4. Run your test
client.get("/my-endpoint-calling-external").await.assert_status(200);
}
Expectations
You can define strict expectations on how your application interacts with the mock server.
Matching Requests
RequestMatcher allows matching by method, path, headers, and body.
// Match a POST request with specific body
server.expect(RequestMatcher::new(Method::POST, "/webhook")
.body_string("event_type=payment_success".into()))
.respond_with(MockResponse::new().status(StatusCode::OK));
Verification
The MockServer automatically verifies that all expectations were met when it is dropped (at the end of the test scope). If an expectation was set to be called once but was never called, the test will panic.
- .once(): Must be called exactly once (default).
- .times(n): Must be called exactly n times.
- .at_least_once(): Must be called 1 or more times.
- .never(): Must not be called.
// Ensure we don't call the billing API if validation fails
server.expect(RequestMatcher::new(Method::POST, "/charge"))
.never();
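The verification-on-drop pattern can be sketched like this (illustrative, not the actual rustapi-testing internals):

```rust
/// A toy expectation that checks its call count when dropped,
/// mirroring how MockServer verifies at the end of the test scope.
struct Expectation {
    name: &'static str,
    expected: usize,
    calls: usize,
}

impl Expectation {
    fn once(name: &'static str) -> Self {
        Expectation { name, expected: 1, calls: 0 }
    }

    fn record_call(&mut self) {
        self.calls += 1;
    }
}

impl Drop for Expectation {
    fn drop(&mut self) {
        // Don't double-panic if the test is already unwinding.
        if !std::thread::panicking() && self.calls != self.expected {
            panic!(
                "expectation '{}' called {} times, expected {}",
                self.name, self.calls, self.expected
            );
        }
    }
}
```

Because the check runs in Drop, a forgotten assertion cannot slip through: the test fails as soon as the expectation goes out of scope unmet.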
Best Practices
- Dependency Injection: Design your application State to accept base URLs for external services so you can inject the MockServer URL during tests.
- Isolation: Create a new MockServer for each test case to ensure no shared state or interference.
- Fluent Assertions: Use the chainable assertion methods on TestResponse to keep tests readable.
cargo-rustapi: The Architect
Lens: “The Architect” Philosophy: “Scaffolding best practices from day one.”
The CLI
The RustAPI CLI isn’t just a project generator; it’s a productivity multiplier.
Commands
- cargo rustapi new <name>: Create a new project with the perfect directory structure.
- cargo rustapi run: Run the development server.
- cargo rustapi run --reload: Run with hot-reload (auto-rebuild on file changes).
- cargo rustapi generate resource <name>: Scaffold a new API resource (Model + Handlers + Tests).
- cargo rustapi client --spec <path> --language <lang>: Generate a client library (Rust, TS, Python) from an OpenAPI spec.
- cargo rustapi deploy <platform>: Generate deployment configs for Docker, Fly.io, Railway, or Shuttle.
- cargo rustapi migrate <action>: Database migration commands (create, run, revert, status, reset).
Templates
The templates used by the CLI are opinionated but flexible. They enforce:
- Modular folder structure.
- Implementation of the State pattern.
- Separation of Error types.
Reference
Focused references for APIs, metadata, and syntax details that are easier to scan than long-form guides.
Macro Attribute Reference
RustAPI’s attribute macros do two jobs at once:
- they register routes and schemas at compile time, and
- they enrich the generated OpenAPI operation metadata.
This reference focuses on the route metadata attributes most users need first:
- #[tag(...)]
- #[summary(...)]
- #[description(...)]
- #[param(...)]
- #[errors(...)]
Golden rule: In user code, use the facade macros from
rustapi-rs, e.g.#[rustapi_rs::get(...)], not internal crates.
Typical usage
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Serialize, Schema)]
struct User {
id: String,
name: String,
}
#[rustapi_rs::get("/users/{id}")]
#[rustapi_rs::tag("Users")]
#[rustapi_rs::summary("Get user by ID")]
#[rustapi_rs::description("Returns a single user by its unique identifier.")]
#[rustapi_rs::param(id, schema = "uuid")]
#[rustapi_rs::errors(404 = "User not found", 403 = "Forbidden")]
async fn get_user(Path(_id): Path<String>) -> Result<Json<User>> {
Ok(Json(User {
id: "550e8400-e29b-41d4-a716-446655440000".into(),
name: "Alice".into(),
}))
}
}
#[rustapi_rs::tag("...")]
Groups the operation under one or more OpenAPI tags.
Syntax
#![allow(unused)]
fn main() {
#[rustapi_rs::tag("Users")]
}
Effect
- Appends the tag value to the operation’s tags list.
- Useful for Swagger grouping and cookbook-style API organization.
Example
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/items")]
#[rustapi_rs::tag("Items")]
async fn list_items() -> &'static str {
"ok"
}
}
#[rustapi_rs::summary("...")]
Sets the short OpenAPI summary for the operation.
Syntax
#![allow(unused)]
fn main() {
#[rustapi_rs::summary("List all items")]
}
Effect
- Fills the operation summary shown in Swagger and generated specs.
- Best used as a short, action-oriented sentence.
Example
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/items")]
#[rustapi_rs::summary("List all items")]
async fn list_items() -> &'static str {
"ok"
}
}
#[rustapi_rs::description("...")]
Sets the longer description for the operation.
Syntax
#![allow(unused)]
fn main() {
#[rustapi_rs::description("Returns all active items. Supports pagination.")]
}
Effect
- Fills the operation description field.
- Good for behavior notes, pagination semantics, or auth requirements.
Example
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/items")]
#[rustapi_rs::description("Returns active items only. Archived items are excluded.")]
async fn list_items() -> &'static str {
"ok"
}
}
#[rustapi_rs::param(...)]
Overrides the OpenAPI schema type for a path parameter.
This is useful when the auto-inferred type is not the schema shape you want to expose in docs.
Supported schema types
"uuid""integer"or"int""string""boolean"or"bool""number"
Supported forms
Form 1:
#![allow(unused)]
fn main() {
#[rustapi_rs::param(id, schema = "uuid")]
}
Form 2:
#![allow(unused)]
fn main() {
#[rustapi_rs::param(id = "uuid")]
}
Effect
- Adds a custom path parameter schema override to the generated route metadata.
- Particularly useful for IDs that are represented as strings but should be documented with UUID semantics.
Example
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/orders/{order_id}")]
#[rustapi_rs::param(order_id, schema = "uuid")]
async fn get_order(Path(_order_id): Path<String>) -> &'static str {
"ok"
}
}
Notes
- This attribute is intended for path parameters.
- RustAPI already auto-detects path params from handler signatures; #[param(...)] is an override, not a requirement.
#[rustapi_rs::errors(...)]
Declares additional typed error responses for OpenAPI.
Syntax
#![allow(unused)]
fn main() {
#[rustapi_rs::errors(404 = "User not found", 403 = "Forbidden")]
}
Effect
- Adds those responses directly to the operation’s OpenAPI response map.
- Each declared response uses the standard ErrorSchema under application/json.
Example
#![allow(unused)]
fn main() {
#[rustapi_rs::delete("/users/{id}")]
#[rustapi_rs::errors(404 = "User not found")]
async fn delete_user(Path(_id): Path<i64>) -> Result<()> {
Ok(())
}
}
Multiple status codes
#![allow(unused)]
fn main() {
#[rustapi_rs::post("/users")]
#[rustapi_rs::errors(
400 = "Invalid input",
409 = "Email already exists",
422 = "Validation failed"
)]
async fn create_user(Json(_body): Json<User>) -> Result<Created<User>> {
todo!()
}
}
Interaction with route macros
These metadata attributes are consumed by the HTTP method macros such as:
- #[rustapi_rs::get(...)]
- #[rustapi_rs::post(...)]
- #[rustapi_rs::put(...)]
- #[rustapi_rs::patch(...)]
- #[rustapi_rs::delete(...)]
The route macro gathers metadata from the other attributes and turns them into builder calls such as:
- .tag(...)
- .summary(...)
- .description(...)
- .param(...)
- .error_response(...)
Recommended ordering
Keep the route macro first, then place metadata attributes below it:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/users/{id}")]
#[rustapi_rs::tag("Users")]
#[rustapi_rs::summary("Get user")]
#[rustapi_rs::param(id, schema = "uuid")]
#[rustapi_rs::errors(404 = "User not found")]
async fn get_user(Path(_id): Path<String>) -> Result<&'static str> {
Ok("ok")
}
}
That matches the style already used across the repository and keeps metadata easy to scan.
What these macros do not do
- They do not replace #[derive(Schema)] for your DTOs.
- They do not change runtime authorization or validation behavior by themselves.
- #[errors(...)] enriches OpenAPI docs; your handler still needs to return the appropriate ApiError or equivalent response at runtime.
Common mistakes
Forgetting Schema on request/response types
The metadata attributes do not remove the need for #[derive(Schema)] on DTOs used in OpenAPI-aware handlers.
Using internal crates directly
Prefer:
#![allow(unused)]
fn main() {
#[rustapi_rs::tag("Users")]
}
not imports from rustapi-macros or rustapi-core in user-facing examples.
Assuming #[errors(...)] changes runtime logic
It documents the operation. Your code still needs to actually return 404, 409, etc.
Related reading
Recipes
Recipes are practical, focused guides to solving specific problems with RustAPI.
Format
Each recipe follows a simple structure:
- Problem: What are we trying to solve?
- Solution: The code.
- Discussion: Why it works and what to watch out for.
Table of Contents
- Creating Resources
- Pagination & HATEOAS
- OpenAPI & Schemas
- JWT Authentication
- Session-Based Authentication
- OAuth2 Client
- OIDC & OAuth2 in Production
- CSRF Protection
- Database Integration
- Testing & Mocking
- File Uploads
- Background Jobs
- Custom Extractors
- Custom Middleware
- Error Handling
- Axum -> RustAPI Migration
- Actix-web -> RustAPI Migration
- Real-time Chat
- Server-Side Rendering (SSR)
- AI Integration (TOON)
- Production Tuning
- Response Compression
- Resilience Patterns
- Observability
- Middleware Debugging
- Graceful Shutdown
- Time-Travel Debugging (Replay)
- Deployment
- HTTP/3 (QUIC)
- gRPC Integration
- Automatic Status Page
Creating Resources
Problem: You need to add a new “Resource” (like Users, Products, or Posts) to your API with standard CRUD operations.
Solution
Create a new module src/handlers/users.rs:
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Serialize, Deserialize, Schema, Clone)]
pub struct User {
pub id: u64,
pub name: String,
}
#[derive(Deserialize, Schema)]
pub struct CreateUser {
pub name: String,
}
#[rustapi_rs::get("/users")]
pub async fn list() -> Json<Vec<User>> {
Json(vec![]) // Fetch from DB in real app
}
#[rustapi_rs::post("/users")]
pub async fn create(Json(payload): Json<CreateUser>) -> impl IntoResponse {
let user = User { id: 1, name: payload.name };
(StatusCode::CREATED, Json(user))
}
}
Then in main.rs, simply use RustApi::auto():
use rustapi_rs::prelude::*;
mod handlers; // Make sure the module is part of the compilation unit!
#[rustapi_rs::main]
async fn main() -> Result<()> {
// RustAPI automatically discovers all routes decorated with macros
RustApi::auto()
.run("127.0.0.1:8080")
.await
}
Discussion
RustAPI uses distributed slices (via linkme) to automatically register routes decorated with #[rustapi_rs::get], #[rustapi_rs::post], etc. This means you don’t need to manually import or mount every single handler in your main function.
Just ensure your handler modules are reachable (e.g., via mod handlers;), and the framework handles the rest. This encourages a clean, Domain-Driven Design (DDD) structure where resources are self-contained.
Pagination & HATEOAS
Implementing pagination correctly is crucial for API performance and usability. RustAPI provides built-in support for HATEOAS (Hypermedia As The Engine Of Application State) compliant pagination, which includes navigation links in the response.
Problem
You need to return a list of resources, but there are too many to return in a single request. You want to provide a standard way for clients to navigate through pages of data.
Solution
Use ResourceCollection and PageInfo from rustapi_core::hateoas. These types automatically generate HAL (Hypertext Application Language) compliant responses with _links (self, first, last, next, prev) and _embedded resources.
Example Code
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_rs::{PageInfo, ResourceCollection};
use serde::{Deserialize, Serialize};
// 1. Define your resource
// Note: It must derive Schema for OpenAPI generation
#[derive(Serialize, Clone, Schema)]
struct User {
id: i64,
name: String,
}
// 2. Define query parameters
#[derive(Deserialize, Schema)]
struct Pagination {
page: Option<usize>,
size: Option<usize>,
}
// 3. Create the handler
#[rustapi_rs::get("/users")]
async fn list_users(Query(params): Query<Pagination>) -> Json<ResourceCollection<User>> {
let page = params.page.unwrap_or(0);
let size = params.size.unwrap_or(20).max(1); // Ensure size is at least 1 to prevent division by zero
// In a real app, you would fetch this from a database
// let (users, total_elements) = db.fetch_users(page, size).await?;
let users = vec![
User { id: 1, name: "Alice".to_string() },
User { id: 2, name: "Bob".to_string() },
];
let total_elements = 100;
// 4. Calculate pagination info
let page_info = PageInfo::calculate(total_elements, size, page);
// 5. Build the collection response
// "users" is the key in the _embedded map
// "/users" is the base URL for generating links
let collection = ResourceCollection::new("users", users)
.page_info(page_info)
.with_pagination("/users");
Json(collection)
}
}
Explanation
The response will look like this (HAL format):
{
"_embedded": {
"users": [
{ "id": 1, "name": "Alice" },
{ "id": 2, "name": "Bob" }
]
},
"_links": {
"self": { "href": "/users?page=0&size=20" },
"first": { "href": "/users?page=0&size=20" },
"last": { "href": "/users?page=4&size=20" },
"next": { "href": "/users?page=1&size=20" }
},
"page": {
"size": 20,
"totalElements": 100,
"totalPages": 5,
"number": 0
}
}
Key Components
- `ResourceCollection<T>`: Wraps a list of items. It places them under `_embedded` and adds `_links`.
- `PageInfo`: Holds metadata about the current page (size, total elements, total pages, current number).
- `with_pagination(base_url)`: Automatically generates standard navigation links based on the `PageInfo` and the provided base URL.
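The link math behind the HAL response is straightforward. Here is a minimal sketch in plain Rust, independent of RustAPI (`total_pages`, `next_page`, and `prev_page` are illustrative helpers, not framework APIs):

```rust
/// Zero-based pagination math, mirroring the HAL response shown above.
fn total_pages(total_elements: usize, size: usize) -> usize {
    // `size` is clamped to at least 1 upstream, so division is safe.
    (total_elements + size - 1) / size
}

/// The `next` link exists only while there are pages after the current one.
fn next_page(page: usize, total_pages: usize) -> Option<usize> {
    if page + 1 < total_pages { Some(page + 1) } else { None }
}

/// The `prev` link exists only after page 0.
fn prev_page(page: usize) -> Option<usize> {
    page.checked_sub(1)
}

fn main() {
    // 100 elements, 20 per page, currently on page 0 (as in the example).
    assert_eq!(total_pages(100, 20), 5);  // "totalPages": 5
    assert_eq!(next_page(0, 5), Some(1)); // "next": page 1
    assert_eq!(prev_page(0), None);       // no "prev" link on page 0
}
```

This also explains why the example response above omits a `prev` link: page 0 has no predecessor.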
Variations
Cursor-based Pagination
If you are using cursor-based pagination (e.g., before_id, after_id), you can manually construct links instead of using with_pagination:
#![allow(unused)]
fn main() {
let collection = ResourceCollection::new("users", users)
.self_link("/users?after=10")
.next_link("/users?after=20");
}
HATEOAS for Single Resources
You can also add links to individual resources using Resource<T>:
#![allow(unused)]
fn main() {
use rustapi_rs::hateoas::Linkable; // Trait for .with_links()
#[rustapi_rs::get("/users/{id}")]
async fn get_user(Path(id): Path<i64>) -> Json<Resource<User>> {
let user = User { id, name: "Alice".to_string() };
let resource = user.with_links()
.self_link(format!("/users/{}", id))
.link("orders", format!("/users/{}/orders", id));
Json(resource)
}
}
Gotchas
- Schema Derive: The type `T` inside `ResourceCollection<T>` or `Resource<T>` MUST implement `RustApiSchema` (via `#[derive(Schema)]`) for OpenAPI generation to work.
- Base URL: The `base_url` passed to `with_pagination` should generally match the route path. If your API is behind a proxy or prefix, ensure this URL is correct from the client's perspective.
OpenAPI Schemas & References
RustAPI’s OpenAPI generation is built around the RustApiSchema trait, which is automatically implemented when you derive Schema. This system seamlessly handles JSON Schema 2020-12 references ($ref) to reduce duplication and support recursive types.
Automatic References
When you use #[derive(Schema)] on a struct or enum, RustAPI generates an implementation that:
- Registers the type in the OpenAPI `components/schemas` section.
- Returns a `$ref` pointing to that component whenever the type is used in another schema.
This means you don’t need to manually configure references – they just work.
#![allow(unused)]
fn main() {
use rustapi_openapi::Schema;
#[derive(Schema)]
struct Address {
street: String,
city: String,
}
#[derive(Schema)]
struct User {
username: String,
// This will generate {"$ref": "#/components/schemas/Address"}
address: Address,
}
}
Recursive Types
Recursive types (like a Comment that replies to another Comment) are supported automatically because the schema is registered before its fields are processed. However, the recursive field must go through heap indirection such as `Box<T>` or `Vec<T>` (`Option<T>` alone is not enough) to break the infinite size cycle in Rust.
#![allow(unused)]
fn main() {
#[derive(Schema)]
struct Comment {
id: String,
text: String,
// Recursive reference works automatically
replies: Option<Vec<Box<Comment>>>,
}
}
Generics
Generic types are also supported. The schema name will include the concrete type parameters to ensure uniqueness.
#![allow(unused)]
fn main() {
#[derive(Schema)]
struct Page<T> {
items: Vec<T>,
total: u64,
}
#[derive(Schema)]
struct Product {
name: String,
}
// Generates component: "Page_Product"
// Generates usage: {"$ref": "#/components/schemas/Page_Product"}
async fn list_products() -> Json<Page<Product>> { ... }
}
Renaming & Customization
You can customize how fields appear in the schema using standard Serde attributes, as rustapi-openapi respects #[serde(rename)].
#![allow(unused)]
fn main() {
#[derive(Schema, Serialize)]
struct UserConfig {
#[serde(rename = "userId")]
user_id: String, // In schema: "userId"
}
}
Note: Currently, #[derive(Schema)] does not support specific #[schema(...)] attributes for descriptions or examples directly on fields. You should use doc comments (if supported in future versions) or implement RustApiSchema manually for advanced customization.
Manual Implementation
If you need a schema that cannot be derived (e.g., for a third-party type), you can implement RustApiSchema manually.
#![allow(unused)]
fn main() {
use rustapi_openapi::schema::{RustApiSchema, SchemaCtx, SchemaRef, JsonSchema2020};
struct MyCustomType;
impl RustApiSchema for MyCustomType {
fn schema(ctx: &mut SchemaCtx) -> SchemaRef {
let name = "MyCustomType";
// Register if not exists
if ctx.components.contains_key(name) {
return SchemaRef::Ref { reference: format!("#/components/schemas/{}", name) };
}
// Insert placeholder
ctx.components.insert(name.to_string(), JsonSchema2020::new());
// Build schema
let mut schema = JsonSchema2020::string();
schema.format = Some("custom-format".to_string());
// Update component
ctx.components.insert(name.to_string(), schema);
SchemaRef::Ref { reference: format!("#/components/schemas/{}", name) }
}
fn name() -> std::borrow::Cow<'static, str> {
std::borrow::Cow::Borrowed("MyCustomType")
}
}
}
JWT Authentication
Authentication is critical for almost every API. RustAPI provides a built-in, production-ready JWT authentication system via the extras-jwt feature.
Dependencies
Enable the extras-jwt feature in your Cargo.toml:
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["extras-jwt"] }
serde = { version = "1", features = ["derive"] }
1. Define Claims
Define your custom claims struct. It must be serializable and deserializable.
#![allow(unused)]
fn main() {
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct Claims {
pub sub: String, // Subject (User ID)
pub role: String, // Custom claim: "admin", "user"
pub exp: usize, // Required for JWT expiration validation
}
}
2. Shared State
To avoid hardcoding secrets in multiple places, we’ll store our secret key in the application state.
#![allow(unused)]
fn main() {
#[derive(Clone)]
pub struct AppState {
pub secret: String,
}
}
3. The Handlers
We use the AuthUser<T> extractor to protect routes, and State<T> to access the secret for signing tokens during login.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use std::time::{SystemTime, UNIX_EPOCH};
#[rustapi_rs::get("/profile")]
async fn protected_profile(
// This handler will only be called if a valid token is present
AuthUser(claims): AuthUser<Claims>
) -> Json<String> {
Json(format!("Welcome back, {}! You are a {}.", claims.sub, claims.role))
}
#[rustapi_rs::post("/login")]
async fn login(State(state): State<AppState>) -> Result<Json<String>> {
// In a real app, validate credentials first!
let expiration = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs() + 3600; // Token expires in 1 hour (3600 seconds)
let claims = Claims {
sub: "user_123".to_owned(),
role: "admin".to_owned(),
exp: expiration as usize,
};
// We use the secret from our shared state
let token = create_token(&claims, &state.secret)?;
Ok(Json(token))
}
}
4. Wiring it Up
Register the JwtLayer and the state in your application.
#[rustapi_rs::main]
async fn main() -> Result<()> {
// In production, load this from an environment variable!
let secret = "my_secret_key".to_string();
let state = AppState {
secret: secret.clone(),
};
// Configure JWT validation with the same secret
let jwt_layer = JwtLayer::<Claims>::new(secret);
RustApi::auto()
.state(state) // Register the shared state
.layer(jwt_layer) // Add the middleware
.run("127.0.0.1:8080")
.await
}
Bonus: Role-Based Access Control (RBAC)
Since we have the role in our claims, we can enforce permissions easily within the handler:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/admin")]
async fn admin_only(AuthUser(claims): AuthUser<Claims>) -> Result<String, StatusCode> {
if claims.role != "admin" {
return Err(StatusCode::FORBIDDEN);
}
Ok("Sensitive Admin Data".to_string())
}
}
How It Works
- `JwtLayer` Middleware: Intercepts requests, looks for `Authorization: Bearer <token>`, validates the signature, and stores the decoded claims in the request extensions.
- `AuthUser` Extractor: Retrieves the claims from the request extensions. If the middleware failed or didn’t run, or if the token was missing/invalid, the extractor returns a `401 Unauthorized` error.
This separation allows you to have some public routes (where JwtLayer might just pass through) and some protected routes (where AuthUser enforces presence). Note that JwtLayer by default does not reject requests without tokens; it just doesn’t attach claims. The extractor does the rejection.
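The header-parsing step the middleware performs can be sketched in plain Rust. This illustrates the `Authorization: Bearer <token>` convention only, not RustAPI's actual implementation:

```rust
/// Extract the raw token from an `Authorization` header value,
/// as a bearer-auth middleware does before signature validation.
fn bearer_token(header: &str) -> Option<&str> {
    // Only the Bearer scheme carries a JWT; anything else is ignored.
    let rest = header.strip_prefix("Bearer ")?;
    let token = rest.trim();
    if token.is_empty() { None } else { Some(token) }
}

fn main() {
    assert_eq!(bearer_token("Bearer abc.def.ghi"), Some("abc.def.ghi"));
    assert_eq!(bearer_token("Basic dXNlcjpwYXNz"), None); // wrong scheme
    assert_eq!(bearer_token("Bearer "), None);            // empty token
}
```

If this step yields `None`, no claims are attached, and `AuthUser` later rejects the request with `401 Unauthorized`.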
Session-Based Authentication
Cookie-backed session auth is the shortest path from “I need login/logout” to a production-shaped RustAPI service.
This recipe shows how to:
- load a session from a cookie before your handler runs,
- read and mutate session data through the `Session` extractor,
- rotate the session ID on login / refresh,
- swap the store backend from memory to Redis without changing handler code.
Prerequisites
Enable the session feature on the public facade.
[dependencies]
rustapi-rs = { version = "0.1.389", features = ["extras-session"] }
If you want Redis-backed sessions, add the Redis backend feature too:
[dependencies]
rustapi-rs = { version = "0.1.389", features = ["extras-session", "extras-session-redis"] }
Solution
rustapi-rs now exposes the full session flow through the facade.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_rs::extras::session::{MemorySessionStore, Session, SessionConfig, SessionLayer};
use std::time::Duration;
#[derive(Debug, Deserialize, Schema)]
struct LoginRequest {
user_id: String,
}
#[derive(Debug, Serialize, Schema)]
struct SessionView {
authenticated: bool,
user_id: Option<String>,
refreshed: bool,
session_id: Option<String>,
}
async fn session_view(session: &Session) -> SessionView {
let user_id = session.get::<String>("user_id").await.ok().flatten();
let refreshed = session
.get::<bool>("refreshed")
.await
.ok()
.flatten()
.unwrap_or(false);
SessionView {
authenticated: user_id.is_some(),
user_id,
refreshed,
session_id: session.id().await,
}
}
async fn login(session: Session, Json(payload): Json<LoginRequest>) -> Json<SessionView> {
session.cycle_id().await;
session.insert("user_id", &payload.user_id).await.expect("session insert");
session.insert("refreshed", false).await.expect("session insert");
Json(session_view(&session).await)
}
async fn me(session: Session) -> Json<SessionView> {
Json(session_view(&session).await)
}
async fn refresh(session: Session) -> Json<SessionView> {
if session.contains("user_id").await {
session.cycle_id().await;
session.insert("refreshed", true).await.expect("session insert");
}
Json(session_view(&session).await)
}
async fn logout(session: Session) -> NoContent {
session.destroy().await;
NoContent
}
let app = RustApi::new()
.layer(SessionLayer::new(
MemorySessionStore::new(),
SessionConfig::new()
.cookie_name("rustapi_auth")
.secure(false)
.ttl(Duration::from_secs(60 * 30)),
))
.route("/auth/login", post(login))
.route("/auth/me", get(me))
.route("/auth/refresh", post(refresh))
.route("/auth/logout", post(logout));
}
A complete runnable version lives in crates/rustapi-rs/examples/auth_api.rs.
How the flow works
- `SessionLayer` parses the incoming session cookie.
- The configured store loads the matching `SessionRecord`.
- The `Session` extractor gives handlers typed access to the record.
- Handler mutations are persisted after the response is produced.
- If the session was changed, the middleware emits a new `Set-Cookie` header.
- `session.destroy().await` deletes the record and clears the cookie.
That means your handlers stay focused on business logic while the middleware handles persistence and cookie management.
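The first step of the flow, cookie parsing, amounts to picking one name out of the `Cookie` request header. A standalone sketch, independent of the framework:

```rust
/// Find a cookie's value in a raw `Cookie` request header,
/// as a session layer does for its configured cookie name.
fn cookie_value<'a>(header: &'a str, name: &str) -> Option<&'a str> {
    // The Cookie header is a "; "-separated list of name=value pairs.
    header.split("; ").find_map(|pair| {
        let (k, v) = pair.split_once('=')?;
        (k == name).then_some(v)
    })
}

fn main() {
    let header = "theme=dark; rustapi_auth=abc123; lang=en";
    assert_eq!(cookie_value(header, "rustapi_auth"), Some("abc123"));
    assert_eq!(cookie_value(header, "missing"), None);
}
```

The session ID found this way is then used as the lookup key into the configured store.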
Built-in store options
In-memory store
Use MemorySessionStore for tests, demos, and single-node deployments.
#![allow(unused)]
fn main() {
use rustapi_rs::extras::session::{MemorySessionStore, SessionConfig, SessionLayer};
let layer = SessionLayer::new(
MemorySessionStore::new(),
SessionConfig::new(),
);
}
Redis-backed store
Use RedisSessionStore when sessions must survive restarts or be shared across instances.
#![allow(unused)]
fn main() {
use rustapi_rs::extras::session::{RedisSessionStore, SessionConfig, SessionLayer};
let store = RedisSessionStore::from_url(&std::env::var("REDIS_URL")?)?
.key_prefix("rustapi:session:");
let layer = SessionLayer::new(store, SessionConfig::new());
}
The handler API is identical. Only the store changes.
Configuration notes
- Keep `cookie_http_only = true` for session cookies.
- Use `secure(true)` in production so cookies are HTTPS-only.
- Use `same_site(SameSite::Lax)` or stricter unless your cross-site flow needs otherwise.
- Rotate the session ID on login and privilege changes with `session.cycle_id().await` to reduce session fixation risk.
- Prefer short TTLs plus rolling expiry for end-user sessions.
- Store only what you need in the session payload. Opaque IDs age better than giant identity blobs.
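"Short TTLs plus rolling expiry" can be expressed as a pure decision function. This is an illustrative policy sketch only (the threshold and the helper are assumptions, not part of RustAPI's session layer):

```rust
use std::time::Duration;

/// Under rolling expiry, re-issue the cookie once less than half
/// of the TTL remains. A hypothetical policy helper for illustration.
fn should_refresh(remaining: Duration, ttl: Duration) -> bool {
    remaining < ttl / 2
}

fn main() {
    let ttl = Duration::from_secs(60 * 30); // 30-minute sessions
    assert!(!should_refresh(Duration::from_secs(60 * 20), ttl)); // plenty left
    assert!(should_refresh(Duration::from_secs(60 * 10), ttl));  // refresh now
}
```

The net effect is that active users stay logged in while abandoned sessions expire quickly.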
Verification
Run the built-in session tests first:
cargo test -p rustapi-extras --features session
Then try the runnable example:
cargo run -p rustapi-rs --example auth_api --features extras-session
OAuth2 Client Integration
Integrating with third-party identity providers (like Google, GitHub) is a common requirement for modern applications. RustAPI exposes the OAuth2 client through the public rustapi-rs facade.
This recipe demonstrates how to set up an OAuth2 flow.
Prerequisites
Enable the canonical facade feature in rustapi-rs.
[dependencies]
rustapi-rs = { version = "0.1.389", features = ["extras-oauth2-client"] }
Basic Configuration
You can use presets for popular providers or configure a custom one.
#![allow(unused)]
fn main() {
use rustapi_rs::extras::oauth2::OAuth2Config;
// Using a preset (Google)
let config = OAuth2Config::google(
"your-client-id",
"your-client-secret",
"https://your-app.com/auth/callback/google"
);
// Or custom provider
let custom_config = OAuth2Config::custom(
"https://auth.example.com/authorize",
"https://auth.example.com/token",
"client-id",
"client-secret",
"https://your-app.com/callback",
);
}
The Authorization Flow
- Redirect User: Generate an authorization URL and redirect the user.
- Handle Callback: Exchange the authorization code for an access token.
Step 1: Redirect User
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_rs::extras::oauth2::OAuth2Client;
use rustapi_rs::extras::session::Session;
async fn login(State(client): State<OAuth2Client>, session: Session) -> Redirect {
// Generate URL with CSRF protection and PKCE
let auth_request = client.authorization_url();
session.insert("oauth_state", auth_request.csrf_state.as_str()).await.expect("state should serialize");
if let Some(pkce) = auth_request.pkce_verifier.as_ref() {
session.insert("oauth_pkce_verifier", pkce.verifier()).await.expect("pkce should serialize");
}
// Redirect user
Redirect::to(auth_request.url().as_str())
}
}
Step 2: Handle Callback
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_rs::extras::oauth2::{CsrfState, OAuth2Client, PkceVerifier};
use rustapi_rs::extras::session::Session;
#[derive(Deserialize)]
struct AuthCallback {
code: String,
state: String, // CSRF token
}
async fn callback(
State(client): State<OAuth2Client>,
session: Session,
Query(params): Query<AuthCallback>,
) -> impl IntoResponse {
// 1. Validate CSRF state (unwraps kept short for the example; handle missing state gracefully in production)
let expected_state = session.get::<String>("oauth_state").await.unwrap().unwrap();
client
.validate_state(&CsrfState::new(expected_state), ¶ms.state)
.expect("invalid oauth state");
let pkce_verifier = session
.get::<String>("oauth_pkce_verifier")
.await
.unwrap()
.map(PkceVerifier::new);
// 2. Exchange code for token
let token_response = client.exchange_code(¶ms.code, pkce_verifier.as_ref()).await;
match token_response {
Ok(token_response) => {
// Success! You have an access token.
// Use it to fetch user info or store it.
println!("Access Token: {}", token_response.access_token());
// Redirect to dashboard or home
Redirect::to("/dashboard")
}
Err(e) => {
// Handle error (e.g., invalid code)
(StatusCode::BAD_REQUEST, format!("Auth failed: {}", e)).into_response()
}
}
}
}
User Information
Once you have an access token, you can fetch user details. Most providers offer a /userinfo endpoint.
#![allow(unused)]
fn main() {
// Example using reqwest (feature required)
async fn get_user_info(token: &str) -> Result<serde_json::Value, reqwest::Error> {
let client = reqwest::Client::new();
client
.get("https://www.googleapis.com/oauth2/v3/userinfo")
.bearer_auth(token)
.send()
.await?
.json()
.await
}
}
Best Practices
- State Parameter: Always use the `state` parameter to prevent CSRF attacks. RustAPI’s `authorization_url()` generates one for you.
- PKCE: Proof Key for Code Exchange (PKCE) is recommended for all OAuth2 flows, especially for public clients. RustAPI handles PKCE generation.
- Session Storage: Store the CSRF state and PKCE verifier in a secure server-side session. Pair `extras-oauth2-client` with `extras-session` for the cleanest flow.
- Secure Storage: Store tokens securely (e.g., encrypted cookies, secure session storage). Never expose access tokens in URLs or logs.
- HTTPS: OAuth2 requires HTTPS callbacks in production.
For a production-focused checklist, redirect strategy, and session integration guidance, continue with OIDC & OAuth2 in Production.
OIDC / OAuth2 in Production
This guide turns the basic OAuth2 client into a production-ready login flow.
The short version:
- use `OAuth2Client` to generate the authorization URL,
- store CSRF state and PKCE verifier in a server-side session,
- verify `state` on callback,
- exchange the code for tokens,
- rotate the application session before marking the user as authenticated.
Prerequisites
Enable both the OAuth2 client and session features on the public facade.
[dependencies]
rustapi-rs = { version = "0.1.389", features = ["extras-oauth2-client", "extras-session"] }
Configure the provider
Use one of the provider presets when possible.
#![allow(unused)]
fn main() {
use rustapi_rs::extras::oauth2::{OAuth2Client, OAuth2Config};
let config = OAuth2Config::google(
std::env::var("OAUTH_CLIENT_ID")?,
std::env::var("OAUTH_CLIENT_SECRET")?,
std::env::var("OAUTH_REDIRECT_URI")?,
)
.scope("openid")
.scope("email")
.scope("profile");
let client = OAuth2Client::new(config);
}
For non-preset providers, use OAuth2Config::custom(...).
#![allow(unused)]
fn main() {
use rustapi_rs::extras::oauth2::OAuth2Config;
let config = OAuth2Config::custom(
"https://id.example.com/oauth/authorize",
"https://id.example.com/oauth/token",
std::env::var("OAUTH_CLIENT_ID")?,
std::env::var("OAUTH_CLIENT_SECRET")?,
std::env::var("OAUTH_REDIRECT_URI")?,
);
}
Authorization redirect
The authorization handler should generate the provider URL and persist the CSRF + PKCE data in the current session.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_rs::extras::oauth2::OAuth2Client;
use rustapi_rs::extras::session::Session;
async fn oauth_login(State(client): State<OAuth2Client>, session: Session) -> Redirect {
let auth_request = client.authorization_url();
session
.insert("oauth_state", auth_request.csrf_state.as_str())
.await
.expect("state should serialize");
if let Some(pkce) = auth_request.pkce_verifier.as_ref() {
session
.insert("oauth_pkce_verifier", pkce.verifier())
.await
.expect("pkce verifier should serialize");
}
Redirect::to(auth_request.url())
}
}
Callback handling
The callback handler validates the CSRF state, exchanges the code, and upgrades the application session.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_rs::extras::oauth2::{CsrfState, OAuth2Client, PkceVerifier};
use rustapi_rs::extras::session::Session;
#[derive(Debug, Deserialize, Schema)]
struct OAuthCallback {
code: String,
state: String,
}
async fn oauth_callback(
State(client): State<OAuth2Client>,
session: Session,
Query(callback): Query<OAuthCallback>,
) -> Result<Redirect> {
let expected_state = session
.get::<String>("oauth_state")
.await?
.ok_or_else(|| ApiError::unauthorized("Missing OAuth state"))?;
client
.validate_state(&CsrfState::new(expected_state), &callback.state)
.map_err(|error| ApiError::unauthorized(error.to_string()))?;
let pkce_verifier = session
.get::<String>("oauth_pkce_verifier")
.await?
.map(PkceVerifier::new);
let tokens = client
.exchange_code(&callback.code, pkce_verifier.as_ref())
.await
.map_err(|error| ApiError::unauthorized(error.to_string()))?;
session.cycle_id().await;
session.insert("user_id", "provider-subject-here").await?;
session.insert("refresh_token", tokens.refresh_token()).await?;
session.remove("oauth_state").await;
session.remove("oauth_pkce_verifier").await;
Ok(Redirect::to("/dashboard"))
}
}
Recommended production shape
Session strategy
- Keep provider state (`oauth_state`, PKCE verifier, post-login redirect path) in the session, not in query strings.
- Rotate the app session ID after a successful login with `session.cycle_id().await`.
- Prefer `RedisSessionStore` when multiple instances share login traffic.
- Clear bootstrap OAuth keys from the session after the callback succeeds or fails.
Token handling
- Do not log raw `access_token`, `refresh_token`, or `id_token` values.
- If you only need app authentication, store the provider subject and essential claims instead of the raw access token.
- If you must keep refresh tokens, treat them like secrets: server-side only, never in frontend-readable cookies.
- Call `refresh_token(...)` only from trusted backend paths, and overwrite old refresh tokens if the provider rotates them.
Provider and redirect hygiene
- Use exact HTTPS redirect URIs in production.
- Request the minimum scopes you need.
- Pin timeouts explicitly via `OAuth2Config::timeout(...)` if your provider is slow.
- Prefer issuer/provider presets unless you fully control the custom identity server.
Identity verification
- OpenID Connect is more than “OAuth + vibes”. Validate the `id_token` against the provider’s JWKS before trusting identity claims.
- Use the provider `userinfo` endpoint only after you decide which claims are authoritative.
- Normalize external identities into your own application user model before starting long-lived sessions.
Local development
For local work, keep session cookies developer-friendly while still matching production flow structure.
#![allow(unused)]
fn main() {
use rustapi_rs::extras::session::SessionConfig;
let session_config = SessionConfig::new()
.cookie_name("rustapi_auth")
.secure(false);
}
That keeps the cookie usable over http://127.0.0.1:3000 while preserving the same handler code.
See also
CSRF Protection
Cross-Site Request Forgery (CSRF) protection for your RustAPI applications using the Double-Submit Cookie pattern.
What is CSRF?
CSRF is an attack that tricks users into submitting unintended requests. For example, a malicious website could submit a form to your API while users are logged in, performing actions without their consent.
RustAPI’s CSRF protection works by:
- Generating a cryptographic token stored in a cookie
- Requiring the same token in a request header for state-changing requests
- Rejecting requests where the cookie and header don’t match
Quick Start
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["csrf"] }
use rustapi_rs::prelude::*;
use rustapi_extras::csrf::{CsrfConfig, CsrfLayer, CsrfToken};
#[rustapi_rs::get("/form")]
async fn show_form(token: CsrfToken) -> Html<String> {
Html(format!(r#"
<form method="POST" action="/submit">
<input type="hidden" name="csrf_token" value="{}" />
<button type="submit">Submit</button>
</form>
"#, token.as_str()))
}
#[rustapi_rs::post("/submit")]
async fn handle_submit() -> &'static str {
// If we get here, CSRF validation passed!
"Form submitted successfully"
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let csrf_config = CsrfConfig::new()
.cookie_name("csrf_token")
.header_name("X-CSRF-Token");
RustApi::new()
.layer(CsrfLayer::new(csrf_config))
.mount(show_form)
.mount(handle_submit)
.run("127.0.0.1:8080")
.await
}
Configuration Options
#![allow(unused)]
fn main() {
let config = CsrfConfig::new()
// Cookie settings
.cookie_name("csrf_token") // Default: "csrf_token"
.cookie_path("/") // Default: "/"
.cookie_domain("example.com") // Default: None (same domain)
.cookie_secure(true) // Default: true (HTTPS only)
.cookie_http_only(false) // Default: false (JS needs access)
.cookie_same_site(SameSite::Strict) // Default: Strict
// Token settings
.header_name("X-CSRF-Token") // Default: "X-CSRF-Token"
.token_length(32); // Default: 32 bytes
}
How It Works
Safe Methods (No Validation)
GET, HEAD, OPTIONS, and TRACE requests are considered “safe” and don’t modify state. The CSRF middleware:
- ✅ Generates a new token if none exists
- ✅ Sets the token cookie in the response
- ✅ Does NOT validate the header
Unsafe Methods (Validation Required)
POST, PUT, PATCH, and DELETE requests require CSRF validation:
- 🔍 Reads the token from the cookie
- 🔍 Reads the expected token from the header
- ❌ If missing or mismatched → Returns `403 Forbidden`
- ✅ If valid → Proceeds to handler
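The safe/unsafe split above follows the standard HTTP notion of safe methods (RFC 9110). A minimal sketch of the classification the middleware applies:

```rust
/// HTTP methods that do not modify state and therefore skip
/// CSRF validation; everything else requires cookie + header to match.
fn is_safe_method(method: &str) -> bool {
    matches!(method, "GET" | "HEAD" | "OPTIONS" | "TRACE")
}

fn main() {
    assert!(is_safe_method("GET"));
    assert!(!is_safe_method("POST"));
    assert!(!is_safe_method("DELETE"));
}
```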
Frontend Integration
HTML Forms
For traditional form submissions, include the token as a hidden field:
<form method="POST" action="/api/submit">
<input type="hidden" name="_csrf" value="{{ csrf_token }}" />
<!-- form fields -->
<button type="submit">Submit</button>
</form>
JavaScript / AJAX
For API calls, include the token in the request header:
// Read token from cookie
function getCsrfToken() {
return document.cookie
.split('; ')
.find(row => row.startsWith('csrf_token='))
?.split('=')[1];
}
// Include in fetch requests
fetch('/api/users', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-CSRF-Token': getCsrfToken()
},
body: JSON.stringify({ name: 'John' })
});
Axios Interceptor
import axios from 'axios';
axios.interceptors.request.use(config => {
if (['post', 'put', 'patch', 'delete'].includes(config.method)) {
config.headers['X-CSRF-Token'] = getCsrfToken();
}
return config;
});
Extracting the Token in Handlers
Use the CsrfToken extractor to access the current token in your handlers:
#![allow(unused)]
fn main() {
use rustapi_extras::csrf::CsrfToken;
#[rustapi_rs::get("/api/csrf-token")]
async fn get_csrf_token(token: CsrfToken) -> Json<serde_json::Value> {
Json(serde_json::json!({
"csrf_token": token.as_str()
}))
}
}
Best Practices
1. Always Use HTTPS in Production
#![allow(unused)]
fn main() {
let config = CsrfConfig::new()
.cookie_secure(true); // Cookie only sent over HTTPS
}
2. Use Strict SameSite Policy
#![allow(unused)]
fn main() {
use cookie::SameSite;
let config = CsrfConfig::new()
.cookie_same_site(SameSite::Strict); // Most restrictive
}
3. Combine with Other Security Measures
#![allow(unused)]
fn main() {
RustApi::new()
.layer(CsrfLayer::new(csrf_config))
.layer(SecurityHeadersLayer::strict()) // Add security headers
.layer(CorsLayer::permissive()) // Configure CORS
}
4. Rotate Tokens Periodically
Consider regenerating tokens after sensitive actions:
#![allow(unused)]
fn main() {
#[rustapi_rs::post("/auth/login")]
async fn login(/* ... */) -> impl IntoResponse {
// After successful login, a new CSRF token will be
// generated on the next GET request
// ...
}
}
Testing CSRF Protection
#![allow(unused)]
fn main() {
use rustapi_testing::{TestClient, TestRequest};
#[tokio::test]
async fn test_csrf_protection() {
let app = create_app_with_csrf();
let client = TestClient::new(app);
// GET request should work and set cookie
let res = client.get("/form").await;
assert_eq!(res.status(), StatusCode::OK);
let csrf_cookie = res.headers()
.get("set-cookie")
.unwrap()
.to_str()
.unwrap();
// Extract token value
let token = csrf_cookie
.split(';')
.next()
.unwrap()
.split('=')
.nth(1)
.unwrap();
// POST without token should fail
let res = client.post("/submit").await;
assert_eq!(res.status(), StatusCode::FORBIDDEN);
// POST with correct token should succeed
let res = client.request(
TestRequest::post("/submit")
.header("Cookie", format!("csrf_token={}", token))
.header("X-CSRF-Token", token)
).await;
assert_eq!(res.status(), StatusCode::OK);
}
}
Error Handling
When CSRF validation fails, the middleware returns a JSON error response:
{
"error": {
"code": "csrf_forbidden",
"message": "CSRF token validation failed"
}
}
You can customize this by wrapping the layer with your own error handler.
Security Considerations
| Consideration | Status |
|---|---|
| Token in cookie | ✅ HttpOnly=false (JS needs access) |
| Token validation | ✅ Constant-time comparison |
| SameSite cookie | ✅ Configurable (Strict by default) |
| Secure cookie | ✅ HTTPS-only by default |
| Token entropy | ✅ 32 bytes of cryptographic randomness |
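The "constant-time comparison" row means token equality must not short-circuit on the first differing byte, since that timing difference can leak how much of a guess was correct. A minimal sketch of the idea (illustration only; real implementations typically use a vetted crate such as `subtle`):

```rust
/// Compare two tokens without early exit, so timing does not reveal
/// how many leading bytes matched. Sketch only; prefer a vetted crate.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    // Accumulate differences instead of returning on the first mismatch.
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(constant_time_eq(b"secret-token", b"secret-token"));
    assert!(!constant_time_eq(b"secret-token", b"secret-tokeX"));
    assert!(!constant_time_eq(b"short", b"longer-token"));
}
```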
See Also
- JWT Authentication - Token-based authentication
- Security Headers - Additional security layers
- CORS Configuration - Cross-origin request handling
Database Integration
RustAPI is database-agnostic, but SQLx is the recommended default for most RustAPI services because it is async-first, works naturally with State, and supports compile-time query verification.
This recipe shows how to integrate PostgreSQL/MySQL/SQLite using a shared pool, how to choose between SQLx, Diesel, and SeaORM, how to think about migrations, and which pooling practices are safest in production.
Dependencies
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["extras-sqlx"] } # Canonical facade feature for SQLx error conversion
sqlx = { version = "0.8", features = ["runtime-tokio", "tls-rustls", "postgres", "uuid"] }
serde = { version = "1", features = ["derive"] }
tokio = { version = "1", features = ["full"] }
dotenvy = "0.15"
1. Choosing SQLx vs Diesel vs SeaORM
RustAPI does not force a single database stack. Pick the tool that matches your team’s trade-offs.
| Stack | Best fit | Strengths | Watch-outs |
|---|---|---|---|
| SQLx | Default choice for most APIs | async-first, raw SQL clarity, compile-time query checks, easy State integration | you write SQL yourself |
| Diesel | teams that want schema-driven queries and strong compile-time modeling | mature ecosystem, strong query builder, great for heavily relational domains | core query execution is synchronous, so use a pool plus spawn_blocking |
| SeaORM | teams that want a higher-level async ORM | async API, entity-oriented modeling, less handwritten SQL | more abstraction, less direct control over SQL shape, no RustAPI-specific adapter layer |
Practical recommendation
- Choose SQLx when you want the most direct, idiomatic fit with RustAPI.
- Choose Diesel when your team values its schema/query-builder style enough to accept synchronous query execution boundaries.
- Choose SeaORM when entity-first ergonomics matter more than writing SQL manually.
If you are unsure, start with SQLx. It is the least surprising option for handler-first async services.
2. Migration strategy guidance
Treat schema migrations as part of application delivery, not an afterthought.
Recommended strategy by stack
- SQLx: keep migrations in a `migrations/` directory and apply them with `sqlx::migrate!()` at startup for local/dev workflows, or via a deployment step in CI/CD for production.
- Diesel: use Diesel CLI migrations as the source of truth; keep application startup focused on serving traffic rather than performing long-running schema work.
- SeaORM: use the SeaORM migration crate and run migrations as a separate deployment phase.
Production guidance
- Prefer forward-only migrations in normal delivery.
- Make destructive changes in multiple releases when possible (add column -> dual write/read -> remove old column later).
- Run migrations before routing production traffic to a new version when backward compatibility is not guaranteed.
- Keep app code tolerant of short-lived mixed-schema windows during rolling deploys.
- Seed data and schema changes should be separate concerns when possible.
For many teams, the safest pattern is:
- apply migrations,
- verify readiness,
- shift traffic,
- clean up old schema in a later deploy.
3. Connection pooling recommendations
No matter which stack you pick, the operational rule is the same: create the pool once at startup and share it through State.
Recommended defaults:
- keep one long-lived pool per database/service boundary
- never open a fresh connection per request
- size pool limits from the database server’s actual connection budget
- set `acquire_timeout` so overload fails fast instead of hanging forever
- use small but non-zero `min_connections` only when warm capacity matters
- keep transaction scopes short and never hold them across unrelated awaits
- if you have API workers plus job workers, budget pool capacity for both
As a starting point for a single service instance:
- `max_connections`: enough for peak concurrent DB work, but well below the database hard cap
- `min_connections`: `0-5` depending on cold-start sensitivity
- `acquire_timeout`: `2-5s`
- `idle_timeout`: a few minutes, unless your environment aggressively scales to zero
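As a sanity check on those numbers, a tiny helper can split a database's connection budget across service instances. This is an illustrative sketch only — `per_instance_max` is a hypothetical name, not a RustAPI or SQLx API:

```rust
/// Toy sizing helper: divide a database's connection budget across
/// service instances, after reserving headroom for migrations, admin
/// sessions, and job workers. Hypothetical helper for illustration.
fn per_instance_max(db_cap: u32, instances: u32, reserved: u32) -> u32 {
    let available = db_cap.saturating_sub(reserved);
    // Every instance gets an equal share, and at least one connection.
    (available / instances.max(1)).max(1)
}

fn main() {
    // e.g. Postgres with max_connections = 100, 4 API instances,
    // and 20 connections reserved for jobs and admin tooling:
    println!("{}", per_instance_max(100, 4, 20)); // prints 20
}
```

Feed the result into `PgPoolOptions::max_connections`, and revisit it whenever you add instances or background workers.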
If you use a synchronous driver such as Diesel, pool the connections and execute DB work with tokio::task::spawn_blocking so you do not block the async runtime.
4. Setup Connection Pool
Create the pool once at startup and share it via State. Configure pool limits appropriately.
use sqlx::postgres::PgPoolOptions;
use std::sync::Arc;
use std::time::Duration;
#[derive(Clone)]
pub struct AppState {
pub db: sqlx::PgPool,
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
dotenvy::dotenv().ok();
let db_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
// Create a connection pool with production settings
let pool = PgPoolOptions::new()
.max_connections(50) // Adjust based on DB limits
.min_connections(5) // Keep some idle connections ready
.acquire_timeout(Duration::from_secs(5)) // Fail fast if DB is overloaded
.idle_timeout(Duration::from_secs(300)) // Close idle connections
.connect(&db_url)
.await
.expect("Failed to connect to DB");
// Run migrations (optional but recommended)
// Note: requires `sqlx-cli` or `sqlx` migrate feature
sqlx::migrate!("./migrations")
.run(&pool)
.await
.expect("Failed to migrate");
let state = AppState { db: pool };
RustApi::new()
.state(state)
.route("/users", post(create_user))
.run("0.0.0.0:3000")
.await
}
5. Using the Database in Handlers
Extract the State to get access to the pool.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Deserialize, Validate)]
struct CreateUser {
#[validate(length(min = 3))]
username: String,
#[validate(email)]
email: String,
}
#[derive(Serialize, Schema)]
struct User {
id: i32,
username: String,
email: String,
}
async fn create_user(
State(state): State<AppState>,
ValidatedJson(payload): ValidatedJson<CreateUser>,
) -> Result<(StatusCode, Json<User>), ApiError> {
// SQLx query macro performs compile-time checking!
// The query is checked against your running database during compilation.
let record = sqlx::query_as!(
User,
"INSERT INTO users (username, email) VALUES ($1, $2) RETURNING id, username, email",
payload.username,
payload.email
)
.fetch_one(&state.db)
.await
// Map sqlx::Error to ApiError (feature = "sqlx" handles this automatically)
.map_err(ApiError::from)?;
Ok((StatusCode::CREATED, Json(record)))
}
}
6. Transactions
For operations involving multiple queries, use a transaction to ensure atomicity.
#![allow(unused)]
fn main() {
async fn transfer_credits(
State(state): State<AppState>,
Json(payload): Json<TransferRequest>,
) -> Result<StatusCode, ApiError> {
// Start a transaction
let mut tx = state.db.begin().await.map_err(ApiError::from)?;
// Deduct from sender
let updated = sqlx::query!(
"UPDATE accounts SET balance = balance - $1 WHERE id = $2 RETURNING balance",
payload.amount,
payload.sender_id
)
.fetch_optional(&mut *tx)
.await
.map_err(ApiError::from)?;
// Check balance
if let Some(record) = updated {
if record.balance < 0 {
// Rollback is automatic on drop, but explicit rollback is clearer
tx.rollback().await.map_err(ApiError::from)?;
return Err(ApiError::bad_request("Insufficient funds"));
}
} else {
return Err(ApiError::not_found("Sender not found"));
}
// Add to receiver
sqlx::query!(
"UPDATE accounts SET balance = balance + $1 WHERE id = $2",
payload.amount,
payload.receiver_id
)
.execute(&mut *tx)
.await
.map_err(ApiError::from)?;
// Commit transaction
tx.commit().await.map_err(ApiError::from)?;
Ok(StatusCode::OK)
}
}
7. Integration Testing with TestContainers
For testing, use testcontainers to spin up a real database instance. This ensures your queries are correct without mocking the database driver.
#![allow(unused)]
fn main() {
#[cfg(test)]
mod tests {
use super::*;
use testcontainers::{clients, images};
use rustapi_testing::TestClient;
#[tokio::test]
async fn test_create_user() {
// Start Postgres container
let docker = clients::Cli::default();
let pg = docker.run(images::postgres::Postgres::default());
let port = pg.get_host_port_ipv4(5432);
let db_url = format!("postgres://postgres:postgres@localhost:{}/postgres", port);
// Setup pool
let pool = PgPoolOptions::new().connect(&db_url).await.unwrap();
sqlx::migrate!("./migrations").run(&pool).await.unwrap();
let state = AppState { db: pool };
// Create app and client
let app = RustApi::new().state(state).route("/users", post(create_user));
let client = TestClient::new(app);
// Test request
let response = client.post("/users")
.json(&serde_json::json!({
"username": "testuser",
"email": "test@example.com"
}))
.send()
.await;
assert_eq!(response.status(), StatusCode::CREATED);
let user: User = response.json().await;
assert_eq!(user.username, "testuser");
}
}
}
Error Handling
RustAPI provides automatic conversion from sqlx::Error to ApiError when the sqlx feature is enabled.
- `RowNotFound` -> 404 Not Found
- `PoolTimedOut` -> 503 Service Unavailable
- Unique Constraint Violation -> 409 Conflict
- Check Constraint Violation -> 400 Bad Request
- Other errors -> 500 Internal Server Error (masked in production)
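The mapping above can be captured as a small pure function. The enum below is a simplified stand-in for `sqlx::Error`, used only to make the table concrete — the real conversion is provided by the framework when the `sqlx` feature is enabled:

```rust
/// Simplified stand-in for the database error cases listed above.
#[derive(Debug)]
enum DbErrorKind {
    RowNotFound,
    PoolTimedOut,
    UniqueViolation,
    CheckViolation,
    Other,
}

/// HTTP status each case maps to, mirroring the list above.
fn http_status(err: &DbErrorKind) -> u16 {
    match err {
        DbErrorKind::RowNotFound => 404,
        DbErrorKind::PoolTimedOut => 503,
        DbErrorKind::UniqueViolation => 409,
        DbErrorKind::CheckViolation => 400,
        DbErrorKind::Other => 500, // masked in production
    }
}

fn main() {
    println!("{}", http_status(&DbErrorKind::UniqueViolation)); // prints 409
}
```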
If you are using Diesel or SeaORM instead of SQLx, keep the same external error contract for handlers even though the internal database error types differ. Consistent HTTP error behavior matters more than which query builder produced the error.
Testing Strategies
RustAPI provides robust tools for testing your application, ensuring reliability from unit tests to full integration scenarios.
Dependencies
Add rustapi-testing to your Cargo.toml. It is usually added as a dev-dependency.
[dev-dependencies]
rustapi-testing = "0.1.335"
tokio = { version = "1", features = ["full"] }
Integration Testing with TestClient
The TestClient allows you to test your API handlers without binding to a network port. It interacts directly with the service layer, making tests fast and deterministic.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_testing::TestClient;
#[rustapi_rs::get("/hello")]
async fn hello() -> &'static str {
"Hello, World!"
}
#[tokio::test]
async fn test_hello_endpoint() {
// 1. Build your application
let app = RustApi::new().route("/hello", get(hello));
// 2. Create a TestClient
let client = TestClient::new(app);
// 3. Send requests
let response = client.get("/hello").send().await;
// 4. Assert response
assert_eq!(response.status(), 200);
assert_eq!(response.text().await, "Hello, World!");
}
}
Testing JSON APIs
TestClient has built-in support for JSON serialization and deserialization.
#![allow(unused)]
fn main() {
#[derive(Serialize, Deserialize, PartialEq, Debug)]
struct User {
id: u64,
name: String,
}
#[rustapi_rs::post("/users")]
async fn create_user(Json(user): Json<User>) -> Json<User> {
Json(user)
}
#[tokio::test]
async fn test_create_user() {
let app = RustApi::new().route("/users", post(create_user));
let client = TestClient::new(app);
let new_user = User { id: 1, name: "Alice".into() };
let response = client.post("/users")
.json(&new_user)
.send()
.await;
assert_eq!(response.status(), 200);
let returned_user: User = response.json().await;
assert_eq!(returned_user, new_user);
}
}
Mocking External Services
When your API calls external services (e.g., payment gateways, third-party APIs), you should mock them in tests to avoid network calls and ensure reproducibility.
rustapi-testing provides MockServer for this purpose.
#![allow(unused)]
fn main() {
use rustapi_testing::{MockServer, MockResponse};
#[tokio::test]
async fn test_external_integration() {
// 1. Start a mock server
let mock_server = MockServer::start().await;
// 2. Define an expectation
mock_server.expect(
rustapi_testing::RequestMatcher::new()
.method("GET")
.path("/external-data")
).respond_with(
MockResponse::new()
.status(200)
.body(r#"{"data": "mocked"}"#)
);
// 3. Use the mock server's URL in your app configuration
let mock_url = format!("{}{}", mock_server.base_url(), "/external-data");
// Simulating your app logic calling the external service
let client = reqwest::Client::new();
let res = client.get(&mock_url).send().await.unwrap();
assert_eq!(res.status(), 200);
let body = res.text().await.unwrap();
assert_eq!(body, r#"{"data": "mocked"}"#);
}
}
Testing Authenticated Routes
You can simulate authenticated requests by setting headers directly on the TestClient request builder.
#![allow(unused)]
fn main() {
#[tokio::test]
async fn test_protected_route() {
let app = RustApi::new().route("/protected", get(protected_handler));
let client = TestClient::new(app);
let response = client.get("/protected")
.header("Authorization", "Bearer valid_token")
.send()
.await;
assert_eq!(response.status(), 200);
}
}
Best Practices
- Keep Tests Independent: Each test should set up its own app instance and state. `TestClient` is lightweight enough for this.
- Mock I/O: Use `MockServer` for HTTP, and in-memory implementations for databases (e.g., `sqlite::memory:`) or traits for logic.
- Test Edge Cases: Don’t just test the “happy path”. Test validation errors, 404s, and error handling.
File Uploads
Handling file uploads is a common requirement. RustAPI provides a Multipart extractor to parse multipart/form-data requests.
Dependencies
Add uuid and tokio with fs features to your Cargo.toml.
[dependencies]
rustapi-rs = "0.1.335"
tokio = { version = "1", features = ["fs", "io-util"] }
uuid = { version = "1", features = ["v4"] }
Buffered Upload Example
RustAPI’s Multipart extractor currently buffers the entire request body into memory before parsing. This means it is suitable for small to medium file uploads (e.g., images, documents) but care must be taken with very large files to avoid running out of RAM.
use rustapi_rs::prelude::*;
use rustapi_rs::extract::{Multipart, DefaultBodyLimit};
use std::path::Path;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Ensure uploads directory exists
tokio::fs::create_dir_all("./uploads").await?;
println!("Starting Upload Server at http://127.0.0.1:8080");
RustApi::new()
.route("/upload", post(upload_handler))
// Increase body limit to 50MB (default is usually 2MB)
// ⚠️ IMPORTANT: Since Multipart buffers the whole body,
// setting this too high can exhaust server memory.
.layer(DefaultBodyLimit::max(50 * 1024 * 1024))
.run("127.0.0.1:8080")
.await
}
#[derive(Serialize, Schema)]
struct UploadResponse {
message: String,
files: Vec<FileResult>,
}
#[derive(Serialize, Schema)]
struct FileResult {
original_name: String,
stored_name: String,
content_type: String,
}
async fn upload_handler(mut multipart: Multipart) -> Result<Json<UploadResponse>> {
let mut uploaded_files = Vec::new();
// Iterate over the fields in the multipart form
while let Some(field) = multipart.next_field().await.map_err(|_| ApiError::bad_request("Invalid multipart"))? {
// Skip fields that are not files
if !field.is_file() {
continue;
}
let file_name = field.file_name().unwrap_or("unknown.bin").to_string();
let content_type = field.content_type().unwrap_or("application/octet-stream").to_string();
// ⚠️ Security: Never trust the user-provided filename directly!
// It could contain paths like "../../../etc/passwd".
// Always generate a safe filename or sanitize inputs.
let safe_filename = format!("{}-{}", uuid::Uuid::new_v4(), file_name);
// Option 1: Use the helper method (sanitizes filename automatically)
// field.save_to("./uploads", Some(&safe_filename)).await.map_err(|e| ApiError::internal(e.to_string()))?;
// Option 2: Manual write (gives you full control)
let data = field.bytes().await.map_err(|e| ApiError::internal(e.to_string()))?;
let path = Path::new("./uploads").join(&safe_filename);
tokio::fs::write(&path, &data).await.map_err(|e| ApiError::internal(e.to_string()))?;
println!("Saved file: {} -> {:?}", file_name, path);
uploaded_files.push(FileResult {
original_name: file_name,
stored_name: safe_filename,
content_type,
});
}
Ok(Json(UploadResponse {
message: "Upload successful".into(),
files: uploaded_files,
}))
}
Key Concepts
1. Buffering
RustAPI loads the entire multipart/form-data body into memory.
- Pros: Simple API, easy to work with.
- Cons: High memory usage for concurrent large uploads.
- Mitigation: Set a reasonable `DefaultBodyLimit` (e.g., 10MB - 100MB) to prevent DoS attacks.
2. Body Limits
The default request body limit is small (2MB) to prevent attacks. You must explicitly increase this limit for file upload routes using .layer(DefaultBodyLimit::max(size_in_bytes)).
3. Security
- Path Traversal: Malicious users can send filenames like `../../system32/cmd.exe`. Always rename files or sanitize filenames strictly.
- Content Type Validation: The `Content-Type` header is client-controlled and can be spoofed. Do not rely on it for security checks (e.g., preventing `.php` execution).
- Executable Permissions: Store uploads in a directory where script execution is disabled.
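To make the path-traversal point concrete, here is one way to sanitize a client-supplied filename. `sanitize_filename` is a hypothetical helper, not a RustAPI API — pair it with a generated UUID prefix as shown in the handler above:

```rust
/// Conservative filename sanitizer for the path-traversal risk above.
/// Hypothetical helper, not part of RustAPI.
fn sanitize_filename(name: &str) -> String {
    // Keep only the final path component, defeating "../" tricks
    // on both Unix and Windows separators.
    let base = name
        .rsplit(|c| c == '/' || c == '\\')
        .next()
        .unwrap_or("");
    // Replace anything outside a conservative character set.
    let cleaned: String = base
        .chars()
        .map(|c| {
            if c.is_ascii_alphanumeric() || matches!(c, '.' | '-' | '_') {
                c
            } else {
                '_'
            }
        })
        .collect();
    // Reject empty or dot-only results ("", ".", "..").
    if cleaned.is_empty() || cleaned.chars().all(|c| c == '.') {
        "unnamed.bin".to_string()
    } else {
        cleaned
    }
}

fn main() {
    println!("{}", sanitize_filename("../../etc/passwd")); // prints passwd
    println!("{}", sanitize_filename("..\\..\\cmd.exe")); // prints cmd.exe
}
```

Even with sanitization, prefer storing files under a server-generated name and keeping the original name only as metadata.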
Testing with cURL
You can test this endpoint using curl:
curl -X POST http://localhost:8080/upload \
-F "file1=@./image.png" \
-F "file2=@./document.pdf"
Response:
{
"message": "Upload successful",
"files": [
{
"original_name": "image.png",
"stored_name": "550e8400-e29b-41d4-a716-446655440000-image.png",
"content_type": "image/png"
},
...
]
}
Background Jobs
RustAPI provides a robust background job processing system through the rustapi-jobs crate. This allows you to offload time-consuming tasks (like sending emails, processing images, or generating reports) from the main request/response cycle, keeping your API fast and responsive.
Setup
First, add rustapi-jobs to your Cargo.toml. Since rustapi-jobs is not re-exported by the main crate by default, you must include it explicitly.
[dependencies]
rustapi-rs = "0.1"
rustapi-jobs = "0.1"
serde = { version = "1.0", features = ["derive"] }
async-trait = "0.1"
tokio = { version = "1.0", features = ["full"] }
Defining a Job
A job consists of a data structure (the payload) and an implementation of the Job trait.
#![allow(unused)]
fn main() {
use rustapi_jobs::{Job, JobContext, Result};
use serde::{Deserialize, Serialize};
use async_trait::async_trait;
// 1. Define the job payload
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct WelcomeEmailData {
pub user_id: String,
pub email: String,
}
// 2. Define the job handler struct
#[derive(Clone)]
pub struct WelcomeEmailJob;
// 3. Implement the Job trait
#[async_trait]
impl Job for WelcomeEmailJob {
// Unique name for the job type
const NAME: &'static str = "send_welcome_email";
// The payload type
type Data = WelcomeEmailData;
async fn execute(&self, ctx: JobContext, data: Self::Data) -> Result<()> {
println!("Processing job {} (attempt {})", ctx.job_id, ctx.attempt);
println!("Sending welcome email to {} ({})", data.email, data.user_id);
// Simulate work
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
Ok(())
}
}
}
Registering and Running the Queue
In your main application setup, you need to:
- Initialize the backend (Memory, Redis, or Postgres).
- Create the `JobQueue`.
- Register your job handlers.
- Start the worker loop in a background task.
- Add the `JobQueue` to your application state so handlers can use it.
use rustapi_rs::prelude::*;
use rustapi_jobs::{JobQueue, InMemoryBackend};
// use crate::jobs::{WelcomeEmailJob, WelcomeEmailData}; // Import your job
#[tokio::main]
async fn main() -> std::io::Result<()> {
// 1. Initialize backend
// For production, use Redis or Postgres backend
let backend = InMemoryBackend::new();
// 2. Create queue
let queue = JobQueue::new(backend);
// 3. Register jobs
// You must register an instance of the job handler
queue.register_job(WelcomeEmailJob).await;
// 4. Start worker in background
let queue_for_worker = queue.clone();
tokio::spawn(async move {
if let Err(e) = queue_for_worker.start_worker().await {
eprintln!("Worker failed: {}", e);
}
});
// 5. Build application
RustApi::auto()
.with_state(queue) // Inject queue into state
.serve("127.0.0.1:3000")
.await
}
Enqueueing Jobs
You can now inject the JobQueue into your request handlers using the State extractor and enqueue jobs.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_jobs::JobQueue;
#[rustapi_rs::post("/register")]
async fn register_user(
State(queue): State<JobQueue>,
Json(payload): Json<RegisterRequest>,
) -> Result<impl IntoResponse, ApiError> {
// ... logic to create user in DB ...
let user_id = "user_123".to_string(); // Simulated ID
// Enqueue the background job
// The queue will handle serialization and persistence
queue.enqueue::<WelcomeEmailJob>(WelcomeEmailData {
user_id,
email: payload.email,
}).await.map_err(|e| ApiError::internal(e.to_string()))?;
Ok(Json(json!({
"status": "registered",
"message": "Welcome email will be sent shortly"
})))
}
#[derive(Deserialize)]
struct RegisterRequest {
username: String,
email: String,
}
}
Resilience and Retries
rustapi-jobs handles failures automatically. If your execute method returns an Err, the job will be:
- Marked as failed.
- Optionally scheduled for retry with exponential backoff if retries are enabled.
- Retried up to `max_attempts` when you configure it via `EnqueueOptions`.
By default, `EnqueueOptions::new()` sets `max_attempts` to 0, so a failed job is not retried unless you explicitly opt in by calling `.max_attempts(...)` with a value greater than the number of attempts already made.
To customize retry behavior, use enqueue_opts:
#![allow(unused)]
fn main() {
use rustapi_jobs::EnqueueOptions;
queue.enqueue_opts::<WelcomeEmailJob>(
data,
EnqueueOptions::new()
.max_attempts(5) // Retry up to 5 times
.delay(std::time::Duration::from_secs(60)) // Initial delay
).await?;
}
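For intuition, exponential backoff from an initial delay can be sketched as below. This mirrors the behavior described above, but it is an assumption for illustration — the exact schedule `rustapi-jobs` uses may differ:

```rust
use std::time::Duration;

/// Illustrative exponential backoff: double the delay on each attempt,
/// capped at `max`. Not the exact rustapi-jobs schedule.
fn retry_delay(base: Duration, attempt: u32, max: Duration) -> Duration {
    // attempt 1 -> base, attempt 2 -> 2x base, attempt 3 -> 4x base, ...
    let factor = 2u32.saturating_pow(attempt.saturating_sub(1));
    base.checked_mul(factor).map_or(max, |d| d.min(max))
}

fn main() {
    // With a 60s initial delay: attempts 1, 2, 3 wait 60s, 120s, 240s.
    for attempt in 1..=3 {
        let d = retry_delay(Duration::from_secs(60), attempt, Duration::from_secs(3600));
        println!("attempt {attempt}: {d:?}");
    }
}
```

The cap matters: without it, a job with many allowed attempts can end up scheduled hours or days out.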
Backends
While InMemoryBackend is great for testing and simple apps, production systems should use persistent backends:
- Redis: High performance, good for volatile queues. Enable the `redis` feature in `rustapi-jobs`.
- Postgres: Best for reliability and transactional safety. Enable the `postgres` feature.
# In Cargo.toml
rustapi-jobs = { version = "0.1", features = ["redis"] }
Custom Extractors
Custom extractors let you move repetitive request parsing out of handlers and into reusable, typed building blocks.
Use them when a handler keeps repeating logic like:
- reading a required header,
- validating a tenant or region identifier,
- parsing a plain-text or binary body,
- loading middleware-injected context from request extensions.
Problem
Inline parsing works for one endpoint, but quickly becomes noisy when multiple handlers repeat the same header/body checks.
Solution
RustAPI exposes two traits for custom extraction:
- `FromRequestParts` for headers, path params, query params, extensions, and state
- `FromRequest` for extractors that must consume the request body
If the extractor does not need the body, prefer FromRequestParts.
Example 1: Header-backed tenant extractor
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Debug, Clone)]
struct TenantId(String);
impl TenantId {
fn as_str(&self) -> &str {
&self.0
}
}
impl FromRequestParts for TenantId {
fn from_request_parts(req: &Request) -> Result<Self> {
let header = HeaderValue::extract(req, "x-tenant-id")
.map_err(|_| ApiError::bad_request("Missing x-tenant-id header"))?;
let tenant = header.value().trim();
if tenant.is_empty() {
return Err(ApiError::bad_request("x-tenant-id cannot be empty"));
}
Ok(TenantId(tenant.to_string()))
}
}
#[derive(Serialize, Schema)]
struct ProjectList {
tenant: String,
items: Vec<String>,
}
#[rustapi_rs::get("/projects")]
async fn list_projects(tenant: TenantId) -> Json<ProjectList> {
Json(ProjectList {
tenant: tenant.as_str().to_string(),
items: vec!["alpha".into(), "beta".into()],
})
}
}
Example 2: Plain-text body extractor
When you need to consume the request body yourself, implement FromRequest instead.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Debug)]
struct PlainTextBody(String);
impl PlainTextBody {
fn into_inner(self) -> String {
self.0
}
}
impl FromRequest for PlainTextBody {
async fn from_request(req: &mut Request) -> Result<Self> {
req.load_body().await?;
let body = req
.take_body()
.ok_or_else(|| ApiError::internal("Body already consumed"))?;
let text = String::from_utf8(body.to_vec())
.map_err(|_| ApiError::bad_request("Request body must be valid UTF-8"))?;
Ok(PlainTextBody(text))
}
}
#[derive(Serialize, Schema)]
struct EchoResponse {
content: String,
}
#[rustapi_rs::post("/echo-text")]
async fn echo_text(body: PlainTextBody) -> Json<EchoResponse> {
Json(EchoResponse {
content: body.into_inner(),
})
}
}
Discussion
Pick the right trait
Use FromRequestParts when you only need request metadata:
- headers,
- query string,
- path parameters,
- request extensions,
- shared state.
Use FromRequest only when you must consume the body.
Body-consuming extractors still must come last
This rule applies to your custom body extractors too.
#![allow(unused)]
fn main() {
async fn create_note(
State(app): State<AppState>,
tenant: TenantId,
body: PlainTextBody, // body-consuming extractor goes last
) -> Result<Json<NoteResponse>> {
let _ = (&app, tenant, body);
todo!()
}
}
Middleware + extractors fit together nicely
If middleware inserts typed data into request extensions, a custom extractor can read it back using the same FromRequestParts pattern. That keeps handlers clean and avoids repeated extension lookups.
Error style
Return ApiError from your extractor when extraction fails. That keeps rejection behavior consistent with built-in extractors.
Testing
Quick manual checks:
curl -i http://127.0.0.1:8080/projects
curl -i -H "x-tenant-id: acme" http://127.0.0.1:8080/projects
curl -i -X POST http://127.0.0.1:8080/echo-text -H "content-type: text/plain" --data "hello"
Expected outcomes:
- missing
x-tenant-idreturns400, - valid header returns a JSON payload containing the tenant,
- plain-text echo returns the posted content as JSON.
Custom Middleware
Problem: You need to execute code before or after every request (e.g., logging, authentication, metrics) or modify the response.
Solution
In RustAPI, the idiomatic way to implement custom middleware is by implementing the MiddlewareLayer trait. This trait provides a safe, asynchronous interface for inspecting and modifying requests and responses.
The MiddlewareLayer Trait
The trait is defined in rustapi_core::middleware:
pub trait MiddlewareLayer: Send + Sync + 'static {
fn call(
&self,
req: Request,
next: BoxedNext,
) -> Pin<Box<dyn Future<Output = Response> + Send + 'static>>;
fn clone_box(&self) -> Box<dyn MiddlewareLayer>;
}
Basic Example: Logging Middleware
Here is a simple middleware that logs the incoming request method and URI, calls the next handler, and then logs the response status.
#![allow(unused)]
fn main() {
use rustapi_core::middleware::{MiddlewareLayer, BoxedNext};
use rustapi_core::{Request, Response};
use std::pin::Pin;
use std::future::Future;
#[derive(Clone)]
pub struct SimpleLogger;
impl MiddlewareLayer for SimpleLogger {
fn call(
&self,
req: Request,
next: BoxedNext,
) -> Pin<Box<dyn Future<Output = Response> + Send + 'static>> {
// logic before handling request
let method = req.method().clone();
let uri = req.uri().clone();
println!("Incoming: {} {}", method, uri);
Box::pin(async move {
// call the next middleware/handler
let response = next(req).await;
// logic after handling request
println!("Completed: {} {} -> {}", method, uri, response.status());
response
})
}
fn clone_box(&self) -> Box<dyn MiddlewareLayer> {
Box::new(self.clone())
}
}
}
Applying Middleware
You can apply your custom middleware using .layer():
RustApi::new()
.layer(SimpleLogger)
.route("/", get(handler))
.run("127.0.0.1:8080")
.await?;
Advanced Patterns
Configuration
You can pass configuration to your middleware struct.
#![allow(unused)]
fn main() {
#[derive(Clone)]
pub struct RateLimitLayer {
max_requests: u32,
window_secs: u64,
}
impl RateLimitLayer {
pub fn new(max_requests: u32, window_secs: u64) -> Self {
Self { max_requests, window_secs }
}
}
// impl MiddlewareLayer for RateLimitLayer ...
}
Injecting State (Extensions)
Middleware can inject data into the request’s extensions, which can then be retrieved by handlers (e.g., via FromRequest extractors).
#![allow(unused)]
fn main() {
// In your middleware
fn call(&self, mut req: Request, next: BoxedNext) -> ... {
let user_id = "user_123".to_string();
req.extensions_mut().insert(user_id);
next(req)
}
// In your handler
async fn handler(req: Request) -> ... {
let user_id = req.extensions().get::<String>().unwrap();
// ...
}
}
Short-Circuiting (Authentication)
If a request fails validation (e.g., invalid token), you can return a response immediately without calling next(req).
#![allow(unused)]
fn main() {
fn call(&self, req: Request, next: BoxedNext) -> ... {
if !is_authorized(&req) {
return Box::pin(async {
http::Response::builder()
.status(401)
.body("Unauthorized".into())
.unwrap()
});
}
next(req)
}
}
Modifying the Response
You can inspect and modify the response returned by the handler.
#![allow(unused)]
fn main() {
let response = next(req).await;
let (mut parts, body) = response.into_parts();
parts.headers.insert("X-Custom-Header", "Value".parse().unwrap());
Response::from_parts(parts, body)
}
Error Handling
RustAPI ships with a structured ApiError type and a consistent wire format for error responses. The trick is not just returning errors, but returning the right error to the client while keeping internal details out of production responses.
Problem
Without a clear error strategy, handlers tend to mix:
- business errors,
- validation errors,
- infrastructure errors, and
- internal debugging details.
That usually leads to noisy handlers and accidental leakage of sensitive information.
Solution
Use ApiError at the HTTP boundary and convert your domain/application errors into it.
Basic handler pattern
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Serialize, Schema)]
struct UserDto {
id: u64,
email: String,
}
#[rustapi_rs::get("/users/{id}")]
async fn get_user(Path(id): Path<u64>) -> Result<Json<UserDto>> {
if id == 0 {
return Err(ApiError::bad_request("id must be greater than zero"));
}
let user = find_user(id)
.await?
.ok_or_else(|| ApiError::not_found(format!("User {} not found", id)))?;
Ok(Json(user))
}
async fn find_user(_id: u64) -> Result<Option<UserDto>> {
Ok(None)
}
}
Mapping application errors into ApiError
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Debug)]
enum AppError {
UserNotFound(u64),
DuplicateEmail,
Storage(std::io::Error),
}
impl From<AppError> for ApiError {
fn from(err: AppError) -> Self {
match err {
AppError::UserNotFound(id) => {
ApiError::not_found(format!("User {} not found", id))
}
AppError::DuplicateEmail => {
ApiError::conflict("A user with that email already exists")
}
AppError::Storage(source) => {
ApiError::internal("Storage error").with_internal(source.to_string())
}
}
}
}
#[derive(Serialize, Schema)]
struct UserDto {
id: u64,
email: String,
}
#[rustapi_rs::get("/users/{id}")]
async fn get_user(Path(id): Path<u64>) -> Result<Json<UserDto>> {
let user = load_user(id).await?;
Ok(Json(user))
}
async fn load_user(id: u64) -> std::result::Result<UserDto, AppError> {
if id == 42 {
return Err(AppError::UserNotFound(id));
}
Ok(UserDto {
id,
email: "demo@example.com".into(),
})
}
}
Validation errors are already normalized
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Deserialize, Validate, Schema)]
struct CreateUser {
#[validate(email)]
email: String,
#[validate(length(min = 8))]
password: String,
}
#[rustapi_rs::post("/users")]
async fn create_user(ValidatedJson(body): ValidatedJson<CreateUser>) -> Result<StatusCode> {
let _ = body;
Ok(StatusCode::CREATED)
}
}
If validation fails, RustAPI returns 422 Unprocessable Entity automatically.
Error response shape
RustAPI serializes errors as JSON like this:
{
"error": {
"type": "not_found",
"message": "User 42 not found"
},
"error_id": "err_a1b2c3d4e5f6..."
}
Validation errors add fields:
{
"error": {
"type": "validation_error",
"message": "Request validation failed",
"fields": [
{
"field": "email",
"code": "email",
"message": "must be a valid email"
}
]
},
"error_id": "err_a1b2c3d4e5f6..."
}
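The envelope carries very little structure, which you can see by building it by hand. Real code would model it with serde types and derive `Serialize`; this sketch performs no JSON string escaping and exists only to show the shape:

```rust
/// Hand-rolled sketch of the error envelope shown above.
/// Illustration only: no escaping, no serde.
fn error_json(kind: &str, message: &str, error_id: &str) -> String {
    format!(
        "{{\"error\":{{\"type\":\"{}\",\"message\":\"{}\"}},\"error_id\":\"{}\"}}",
        kind, message, error_id
    )
}

fn main() {
    println!("{}", error_json("not_found", "User 42 not found", "err_a1b2"));
}
```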
Discussion
Use 4xx for client-facing corrections
Good candidates for direct client messages:
- `bad_request`
- `unauthorized`
- `forbidden`
- `not_found`
- `conflict`
- validation failures
Use 5xx for internal failures
For infrastructure or unexpected failures, prefer ApiError::internal(...) and attach private details with .with_internal(...).
That gives operators useful logs without sending those internals to clients.
Production masking
When RUSTAPI_ENV=production, server-side error messages are masked automatically.
Example:
- development 500 message: `Storage error`
- production 500 message: `An internal error occurred`
Validation field details still remain visible.
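The masking rule is simple enough to state as a function. This is a sketch under the assumption that only 5xx messages are masked — the framework applies the rule internally, and `client_message` is a hypothetical name:

```rust
/// Sketch of the production masking rule described above: 5xx messages
/// are replaced with a generic string, 4xx messages pass through.
fn client_message(status: u16, message: &str, production: bool) -> String {
    if production && status >= 500 {
        "An internal error occurred".to_string()
    } else {
        message.to_string()
    }
}

fn main() {
    println!("{}", client_message(500, "Storage error", true)); // masked
    println!("{}", client_message(404, "User 42 not found", true)); // passes through
}
```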
Error correlation
Every response includes an error_id. Use it to correlate:
- client reports,
- server logs,
- trace/span data,
- audit or replay workflows.
SQLx integration
When the SQLx feature is enabled, sqlx::Error converts into ApiError automatically. That means ? works naturally in many handlers while still mapping common database failures to sensible HTTP responses.
Testing
Manual checks:
curl -i http://127.0.0.1:8080/users/0
curl -i http://127.0.0.1:8080/users/42
curl -i -X POST http://127.0.0.1:8080/users -H "content-type: application/json" --data "{\"email\":\"bad\",\"password\":\"123\"}"
What to verify:
- `400` returns a `bad_request` error body
- `404` returns a `not_found` error body
- `422` returns `fields` entries
- every error payload contains `error_id`
Axum -> RustAPI Migration Guide
If you already know Axum, RustAPI will feel familiar in the right places and pleasantly less repetitive in a few others.
This guide focuses on the migration path for the most common Axum patterns:
- handlers and extractors
- app state
- route registration
- middleware
- testing
- OpenAPI/documentation
What stays familiar
The good news first: most everyday handler code barely changes.
| Axum concept | RustAPI equivalent | Notes |
|---|---|---|
| `State<T>` | `State<T>` | same mental model |
| `Path<T>` | `Path<T>` | same purpose |
| `Query<T>` | `Query<T>` | same purpose |
| `Json<T>` | `Json<T>` | same purpose |
| `Router::route()` | `RustApi::route()` | similar registration flow |
| tower layers | `.layer(...)` | middleware stack support |
| integration testing with service/router | `TestClient` | in-memory, ergonomic |
The biggest differences are:
- RustAPI encourages using `rustapi-rs` as a stable facade.
- RustAPI can auto-discover macro-annotated routes with `RustApi::auto()`.
- OpenAPI support is built directly into the framework flow.
1. Imports: switch to the facade
In Axum projects, imports are often spread across axum, tower, and OpenAPI add-ons.
In RustAPI, start from the facade:
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
}
That keeps your application code pinned to the public API surface instead of internal crates.
2. Basic handlers migrate almost directly
Axum
#![allow(unused)]
fn main() {
use axum::{extract::Path, Json};
use serde::{Deserialize, Serialize};
#[derive(Serialize)]
struct User {
id: i64,
name: String,
}
async fn get_user(Path(id): Path<i64>) -> Json<User> {
Json(User {
id,
name: "Alice".into(),
})
}
}
RustAPI
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Serialize, Schema)]
struct User {
id: i64,
name: String,
}
#[rustapi_rs::get("/users/{id}")]
async fn get_user(Path(id): Path<i64>) -> Json<User> {
Json(User {
id,
name: "Alice".into(),
})
}
}
Migration note
- The extractor shape is essentially the same.
- Add `Schema` when you want the type represented in generated OpenAPI docs.
- RustAPI route macros use `"/users/{id}"` path syntax.
3. App bootstrap: Router -> RustApi
Axum
#![allow(unused)]
fn main() {
use axum::{routing::get, Router};
let app = Router::new().route("/users/:id", get(get_user));
}
RustAPI
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
let app = RustApi::new().route("/users/{id}", get(get_user));
}
Migration note
- The conceptual shape is the same.
- Path parameters use `{id}` instead of `:id`.
- If you annotate handlers with route macros, you can often skip manual registration and use `RustApi::auto()`.
4. Auto-registration can replace manual route wiring
This is one of the biggest quality-of-life upgrades when moving from Axum.
use rustapi_rs::prelude::*;
#[rustapi_rs::get("/health")]
async fn health() -> &'static str {
"ok"
}
#[rustapi_rs::get("/users/{id}")]
async fn get_user(Path(id): Path<i64>) -> Json<i64> {
Json(id)
}
#[rustapi_rs::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
RustApi::auto().run("127.0.0.1:8080").await
}
If your Axum app has a lot of repetitive Router::new().route(...).route(...).route(...) setup, this is where some boilerplate quietly disappears into the floorboards.
5. State injection is very similar
Axum
#![allow(unused)]
fn main() {
use axum::{extract::State, routing::get, Router};
use std::sync::Arc;
#[derive(Clone)]
struct AppState {
db: Arc<String>,
}
async fn users(State(state): State<AppState>) -> String {
state.db.to_string()
}
let app = Router::new().route("/users", get(users)).with_state(AppState {
db: Arc::new("db".into()),
});
}
RustAPI
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use std::sync::Arc;
#[derive(Clone)]
struct AppState {
db: Arc<String>,
}
#[rustapi_rs::get("/users")]
async fn users(State(state): State<AppState>) -> String {
state.db.to_string()
}
let app = RustApi::new()
.state(AppState {
db: Arc::new("db".into()),
})
.route("/users", get(users));
}
Migration note
- Keep your state `Clone + Send + Sync`.
- The usual Axum pattern of storing cheap-to-clone `Arc<_>` fields still applies nicely.
6. Extractor migration map
For common endpoint code, the mapping is straightforward.
| Axum | RustAPI | Notes |
|---|---|---|
| `State<T>` | `State<T>` | same pattern |
| `Path<T>` | `Path<T>` | same pattern |
| `Query<T>` | `Query<T>` | same pattern |
| `Json<T>` | `Json<T>` | same pattern |
| custom `FromRequestParts` | custom `FromRequestParts` | same idea for non-body extraction |
| custom `FromRequest` | custom `FromRequest` | use for body-consuming extractors |
Important RustAPI rule
Body-consuming extractors such as Json<T>, Body, ValidatedJson<T>, and Multipart must be the last handler parameter.
#![allow(unused)]
fn main() {
#[rustapi_rs::post("/users/{id}")]
async fn update_user(
State(_state): State<AppState>,
Path(_id): Path<i64>,
Json(_body): Json<User>,
) -> Result<()> {
Ok(())
}
}
7. Middleware: tower mindset, RustAPI entry point
If you are coming from Axum middleware, the main mental model still fits: request goes in, response comes out, layers wrap handlers.
Apply middleware with:
RustApi::new()
.layer(SimpleLogger)
.route("/users", get(users));
Migration note
- The middleware shape is not a drop-in copy of Axum’s tower APIs.
- For simple request/response transformations, prefer RustAPI interceptors when they are sufficient; they are lighter than a full middleware layer.
- For a dedicated middleware walkthrough, see Custom Middleware.
8. Error handling becomes more uniform
Axum applications often build custom response tuples or custom error enums. That still works conceptually, but RustAPI leans toward ApiError for the common cases.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[rustapi_rs::get("/users/{id}")]
#[rustapi_rs::errors(404 = "User not found")]
async fn get_user(Path(id): Path<i64>) -> Result<Json<User>> {
if id == 0 {
return Err(ApiError::not_found("User not found"));
}
Ok(Json(User {
id,
name: "Alice".into(),
}))
}
}
Migration note
- `#[errors(...)]` documents the OpenAPI surface.
- Your handler still needs to return the actual runtime error.
- In production, RustAPI masks internal 5xx details automatically.
9. OpenAPI is no longer a side quest
In Axum, OpenAPI commonly arrives through extra libraries and extra setup.
In RustAPI, it is part of the main story:
- derive `Schema` for DTOs
- annotate handlers with `#[get]`, `#[post]`, etc.
- optionally add `#[tag]`, `#[summary]`, `#[description]`, `#[param]`, and `#[errors]`
- serve docs automatically through the app flow
#![allow(unused)]
fn main() {
#[derive(Serialize, Schema)]
struct User {
id: i64,
name: String,
}
#[rustapi_rs::get("/users/{id}")]
#[rustapi_rs::tag("Users")]
#[rustapi_rs::summary("Get user by ID")]
#[rustapi_rs::errors(404 = "User not found")]
async fn get_user(Path(id): Path<i64>) -> Result<Json<User>> {
Ok(Json(User {
id,
name: "Alice".into(),
}))
}
}
If you are migrating from Axum plus a third-party OpenAPI stack, consolidating those concerns in one framework usually makes the codebase easier to explain to Future You™.
10. Testing migration: service tests -> TestClient
RustAPI test style
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_testing::TestClient;
#[rustapi_rs::get("/hello")]
async fn hello() -> &'static str {
"hello"
}
#[tokio::test]
async fn test_hello() {
let app = RustApi::new().route("/hello", get(hello));
let client = TestClient::new(app);
let response = client.get("/hello").send().await;
assert_eq!(response.status(), 200);
}
}
Migration note
- `TestClient` exercises the app in memory, without binding a socket.
- This is a good destination for many Axum integration tests that currently go through a service stack manually.
11. Practical migration checklist
Use this order for a low-drama migration:
- Replace Axum imports with `rustapi_rs::prelude::*` where possible.
- Change route path syntax from `:id` to `{id}`.
- Move shared dependencies into `State<T>`.
- Convert handlers one endpoint at a time.
- Add `Schema` derives to DTOs that should appear in OpenAPI.
- Replace manual route tables with route macros and `RustApi::auto()` when it reduces boilerplate.
- Port middleware selectively instead of all at once.
- Replace service-level tests with `TestClient` where it simplifies setup.
12. A small before/after mental model
Axum mindset
- compose a `Router`
- attach routes manually
- bolt on docs separately
- manage state and layers around the router
- bolt on docs separately
- manage state and layers around the router
RustAPI mindset
- write handler-first code
- annotate routes directly
- let `RustApi::auto()` discover them when useful
- keep docs and route metadata close to the handler
Related reading
Actix-web -> RustAPI Migration Guide
If you already know Actix-web, RustAPI will feel familiar in a few core areas while removing some of the ceremony around route registration and OpenAPI integration.
This guide focuses on the migration path for the most common Actix-web patterns:
- handlers and extractors
- app state
- route registration
- middleware
- testing
- OpenAPI/documentation
What stays familiar
The good news first: the everyday endpoint concepts map cleanly.
| Actix-web concept | RustAPI equivalent | Notes |
|---|---|---|
| `web::Data<T>` | `State<T>` | shared application state |
| `web::Path<T>` | `Path<T>` | typed path extraction |
| `web::Query<T>` | `Query<T>` | typed query extraction |
| `web::Json<T>` | `Json<T>` | JSON body extraction |
| `App::route()` / `.service()` | `RustApi::route()` / route macros | both support explicit routing |
| `wrap(...)` middleware | `.layer(...)` | middleware stack support |
| `actix_web::test` helpers | `rustapi_testing::TestClient` | in-memory HTTP-style tests |
The biggest differences are:
- RustAPI encourages application code to import from the `rustapi-rs` facade.
- RustAPI can auto-discover macro-annotated routes with `RustApi::auto()`.
- OpenAPI support is designed to live close to handlers instead of being bolted on later.
1. Imports: switch to the facade
Actix-web applications usually import directly from actix_web.
In RustAPI, start from the public facade:
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
}
That keeps your application code aligned with RustAPI’s stable public surface instead of internal implementation crates.
2. Basic handlers migrate directly
Actix-web
#![allow(unused)]
fn main() {
use actix_web::{get, web, Responder};
use serde::Serialize;
#[derive(Serialize)]
struct User {
id: i64,
name: String,
}
#[get("/users/{id}")]
async fn get_user(id: web::Path<i64>) -> impl Responder {
let id = id.into_inner();
web::Json(User {
id,
name: "Alice".into(),
})
}
}
RustAPI
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Serialize, Schema)]
struct User {
id: i64,
name: String,
}
#[rustapi_rs::get("/users/{id}")]
async fn get_user(Path(id): Path<i64>) -> Json<User> {
Json(User {
id,
name: "Alice".into(),
})
}
}
Migration note
- The path syntax is already `{id}` in both ecosystems, so that part stays pleasantly boring.
- Add `Schema` when the type should appear in generated OpenAPI docs.
- RustAPI handler signatures stay compact and keep extractor types explicit.
3. App bootstrap: App -> RustApi
Actix-web
use actix_web::{web, App, HttpServer};
#[actix_web::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(|| {
App::new().route("/users/{id}", web::get().to(get_user))
})
.bind(("127.0.0.1", 8080))?
.run()
.await
}
RustAPI
use rustapi_rs::prelude::*;
#[rustapi_rs::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
RustApi::new()
.route("/users/{id}", get(get_user))
.run("127.0.0.1:8080")
.await
}
Migration note
- `RustApi::new()` is the main application entry point.
- `RustApi::route()` is the closest equivalent to explicit Actix route registration.
- For macro-annotated handlers, `RustApi::auto()` can remove repetitive wiring.
4. Auto-registration can replace repetitive .service(...)
If your Actix-web app registers many handlers manually, RustAPI can let the route macros do more of the work.
use rustapi_rs::prelude::*;
#[rustapi_rs::get("/health")]
async fn health() -> &'static str {
"ok"
}
#[rustapi_rs::get("/users/{id}")]
async fn get_user(Path(id): Path<i64>) -> Json<i64> {
Json(id)
}
#[rustapi_rs::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
RustApi::auto().run("127.0.0.1:8080").await
}
This is where a wall of .service(...) calls starts to quietly disappear. Your future diff reviews may even send a thank-you card.
5. State injection: web::Data<T> -> State<T>
Actix-web
#![allow(unused)]
fn main() {
use actix_web::{web, App, HttpServer, Responder};
use std::sync::Arc;
#[derive(Clone)]
struct AppState {
db: Arc<String>,
}
async fn users(state: web::Data<AppState>) -> impl Responder {
state.db.to_string()
}
let state = AppState {
db: Arc::new("db".into()),
};
}
RustAPI
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use std::sync::Arc;
#[derive(Clone)]
struct AppState {
db: Arc<String>,
}
#[rustapi_rs::get("/users")]
async fn users(State(state): State<AppState>) -> String {
state.db.to_string()
}
let app = RustApi::new()
.state(AppState {
db: Arc::new("db".into()),
})
.route("/users", get(users));
}
Migration note
- Keep shared state `Clone + Send + Sync`.
- Cheap-to-clone `Arc<_>` fields remain the right pattern for shared dependencies.
- Instead of wrapping state in `web::Data<T>`, RustAPI stores the state directly and extracts it with `State<T>`.
6. Extractor migration map
| Actix-web | RustAPI | Notes |
|---|---|---|
| `web::Data<T>` | `State<T>` | shared app state |
| `web::Path<T>` | `Path<T>` | typed path extraction |
| `web::Query<T>` | `Query<T>` | typed query extraction |
| `web::Json<T>` | `Json<T>` | body extraction |
| custom request extractor | `FromRequestParts` / `FromRequest` | choose based on body usage |
Important RustAPI rule
Body-consuming extractors such as Json<T>, Body, ValidatedJson<T>, AsyncValidatedJson<T>, and Multipart must be the last handler parameter.
#![allow(unused)]
fn main() {
#[rustapi_rs::post("/users/{id}")]
async fn update_user(
State(_state): State<AppState>,
Path(_id): Path<i64>,
Json(_body): Json<User>,
) -> Result<()> {
Ok(())
}
}
7. Middleware: wrap(...) mindset, RustAPI entry point
Actix-web middleware and RustAPI middleware share the same big-picture mental model: requests go in, responses come out, and the middleware stack wraps the handler.
Apply middleware with:
RustApi::new()
.layer(RequestIdLayer::new())
.layer(TracingLayer::new())
.route("/users", get(users));
Migration note
- Use `.layer(...)` for full middleware wrapping behavior.
- For lightweight request/response transformations, prefer interceptors when they are sufficient; they are cheaper than full middleware.
- Middleware layering order matters, so keep observability/auth/retry ordering intentional.
8. Error handling becomes more uniform
Actix-web often leans on ResponseError, HttpResponse, or custom response builders. RustAPI keeps the same flexibility, but the common path is ApiError.
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[rustapi_rs::get("/users/{id}")]
#[rustapi_rs::errors(404 = "User not found")]
async fn get_user(Path(id): Path<i64>) -> Result<Json<User>> {
if id == 0 {
return Err(ApiError::not_found("User not found"));
}
Ok(Json(User {
id,
name: "Alice".into(),
}))
}
}
Migration note
- `#[errors(...)]` documents the OpenAPI response surface.
- Your handler still needs to return the matching runtime error.
- In production, RustAPI masks internal 5xx details automatically.
9. OpenAPI moves closer to the handler
In Actix-web projects, OpenAPI is often layered in through separate crates and extra registration code.
In RustAPI, it becomes part of the main handler workflow:
- derive `Schema` for DTOs
- annotate handlers with `#[get]`, `#[post]`, and friends
- optionally add `#[tag]`, `#[summary]`, `#[description]`, `#[param]`, and `#[errors]`
- serve docs through the app configuration
#![allow(unused)]
fn main() {
#[derive(Serialize, Schema)]
struct User {
id: i64,
name: String,
}
#[rustapi_rs::get("/users/{id}")]
#[rustapi_rs::tag("Users")]
#[rustapi_rs::summary("Get user by ID")]
#[rustapi_rs::errors(404 = "User not found")]
async fn get_user(Path(id): Path<i64>) -> Result<Json<User>> {
Ok(Json(User {
id,
name: "Alice".into(),
}))
}
}
Ordinary path and query parameters are inferred into OpenAPI automatically, so #[param(...)] is mainly for path-parameter schema overrides.
10. Testing migration: actix_web::test -> TestClient
RustAPI test style
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
use rustapi_testing::TestClient;
#[rustapi_rs::get("/hello")]
async fn hello() -> &'static str {
"hello"
}
#[tokio::test]
async fn test_hello() {
let app = RustApi::new().route("/hello", get(hello));
let client = TestClient::new(app);
let response = client.get("/hello").send().await;
assert_eq!(response.status(), 200);
}
}
Migration note
- `TestClient` exercises the application in memory without binding a socket.
- This is a good replacement for many Actix integration tests that currently build `App` instances plus test harness glue.
11. Practical migration checklist
Use this order for a low-drama migration:
- Replace handler imports with `use rustapi_rs::prelude::*` on the RustAPI side.
- Port shared dependencies from `web::Data<T>` to `State<T>`.
- Convert handlers one endpoint at a time.
- Add `Schema` derives to DTOs that should appear in OpenAPI.
- Replace repetitive `.service(...)` registration with route macros and `RustApi::auto()` when it reduces boilerplate.
- Port middleware selectively instead of all at once.
- Replace Actix test harness setup with `TestClient` where it simplifies coverage.
- Add production defaults, tracing, and health probes once the endpoint layer is stable.
12. Mental model shift
Actix-web mindset
- build an `App`
- register routes and services explicitly
- add middleware with `wrap(...)`
- extend docs/testing with adjacent tooling
RustAPI mindset
- write handler-first code
- annotate routes directly
- let `RustApi::auto()` discover them when useful
- keep docs and route metadata close to handlers
Related reading
- Macro Attribute Reference
- Custom Extractors
- Error Handling
- Middleware Debugging
- Recommended Production Baseline
Advanced Middleware: Rate Limiting, Caching, and Deduplication
As your API grows, you’ll need to protect it from abuse and optimize performance. RustAPI provides a suite of advanced middleware in rustapi-extras to handle these concerns efficiently.
These patterns are essential for the “Enterprise Platform” learning path and high-traffic services.
Prerequisites
Add the rustapi-extras crate with the necessary features to your Cargo.toml.
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["full"] }
# OR cherry-pick features
# rustapi-extras = { version = "0.1.335", features = ["rate-limit", "dedup", "cache"] }
Rate Limiting
Rate limiting protects your API from being overwhelmed by too many requests from a single client. It uses a “Token Bucket” or “Fixed Window” algorithm to enforce limits.
How it works
The RateLimitLayer tracks request counts per IP address. When a limit is exceeded, it returns 429 Too Many Requests with a Retry-After header.
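The windowed counting behind this can be sketched in plain Rust. This is an illustrative model of the fixed-window variant, not the real layer's implementation; the real middleware also emits the `X-RateLimit-*` headers and `Retry-After`:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Minimal fixed-window counter, sketching per-client rate limiting.
struct FixedWindow {
    limit: u32,
    window: Duration,
    counts: HashMap<String, (Instant, u32)>, // ip -> (window start, count)
}

impl FixedWindow {
    fn new(limit: u32, window: Duration) -> Self {
        Self { limit, window, counts: HashMap::new() }
    }

    /// Returns true if the request is allowed, false if it should get a 429.
    fn allow(&mut self, ip: &str, now: Instant) -> bool {
        let entry = self.counts.entry(ip.to_string()).or_insert((now, 0));
        if now.duration_since(entry.0) >= self.window {
            *entry = (now, 0); // window expired: start a fresh one
        }
        entry.1 += 1;
        entry.1 <= self.limit
    }
}

fn main() {
    let mut rl = FixedWindow::new(2, Duration::from_secs(60));
    let t0 = Instant::now();
    assert!(rl.allow("1.2.3.4", t0));
    assert!(rl.allow("1.2.3.4", t0));
    assert!(!rl.allow("1.2.3.4", t0)); // third request in the window: rejected
    assert!(rl.allow("5.6.7.8", t0));  // other clients are unaffected
}
```

A token-bucket variant differs only in refilling capacity continuously instead of resetting it per window.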
Usage
use rustapi_rs::prelude::*;
use rustapi_extras::rate_limit::RateLimitLayer;
use std::time::Duration;
fn main() {
let app = RustApi::new()
.layer(
RateLimitLayer::new(100, Duration::from_secs(60)) // 100 requests per minute
)
.route("/", get(handler));
// ... run app
}
The middleware automatically adds standard headers to responses:
- `X-RateLimit-Limit`: The maximum number of requests allowed.
- `X-RateLimit-Remaining`: The number of requests remaining in the current window.
- `X-RateLimit-Reset`: The timestamp when the window resets.
Request Deduplication
In distributed systems, clients may retry requests that have already been processed (e.g., due to network timeouts). Deduplication ensures that non-idempotent operations (like payments) are processed only once.
How it works
The DedupLayer checks for an Idempotency-Key header. If a request with the same key is seen within the TTL window, it returns 409 Conflict.
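The key check itself is a small TTL-bounded set. As an illustrative sketch (not the real layer's storage), the first sight of a key is accepted and a repeat within the TTL is flagged as a duplicate:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Sketch of the idempotency-key check behind request deduplication.
struct DedupStore {
    ttl: Duration,
    seen: HashMap<String, Instant>, // key -> first time seen
}

impl DedupStore {
    fn new(ttl: Duration) -> Self {
        Self { ttl, seen: HashMap::new() }
    }

    /// Returns true if the key is fresh, false if it is a duplicate (409).
    fn check(&mut self, key: &str, now: Instant) -> bool {
        match self.seen.get(key) {
            Some(&first) if now.duration_since(first) < self.ttl => false,
            _ => {
                self.seen.insert(key.to_string(), now);
                true
            }
        }
    }
}

fn main() {
    let mut store = DedupStore::new(Duration::from_secs(300));
    let t = Instant::now();
    assert!(store.check("key-1", t));  // first attempt: processed
    assert!(!store.check("key-1", t)); // retry within TTL: duplicate
    assert!(store.check("key-1", t + Duration::from_secs(301))); // TTL expired
}
```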
Usage
use rustapi_rs::prelude::*;
use rustapi_extras::dedup::DedupLayer;
use std::time::Duration;
fn main() {
let app = RustApi::new()
.layer(
DedupLayer::new()
.header_name("X-Idempotency-Key") // Optional: Custom header name
.ttl(Duration::from_secs(300)) // 5 minutes TTL
)
.route("/payments", post(payment_handler));
// ... run app
}
Clients should generate a unique UUID for each operation and send it in the Idempotency-Key header.
Response Caching
Caching can significantly reduce load on your servers by serving stored responses for identical requests.
How it works
The CacheLayer stores successful responses in memory based on the request method and URI. Subsequent requests are served from the cache until the TTL expires.
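The keying and expiry logic can be modeled in a few lines. This is an illustrative sketch of the method-plus-URI keying and TTL check, not the real layer's data structures:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Sketch of a response cache keyed by method + URI, with TTL-based expiry.
struct ResponseCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, String)>, // key -> (stored at, body)
}

impl ResponseCache {
    fn key(method: &str, uri: &str) -> String {
        format!("{} {}", method, uri)
    }

    /// Returns the cached body while it is still fresh (a HIT), else None.
    fn get(&self, method: &str, uri: &str, now: Instant) -> Option<&str> {
        self.entries.get(&Self::key(method, uri)).and_then(|(at, body)| {
            (now.duration_since(*at) < self.ttl).then(|| body.as_str())
        })
    }

    fn put(&mut self, method: &str, uri: &str, body: String, now: Instant) {
        self.entries.insert(Self::key(method, uri), (now, body));
    }
}

fn main() {
    let mut cache = ResponseCache { ttl: Duration::from_secs(60), entries: HashMap::new() };
    let t = Instant::now();
    assert!(cache.get("GET", "/heavy", t).is_none()); // MISS
    cache.put("GET", "/heavy", "result".into(), t);
    assert_eq!(cache.get("GET", "/heavy", t), Some("result")); // HIT
    assert!(cache.get("GET", "/heavy", t + Duration::from_secs(61)).is_none()); // expired
}
```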
Usage
use rustapi_rs::prelude::*;
use rustapi_extras::cache::CacheLayer;
use std::time::Duration;
fn main() {
let app = RustApi::new()
.layer(
CacheLayer::new()
.ttl(Duration::from_secs(60)) // Cache for 60 seconds
.add_method("GET") // Cache GET requests
.add_method("HEAD") // Cache HEAD requests
)
.route("/heavy-computation", get(heavy_handler));
// ... run app
}
Cached responses include an X-Cache: HIT header. Original responses have X-Cache: MISS.
Combining Middleware
You can combine these layers to create a robust defense-in-depth strategy.
#![allow(unused)]
fn main() {
let app = RustApi::new()
// 1. Rate Limit (Outer): Reject excessive traffic first
.layer(RateLimitLayer::new(1000, Duration::from_secs(60)))
// 2. Deduplication: Prevent double-processing
.layer(DedupLayer::new())
// 3. Cache: Serve static/computed content quickly
.layer(CacheLayer::new().ttl(Duration::from_secs(30)))
.route("/", get(handler));
}
Note: Order matters! Placing Rate Limit first saves resources by rejecting requests before they hit the cache or application logic.
Real-time Chat (WebSockets)
WebSockets allow full-duplex communication between the client and server. RustAPI leverages the rustapi-ws crate (based on tungstenite and tokio) to make this easy.
Dependencies
[dependencies]
rustapi-ws = "0.1.335"
tokio = { version = "1", features = ["sync", "macros"] }
futures = "0.3"
The Upgrade Handler
WebSocket connections start as HTTP requests. We “upgrade” them using the WebSocket extractor.
#![allow(unused)]
fn main() {
use rustapi_ws::{WebSocket, WebSocketStream, Message};
use rustapi_rs::prelude::*;
use std::sync::Arc;
use tokio::sync::broadcast;
use futures::stream::StreamExt;
// Shared state for broadcasting messages to all connected clients
pub struct AppState {
pub tx: broadcast::Sender<String>,
}
async fn ws_handler(
ws: WebSocket,
State(state): State<Arc<AppState>>,
) -> impl IntoResponse {
// Finalize the upgrade and spawn the socket handler
ws.on_upgrade(|socket| handle_socket(socket, state))
}
}
Handling the Connection
#![allow(unused)]
fn main() {
async fn handle_socket(socket: WebSocketStream, state: Arc<AppState>) {
// Split the socket into a sender and receiver
let (mut sender, mut receiver) = socket.split();
// Subscribe to the global broadcast channel
let mut rx = state.tx.subscribe();
// Spawn a task to forward broadcast messages to this client
let mut send_task = tokio::spawn(async move {
while let Ok(msg) = rx.recv().await {
// If the client disconnects, this will fail and we break
if sender.send(Message::text(msg)).await.is_err() {
break;
}
}
});
// Handle incoming messages from THIS client
let mut recv_task = tokio::spawn(async move {
while let Some(Ok(msg)) = receiver.next().await {
match msg {
Message::Text(text) => {
println!("Received message: {}", text);
// Broadcast it to everyone else
let _ = state.tx.send(format!("User says: {}", text));
}
Message::Close(_) => break,
_ => {}
}
}
});
// Wait for either task to finish (disconnection)
tokio::select! {
_ = (&mut send_task) => recv_task.abort(),
_ = (&mut recv_task) => send_task.abort(),
};
}
}
Initialization
#[tokio::main]
async fn main() {
// Create a broadcast channel with capacity of 100 messages
let (tx, _rx) = broadcast::channel(100);
let state = Arc::new(AppState { tx });
let app = RustApi::new()
.state(state)
.route("/ws", get(ws_handler));
app.run("0.0.0.0:3000").await.unwrap();
}
Client-Side Testing
You can test directly from the browser console with JavaScript. Note that send must wait for the connection to open, otherwise the browser throws an InvalidStateError:
let ws = new WebSocket("ws://localhost:3000/ws");
ws.onopen = () => ws.send("Hello from JS!");
ws.onmessage = (event) => {
console.log("Message from server:", event.data);
};
Advanced Patterns
- User Authentication: Use the same `AuthUser` extractor in the `ws_handler`. If authentication fails, return an error before calling `ws.on_upgrade`.
- Ping/Pong: Browsers and load balancers kill idle connections, so implement a heartbeat mechanism to keep the connection alive. `rustapi-ws` handles low-level ping/pong frames automatically in many cases, but application-level pings are also robust.
Server-Side Rendering (SSR)
While RustAPI excels at building JSON APIs, it also supports server-side rendering using the rustapi-view crate, which leverages the Tera template engine (inspired by Jinja2).
Dependencies
Add the following to your Cargo.toml:
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["view"] }
serde = { version = "1.0", features = ["derive"] }
Creating Templates
Create a templates directory in your project root.
templates/base.html (The layout):
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>{% block title %}My App{% endblock %}</title>
</head>
<body>
<nav>
<a href="/">Home</a>
<a href="/about">About</a>
</nav>
<main>
{% block content %}{% endblock %}
</main>
<footer>
© 2026 RustAPI
</footer>
</body>
</html>
templates/index.html (The page):
{% extends "base.html" %}
{% block title %}Home - {{ app_name }}{% endblock %}
{% block content %}
<h1>Welcome, {{ user.name }}!</h1>
{% if user.is_admin %}
<p>You have admin privileges.</p>
{% endif %}
<h2>Latest Items</h2>
<ul>
{% for item in items %}
<li>{{ item }}</li>
{% endfor %}
</ul>
{% endblock %}
Handling Requests
In your main.rs, initialize the Templates engine and inject it into the application state. Handlers can then extract it using State<Templates>.
use rustapi_rs::prelude::*;
use rustapi_view::{View, Templates};
use serde::Serialize;
#[derive(Serialize)]
struct User {
name: String,
is_admin: bool,
}
#[derive(Serialize)]
struct HomeContext {
app_name: String,
user: User,
items: Vec<String>,
}
#[rustapi_rs::get("/")]
async fn index(templates: State<Templates>) -> View<HomeContext> {
let context = HomeContext {
app_name: "My Awesome App".to_string(),
user: User {
name: "Alice".to_string(),
is_admin: true,
},
items: vec!["Apple".to_string(), "Banana".to_string(), "Cherry".to_string()],
};
// Render the "index.html" template with the context
View::render(&templates, "index.html", context).await
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// 1. Initialize Template Engine
// Loads all .html files from the "templates" directory
let templates = Templates::new("templates/**/*.html")?;
// 2. Add to State
let app = RustApi::new()
.state(templates)
.route("/", get(index));
println!("Listening on http://localhost:3000");
app.run("0.0.0.0:3000").await.unwrap();
Ok(())
}
Template Reloading
In Debug mode (cargo run), rustapi-view automatically reloads templates from disk on every request. This means you can edit your .html files and refresh the browser to see changes instantly without recompiling.
In Release mode (cargo run --release), templates are compiled and cached for maximum performance.
Asset Serving
To serve CSS, JS, and images, use serve_static on the RustApi builder.
let app = RustApi::new()
.state(templates)
.route("/", get(index))
.serve_static("/assets", "assets"); // Serves files from ./assets at /assets
AI Integration
RustAPI offers native support for building AI-friendly APIs using the rustapi-toon crate. This allows you to serve optimized content for Large Language Models (LLMs) while maintaining standard JSON responses for traditional clients.
The Problem: Token Costs
LLMs like GPT-4, Claude, and Gemini charge by the token. Standard JSON is verbose, containing many structural characters (`"`, `:`, `{`, `}`) that count towards this limit.
JSON (55 tokens):
[
{"id": 1, "role": "admin", "active": true},
{"id": 2, "role": "user", "active": true}
]
TOON (32 tokens):
users[2]{id,role,active}:
1,admin,true
2,user,true
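As an illustrative sketch (not the real `rustapi_toon` serializer), the tabular encoding shown above amounts to one header line declaring the field names followed by comma-separated records:

```rust
// Illustrative TOON-style encoder: `name[row_count]{fields}:` then one
// comma-separated line per record. Real serialization is schema-driven.
fn to_toon(name: &str, fields: &[&str], rows: &[Vec<String>]) -> String {
    let mut out = format!("{}[{}]{{{}}}:\n", name, rows.len(), fields.join(","));
    for row in rows {
        out.push_str(&row.join(","));
        out.push('\n');
    }
    out
}

fn main() {
    let rows = vec![
        vec!["1".into(), "admin".into(), "true".into()],
        vec!["2".into(), "user".into(), "true".into()],
    ];
    let toon = to_toon("users", &["id", "role", "active"], &rows);
    // Matches the example above: the field names appear once instead of per row.
    assert_eq!(toon, "users[2]{id,role,active}:\n1,admin,true\n2,user,true\n");
}
```

The token savings come from stating the schema once rather than repeating keys and quotes on every record.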
The Solution: Content Negotiation
RustAPI uses the Accept header to decide which format to return.
- `Accept: application/json` -> Returns JSON.
- `Accept: application/toon` -> Returns TOON.
- `Accept: application/llm` (custom) -> Returns TOON.
This is handled automatically by the LlmResponse<T> type.
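The decision itself is small. A minimal sketch of the mapping above, with `pick_format` as a hypothetical helper name (the real logic lives inside the response type):

```rust
// Sketch of content negotiation on the Accept header. Unknown or missing
// Accept values fall back to JSON for traditional clients.
fn pick_format(accept: Option<&str>) -> &'static str {
    match accept {
        Some("application/toon") | Some("application/llm") => "toon",
        _ => "json",
    }
}

fn main() {
    assert_eq!(pick_format(Some("application/toon")), "toon");
    assert_eq!(pick_format(Some("application/llm")), "toon");
    assert_eq!(pick_format(Some("application/json")), "json");
    assert_eq!(pick_format(None), "json");
}
```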
Dependencies
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["toon"] }
serde = { version = "1.0", features = ["derive"] }
Implementation
use rustapi_rs::prelude::*;
use rustapi_toon::LlmResponse; // Handles negotiation
use serde::Serialize;
#[derive(Serialize)]
struct User {
id: u32,
username: String,
role: String,
}
// Simple handler returning a list of users
#[rustapi_rs::get("/users")]
async fn get_users() -> LlmResponse<Vec<User>> {
let users = vec![
User { id: 1, username: "Alice".into(), role: "admin".into() },
User { id: 2, username: "Bob".into(), role: "editor".into() },
];
// LlmResponse automatically serializes to JSON or TOON
LlmResponse(users)
}
#[tokio::main]
async fn main() {
let app = RustApi::new().route("/users", get(get_users));
println!("Server running on http://127.0.0.1:3000");
app.run("127.0.0.1:3000").await.unwrap();
}
Testing
Standard Browser / Client:
curl http://localhost:3000/users
# Returns: [{"id":1,"username":"Alice",...}]
AI Agent / LLM:
curl -H "Accept: application/toon" http://localhost:3000/users
# Returns:
# users[2]{id,username,role}:
# 1,Alice,admin
# 2,Bob,editor
Providing Context to AI
When building an MCP (Model Context Protocol) server or simply feeding data to an LLM, use the TOON format to maximize the context window.
// Example: Generating a prompt with TOON data
let data = get_system_status().await;
let toon_string = rustapi_toon::to_string(&data).unwrap();
let prompt = format!(
"Analyze the following system status and report anomalies:\n\n{}",
toon_string
);
// Send `prompt` to OpenAI API...
Production Tuning
Problem: Your API needs to handle extreme load (10k+ requests per second).
Solution
1. Release Profile
Ensure Cargo.toml has optimal settings:
[profile.release]
lto = "fat"
codegen-units = 1
panic = "abort"
strip = true
2. Runtime Config
Configure the Tokio runtime for high throughput in main.rs:
fn main() {
    // `#[tokio::main(worker_threads = N)]` only accepts a literal integer, so
    // use the runtime builder to size the pool from the CPU count at startup.
    tokio::runtime::Builder::new_multi_thread()
        .worker_threads(num_cpus::get())
        .enable_all()
        .build()
        .unwrap()
        .block_on(async {
            // ...
        });
}
3. File Descriptors (Linux)
Increase the limit before running:
ulimit -n 100000
Discussion
RustAPI is fast by default, but with default OS settings the operating system often becomes the bottleneck. `panic = "abort"` reduces binary size and slightly improves performance by removing unwinding tables.
Response Compression
RustAPI supports automatic response compression (Gzip, Deflate, Brotli) via the CompressionLayer. This middleware negotiates the best compression algorithm based on the client’s Accept-Encoding header.
Dependencies
To use compression, you must enable the compression feature in rustapi-core (or rustapi-rs). For Brotli support, enable compression-brotli.
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["compression", "compression-brotli"] }
Basic Usage
The simplest way to enable compression is to add the layer to your application:
use rustapi_rs::prelude::*;
use rustapi_core::middleware::CompressionLayer;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
RustApi::new()
.layer(CompressionLayer::new())
.route("/", get(hello))
.run("127.0.0.1:8080")
.await
}
async fn hello() -> &'static str {
"Hello, World! This response will be compressed if the client supports it."
}
Configuration
You can customize the compression behavior using CompressionConfig:
use rustapi_rs::prelude::*;
use rustapi_core::middleware::{CompressionLayer, CompressionConfig};
#[tokio::main]
async fn main() -> Result<()> {
let config = CompressionConfig::new()
.min_size(1024) // Only compress responses larger than 1KB
.level(6) // Compression level (0-9)
.gzip(true) // Enable Gzip
.deflate(false) // Disable Deflate
.brotli(true) // Enable Brotli (if feature enabled)
.add_content_type("application/custom-json"); // Add custom type
RustApi::new()
.layer(CompressionLayer::with_config(config))
.route("/data", get(get_large_data))
.run("127.0.0.1:8080")
.await
}
Default Configuration
By default, CompressionLayer is configured with:
- `min_size`: 1024 bytes (1KB)
- `level`: 6
- `gzip`: enabled
- `deflate`: enabled
- `brotli`: enabled (if feature is present)
- `content_types`: `text/*`, `application/json`, `application/javascript`, `application/xml`, `image/svg+xml`
Best Practices
1. Don’t Compress Already Compressed Data
Images (JPEG, PNG), Videos, and Archives (ZIP) are already compressed. Compressing them again wastes CPU cycles and might even increase the file size. The default configuration excludes most binary formats, but be careful with custom types.
2. Set Minimum Size
Compressing very small responses (e.g., “OK”) can actually make them larger due to framing overhead. The default 1KB threshold is a good starting point.
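Both gating rules, skipping already-compressed content types and skipping tiny bodies, can be sketched as a single predicate. The allow-list mirrors the default `content_types` above; `should_compress` is an illustrative helper, not the layer's actual API:

```rust
// Sketch of the compression gate: only compressible content types at or above
// the minimum size are worth the CPU cost.
fn should_compress(content_type: &str, body_len: usize, min_size: usize) -> bool {
    let compressible = content_type.starts_with("text/")
        || content_type == "application/json"
        || content_type == "application/javascript"
        || content_type == "application/xml"
        || content_type == "image/svg+xml";
    compressible && body_len >= min_size
}

fn main() {
    assert!(should_compress("application/json", 4096, 1024));
    assert!(!should_compress("application/json", 10, 1024)); // too small
    assert!(!should_compress("image/png", 100_000, 1024));   // already compressed
}
```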
3. Order of Middleware
Compression should usually be one of the last layers added (outermost), so it compresses the final response after other middleware (like logging or headers) have run.
#![allow(unused)]
fn main() {
RustApi::new()
.layer(CompressionLayer::new()) // Runs last on response (first on request)
.layer(LoggingLayer::new()) // Runs before compression on response
}
Resilience Patterns
Building robust applications requires handling failures gracefully. RustAPI provides a suite of middleware to help your service survive partial outages, latency spikes, and transient errors.
These patterns are essential for the “Enterprise Platform” learning path and microservices architectures.
Prerequisites
Add the resilience features to your Cargo.toml. For example:
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["full"] }
# OR cherry-pick features
# rustapi-extras = { version = "0.1.335", features = ["circuit-breaker", "retry", "timeout"] }
Circuit Breaker
The Circuit Breaker pattern prevents your application from repeatedly trying to execute an operation that’s likely to fail. It gives the failing service time to recover.
How it works
- Closed: Requests flow normally.
- Open: After failure_threshold is reached, requests fail immediately with 503 Service Unavailable.
- Half-Open: After timeout passes, a limited number of test requests are allowed. If they succeed, the circuit closes.
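The three states can be sketched as a small state machine. This is a pure-Rust illustration of the behavior described above, not the middleware's actual internals; the type and field names are hypothetical:

```rust
use std::time::{Duration, Instant};

// Illustrative Closed / Open / Half-Open state machine.
enum State {
    Closed,
    Open { since: Instant },
    HalfOpen,
}

struct Breaker {
    state: State,
    failures: u32,
    successes: u32,
    failure_threshold: u32,
    success_threshold: u32,
    timeout: Duration,
}

impl Breaker {
    fn new(failure_threshold: u32, success_threshold: u32, timeout: Duration) -> Self {
        Self {
            state: State::Closed,
            failures: 0,
            successes: 0,
            failure_threshold,
            success_threshold,
            timeout,
        }
    }

    // Should this request be allowed through?
    fn allow(&mut self) -> bool {
        match self.state {
            State::Closed | State::HalfOpen => true,
            State::Open { since } if since.elapsed() >= self.timeout => {
                self.state = State::HalfOpen; // let test requests probe recovery
                true
            }
            State::Open { .. } => false, // fail fast (503)
        }
    }

    // Record the outcome of a request.
    fn record(&mut self, success: bool) {
        if success {
            if let State::HalfOpen = self.state {
                self.successes += 1;
                if self.successes >= self.success_threshold {
                    self.state = State::Closed; // recovered
                    self.failures = 0;
                }
            }
        } else {
            self.failures += 1;
            if self.failures >= self.failure_threshold {
                self.state = State::Open { since: Instant::now() };
                self.successes = 0;
            }
        }
    }
}

fn main() {
    let mut b = Breaker::new(5, 2, Duration::from_secs(30));
    for _ in 0..5 {
        b.record(false); // five failures open the circuit
    }
    assert!(!b.allow()); // requests now fail immediately
    println!("circuit open");
}
```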
Usage
use rustapi_rs::prelude::*;
use rustapi_extras::circuit_breaker::CircuitBreakerLayer;
use std::time::Duration;
fn main() {
let app = RustApi::new()
.layer(
CircuitBreakerLayer::new()
.failure_threshold(5) // Open after 5 failures
.timeout(Duration::from_secs(30)) // Wait 30s before retrying
.success_threshold(2) // Require 2 successes to close
)
.route("/", get(handler));
// ... run app
}
Retry with Backoff
Transient failures (network blips, temporary timeouts) can often be resolved by simply retrying the request. The RetryLayer handles this automatically with configurable backoff strategies.
Strategies
- Exponential: base * 2^attempt (recommended for most cases)
- Linear: base * attempt
- Fixed: constant delay
Usage
use rustapi_rs::prelude::*;
use rustapi_extras::retry::{RetryLayer, RetryStrategy};
use std::time::Duration;
fn main() {
let app = RustApi::new()
.layer(
RetryLayer::new()
.max_attempts(3)
.initial_backoff(Duration::from_millis(100))
.max_backoff(Duration::from_secs(5))
.strategy(RetryStrategy::Exponential)
.retryable_statuses(vec![500, 502, 503, 504, 429])
)
.route("/", get(handler));
// ... run app
}
Warning: Be careful when combining retries with non-idempotent operations (like POST requests that charge a credit card). The middleware safely handles cloning requests, but your business logic must tolerate the request being executed more than once.
Timeouts
Never let a request hang indefinitely. The TimeoutLayer enforces a hard limit on request duration, returning 408 Request Timeout if exceeded.
Usage
use rustapi_rs::prelude::*;
use rustapi_extras::timeout::TimeoutLayer;
use std::time::Duration;
fn main() {
let app = RustApi::new()
// Fail if handler takes longer than 5 seconds
.layer(TimeoutLayer::from_secs(5))
.route("/", get(slow_handler));
// ... run app
}
Combining Layers (The Resilience Stack)
Order matters! Timeout should be the “outermost” constraint, followed by Circuit Breaker, then Retry.
In RustAPI (Tower) middleware, layers wrap around each other. The order you call .layer() wraps the previous service.
Recommended Order:
- Retry (Inner): Retries specific failures from the handler.
- Circuit Breaker (Middle): Stops retrying if the system is overloaded.
- Timeout (Outer): Enforces global time limit including all retries.
#![allow(unused)]
fn main() {
let app = RustApi::new()
// 1. Retry (handles transient errors)
.layer(RetryLayer::new())
// 2. Circuit Breaker (protects upstream)
.layer(CircuitBreakerLayer::new())
// 3. Timeout (applies to the whole operation)
.layer(TimeoutLayer::from_secs(10))
.route("/", get(handler));
}
Observability
Production services need more than “logs exist somewhere”. A healthy RustAPI observability setup should let you answer three questions quickly:
- What failed?
- Which request or trace did it belong to?
- Is this isolated or systemic?
This recipe shows a pragmatic observability stack using:
- production_defaults(...) for request IDs and request tracing,
- OtelLayer for distributed traces,
- StructuredLoggingLayer for machine-readable logs, and
- InsightLayer for in-process traffic analytics.
Prerequisites
Enable the relevant features:
[dependencies]
rustapi-rs = { version = "0.1.335", features = [
"core",
"extras-otel",
"extras-structured-logging",
"extras-insight"
] }
Basic Usage
use rustapi_rs::prelude::*;
use rustapi_rs::extras::{
insight::{InsightConfig, InsightLayer},
otel::{OtelConfig, OtelLayer},
structured_logging::{LogOutputFormat, StructuredLoggingConfig, StructuredLoggingLayer},
};
#[rustapi_rs::get("/")]
async fn hello() -> &'static str {
"hello"
}
#[rustapi_rs::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
let environment = std::env::var("RUSTAPI_ENV")
.unwrap_or_else(|_| "development".to_string());
RustApi::auto()
.production_defaults("billing-api")
.layer(OtelLayer::new(
OtelConfig::builder()
.service_name("billing-api")
.service_version(env!("CARGO_PKG_VERSION"))
.deployment_environment(environment.clone())
.endpoint("http://otel-collector:4317")
.exclude_paths(vec![
"/health".to_string(),
"/ready".to_string(),
"/live".to_string(),
])
.build(),
))
.layer(StructuredLoggingLayer::new(
StructuredLoggingConfig::builder()
.format(LogOutputFormat::Json)
.service_name("billing-api")
.service_version(env!("CARGO_PKG_VERSION"))
.environment(environment)
.correlation_id_header("x-request-id")
.exclude_paths(vec![
"/health".to_string(),
"/ready".to_string(),
"/live".to_string(),
])
.build(),
))
.layer(InsightLayer::with_config(
InsightConfig::new()
.sample_rate(0.20)
.skip_paths(["/health", "/ready", "/live"])
.header_whitelist(["content-type", "user-agent", "x-request-id"])
.response_header_whitelist(["content-type", "x-request-id"])
.dashboard_path(Some("/admin/insights"))
.stats_path(Some("/admin/insights/stats")),
))
.run("0.0.0.0:8080")
.await
}
The recommended “golden config”
For most APIs, the following defaults work well:
1. Request correlation everywhere
Use the production preset so every request already carries a request ID and tracing span. This gives you a stable correlation key before you add any external observability backend.
2. JSON logs in production
Prefer StructuredLoggingLayer with:
- LogOutputFormat::Json
- service_name
- service_version
- environment
- correlation_id_header("x-request-id")
That makes it easy to join app logs with request IDs emitted by the built-in preset.
3. OTel for distributed traces
Use OtelLayer when your service participates in a larger system. Set:
- service name,
- service version,
- deployment environment,
- collector endpoint,
- excluded probe paths.
4. Insight for local traffic intelligence
InsightLayer is useful for:
- endpoint hot spots,
- latency outliers,
- lightweight internal dashboards,
- short-term debugging without a full external analytics platform.
Use sampling in production and keep the dashboard on a private/admin route.
What each layer is responsible for
| Layer | Purpose |
|---|---|
| TracingLayer (via production preset) | Request-scoped tracing spans with service metadata |
| OtelLayer | Distributed trace export and propagation |
| StructuredLoggingLayer | Machine-readable application/request logs |
| InsightLayer | In-process request analytics and dashboards |
These tools complement each other rather than replace each other.
Noise control
Probe routes can dominate dashboards and logs in busy clusters. A good default is to exclude /health, /ready, and /live from:
- OTel export,
- structured logs, and
- insight capture.
If you need probe telemetry for a specific incident, re-enable it deliberately rather than keeping it on all the time.
Sensitive data guidance
- Leave request/response body capture off unless debugging requires it.
- Whitelist only the headers you actually need.
- Keep authorization, cookie, and API-key style headers redacted.
- Treat admin insight endpoints as internal surfaces.
Operational tips
- Include env!("CARGO_PKG_VERSION") in logs and traces.
- Make dashboards searchable by x-request-id, trace_id, and error_id.
- Keep observability config close to your app bootstrap, not hidden in scattered helpers.
- Validate the full path with one real request before rollout:
  - the response has X-Request-ID,
  - logs include the correlation ID,
  - traces reach the collector,
  - the insight dashboard records traffic if enabled.
Related guides
- Recommended Production Baseline
- Production Checklist
- Adaptive Execution Debug Plan
- Graceful Shutdown
Middleware Debugging
Middleware bugs are rarely glamorous. They usually look like:
- a handler never running,
- a missing x-request-id,
- tracing spans without correlation,
- an extractor failing because middleware never inserted the expected extension,
- a response being transformed by the “wrong” layer.
This guide focuses on debugging the middleware you already have in your stack.
Problem
Middleware wraps handlers from the outside in, so when something goes wrong the visible symptom is often far away from the actual cause.
Solution
Start with a minimal, observable stack and verify one layer at a time.
Understand execution order first
RustAPI executes layers in the order they are added:
- the first .layer(...) sees the request first,
- the last .layer(...) sees the response first on the way back out.
use rustapi_rs::prelude::*;
#[rustapi_rs::get("/")]
async fn index() -> &'static str {
"ok"
}
#[rustapi_rs::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
RustApi::auto()
.layer(RequestIdLayer::new())
.layer(
TracingLayer::new()
.with_field("service", "debug-demo")
.with_field("environment", "development"),
)
.run("127.0.0.1:8080")
.await
}
For the request path, the order is:
- RequestIdLayer
- TracingLayer
- handler
For the response path, it unwinds in reverse.
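The onion-style flow can be demonstrated with plain closures standing in for layers. This is a conceptual sketch, not RustAPI code — each "layer" wraps the next handler, so the request travels outer to inner and the response unwinds inner to outer:

```rust
// Conceptual sketch of middleware onion ordering with plain closures.
fn through_stack(req: &str) -> String {
    let handler = |req: String| format!("{req} | handled");
    let tracing = |req: String| {
        let resp = handler(format!("{req} > tracing"));
        format!("{resp} < tracing")
    };
    let request_id = |req: String| {
        let resp = tracing(format!("{req} > request_id"));
        format!("{resp} < request_id")
    };
    request_id(req.to_string())
}

fn main() {
    // The outermost layer sees the request first and the response last:
    println!("{}", through_stack("GET /"));
    // prints: GET / > request_id > tracing | handled < tracing < request_id
}
```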
A practical debugging workflow
1. Verify request correlation
Start by confirming RequestIdLayer is active.
curl -i http://127.0.0.1:8080/
If the response does not include x-request-id, either:
- RequestIdLayer is missing,
- the request never reached that layer, or
- another layer or proxy is mutating headers unexpectedly.
2. Verify tracing sees the request ID
TracingLayer reads the request ID from request extensions. If it runs without RequestIdLayer, the span records request_id = "unknown".
That makes the pairing easy to diagnose:
- x-request-id present + trace has request ID → good
- no x-request-id + trace shows unknown → missing request ID layer
3. Reduce the stack
If a handler is not reached, strip the app down to the smallest reproducer:
#![allow(unused)]
fn main() {
RustApi::new()
.layer(RequestIdLayer::new())
.route("/", get(index));
}
Then add layers back one by one until the failure returns. It is boring, but boring debugging is usually the fastest debugging.
4. Watch for short-circuiting
Some middleware returns a response early and never calls downstream layers or the handler. Common examples include:
- auth failures,
- timeout layers,
- CORS preflight handling,
- rate limits,
- custom guards.
If a request fails before the handler runs, suspect an outer layer first.
Common failure modes
RequestId extractor fails inside a handler
Symptom:
- handler returns an internal error saying the request ID was not found.
Likely cause:
- RequestIdLayer was not added.
Extension<T> extractor fails
Symptom:
- handler says an extension was not found.
Likely cause:
- the middleware that should insert that extension never ran,
- it short-circuited before insertion,
- or the inserted type does not match the extracted type exactly.
Logs exist but are hard to correlate
Add RequestIdLayer and keep TracingLayer close to the edge of the stack so every request has a stable identifier early.
Response looks modified “too late”
Remember response processing unwinds in reverse. The last layer added has the first chance to modify the outgoing response.
Built-in tools that help
Status page
The built-in status page helps answer whether traffic is reaching the service and which endpoints are hot.
#![allow(unused)]
fn main() {
RustApi::auto().status_page();
}
Observability stack
If the issue spans multiple services, combine:
- RequestIdLayer
- TracingLayer
- OtelLayer
- StructuredLoggingLayer
See the Observability recipe for the recommended baseline.
TestClient
For reproducible debugging, build a small app and exercise it with the in-memory test client. That way you can inspect middleware behavior without involving a real network hop.
Debug checklist
- Does the response include x-request-id?
- Does tracing log the same request ID instead of unknown?
- Is the handler actually being reached?
- Could an outer middleware be short-circuiting?
- Is layer order what you think it is?
- If using Extension<T>, does the inserted type exactly match the extracted type?
- Have you reproduced the issue with a minimal stack?
Graceful Shutdown
Graceful shutdown lets your API stop accepting new work, drain in-flight requests, and clean up resources before the process exits. In production, the missing piece is usually draining: marking the instance unready so upstream load balancers stop sending traffic before shutdown completes.
Problem
When you stop a server (for example with Ctrl+C or SIGTERM), you usually want all of the following:
- The process stops receiving new traffic.
- Existing requests are allowed to finish.
- Readiness flips to unhealthy during the drain window.
- Cleanup hooks run in a predictable order.
Solution
RustAPI provides run_with_shutdown(...), which accepts a future. When that future resolves, the server begins graceful shutdown. If you also wire readiness to shared state, you can make the instance report 503 during the drain window before the future returns.
Basic Example
use rustapi_rs::prelude::*;
use tokio::signal;
#[tokio::main]
async fn main() -> Result<()> {
let app = RustApi::new().route("/", get(hello));
let shutdown_signal = async {
signal::ctrl_c()
.await
.expect("failed to install CTRL+C handler");
};
println!("Server running... Press CTRL+C to stop.");
app.run_with_shutdown("127.0.0.1:3000", shutdown_signal).await?;
println!("Server stopped gracefully.");
Ok(())
}
async fn hello() -> &'static str {
tokio::time::sleep(std::time::Duration::from_secs(2)).await;
"Hello, World!"
}
Production Example with Draining
In orchestrated environments you usually want to:
- listen for SIGTERM as well as Ctrl+C,
- mark the instance as draining,
- wait for a short drain window, and only then
- let run_with_shutdown(...) finish the shutdown.
use rustapi_rs::prelude::*;
use std::sync::{
Arc,
atomic::{AtomicBool, Ordering},
};
use tokio::{
signal,
time::{sleep, Duration},
};
#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
let draining = Arc::new(AtomicBool::new(false));
let readiness_flag = draining.clone();
let health = HealthCheckBuilder::new(true)
.add_check("draining", move || {
let readiness_flag = readiness_flag.clone();
async move {
if readiness_flag.load(Ordering::SeqCst) {
HealthStatus::unhealthy("draining")
} else {
HealthStatus::healthy()
}
}
})
.build();
let app = RustApi::new()
.with_health_check(health)
.on_shutdown(|| async {
tracing::info!("shutdown cleanup finished");
})
.route("/", get(hello));
app.run_with_shutdown("0.0.0.0:3000", shutdown_signal(draining)).await?;
Ok(())
}
async fn shutdown_signal(draining: Arc<AtomicBool>) {
let ctrl_c = async {
signal::ctrl_c()
.await
.expect("failed to install Ctrl+C handler");
};
#[cfg(unix)]
let terminate = async {
signal::unix::signal(signal::unix::SignalKind::terminate())
.expect("failed to install signal handler")
.recv()
.await;
};
#[cfg(not(unix))]
let terminate = std::future::pending::<()>();
tokio::select! {
_ = ctrl_c => println!("Received Ctrl+C"),
_ = terminate => println!("Received SIGTERM"),
}
draining.store(true, Ordering::SeqCst);
sleep(Duration::from_secs(15)).await;
}
async fn hello() -> &'static str {
sleep(Duration::from_secs(2)).await;
"Hello, World!"
}
Discussion
- Active requests: RustAPI waits for in-flight requests to complete as shutdown proceeds.
- Drain window: The sleep inside shutdown_signal(...) gives your ingress or load balancer time to observe the readiness failure and stop sending new traffic.
- Readiness semantics: By wiring readiness to shared state, /ready can return 503 Service Unavailable while /live still reports that the process is alive.
- Cleanup hooks: on_shutdown(...) hooks are executed after the shutdown signal future resolves, making them a good place for final flush/cleanup work.
- Detached tasks: tokio::spawn tasks are still detached. For critical work, coordinate them explicitly or move the work into a durable queue such as rustapi-jobs.
- Forceful shutdown: If your platform requires a hard upper bound, combine this approach with a platform-level termination grace period and an application-level timeout policy.
Recommended production pattern
For most deployments:
- Receive SIGTERM.
- Mark the instance as draining.
- Let readiness fail.
- Wait 10–30 seconds, depending on your proxy and traffic pattern.
- Allow graceful shutdown to complete.
- Run shutdown hooks.
Pair this with the cookbook Deployment recipe and the docs Production Checklist.
Audit Logging & Compliance
In many enterprise applications, maintaining a detailed audit trail is crucial for security, compliance (GDPR, SOC2), and troubleshooting. RustAPI provides a comprehensive audit logging system in rustapi-extras.
This recipe covers how to create, log, and query audit events.
Prerequisites
Add rustapi-extras with the audit feature to your Cargo.toml.
[dependencies]
rustapi-extras = { version = "0.1.335", features = ["audit"] }
Core Concepts
The audit system is built around three main components:
- AuditEvent: Represents a single action performed by a user or system.
- AuditStore: Interface for persisting events (e.g., InMemoryAuditStore, FileAuditStore).
- ComplianceInfo: Additional metadata for regulatory requirements.
Basic Usage
Log a simple event when a user is created.
use rustapi_extras::audit::{AuditEvent, AuditAction, InMemoryAuditStore, AuditStore};
#[tokio::main]
async fn main() {
// Initialize the store (could be FileAuditStore for persistence)
let store = InMemoryAuditStore::new();
// Create an event
let event = AuditEvent::new(AuditAction::Create)
.resource("users", "user-123") // Resource type & ID
.actor("admin@example.com") // Who performed the action
.ip_address("192.168.1.1".parse().unwrap())
.success(true); // Outcome
// Log it asynchronously
store.log(event);
// ... later, query events
let recent_logs = store.query().limit(10).execute().await;
println!("Recent logs: {:?}", recent_logs);
}
Compliance Features (GDPR & SOC2)
RustAPI’s audit system includes dedicated fields for compliance tracking.
GDPR Relevance
Events involving personal data can be flagged with legal basis and retention policies.
#![allow(unused)]
fn main() {
use rustapi_extras::audit::{ComplianceInfo, AuditEvent, AuditAction};
let compliance = ComplianceInfo::new()
.personal_data(true) // Involves PII
.data_subject("user-123") // The person the data belongs to
.legal_basis("consent") // Article 6 basis
.retention("30_days"); // Retention policy
let event = AuditEvent::new(AuditAction::Update)
.compliance(compliance)
.resource("profile", "user-123");
}
SOC2 Controls
Link events to specific security controls.
#![allow(unused)]
fn main() {
let compliance = ComplianceInfo::new()
.soc2_control("CC6.1"); // Access Control
let event = AuditEvent::new(AuditAction::Login)
.compliance(compliance)
.actor("employee@company.com");
}
Tracking Changes
For updates, it’s often useful to record what changed.
#![allow(unused)]
fn main() {
use rustapi_extras::audit::AuditChanges;
let changes = AuditChanges::new()
.field("email", "old@example.com", "new@example.com")
.field("role", "user", "admin");
let event = AuditEvent::new(AuditAction::Update)
.changes(changes)
.resource("users", "user-123");
}
Best Practices
- Log All Security Events: Logins (success/failure), permission changes, and API key management should always be audited.
- Include Context: Add request_id or session_id to correlate logs with tracing data.
- Use Asynchronous Logging: The AuditStore is designed to be non-blocking. Use it in a background task or tokio::spawn if needed for heavy writes.
- Secure the Logs: Ensure that the storage backend (file, database) is protected from tampering.
Replay workflow: time-travel debugging
Record HTTP request/response pairs in a controlled environment, inspect a captured request, replay it against another target, and diff the result before promoting a fix.
Security notice: Replay is intended for development, staging, canary, and incident-response environments. Do not expose the admin endpoints publicly on the open internet.
When to use it
Replay is most useful when:
- behavior differs between staging and local
- you need to reproduce a regression using a real traffic sample
- you want to rerun critical requests before promoting a new version to canary
- you are asking, “why did this request work yesterday but break today?” and want a time-machine-style answer
Prerequisites
Enable the canonical replay feature in your application:
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["extras-replay"] }
On the CLI side, cargo-rustapi is enough; replay commands are part of the default installation:
cargo install cargo-rustapi
1) Enable replay recording
For the smallest practical setup, start with an in-memory store:
use rustapi_rs::extras::replay::{InMemoryReplayStore, ReplayConfig, ReplayLayer};
use rustapi_rs::prelude::*;
#[rustapi_rs::get("/api/users")]
async fn list_users() -> Json<Vec<&'static str>> {
Json(vec!["Alice", "Bob"])
}
#[rustapi_rs::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
let replay = ReplayLayer::new(
ReplayConfig::new()
.enabled(true)
.admin_token("local-replay-token")
.ttl_secs(900)
.skip_path("/health")
.skip_path("/ready")
.skip_path("/live"),
)
.with_store(InMemoryReplayStore::new(200));
RustApi::auto()
.layer(replay)
.run("127.0.0.1:8080")
.await
}
This setup:
- enables replay recording
- protects the admin endpoints with a bearer token
- excludes probe endpoints from recording
- keeps entries for 15 minutes
- stores at most 200 records in memory
2) Generate target traffic
Now send requests to the application as usual. The replay middleware captures request/response pairs without changing your application code.
The recording flow looks like this:
- the request passes through
- request metadata and eligible body fields are stored
- response status, headers, and capturable body content are stored
- the record becomes accessible through the admin API and CLI
3) List recordings and find the right entry
For a first look, the CLI is the easiest path:
# List recent replay entries
cargo rustapi replay list -s http://localhost:8080 -t local-replay-token
# Filter to a specific endpoint only
cargo rustapi replay list -s http://localhost:8080 -t local-replay-token --method GET --path /api/users --limit 20
The list output shows these fields:
- replay ID
- HTTP method
- path
- original response status code
- total duration
4) Inspect a single entry
Once you find the suspicious request, open the full record:
cargo rustapi replay show <id> -s http://localhost:8080 -t local-replay-token
This command typically shows:
- the original request method and URI
- stored headers
- the captured request body
- the original response status/body
- metadata such as duration, client IP, and request ID
5) Replay the same request against another environment
You can now run the same request against your local fix, staging, or canary environment:
cargo rustapi replay run <id> -s http://localhost:8080 -t local-replay-token -T http://localhost:3000
Practical uses include:
- verifying that the local fix really resolves the incident
- checking whether staging still matches the previous production behavior
- replaying critical endpoints as a pre-deploy smoke test
6) Generate diffs automatically
This is where the real magic happens: compare the replayed response with the original response.
cargo rustapi replay diff <id> -s http://localhost:8080 -t local-replay-token -T http://staging:8080
The diff output looks for differences in:
- status code
- response headers
- JSON body fields
That lets you catch subtler regressions too, such as “it still returned 200, but the payload changed.”
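The idea behind catching such regressions can be illustrated with a toy field-by-field comparison. This is only a sketch of the concept — the real diff engine compares status codes, headers, and nested JSON structures:

```rust
// Toy diff: compare flattened response fields of an original and a
// replayed response, reporting changed or removed values.
fn diff(original: &[(&str, &str)], replayed: &[(&str, &str)]) -> Vec<String> {
    let mut changes = Vec::new();
    for (key, old) in original {
        match replayed.iter().find(|(k, _)| k == key) {
            Some((_, new)) if new != old => changes.push(format!("{key}: {old} -> {new}")),
            None => changes.push(format!("{key}: removed")),
            _ => {}
        }
    }
    changes
}

fn main() {
    let original = [("status", "200"), ("body.total", "42")];
    let replayed = [("status", "200"), ("body.total", "41")];
    // Status is unchanged, but the payload regressed:
    println!("{:?}", diff(&original, &replayed));
    // prints: ["body.total: 42 -> 41"]
}
```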
Recommended workflow
During an incident or regression, the recommended flow is:
- Start recording: enable replay in staging/canary with a short TTL.
- Capture the example: reproduce the real request that triggers the problem.
- List: find the right entry with cargo rustapi replay list.
- Inspect: validate the request/response pair with cargo rustapi replay show.
- Try the fix: rerun the entry against your local build or release candidate with run.
- Diff it: use diff to confirm the behavior changed as expected.
- Turn it off: disable replay recording after the incident or keep the TTL short.
In short: capture → inspect → replay → diff → promote.
Admin API reference
All admin endpoints require this header:
Authorization: Bearer <admin_token>
| Method | Path | Description |
|---|---|---|
| GET | /__rustapi/replays | List recordings |
| GET | /__rustapi/replays/{id} | Show a single entry |
| POST | /__rustapi/replays/{id}/run?target=URL | Replay the request against another target |
| POST | /__rustapi/replays/{id}/diff?target=URL | Replay the request and generate a diff |
| DELETE | /__rustapi/replays/{id} | Delete an entry |
cURL examples
curl -H "Authorization: Bearer local-replay-token" \
"http://localhost:8080/__rustapi/replays?limit=10"
curl -H "Authorization: Bearer local-replay-token" \
"http://localhost:8080/__rustapi/replays/<id>"
curl -X POST -H "Authorization: Bearer local-replay-token" \
"http://localhost:8080/__rustapi/replays/<id>/run?target=http://staging:8080"
curl -X POST -H "Authorization: Bearer local-replay-token" \
"http://localhost:8080/__rustapi/replays/<id>/diff?target=http://staging:8080"
Configuration notes
These are the ReplayConfig options you will adjust most often:
use rustapi_rs::extras::replay::ReplayConfig;
let config = ReplayConfig::new()
.enabled(true)
.admin_token("local-replay-token")
.store_capacity(1_000)
.ttl_secs(7_200)
.sample_rate(0.5)
.max_request_body(131_072)
.max_response_body(524_288)
.record_path("/api/orders")
.record_path("/api/users")
.skip_path("/health")
.skip_path("/metrics")
.redact_header("x-custom-secret")
.redact_body_field("password")
.redact_body_field("credit_card")
.admin_route_prefix("/__admin/replays");
By default, these headers are stored as [REDACTED]:
- authorization
- cookie
- x-api-key
- x-auth-token
JSON body redaction works recursively; for example, a password field is masked even inside nested objects.
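Conceptually, recursive redaction walks the JSON tree and masks matching keys at any depth. The toy sketch below uses a hand-rolled value type purely for illustration; the real implementation operates on actual JSON:

```rust
// Toy JSON value type to illustrate recursive field redaction.
#[derive(Debug, PartialEq)]
enum Json {
    Str(String),
    Obj(Vec<(String, Json)>),
}

// Mask any field whose key is in `fields`, at any nesting depth.
fn redact(value: &mut Json, fields: &[&str]) {
    if let Json::Obj(pairs) = value {
        for (key, val) in pairs.iter_mut() {
            if fields.contains(&key.as_str()) {
                *val = Json::Str("[REDACTED]".to_string());
            } else {
                redact(val, fields);
            }
        }
    }
}

fn main() {
    let mut body = Json::Obj(vec![(
        "user".to_string(),
        Json::Obj(vec![("password".to_string(), Json::Str("hunter2".to_string()))]),
    )]);
    redact(&mut body, &["password"]);
    // The nested password is masked even though it is not top-level.
    println!("{body:?}");
}
```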
Filesystem store for persistent retention
If you want the records to survive a developer-machine restart, use the filesystem store:
use rustapi_rs::extras::replay::{
FsReplayStore, FsReplayStoreConfig, ReplayConfig, ReplayLayer,
};
let config = ReplayConfig::new()
.enabled(true)
.admin_token("local-replay-token");
let fs_store = FsReplayStore::new(FsReplayStoreConfig {
directory: "./replay-data".into(),
max_file_size: Some(10 * 1024 * 1024),
create_if_missing: true,
});
let replay = ReplayLayer::new(config).with_store(fs_store);
If you want to write a custom backend
If you want to use Redis, object storage, or an enterprise audit backend, implement the ReplayStore trait:
use async_trait::async_trait;
use rustapi_rs::extras::replay::{
ReplayEntry, ReplayQuery, ReplayStore, ReplayStoreResult,
};
#[derive(Clone)]
struct MyCustomStore;
#[async_trait]
impl ReplayStore for MyCustomStore {
async fn store(&self, entry: ReplayEntry) -> ReplayStoreResult<()> {
let _ = entry;
Ok(())
}
async fn get(&self, id: &str) -> ReplayStoreResult<Option<ReplayEntry>> {
let _ = id;
Ok(None)
}
async fn list(&self, query: &ReplayQuery) -> ReplayStoreResult<Vec<ReplayEntry>> {
let _ = query;
Ok(vec![])
}
async fn delete(&self, id: &str) -> ReplayStoreResult<bool> {
let _ = id;
Ok(false)
}
async fn count(&self) -> ReplayStoreResult<usize> {
Ok(0)
}
async fn clear(&self) -> ReplayStoreResult<()> {
Ok(())
}
async fn delete_before(&self, timestamp_ms: u64) -> ReplayStoreResult<usize> {
let _ = timestamp_ms;
Ok(0)
}
fn clone_store(&self) -> Box<dyn ReplayStore> {
Box::new(self.clone())
}
}
Verification checklist
After setting up replay, run this short check:
- send a request to the application
- use cargo rustapi replay list -t &lt;token&gt; to confirm the entry appears
- use cargo rustapi replay show &lt;id&gt; -t &lt;token&gt; to verify the stored body/header data
- use cargo rustapi replay diff &lt;id&gt; -t &lt;token&gt; -T &lt;target&gt; to compare the results
If these four steps succeed, the workflow is ready.
Security summary
The replay system includes several safeguards:
- Disabled by default: it starts with enabled(false).
- Admin token required: admin endpoints require a bearer token.
- Header redaction: sensitive headers are masked.
- Body field redaction: JSON fields can be selectively masked.
- TTL enforced: old records are cleaned up automatically.
- Body size limits: request/response capture is size-limited.
- Bounded storage: the in-memory store is limited with FIFO eviction.
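The bounded in-memory store behaves like a fixed-capacity FIFO queue. The sketch below illustrates that eviction policy (names are hypothetical, not the real store type):

```rust
use std::collections::VecDeque;

// Sketch of a capacity-bounded store with FIFO eviction, like the
// in-memory replay store: when full, the oldest record is dropped first.
struct BoundedStore<T> {
    capacity: usize,
    entries: VecDeque<T>,
}

impl<T> BoundedStore<T> {
    fn new(capacity: usize) -> Self {
        Self {
            capacity,
            entries: VecDeque::with_capacity(capacity),
        }
    }

    fn push(&mut self, entry: T) {
        if self.entries.len() == self.capacity {
            self.entries.pop_front(); // evict the oldest record first
        }
        self.entries.push_back(entry);
    }
}

fn main() {
    let mut store = BoundedStore::new(2);
    store.push("req-1");
    store.push("req-2");
    store.push("req-3"); // evicts req-1
    println!("{:?}", store.entries);
    // prints: ["req-2", "req-3"]
}
```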
Recommendations:
- do not enable replay behind a publicly exposed production ingress
- use a short TTL
- add application-specific secret fields to the redaction list
- monitor memory usage if you use a large-capacity in-memory store
- consider turning replay recording off after the incident
Deployment
RustAPI includes built-in deployment tooling to help you ship applications, but production deployment is more than generating a config file. This guide covers both the CLI-assisted setup and the operational recommendations for health, readiness, liveness, and rollout safety.
Supported Platforms
- Docker: Generate a production-ready Dockerfile.
- Fly.io: Generate fly.toml and deploy instructions.
- Railway: Generate railway.toml and project setup.
- Shuttle.rs: Generate Shuttle.toml and setup instructions.
Usage
Docker
Generate a Dockerfile optimized for RustAPI applications:
cargo rustapi deploy docker
Options:
- --output &lt;path&gt;: Output path (default: ./Dockerfile)
- --rust-version &lt;ver&gt;: Rust version (default: 1.78)
- --port &lt;port&gt;: Port to expose (default: 8080)
- --binary &lt;name&gt;: Binary name (default: package name)
Fly.io
Prepare your application for Fly.io:
cargo rustapi deploy fly
Options:
- --app &lt;name&gt;: Application name
- --region &lt;region&gt;: Fly.io region (default: iad)
- --init_only: Only generate config, don’t show deployment steps
Railway
Prepare your application for Railway:
cargo rustapi deploy railway
Options:
- --project &lt;name&gt;: Project name
- --environment &lt;env&gt;: Environment name (default: production)
Shuttle.rs
Prepare your application for Shuttle.rs serverless deployment:
cargo rustapi deploy shuttle
Options:
- --project &lt;name&gt;: Project name
- --init_only: Only generate config
Note: Shuttle.rs requires some code changes to use their runtime macro #[shuttle_runtime::main]. The deploy command generates the configuration, but you will need to adjust your main.rs to use their attributes if you are deploying to their platform.
Probe recommendations
RustAPI has first-class built-in probe endpoints:
- /health — aggregate service and dependency health
- /ready — readiness for load balancers and orchestrators
- /live — lightweight liveness probe
You can enable them via:
- .health_endpoints()
- .with_health_check(...)
- .production_defaults("service-name")
Recommended semantics
- Liveness should answer: “Is the process alive?”
- Readiness should answer: “Should this instance receive traffic right now?”
- Health should answer: “What is the aggregate state of the service and its dependencies?”
In practice:
- let /live stay lightweight,
- let /ready fail when critical dependencies fail,
- let /ready also fail during drain/shutdown windows,
- use /health for richer diagnostics and dashboards.
Kubernetes example
livenessProbe:
httpGet:
path: /live
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 2
periodSeconds: 5
startupProbe:
httpGet:
path: /live
port: 8080
failureThreshold: 30
periodSeconds: 2
If you customize the paths with HealthEndpointConfig, update the probe configuration to match.
Load balancer and ingress guidance
- Point traffic-routing health checks at `/ready`, not `/live`.
- Keep the drain window consistent with your termination grace period.
- Avoid routing public traffic to admin/debug surfaces such as `/status`, `/docs`, or `/admin/insights` unless they are intentionally protected.
- If auth middleware protects most routes, make sure probe routes remain reachable.
Minimal production bootstrap
use rustapi_rs::prelude::*;
#[rustapi_rs::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
RustApi::auto()
.production_defaults("users-api")
.run("0.0.0.0:8080")
.await
}
If you need dependency-aware readiness, supply your own HealthCheck:
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
let health = HealthCheckBuilder::new(true)
.add_check("database", || async {
HealthStatus::healthy()
})
.build();
let app = RustApi::new().with_health_check(health);
}
Rollout checklist
Before sending real traffic:
- `GET /live` returns `200`.
- `GET /ready` returns `200`.
- `GET /health` shows the expected dependency state.
- At least one business endpoint succeeds.
- Logs and traces contain request IDs and service metadata.
For the full operational list, see Production Checklist.
HTTP/3 (QUIC) Support
RustAPI supports HTTP/3 (QUIC), the next generation of the HTTP protocol, providing lower latency, better performance over unstable networks, and improved security.
Enabling HTTP/3
HTTP/3 support is optional and can be enabled via feature flags in Cargo.toml.
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["http3"] }
# For development with self-signed certificates
rustapi-rs = { version = "0.1.335", features = ["http3", "http3-dev"] }
Running an HTTP/3 Server
Since HTTP/3 requires TLS (even for local development), RustAPI provides helpers to make this easy.
Development (Self-Signed Certs)
For local development, you can use run_http3_dev which automatically generates self-signed certificates.
use rustapi_rs::prelude::*;
#[rustapi_rs::get("/")]
async fn hello() -> &'static str {
"Hello from HTTP/3!"
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Requires "http3-dev" feature
RustApi::auto()
.run_http3_dev("127.0.0.1:8080")
.await
}
Production (QUIC)
For production, you should provide valid certificates.
use rustapi_rs::prelude::*;
use rustapi_core::http3::Http3Config;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let config = Http3Config::new("cert.pem", "key.pem");
RustApi::auto()
.run_http3(config)
.await
}
Dual Stack (HTTP/1.1 + HTTP/3)
You can serve both HTTP/1.1 and HTTP/3 on the same port (via Alt-Svc header promotion) or different ports.
use rustapi_rs::prelude::*;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Run HTTP/1.1 on port 8080 and HTTP/3 on port 4433 (or same port if supported)
RustApi::auto()
.run_dual_stack("127.0.0.1:8080")
.await
}
How It Works
HTTP/3 in RustAPI is built on top of quinn and h3. When enabled:
- UDP Binding: The server binds to a UDP socket (in addition to TCP if dual-stack).
- TLS: QUIC requires TLS 1.3. RustAPI handles the TLS configuration.
- Optimization: Responses are optimized for QUIC streams.
Testing
You can test HTTP/3 support using curl with HTTP/3 support:
curl --http3 -k https://localhost:8080/
Or using online tools like http3check.net.
gRPC Integration
RustAPI allows you to seamlessly integrate gRPC services alongside your HTTP API, running both on the same Tokio runtime or even the same port (with proper multiplexing, though separate ports are simpler). We use the rustapi-grpc crate, which provides helpers for Tonic.
Dependencies
Add the following to your Cargo.toml:
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["grpc"] }
tonic = "0.10"
prost = "0.12"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
[build-dependencies]
tonic-build = "0.10"
Defining the Service (Proto)
Create a proto/helloworld.proto file:
syntax = "proto3";
package helloworld;
service Greeter {
rpc SayHello (HelloRequest) returns (HelloReply);
}
message HelloRequest {
string name = 1;
}
message HelloReply {
string message = 1;
}
The Build Script
In build.rs:
fn main() -> Result<(), Box<dyn std::error::Error>> {
tonic_build::compile_protos("proto/helloworld.proto")?;
Ok(())
}
Implementation
Here is how to run both servers concurrently with shared shutdown.
use rustapi_rs::prelude::*;
use rustapi_rs::grpc::{run_rustapi_and_grpc_with_shutdown, tonic};
use tonic::{Request, Response, Status};
// Import generated proto code (simplified for example)
pub mod hello_world {
tonic::include_proto!("helloworld");
}
use hello_world::greeter_server::{Greeter, GreeterServer};
use hello_world::{HelloReply, HelloRequest};
// --- gRPC Implementation ---
#[derive(Default)]
pub struct MyGreeter {}
#[tonic::async_trait]
impl Greeter for MyGreeter {
async fn say_hello(
&self,
request: Request<HelloRequest>,
) -> Result<Response<HelloReply>, Status> {
let name = request.into_inner().name;
let reply = hello_world::HelloReply {
message: format!("Hello {} from gRPC!", name),
};
Ok(Response::new(reply))
}
}
// --- HTTP Implementation ---
#[rustapi_rs::get("/health")]
async fn health() -> Json<&'static str> {
Json("OK")
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// 1. Define HTTP App
let http_app = RustApi::new().route("/health", get(health));
let http_addr = "0.0.0.0:3000";
// 2. Define gRPC Service
let grpc_addr = "0.0.0.0:50051".parse()?;
let greeter = MyGreeter::default();
println!("HTTP listening on http://{}", http_addr);
println!("gRPC listening on grpc://{}", grpc_addr);
// 3. Run both with shared shutdown (Ctrl+C)
run_rustapi_and_grpc_with_shutdown(
http_app,
http_addr,
tokio::signal::ctrl_c(),
move |shutdown| {
tonic::transport::Server::builder()
.add_service(GreeterServer::new(greeter))
.serve_with_shutdown(grpc_addr, shutdown)
},
).await?;
Ok(())
}
How It Works
- Shared Runtime: Both servers run on the same Tokio runtime, sharing thread pool resources efficiently.
- Graceful Shutdown: When `Ctrl+C` is pressed, `run_rustapi_and_grpc_with_shutdown` signals both the HTTP server and the gRPC server to stop accepting new connections and finish pending requests.
- Simplicity: You don’t need to manually spawn tasks or manage channels for shutdown signals.
Advanced: Multiplexing
To run both HTTP and gRPC on the same port, you would typically use a library like tower to inspect the Content-Type header (application/grpc vs others) and route accordingly. However, running on separate ports (e.g., 8080 for HTTP, 50051 for gRPC) is standard practice in Kubernetes and most deployment environments.
Automatic Status Page
RustAPI comes with a built-in, zero-configuration status page that gives you instant visibility into your application’s health and performance.
Enabling the Status Page
To enable the status page, simply call .status_page() on your RustApi builder:
use rustapi_rs::prelude::*;
#[rustapi_rs::main]
async fn main() -> Result<()> {
RustApi::auto()
.status_page() // <--- Enable Status Page
.run("127.0.0.1:8080")
.await
}
By default, the status page is available at /status.
Full Example
Here is a complete, runnable example that demonstrates how to set up the status page and generate some traffic to see the metrics in action.
You can find this example in crates/rustapi-rs/examples/status_demo.rs.
use rustapi_rs::prelude::*;
use std::time::Duration;
use tokio::time::sleep;
/// A simple demo to showcase the RustAPI Status Page.
///
/// Run with: `cargo run -p rustapi-rs --example status_demo`
/// Then verify:
/// - Status Page: http://127.0.0.1:3000/status
/// - Generate Traffic: http://127.0.0.1:3000/api/fast
/// - Generate Errors: http://127.0.0.1:3000/api/slow
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// 1. Define some handlers to generate metrics
// A fast endpoint
async fn fast_handler() -> &'static str {
"Fast response!"
}
// A slow endpoint with random delay to show latency
async fn slow_handler() -> &'static str {
sleep(Duration::from_millis(500)).await;
"Slow response... sleepy..."
}
// An endpoint that sometimes fails
async fn flaky_handler() -> Result<&'static str, rustapi_rs::Response> {
use std::sync::atomic::{AtomicBool, Ordering};
static FAILURE: AtomicBool = AtomicBool::new(false);
// Toggle failure every call
let fail = FAILURE.fetch_xor(true, Ordering::Relaxed);
if !fail {
Ok("Success!")
} else {
Err(rustapi_rs::StatusCode::INTERNAL_SERVER_ERROR.into_response())
}
}
// 2. Build the app with status page enabled
println!("Starting Status Page Demo...");
println!(" -> Open http://127.0.0.1:3000/status to see the dashboard");
println!(" -> Visit http://127.0.0.1:3000/fast to generate traffic");
println!(" -> Visit http://127.0.0.1:3000/slow to generate latency");
println!(" -> Visit http://127.0.0.1:3000/flaky to generate errors");
RustApi::auto()
.status_page() // <--- Enable Status Page
.route("/fast", get(fast_handler))
.route("/slow", get(slow_handler))
.route("/flaky", get(flaky_handler))
.run("127.0.0.1:3000")
.await
}
Dashboard Overview
The status page provides a comprehensive real-time view of your system.
1. Global System Stats
At the top of the dashboard, you’ll see high-level metrics for the entire application:
- System Uptime: How long the server has been running.
- Total Requests: The aggregate number of requests served across all endpoints.
- Active Endpoints: The number of distinct routes that have received traffic.
- Auto-Refresh: The page automatically updates every 5 seconds, so you can keep it open on a second monitor.
2. Endpoint Metrics Grid
The main section is a detailed table showing granular performance data for every endpoint:
| Metric | Description |
|---|---|
| Endpoint | The path of the route (e.g., /api/users). |
| Requests | Total number of hits this specific route has received. |
| Success Rate | Visual indicator of health. 🟢 Green: ≥95% success 🔴 Red: <95% success |
| Avg Latency | The average time (in milliseconds) it takes to serve a request. |
| Last Access | Timestamp of the most recent request to this endpoint. |
3. Visual Design
The dashboard is built with a “zero-dependency” philosophy. It renders a single, self-contained HTML page directly from the binary.
- Modern UI: Clean, card-based layout using system fonts.
- Responsive: Adapts perfectly to mobile and desktop screens.
- Lightweight: No external CSS/JS files to manage or load.
Custom Configuration
If you need more control, you can customize the path and title of the status page:
use rustapi_rs::prelude::*;
use rustapi_rs::status::StatusConfig;
#[rustapi_rs::main]
async fn main() -> Result<()> {
// Configure the status page
let config = StatusConfig::new()
.path("/admin/health") // Change URL to /admin/health
.title("Production Node 1"); // Custom title for easy identification
RustApi::auto()
.status_page_with_config(config)
.run("127.0.0.1:8080")
.await
}
Troubleshooting: Common Gotchas
This guide covers frequently encountered issues that can be confusing when working with RustAPI. If you’re stuck on a cryptic error, chances are the solution is here.
1. Missing Schema Derive on Extractor Types
Symptom:
error[E0277]: the trait bound `...: Handler<_>` is not satisfied
Problem:
#![allow(unused)]
fn main() {
#[derive(Debug, Deserialize)]
pub struct ListParams {
pub page: Option<u32>,
}
}
Solution:
Add the Schema derive macro to any struct used with extractors (Query<T>, Path<T>, Json<T>):
#![allow(unused)]
fn main() {
#[derive(Debug, Deserialize, Schema)] // ✅ Schema added
pub struct ListParams {
pub page: Option<u32>,
}
}
Why?
- RustAPI generates OpenAPI documentation automatically
- All extractors require the `T: RustApiSchema` trait bound
- The `Schema` derive macro implements this trait for you
2. Don’t Add External OpenAPI Generators Directly
Wrong:
[dependencies]
utoipa = "4.2" # ❌ Don't add this
Correct:
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["full"] }
# rustapi-openapi is re-exported through rustapi-rs
Why?
- RustAPI has its own OpenAPI implementation (`rustapi-openapi`)
- External OpenAPI derive/macros are not part of RustAPI’s public API surface
- The `Schema` derive macro is already in `rustapi_rs::prelude::*`
3. Use rustapi_rs, Not Internal Crates
Symptom:
error[E0432]: unresolved import `rustapi_extras`
error[E0433]: failed to resolve: use of unresolved module `rustapi_core`
error[E0433]: failed to resolve: use of unresolved module `rustapi_macros`
Problem:
#![allow(unused)]
fn main() {
use rustapi_extras::SqlxErrorExt; // ❌ Old module name
use rustapi_core::RustApi; // ❌ Internal crate
use rustapi_macros::get; // ❌ Internal crate
}
Solution:
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*; // ✅ Everything you need
use rustapi_rs::SqlxErrorExt; // ✅ Correct path for extras
}
For macros:
#![allow(unused)]
fn main() {
// ❌ Wrong (doesn't work)
#[rustapi_macros::get("/")]
async fn index() -> &'static str { "Hello" }
// ✅ Correct
#[rustapi_rs::get("/")]
async fn index() -> &'static str { "Hello" }
}
Why?
- `rustapi_core`, `rustapi_macros`, and `rustapi_extras` are internal implementation crates
- All public APIs are re-exported through the `rustapi-rs` facade crate
- This follows the Facade Architecture pattern for API stability
4. Don’t Use IntoParams or #[param(...)]
Wrong:
#![allow(unused)]
fn main() {
#[derive(Debug, Deserialize, IntoParams)] // ❌ IntoParams is from utoipa
pub struct ListParams {
#[param(minimum = 1)] // ❌ This attribute doesn't exist
pub page: Option<u32>,
}
}
Correct:
#![allow(unused)]
fn main() {
#[derive(Debug, Deserialize, Schema)] // ✅ Use Schema
pub struct ListParams {
/// Page number (1-indexed) // ✅ Doc comments become OpenAPI descriptions
pub page: Option<u32>,
}
}
For validation, use RustAPI’s built-in system:
#![allow(unused)]
fn main() {
use rustapi_rs::prelude::*;
#[derive(Debug, Deserialize, Validate, Schema)]
pub struct CreateTask {
#[validate(length(min = 1, max = 200))]
pub title: String,
#[validate(email)]
pub email: String,
}
// Use ValidatedJson for automatic validation
async fn create_task(
ValidatedJson(task): ValidatedJson<CreateTask>
) -> Result<Json<Task>> {
// Validation runs automatically, returns 422 on failure
Ok(Json(task))
}
}
5. serde_json::Value Has No Schema
Symptom:
error: the trait `RustApiSchema` is not implemented for `serde_json::Value`
Problem:
#![allow(unused)]
fn main() {
async fn handler() -> Json<serde_json::Value> { // ❌ No schema
Json(json!({ "key": "value" }))
}
}
Solution - Use a typed struct (recommended):
#![allow(unused)]
fn main() {
#[derive(Serialize, Schema)]
struct MyResponse {
key: String,
}
async fn handler() -> Json<MyResponse> { // ✅ Type-safe
Json(MyResponse {
key: "value".to_string(),
})
}
}
Why?
- `serde_json::Value` doesn’t implement `RustApiSchema`
- The OpenAPI spec requires concrete types for documentation
- Type-safe structs catch errors at compile time
6. DateTime<Utc> Has No Schema
Symptom:
error[E0277]: the trait bound `DateTime<Utc>: RustApiSchema` is not satisfied
Problem:
#![allow(unused)]
fn main() {
#[derive(Debug, Serialize, Schema)]
pub struct BookmarkResponse {
pub id: u64,
pub created_at: DateTime<Utc>, // ❌ No RustApiSchema impl
}
}
Solution - Use String with RFC3339 format:
#![allow(unused)]
fn main() {
#[derive(Debug, Serialize, Schema)]
pub struct BookmarkResponse {
pub id: u64,
pub created_at: String, // ✅ Use String
}
impl From<&Bookmark> for BookmarkResponse {
fn from(b: &Bookmark) -> Self {
Self {
id: b.id,
created_at: b.created_at.to_rfc3339(), // DateTime -> String
}
}
}
}
Alternative - Unix timestamp:
#![allow(unused)]
fn main() {
#[derive(Debug, Serialize, Schema)]
pub struct BookmarkResponse {
pub created_at: i64, // Unix timestamp (seconds)
}
}
Best Practice:
- Use `DateTime<Utc>` in your internal domain models
- Use `String` (RFC3339) in response DTOs
- Convert using `From`/`Into` traits
7. Generic Types Need Schema Trait Bounds
Symptom:
error[E0277]: the trait bound `T: RustApiSchema` is not satisfied
Problem:
#![allow(unused)]
fn main() {
#[derive(Debug, Serialize, Schema)]
pub struct PaginatedResponse<T> { // ❌ Missing trait bound
pub items: Vec<T>,
pub total: usize,
}
}
Solution:
#![allow(unused)]
fn main() {
use rustapi_openapi::schema::RustApiSchema;
#[derive(Debug, Serialize, Schema)]
pub struct PaginatedResponse<T: RustApiSchema> { // ✅ Trait bound added
pub items: Vec<T>,
pub total: usize,
pub page: u32,
pub limit: u32,
}
}
Alternative - Type aliases for concrete types:
#![allow(unused)]
fn main() {
pub type BookmarkList = PaginatedResponse<BookmarkResponse>;
pub type CategoryList = PaginatedResponse<CategoryResponse>;
async fn list_bookmarks() -> Json<BookmarkList> {
// ...
}
}
8. impl IntoResponse Return Type Issues
Problem:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/")]
async fn handler() -> impl IntoResponse { // ❌ May cause Handler trait errors
Html("<h1>Hello</h1>")
}
}
Solution - Use concrete types:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/")]
async fn handler() -> Html<String> { // ✅ Concrete type
Html("<h1>Hello</h1>".to_string())
}
}
Common Response Types:
| Type | Use Case |
|---|---|
| `Html<String>` | HTML content |
| `Json<T>` | JSON response (`T` must impl `Schema`) |
| `String` | Plain text |
| `StatusCode` | Status code only |
| `(StatusCode, Json<T>)` | Status + JSON |
| `Result<T, ApiError>` | Fallible responses |
9. State Not Found at Runtime
Symptom:
panic: State not found in request extensions
Problem:
#![allow(unused)]
fn main() {
#[rustapi_rs::get("/users")]
async fn list_users(State(db): State<Database>) -> Json<Vec<User>> {
// ...
}
// main.rs
RustApi::auto()
// ❌ Forgot to add .state(...)
.run("0.0.0.0:8080")
.await
}
Solution:
#![allow(unused)]
fn main() {
RustApi::auto()
.state(database) // ✅ Add the state!
.run("0.0.0.0:8080")
.await
}
10. Extractor Order Matters
Rule: Body-consuming extractors (Json<T>, Body) must come last.
Wrong:
#![allow(unused)]
fn main() {
async fn handler(
Json(body): Json<CreateUser>, // ❌ Body extractor first
State(db): State<Database>,
) -> Result<Json<User>> { ... }
}
Correct:
#![allow(unused)]
fn main() {
async fn handler(
State(db): State<Database>, // ✅ Non-body extractors first
Query(params): Query<Params>,
Json(body): Json<CreateUser>, // ✅ Body extractor last
) -> Result<Json<User>> { ... }
}
Why?
- `State`, `Query`, and `Path` extract from request parts (headers, URL)
- `Json` and `Body` consume the request body (it can only be read once)
Quick Checklist: Adding a New Handler
- Add the `Schema` derive to all extractor structs (`Query<T>`, `Path<T>`, `Json<T>`)
- Add the `Schema` derive to response structs
- Use `#[rustapi_rs::get/post/...]` macros (not `rustapi_macros`)
- Add validation with the `Validate` derive if needed
- Register state with `.state(...)` on `RustApi`
- Put body extractors (`Json<T>`) last in the parameter list
- Run `cargo check` to verify
- Test in Swagger UI at `http://localhost:8080/docs`
The Golden Rules
- Add the `Schema` derive to any struct used with extractors or responses
- Don’t add external OpenAPI crates directly; `rustapi-openapi` is already included
- Import from `rustapi_rs` only; never use internal crates directly
- Use `RustApi::auto()` with handler macros for automatic route discovery
Follow these rules and you’ll have a smooth experience with RustAPI! 🚀
Learning & Examples
Welcome to the RustAPI learning resources! This section provides structured learning paths and links to comprehensive real-world examples to help you master the framework.
🎓 Structured Curriculum
New to RustAPI? Follow our step-by-step Structured Learning Path to go from beginner to production-ready.
📚 Learning Resources
Official Examples Repository
We maintain a comprehensive examples repository with 18 real-world projects demonstrating RustAPI’s full capabilities:
🔗 rustapi-rs-examples - Complete examples from hello-world to production microservices
Cookbook Internal Path
If you prefer reading through documentation first, follow this path through the cookbook:
- Foundations: Start with Handlers & Extractors and System Overview.
- Core Crates: Read about rustapi-core and rustapi-macros.
- Building Blocks: Try the Creating Resources recipe.
- Security: Implement JWT Authentication and CSRF Protection.
- Advanced: Explore Performance Tuning and HTTP/3.
- Background Jobs: Master rustapi-jobs for async processing.
Why Use the Examples Repository?
| Benefit | Description |
|---|---|
| Structured Learning | Progress from beginner → intermediate → advanced |
| Real-world Patterns | Production-ready implementations you can adapt |
| Feature Discovery | Find examples by the features you want to learn |
| AI-Friendly | Module-level docs help AI assistants understand your code |
🎯 Learning Paths
Choose a learning path based on your goals:
🚀 Path 1: REST API Developer
Build production-ready REST APIs with RustAPI.
| Step | Example | Skills Learned |
|---|---|---|
| 1 | hello-world | Basic routing, handlers, server setup |
| 2 | crud-api | CRUD operations, extractors, error handling |
| 3 | auth-api | JWT authentication, protected routes |
| 4 | middleware-chain | Custom middleware, logging, CORS |
| 5 | sqlx-crud | Database integration, async queries |
Related Cookbook Recipes:
🏗️ Path 2: Microservices Architect
Design and build distributed systems with RustAPI.
| Step | Example | Skills Learned |
|---|---|---|
| 1 | crud-api | Service fundamentals |
| 2 | middleware-chain | Cross-cutting concerns |
| 3 | rate-limit-demo | API protection, throttling |
| 4 | microservices | Service communication patterns |
| 5 | microservices-advanced | Service discovery, Consul integration |
| 6 | Service Mocking | Testing microservices with MockServer from rustapi-testing |
| 7 | Background jobs (conceptual) | Background processing with rustapi-jobs, Redis/Postgres backends |
Note: The Background jobs (conceptual) step refers to using the `rustapi-jobs` crate rather than a standalone example project.
Related Cookbook Recipes:
⚡ Path 3: Real-time Applications
Build interactive, real-time features with WebSockets.
| Step | Example | Skills Learned |
|---|---|---|
| 1 | hello-world | Framework basics |
| 2 | websocket | WebSocket connections, message handling |
| 3 | middleware-chain | Connection middleware |
| 4 | graphql-api | Subscriptions, real-time queries |
Related Cookbook Recipes:
🤖 Path 4: AI/LLM Integration
Build AI-friendly APIs with TOON format and MCP support.
| Step | Example | Skills Learned |
|---|---|---|
| 1 | crud-api | API fundamentals |
| 2 | toon-api | TOON format for LLM-friendly responses |
| 3 | mcp-server | Model Context Protocol implementation |
| 4 | proof-of-concept | Combining multiple AI features |
Related Cookbook Recipes:
🏢 Path 5: Enterprise Platform
Build robust, observable, and secure systems.
| Step | Feature | Description |
|---|---|---|
| 1 | Observability | Set up OpenTelemetry and Structured Logging |
| 2 | Resilience | Implement Circuit Breakers and Retries |
| 3 | Advanced Security | Add OAuth2 and Security Headers |
| 4 | Optimization | Configure Caching and Deduplication |
| 5 | Background Jobs | Implement Reliable Job Queues |
| 6 | Debugging | Set up Time-Travel Debugging |
| 7 | Reliable Testing | Master Mocking and Integration Testing |
Related Cookbook Recipes:
- rustapi-testing: The Auditor
- rustapi-extras: The Toolbox
- Time-Travel Debugging
- rustapi-jobs: The Workhorse
- Resilience Patterns
📦 Examples by Category
Getting Started
| Example | Description | Difficulty |
|---|---|---|
| `hello-world` | Minimal RustAPI server | ⭐ Beginner |
| `crud-api` | Complete CRUD operations | ⭐ Beginner |
Authentication & Security
| Example | Description | Difficulty |
|---|---|---|
| `auth-api` | JWT authentication flow | ⭐⭐ Intermediate |
| `middleware-chain` | Middleware composition | ⭐⭐ Intermediate |
| `rate-limit-demo` | API rate limiting | ⭐⭐ Intermediate |
Database Integration
| Example | Description | Difficulty |
|---|---|---|
| `sqlx-crud` | SQLx with PostgreSQL/SQLite | ⭐⭐ Intermediate |
| `event-sourcing` | Event sourcing patterns | ⭐⭐⭐ Advanced |
AI & LLM
| Example | Description | Difficulty |
|---|---|---|
| `toon-api` | TOON format responses | ⭐⭐ Intermediate |
| `mcp-server` | Model Context Protocol | ⭐⭐⭐ Advanced |
Real-time & GraphQL
| Example | Description | Difficulty |
|---|---|---|
| `websocket` | WebSocket chat example | ⭐⭐ Intermediate |
| `graphql-api` | GraphQL with async-graphql | ⭐⭐⭐ Advanced |
Production Patterns
| Example | Description | Difficulty |
|---|---|---|
| `microservices` | Basic service communication | ⭐⭐⭐ Advanced |
| `microservices-advanced` | Consul service discovery | ⭐⭐⭐ Advanced |
| `serverless-lambda` | AWS Lambda deployment | ⭐⭐⭐ Advanced |
🔧 Feature Matrix
Find examples by the RustAPI features they demonstrate:
| Feature | Examples |
|---|---|
| `#[get]`, `#[post]` macros | All examples |
| `State<T>` extractor | crud-api, auth-api, sqlx-crud |
| `Json<T>` extractor | crud-api, auth-api, graphql-api |
| `ValidatedJson<T>` | auth-api, crud-api |
| JWT (`extras-jwt` feature) | auth-api, microservices |
| CORS (`extras-cors` feature) | middleware-chain, auth-api |
| Rate Limiting | rate-limit-demo, auth-api |
| WebSockets (`protocol-ws` feature) | websocket, graphql-api |
| TOON (`protocol-toon` feature) | toon-api, mcp-server |
| OAuth2 (`oauth2-client`) | auth-api (extended) |
| Circuit Breaker | microservices |
| Replay (`extras-replay` feature) | microservices (conceptual) |
| OpenTelemetry (`otel`) | microservices-advanced |
| OpenAPI/Swagger | All examples |
🚦 Getting Started with Examples
Clone the Repository
git clone https://github.com/Tuntii/rustapi-rs-examples.git
cd rustapi-rs-examples
Run an Example
cd hello-world
cargo run
Test an Example
# Most examples have tests
cargo test
# Or use the TestClient
cd ../crud-api
cargo test
Explore the Structure
Each example includes:
- `README.md` - Detailed documentation with API endpoints
- `src/main.rs` - Entry point with server setup
- `src/handlers.rs` - Request handlers (where applicable)
- `Cargo.toml` - Dependencies and feature flags
- Tests demonstrating the TestClient
📖 Cross-Reference: Cookbook ↔ Examples
| Cookbook Recipe | Related Examples |
|---|---|
| Creating Resources | crud-api, sqlx-crud |
| JWT Authentication | auth-api |
| CSRF Protection | auth-api, middleware-chain |
| Database Integration | sqlx-crud, event-sourcing |
| File Uploads | file-upload (planned) |
| Custom Middleware | middleware-chain |
| Real-time Chat | websocket |
| Production Tuning | microservices-advanced |
| Resilience Patterns | microservices |
| Time-Travel Debugging | microservices |
| Deployment | serverless-lambda |
💡 Contributing Examples
Have a great example to share? We welcome contributions!
- Fork the rustapi-rs-examples repository
- Create your example following our structure guidelines
- Add comprehensive documentation in README.md
- Submit a pull request
Example Guidelines
- Include a clear README with prerequisites and API endpoints
- Add code comments explaining RustAPI-specific patterns
- Include working tests using `rustapi-testing`
- List the feature flags used
🔗 Additional Resources
- RustAPI GitHub - Framework source code
- API Reference - Generated documentation
- Feature Flags Reference - All available features
- Architecture Guide - How RustAPI works internally
💬 Need help? Open an issue in the examples repository or join our community discussions!
Structured Learning Path
This curriculum is designed to take you from a RustAPI beginner to an advanced user capable of building production-grade microservices.
Phase 1: Foundations
Goal: Build a simple CRUD API and understand the core request/response cycle.
Module 1: Introduction & Setup
- Prerequisites: Rust installed, basic Cargo knowledge.
- Reading: Installation, Project Structure.
- Task: Create a new project using `cargo rustapi new my-api`.
- Expected Output: A running server that responds to `GET /` with “Hello World”.
- Pitfalls: Not enabling `tokio` features if setting up manually.
🛠️ Mini Project: “The Echo Server”
Create a new endpoint POST /echo that accepts any text body and returns it back to the client. This verifies your setup handles basic I/O correctly.
🧠 Knowledge Check
- What command scaffolds a new RustAPI project?
- Which feature flag is required for the async runtime?
- Where is the main entry point of the application typically located?
Module 2: Routing & Handlers
- Prerequisites: Module 1.
- Reading: Handlers & Extractors.
- Task: Create routes for `GET /users`, `POST /users`, and `GET /users/{id}`.
- Expected Output: Endpoints that return static JSON data.
- Pitfalls: Forgetting to register routes in `main.rs` if not using auto-discovery.
🛠️ Mini Project: “The Calculator”
Create an endpoint GET /add?a=5&b=10 that returns {"result": 15}. This practices query parameter extraction and JSON responses.
🧠 Knowledge Check
- Which macro is used to define a GET handler?
- How do you return a JSON response from a handler?
- What is the return type of a typical handler function?
Module 3: Extractors
- Prerequisites: Module 2.
- Reading: Handlers & Extractors.
- Task: Use `Path`, `Query`, and `Json` extractors to handle dynamic input.
- Expected Output: `GET /users/{id}` returns the ID; `POST /users` echoes the JSON body.
- Pitfalls: Consuming the body twice (e.g., using `Json` and `Body` in the same handler).
🛠️ Mini Project: “The User Registry”
Create a POST /register endpoint that accepts a JSON body {"username": "...", "age": ...} and returns a welcome message using the username. Use the Json extractor.
🧠 Knowledge Check
- Which extractor is used for URL parameters like `/users/:id`?
- Which extractor parses the request body as JSON?
- Can you use multiple extractors in a single handler?
🏆 Phase 1 Capstone: “The Todo List API”
Objective: Build a simple in-memory Todo List API. Requirements:
- `GET /todos`: List all todos.
- `POST /todos`: Create a new todo.
- `GET /todos/:id`: Get a specific todo.
- `DELETE /todos/:id`: Delete a todo.
- Use `State` to store the list in a `Mutex<Vec<Todo>>`.
Phase 2: Core Development
Goal: Add real logic, validation, and documentation.
Module 4: State Management
- Prerequisites: Phase 1.
- Reading: State Extractor.
- Task: Create an `AppState` struct with a `Mutex<Vec<User>>`. Inject it into handlers.
- Expected Output: A stateful API where POST adds a user and GET retrieves it (in-memory).
- Pitfalls: Using `std::sync::Mutex` instead of `tokio::sync::Mutex` in async code (though `std` is fine for simple data).
🧠 Knowledge Check
- How do you inject global state into the application?
- Which extractor retrieves the application state?
- Why should you use `Arc` for shared state?
Module 4.5: Database Integration
- Prerequisites: Module 4.
- Reading: Database Integration.
- Task: Replace the in-memory `Mutex<Vec<User>>` with a PostgreSQL connection pool (`sqlx::PgPool`).
- Expected Output: Data persists across server restarts.
- Pitfalls: Blocking the async runtime with synchronous DB drivers (use `sqlx` or `tokio-postgres`).
🧠 Knowledge Check
- Why is connection pooling important?
- How do you share a DB pool across handlers?
- What is the benefit of compile-time query checking in SQLx?
Module 5: Validation
- Prerequisites: Module 4.
- Reading: Validation.
- Task: Add `#[derive(Validate)]` to your `User` struct. Use `ValidatedJson`.
- Expected Output: Requests with an invalid email or a short password return `422 Unprocessable Entity`.
- Pitfalls: Forgetting to add `#[validate]` attributes to struct fields.
🧠 Knowledge Check
- Which trait must a struct implement to be validatable?
- What HTTP status code is returned on validation failure?
- How do you combine JSON extraction and validation?
Module 5.5: Error Handling
- Prerequisites: Module 5.
- Reading: Error Handling.
- Task: Create a custom `ApiError` enum and implement `IntoResponse`. Return structured error responses.
- Expected Output: `GET /users/999` returns `404 Not Found` with a structured JSON error body.
- Pitfalls: Exposing internal database errors (like SQL strings) to the client.
🧠 Knowledge Check
- What is the standard error type in RustAPI?
- How do you mask internal errors in production?
- What is the purpose of the `error_id` field?
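The masking pitfall can be illustrated with a small, framework-free error type. Internals such as raw SQL stay server-side, while the client body carries only a generic message plus an `error_id` for log correlation. The names here are a sketch, not RustAPI's actual types.

```rust
/// Internal detail stays on the server; the client sees a masked message.
#[derive(Debug)]
enum ApiError {
    NotFound,
    Internal(String), // e.g. the raw SQL error, logged but never sent
}

impl ApiError {
    fn status(&self) -> u16 {
        match self {
            ApiError::NotFound => 404,
            ApiError::Internal(_) => 500,
        }
    }

    /// JSON body for the client. The `error_id` lets operators correlate
    /// the response with the full server-side log entry.
    fn client_body(&self, error_id: &str) -> String {
        let message = match self {
            ApiError::NotFound => "resource not found",
            ApiError::Internal(_) => "internal server error",
        };
        format!(r#"{{"error":"{message}","error_id":"{error_id}"}}"#)
    }
}

fn main() {
    let err = ApiError::Internal("SELECT * FROM users failed: relation missing".into());
    assert_eq!(err.status(), 500);
    let body = err.client_body("abc123");
    assert!(!body.contains("SELECT")); // internals are masked
    assert!(body.contains("abc123")); // but traceable via error_id
    assert_eq!(ApiError::NotFound.status(), 404);
}
```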
Module 6: OpenAPI & HATEOAS
- Prerequisites: Module 5.
- Reading: OpenAPI, OpenAPI Refs, Pagination Recipe.
- Task: Add `#[derive(Schema)]` to all DTOs. Use `#[derive(Schema)]` on a shared struct and reference it in multiple places.
- Expected Output: Swagger UI at `/docs` showing the full schema with shared components.
- Pitfalls: Recursive schemas without `Box` or `Option`.
🧠 Knowledge Check
- What does `#[derive(Schema)]` do?
- How does RustAPI handle shared schema components?
- What is HATEOAS and why is it useful?
Module 6.5: File Uploads & Multipart
- Prerequisites: Module 6.
- Reading: File Uploads.
- Task: Create an endpoint `POST /upload` that accepts a file and saves it to disk.
- Expected Output: `curl -F file=@image.png` uploads the file.
- Pitfalls: Loading large files entirely into memory (use streaming).
🧠 Knowledge Check
- Which extractor is used for file uploads?
- Why should you use `field.chunk()` instead of `field.bytes()`?
- How do you increase the request body size limit?
🏆 Phase 2 Capstone: “The Secure Blog Engine”
Objective: Enhance the Todo API into a Blog Engine. Requirements:
- Add a `Post` resource with title, content, and author.
- Validate that titles are not empty and content is at least 10 characters.
- Add pagination to `GET /posts`.
- Enable Swagger UI to visualize the API.
Phase 3: Advanced Features
Goal: Security, Real-time, and Production readiness.
Module 7: Authentication (JWT & OAuth2)
- Prerequisites: Phase 2.
- Reading: JWT Auth Recipe, OAuth2 Client.
- Task:
  - Implement a login route that returns a JWT.
  - Protect user routes with the `AuthUser` extractor.
  - (Optional) Implement “Login with Google” using `OAuth2Client`.
- Expected Output: Protected routes return `401 Unauthorized` without a valid token.
- Pitfalls: Hardcoding secrets. Not checking token expiration.
🧠 Knowledge Check
- What is the role of the `AuthUser` extractor?
- How does OAuth2 PKCE improve security?
- Where should you store the JWT secret?
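Independent of any framework, the expiration check this module warns about looks like the sketch below. Signature verification is deliberately omitted (a real JWT library checks it before the claims can be trusted); `Claims` and `authorize` are illustrative names, not RustAPI's.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Decoded JWT claims. `exp` is a Unix timestamp, per RFC 7519.
struct Claims {
    sub: String,
    exp: u64,
}

/// What an `AuthUser`-style extractor must do after signature checks:
/// reject expired tokens with 401 instead of silently accepting them.
fn authorize(claims: &Claims, now: u64) -> Result<&str, u16> {
    if now >= claims.exp {
        Err(401)
    } else {
        Ok(&claims.sub)
    }
}

fn main() {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    let valid = Claims { sub: "user-42".into(), exp: now + 3600 };
    let expired = Claims { sub: "user-42".into(), exp: now - 1 };
    assert_eq!(authorize(&valid, now).unwrap(), "user-42");
    assert_eq!(authorize(&expired, now).unwrap_err(), 401);
}
```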
Module 8: Advanced Middleware
- Prerequisites: Module 7.
- Reading: Advanced Middleware.
- Task:
  - Apply `RateLimitLayer` to your login endpoint (10 requests/minute).
  - Add `DedupLayer` to a payment endpoint.
  - Cache the response of a public “stats” endpoint.
- Expected Output: Sending 11 login attempts results in `429 Too Many Requests`.
- Pitfalls: Caching responses that contain user-specific data.
🧠 Knowledge Check
- What header indicates when the rate limit resets?
- Why is request deduplication important for payments?
- Which requests are typically safe to cache?
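What `RateLimitLayer` does conceptually can be shown with a std-only fixed-window counter. Production limiters usually prefer sliding windows or token buckets; the fixed window is simply the smallest design that reproduces the 11th-request behavior above. All names here are illustrative.

```rust
use std::collections::HashMap;

/// Fixed-window limiter: at most `limit` requests per key per window.
struct RateLimiter {
    limit: u32,
    window_secs: u64,
    // key -> (window index, requests seen in that window)
    counters: HashMap<String, (u64, u32)>,
}

impl RateLimiter {
    fn new(limit: u32, window_secs: u64) -> Self {
        Self { limit, window_secs, counters: HashMap::new() }
    }

    /// Returns the status a middleware would produce: 200 or 429.
    fn check(&mut self, key: &str, now: u64) -> u16 {
        let window = now / self.window_secs;
        let entry = self.counters.entry(key.to_string()).or_insert((window, 0));
        if entry.0 != window {
            *entry = (window, 0); // new window: reset the counter
        }
        entry.1 += 1;
        if entry.1 > self.limit { 429 } else { 200 }
    }
}

fn main() {
    let mut limiter = RateLimiter::new(10, 60);
    for _ in 0..10 {
        assert_eq!(limiter.check("login:1.2.3.4", 0), 200);
    }
    assert_eq!(limiter.check("login:1.2.3.4", 0), 429); // 11th attempt rejected
    assert_eq!(limiter.check("login:1.2.3.4", 60), 200); // next window resets
}
```

Keying by client IP (or user ID) means one abusive client cannot exhaust another client's quota.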
Module 9: WebSockets & Real-time
- Prerequisites: Phase 2.
- Reading: WebSockets Recipe.
- Task: Create a chat endpoint where users can broadcast messages.
- Expected Output: Multiple clients connected via WS receiving messages in real-time.
- Pitfalls: Blocking the WebSocket loop with long-running synchronous tasks.
🧠 Knowledge Check
- How do you upgrade an HTTP request to a WebSocket connection?
- Can you share state between HTTP handlers and WebSocket handlers?
- What happens if a WebSocket handler panics?
Module 10: Production Readiness & Deployment
- Prerequisites: Phase 3.
- Reading: Production Tuning, Resilience, Deployment.
- Task:
  - Add `CompressionLayer` and `TimeoutLayer`.
  - Use `cargo rustapi deploy docker` to generate a Dockerfile.
- Expected Output: A resilient API ready for deployment.
- Pitfalls: Setting timeouts too low for slow operations.
🧠 Knowledge Check
- Why is timeout middleware important?
- What command generates a production Dockerfile?
- How do you enable compression for responses?
Module 11: Background Jobs & Testing
- Prerequisites: Phase 3.
- Reading: Background Jobs Recipe, Testing Strategy.
- Task:
  - Implement a job `WelcomeEmailJob` that sends a “Welcome” email (simulated with `tokio::time::sleep`).
  - Enqueue this job inside your `POST /register` handler.
  - Write an integration test using `TestClient` to verify the registration endpoint.
- Expected Output: Registration returns 200 immediately (low latency); console logs show “Sending welcome email to …” shortly after (asynchronous). Tests pass.
- Pitfalls: Forgetting to start the job worker loop (`JobWorker::new(queue).run().await`).
🛠️ Mini Project: “The Email Worker”
Create a system where users can request a “Report”.
- `POST /reports`: Enqueues a `GenerateReportJob`. Returns `{"job_id": "..."}` immediately.
- The job simulates 5 seconds of work and then writes “Report Generated” to a file or log.
- (Bonus) Use Redis backend for persistence.
🧠 Knowledge Check
- Why should you offload email sending to a background job?
- Which backend is suitable for local development vs production?
- How do you enqueue a job from a handler?
- How can you test that a job was enqueued without actually running it?
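The enqueue-then-return flow can be modeled with a std `mpsc` channel and a worker thread. A real `rustapi-jobs` worker is async and may be backed by Redis, but the shape is the same: the handler sends and returns immediately, a separate worker loop receives. Forgetting to spawn that loop is exactly the pitfall noted above.

```rust
use std::sync::mpsc;
use std::thread;

enum Job {
    WelcomeEmail { to: String },
}

/// Enqueue jobs, then run a single worker until the queue is closed.
/// Returns how many jobs the worker handled.
fn run_jobs(jobs: Vec<Job>) -> usize {
    let (queue, receiver) = mpsc::channel::<Job>();

    // The worker loop: the part that is easy to forget to start.
    let worker = thread::spawn(move || {
        let mut handled = 0;
        for job in receiver {
            match job {
                Job::WelcomeEmail { to } => {
                    // Real code would send mail here; we just log.
                    println!("Sending welcome email to {to}");
                    handled += 1;
                }
            }
        }
        handled
    });

    // A handler enqueues and returns immediately; the send does not block.
    for job in jobs {
        queue.send(job).unwrap();
    }
    drop(queue); // close the queue so the worker loop exits
    worker.join().unwrap()
}

fn main() {
    let handled = run_jobs(vec![
        Job::WelcomeEmail { to: "alice@example.com".into() },
        Job::WelcomeEmail { to: "bob@example.com".into() },
    ]);
    assert_eq!(handled, 2);
}
```

This also suggests an answer to the last knowledge-check question: with the queue abstracted behind a sender handle, a test can assert on what was sent without running the worker at all.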
🏆 Phase 3 Capstone: “The Real-Time Collaboration Tool”
Objective: Build a real-time collaborative note-taking app. Requirements:
- Auth: Users must log in (JWT or OAuth2) to edit notes.
- Real-time: Changes to a note are broadcast to all viewers via WebSockets.
- Jobs: When a note is deleted, schedule a background job to archive it (simulate archive).
- Resilience: Rate limit API requests to prevent abuse.
- Deployment: Specify a `Dockerfile` for the application.
Phase 4: Enterprise Scale
Goal: Build observable, resilient, and high-performance distributed systems.
Module 12: Observability & Auditing
- Prerequisites: Phase 3.
- Reading: Observability (Extras), Audit Logging.
- Task:
  - Enable `structured-logging` and `otel`.
  - Configure tracing to export spans.
  - Implement `AuditStore` and log a “User Login” event with IP address.
- Expected Output: Logs are JSON formatted. Audit log contains a new entry for every login.
- Pitfalls: High cardinality in metric labels.
🧠 Knowledge Check
- What is the difference between logging and auditing?
- Which fields are required in an `AuditEvent`?
- How does structured logging aid debugging?
Module 13: Resilience & Security
- Prerequisites: Phase 3.
- Reading: Resilience Patterns, Time-Travel Debugging.
- Task:
  - Wrap an external API call with a `CircuitBreaker`.
  - Implement `RetryLayer` for transient failures.
  - (Optional) Use `ReplayLayer` to record and replay a tricky bug scenario.
- Expected Output: System degrades gracefully when external service is down. Replay file captures the exact request sequence.
- Pitfalls: Infinite retry loops or retrying non-idempotent operations.
🧠 Knowledge Check
- What state does a Circuit Breaker have when it stops traffic?
- Why is jitter important in retry strategies?
- How does Time-Travel Debugging help with “Heisenbugs”?
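The circuit-breaker states the first question asks about (Closed, Open, Half-Open) fit in a small std-only state machine. Threshold and cooldown values here are illustrative, and the names are a sketch rather than RustAPI's `CircuitBreaker` API.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum State {
    Closed,   // traffic flows normally
    Open,     // calls are rejected outright
    HalfOpen, // one probe request is allowed through
}

struct CircuitBreaker {
    state: State,
    failures: u32,
    threshold: u32, // consecutive failures before opening
    opened_at: u64, // timestamp when the breaker opened
    cooldown: u64,  // seconds to wait before probing again
}

impl CircuitBreaker {
    fn new(threshold: u32, cooldown: u64) -> Self {
        Self { state: State::Closed, failures: 0, threshold, opened_at: 0, cooldown }
    }

    /// May a call be attempted right now?
    fn allow(&mut self, now: u64) -> bool {
        match self.state {
            State::Closed | State::HalfOpen => true,
            State::Open if now >= self.opened_at + self.cooldown => {
                self.state = State::HalfOpen; // let one probe through
                true
            }
            State::Open => false,
        }
    }

    fn record_success(&mut self) {
        self.state = State::Closed;
        self.failures = 0;
    }

    fn record_failure(&mut self, now: u64) {
        self.failures += 1;
        if self.state == State::HalfOpen || self.failures >= self.threshold {
            self.state = State::Open; // trip (or re-trip after a failed probe)
            self.opened_at = now;
        }
    }
}

fn main() {
    let mut breaker = CircuitBreaker::new(3, 30);
    for _ in 0..3 {
        assert!(breaker.allow(0));
        breaker.record_failure(0);
    }
    assert!(!breaker.allow(10)); // open: calls rejected during cooldown
    assert!(breaker.allow(35));  // cooldown elapsed: half-open probe allowed
    breaker.record_success();
    assert!(breaker.allow(36));  // closed again
}
```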
Module 14: High Performance
- Prerequisites: Phase 3.
- Reading: HTTP/3 (QUIC), Performance Tuning, Compression.
- Task:
  - Enable the `http3` feature and generate self-signed certs.
  - Serve traffic over QUIC.
  - Add `CompressionLayer` to compress large responses.
- Expected Output: Browser/client connects via HTTP/3. Responses have `content-encoding: gzip`.
- Pitfalls: Compressing small responses (a waste of CPU) or already compressed data (images).
🧠 Knowledge Check
- What transport protocol does HTTP/3 use?
- How does `simd-json` improve performance?
- Why shouldn’t you compress JPEG images?
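The compression pitfalls reduce to a predicate like the following. The size threshold and content-type list are illustrative assumptions, not values taken from `CompressionLayer`.

```rust
/// Compressing tiny bodies wastes CPU, and recompressing already
/// compressed formats (JPEG, zip, video) saves nothing.
fn should_compress(content_type: &str, body_len: usize) -> bool {
    // Below roughly 1 KiB, the gzip header overhead tends to dominate.
    const MIN_SIZE: usize = 1024;
    let already_compressed = matches!(
        content_type,
        "image/jpeg" | "image/png" | "image/webp" | "application/zip" | "video/mp4"
    );
    body_len >= MIN_SIZE && !already_compressed
}

fn main() {
    assert!(should_compress("application/json", 50_000));
    assert!(!should_compress("image/jpeg", 500_000)); // already compressed
    assert!(!should_compress("application/json", 100)); // too small to bother
}
```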
🏆 Phase 4 Capstone: “The High-Scale Event Platform”
Objective: Architect a system capable of handling thousands of events per second. Requirements:
- Ingestion: HTTP/3 endpoint receiving JSON events.
- Processing: Push events to a `rustapi-jobs` queue (Redis backend).
- Storage: Workers process events and store aggregates in a database.
- Observability: Full tracing from ingestion to storage.
- Audit: Log all configuration changes to the system.
- Resilience: Circuit breakers on database writes.
- Testing: Load test the ingestion endpoint (e.g., with k6 or similar) and observe metrics.
Phase 5: Specialized Skills
Goal: Master integration with AI, gRPC, and server-side rendering.
Module 15: Server-Side Rendering (SSR)
- Prerequisites: Phase 2.
- Reading: SSR Recipe.
- Task: Create a dashboard showing system status using `rustapi-view`.
- Expected Output: An HTML page rendered with Tera templates, displaying dynamic data.
- Pitfalls: Forgetting to create the `templates/` directory.
🧠 Knowledge Check
- Which template engine does RustAPI use?
- How do you pass data to a template?
- How does template reloading work in debug mode?
Module 16: gRPC Microservices
- Prerequisites: Phase 3.
- Reading: gRPC Recipe.
- Task: Run a gRPC service alongside your HTTP API that handles internal user lookups.
- Expected Output: Both servers running; HTTP endpoint calls gRPC method (simulated).
- Pitfalls: Port conflicts if not configured correctly.
🧠 Knowledge Check
- Which crate provides gRPC helpers for RustAPI?
- Can HTTP and gRPC share the same Tokio runtime?
- Why might you want to run both in the same process?
Module 17: AI Integration (TOON)
- Prerequisites: Phase 2.
- Reading: AI Integration Recipe.
- Task: Create an endpoint that returns standard JSON for browsers but TOON for `Accept: application/toon`.
- Expected Output: `curl` requests with different headers return different formats.
- Pitfalls: Not checking the `Accept` header in client code.
🧠 Knowledge Check
- What is TOON and why is it useful for LLMs?
- How does `LlmResponse` decide which format to return?
- How much token usage can TOON save on average?
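At its core, the content negotiation this module describes is a branch on the `Accept` header. This sketch uses a plain substring check and hypothetical names (`Format`, `negotiate`); a full implementation would parse the complete header (multiple media types, q-values), which is presumably what `LlmResponse` does internally.

```rust
#[derive(Debug, PartialEq)]
enum Format {
    Json,
    Toon,
}

/// Pick a response format from the request's Accept header.
fn negotiate(accept: Option<&str>) -> Format {
    match accept {
        Some(value) if value.contains("application/toon") => Format::Toon,
        _ => Format::Json, // browsers and unknown clients get JSON
    }
}

fn main() {
    assert_eq!(negotiate(Some("application/toon")), Format::Toon);
    assert_eq!(negotiate(Some("text/html,application/json")), Format::Json);
    assert_eq!(negotiate(None), Format::Json);
}
```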
🏆 Phase 5 Capstone: “The Intelligent Dashboard”
Objective: Combine SSR, gRPC, and AI features. Requirements:
- Backend: Retrieve stats via gRPC from a “worker” service.
- Frontend: Render a dashboard using SSR.
- AI Agent: Expose a TOON endpoint for an AI agent to query the system status.
Next Steps
- Explore the Examples Repository.
- Contribute a new recipe to the Cookbook!