Over the past few months, we built a framework to run custom code embedded within the Grafbase GraphQL Federation Gateway. Our customers frequently request specific customizations that don't fit neatly into our generic feature set. For example, large enterprises and financial services often need complex authorization strategies and have high security demands.
These customers share a desire for custom solutions, sometimes utilizing proprietary services without an open protocol. Instead of implementing these features for everyone within the gateway—which could become overwhelming and yield diminishing returns—we take a different approach.
Some of these customer needs include:
- Custom authentication mechanisms that do not conform to the JWT protocol.
- Data authorization based on input or output data, using internal services to determine which fields to show or hide.
- Proprietary caching services with unique protocols and implementations not publicly available.
- Specific logging requirements in custom formats that do not adhere to common logging patterns.
How can we provide tooling for our customers to meet their unique needs in a user-friendly way while allowing for more complex use cases?
The Grafbase Gateway operates as a native Rust server. However, Rust currently lacks a stable application binary interface, which prevents dynamically loading customer code compiled with a different version of rustc. Running native modules within the gateway would therefore require compiling the entire gateway together with each custom module, complicating upgrades for users every time we ship a new gateway version.
An alternative approach involves interpreting a scripting language, such as Python, JavaScript, RHAI, or Steel. Each of these languages offers distinct advantages. For instance, JavaScript and Python have robust runtimes in Rust and are well-known in the developer community. However, embedding a full runtime for any scripting language would significantly increase the binary size of the gateway. Additionally, users might encounter compatibility issues with our chosen runtime, leading to challenges when their code runs successfully in their environment but fails in our version of the runtime.
RHAI and Steel are scripting languages designed for embedding within a Rust host. This characteristic ensures that they will run smoothly within the environment. However, both languages are relatively unfamiliar to many developers; RHAI resembles Rust but has its unique quirks, while Steel, being a Scheme implementation, may attract more advanced developers interested in programming languages. Moreover, neither RHAI nor Steel offers a well-established ecosystem of libraries for common functionalities such as network I/O and HTTP.
Apollo tackled this problem in stages. They allow attaching to the request lifecycle using RHAI scripts, enabling input investigation and potential execution prevention. Unfortunately, customers can't perform network I/O from RHAI scripts. Recognizing this limitation, Apollo offers external coprocessing with a sidecar service for enterprise license holders. Customers implement a network service and configure the Apollo router to communicate via a predefined protocol. This adds an extra network call to the request lifecycle and increases complexity when facing RHAI limitations.
Apollo's final option is writing native Rust plugins. Customers download the Apollo Router source code, implement a complex Rust module with their business logic, and license their modifications under the Elastic License v2.0. Refactoring in Apollo's Router codebase can make rebasing custom Rust logic challenging, which makes upgrading to later router versions difficult.
RHAI scripts and the coprocessor can solve only a certain class of problems. How, for example, would you implement custom cache storage with these tools? Eventually the customer is forced either to pay Apollo to implement the needed code or to modify and maintain a custom build of the router that serves their needs.
For much of our company's history, we have compiled our code to WebAssembly targets, deploying our gateway to Cloudflare Workers. This setup effectively runs Rust code within a JavaScript runtime. However, running a WebAssembly module within a Rust host poses multiple challenges:
- Communication between the guest and the host can only occur with integers or floats.
- Programmers must manage memory allocations for shared data and ensure proper memory deallocation.
- WebAssembly lacks support for many system functions, including file and network I/O.
If the host needs to call a guest function written in WebAssembly with data types more complex than integers and floats, it has to store the shared data in memory accessible to both the host and guest. This necessitates implementing a layer to relay memory addresses and lengths to the guest while ensuring the host cleans up memory no longer in use.
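To make that bookkeeping concrete, here is a rough host-side sketch of such a relay layer using Wasmtime's core (pre-component) API. The alloc and greet exports and their signatures are hypothetical, and the deallocation step is omitted for brevity:
use wasmtime::{Engine, Instance, Module, Store};

fn call_greet(wasm_path: &str, name: &str) -> wasmtime::Result<String> {
    let engine = Engine::default();
    let module = Module::from_file(&engine, wasm_path)?;
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;

    // The guest must export its linear memory so the host can copy data in and out.
    let memory = instance
        .get_memory(&mut store, "memory")
        .expect("guest must export its linear memory");

    // Ask the guest to allocate space for the input string, then copy the bytes
    // into the shared linear memory.
    let alloc = instance.get_typed_func::<i32, i32>(&mut store, "alloc")?;
    let ptr = alloc.call(&mut store, name.len() as i32)?;
    memory.write(&mut store, ptr as usize, name.as_bytes())?;

    // The hypothetical greet export takes and returns (pointer, length) pairs.
    let greet = instance.get_typed_func::<(i32, i32), (i32, i32)>(&mut store, "greet")?;
    let (out_ptr, out_len) = greet.call(&mut store, (ptr, name.len() as i32))?;

    // Copy the result back out of guest memory. A real implementation would also
    // call a guest-exported deallocation function for both buffers.
    let mut buf = vec![0u8; out_len as usize];
    memory.read(&store, out_ptr as usize, &mut buf)?;

    Ok(String::from_utf8(buf)?)
}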
The absence of system calls has also been a significant hurdle for developers interested in using WebAssembly. The only network call available has been HTTP fetch. This limitation means that many commonly used libraries, such as database clients, simply won’t compile or may throw runtime errors when trying to access the network. Although Cloudflare has implemented its own TCP socket functionality for Cloudflare Workers, it is not generally usable outside their platform.
To address these challenges, the WebAssembly System Interface (WASI) Subgroup has worked on developing a platform that allows the seamless embedding and execution of WebAssembly code within a host. In January 2024, the WASI Subgroup launched WASI Preview 2, introducing definitions for standardized communication between the guest and host, as well as modularizing and versioning the interface definitions.
With WASI, it becomes possible to send complex structures and strings across the WebAssembly boundary using standardized definitions. Runtimes such as Wasmtime can provide tooling for hosts to execute WebAssembly components written in various languages, all while maintaining a common understanding of how complex types are handled.
For Grafbase, the introduction of WebAssembly components presents a sweet spot for enabling customer code within the gateway. WASI's application binary interface allows users to leverage the tools of their choice to compile additional functionality into the gateway.
The second WASI preview introduced the WIT interface language, which defines the boundaries between the host and the guest. WIT is solely an interface definition language, not intended for implementation:
world example {
    export hello: func(name: string) -> string;
}
A "world" defines the functionality that either the guest or host provides, including how the host can call guest functions and what return values to expect. Now, the guest can utilize a string type of their choice directly, and the host receives strings in a format native to its language.
In addition to worlds, WIT definitions can also define interfaces:
interface first {
    hello: func(name: string) -> string;
}

interface second {
    sum: func(x: i64, y: i64) -> i64;
}

world example {
    export first;
    export second;
}
The guest only needs to implement the interfaces exported in the world. These interfaces can be packaged into separate files and versioned, making component distribution easier.
We chose Rust as the guest language due to its robust tooling and support for WASI. Although other languages, such as Go and JavaScript, are gaining traction within the WASI ecosystem, Rust currently offers the most mature support.
A user implementing a guest must define functions to match those specified in the WIT definition. The tool cargo-component simplifies this task by generating Rust bindings based on the WIT, compiling the Rust program for the wasm32-wasip1
target, and generating shims to ensure compatibility with the wasm32-wasip2
standard.
To create a new WebAssembly component, run:
cargo component new --lib request-hook
This command generates a file structure consisting of:
.
├── Cargo.lock
├── Cargo.toml
├── src
│ └── lib.rs
└── wit
└── world.wit
Replace the WIT definition in wit/world.wit
by copying the Grafbase WIT definition from our GitHub repository:
curl -o wit/world.wit https://raw.githubusercontent.com/grafbase/grafbase/main/engine/crates/wasi-component-loader/gateway-hooks.wit
Next, run cargo component check
to reveal errors regarding missing function definitions. The compiler guides you to implement everything exported in the world. Leveraging Rust tooling, such as rust-analyzer, automates much of the boilerplate code.
Here’s a look at the guest implementation:
#[allow(warnings)]
mod bindings;
struct Component;
bindings::export!(Component with_types_in bindings);
We export the Component struct, but we still need to implement the functions defined in the WIT. After running cargo component at least once, it generates a file named src/bindings.rs, in which a Guest trait specifies the functions the WIT world expects us to implement. We can implement the Guest trait for our component as follows:
impl Guest for Component {
}
A code action in rust-analyzer adds the needed imports and generates the required function stubs automatically:
use bindings::exports::component::grafbase::gateway_request::{Context, Error, Guest, Headers};

#[allow(warnings)]
mod bindings;

struct Component;

impl Guest for Component {
    fn on_gateway_request(_: Context, headers: Headers) -> Result<(), Error> {
        todo!()
    }
}
The hook takes context and headers as input parameters. The context provides a key-value store shared by all subsequent hooks during the request, along with a few system functions. The headers collect all request headers for reading or modification. If we return an Ok
response, the operation continues; if we return an Error
, it stops execution.
This function currently panics when called because of the todo!() macro, so let's return a valid response instead:
use bindings::exports::component::grafbase::gateway_request::{Context, Error, Guest, Headers};

#[allow(warnings)]
mod bindings;

struct Component;

impl Guest for Component {
    fn on_gateway_request(_: Context, headers: Headers) -> Result<(), Error> {
        match headers.get("x-custom").as_deref() {
            Some("secret") => Ok(()),
            _ => Err(Error {
                message: String::from("access denied"),
                extensions: Vec::new(),
            }),
        }
    }
}

bindings::export!(Component with_types_in bindings);
The code examines the value of the x-custom
header. If the value is secret
, the operation continues. Otherwise, we send an error response.
This covers everything needed for a simple implementation. Running cargo component build --release generates the bytecode you can load into the Grafbase Gateway through the TOML configuration. The compiler writes the WebAssembly component bytecode to target/wasm32-wasip1/release/request_hook.wasm.
[hooks]
location = "target/wasm32-wasip1/release/request_hook.wasm"
The target remains wasm32-wasip1 due to Rust's limited support for Preview 2 at present. cargo-component generates the corresponding shims in the bindings, ensuring that even though the compilation target is Preview 1, the bytecode output conforms to WASI Preview 2. As of version 1.81.0, the wasm32-wasip2 target has gained tier-2 status in the Rust compiler, which means rustc can now generate valid wasip2 output directly. With this development, cargo-component will soon compile straight to this target, removing the need for shims and making the output easier to understand.
Start the gateway with this configuration, along with a simple federated graph:
grafbase-gateway --schema federated-schema.graphql --config grafbase.toml
To observe the request hook in action, execute the following queries against the federated graph.
curl -X POST 'http://127.0.0.1:5000/graphql' \
--data '{"query": "query { user(id: 1) { id name address { street } } }"}' \
-H "Content-Type: application/json" \
-H "x-custom: secret"
The above request correctly sets the value of the x-custom
header, and the query should return data:
{
"data": {
"user": {
"id": 1,
"name": "Alice",
"address": {
"street": "123 Folsom"
}
}
}
}
However, changing the header value leads to a denial of access:
curl -X POST 'http://127.0.0.1:5000/graphql' \
--data '{"query": "query { user(id: 1) { id name address { street } } }"}' \
-H "Content-Type: application/json" \
-H "x-custom: wrong"
This hook prevents the operation and returns an error:
{
"errors": [
{
"message": "access denied",
"extensions": {
"code": "BAD_REQUEST"
}
}
]
}
You can find the example component in the Grafbase GitHub repository.
Several runtimes can execute WASI from a Rust host, with Wasmtime currently being the most prominent.
A WebAssembly component operates within a sandbox environment, with boundaries defined by the runtime. The runtime can expose system functionality for the guest in pieces. For example, a guest may have access to standard output but not to the filesystem or network.
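As a rough illustration, this is how a host can grant capabilities piecemeal with Wasmtime and wasmtime-wasi. Exact module paths and trait shapes differ between wasmtime-wasi versions, so treat this as an outline rather than the gateway's actual setup:
use wasmtime::component::{Component, Linker};
use wasmtime::{Config, Engine, Store};
use wasmtime_wasi::{ResourceTable, WasiCtx, WasiCtxBuilder, WasiView};

// Per-instance state: the WASI context holds the granted capabilities and the
// resource table holds host-owned resources the guest can reference.
struct State {
    wasi: WasiCtx,
    table: ResourceTable,
}

impl WasiView for State {
    fn ctx(&mut self) -> &mut WasiCtx {
        &mut self.wasi
    }

    fn table(&mut self) -> &mut ResourceTable {
        &mut self.table
    }
}

fn instantiate(path: &str) -> wasmtime::Result<()> {
    let engine = Engine::new(&Config::new())?;
    let component = Component::from_file(&engine, path)?;

    // Wire up the WASI interfaces the guest is allowed to import.
    let mut linker = Linker::<State>::new(&engine);
    wasmtime_wasi::add_to_linker_sync(&mut linker)?;

    // The guest gets standard output for logging, but no filesystem or network
    // access, because we simply never grant those capabilities.
    let wasi = WasiCtxBuilder::new().inherit_stdout().build();
    let state = State { wasi, table: ResourceTable::new() };
    let mut store = Store::new(&engine, state);

    let _instance = linker.instantiate(&mut store, &component)?;
    Ok(())
}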
A hook function is instantiated in memory together with a resource table and the host interfaces it requires. The resource table acts as shared memory where the host stores data the guest can access through shared resources. An instantiated hook can serve only one call at a time, and calling it again without resetting its state would expose data from previous requests. Our first approach was therefore to instantiate a new component for each request, but this added almost a millisecond to the total response time.
Instead, we implemented a pool that recycles component instances after a hook call finishes, without triggering runtime errors. This significantly reduced execution time to under a hundred microseconds, which, while still noticeable, is manageable compared to the duration of network calls within the request lifecycle.
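A drastically simplified sketch of the recycling idea (not the gateway's actual implementation) keeps idle instances in a queue and reuses them across calls:
use std::collections::VecDeque;
use std::sync::Mutex;

// Stand-in for an instantiated hook component together with its store.
struct HookInstance;

struct HookPool {
    idle: Mutex<VecDeque<HookInstance>>,
}

impl HookPool {
    // Reuse an idle instance if one exists, otherwise pay the instantiation cost.
    fn get(&self) -> HookInstance {
        self.idle
            .lock()
            .unwrap()
            .pop_front()
            .unwrap_or_else(|| HookInstance /* instantiate a fresh component here */)
    }

    // After the hook call completes and per-call state is reset, the instance
    // returns to the queue for the next request.
    fn put(&self, instance: HookInstance) {
        self.idle.lock().unwrap().push_back(instance);
    }
}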
The Grafbase Gateway executes requests in a multi-threaded asynchronous environment. WASI Preview 2 doesn't yet provide asynchronous calls to guest functions, so calling a hook blocks the current thread. This becomes a bottleneck if the hook performs I/O such as network requests.
The Wasmtime runtime enables cooperative execution of blocking function calls. Because it controls execution, it can pause the hook at any point, and the library provides a couple of clever techniques for yielding from a blocking function call.
The store may hold a resource known as fuel. If fuel consumption is active, the compiled code becomes instrumented, ensuring that each operation expends a predetermined amount of fuel. The store maintains a limit on fuel consumption, pausing the execution when this limit is reached and yielding control back to the calling thread. This architecture allows the thread to perform other tasks, and when the future is polled again, it resumes execution until it exhausts the defined fuel limit.
If the guest exceeds the allowed fuel usage, the function's execution traps and returns an error to the host. This design prevents issues such as deadlocks or infinite loops from blocking the host thread. While this fuel management approach introduces additional complexity due to instrumentation, it remains straightforward to set up, ensuring that yielding occurs in a predictable manner.
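A minimal sketch of the fuel setup with Wasmtime could look like this; the method names come from recent Wasmtime releases and the budget numbers are arbitrary:
use wasmtime::{Config, Engine, Store};

fn fuel_limited_store() -> wasmtime::Result<Store<()>> {
    let mut config = Config::new();
    // Instrument compiled code so every operation consumes fuel.
    config.consume_fuel(true);
    config.async_support(true);

    let engine = Engine::new(&config)?;
    let mut store = Store::new(&engine, ());

    // Give the guest a fixed fuel budget for the whole call...
    store.set_fuel(1_000_000)?;
    // ...and yield control back to the host scheduler every 10,000 units spent.
    store.fuel_async_yield_interval(Some(10_000))?;

    Ok(store)
}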
An alternative is to configure the engine to use epoch interruption and set a deadline for the store's epoch. Execution yields every time this deadline is reached, but the method complicates the host-side implementation: a separate thread must be spawned that holds a weak reference to the engine and keeps calling the increment_epoch function until the host shuts down.
The epoch interruption strategy simplifies yielding within the guest runtime and can improve performance. However, managing the ticker thread can be tricky, and yielding happens at less deterministic points.
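And a hedged sketch of the epoch variant; again, the method names follow recent Wasmtime releases and the tick interval is arbitrary:
use std::thread;
use std::time::Duration;
use wasmtime::{Config, Engine, Store};

fn epoch_interrupted_store() -> wasmtime::Result<Store<()>> {
    let mut config = Config::new();
    config.async_support(true);
    config.epoch_interruption(true);

    let engine = Engine::new(&config)?;

    // Ticker thread: hold only a weak reference so the engine can shut down cleanly.
    let weak = engine.weak();
    thread::spawn(move || {
        while let Some(engine) = weak.upgrade() {
            engine.increment_epoch();
            thread::sleep(Duration::from_millis(10));
        }
    });

    let mut store = Store::new(&engine, ());
    // Yield back to the host on every epoch tick and push the deadline one tick
    // forward so the guest keeps making progress between yields.
    store.epoch_deadline_async_yield_and_update(1);

    Ok(store)
}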
While guest functions themselves cannot be asynchronous, they may trigger asynchronous actions if powered by a runtime capable of async function execution. This means that a JavaScript guest can run its event loop, Go can use channels for async calls, and a Rust guest can utilize a single-threaded Tokio runtime to await futures similarly to the host. Each of these interactions appears blocking from the host's perspective, yet the async engine permits cooperative pausing of guest execution. Consequently, the host thread can handle multiple tasks simultaneously.
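For example, a Rust guest can wrap its async logic in a current-thread Tokio runtime behind the synchronous export; the check_permission function below is purely illustrative:
// From the host's point of view this export blocks; inside the guest, the
// single-threaded runtime can await as many futures as it likes.
fn on_gateway_request_blocking(header_value: String) -> bool {
    let runtime = tokio::runtime::Builder::new_current_thread()
        .build()
        .expect("failed to build single-threaded Tokio runtime");

    runtime.block_on(check_permission(header_value))
}

// Illustrative placeholder for an async call to an internal service.
async fn check_permission(value: String) -> bool {
    value == "secret"
}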
The working group is actively pursuing support for futures in the next iteration of WASM components. This enhancement would allow the host runtime to handle guest futures more efficiently, resulting in faster operations and smaller guest sizes since they wouldn’t require an included runtime.
The hooks feature has been available for a few months and comes integrated into the Grafbase Gateway, which is free and licensed under the Mozilla Public License Version 2.0. We remain committed to providing all Gateway features under an OSI-approved license.
The Grafbase Gateway fully supports Apollo federation and can operate any valid federated schema. To learn more about running the Grafbase Gateway, refer to our documentation on the gateway and hooks.
The source code of the Grafbase WASI loader is available in our GitHub repository.