Chapter 05 of 8
How functions are invoked synchronously and asynchronously, and where queueing happens.
OpenFaaS supports two main invocation patterns: synchronous (sync) and asynchronous (async), each serving different use cases and requirements.
Synchronous: the client waits for the function to complete and receives the response in the same HTTP exchange.
Asynchronous: the client receives an immediate acknowledgment (HTTP 202 Accepted, with an X-Call-Id header for tracking), and the function executes in the background.
The Gateway decides how to execute each request based on the route it arrived on and the function's configuration: synchronous requests are proxied straight to the function, while asynchronous requests are serialized and queued for background processing.
1. The Gateway parses the request to determine the invocation type (sync or async).
2. It looks up the target function and its current state.
3. It decides whether to execute the request immediately or queue it.
4. The function is executed either directly or through the queue system.
5. The response is returned to the client (sync), or delivered later, for example via a callback (async).
Queueing in OpenFaaS happens at multiple levels to handle high load, manage resources efficiently, and provide reliable message delivery.
Gateway level: when the Gateway receives more requests than it can process immediately, requests are queued in memory or in persistent storage.
Function level: individual functions can have their own queues to absorb bursts of requests and control execution order.
Provider level: the Kubernetes provider maintains work queues for function deployment, scaling, and lifecycle management.
Message broker: OpenFaaS uses NATS as its default queue for async invocations; other brokers can be integrated for advanced queueing scenarios.
OpenFaaS implements queueing using various strategies depending on the deployment configuration and requirements.
In-memory queues: fast but not persistent; suitable for development and testing.
Persistent queues: stored on disk or in a database; survive restarts and failures.
Priority queues: process high-priority requests before lower-priority ones.
Dead-letter queues: store failed requests for later analysis and retry.
The following simplified handler illustrates how the Gateway routes synchronous and asynchronous requests. In OpenFaaS, async invocations are addressed via the /async-function/<name> route rather than a custom header:

// Simplified sketch; routing uses gorilla/mux.
func (h *Handler) InvokeFunction(w http.ResponseWriter, r *http.Request) {
	// Parse the function name from the URL path.
	functionName := mux.Vars(r)["name"]

	// Async invocations arrive on the /async-function/<name> route.
	if strings.HasPrefix(r.URL.Path, "/async-function/") {
		// Queue the request for background processing and acknowledge it.
		if err := queueRequest(functionName, r); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusAccepted)
		return
	}

	// Synchronous invocation: block until the function returns.
	result, err := invokeFunctionSync(functionName, r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Write(result)
}
Understanding the performance implications of different invocation patterns helps in designing efficient serverless applications.
Latency and throughput: sync calls block the client for the function's full execution time but return the result immediately; async calls acknowledge quickly and improve throughput, at the cost of retrieving results separately.
Resource utilization: queueing smooths load spikes and improves resource utilization across the system.
Error handling: sync calls surface errors directly in the response, while async calls require robust retry and dead-letter mechanisms.
Monitoring: the two patterns require different monitoring approaches to track performance and identify bottlenecks.
Now that you understand how requests are handled and queued, let's explore how OpenFaaS automatically scales functions from 0 to N instances.