Chapter 5

Request Handling

Discover how OpenFaaS functions are invoked synchronously and asynchronously, and understand where queueing happens in the system.


⚡ Synchronous vs Asynchronous Invocation

OpenFaaS supports two main invocation patterns: synchronous (sync) and asynchronous (async), each serving different use cases and requirements.

🔄 Synchronous Invocation

The client waits for the function to complete and receives the response immediately.

  • Real-time responses
  • Direct error handling
  • Suitable for user-facing APIs
  • Caller waits out the full execution latency

📬 Asynchronous Invocation

The client receives an acknowledgment and the function executes in the background.

  • Fire-and-forget operations
  • Better for batch processing
  • Improved system responsiveness
  • Requires separate result handling
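In OpenFaaS itself, the two patterns map to different Gateway routes: `/function/<name>` runs synchronously, while `/async-function/<name>` queues the call and immediately returns `202 Accepted`. A minimal sketch of that routing rule (`invocationPath` is an illustrative helper, not a Gateway API):

```go
package main

import "fmt"

// invocationPath returns the Gateway route for a function call.
// OpenFaaS exposes /function/<name> for synchronous calls and
// /async-function/<name> for asynchronous (queued) calls.
func invocationPath(name string, async bool) string {
	if async {
		return "/async-function/" + name
	}
	return "/function/" + name
}

func main() {
	fmt.Println(invocationPath("hello", false)) // sync: caller waits for the result
	fmt.Println(invocationPath("hello", true))  // async: Gateway replies 202 Accepted
}
```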

🚀 How Are Functions Invoked?

For each incoming request, the Gateway decides whether to execute the function immediately or to queue it, based on how the request was made and on the target function's configuration and current state.

Invocation Process:

1. Request Analysis: the Gateway analyzes the request to determine the invocation type and requirements.
2. Function Lookup: the system looks up the target function and its current state.
3. Execution Decision: the Gateway decides whether to execute immediately or to queue the request.
4. Function Execution: the function runs either directly or through the queue system.
5. Response Handling: the response is returned to the client or stored for later retrieval.
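The steps above can be condensed into a single dispatch function. Everything here (`Request`, `handle`, the registry map, the channel-backed queue) is an illustrative simplification, not the Gateway's real types; the real Gateway delegates lookup and execution to its provider:

```go
package main

import "fmt"

// Request is a minimal stand-in for an incoming invocation.
type Request struct {
	Function string
	Async    bool
}

// handle mirrors the five steps: analyze, look up, decide,
// execute (directly or via the queue), and shape the response.
func handle(req Request, registry map[string]bool, queue chan Request) (string, error) {
	// 1. Request analysis: the Async flag decides the path.
	// 2. Function lookup: is the target deployed?
	if !registry[req.Function] {
		return "", fmt.Errorf("function %q not found", req.Function)
	}
	// 3. Execution decision: queue or run inline.
	if req.Async {
		queue <- req // 4. executed later by a queue worker
		return "202 Accepted", nil
	}
	// 4. Direct execution (stubbed).
	result := "ran " + req.Function
	// 5. Response handling: return the result to the caller.
	return result, nil
}

func main() {
	registry := map[string]bool{"resize": true}
	queue := make(chan Request, 8)

	out, _ := handle(Request{Function: "resize"}, registry, queue)
	fmt.Println(out) // synchronous result
	out, _ = handle(Request{Function: "resize", Async: true}, registry, queue)
	fmt.Println(out) // acknowledgment only; the job sits in the queue
}
```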

📋 Where Does Queueing Happen?

Queueing in OpenFaaS happens at multiple levels to handle high load, manage resources efficiently, and provide reliable message delivery.

Gateway Level Queueing

When the Gateway receives more requests than it can process immediately, requests are queued in memory or persistent storage.

Function Level Queueing

Individual functions can have their own queues to handle bursts of requests and manage execution order.

Provider Level Queueing

The Kubernetes provider maintains queues for function deployment, scaling, and lifecycle management.

Message Broker Integration

External message brokers like NATS or Redis can be integrated for advanced queueing scenarios.

🏗️ Queue Implementation Details

OpenFaaS implements queueing using various strategies depending on the deployment configuration and requirements.

Queue Types:

In-Memory Queues

Fast but not persistent, suitable for development and testing

Persistent Queues

Stored on disk or in databases, survive restarts and failures

Priority Queues

Process high-priority requests before lower-priority ones

Dead Letter Queues

Store failed requests for later analysis and retry

💻 Request Handling Code Example

Here's a simplified sketch of how request handling and queueing can be implemented in a gateway handler. Note that the real OpenFaaS Gateway selects async execution by the /async-function/ URL prefix rather than a request header, and queueRequest and invokeFunctionSync stand in for its queue and proxy logic.

// Function invocation handler
func (h *Handler) InvokeFunction(w http.ResponseWriter, r *http.Request) {
	// Parse the function name from the URL
	functionName := mux.Vars(r)["name"]

	// Check whether async invocation was requested
	if r.Header.Get("X-Invoke-Mode") == "async" {
		// Queue the request for background processing
		if err := queueRequest(functionName, r); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		// 202 Accepted: the work is queued, not yet done
		w.WriteHeader(http.StatusAccepted)
		return
	}

	// Synchronous invocation: block until the function returns
	result, err := invokeFunctionSync(functionName, r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	w.Write(result)
}

⚡ Performance Considerations

Understanding the performance implications of different invocation patterns helps in designing efficient serverless applications.

Latency vs Throughput

Sync calls expose the full execution latency to the caller but give immediate feedback; async calls improve overall throughput but require separate result handling.

Resource Utilization

Queueing helps manage resource spikes and provides better resource utilization across the system.

Error Handling

Sync calls provide immediate error feedback, while async calls require robust error handling and retry mechanisms.

Monitoring & Observability

Both patterns require different monitoring approaches to track performance and identify bottlenecks.

➡️ What's Next?

Now that you understand how requests are handled and queued, let's explore how OpenFaaS automatically scales functions from 0 to N instances.