ThrottleX Examples
Welcome to the ThrottleX Examples page! Here you'll find practical code snippets and step-by-step guides for integrating ThrottleX with different APIs and services in Go. From RESTful APIs to gRPC and GraphQL, we've got you covered.
- 1. Getting Started with ThrottleX
- 2. Integrating with REST APIs
- 3. Integrating with gRPC Services
- 4. Integrating with GraphQL APIs
- 5. Running Tests
1. Getting Started with ThrottleX

To get started with ThrottleX, first install the package using the following command:
go get -u github.com/neelp03/throttlex
Then, import ThrottleX into your Go project:
import (
    "github.com/neelp03/throttlex/ratelimiter"
    "github.com/neelp03/throttlex/store"
)
ThrottleX provides multiple rate limiting policies such as Fixed Window, Sliding Window, and Token Bucket to suit your use case.
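As a quick orientation, here is a minimal sketch that constructs one limiter of each type against the in-memory store, using the same constructors and parameters that appear in the examples further down this page. Treat it as a sketch rather than a reference: exact signatures may vary between ThrottleX versions, and the key "client-123" is just a placeholder.

package main

import (
    "fmt"
    "time"

    "github.com/neelp03/throttlex/ratelimiter"
    "github.com/neelp03/throttlex/store"
)

func main() {
    // All three limiters can share the same in-memory store.
    memStore := store.NewMemoryStore()

    // Fixed Window: at most 100 requests per one-minute window per key.
    fixed, err := ratelimiter.NewFixedWindowLimiter(memStore, 100, time.Minute)
    if err != nil {
        fmt.Printf("fixed window: %v\n", err)
        return
    }

    // Sliding Window: 100 requests per minute, measured over a rolling window.
    sliding, err := ratelimiter.NewSlidingWindowLimiter(memStore, 100, time.Minute)
    if err != nil {
        fmt.Printf("sliding window: %v\n", err)
        return
    }

    // Token Bucket: bursts of up to 20 requests, refilled at 10 tokens per second.
    bucket, err := ratelimiter.NewTokenBucketLimiter(memStore, 20, 10)
    if err != nil {
        fmt.Printf("token bucket: %v\n", err)
        return
    }

    // Every limiter exposes the same Allow(key) check used in the examples below.
    allowed, err := bucket.Allow("client-123") // "client-123" is a placeholder key
    if err != nil {
        fmt.Printf("allow check failed: %v\n", err)
        return
    }
    fmt.Println("request allowed:", allowed)

    _ = fixed   // used in the REST example below
    _ = sliding // used in the GraphQL example below
}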
2. Integrating with REST APIs

Here’s how to integrate ThrottleX with a REST API using Go's net/http package.
The Fixed Window algorithm allows you to specify a limit for a given time frame. In this example, each client is allowed up to 100 requests per minute.
package main

import (
    "fmt"
    "net"
    "net/http"
    "time"

    "github.com/neelp03/throttlex/ratelimiter"
    "github.com/neelp03/throttlex/store"
)

func main() {
    // Initialize an in-memory store
    memStore := store.NewMemoryStore()

    // Create a Fixed Window rate limiter: 100 requests per minute per key
    limiter, err := ratelimiter.NewFixedWindowLimiter(memStore, 100, time.Minute)
    if err != nil {
        fmt.Printf("Failed to create rate limiter: %v\n", err)
        return
    }

    // Define rate-limiting middleware
    rateLimitMiddleware := func(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Extract the client IP to use as the rate-limiting key
            ip, _, err := net.SplitHostPort(r.RemoteAddr)
            if err != nil {
                http.Error(w, "Internal Server Error", http.StatusInternalServerError)
                return
            }
            key := ip

            // Check whether this request is allowed
            allowed, err := limiter.Allow(key)
            if err != nil {
                http.Error(w, "Service Unavailable", http.StatusServiceUnavailable)
                return
            }
            if !allowed {
                http.Error(w, "Too Many Requests", http.StatusTooManyRequests)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    // Define your HTTP handler
    helloHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "Hello, World!")
    })

    // Apply the rate-limiting middleware
    http.Handle("/", rateLimitMiddleware(helloHandler))

    // Start the server
    fmt.Println("Server is running on http://localhost:8080")
    http.ListenAndServe(":8080", nil)
}
In the example above, a Fixed Window rate limiter is used to allow 100 requests per minute for each client IP. Any requests beyond that limit will receive a 429 Too Many Requests response.
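To see the limiter in action, you can hit the endpoint repeatedly and watch the responses switch from 200 to 429 once the window's quota is used up. The snippet below is a throwaway local-testing sketch, not part of ThrottleX; it only uses the standard library and assumes the server above is running on localhost:8080.

package main

import (
    "fmt"
    "net/http"
)

func main() {
    // Send more requests than the 100-per-minute limit and count the outcomes.
    var ok, limited int
    for i := 0; i < 120; i++ {
        resp, err := http.Get("http://localhost:8080/")
        if err != nil {
            fmt.Printf("request %d failed: %v\n", i, err)
            return
        }
        resp.Body.Close()
        switch resp.StatusCode {
        case http.StatusOK:
            ok++
        case http.StatusTooManyRequests:
            limited++
        }
    }
    fmt.Printf("200 OK: %d, 429 Too Many Requests: %d\n", ok, limited)
}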
3. Integrating with gRPC Services

ThrottleX can also be integrated with a gRPC service to rate limit client requests.
This example demonstrates how to use the Token Bucket algorithm to allow bursts of traffic, while ensuring a steady request rate.
package main

import (
    "context"
    "fmt"
    "net"

    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/peer"
    "google.golang.org/grpc/status"

    "github.com/neelp03/throttlex/ratelimiter"
    "github.com/neelp03/throttlex/store"
)

func main() {
    // Initialize a memory-backed store
    memStore := store.NewMemoryStore()

    // Create a Token Bucket limiter: bursts of 20, refilled at 10 tokens per second
    limiter, err := ratelimiter.NewTokenBucketLimiter(memStore, 20, 10)
    if err != nil {
        fmt.Printf("Failed to create rate limiter: %v\n", err)
        return
    }

    // Define a unary interceptor that rate limits by client address
    rateLimitInterceptor := func(
        ctx context.Context,
        req interface{},
        info *grpc.UnaryServerInfo,
        handler grpc.UnaryHandler,
    ) (interface{}, error) {
        // Extract a client identifier (here, the peer's address)
        p, ok := peer.FromContext(ctx)
        if !ok {
            return nil, status.Errorf(codes.Internal, "unable to retrieve peer info from context")
        }
        key := p.Addr.String()

        allowed, err := limiter.Allow(key)
        if err != nil {
            return nil, status.Errorf(codes.Internal, "internal server error")
        }
        if !allowed {
            return nil, status.Errorf(codes.ResourceExhausted, "too many requests")
        }
        return handler(ctx, req)
    }

    // Set up the gRPC server with the interceptor
    serverOptions := []grpc.ServerOption{
        grpc.UnaryInterceptor(rateLimitInterceptor),
    }
    grpcServer := grpc.NewServer(serverOptions...)

    // Register your services on grpcServer here, then start listening on port 50051
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        fmt.Printf("Failed to listen: %v\n", err)
        return
    }
    fmt.Println("gRPC server is running on port 50051")
    grpcServer.Serve(lis)
}
In this example, the Token Bucket limiter allows a burst of up to 20 requests per client, with tokens replenished at 10 per second, which caps the sustained rate at 10 requests per second.
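If you want to see the burst behavior in isolation, without standing up a gRPC server, a quick in-process check like the sketch below can help. It assumes the same constructor and Allow signature used above; the key "demo-client" is just a placeholder.

package main

import (
    "fmt"

    "github.com/neelp03/throttlex/ratelimiter"
    "github.com/neelp03/throttlex/store"
)

func main() {
    memStore := store.NewMemoryStore()

    // Burst capacity of 20, refilled at 10 tokens per second.
    limiter, err := ratelimiter.NewTokenBucketLimiter(memStore, 20, 10)
    if err != nil {
        fmt.Printf("Failed to create rate limiter: %v\n", err)
        return
    }

    // Fire 25 back-to-back requests for one key: roughly the first 20 should
    // pass (the burst), and the rest should be rejected until tokens refill.
    allowedCount := 0
    for i := 0; i < 25; i++ {
        allowed, err := limiter.Allow("demo-client")
        if err != nil {
            fmt.Printf("allow check failed: %v\n", err)
            return
        }
        if allowed {
            allowedCount++
        }
    }
    fmt.Printf("allowed %d of 25 immediate requests\n", allowedCount)
}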
4. Integrating with GraphQL APIs

ThrottleX can also be used with GraphQL APIs to rate limit incoming queries.
The Sliding Window algorithm provides smoother rate limiting by counting requests over a rolling time frame instead of resetting at fixed boundaries. Here, each client is allowed 100 requests per minute.
package main

import (
    "fmt"
    "net"
    "net/http"
    "time"

    "github.com/graphql-go/graphql"
    "github.com/graphql-go/handler"

    "github.com/neelp03/throttlex/ratelimiter"
    "github.com/neelp03/throttlex/store"
)

func main() {
    // Define the GraphQL schema
    fields := graphql.Fields{
        "hello": &graphql.Field{
            Type: graphql.String,
            Resolve: func(p graphql.ResolveParams) (interface{}, error) {
                return "Hello, World!", nil
            },
        },
    }
    rootQuery := graphql.ObjectConfig{Name: "RootQuery", Fields: fields}
    schemaConfig := graphql.SchemaConfig{Query: graphql.NewObject(rootQuery)}
    schema, err := graphql.NewSchema(schemaConfig)
    if err != nil {
        fmt.Printf("Failed to create schema: %v\n", err)
        return
    }

    // Initialize an in-memory store and a Sliding Window rate limiter
    memStore := store.NewMemoryStore()
    limiter, err := ratelimiter.NewSlidingWindowLimiter(memStore, 100, time.Minute)
    if err != nil {
        fmt.Printf("Failed to create rate limiter: %v\n", err)
        return
    }

    // Define rate-limiting middleware keyed by client IP
    rateLimitMiddleware := func(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            ip, _, err := net.SplitHostPort(r.RemoteAddr)
            if err != nil {
                http.Error(w, "Internal Server Error", http.StatusInternalServerError)
                return
            }
            key := ip

            allowed, err := limiter.Allow(key)
            if err != nil {
                http.Error(w, "Service Unavailable", http.StatusServiceUnavailable)
                return
            }
            if !allowed {
                http.Error(w, "Too Many Requests", http.StatusTooManyRequests)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    // Create the GraphQL handler
    h := handler.New(&handler.Config{
        Schema: &schema,
        Pretty: true,
    })

    // Apply the middleware and start the HTTP server
    http.Handle("/graphql", rateLimitMiddleware(h))
    fmt.Println("GraphQL server is running on http://localhost:8080/graphql")
    http.ListenAndServe(":8080", nil)
}
In this example, a Sliding Window rate limiter is used to allow up to 100 requests per minute for each client IP, providing smoother enforcement compared to Fixed Window.
5. Running Tests

To ensure everything works as expected, you can run the tests using:
go test -race -v ./...
ThrottleX includes a variety of unit tests that cover each rate limiting algorithm and storage backend. Make sure all tests pass before making any contributions.
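If you're adding an integration of your own, a small unit test like the hypothetical one below can confirm your limiter is wired up the way you expect. It relies only on the constructors and the Allow(key) call shown in the examples above; the package name and key are placeholders.

package myapp

import (
    "testing"
    "time"

    "github.com/neelp03/throttlex/ratelimiter"
    "github.com/neelp03/throttlex/store"
)

// TestFixedWindowLimit checks that requests beyond the configured limit are
// rejected within a single window. A limit of 3 per minute keeps it fast.
func TestFixedWindowLimit(t *testing.T) {
    memStore := store.NewMemoryStore()
    limiter, err := ratelimiter.NewFixedWindowLimiter(memStore, 3, time.Minute)
    if err != nil {
        t.Fatalf("failed to create limiter: %v", err)
    }

    // The first three requests for the key should be allowed.
    for i := 0; i < 3; i++ {
        allowed, err := limiter.Allow("test-client")
        if err != nil {
            t.Fatalf("unexpected error on request %d: %v", i, err)
        }
        if !allowed {
            t.Fatalf("request %d should have been allowed", i)
        }
    }

    // The fourth request in the same window should be rejected.
    allowed, err := limiter.Allow("test-client")
    if err != nil {
        t.Fatalf("unexpected error on fourth request: %v", err)
    }
    if allowed {
        t.Fatal("fourth request should have been rejected")
    }
}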
Feel free to experiment with these examples to get a better understanding of how ThrottleX works and how to adapt it to your own applications! For more detailed information, visit the ThrottleX Wiki.