Train reinforcement learning agents at scale by combining OpenFaaS with OpenAI Gym. Modular, elastic, and cost-efficient training pipelines built from serverless functions.
Operationalize RL workloads using a serverless control plane: rapid iteration, elastic scaling, and simpler ops without managing long-lived trainers.
Burst data collection and evaluation across many function instances during peak demand.
Compose training stages (environment step, rollout, reward calculation, policy update) into functions; a sketch follows this feature list.
Pay for compute only when work runs; scale-to-zero outside training bursts.
Kubernetes-native functions with metrics and logs through the OpenFaaS stack.
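As a minimal sketch of the stage-per-function idea, the handler below runs one Gym episode inside a stateless worker. It assumes the OpenFaaS python3 template's `handle(req)` entry point, the classic Gym step API, and a hypothetical `rollout` function name; none of these are prescribed by this project.

```python
# handler.py -- hypothetical "rollout" function (OpenFaaS python3-style handler).
# Runs one episode with a random placeholder policy and returns a JSON summary;
# a real deployment would load policy weights from external storage instead.
import json
import gym  # classic OpenAI Gym API assumed (reset -> obs, step -> 4-tuple)

def handle(req):
    cfg = json.loads(req) if req else {}
    env = gym.make(cfg.get("env_id", "CartPole-v1"))
    obs = env.reset()
    total_reward, steps, done = 0.0, 0, False
    while not done:
        action = env.action_space.sample()        # stand-in for a learned policy
        obs, reward, done, _info = env.step(action)
        total_reward += reward
        steps += 1
    env.close()
    return json.dumps({"episode_reward": total_reward, "steps": steps})
```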
A serverless MAPE-inspired loop for RL: Measure rollouts, Analyze rewards, Plan updates, Execute policy changes, all implemented via OpenFaaS functions.
Orchestrate pipelines via the Gateway; queue rollouts; fan out invocations (see the sketch after this overview); collect metrics.
Gym episodes/steps executed in parallel stateless workers; artifacts stored externally.
Persist replay buffers, model checkpoints, and episode summaries in S3/DB.
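A sketch of the fan-out step mentioned above, assuming the OpenFaaS gateway's asynchronous `/async-function/<name>` endpoint and a hypothetical `rollout` function; the gateway address and callback target are illustrative, not part of this project.

```python
# Hypothetical orchestrator-side fan-out of rollout work via the OpenFaaS gateway.
import json
import requests

GATEWAY = "http://gateway.openfaas:8080"   # in-cluster gateway address (assumed)

def fan_out_rollouts(n_episodes, env_id="CartPole-v1", callback_url=None):
    """Queue n_episodes asynchronous invocations of the 'rollout' function."""
    headers = {"X-Callback-Url": callback_url} if callback_url else {}
    for episode in range(n_episodes):
        payload = json.dumps({"env_id": env_id, "episode": episode})
        # POST to /async-function/<name> enqueues the call (202 Accepted) rather
        # than waiting for the result, so hundreds of rollouts queue up quickly.
        resp = requests.post(f"{GATEWAY}/async-function/rollout",
                             data=payload, headers=headers, timeout=10)
        resp.raise_for_status()

# Example: burst 200 CartPole episodes, results delivered to a collector function.
fan_out_rollouts(200, callback_url=f"{GATEWAY}/function/collect-rollouts")
```

Async invocations are buffered by the gateway's queue, so bursts of rollout requests are absorbed while the worker function scales out.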
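Because workers are stateless, episode summaries, replay fragments, and checkpoints go to external storage. A minimal sketch using boto3; the bucket name and key layout are assumptions for illustration.

```python
# Hypothetical persistence helper: store an episode summary (or checkpoint) in S3.
import json
import boto3

s3 = boto3.client("s3")             # credentials come from the function's secrets/env
BUCKET = "faastraingym-artifacts"   # bucket name is an assumption

def save_episode_summary(run_id, episode, summary):
    # e.g. runs/exp-42/episodes/000137.json
    key = f"runs/{run_id}/episodes/{episode:06d}.json"
    s3.put_object(Bucket=BUCKET, Key=key,
                  Body=json.dumps(summary).encode("utf-8"),
                  ContentType="application/json")
    return key
```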
Workloads benefit most when data collection and evaluation dominate compute time and can run in parallel.
Scale out environment interaction across hundreds of short-lived workers.
Spin up competing configs/policies as independent function graphs (sketched after this list).
Plug into CI/CD, store artifacts centrally, and drive experiments by events.
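One possible way to launch the competing configurations mentioned above: submit each config as its own asynchronous pipeline invocation, so every experiment runs through an independent chain of functions. The `train-pipeline` function name and config fields are hypothetical.

```python
# Hypothetical launcher: each hyperparameter config becomes an independent pipeline run.
import json
import requests

GATEWAY = "http://gateway.openfaas:8080"   # assumed in-cluster gateway address

experiments = [
    {"run_id": "exp-lr-0.001", "learning_rate": 0.001, "env_id": "CartPole-v1"},
    {"run_id": "exp-lr-0.01",  "learning_rate": 0.01,  "env_id": "CartPole-v1"},
]

for cfg in experiments:
    # Each async invocation kicks off its own rollout -> reward -> update chain.
    requests.post(f"{GATEWAY}/async-function/train-pipeline",
                  data=json.dumps(cfg), timeout=10).raise_for_status()
```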
Explore the code, open an issue, or suggest integrations. FaaSTrainGym is a foundation for scalable RL.
Go to GitHub