PyWorkflow includes built-in safeguards to prevent runaway workflows from consuming excessive resources. These limits help ensure system stability and predictable behavior.
Since PyWorkflow uses event sourcing, every workflow action is recorded as an event. To prevent unbounded growth and memory issues, there are limits on the number of events a workflow can generate.
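To get an intuition for how quickly a workflow approaches the limit, consider a back-of-envelope estimate. The two-events-per-step figure below is an assumption made for this sketch, not PyWorkflow's documented event model:

```python
# Assumption for illustration: each step execution records two events
# (e.g. one when scheduled, one when completed). The real per-step
# event count depends on PyWorkflow's internals.
EVENTS_PER_STEP = 2
HARD_LIMIT = 50_000


def max_step_executions(hard_limit: int = HARD_LIMIT,
                        events_per_step: int = EVENTS_PER_STEP) -> int:
    """Rough upper bound on step executions before the hard limit trips."""
    return hard_limit // events_per_step
```

Under this assumption, a single workflow run tops out at roughly 25,000 step executions, which is why per-item processing of large collections belongs in separate runs.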
When a workflow reaches the hard limit of 50,000 events, it is terminated with an EventLimitExceededError:
```python
from pyworkflow.core.exceptions import EventLimitExceededError

# This error is raised when the hard limit is reached:
# EventLimitExceededError: Workflow run_abc123 exceeded maximum event limit: 50000 >= 50000
```
The hard limit is a safety mechanism. If your workflow is hitting this limit, it likely indicates a design issue such as an infinite loop or processing too many items in a single workflow run.
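One defensive pattern is to validate input size up front, so an unexpectedly large input fails fast with a clear message instead of grinding toward the event limit mid-run. The `bounded` helper below is a hypothetical sketch, not a PyWorkflow API:

```python
from typing import Iterable


def bounded(items: Iterable, max_items: int) -> list:
    """Hypothetical guard: refuse inputs larger than an explicit cap.

    Failing before any steps run is cheaper and clearer than hitting
    the event hard limit partway through a workflow.
    """
    materialized = list(items)
    if len(materialized) > max_items:
        raise ValueError(
            f"refusing to process {len(materialized)} items; max is {max_items}"
        )
    return materialized
```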
```python
# BAD: Processing millions of items in one workflow
@workflow()
async def process_all_orders():
    orders = await get_all_orders()  # Could be millions!
    for order in orders:
        await process_order(order)  # Each creates events


# GOOD: Process in batches with separate workflow runs
@workflow()
async def process_order_batch(batch_ids: list[str]):
    for order_id in batch_ids[:100]:  # Bounded batch size
        await process_order(order_id)


# Orchestrate batches externally
for batch in chunk_list(all_order_ids, 100):
    await start(process_order_batch, batch)
```
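The `chunk_list` helper used in the example above is not part of PyWorkflow; a minimal implementation might look like this:

```python
def chunk_list(items: list, size: int) -> list[list]:
    """Split a list into consecutive chunks of at most `size` elements.

    Illustrative helper only; define it (or use an equivalent from your
    own utilities) alongside the batch orchestration code.
    """
    if size < 1:
        raise ValueError("size must be >= 1")
    return [items[i:i + size] for i in range(0, len(items), size)]
```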
Modifying event limits is not recommended. The defaults are carefully chosen to balance flexibility with safety. Only change these if you fully understand the implications.
If you must change the limits:
```python
import pyworkflow

# This will emit a UserWarning - proceed with caution
pyworkflow.configure(
    event_soft_limit=20_000,     # Warning at 20K events
    event_hard_limit=100_000,    # Fail at 100K events
    event_warning_interval=200,  # Warn every 200 events after soft limit
)
```
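Because overriding the limits emits a UserWarning, one way to keep accidental overrides out of production is to escalate that warning to a hard error in CI. The snippet below is a stdlib-only sketch of the pattern; `risky_configure` is a stand-in for the real `pyworkflow.configure(...)` call:

```python
import warnings

# Escalate UserWarning to an exception so any limit override fails loudly.
warnings.simplefilter("error", UserWarning)


def risky_configure():
    # Stand-in for pyworkflow.configure(...); the real call emits a
    # UserWarning when event limits are overridden.
    warnings.warn("event limits overridden", UserWarning)


try:
    risky_configure()
    override_blocked = False
except UserWarning:
    override_blocked = True  # CI can fail the build here
```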