Cadence Worker is a new role of the Cadence service that hosts components responsible for performing background processing on the Cadence cluster.
Replicator is a background worker that consumes replication tasks generated by remote Cadence clusters and passes them down to the processor so they can be applied to the local Cadence cluster.
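The consume-then-apply flow above can be sketched in Go. This is not the actual Replicator implementation; the type and function names (`ReplicationTask`, `processor`, `replicate`) are hypothetical simplifications used only to illustrate how tasks from remote clusters are drained and handed to a processor:

```go
package main

import "fmt"

// ReplicationTask is a hypothetical, simplified stand-in for a
// replication task generated by a remote Cadence cluster.
type ReplicationTask struct {
	SourceCluster string
	DomainID      string
	Payload       string
}

// processor applies replication tasks to local cluster state.
// Here it just records them, standing in for the real processor.
type processor struct {
	applied []ReplicationTask
}

func (p *processor) Apply(t ReplicationTask) {
	p.applied = append(p.applied, t)
}

// replicate drains the task stream and hands each task to the
// processor, mirroring the consume-then-apply flow described above.
func replicate(tasks <-chan ReplicationTask, p *processor) {
	for t := range tasks {
		p.Apply(t)
	}
}

func main() {
	tasks := make(chan ReplicationTask, 2)
	tasks <- ReplicationTask{SourceCluster: "cluster1", DomainID: "d1", Payload: "history-event"}
	tasks <- ReplicationTask{SourceCluster: "cluster2", DomainID: "d1", Payload: "history-event"}
	close(tasks)

	p := &processor{}
	replicate(tasks, p)
	fmt.Println(len(p.applied)) // number of tasks applied locally
}
```

The real worker additionally handles retries, ordering, and conflict resolution, which are omitted here.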
- Start the Cassandra dependency using Docker if you don't have one running:
docker-compose -f docker/dev/cassandra.yml up
Then install the schemas:
make install-schema-xdc
- Start the Cadence development server for cluster0, cluster1 and cluster2:
./cadence-server --zone xdc_cluster0 start
./cadence-server --zone xdc_cluster1 start
./cadence-server --zone xdc_cluster2 start
- Create a global Cadence domain that replicates data across clusters:
cadence --do samples-domain domain register --ac cluster0 --cl cluster0 cluster1 cluster2
Then run a helloworld from Go Client Sample or Java Client Sample
- Failover a domain between clusters:
Failover to cluster1:
cadence --do samples-domain domain update --ac cluster1
or failover to cluster2:
cadence --do samples-domain domain update --ac cluster2
Failback to cluster0:
cadence --do samples-domain domain update --ac cluster0
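The failover steps above can be modeled in a few lines of Go. This is a conceptual sketch, not Cadence server code; `GlobalDomain`, `Failover`, and `AcceptsWrites` are hypothetical names that illustrate the key invariant: a global domain is registered on every cluster, but only one cluster is active at a time and the rest act as standbys.

```go
package main

import (
	"errors"
	"fmt"
)

// GlobalDomain is a hypothetical model of a global domain: it exists
// on every cluster in Clusters, but only ActiveCluster takes writes.
type GlobalDomain struct {
	Name          string
	Clusters      []string
	ActiveCluster string
}

// Failover moves the active side, analogous to
// `cadence domain update --ac <cluster>`.
func (d *GlobalDomain) Failover(target string) error {
	for _, c := range d.Clusters {
		if c == target {
			d.ActiveCluster = c
			return nil
		}
	}
	return errors.New("target cluster is not in the domain's cluster list")
}

// AcceptsWrites reports whether a cluster can take new workflow
// starts: only the active cluster does; the others are standbys.
func (d *GlobalDomain) AcceptsWrites(cluster string) bool {
	return cluster == d.ActiveCluster
}

func main() {
	d := &GlobalDomain{
		Name:          "samples-domain",
		Clusters:      []string{"cluster0", "cluster1", "cluster2"},
		ActiveCluster: "cluster0",
	}
	_ = d.Failover("cluster1")
	fmt.Println(d.ActiveCluster, d.AcceptsWrites("cluster0")) // cluster1 false
}
```

In the real system, replication keeps the standby clusters' histories in sync so a failover target already has the domain's state.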
In a multi-region setup, use another set of configs instead:
./cadence-server --zone cross_region_cluster0 start
./cadence-server --zone cross_region_cluster1 start
./cadence-server --zone cross_region_cluster2 start
Right now the only difference is in clusterGroupMetadata.clusterRedirectionPolicy. In a multi-region setup, the network communication overhead between clusters is high, so you should use "selected-apis-forwarding". Workflow and activity workers need to be connected to each cluster to keep high availability.
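The relevant piece of the server YAML looks roughly like the fragment below. This is a sketch of the setting being discussed, not a complete config; key names and accepted policy values should be checked against the config files shipped with your Cadence server version:

```yaml
clusterGroupMetadata:
  clusterRedirectionPolicy:
    # Forward only selected APIs from a standby cluster to the active
    # one, keeping cross-region traffic lower than forwarding all
    # domain APIs.
    policy: "selected-apis-forwarding"
```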
Archiver handles archival of workflow execution histories. It does this by hosting a Cadence client worker and running an archival system workflow. The archival client is used to initiate archival by sending signals to that workflow. The archiver shards work across several such workflows.
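The sharding idea above can be sketched as deterministic routing of each execution to one of a fixed set of archival system workflows. This is illustrative, not the archiver's actual implementation; the shard count and the workflow-ID format are hypothetical:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// numArchivalWorkflows is a hypothetical shard count; the real
// archiver signals one of a fixed set of system workflows.
const numArchivalWorkflows = 5

// shardedWorkflowID maps an execution to one archival system
// workflow, so archival signals spread across shards instead of
// funneling into a single workflow.
func shardedWorkflowID(domainID, workflowID, runID string) string {
	h := fnv.New32a()
	h.Write([]byte(domainID + workflowID + runID))
	return fmt.Sprintf("cadence-archival-%d", h.Sum32()%numArchivalWorkflows)
}

func main() {
	// The same execution always routes to the same shard, so its
	// archival signal lands on a single system workflow.
	a := shardedWorkflowID("d1", "wf-1", "run-1")
	b := shardedWorkflowID("d1", "wf-1", "run-1")
	fmt.Println(a == b) // true: deterministic routing
}
```

Deterministic routing matters because signals for the same execution must reach the same system workflow, while hashing spreads unrelated executions evenly across the shard set.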