This project contains applications required to load Snowplow data into various data warehouses.
It consists of two types of applications: Transformers and Loaders.
Transformers read Snowplow enriched events, transform them into a format ready to be loaded into a data warehouse, then write them to blob storage.
There are two types of Transformers: Batch and Streaming.
Stream Transformers read enriched events from the respective stream service, transform them, then write the transformed events to the specified blob storage path. They write transformed events in periodic windows.
There are three different Stream Transformer applications: Transformer Kinesis, Transformer Pubsub and Transformer Kafka. As one can predict, they are the variants for AWS, GCP and Azure respectively.
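Whichever variant is used, the windowing idea is roughly the sketch below: events are bucketed into fixed time windows and each finished window is written under its own storage prefix. The class and method names, window size and path layout here are illustrative, not the actual Transformer code.

```scala
import java.time.{Instant, ZoneOffset}
import java.time.format.DateTimeFormatter

// Hypothetical, heavily simplified event shape.
final case class EnrichedEvent(collectorTstamp: Instant, payload: String)

object WindowingSketch {
  // Folder naming is illustrative; it only loosely mimics a run=... convention.
  private val pathFormat =
    DateTimeFormatter.ofPattern("'run='yyyy-MM-dd-HH-mm-ss").withZone(ZoneOffset.UTC)

  /** Bucket an event into a fixed-size window (5 minutes by default). */
  def windowStart(event: EnrichedEvent, windowMinutes: Long = 5): Instant = {
    val windowMillis = windowMinutes * 60 * 1000
    Instant.ofEpochMilli((event.collectorTstamp.toEpochMilli / windowMillis) * windowMillis)
  }

  /** Blob storage prefix a finished window would be written to. */
  def outputPrefix(base: String, window: Instant): String =
    s"$base/${pathFormat.format(window)}/"
}
```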
The Batch Transformer is a Spark job. It only works with AWS services. It reads enriched events from a given S3 path, transforms them, then writes the transformed events to a specified S3 path.
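A heavily simplified Spark job along those lines might look like the sketch below. The bucket, run folders and the placeholder transformation are all made up; the real Batch Transformer does considerably more (parsing each enriched event, re-encoding it for the warehouse, and so on).

```scala
import org.apache.spark.sql.SparkSession

object BatchTransformerSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("transformer-batch-sketch").getOrCreate()
    import spark.implicits._

    // Hypothetical input and output locations.
    val enrichedPath    = "s3://acme-snowplow/enriched/run=2024-01-01-00-00-00/"
    val transformedPath = "s3://acme-snowplow/transformed/run=2024-01-01-00-00-00/"

    // Placeholder transformation: the real job parses each TSV enriched event
    // and re-encodes it in a warehouse-friendly format.
    val transformed = spark.read.textFile(enrichedPath).map(_.trim)

    transformed.write.text(transformedPath)
    spark.stop()
  }
}
```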
Transformers send a message to a message queue after they finish transforming a batch and writing it to blob storage. This message contains information about the transformed data, such as where it is stored and what it looks like.
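In the real applications this message is a self-describing JSON; the sketch below only illustrates the kind of information it carries, with made-up field names and values.

```scala
// Illustrative shape only, not the actual message schema.
object CompletionMessageSketch {
  final case class TransformedType(schemaKey: String, format: String)

  final case class TransformationCompleted(
    base: String,                 // blob storage folder holding the transformed window
    compression: String,          // e.g. "GZIP"
    types: List[TransformedType], // which self-describing types were observed
    goodCount: Long               // number of successfully transformed events
  )

  val example = TransformationCompleted(
    base = "s3://acme-snowplow/transformed/run=2024-01-01-00-00-00/",
    compression = "GZIP",
    types = List(TransformedType("iglu:com.acme/checkout/jsonschema/1-0-0", "TSV")),
    goodCount = 42L
  )
}
```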
Loaders subscribe to the message queue. When a message is received, it is parsed and the details needed for loading are extracted. Loaders then construct the SQL statements required to load the transformed events and send them to the specified destination.
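For a destination that supports COPY-style bulk loading, the statement a Loader builds is conceptually similar to the sketch below (Snowflake-style syntax; the table, stage and run folder are hypothetical, and the real Loaders generate destination-specific SQL and do considerably more around it).

```scala
// Sketch of building a COPY statement for one transformed window.
object LoadStatementSketch {
  def copyInto(table: String, stage: String, runFolder: String): String =
    s"COPY INTO $table\n" +
      s"FROM @$stage/$runFolder\n" +
      "FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = '\\t' COMPRESSION = GZIP)"

  // Example: copyInto("atomic.events", "snowplow_stage", "run=2024-01-01-00-00-00")
}
```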
At the moment, we have loader applications for Redshift, Databricks and Snowflake.
| Technical Docs | Setup Guide | Roadmap & Contributing |
| --- | --- | --- |
| Technical Docs | Setup Guide | Roadmap |
Copyright (c) 2012-present Snowplow Analytics Ltd. All rights reserved.
Licensed under the Snowplow Limited Use License Agreement. (If you are uncertain how it applies to your use case, check our answers to frequently asked questions.)