A Terraform module which deploys a Snowplow S3 Loader application on AWS running on top of EC2. If you want to use a custom AMI for this deployment you will need to ensure it is based on Amazon Linux 2.
This module by default collects and forwards telemetry information to Snowplow to understand how our applications are being used. No identifying information about your sub-account or account fingerprints is ever forwarded to us - it is very simple information about what modules and applications are deployed and active.

If you wish to subscribe to our mailing list for updates to these modules or security advisories, please set the `user_provided_id` variable to include a valid email address at which we can reach you.

To disable telemetry, simply set the variable `telemetry_enabled = false`.

For details on what information is collected, please see this module: https://github.com/snowplow-devops/terraform-snowplow-telemetry
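For example (a minimal sketch - the module name and email address below are placeholders, and the required inputs shown in the usage example further down still apply):

```hcl
module "s3_loader_enriched" {
  # ... required inputs omitted for brevity ...

  # Subscribe to the mailing list by providing a reachable email address
  user_provided_id = "you@example.com"

  # Alternatively, opt out of telemetry entirely
  telemetry_enabled = false
}
```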
## Usage

An S3 Loader requires an input stream, an output stream for corrupted data that cannot be saved to S3, and an S3 bucket to save the data into.
module "s3_pipeline_bucket" {
source = "snowplow-devops/s3-bucket/aws"
version = "0.2.0"
bucket_name = "your-bucket-name"
}
module "enriched_stream" {
source = "snowplow-devops/kinesis-stream/aws"
version = "0.2.0"
name = "enriched-stream"
}
module "bad_1_stream" {
source = "snowplow-devops/kinesis-stream/aws"
version = "0.2.0"
name = "bad-1-stream"
}
module "s3_loader_enriched" {
source = "snowplow-devops/s3-loader-kinesis-ec2/aws"
accept_limited_use_license = true
name = "s3-loader-enriched-server"
vpc_id = var.vpc_id
subnet_ids = var.subnet_ids
in_stream_name = module.enriched_stream.name
bad_stream_name = module.bad_1_stream.name
s3_bucket_name = module.s3_pipeline_bucket.id
s3_object_prefix = "enriched/"
ssh_key_name = "your-key-name"
ssh_ip_allowlist = ["0.0.0.0/0"]
}
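The example above assumes that `var.vpc_id` and `var.subnet_ids` are declared elsewhere in your root module; a minimal sketch of those declarations, matching the types documented in the Inputs table below:

```hcl
variable "vpc_id" {
  description = "The VPC to deploy the S3 Loader within"
  type        = string
}

variable "subnet_ids" {
  description = "The list of subnets to deploy the S3 Loader across"
  type        = list(string)
}
```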
## Requirements

| Name | Version |
|---|---|
| terraform | >= 1.0.0 |
| aws | >= 3.72.0 |

## Providers

| Name | Version |
|---|---|
| aws | >= 3.72.0 |
## Modules

| Name | Source | Version |
|---|---|---|
| instance_type_metrics | snowplow-devops/ec2-instance-type-metrics/aws | 0.1.2 |
| kcl_autoscaling | snowplow-devops/dynamodb-autoscaling/aws | 0.2.0 |
| service | snowplow-devops/service-ec2/aws | 0.2.1 |
| telemetry | snowplow-devops/telemetry/snowplow | 0.5.0 |
## Resources

| Name | Type |
|---|---|
| aws_cloudwatch_log_group.log_group | resource |
| aws_dynamodb_table.kcl | resource |
| aws_iam_instance_profile.instance_profile | resource |
| aws_iam_policy.iam_policy | resource |
| aws_iam_role.iam_role | resource |
| aws_iam_role_policy_attachment.policy_attachment | resource |
| aws_security_group.sg | resource |
| aws_security_group_rule.egress_tcp_443 | resource |
| aws_security_group_rule.egress_tcp_80 | resource |
| aws_security_group_rule.egress_udp_123 | resource |
| aws_security_group_rule.ingress_tcp_22 | resource |
| aws_caller_identity.current | data source |
| aws_region.current | data source |
## Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| bad_stream_name | The name of the bad kinesis stream that the S3 Loader will insert bad data into | string | n/a | yes |
| in_stream_name | The name of the input kinesis stream that the S3 Loader will pull data from | string | n/a | yes |
| name | A name which will be pre-pended to the resources created | string | n/a | yes |
| s3_bucket_name | The name of the S3 bucket to load data into | string | n/a | yes |
| s3_object_prefix | The prefix to store objects within (e.g. good) (Note: trailing forward slashes are automatically removed) | string | n/a | yes |
| ssh_key_name | The name of the SSH key-pair to attach to all EC2 nodes deployed | string | n/a | yes |
| subnet_ids | The list of subnets to deploy the S3 Loader across | list(string) | n/a | yes |
| vpc_id | The VPC to deploy the S3 Loader within | string | n/a | yes |
| accept_limited_use_license | Acceptance of the SLULA terms (https://docs.snowplow.io/limited-use-license-1.0/) | bool | false | no |
| amazon_linux_2_ami_id | The AMI ID to use, which must be based on Amazon Linux 2; by default the latest community version is used | string | "" | no |
| app_version | App version to use. This variable facilitates dev flow; the modules may not work with anything other than the default value. | string | "2.2.8" | no |
| associate_public_ip_address | Whether to assign a public IP address to this instance | bool | true | no |
| byte_limit | The number of bytes to buffer events before pushing them to S3 | number | 25000000 | no |
| cloudwatch_logs_enabled | Whether application logs should be reported to CloudWatch | bool | true | no |
| cloudwatch_logs_retention_days | The length of time in days to retain logs for | number | 7 | no |
| enable_auto_scaling | Whether to enable auto-scaling policies for the service | bool | true | no |
| iam_permissions_boundary | The permissions boundary ARN to set on IAM roles created | string | "" | no |
| initial_position | Where to start processing the input Kinesis Stream from (TRIM_HORIZON or LATEST) | string | "TRIM_HORIZON" | no |
| instance_type | The instance type to use | string | "t3a.micro" | no |
| java_opts | Custom JAVA Options | string | "-XX:InitialRAMPercentage=75 -XX:MaxRAMPercentage=75" | no |
| kcl_read_max_capacity | The maximum READ capacity for the KCL DynamoDB table | number | 10 | no |
| kcl_read_min_capacity | The minimum READ capacity for the KCL DynamoDB table | number | 1 | no |
| kcl_write_max_capacity | The maximum WRITE capacity for the KCL DynamoDB table | number | 10 | no |
| kcl_write_min_capacity | The minimum WRITE capacity for the KCL DynamoDB table | number | 1 | no |
| max_size | The maximum number of servers in this server-group | number | 2 | no |
| min_size | The minimum number of servers in this server-group | number | 1 | no |
| partition_format | The pattern to partition data saved to S3 (e.g. https://github.com/snowplow/snowplow-s3-loader/blob/master/config/config.hocon.sample#L28-L31) | string | "" | no |
| private_ecr_registry | The URL of an ECR registry that the sub-account has access to (e.g. '000000000000.dkr.ecr.cn-north-1.amazonaws.com.cn/') | string | "" | no |
| purpose | Describes the purpose which this S3 Loader is being used for (RAW, ENRICHED_EVENTS or SELF_DESCRIBING). RAW simply sinks data 1:1; ENRICHED_EVENTS works with monitoring.statsd to report metrics (identical to RAW otherwise); SELF_DESCRIBING partitions self-describing data (such as JSON) by its schema | string | "RAW" | no |
| record_limit | The number of events to buffer before pushing them to S3 | number | 100000 | no |
| s3_format | The format of the data to be loaded into S3 ('gzip' or 'lzo') | string | "gzip" | no |
| scale_down_cooldown_sec | Time (in seconds) until another scale-down action can occur | number | 600 | no |
| scale_down_cpu_threshold_percentage | The average CPU percentage that we must be below to scale-down | number | 20 | no |
| scale_down_eval_minutes | The number of consecutive minutes that we must be below the threshold to scale-down | number | 60 | no |
| scale_up_cooldown_sec | Time (in seconds) until another scale-up action can occur | number | 180 | no |
| scale_up_cpu_threshold_percentage | The average CPU percentage that must be exceeded to scale-up | number | 60 | no |
| scale_up_eval_minutes | The number of consecutive minutes that the threshold must be breached to scale-up | number | 5 | no |
| ssh_ip_allowlist | The list of CIDR ranges to allow SSH traffic from | list(any) | ["0.0.0.0/0"] | no |
| tags | The tags to append to this resource | map(string) | {} | no |
| telemetry_enabled | Whether or not to send telemetry information back to Snowplow Analytics Ltd | bool | true | no |
| time_limit_ms | The amount of time (in milliseconds) to buffer events before pushing them to S3 | number | 180000 | no |
| user_provided_id | An optional unique identifier to identify the telemetry events emitted by this stack | string | "" | no |
## Outputs

| Name | Description |
|---|---|
| asg_id | ID of the ASG |
| asg_name | Name of the ASG |
| sg_id | ID of the security group attached to the S3 Loader servers |
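To surface these from your root module, a minimal sketch assuming the module name used in the usage example:

```hcl
output "s3_loader_asg_name" {
  description = "Name of the ASG running the S3 Loader"
  value       = module.s3_loader_enriched.asg_name
}

output "s3_loader_sg_id" {
  description = "ID of the security group attached to the S3 Loader servers"
  value       = module.s3_loader_enriched.sg_id
}
```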
Copyright 2021-current Snowplow Analytics Ltd.
Licensed under the Snowplow Limited Use License Agreement. (If you are uncertain how it applies to your use case, check our answers to frequently asked questions.)