Commit e4cb403
Initial commit
1 parent 44067c9 commit e4cb403

33 files changed: +353 additions, −559 deletions

.github/workflows/trivy.yaml

Lines changed: 3 additions & 1 deletion

@@ -49,6 +49,7 @@ jobs:
           exit-code: 1
           format: sarif
           output: trivy-results.sarif
+
       - name: Run Trivy vulnerability scanner json
         uses: aquasecurity/trivy-action@0.30.0
         continue-on-error: true
@@ -64,6 +65,7 @@ jobs:
         with:
           name: trivy-scan-report
          path: trivy-results.json
+
      - name: Generate Security Hub findings
        if: always()
        run: |
@@ -174,4 +176,4 @@ jobs:
            echo "📄 File size: $(wc -c < findings.json) bytes"
          else
            echo "❌ Findings file not found"
-         fi
+         fi

.terraform.lock.hcl

Lines changed: 36 additions & 17 deletions

Generated file; diff not rendered by default.

README.md

Lines changed: 2 additions & 3 deletions

@@ -9,7 +9,7 @@
 
 ## Overview
 
-The ARC Terraform-aws-arc-redshift module provides a comprehensive and unified solution for deploying Amazon Redshift data warehousing infrastructure on AWS. This versatile module supports both traditional Amazon Redshift provisioned clusters and the newer Redshift Serverless workgroups, allowing you to choose the
+The ARC Terraform-aws-arc-redshift module provides a comprehensive and unified solution for deploying Amazon Redshift data warehousing infrastructure on AWS. This versatile module supports both traditional Amazon Redshift provisioned clusters and the newer Redshift Serverless workgroups, allowing you to choose the
 deployment model that best fits your workload requirements and cost optimization needs.
 
 ### Prerequisites
@@ -148,7 +148,6 @@ Destroy Terraform
 ```shell
 terraform destroy -var-file dev.tfvars
 
-
 <!-- BEGIN_TF_DOCS -->
 ## Requirements
 
@@ -299,4 +298,4 @@ By specifying this , it will bump the version and if you don't specify this in y
 ## Authors
 
 This project is authored by:
-- SourceFuse ARC Team
+- SourceFuse ARC Team

examples/cluster/README.md

Lines changed: 1 addition & 1 deletion

@@ -143,4 +143,4 @@ After deploying this basic example, you might want to:
 | <a name="output_redshift_cluster_endpoint"></a> [redshift\_cluster\_endpoint](#output\_redshift\_cluster\_endpoint) | The connection endpoint for the Redshift cluster |
 | <a name="output_redshift_cluster_id"></a> [redshift\_cluster\_id](#output\_redshift\_cluster\_id) | The ID of the Redshift cluster |
 | <a name="output_redshift_endpoint"></a> [redshift\_endpoint](#output\_redshift\_endpoint) | The endpoint of the Redshift deployment (either cluster or serverless) |
-<!-- END_TF_DOCS -->
+<!-- END_TF_DOCS -->

examples/cluster/data.tf

Lines changed: 0 additions & 1 deletion

@@ -7,7 +7,6 @@ data "aws_vpc" "vpc" {
     values = ["arc-poc-vpc"]
   }
 }
-
 data "aws_subnets" "private" {
   filter {
     name = "tag:Name"

examples/cluster/main.tf

Lines changed: 6 additions & 3 deletions

@@ -9,9 +9,12 @@ terraform {
       source  = "hashicorp/aws"
       version = "~> 5.0"
     }
+    random = {
+      source  = "hashicorp/random"
+      version = "~> 3.0"
+    }
   }
 }
-
 provider "aws" {
   region = var.region
 }
@@ -46,8 +49,8 @@ module "redshift" {
   database_name        = var.database_name
   master_username      = var.master_username
   manage_user_password = var.manage_user_password
-  security_group_data = var.security_group_data
-  security_group_name = var.security_group_name
+  security_group_data  = var.security_group_data
+  security_group_name  = var.security_group_name
   node_type            = var.node_type
   number_of_nodes      = var.node_count
   cluster_type         = var.node_count > 1 ? "multi-node" : "single-node"
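Applied to the example, the first hunk leaves `required_providers` declaring both providers; a reconstruction from the added lines (indentation assumed):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    # Added by this commit: pins the random provider instead of
    # relying on an implicit, unversioned download.
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}
```

Pinning `hashicorp/random` with a `~> 3.0` constraint lets `terraform init` record it in the lock file, which matches the `.terraform.lock.hcl` churn elsewhere in this commit.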

examples/cluster/outputs.tf

Lines changed: 0 additions & 1 deletion

@@ -12,4 +12,3 @@ output "redshift_endpoint" {
   description = "The endpoint of the Redshift deployment (either cluster or serverless)"
   value       = module.redshift.redshift_endpoint
 }
-

examples/cluster/terraform.tfvars

Lines changed: 9 additions & 9 deletions

@@ -1,11 +1,11 @@
-region = "us-east-1"
-namespace = "arc"
-environment = "poc"
-name = "analytics"
-database_name = "analytics"
-master_username = "admin"
-node_type = "ra3.xlplus"
-node_count = 2
+region          = "us-east-1"
+namespace       = "arc"
+environment     = "poc"
+name            = "analytics"
+database_name   = "analytics"
+master_username = "admin"
+node_type       = "ra3.xlplus"
+node_count      = 2
 security_group_name = "arc-redshift-sg"
 security_group_data = {
   create = true
@@ -35,4 +35,4 @@ security_group_data = {
       to_port = -1
     }
   ]
-}
+}

examples/cluster/variables.tf

Lines changed: 2 additions & 8 deletions

@@ -34,13 +34,6 @@ variable "master_username" {
   default     = "admin"
 }
 
-variable "master_password" {
-  description = "Password for the master DB user. If null, a random password will be generated"
-  type        = string
-  sensitive   = true
-  default     = null
-}
-
 variable "manage_user_password" {
   description = "Set to true to allow RDS to manage the master user password in Secrets Manager"
   type        = bool
@@ -58,6 +51,7 @@ variable "node_count" {
   type    = number
   default = 2
 }
+
 variable "security_group_data" {
   type = object({
     security_group_ids_to_attach = optional(list(string), [])
@@ -91,4 +85,4 @@ variable "security_group_name" {
   type        = string
   description = "Redshift Serverless resourcesr security group name"
   default     = "Redshift-Serverless-sg"
-}
+}
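With `master_password` removed, credentials flow only through the retained `manage_user_password` flag, which the example's `main.tf` passes straight to the module. A minimal sketch of the surviving variable (the `default` is assumed here; it sits outside this hunk):

```hcl
variable "manage_user_password" {
  description = "Set to true to allow RDS to manage the master user password in Secrets Manager"
  type        = bool
  default     = true # assumed value; not shown in the diff
}
```

Dropping the plaintext password variable in favor of service-managed secrets is the conventional fix for credentials leaking into state files and tfvars.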

examples/rds-redshift-zero-etl/README.md

Lines changed: 1 addition & 35 deletions

@@ -66,42 +66,8 @@ After the infrastructure is created, you can verify the integration:
 5. Connect to your database
 6. Run a query to verify the external schema:
 
-```sql
-SELECT * FROM rds_schema.your_table;
-```
-
-### 4. Working with Zero-ETL Data
-
-Once the integration is set up, you can:
-
-1. Query RDS data directly from Redshift:
 
-```sql
-SELECT * FROM rds_schema.your_table;
-```
-
-2. Create materialized views for better performance:
-
-```sql
-CREATE MATERIALIZED VIEW mv_your_table AS
-SELECT * FROM rds_schema.your_table;
-```
 
-3. Join RDS data with Redshift data:
-
-```sql
-SELECT r.*, d.additional_column
-FROM rds_schema.your_table r
-JOIN redshift_table d ON r.id = d.id;
-```
-
-## Clean Up
-
-To destroy all resources created by this example:
-
-```bash
-terraform destroy
-```
 
 ## References
 
@@ -162,4 +128,4 @@ terraform destroy
 ## Outputs
 
 No outputs.
-<!-- END_TF_DOCS -->
+<!-- END_TF_DOCS -->
