Coworking Space

A dummy project for a Udacity session.

Getting Started

Dependencies

Local Environment

  1. Python Environment - run Python 3.6+ applications and install Python dependencies via pip
  2. Docker CLI - build and run Docker images locally
  3. kubectl - run commands against a Kubernetes cluster
  4. helm - apply Helm Charts to a Kubernetes cluster
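The tools above can be sanity-checked from a shell before continuing; a minimal sketch, assuming the default executable names are on your PATH:

```shell
# Check that each local dependency is on PATH; prints "found" or "MISSING".
for tool in python3 pip docker kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```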

Remote Resources

  1. AWS CodeBuild - build Docker images remotely
  2. AWS ECR - host Docker images
  3. Kubernetes Environment with AWS EKS - run applications in k8s
  4. AWS CloudWatch - monitor activity and logs in EKS
  5. GitHub - pull and clone code

Setup

1. Configure a Database

Set up a Postgres database using a Helm Chart.

  1. Install Postgres with a password set and no persistence
helm upgrade -i my-release oci://registry-1.docker.io/bitnamicharts/postgresql -f v.yaml

This should set up a Postgres deployment at my-release-postgresql.default.svc.cluster.local in your Kubernetes cluster. You can verify it by running kubectl get svc

By default, it will create a username postgres. The password can be retrieved with the following command:

export POSTGRES_PASSWORD=$(kubectl get secret --namespace default my-release-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
echo ${POSTGRES_PASSWORD}
  2. Test Database Connection. The database is only accessible from within the cluster, so you may have trouble connecting to it from your local environment. You can either connect from a pod that has access to the cluster, or connect remotely via port forwarding.
  • Connecting Via Port Forwarding
kubectl port-forward --namespace default services/my-release-postgresql 5432:5432 &
    PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432
  3. Run Seed Files. Run the seed files in db/ in order to create the tables and populate them with data.
kubectl port-forward --namespace default svc/<SERVICE_NAME>-postgresql 5432:5432 &
    PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432 < <FILE_NAME.sql>
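The seeding step can be sketched as a loop that applies every seed file in db/ in filename order; this assumes the port-forward above is already running and POSTGRES_PASSWORD is exported:

```shell
# Apply each .sql seed file in db/ against the forwarded Postgres instance.
for f in db/*.sql; do
  [ -f "$f" ] || continue   # no-op if db/ contains no .sql files
  echo "Applying $f"
  PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432 < "$f"
done
echo "Seeding complete."
```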

2. Running the Analytics Application Locally

In the analytics/ directory:

  1. Install dependencies
pip install -r requirements.txt
  2. Run the application (see below regarding environment variables)
<ENV_VARS> python app.py

There are multiple ways to set environment variables for a command. They can be set per session by running export KEY=VAL in the command line, or prepended directly to the command. The application uses the following variables:

  • DB_USERNAME
  • DB_PASSWORD
  • DB_HOST (defaults to 127.0.0.1)
  • DB_PORT (defaults to 5432)
  • DB_NAME (defaults to postgres)
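The defaults in the list above can be expressed with shell parameter expansion. This is a sketch mirroring the documented defaults, not the application's actual code:

```shell
# Resolve connection settings, falling back to the documented defaults.
# DB_USERNAME and DB_PASSWORD have no defaults and must be supplied.
DB_HOST="${DB_HOST:-127.0.0.1}"
DB_PORT="${DB_PORT:-5432}"
DB_NAME="${DB_NAME:-postgres}"
echo "Will connect as ${DB_USERNAME:-<unset>} to ${DB_HOST}:${DB_PORT}/${DB_NAME}"
```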

If we set the environment variables by prepending them, it would look like the following:

DB_USERNAME=username_here DB_PASSWORD=password_here python app.py
  3. Verifying the Application
  • Generate a report of check-ins grouped by date
curl <BASE_URL>/api/reports/daily_usage

  • Generate a report of check-ins grouped by user
curl <BASE_URL>/api/reports/user_visits
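Both checks can be scripted. BASE_URL here is an assumed placeholder, including the port; when running locally, point it at whatever host and port app.py reports on startup:

```shell
# Hit both report endpoints; the BASE_URL default is an assumption.
BASE_URL="${BASE_URL:-http://127.0.0.1:5153}"
for endpoint in daily_usage user_visits; do
  echo "== $endpoint =="
  curl -s "$BASE_URL/api/reports/$endpoint" || echo "(request failed - is the app running?)"
done
```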

3. Build your image and push it to public ECR

This assumes you are already logged in with the AWS CLI.

  1. Get credentials and log in to the public ECR registry
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
  2. Build the image with the proper tag
docker build . -t public.ecr.aws/<registry_alias>/cowork:latest
  3. Push the image to ECR
docker push public.ecr.aws/<registry_alias>/cowork:latest
