Check the prerequisites section before you start.
This deployment comprises:
- Alfresco Process Services 1.10
- Alfresco Process Services Admin 1.10
- Alfresco Content Repository 6.2.1-RC4
- Alfresco Content Share 6.2.1-RC1
- Alfresco Governance Services 3.3.0
- Alfresco Digital Workspace 1.3.0
- Alfresco Process Workspace 1.3.4
- Alfresco Sync Service 3.1.2
- Alfresco Identity Management Service 1.2.0
- Alfresco Shared File Store 0.5.3
- Alfresco Tika 2.0.17
- Alfresco LibreOffice 2.0.17
- Alfresco Search Services 1.4.0
- Alfresco PDF Renderer 2.0.17
- Alfresco Transform Router 1.0.2.1
- Alfresco Imagemagick 2.0.17
- Alfresco Event Gateway 0.3
For a more detailed list, see this diagram.
The Alfresco Digital Business Platform can be deployed to different environments such as AWS or locally.
- Deploy to AWS using KOPS
- Deploy to AWS using EKS
- Deploy to Docker for Desktop - Mac
- Deploy to Docker for Desktop - Windows
Note: You do not need to clone this repo to deploy the DBP.
For more information please check the Anaxes Shipyard documentation on running a cluster.
Resource requirements for AWS:
- A VPC and cluster with 5 nodes. Each node should be a m4.xlarge EC2 instance.
Initialize the Helm Tiller:
helm init
kubectl create clusterrolebinding tiller-clusterrole-binding --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Note: This setup deploys the Helm server component (Tiller) and gives Helm access to the whole cluster. For a more secure, customized setup, see https://helm.sh/docs/using_helm/#role-based-access-control
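If you prefer not to grant Tiller cluster-admin, a namespace-scoped setup is possible. The sketch below follows the pattern from the Helm RBAC documentation; the service account and role names are illustrative, not part of this repo:

```shell
# Sketch: namespace-scoped Tiller (assumes Helm 2.x and an existing namespace).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: $DESIREDNAMESPACE
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: $DESIREDNAMESPACE
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: $DESIREDNAMESPACE
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: $DESIREDNAMESPACE
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
EOF

# Deploy Tiller bound to that service account, scoped to the namespace
helm init --service-account tiller --tiller-namespace $DESIREDNAMESPACE
```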
As mentioned in the Anaxes Shipyard guidelines, you should deploy into a separate namespace in the cluster to avoid conflicts (create the namespace only if it does not already exist):
export DESIREDNAMESPACE=example
kubectl create namespace $DESIREDNAMESPACE
This environment variable will be used in the deployment steps.
If a Helm chart needs to pull a protected image, instructions on how to create and use a secret can be found here. For example, the following code would create a quay.io secret called quay-registry-secret:
kubectl create secret docker-registry quay-registry-secret --docker-server=quay.io --docker-username=<your-name> --docker-password=<your-pword> --namespace $DESIREDNAMESPACE
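If a chart does not expose a value for pull secrets, one common workaround is to attach the secret to the namespace's default service account so all pods in the namespace use it when pulling. This is a general Kubernetes technique, not specific to these charts:

```shell
# Attach the pull secret to the default service account so pods in the
# namespace pull protected quay.io images without per-chart configuration.
kubectl patch serviceaccount default \
  --namespace $DESIREDNAMESPACE \
  -p '{"imagePullSecrets": [{"name": "quay-registry-secret"}]}'
```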
For routing the components of the DBP deployment outside the k8s cluster, we use nginx-ingress. For your deployment to function properly, you must have a Route 53 DNS zone, and you will need to create a Route 53 record set in the following steps.
For more options on configuring the ingress controller that is deployed through the alfresco-infrastructure chart, please check the Alfresco Infrastructure chart Readme.
When deploying to cloud environments like AWS and Azure you should consider using native database services from those providers rather than deploying Postgres within the Kubernetes cluster.
Create an EFS file system on AWS and make sure it is in the same VPC as your cluster. Open inbound traffic in the security group to allow NFS traffic.
Save the name of the server as in this example:
export NFSSERVER=fs-d660549f.efs.us-east-1.amazonaws.com
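Before installing the provisioner, it can save time to confirm that the EFS mount target is reachable from your network. NFS listens on TCP port 2049; the tools below (nc, showmount) are assumed to be available on the client:

```shell
# NFS uses TCP port 2049; this fails if the security group blocks it
nc -zv "$NFSSERVER" 2049

# List the exported paths (requires the showmount client utility)
showmount -e "$NFSSERVER"
```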
Then install an NFS client provisioner to create a dynamic storage class in Kubernetes. This can be used by multiple deployments.
helm install stable/nfs-client-provisioner \
--name $DESIREDNAMESPACE \
--set nfs.server="$NFSSERVER" \
--set nfs.path="/" \
--set storageClass.reclaimPolicy="Delete" \
--set storageClass.name="$DESIREDNAMESPACE-sc" \
--namespace $DESIREDNAMESPACE
Note: The persistent volume created with NFS to store data on the EFS instance has its ReclaimPolicy set to Delete. This means that, by default, when you delete the release the saved data is deleted automatically.
To change this behaviour and keep the data, set the storageClass.reclaimPolicy value to Retain.
For more information on reclaim policies, check the official Kubernetes documentation: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaim-policy
We do not advise using the same EFS instance for persisting data from multiple DBP deployments.
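A quick way to confirm the provisioner works is to create a throwaway PVC against the new storage class and check that it reaches the Bound state. This smoke test is illustrative; the claim name is arbitrary:

```shell
# Confirm the dynamic storage class was created
kubectl get storageclass "$DESIREDNAMESPACE-sc"

# Request a tiny claim, verify it binds, then clean it up
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-smoke-test
  namespace: $DESIREDNAMESPACE
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: $DESIREDNAMESPACE-sc
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc nfs-smoke-test --namespace $DESIREDNAMESPACE   # STATUS should be Bound
kubectl delete pvc nfs-smoke-test --namespace $DESIREDNAMESPACE
```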
helm repo add alfresco-incubator https://kubernetes-charts.alfresco.com/incubator
helm repo add alfresco-stable https://kubernetes-charts.alfresco.com/stable
helm repo add codecentric https://codecentric.github.io/helm-charts
Depending on the DNS zone you have configured in your AWS account, define an entry you would like to use for your deployment.
export DNSZONE=YourDesiredCname.YourRoute53DnsZone
Afterwards, pull the Helm values file from the current repo:
curl -O https://raw.githubusercontent.com/Alfresco/alfresco-dbp-deployment/master/helm/alfresco-dbp/values.yaml
sed -i s/REPLACEME/$DNSZONE/g values.yaml
Note: The name of the DNS entry you are defining here will be set in Route 53 later on.
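It is worth checking that the substitution actually happened before installing. The snippet below demonstrates the check on a scratch file with a made-up key and zone, so the real values.yaml is untouched:

```shell
# Simulate the substitution on a scratch copy and verify no placeholder remains
printf 'repository:\n  host: REPLACEME\n' > /tmp/values-demo.yaml
DNSZONE=myapp.example.com                 # illustrative value
sed -i "s/REPLACEME/$DNSZONE/g" /tmp/values-demo.yaml

# grep -q exits non-zero when the placeholder is gone
if ! grep -q REPLACEME /tmp/values-demo.yaml; then
  echo "all placeholders replaced"        # prints on success
fi
```

Running grep REPLACEME values.yaml against the real file should likewise return nothing after the sed command above.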
# From within the same folder as your values file
helm install alfresco-incubator/alfresco-dbp -f values.yaml \
--set alfresco-infrastructure.persistence.storageClass.enabled=true \
--set alfresco-infrastructure.persistence.storageClass.name="$DESIREDNAMESPACE-sc" \
--namespace=$DESIREDNAMESPACE
You can either deploy the DBP fully or choose only the components you need for your specific case. By default the DBP chart deploys everything.
To disable specific components you can set the following values to false when deploying:
alfresco-content-services.enabled
alfresco-process-services.enabled
alfresco-sync-service.enabled
alfresco-infrastructure.nginx-ingress.enabled
Example: To disable sync-service, append the following to the helm install command:
--set alfresco-sync-service.enabled=false
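Putting the flags together, a hypothetical content-only deployment (process and sync disabled) would look like this; the flags mirror the values listed above:

```shell
# Illustrative: deploy the DBP with Process Services and Sync Service disabled
helm install alfresco-incubator/alfresco-dbp -f values.yaml \
  --set alfresco-process-services.enabled=false \
  --set alfresco-sync-service.enabled=false \
  --namespace $DESIREDNAMESPACE
```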
If you are using HTTPS, you should include the following setting in your helm install command:
--set alfresco-content-services.externalProtocol="https" \
If you want to include multiple URIs for the Alfresco client redirect URIs, check this guide.
export DBPRELEASE=littering-lizzard
export ELBADDRESS=$(kubectl get services $DBPRELEASE-nginx-ingress-controller --namespace=$DESIREDNAMESPACE -o jsonpath={.status.loadBalancer.ingress[0].hostname})
echo $ELBADDRESS
- Go to AWS Management Console and open the Route 53 console.
- Click Hosted Zones in the left navigation panel, then Create Record Set.
- In the Name field, enter the DNS name defined in step 3 prefixed by "*.", for example: *.YourDesiredCname.YourRoute53DnsZone
- In the Alias Target, select your ELB address ($ELBADDRESS).
- Click Create.
You may need to wait a couple of minutes before the record set propagates around the world.
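You can verify propagation from the command line before opening the URLs. Because the record is a wildcard, any host under the zone should resolve to the ELB; dig availability is assumed:

```shell
# Any prefix under the wildcard record should resolve once propagation completes
dig +short "anyhost.$DNSZONE"

# Compare against the ELB hostname captured earlier
echo $ELBADDRESS
```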
Note: When checking status, your pods should be READY x/x and STATUS Running
helm status $DBPRELEASE
If you want to see the full list of values that have been applied to the deployment you can run:
helm get values -a $DBPRELEASE
helm delete --purge $DBPRELEASE
kubectl delete namespace $DESIREDNAMESPACE
Depending on your cluster type, you can also delete the cluster itself if you no longer need it.
For more information on running and tearing down k8s environments, follow this guide.
Notes
Because some of our modules pass headers bigger than 4k, we had to increase the default value of the nginx proxy buffer size. We also enable the CORS header through the Ingress rule for the applications that need it.
Check recommended version here.
In the 'Kubernetes' tab of the Docker preferences, click the 'Enable Kubernetes' checkbox.
In the Advanced tab of the Docker preferences, set 'CPUs' to 4.
While Alfresco Digital Business Platform installs and runs with only 10 GiB allocated to Docker, for better performance we recommend that 'Memory' value be set slightly higher, to at least 14 GiB (depending on the size of RAM in your workstation).
If you have previously deployed the DBP to AWS or minikube, you will need to change or verify that the docker-for-desktop context is being used.
kubectl config current-context # Display the current context
kubectl config use-context docker-for-desktop # Set the default context if needed
brew update; brew install kubernetes-helm
Note that you can also install a specific version from a commit reference and roll back the Helm client and server to a previous version if need be. For example, to roll back to Helm version 2.14.3:
brew uninstall kubernetes-helm
brew install https://github.com/Homebrew/homebrew-core/raw/0a17b8e50963de12e8ab3de22e53fccddbe8a226/Formula/kubernetes-helm.rb
helm init --upgrade --force-upgrade
sleep 10 # It takes a few seconds to upgrade tiller
helm version
helm init
kubectl create clusterrolebinding tiller-clusterrole-binding --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Note: This setup deploys the Helm server component (Tiller) and gives Helm access to the whole cluster. For a more secure, customized setup, see https://helm.sh/docs/using_helm/#role-based-access-control
helm repo add alfresco-incubator https://kubernetes-charts.alfresco.com/incubator
helm repo add alfresco-stable https://kubernetes-charts.alfresco.com/stable
helm repo add codecentric https://codecentric.github.io/helm-charts
We will be forming a local DNS entry with the use of nip.io. All you have to do is get your IP using the following command.
export LOCALIP=$(ipconfig getifaddr en0)
Note: Save this IP for later use.
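nip.io is a wildcard DNS service that resolves any name of the form anything.IP.nip.io back to that IP, which is what makes the local hostnames used below work. A quick sanity check (requires outbound DNS):

```shell
# Should resolve to the address stored in $LOCALIP
nslookup "alfresco-cs-repository.$LOCALIP.nip.io"
```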
If a Helm chart needs to pull a protected image, instructions on how to create and use a secret can be found here. For example, the following code would create a quay.io secret called quay-registry-secret:
kubectl create secret docker-registry quay-registry-secret --docker-server=quay.io --docker-username=<your-name> --docker-password=<your-pword>
Note: You can reuse the secrets.yaml file from the helm directory.
The minimal-values.yaml file contains values for local only development and multiple components are disabled with the purpose of reducing the memory footprint of the Digital Business Platform. This should not be used as a starting point for production use.
Pull the minimal values file from the current repo:
curl -O https://raw.githubusercontent.com/Alfresco/alfresco-dbp-deployment/master/helm/alfresco-dbp/minimal-values.yaml
sed -i '' 's/REPLACEME/'"$LOCALIP"'/g' minimal-values.yaml
# From within the same folder as your minimal-values file
helm install alfresco-incubator/alfresco-dbp -f minimal-values.yaml
kubectl get pods
Note: When checking status, your pods should be READY x/x and STATUS Running
You can access DBP components at the following URLs:
Alfresco Digital Workspace: http://alfresco-cs-repository.YOURIP.nip.io/workspace/
Content: http://alfresco-cs-repository.YOURIP.nip.io/alfresco
Share: http://alfresco-cs-repository.YOURIP.nip.io/share
Alfresco Identity Service: http://alfresco-identity-service.YOURIP.nip.io/auth
APS: http://alfresco-cs-repository.YOURIP.nip.io/activiti-app
APS Admin: http://alfresco-cs-repository.YOURIP.nip.io/activiti-admin
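Startup can take several minutes. A simple hedge against hitting the URLs too early is to poll one endpoint until it responds; the URL is taken from the list above:

```shell
# Poll the repository until it answers, then report success
until curl -sf "http://alfresco-cs-repository.$LOCALIP.nip.io/alfresco/" > /dev/null; do
  echo "waiting for the repository..."
  sleep 10
done
echo "repository is up"
```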
helm ls
Use the name of the DBP release found above as <DBPRELEASE>:
helm delete --purge <DBPRELEASE>
In some cases, after installing Docker for Desktop and enabling Kubernetes, the kubectl command may not be found. Docker for Desktop also installs the command as kubectl.docker. We recommend using this command rather than installing the Kubernetes CLI separately, as a separately installed CLI may not match the version of Kubernetes that Docker for Desktop is using.
If you are deploying multiple projects in your Docker for Desktop Kubernetes cluster, you may find it useful to use namespaces to segment the projects.
To create a namespace:
export DESIREDNAMESPACE=example
kubectl create namespace $DESIREDNAMESPACE
You can then use the DESIREDNAMESPACE environment variable in the deployment steps by appending --namespace $DESIREDNAMESPACE to the helm and kubectl commands.
You may also need to remove this namespace when you no longer need it.
kubectl delete namespace $DESIREDNAMESPACE
You may find it helpful to see the Kubernetes resources visually which can be achieved by installing the Kubernetes Dashboard: https://github.com/kubernetes/dashboard/wiki/Installation
Error: Invalid parameter: redirect_uri
After deploying the DBP, when accessing one of the applications (for example, Process Services), if you receive the error message "We're sorry. Invalid parameter: redirect_uri", the redirectUris parameter provided for deployment is invalid. Make sure the alfresco-infrastructure.alfresco-identity-service.client.alfresco.redirectUris parameter has a valid value when installing the chart. For more details on how to configure it, check this guide.
No Digital Business Platform components can be accessed
Please make sure that you are not running any local server that occupies port 80. For example, macOS Server runs specifically on this port; disable it before deploying the Digital Business Platform.
Digital Business Platform components fail to start because of a database connection failure
Please make sure that the databases used by the Digital Business Platform components start up correctly. Before deploying the DBP, make sure that you do not have persistent volumes from previous installations left on your cluster. This can happen because, in a local setup, Kubernetes stores volume data locally on your drive. For more information on persistent volumes, refer to the Kubernetes documentation. You can check and delete these volumes using the following commands:
kubectl get pvc -n $DESIREDNAMESPACE
kubectl delete pvc {oldpvc} -n $DESIREDNAMESPACE
kubectl get pv -n $DESIREDNAMESPACE
kubectl delete pv {oldpv} -n $DESIREDNAMESPACE
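When clearing out an entire old deployment, deleting claims one by one can be tedious. An illustrative bulk variant (the PVs bound to the claims are then released or deleted according to their reclaim policy):

```shell
# Remove every PVC in the namespace in one go
kubectl delete pvc --all --namespace $DESIREDNAMESPACE

# Any released PVs left behind can then be deleted by name
kubectl get pv
```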
Note: All of the following commands will be using PowerShell, and these instructions have only been tested by Windows 10 users.
Check recommended version here.
In the 'Kubernetes' tab of the Docker settings, click the 'Enable Kubernetes' checkbox.
Run Command Prompt as an administrator.
Enter the following commands to delete the storageClass hostpath and set up the hostpath provisioner:
kubectl delete storageclass hostpath
kubectl create -f https://raw.githubusercontent.com/MaZderMind/hostpath-provisioner/master/manifests/rbac.yaml
kubectl create -f https://raw.githubusercontent.com/MaZderMind/hostpath-provisioner/master/manifests/deployment.yaml
kubectl create -f https://raw.githubusercontent.com/MaZderMind/hostpath-provisioner/master/manifests/storageclass.yaml
In the Advanced tab of the Docker preferences, set 'CPUs' to 4.
While Alfresco Digital Business Platform installs and runs with only 8 GiB allocated to Docker, for better performance we recommend that 'Memory' value be set slightly higher, to at least 10 - 12 GiB (depending on the size of RAM in your workstation).
If you have previously deployed the DBP to AWS or minikube, you will need to change or verify that the docker-for-desktop context is being used.
kubectl config current-context # Display the current context
kubectl config use-context docker-for-desktop # Set the default context if needed
Docker can be unstable on its first start, so it is safer to restart it before proceeding. Right-click the Docker icon in the system tray, then click Restart.
Enable running scripts (otherwise there will be an error when running the next script):
Set-ExecutionPolicy RemoteSigned
In this approach, we use Chocolatey to install Helm, so download and run Chocolatey first:
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1')) ; $Env:Path="$Env:Path" + ';' + "$Env:Allusersprofile\chocolatey\bin"
Install Helm
choco install kubernetes-helm
Initialize Tiller (the Helm server component):
helm init
kubectl create clusterrolebinding tiller-clusterrole-binding --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Run the following command, making sure to replace "namespaceName" with your desired namespace name.
$DESIREDNAMESPACE = "<namespaceName>"
kubectl create namespace $DESIREDNAMESPACE
If a Helm chart needs to pull a protected image, instructions on how to create and use a secret can be found here. For example, the following code would create a quay.io secret called quay-registry-secret:
kubectl create secret docker-registry quay-registry-secret --docker-server=quay.io --docker-username=<your-name> --docker-password=<your-pword> --namespace $DESIREDNAMESPACE
helm repo add alfresco-incubator https://kubernetes-charts.alfresco.com/incubator
We will be forming a local DNS entry with the use of nip.io. All you have to do is get your IP using the following command.
$LOCALIP = (
Get-NetIPConfiguration |
Where-Object {
$_.IPv4DefaultGateway -ne $null -and
$_.NetAdapter.Status -ne "Disconnected"
}
).IPv4Address.IPAddress
Go back to the config.json file mentioned in step 8 and check that there is a string after "auth", such as in the following example.
"auth": "klsdjfsdkifdsiEWRFJDOFfslakfdjsidjfdslfjds"
The minimal-values.yaml file contains values for local only development and multiple components are disabled with the purpose of reducing the memory footprint of the Digital Business Platform. This should not be used as a starting point for production use.
Pull the minimal values file from the current repo:
Invoke-WebRequest -Uri https://raw.githubusercontent.com/Alfresco/alfresco-dbp-deployment/master/helm/alfresco-dbp/minimal-values.yaml -OutFile minimal-values.yaml
(Get-Content minimal-values.yaml).replace('REPLACEME', $LOCALIP) | Set-Content minimal-values.yaml
Copy and paste the following block into your command line.
# From within the same folder as your minimal-values file
helm install alfresco-incubator/alfresco-dbp -f minimal-values.yaml --namespace $DESIREDNAMESPACE
kubectl get pods
Note: When checking status, your pods should be READY x/x and STATUS Running
You can access DBP components at the following URLs:
Alfresco Digital Workspace: http://alfresco-cs-repository.YOURIP.nip.io/workspace/
Content: http://alfresco-cs-repository.YOURIP.nip.io/alfresco
Share: http://alfresco-cs-repository.YOURIP.nip.io/share
Alfresco Identity Service: http://alfresco-identity-service.YOURIP.nip.io/auth
APS: http://alfresco-cs-repository.YOURIP.nip.io/activiti-app
APS Admin: http://alfresco-cs-repository.YOURIP.nip.io/activiti-admin
If any pods are failing, you can use each of the following commands to see more about their errors:
kubectl logs <podName> --namespace $DESIREDNAMESPACE
kubectl describe pod <podName> --namespace $DESIREDNAMESPACE
Use the following command to find the release name.
helm ls
Delete that release with the following command, replacing <DBPRELEASE> with the release name you retrieved in the previous command.
helm delete --purge <DBPRELEASE>