Set up Rancher with GitOps
Rancher installation: create an EKS cluster, bootstrap Flux, deploy Rancher.
Install and manage everything with Git only.
Here is a straightforward approach to creating an EKS cluster and deploying Rancher Server in highly available mode with Helm charts, using Flux2.
Create EKS cluster
AWS resources
Note that AWS resources cost money. Make sure to remove resources you don't use.
This could be done with Terraform or other tools, but since this is a very small cluster that will require minimal configuration changes, Terraform seems like overkill to me.
Since eksctl supports a .yml config file, it fits the GitOps approach: we use one config file to define everything we need, without the overhead of Terraform.
In this example I set up the cluster in an existing VPC; you should adapt this to your needs.
The cluster is configured and upgraded via the cluster.yml config file:
eksctl create cluster -f cluster.yml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: rancher-eks
  region: us-east-1
  version: "1.18"

vpc:
  id: vpc-xxxxxxxx
  subnets:
    private:
      us-east-1a:
        id: "subnet-xxxxxxxx"
      us-east-1b:
        id: "subnet-xxxxxxxx"
      us-east-1c:
        id: "subnet-xxxxxxxx"
    public:
      us-east-1a:
        id: "subnet-xxxxxxxx"
      us-east-1b:
        id: "subnet-xxxxxxxx"
      us-east-1c:
        id: "subnet-xxxxxxxx"

nodeGroups:
  - name: ng-rancher-nodes
    labels: { role: rancher-node }
    instanceType: t2.medium
    volumeSize: 80
    volumeType: gp2
    desiredCapacity: 3
    privateNetworking: true
    ssh:
      publicKeyPath: Your.Key.Pair
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::xxxxxxxxxx:policy/s3-backup-etcd
      withAddonPolicies:
        externalDNS: true
        certManager: true
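eksctl writes the new cluster's credentials into your kubeconfig by default, so once creation finishes you can check that all three nodes joined:
kubectl get nodes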
Upgrade EKS cluster
Update the EKS version in the cluster.yml file:
metadata:
  name: rancher-eks
  region: us-east-1
  version: "1.18" # updated from "1.17"
Upgrade the cluster:
eksctl upgrade cluster --config-file cluster.yml
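By default eksctl only prints the proposed changes; as with the nodegroup deletion shown below, re-run with --approve to actually apply the upgrade:
eksctl upgrade cluster --config-file cluster.yml --approve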
Nodegroups
Add a nodegroup:
eksctl create nodegroup -f cluster.yml
Delete a nodegroup:
eksctl delete nodegroup --config-file=cluster.yml --include=ng-private-rancher
Expected output:
2021-04-13 21:59:11 [ℹ] eksctl version 0.44.0
2021-04-13 21:59:11 [ℹ] using region us-east-1
2021-04-13 21:59:12 [ℹ] comparing 2 nodegroups defined in the given config ("cluster.yml") against remote state
2021-04-13 21:59:12 [ℹ] combined include rules: ng-private-rancher
2021-04-13 21:59:12 [ℹ] 1 nodegroup (ng-private-rancher) was included (based on the include/exclude rules)
2021-04-13 21:59:13 [ℹ] (plan) would drain 1 nodegroup(s) in cluster "rancher-eks"
2021-04-13 21:59:13 [ℹ] (plan) would delete 1 nodegroups from cluster "rancher-eks"
2021-04-13 21:59:14 [ℹ] (plan) 1 task: { 1 task: { delete nodegroup "ng-private-rancher" [async] } }
2021-04-13 21:59:14 [ℹ] (plan) would delete 1 nodegroups from auth ConfigMap in cluster "rancher-eks"
2021-04-13 21:59:14 [✔] (plan) would have deleted 1 nodegroup(s) from cluster "rancher-eks"
2021-04-13 21:59:14 [!] no changes were applied, run again with '--approve' to apply the changes
Review the output, then run the same command with --approve to apply the changes.
The nodes will be cordoned, drained, and eventually removed from the cluster and terminated.
- Cordoned:
NAME STATUS ROLES AGE VERSION
ip-10-10-1-40.ec2.internal Ready,SchedulingDisabled <none> 3h34m v1.18.9-eks-d1db3c
ip-10-10-1-83.ec2.internal Ready <none> 9m18s v1.18.9-eks-d1db3c
ip-10-10-2-97.ec2.internal Ready,SchedulingDisabled <none> 3h34m v1.18.9-eks-d1db3c
ip-10-10-3-10.ec2.internal Ready <none> 9m21s v1.18.9-eks-d1db3c
ip-10-10-3-226.ec2.internal Ready,SchedulingDisabled <none> 3h34m v1.18.9-eks-d1db3c
ip-10-10-3-82.ec2.internal Ready <none> 9m21s v1.18.9-eks-d1db3c
- Deleted:
ip-10-10-1-83.ec2.internal Ready <none> 11m v1.18.9-eks-d1db3c
ip-10-10-3-10.ec2.internal Ready <none> 12m v1.18.9-eks-d1db3c
ip-10-10-3-82.ec2.internal Ready <none> 12m v1.18.9-eks-d1db3c
Install Rancher with Flux2 GitOps Toolkit
Bootstrap Flux
Now that our Kubernetes cluster is ready, we can install Rancher: not manually, but with Flux2.
- This is a one-time setup: create a personal access token for your Git project and export it:
export GITLAB_USER="[email protected]"
export GITLAB_TOKEN="XXXXXXXXXXXXXX"
- Bootstrap Flux:
flux bootstrap gitlab --hostname=gitlab.your-gitlab.com --token-auth \
--owner="devops" \
--repository="rancher" \
--branch="rancher-2.5.x" \
--path="./clusters/rancher-eks" \
--namespace="flux-system" \
--verbose
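Once the bootstrap completes, it's worth confirming that all Flux components are healthy with Flux's built-in verification command:
flux check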
Setting up sources, kustomizations & HelmReleases
|-- apps
| |-- rancher
| | |-- kustomization.yaml
| | |-- namespace.yaml
| | `-- release.yaml
| |-- rancher-backup
| | |-- kustomization.yaml
| | `-- release.yaml
| `-- rancher-backup-crd
| |-- kustomization.yaml
| |-- namespace.yaml
| `-- release.yaml
|-- cluster.yml
|-- clusters
| `-- rancher-eks
| `-- flux-system
| |-- apps.yaml
| |-- gotk-components.yaml
| |-- gotk-sync.yaml
| |-- infrastructure.yaml
| |-- kustomization.yaml
| `-- monitoring-kustomization.yaml
`-- infrastructure
|-- cert-manager
| |-- kustomization.yaml
| |-- namespace.yaml
| `-- release.yaml
|-- external-dns
| |-- kustomization.yaml
| `-- release.yaml
|-- ingress
| |-- kustomization.yaml
| |-- namespace.yaml
| `-- release.yaml
|-- kustomization.yaml
`-- sources
|-- external-dns.yaml
|-- ingress-nginx.yaml
|-- jetstack.yaml
|-- kustomization.yaml
|-- monitoring.yaml
|-- rancher-charts.yaml
`-- rancher.yaml
Dependencies
Define the dependency deployments before installing Rancher.
Inside the infrastructure directory we define all the components we will need before deploying the Rancher application.
In the sources directory we define the Helm repositories to pull Helm charts from.
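Each file under sources/ is a Flux HelmRepository object. As a sketch, sources/rancher-charts.yaml might look like the following (the name and interval are my own choices; the URL is Rancher's public stable chart repository):
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: rancher-charts
  namespace: flux-system
spec:
  interval: 30m
  # Rancher's stable server-charts repository
  url: https://releases.rancher.com/server-charts/stable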
The repository with all the required files is here.
This will automatically set up all the components required to prepare the cluster for Rancher, and then install Rancher itself along with rancher-backup.
SSL is also configured, with cert-manager & Let's Encrypt.
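For example, the HelmRelease in apps/rancher/release.yaml could look roughly like the sketch below; the hostname and Let's Encrypt values are illustrative placeholders, not the repository's actual values:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: rancher
  namespace: cattle-system
spec:
  interval: 10m
  chart:
    spec:
      chart: rancher
      version: "2.5.7"
      sourceRef:
        kind: HelmRepository
        name: rancher-charts
        namespace: flux-system
  values:
    hostname: rancher.example.com  # placeholder, use your DNS record
    ingress:
      tls:
        source: letsEncrypt       # TLS via cert-manager & Let's Encrypt
    letsEncrypt:
      email: admin@example.com    # placeholder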
- Create flux-system & ingress-nginx namespaces.
- Add Helm repositories for the ingress-controller & monitoring components.
- Add a Helm repository for external-dns.
- Add Helm repositories for rancher, rancher-backup-crd, rancher-backup.
- Install the HelmRelease of ingress-nginx-controller with an external LoadBalancer address.
- Install the HelmRelease of external-dns and set a DNS record for the LoadBalancer address.
- Install the HelmReleases of rancher, rancher-backup-crd and rancher-backup (rancher in the cattle-system namespace, the backup charts in cattle-resources-system).
- Install the HelmRelease of the monitoring stack.
Flux will first reconcile all sources and HelmReleases under infrastructure, and only then deploy the Rancher application, as sketched below.
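This ordering is the standard Flux pattern of declaring a dependency between the two Kustomizations in clusters/rancher-eks/flux-system. A minimal sketch of what apps.yaml could contain (the exact field values are my assumption, not copied from the repository):
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  # reconcile the apps only after infrastructure is ready
  dependsOn:
    - name: infrastructure
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps
  prune: true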
eksctl's automatic Flux bootstrap is currently experimental (as of 12.04.2021), so it is not used here, but we should switch to it once it's GA:
"Quickstart profiles can only be applied when used with Flux v1 (git.repo). Support for profiles with Flux v2 (gitops.flux) is not yet available."
https://eksctl.io/usage/gitops/#experimental-installing-flux-v2-gitops-toolkit
However, this is not an issue, as we bootstrap Flux ourselves (as shown at the top of the page), just like on any other cluster.
gitops:
  flux:
    gitProvider: gitlab
    # hostname: gitlab.your-server.com  # currently no option to set a custom hostname
    owner: devops
    repository: rancher
    personal: true
    branch: rancher-2.5.x
    namespace: "flux-system"
    path: "./clusters/rancher"
    authTokenPath: ""
Backing up Rancher
Rancher backups are configured, and the backup files are stored in an S3 bucket by the rancher-backup operator (installed via Flux as well).
The EKS worker instances have an IAM policy attached (the s3-backup-etcd policy referenced in cluster.yml) that allows them to store the backups remotely in the above S3 bucket, under the /rancher-backup folder.
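For reference, the rancher-backup operator is driven by a Backup custom resource. A minimal sketch of a scheduled S3 backup (the bucket name, schedule, and retention are placeholders):
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: rancher-scheduled-backup
spec:
  resourceSetName: rancher-resource-set
  schedule: "@every 12h"
  retentionCount: 10
  storageLocation:
    s3:
      bucketName: your-backup-bucket  # placeholder
      folder: rancher-backup
      region: us-east-1
      endpoint: s3.us-east-1.amazonaws.com
      # no credentialSecretName: the operator uses the node's IAM role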
Restoring Rancher
The new rancher-backup operator allows Rancher to be backed up and restored on any Kubernetes cluster.
This application is a Helm chart, and it can be deployed through the Rancher Apps & Marketplace page or by using the Helm CLI.
Depending on the use case (restoring Rancher in place, or migrating Rancher to a new cluster), this is a one-time manual operation, which is documented here.
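The restore itself is also driven by a custom resource. A sketch of a Restore pointing at one of the S3 backups (the filename and bucket are placeholders for your actual backup archive):
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-rancher
spec:
  backupFilename: rancher-scheduled-backup-xxxx.tar.gz  # placeholder
  storageLocation:
    s3:
      bucketName: your-backup-bucket  # placeholder
      folder: rancher-backup
      region: us-east-1
      endpoint: s3.us-east-1.amazonaws.com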
Summary
That's it.
Once Flux brings up all the resources, you should see that all HelmReleases are READY:
flux get helmreleases --all-namespaces
NAMESPACE NAME READY MESSAGE REVISION SUSPENDED
cattle-resources-system rancher-backup True Release reconciliation succeeded 1.0.301 False
cattle-resources-system rancher-backup-crd True Release reconciliation succeeded 1.0.301 False
cattle-system rancher True Release reconciliation succeeded 2.5.7 False
cert-manager cert-manager True Release reconciliation succeeded v1.0.4 False
ingress-nginx ingress-nginx True Release reconciliation succeeded 3.12.0 False
kube-system external-dns True Release reconciliation succeeded 4.10.0 False
- Now you can access the Rancher UI from your browser at the endpoint (DNS record) you set for your load balancer.
- You now have a highly available Rancher deployed on EKS.
- You didn't have to run kubectl to configure components.
- You use only a Git repository, as a declarative GitOps approach, to set the desired state of your apps.
- You can destroy and recreate the entire project easily.
References
eksctl, Flux2, Cert-Manager, external-dns, Rancher