feat: switch to binary kubernetes, fluxcd and helm install (#351)

* feat: more binary installs

* feat: install k8s from binaries

* fix: sync with the main branch

* fix(containerd): go back to using ansible_system

* fix(containerd): containerd+crictl cleanups

* chore: refactor k8s role

* ci: fix job name

* ci: do not fail-fast

* ci: disable swap

* ci: disable sudo

* ci: add kubelet logs

* ci: install udev

* ci: fix package names

* ci: fix idempotence

* ci: install deps earlier

* ci: added k8s tests

* ci: fix vars for fedora

* chore: drop unused submodule

* ci: fix typo in kubelet

* ci: start dbus.socket

* ci: fix fedora

* ci: fix paths

* fix: add maxconn to avoid killing system

* ci: print container logs

* ci: fix role test

* ci: move to stdout logs

* ci: fix idempotence

* ci: capture both stdout+stderr

* ci: drop extra default-server

* ci: fix haproxy

* ci: install apparmor-utils

* ci: update apt cache

* ci: remove pyyaml from rocky linux

* ci: add ha tests

* ci: fix flipped scenarios

* ci: use default keepalived iface

* chore: add debug

* chore: start tmate on failure

* chore: use newer containerd

* chore: fix shas

* ci: fix debian

* ci: back to debug

* ci: add containerd test suite

* ci: fix idempotence

* ci: force containerd restart

* ci: drop handler

* ci: load ip_tables module

* ci: add modprobe

* ci: add missing pkgs

* ci: load ip6_tables

* ci: add /lib/modules

* ci: add missing udev

* ci: run unconfined apparmor

* ci: drop debian + fedora support

* ci: fix paths

* chore: refactor to use vexxhost.kubernetes

* chore: refactor to using helm role

* wip

* ci: remove un-needed tests

* chore: refactor to k8s_node_label

* chore: fix k8s deploy

---------

Co-authored-by: Mohammed Naser <mnaser@vexxhost.com>
README.md

Atmosphere

Quick Start

The quick start aims to provide an experience as close to production as possible, since it is architected for production-style environments. To get a production-ready experience of Atmosphere quickly, you will need access to an OpenStack cloud.

The quick start is powered by Molecule and is used in continuous integration against the VEXXHOST public cloud, which makes that cloud an easy target for trying it out.

You will need the following quotas set up in your cloud account:

  • 8 instances
  • 32 cores
  • 128GB RAM
  • 360GB storage

These resources will be used to create a total of 8 instances broken up as follows:

  • 3 Controller nodes
  • 3 Ceph OSD nodes
  • 2 Compute nodes
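As a sanity check, the layout above lines up with the quotas listed earlier, assuming the default v3-standard-4 flavor provides 4 vCPUs and 16GB RAM per instance (an assumption based on the flavor name, not stated in this README):

```shell
# Rough quota math for the 8-instance layout (hypothetical flavor sizes:
# 4 vCPUs and 16GB RAM per v3-standard-4 instance).
instances=$((3 + 3 + 2))      # controllers + Ceph OSD nodes + compute nodes
cores=$((instances * 4))      # vCPUs required across all instances
ram_gb=$((instances * 16))    # RAM required across all instances
echo "${instances} instances, ${cores} cores, ${ram_gb}GB RAM"
```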

First of all, you'll have to make sure you clone the repository locally to your system with git by running the following command:

git clone https://github.com/vexxhost/atmosphere

You will need poetry installed on your operating system. You will need to make sure that you have the appropriate OpenStack environment variables set (such as OS_CLOUD or OS_AUTH_URL, etc.). You can also use the following environment variables to tweak the behaviour of the Heat stack that is created:

  • ATMOSPHERE_STACK_NAME: The name of the Heat stack to be created (defaults to atmosphere).

  • ATMOSPHERE_PUBLIC_NETWORK: The name of the public network to attach floating IPs from (defaults to public).

  • ATMOSPHERE_IMAGE: The name or UUID of the image to be used for deploying the instances (defaults to Ubuntu 20.04.3 LTS (x86_64) [2021-10-04]).

  • ATMOSPHERE_INSTANCE_TYPE: The instance type used to deploy all of the different instances (defaults to v3-standard-4).

  • ATMOSPHERE_NAMESERVERS: A comma-separated list of nameservers to be used for the instances (defaults to 1.1.1.1).

  • ATMOSPHERE_USERNAME: The username used to log in to the instances (defaults to ubuntu).

  • ATMOSPHERE_DNS_SUFFIX_NAME: The DNS domain name used for the API and Horizon (defaults to nip.io).

  • ATMOSPHERE_ACME_SERVER: The ACME server to use for certificates; currently this is LetsEncrypt. With StepCA from SmallStep it is possible to run an internal ACME server, in which case the CA of that ACME server must be present in the instance image.
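For example, these variables can be exported before running Molecule (the values below are illustrative placeholders, not project defaults):

```shell
# Illustrative values only; replace with settings for your own cloud.
export OS_CLOUD=my-openstack-cloud
export ATMOSPHERE_STACK_NAME=atmosphere-dev
export ATMOSPHERE_PUBLIC_NETWORK=public
export ATMOSPHERE_NAMESERVERS=1.1.1.1,8.8.8.8
export ATMOSPHERE_USERNAME=ubuntu
```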

Once you're ready to get started, you can run the following command to install poetry dependencies:

poetry install

Then you can run the following command to build the Heat stack:

poetry run molecule converge

This will create a Heat stack named atmosphere and start deploying the cloud. Once it's complete, you can log in to any of the systems using the login sub-command. For example, to log in to the first controller node, run the following:

poetry run molecule login -h ctl1

On all of the controllers, you will find an openrc file inside the root account's home directory, with the OpenStack client installed as well. After logging in, you can use it by running the following:

source /root/openrc
openstack server list

The Kubernetes administrator configuration will also be available on all of the control plane nodes; you can simply run kubectl commands on any of the controllers as root:

kubectl get nodes -owide

Once you're done with your environment and you need to tear it down, you can use the destroy sub-command:

poetry run molecule destroy

For more information about the different commands used by Molecule, you can refer to the Molecule documentation.

Contributing

You'll need to make sure that you have pre-commit set up and installed in your environment by running this command:

pre-commit install --hook-type commit-msg