====== Cluster system ======

This is an idea dreamt up by @samp20 that may one day become a reality. The goals are to:

  * Create a cluster management system for small clusters.
  * Be simpler to set up and use than Kubernetes.
  * Work across networks, for example between a cloud provider and self-hosted machines.
  * Not require a separate management service.

===== Parts list =====

Here are the parts proposed for this project:

  * Python-based core.
  * JSON for cluster configuration.
    * Will eventually be signed (possibly JWT).
  * runc (or another OCI-compliant runtime) for container management.
  * WireGuard for the node mesh network.
  * nftables for firewall configuration.
  * Python Flask for config distribution.
  * Caddy for the HTTPS ingress proxy.

===== General architecture =====

The core of the system will be a Python service that receives the configuration (probably via a UNIX socket, so the Flask HTTP server can be kept in a separate container), validates it, and updates the various components through Python plugins discovered with ''importlib.metadata.entry_points''. The configuration will consist of objects representing the different parts of the system that need to be configured. An example is below:

<code>
{
  "hosts": {
    "cloud01": {
      "wg_network": {
        "type": "wireguard_network",
        "link_name": "wg_cluster",
        "address": "10.69.0.1/16"
      },
      "peer_local01": {
        "type": "wireguard_peer",
        "network": "wg_network",
        "allowed_ips": ["10.69.1.0/24"]
      }
    }
  }
}
</code>

This configuration, while technically writable by hand, will likely be created and updated by a separate **offline** tool that consumes a more human-friendly layout. This is in contrast to Kubernetes, which relies on an **online** service to manage these updates.

===== runc integration =====

runc containers are created from a ''config.json'' and a ''rootfs''.
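As an illustration of the plugin dispatch described under //General architecture//, a minimal sketch is below. The entry-point group name ''cluster.plugins'' and the handler signature ''(name, obj)'' are assumptions, not a decided API:

<code python>
# Hypothetical sketch: dispatch config objects to plugins registered
# under an assumed entry-point group named "cluster.plugins".
from importlib.metadata import entry_points

def load_plugins(group="cluster.plugins"):
    # Map each object "type" (e.g. "wireguard_network") to the
    # callable registered under that entry-point name.
    return {ep.name: ep.load() for ep in entry_points(group=group)}

def apply_host_config(host_config, plugins):
    # Hand each named object in the host's config to the plugin
    # registered for its "type".
    for name, obj in host_config.items():
        handler = plugins[obj["type"]]
        handler(name, obj)
</code>

A plugin package would then advertise its handlers under that group in its packaging metadata, e.g. ''wireguard_network = somepkg.wg:configure'' (names illustrative only).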
runc can generate a base configuration. While we could technically include the full container config in the cluster config directly, it probably makes more sense to use the runc-generated config and merge the cluster config into it. A Docker-generated config (Docker uses runc under the hood) is even more restrictive, specifying for example which syscalls are allowed. We may want to add this in the future, but not initially.

We will need a mechanism for pulling the root filesystem. There is probably a standardized way of downloading these from a container registry and unpacking them. For now we won't worry about private registries, but that is something to consider if this project becomes more widespread.

===== Network and WireGuard =====

This will use the [[https://docs.pyroute2.org/index.html|pyroute2]] Python module, which has support for [[https://docs.pyroute2.org/wireguard.html|WireGuard]] and [[https://docs.pyroute2.org/netns.html|network namespaces]] (their example uses a veth pair).

===== nftables =====

nftables comes with its own Python interface that wraps ''libnftables''. There's a good tutorial to get started here: https://ral-arturo.org/2020/11/22/python-nftables-tutorial.html. The initial scope will be to configure the forward chains to forward traffic from each container's virtual ethernet device to the WireGuard tunnel. In the future this can be extended to implement firewall policies between containers.

===== Caddy =====

Caddy can be configured directly through JSON. We can merge multiple container configs, along with any required global configuration, and pass the result directly to Caddy. There are probably some gotchas to be aware of when merging. A good starting point will be to take an existing ''Caddyfile'' and convert it to JSON to see what the structure is like and how it can be split.
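Both the runc and Caddy sections depend on overlaying cluster-supplied JSON onto a generated base config, so a minimal recursive merge sketch is below. The semantics here (nested dicts merge, everything else is replaced) are an assumption; note that lists, such as runc's ''process.args'' or Caddy routes, are replaced wholesale and may need smarter handling:

<code python>
# Sketch: recursively overlay cluster-supplied settings onto a
# generated base config (e.g. the output of `runc spec`).
def merge_config(base, overlay):
    # Returns a new dict; neither input is mutated.
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            # Non-dict values (lists, strings, numbers) replace the base value.
            merged[key] = value
    return merged
</code>

For example, overlaying ''{"process": {"args": ["nginx"]}}'' onto a runc base config would swap out the command while keeping the rest of the ''process'' section intact.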