Kubernetes
OpenNebula has a "Marketplace App" for a Kubernetes cluster. The documentation on using the appliance is located here. Working through that document is a good way to get your feet wet. However, by default, it starts a single node cluster, which isn't really a cluster. I decided that I wanted a 3 node cluster, one master node with no workers jobs, and 2 worker nodes. I also decided that I need the resource allocation of the master node and the worker nodes to differ. I used the following resource allocations:
Role   | CPU | VCPU | Memory | Disk
Master | 1   | 2    | 1.5GB  | 40GB
Worker | 2   | 6    | 6GB    | 40GB
To get this, I took the "Service Kubernetes - KVM" entry from the OpenNebula App Store and made 2 copies of its template.
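As a sketch, the copying can be done from the frontend CLI. The source template ID below is a placeholder (take the real one from onetemplate list); 13 and 14 are the IDs the copies ended up with in my datastore:
# Find the ID of the imported "Service Kubernetes - KVM" template
onetemplate list
# Clone it twice; SRC_ID is a placeholder for the ID from the listing
SRC_ID=12
onetemplate clone "$SRC_ID" k8s-master
onetemplate clone "$SRC_ID" k8s-worker
# Edit each copy in $EDITOR to match the templates shown below
onetemplate update 13
onetemplate update 14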
Master Template
The first one I created was for the master node. Here's the template:
CONTEXT = [
NETWORK = "YES",
ONEAPP_K8S_ADDRESS = "$ONEAPP_K8S_ADDRESS",
ONEAPP_K8S_ADMIN_USERNAME = "$ONEAPP_K8S_ADMIN_USERNAME",
ONEAPP_K8S_HASH = "$ONEAPP_K8S_HASH",
ONEAPP_K8S_NODENAME = "$ONEAPP_K8S_NODENAME",
ONEAPP_K8S_PODS_NETWORK = "$ONEAPP_K8S_PODS_NETWORK",
ONEAPP_K8S_PORT = "$ONEAPP_K8S_PORT",
ONEAPP_K8S_TOKEN = "$ONEAPP_K8S_TOKEN",
ONEGATE_ENABLE = "$ONEGATE_ENABLE",
PASSWORD = "reallynotlikely",
REPORT_READY = "YES",
SSH_PUBLIC_KEY = "notlikely",
TOKEN = "YES" ]
CPU = "1"
DESCRIPTION = "Master Template"
DISK = [
IMAGE_ID = "16",
SIZE = "40960" ]
GRAPHICS = [
LISTEN = "0.0.0.0",
TYPE = "VNC" ]
HYPERVISOR = "kvm"
INPUTS_ORDER = "ONEGATE_ENABLE,ONEAPP_K8S_ADDRESS,ONEAPP_K8S_TOKEN,ONEAPP_K8S_HASH,ONEAPP_K8S_NODENAME,ONEAPP_K8S_PORT,ONEAPP_K8S_PODS_NETWORK,ONEAPP_K8S_ADMIN_USERNAME"
LOGO = "images/logos/centos.png"
MEMORY = "1536"
MEMORY_UNIT_COST = "MB"
NIC = [
NETWORK = "vlan201",
NETWORK_UNAME = "oneadmin",
SECURITY_GROUPS = "0" ]
OS = [
ARCH = "x86_64",
BOOT = "" ]
USER_INPUTS = [
ONEAPP_K8S_ADDRESS = "O|text|K8s master node address/network (CIDR subnet)",
ONEAPP_K8S_ADMIN_USERNAME = "O|text|UI dashboard admin account (default admin-user)",
ONEAPP_K8S_HASH = "O|text|K8s hash (to join node into the cluster)",
ONEAPP_K8S_NODENAME = "O|text|K8s master node name",
ONEAPP_K8S_PODS_NETWORK = "O|text|K8s pods network in CIDR (default 10.244.0.0/16)",
ONEAPP_K8S_PORT = "O|text|K8s API port (default 6443)",
ONEAPP_K8S_TOKEN = "O|password|K8s token (to join node into the cluster)",
ONEGATE_ENABLE = "M|boolean|Enable OneGate reporting? (req. for multi-node)| |YES" ]
VCPU = "2"
As a note, this wound up being template #13 in the OpenNebula datastore. (This is important further down on this page.)
Worker Template
I modified the second copy to look like this:
CONTEXT = [
NETWORK = "YES",
ONEAPP_K8S_ADDRESS = "$ONEAPP_K8S_ADDRESS",
ONEAPP_K8S_ADMIN_USERNAME = "$ONEAPP_K8S_ADMIN_USERNAME",
ONEAPP_K8S_HASH = "$ONEAPP_K8S_HASH",
ONEAPP_K8S_NODENAME = "$ONEAPP_K8S_NODENAME",
ONEAPP_K8S_PODS_NETWORK = "$ONEAPP_K8S_PODS_NETWORK",
ONEAPP_K8S_PORT = "$ONEAPP_K8S_PORT",
ONEAPP_K8S_TOKEN = "$ONEAPP_K8S_TOKEN",
ONEGATE_ENABLE = "$ONEGATE_ENABLE",
PASSWORD = "reallynotlikely",
REPORT_READY = "YES",
SSH_PUBLIC_KEY = "notlikely",
TOKEN = "YES" ]
CPU = "2"
DESCRIPTION = "Worker Template"
DISK = [
IMAGE_ID = "16",
SIZE = "40960" ]
GRAPHICS = [
LISTEN = "0.0.0.0",
TYPE = "VNC" ]
HYPERVISOR = "kvm"
INPUTS_ORDER = "ONEGATE_ENABLE,ONEAPP_K8S_ADDRESS,ONEAPP_K8S_TOKEN,ONEAPP_K8S_HASH,ONEAPP_K8S_NODENAME,ONEAPP_K8S_PORT,ONEAPP_K8S_PODS_NETWORK,ONEAPP_K8S_ADMIN_USERNAME"
LOGO = "images/logos/centos.png"
MEMORY = "6144"
MEMORY_UNIT_COST = "MB"
NIC = [
NETWORK = "vlan201",
NETWORK_UNAME = "oneadmin",
SECURITY_GROUPS = "0" ]
OS = [
ARCH = "x86_64",
BOOT = "" ]
USER_INPUTS = [
ONEAPP_K8S_ADDRESS = "O|text|K8s master node address/network (CIDR subnet)",
ONEAPP_K8S_ADMIN_USERNAME = "O|text|UI dashboard admin account (default admin-user)",
ONEAPP_K8S_HASH = "O|text|K8s hash (to join node into the cluster)",
ONEAPP_K8S_NODENAME = "O|text|K8s master node name",
ONEAPP_K8S_PODS_NETWORK = "O|text|K8s pods network in CIDR (default 10.244.0.0/16)",
ONEAPP_K8S_PORT = "O|text|K8s API port (default 6443)",
ONEAPP_K8S_TOKEN = "O|password|K8s token (to join node into the cluster)",
ONEGATE_ENABLE = "M|boolean|Enable OneGate reporting? (req. for multi-node)| |YES" ]
VCPU = "6"
As a note, this wound up being template #14 in the OpenNebula datastore. (This is important further down on this page.)
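If you want to double-check the IDs before wiring them into the service template, you can inspect both copies (again, 13 and 14 are specific to my datastore and will differ in yours):
onetemplate show 13 | head
onetemplate show 14 | head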
Service Template
Finally, I created a Service template:
{"name": "Kubernetes",
"deployment": "straight",
"description": "",
"roles": [
{
"name": "master",
"cardinality": 1,
"vm_template": 13,
"vm_template_contents": "ONEGATE_ENABLE=\"YES\"",
"elasticity_policies": [],
"scheduled_policies": []
},
{
"name": "worker",
"cardinality": 2,
"vm_template": 14,
"parents": [
"master"
],
"vm_template_contents": "ONEGATE_ENABLE=\"YES\"",
"elasticity_policies": [],
"scheduled_policies": []
}
],
"ready_status_gate": true
}
What this does is define a service composed of one VM mapped into the "master" role using VM template 13 (see above) and 2 VMs mapped into the "worker" role using VM template 14 (also see above). Because the deployment is "straight", the worker nodes won't be deployed until their parent role, "master", is running. The "ready_status_gate" setting indicates that the VMs will report to OneGate when they are successfully running; see the OneFlow documentation (under "Determining when a VM is READY") for more information. The vm_template_contents entry prepends ONEGATE_ENABLE="YES" to the VM template, thereby setting ONEGATE_ENABLE in the CONTEXT stanza, which is made available to the VM.
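As a minimal sketch, registering and launching this with OneFlow from the frontend CLI might look like the following; the filename and the instantiated template ID are hypothetical:
# Register the service template with OneFlow (prints the new template's ID)
oneflow-template create k8s-service.json
# Instantiate it; substitute the ID printed above (7 is hypothetical)
oneflow-template instantiate 7
# Watch the master deploy first, then the two workers
oneflow list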
OneGate Magic
There is a little magic that goes on within the VM templates when they are instantiated. To show how the service can use the same disk image and wind up with different behaviors, I logged into one of the worker nodes and gathered some information from OneGate.
Step 1 - Gather information about current VM
[root@onekube-ip-192-168-201-11 ~]# onegate vm show --json
{
"VM": {
"NAME": "worker_3_(service_5)",
"ID": "68",
"STATE": "3",
"LCM_STATE": "3",
"USER_TEMPLATE": {
"DESCRIPTION": "Master Template",
"HYPERVISOR": "kvm",
"INPUTS_ORDER": "ONEGATE_ENABLE,ONEAPP_K8S_ADDRESS,ONEAPP_K8S_TOKEN,ONEAPP_K8S_HASH,ONEAPP_K8S_NODENAME,ONEAPP_K8S_PORT,ONEAPP_K8S_PODS_NETWORK,ONEAPP_K8S_ADMIN_USERNAME",
"LOGO": "images/logos/centos.png",
"MEMORY_UNIT_COST": "MB",
"ONEGATE_ENABLE": "YES",
"READY": "YES",
"ROLE_NAME": "worker",
"SERVICE_ID": "5",
"USER_INPUTS": {
"ONEAPP_K8S_ADDRESS": "O|text|K8s master node address/network (CIDR subnet)",
"ONEAPP_K8S_ADMIN_USERNAME": "O|text|UI dashboard admin account (default admin-user)",
"ONEAPP_K8S_HASH": "O|text|K8s hash (to join node into the cluster)",
"ONEAPP_K8S_NODENAME": "O|text|K8s master node name",
"ONEAPP_K8S_PODS_NETWORK": "O|text|K8s pods network in CIDR (default 10.244.0.0/16)",
"ONEAPP_K8S_PORT": "O|text|K8s API port (default 6443)",
"ONEAPP_K8S_TOKEN": "O|password|K8s token (to join node into the cluster)",
"ONEGATE_ENABLE": "M|boolean|Enable OneGate reporting? (req. for multi-node)| |YES"
}
},
"TEMPLATE": {
"NIC": [
{
"IP": "192.168.201.11",
"MAC": "02:00:c0:a8:c9:0b",
"NETWORK": "vlan201"
}
]
}
}
}
So the startup scripts on the VM can check ROLE_NAME and see that this is a "worker" node, which means it needs some information to connect to the master node: namely ONEAPP_K8S_ADDRESS, ONEAPP_K8S_TOKEN, and ONEAPP_K8S_HASH, as listed in the instructions for the appliance from the app store.
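A minimal sketch of how a startup script might branch on the role (assuming jq is available in the image, which the appliance doesn't guarantee):
# Ask OneGate about this VM and pull out the role assigned by OneFlow
ROLE=$(onegate vm show --json | jq -r '.VM.USER_TEMPLATE.ROLE_NAME')
if [ "$ROLE" = "worker" ]; then
    echo "worker node: need the master's address, token, and hash to join"
fi
Step 2 - Gather information about the service
If we query OneGate for the service information that the VM is instantiated under, we get: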
[root@onekube-ip-192-168-201-11 ~]# onegate service show --json
{
"SERVICE": {
"name": "ON Kubernetes",
"id": "5",
"state": 2,
"roles": [
{
"name": "master",
"cardinality": 1,
"state": "2",
"nodes": [
{
"deploy_id": 64,
"running": true,
"vm_info": {
"VM": {
"NAME": "master_0_(service_5)",
"ID": "64",
"STATE": "3",
"LCM_STATE": "3",
"USER_TEMPLATE": {
"DESCRIPTION": "Master Template",
"HYPERVISOR": "kvm",
"INPUTS_ORDER": "ONEGATE_ENABLE,ONEAPP_K8S_ADDRESS,ONEAPP_K8S_TOKEN,ONEAPP_K8S_HASH,ONEAPP_K8S_NODENAME,ONEAPP_K8S_PORT,ONEAPP_K8S_PODS_NETWORK,ONEAPP_K8S_ADMIN_USERNAME",
"LOGO": "images/logos/centos.png",
"MEMORY_UNIT_COST": "MB",
"ONEGATE_ENABLE": "YES",
"ONEGATE_K8S_HASH": "ea404b70f9a1f96fcfcae8abc5ba2a603ea22d9be9ca3c0580f3405060a50cd",
"ONEGATE_K8S_MASTER": "192.168.201.10",
"ONEGATE_K8S_TOKEN": "87f1ee.w9rln13ryyxlv56",
"ONEGATE_K8S_UI_LOGIN_TOKEN": "eyJhbGciOiJSUzI1NiIsImtpZCI6IlRpUk9LaFhlMFpKMnhyYVBhOFPT01qeFF1emQtbW00dGpnQXR6S2tBelEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTRzNm1qIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiZWIxNjNiYy04MDA4LTQ2NGItOTBlMS0yMTJmZDgyZTY0MTUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.FK7O6dmr2E3A-MwU7hxLFBK8jr7i6gfXk9aqF7PB7jnJHk8NNSRAf31h9NEBNwQkkDDcJr4Tzq5YX5rTixtmKJSsRnyhPN2Lw0IidpOIqgDCTYn_sI1_H40nioh7STbFlCaPmaw3cuwtN1Y2yEq25cocElN8axnjFqz7oQ2XYA5iKQP0u-0A6UqkIP2s07ccPpXEgBHhQX_ishnmp1k_jwtHQdpMVfcIgjDTKtQFt4BUfz-FIoauzegA3aQXlv7zcaXNx2B6XDsY3-sYjp0MPifR-1QVXBaP1ZxQ_7568CTLqLW3twooFg9ONwAnhTPN9pYVAkzdiIbOQXBWkKhG5w",
"ONEGATE_K8S_UI_PROXY_URL": "http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/",
"READY": "YES",
"ROLE_NAME": "master",
"SERVICE_ID": "5",
"USER_INPUTS": {
"ONEAPP_K8S_ADDRESS": "O|text|K8s master node address/network (CIDR subnet)",
"ONEAPP_K8S_ADMIN_USERNAME": "O|text|UI dashboard admin account (default admin-user)",
"ONEAPP_K8S_HASH": "O|text|K8s hash (to join node into the cluster)",
"ONEAPP_K8S_NODENAME": "O|text|K8s master node name",
"ONEAPP_K8S_PODS_NETWORK": "O|text|K8s pods network in CIDR (default 10.244.0.0/16)",
"ONEAPP_K8S_PORT": "O|text|K8s API port (default 6443)",
"ONEAPP_K8S_TOKEN": "O|password|K8s token (to join node into the cluster)",
"ONEGATE_ENABLE": "M|boolean|Enable OneGate reporting? (req. for multi-node)| |YES"
}
},
"TEMPLATE": {
"NIC": [
{
"IP": "192.168.201.10",
"MAC": "02:00:c0:a8:c9:0a",
"NETWORK": "vlan201"
}
]
}
}
}
}
]
},
{
"name": "worker",
"cardinality": 2,
"state": "2",
"nodes": [
{
"deploy_id": 68,
"running": true,
"vm_info": {
"VM": {
"NAME": "worker_3_(service_5)",
"ID": "68",
"STATE": "3",
"LCM_STATE": "3",
"USER_TEMPLATE": {
"DESCRIPTION": "Master Template",
"ERROR": "Tue May 19 03:43:16 2020 : Error shutting down VM: Could not shutdown one-68",
"HYPERVISOR": "kvm",
"INPUTS_ORDER": "ONEGATE_ENABLE,ONEAPP_K8S_ADDRESS,ONEAPP_K8S_TOKEN,ONEAPP_K8S_HASH,ONEAPP_K8S_NODENAME,ONEAPP_K8S_PORT,ONEAPP_K8S_PODS_NETWORK,ONEAPP_K8S_ADMIN_USERNAME",
"LOGO": "images/logos/centos.png",
"MEMORY_UNIT_COST": "MB",
"ONEGATE_ENABLE": "YES",
"READY": "YES",
"ROLE_NAME": "worker",
"SERVICE_ID": "5",
"USER_INPUTS": {
"ONEAPP_K8S_ADDRESS": "O|text|K8s master node address/network (CIDR subnet)",
"ONEAPP_K8S_ADMIN_USERNAME": "O|text|UI dashboard admin account (default admin-user)",
"ONEAPP_K8S_HASH": "O|text|K8s hash (to join node into the cluster)",
"ONEAPP_K8S_NODENAME": "O|text|K8s master node name",
"ONEAPP_K8S_PODS_NETWORK": "O|text|K8s pods network in CIDR (default 10.244.0.0/16)",
"ONEAPP_K8S_PORT": "O|text|K8s API port (default 6443)",
"ONEAPP_K8S_TOKEN": "O|password|K8s token (to join node into the cluster)",
"ONEGATE_ENABLE": "M|boolean|Enable OneGate reporting? (req. for multi-node)| |YES"
}
},
"TEMPLATE": {
"NIC": [
{
"IP": "192.168.201.11",
"MAC": "02:00:c0:a8:c9:0b",
"NETWORK": "vlan201"
}
]
}
}
}
},
{
"deploy_id": 69,
"running": true,
"vm_info": {
"VM": {
"NAME": "worker_4_(service_5)",
"ID": "69",
"STATE": "3",
"LCM_STATE": "3",
"USER_TEMPLATE": {
"DESCRIPTION": "Master Template",
"HYPERVISOR": "kvm",
"INPUTS_ORDER": "ONEGATE_ENABLE,ONEAPP_K8S_ADDRESS,ONEAPP_K8S_TOKEN,ONEAPP_K8S_HASH,ONEAPP_K8S_NODENAME,ONEAPP_K8S_PORT,ONEAPP_K8S_PODS_NETWORK,ONEAPP_K8S_ADMIN_USERNAME",
"LOGO": "images/logos/centos.png",
"MEMORY_UNIT_COST": "MB",
"ONEGATE_ENABLE": "YES",
"READY": "YES",
"ROLE_NAME": "worker",
"SERVICE_ID": "5",
"USER_INPUTS": {
"ONEAPP_K8S_ADDRESS": "O|text|K8s master node address/network (CIDR subnet)",
"ONEAPP_K8S_ADMIN_USERNAME": "O|text|UI dashboard admin account (default admin-user)",
"ONEAPP_K8S_HASH": "O|text|K8s hash (to join node into the cluster)",
"ONEAPP_K8S_NODENAME": "O|text|K8s master node name",
"ONEAPP_K8S_PODS_NETWORK": "O|text|K8s pods network in CIDR (default 10.244.0.0/16)",
"ONEAPP_K8S_PORT": "O|text|K8s API port (default 6443)",
"ONEAPP_K8S_TOKEN": "O|password|K8s token (to join node into the cluster)",
"ONEGATE_ENABLE": "M|boolean|Enable OneGate reporting? (req. for multi-node)| |YES"
}
},
"TEMPLATE": {
"NIC": [
{
"IP": "192.168.201.12",
"MAC": "02:00:c0:a8:c9:0c",
"NETWORK": "vlan201"
}
]
}
}
}
}
]
}
]
}
}
If we parse the JSON for the role where SERVICE->roles->name = "master" and then look under nodes->vm_info->VM->USER_TEMPLATE, we see ONEGATE_K8S_HASH, ONEGATE_K8S_MASTER, and ONEGATE_K8S_TOKEN, which gives the VM all the information it needs to start up a Kubernetes worker node and join it to the cluster.
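As a sketch, a worker-side script could extract those values and join with them. This assumes jq is available and that the appliance joins via kubeadm (the token/hash pair matches kubeadm's discovery format), so treat it as an illustration rather than the appliance's actual code:
# Pull the master role's USER_TEMPLATE out of the service document
UT=$(onegate service show --json | jq '.SERVICE.roles[] | select(.name == "master") | .nodes[0].vm_info.VM.USER_TEMPLATE')
K8S_MASTER=$(echo "$UT" | jq -r '.ONEGATE_K8S_MASTER')
K8S_TOKEN=$(echo "$UT" | jq -r '.ONEGATE_K8S_TOKEN')
K8S_HASH=$(echo "$UT" | jq -r '.ONEGATE_K8S_HASH')
# Join this node to the cluster using the master's advertised endpoint
kubeadm join "${K8S_MASTER}:6443" \
    --token "$K8S_TOKEN" \
    --discovery-token-ca-cert-hash "sha256:${K8S_HASH}"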