Compute Nodes
The two compute nodes are identical hardware despite being in different cases. I borrowed them from a friend, since the shelter-in-place rules left me with few options for going out and buying used gear. Each has an AMD Phenom II X4 965 processor, 16G of RAM, and a built-in Realtek gigabit Ethernet port. A second Ethernet port is added via a USB ASIX gigabit Ethernet adapter. For a price point of reference, Free IT Athens sells i5/i7 machines (broadly comparable for our purposes) for $65.00 each, and 16G memory upgrades run about $100. So on the used market, each compute node comes in at under $200. Both compute nodes are running Ubuntu 18.04 64-bit server.
Once again I'm not going to explicitly describe the install process, since the web site has fairly complete install instructions. However, I do recommend opening an editor and carefully pasting in all of the parts relevant to your environment, since the instructions can get somewhat confusing with the different configurations interleaved the way they are. When configuring the virtual networks I set the "Physical device" in the configuration to vmnets. This will make more sense in the context of the netplan local changes described below.
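For reference, the CLI equivalent of that "Physical device" setting would be a virtual network template with the PHYDEV attribute set to vmnets. This is only a sketch of what I did through Sunstone, assuming the bridged network driver; the network name, bridge name, and address range below are hypothetical placeholders, and only PHYDEV reflects my actual setting.

# vmnet.tmpl -- hypothetical example; only PHYDEV matches my setup
NAME    = "vmnet"
VN_MAD  = "bridge"
PHYDEV  = "vmnets"
BRIDGE  = "br-vmnet"
AR      = [ TYPE = "IP4", IP = "192.168.200.50", SIZE = "50" ]

oneadmin@frontend:~$ onevnet create vmnet.tmpl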
Local Changes
/etc/netplan/50-cloud-init.yaml

user@node1:~$ more /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        san:
            match:
                macaddress: 8c:89:a5:32:86:90
            addresses:
            - 192.168.199.101/24
            mtu: 9000
            nameservers: {}
            set-name: san
        vmnets:
            match:
                macaddress: 00:0e:c6:bb:3f:85
            dhcp4: no
            addresses:
            - 192.168.200.101/24
            gateway4: 192.168.200.1
            nameservers:
                addresses:
                - 192.168.200.1
                search:
                - cluster.sysnetinc.com
            set-name: vmnets
    version: 2
When building a cluster out of used and leftover parts there is no guarantee that the hardware will match, and since OpenNebula expects the access ports it manages to be named identically on every node, it's safest to force the naming of the network interfaces. By using the match property in netplan I explicitly named the interfaces on the compute nodes. A side effect of using match is that it also worked around the problem, discovered while setting up the OpenNebula Frontend, of the MTU not being applied to the correct interface. The names are atypical for a Linux box, but since I had to name the interfaces explicitly anyway, I chose descriptive names.
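As a sanity check, the renaming and the jumbo MTU can be verified after applying the netplan configuration. This is a minimal sketch; the exact output will vary with your hardware.

user@node1:~$ sudo netplan try      # applies the config and rolls back unless confirmed
user@node1:~$ sudo netplan apply
user@node1:~$ ip link show san      # should report the san name and mtu 9000
user@node1:~$ ip addr show vmnets   # should report 192.168.200.101/24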
/var/lib/one/datastores/** r,
It turns out that under Ubuntu 18.04 the AppArmor configuration prevents the deployment of virtual machine images from the OpenNebula shared or qcow2 datastores. I had to explicitly name the datastore path and use a recursive wildcard to cover all of the subdirectories and files, as in the rule above.
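For context, on a stock Ubuntu 18.04 install with libvirt, a rule like this would typically go into the libvirt-qemu AppArmor abstraction. The file paths below are an assumption based on how the libvirt packages lay things out, so verify them on your own system before editing.

# /etc/apparmor.d/abstractions/libvirt-qemu   (assumed location; add the datastore rule here)
  /var/lib/one/datastores/** r,

user@node1:~$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd   # reload the libvirtd profile so the change takes effect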