Compute Nodes
All three compute nodes are Lenovo M73s with Intel Core i5-4590 processors, 16 GB of DDR3-1600 RAM, 500 GB SATA disks, built-in Realtek 1 Gb NICs, and PCI Express Realtek 1 Gb cards. Any cluster should have at least three nodes: with three nodes, a quorum (majority) requires at least two nodes to agree on any decision. That keeps the cluster consistent, since every change must be agreed upon by at least two nodes and no single isolated node can make arbitrary changes on its own.
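The quorum size for an n-node cluster is floor(n/2) + 1, which a quick shell sketch can confirm:

```shell
# Quorum (majority) size for an n-node cluster is floor(n/2) + 1.
n=3
quorum=$(( n / 2 + 1 ))
echo "A $n-node cluster needs $quorum nodes for quorum."
# prints: A 3-node cluster needs 2 nodes for quorum.
```

This is also why clusters are usually built with an odd number of nodes: going from 3 nodes to 4 raises the quorum from 2 to 3 without letting you tolerate any additional failures.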
I bought these 3 nodes from Free IT Athens for $65 each.
I'm running a Proxmox cluster across all 3 machines. I'm not going to waste anybody's time trying to rewrite the existing documentation, which is pretty solid. You might want to follow along with these diagrams for the rest of this page.
Since all the hardware is identical, the device names are the same on every node. The two NICs are enp4s0 and enp3s0. The enp3s0 NIC on each node is attached directly to the storage network. The enp4s0 NIC is a trunk interface attached to the vmbr0 bridge, and each node's IP address is applied to its vmbr0 bridge interface.
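As a sketch, the relevant part of /etc/network/interfaces on a node might look like the following. The specific addresses are assumptions for illustration (a .101 host address for node101, a made-up 10.0.0.0/24 storage subnet, and an assumed gateway); only the overall shape matches this setup:

```
# Storage network NIC, attached directly (no bridge)
auto enp3s0
iface enp3s0 inet static
        address 10.0.0.101/24    # storage subnet is an assumption

# Trunk NIC, enslaved to the bridge, no address of its own
auto enp4s0
iface enp4s0 inet manual

# VLAN-aware bridge; the node's management IP lives here
auto vmbr0
iface vmbr0 inet static
        address 192.168.200.101/24   # assumed host address on the native VLAN
        gateway 192.168.200.1        # assumed gateway
        bridge-ports enp4s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

The bridge-vlan-aware and bridge-vids lines are what make the trunk behavior possible: the bridge carries tagged VLANs 2-4094 while the node itself sits untagged on the native VLAN.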
Remember that 192.168.200.0/24 is the addressing on the native VLAN for the ports on the switch.
The 192.168.201.0/24 is the addressing for the VMs under Proxmox and is on VLAN 201.
Proxmox uses the vmbr0 bridge interface to connect the VMs. For example, here's the bridge configuration on node101:
bridge link show
3: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 100
5: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 2
The tap interface is the connection to the VM running Ubuntu + microk8s on the node. The tap103i0 name is assigned by Proxmox.
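Proxmox records the VLAN tag for that VM NIC in the guest's config file. A sketch of the relevant line in /etc/pve/qemu-server/103.conf (the MAC address here is made up for illustration):

```
# /etc/pve/qemu-server/103.conf (excerpt; MAC address is illustrative)
net0: virtio=DE:AD:BE:EF:01:03,bridge=vmbr0,tag=201
```

The tag=201 option is what puts tap103i0 onto VLAN 201, as the bridge vlan output below shows.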
bridge vlan show
port vlan-id
enp4s0 1 PVID Egress Untagged
2
3 <<long list of vlans from 4 to 4093>>
4094
vmbr0 1 PVID Egress Untagged
tap103i0 201 PVID Egress Untagged
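Proxmox manages these VLAN mappings itself, but if you ever need to inspect or adjust one by hand, the iproute2 bridge tool can do it. A sketch for illustration only (requires root and an existing tap103i0 port, so not something to run casually on a healthy node):

```shell
# Make VLAN 201 the port VLAN ID (PVID) on tap103i0 and strip the tag on egress,
# matching the "201 PVID Egress Untagged" entry above.
bridge vlan add dev tap103i0 vid 201 pvid untagged

# Inspect the result for just that port
bridge vlan show dev tap103i0
```

"PVID" means untagged frames arriving on the port are assigned that VLAN, and "Egress Untagged" means frames on that VLAN leave the port with the tag stripped, so the VM never sees 802.1Q tags at all.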
Conclusion
With this setup, we’ve established a robust Proxmox cluster that leverages the capabilities of VLAN-aware bridging and ensures high availability and consistency across all compute nodes. The cluster not only provides a reliable environment for hosting VMs and containers but also exemplifies a cost-effective approach to building a powerful lab environment.
By implementing VLAN segmentation and using dedicated storage networks, we can efficiently manage network traffic and ensure optimal performance for each VM and container.