DISCLAIMER: notes below are, at best, a fragmented collection of snippets from the session. Watch the recording if you want to know what was actually said.
Background: VMware pricing changes have prompted departments across campus to evaluate alternative solutions.
Common architecture: Linux kernel-based hypervisor (KVM) with a management UI layered on top.
The Electrical Engineering Department in the School of Engineering evaluated alternatives early on because VMware licensing is now applied per core rather than per socket.
Commercial support subscriptions are available for Proxmox, with prices depending on support tier and corresponding SLA. (https://www.proxmox.com/en/proxmox-virtual-environment/pricing)
Common migration strategy: mount a shared storage location (typically NFS) on both the legacy VMware cluster and the new Proxmox cluster.
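On the Proxmox side, the shared NFS mount can be declared as a storage entry. A minimal sketch of an `/etc/pve/storage.cfg` excerpt, with a hypothetical storage ID, server address, and export path:

```
# /etc/pve/storage.cfg (excerpt) -- names and addresses are placeholders
nfs: shared-migration
	server 192.0.2.10
	export /export/vm-shared
	content images,iso
	options vers=4
```

The same export would be mounted as a datastore on the VMware side, so disk images can be copied once and imported in place.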
SDN zones can be used to define virtual network segments and bridge connectivity across the VMware and Proxmox clusters.
Demo of a research cluster:
- 1385 CPUs
- 150TB of Ceph storage
- Heterogeneous cluster of Intel and AMD processors
- Set the VM CPU type to a lower common baseline to ensure compatibility (e.g., live migration) across the mixed Intel/AMD nodes
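One way to pin that common baseline is in the per-VM config. A sketch, assuming Proxmox VE 8 and a hypothetical VM ID 100:

```
# /etc/pve/qemu-server/100.conf (excerpt)
# x86-64-v2-AES is a generic CPU model available on both Intel and
# AMD hosts, so the VM can migrate between heterogeneous nodes
cpu: x86-64-v2-AES
```

Equivalent to `qm set 100 --cpu x86-64-v2-AES` on the CLI. A generic model trades some per-CPU features for migration safety.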
User roles can be integrated with AD. Users can be granted access to subsets of VMs with varying permission levels.
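A CLI sketch of the AD-backed permission flow, runnable only against a live Proxmox node; the realm name `campus-ad`, group, and VM ID are hypothetical:

```shell
# Pull users/groups from the configured AD realm (name is a placeholder)
pveum realm sync campus-ad
# Grant an AD group read/manage access to a single VM via the built-in
# PVEVMUser role (console, power control, but no hardware changes)
pveum acl modify /vms/100 --groups ee-researchers@campus-ad --roles PVEVMUser
```

Broader grants work the same way by changing the ACL path (e.g., a resource pool instead of a single VM).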
The backup service can be connected to a variety of target storage systems. Proxmox Backup Server (PBS) is the Proxmox appliance for backups.
Vendors that have traditionally worked with VMware are announcing Proxmox support.
Replication is supported between two different clusters.
Networking backbone at 10 Gbps or 25 Gbps.
SSO capability through OpenID Connect (built on OAuth 2.0), Microsoft Active Directory, LDAP, or Linux PAM.
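An OIDC realm is a few lines of cluster config. A sketch of an `/etc/pve/domains.cfg` excerpt, where the realm name, issuer URL, and client credentials are all placeholders for whatever the campus identity provider issues:

```
# /etc/pve/domains.cfg (excerpt) -- all values are hypothetical
openid: campus-sso
	issuer-url https://login.example.edu/realms/campus
	client-id pve-cluster
	client-key <secret>
	username-claim email
	autocreate 1
```

With `autocreate 1`, a Proxmox user record is created on first successful SSO login; permissions still have to be granted separately.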
ET Core Infrastructure team has a sandbox environment for evaluating VMware-to-Proxmox migration. An ESXi host's datastore can be mounted directly on Proxmox over SSH for migration.
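Once the ESXi datastore is visible on a Proxmox node, a guest's disk can be pulled into a new VM with `qm importdisk`. A sketch, runnable only on a Proxmox node; the mount point, VMDK path, VM ID 120, and storage name `ceph-vm` are hypothetical:

```shell
# Assumes the ESXi datastore is mounted at /mnt/esxi (e.g. via sshfs)
# and an empty target VM 120 has already been created with `qm create`.
qm importdisk 120 /mnt/esxi/vm01/vm01-flat.vmdk ceph-vm
```

The imported disk appears as an unused disk on VM 120 and then gets attached to a controller (SCSI/VirtIO) in the VM's hardware settings.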

