Setting up ComSSA’s Virtual Server

Users and Groups

# apt-get install sudo screen bridge-utils libvirt-bin libvirt-clients tmux mosh openvpn git vim isc-dhcp-server bind9

ComSSA hosts PMS inside UCC’s machine room, and UCC asks for just a few simple conditions to make life easier on everyone:

ucc-wheel must have sudo on the hypervisor
ucc-wheel must have some contact details for the comssa-wheel subcommittee

This is made really easy by adding users to groups instead of managing individual users.

addgroup ucc-wheel
addgroup comssa-wheel

I’m Adam, and the other two people responsible for this are delan and nroach44, so it’s time to add ourselves to comssa-wheel:

adduser delan comssa-wheel
adduser nroach44 comssa-wheel
adduser adam comssa-wheel

Now let’s add the two wheel groups to sudoers. This is done with the format %groupname ALL=(ALL:ALL) ALL:

echo "%ucc-wheel ALL(ALL:ALL) ALL" >> /etc/sudoers
echo "%comssa-wheel ALL(ALL:ALL) ALL" >> /etc/sudoers

Alternatively, if you would like passwordless sudo (i.e. because you’ve already had to authenticate to get a shell anyway), you can substitute the last part like this:

%comssa-wheel ALL=(ALL:ALL) NOPASSWD: ALL   # no password
%comssa-wheel ALL=(ALL:ALL) ALL             # needs password

Any user in those groups should be able to sudo now, and if you want to take someone’s sudo access away, just remove them from their respective group.
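If you’d rather not echo straight into /etc/sudoers (one typo there and nobody can sudo), the same rules work as a drop-in file. A sketch, assuming Debian’s stock sudoers still has its #includedir /etc/sudoers.d line; the filename is made up:

echo "%ucc-wheel ALL=(ALL:ALL) ALL" > /etc/sudoers.d/wheel-groups
echo "%comssa-wheel ALL=(ALL:ALL) ALL" >> /etc/sudoers.d/wheel-groups
chmod 0440 /etc/sudoers.d/wheel-groups
visudo -cf /etc/sudoers.d/wheel-groups   # syntax check; refuses a broken file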

Networking

We’re given a single IP for the box, so we’re going to need NAT. We could just use libvirt to do that, but it isn’t as powerful or as persistent with machine interface names and address allocation. Instead we’ll make a bridge and add each VM’s interface to it; this way we can write static DHCP leases and forward ports using iptables.

brctl addbr br0
nano /etc/network/interfaces

Make your bridge declaration look similar to this: eth0 is our external connection, br0 is our bridge. To be perfectly clear, at no point should eth0 be bridged into br0; instead we’re going to be routing between the two later on.

If you’re wondering why we’re using 192.168.1.0/24 instead of a bigger subnet like 172.16/12 or 10/8, it’s because UWA already uses those ranges, and while it doesn’t particularly matter, we might want to connect to a machine in those ranges at some point.

auto br0
iface br0 inet static
 address 192.168.1.1
 netmask 255.255.255.0
 broadcast 192.168.1.255
 # no physical ports; VM interfaces get attached to the bridge at runtime
 bridge_ports none
 bridge_stp on
 bridge_maxwait 0
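Once that’s in place, bring the bridge up and sanity-check it:

ifup br0
brctl show br0
ip addr show br0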

We plan to give out VMs to members of the club, and not all of those members’ VMs are to be trusted and let loose on the UWA network, so we’re also installing OpenVPN so that some traffic can be routed through the VPN instead of UCC’s router.

FIXME

Now let’s start working on a firewall. We’ll be using iptables both to secure the box and to forward ports to users’ VMs.

There are much better ways to do this, but I’m stubborn: I include a shell script in rc.local to apply firewall rules at boot.

nano /etc/rc.local
#!/bin/sh -e

/etc/ip.tables

exit 0
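rc.local runs /etc/ip.tables as a program, so once the script below exists, remember to make it executable:

chmod +x /etc/ip.tables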
nano /etc/ip.tables
#!/bin/sh
## Clear all rules
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

## Default Block all except outbound
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
#       iptables -P OUTPUT DROP  # bad, no, don't do this

## Allow responses
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

## Allow ICMP
        iptables -A INPUT -p icmp -j ACCEPT
        iptables -A OUTPUT -p icmp -j ACCEPT

## Allow internal TCP and UDP traffic
        iptables -A INPUT -p tcp -s 192.168.1.0/24 -j ACCEPT
        iptables -A INPUT -p udp -s 192.168.1.0/24 -j ACCEPT
        iptables -A INPUT -p icmp -s 192.168.1.0/24 -j ACCEPT

## Allow loopback traffic
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

## Disable iptables affecting linux bridges
        echo "0" > /proc/sys/net/bridge/bridge-nf-call-iptables
        echo "0" > /proc/sys/net/bridge/bridge-nf-call-ip6tables

## Enable ip forwarding, routing
 echo "1" > /proc/sys/net/ipv4/ip_forward

## Allow ssh externally
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT
## Consider replacing with
#       iptables -A INPUT -p tcp -s 0/0 -d $SERVER_IP --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
#       iptables -A OUTPUT -p tcp -s $SERVER_IP -d 0/0 --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT


## Allow DNS Queries
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        iptables -A INPUT -p udp --sport 53 -j ACCEPT
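As written, the script above secures the hypervisor itself, but it never actually NATs the VMs out or forwards any ports in. A sketch of the missing pieces, assuming eth0 is the external interface and using a hypothetical VM at 192.168.1.100 that wants SSH reachable on external port 2222:

## Masquerade VM traffic heading out eth0
        iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE

## Let forwarded replies back through, and let VMs out
        iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -s 192.168.1.0/24 -j ACCEPT

## Example port forward: external :2222 -> hypothetical VM at 192.168.1.100:22
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to-destination 192.168.1.100:22
        iptables -A FORWARD -p tcp -d 192.168.1.100 --dport 22 -j ACCEPT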

LDAP

Can I legally be aborted at 20 years of age, or is that a little too late? Time to clear up a few things: LDAP is the protocol, not the program. You don’t install LDAP, you install an implementation of LDAP, and you might even install an implementation of Kerberos or PAM alongside it if you want to use it for authentication.

Now let’s take a trip down rusty coat-hanger lane.

# apt-get install slapd ldap-utils
# dpkg-reconfigure -plow slapd

You know what, fuck this. I’ll do LDAP later.

DNS+DHCP

Because this box is going to host a ton of member virtual machines, all sharing our one external IP, it’s probably best to set up a DHCP server to allocate addresses, and a local DNS server that is authoritative for local VM addresses, for internal access only. Our normal DNS will just resolve every name to PMS’s address, which won’t be very useful to one VM trying to connect to another VM.

First let’s configure isc-dhcp-server, starting by setting dhcpd to listen only on our bridge br0 and not on UCC’s network.

# vim /etc/default/isc-dhcp-server
INTERFACES="br0"

Now let’s set up dhcpd.conf. Clear this file’s shit out and use this instead:

# vim /etc/dhcp/dhcpd.conf
include "/etc/bind/rndc.key";
include "/etc/dhcp/static-leases";

ddns-updates on;
ddns-update-style interim;
update-static-leases on;

default-lease-time 86400; #24 hours
max-lease-time 86400;
authoritative;
allow booting;
allow bootp;

log-facility local7;
allow client-updates;

#DNS Related Settings
zone comssa.org.au. {
    primary localhost;
    key rndc-key;
}

zone 1.168.192.in-addr.arpa {
    primary localhost;
    key rndc-key;
}

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    option subnet-mask 255.255.255.0;
    option routers 192.168.1.1;
    option domain-name-servers 192.168.1.1;
    option domain-name "pms.comssa.org.au";
    ddns-domainname "pms.comssa.org.au.";
}

# Also in dhcpd.conf: silence Windows 7 WPAD errors in the logs, because Windows
# is a rad cool OS that requires you to change everything about the environment
# it is in rather than fix the actual issue at hand; it's akin to putting your
# dick in a door frame and slamming it shut because Micro$haft asked you to.
# I promise I'm not salty.
option wpad code 252 = text;
option wpad "\n";
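Once dhcpd.conf is in shape, check that it parses before restarting the daemon:

# dhcpd -t -cf /etc/dhcp/dhcpd.conf
# service isc-dhcp-server restart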

You can put statically assigned addresses both inside and outside of that range, in the file /etc/dhcp/static-leases (the file included at the top of dhcpd.conf).
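A sketch of what an entry in that file might look like, with a made-up name and MAC address:

host examplevm {
    hardware ethernet 52:54:00:aa:bb:cc;   # the VM's MAC address
    fixed-address 192.168.1.50;
}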

Upon installation of bind9 (the very first line of this page) an rndc key file should have been placed at /etc/bind/rndc.key. However, if it wasn’t, google “create rndc key”; mine was made for me, so I couldn’t be bothered writing it up.
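That said, if you’d rather not google it, something like this should do the trick (untested on this box; rndc-confgen writes a new key file and -c points it at the path Debian expects):

# rndc-confgen -a -c /etc/bind/rndc.key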

Next let’s configure the BIND DNS zones. Note this is only for resolving internally, as private IPs will be in the answer section for these queries, and they’ll mean shit all if you aren’t sitting inside UCC.

cd /etc/bind/
# do not touch named.conf please and thank you
# edit the files that actually make a fucking iota of difference

vim named.conf.options

options {
        directory "/var/cache/bind";
        forwarders {
                130.95.13.9;
        };
        dnssec-enable no;
        dnssec-validation no;
        auth-nxdomain no;
};
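named.conf.local is where the zone declarations themselves go, and for the DDNS updates above to work, each zone has to allow updates signed with the rndc key. Something along these lines should do it (a sketch: the db.* file paths are assumptions, /var/lib/bind is used so bind can write its journal files there, and you’ll still need to create those zone files with an SOA and NS record):

vim named.conf.local

include "/etc/bind/rndc.key";

zone "comssa.org.au" {
        type master;
        file "/var/lib/bind/db.comssa.org.au";
        allow-update { key rndc-key; };
};

zone "1.168.192.in-addr.arpa" {
        type master;
        file "/var/lib/bind/db.192.168.1";
        allow-update { key rndc-key; };
};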

Libvirt & QEMU+KVM

This is kinda lazy and kinda shitty: each user has to be added to the relevant groups for libvirt and QEMU. My username is ‘adam’ for the purposes of this example.

# adduser adam libvirt 
# adduser adam libvirt-qemu

From here on you can avoid the terminal for VM management by using Red Hat’s virt-manager, either by installing it on your local system and adding the server as a remote target, or by SSH X-forwarding and running it on the server.
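For the remote-target option, something like this from your own machine should work (the hostname here is made up for illustration):

virt-manager --connect qemu+ssh://adam@pms.comssa.org.au/system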

Storage and home directories

We’ve used Linux md RAID to put our disks into RAID 1 pairs, and then put LVM on top of that. The end result looks like this (a rough sketch of the commands follows the list):

  1. 350GB LVM (50GB for root, 200GB for VM roots, 100GB left over for LVM snapshotting)
  2. 1000GB LVM for user VMs and home directory storage (haven’t decided how to allocate this yet)
  3. 2000GB LVM scratch space for replications and backups of users’ home dirs
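The general shape of that setup, with hypothetical device names, and sizes from the first volume group only:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 50G -n root vg0
lvcreate -L 200G -n vmroots vg0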

The idea behind this is that there will be many different small VMs, but users will mount their external home directories from the hypervisor (over NFS, or Samba if it’s Windows, etc.).
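For the NFS case, a sketch of what that might look like, with hypothetical paths (and assuming nfs-kernel-server is installed on the hypervisor):

# /etc/exports on the hypervisor: export a member's home dir to the VM subnet only
/srv/home/adam 192.168.1.0/24(rw,sync,no_subtree_check)

# and on the member's VM:
mount -t nfs 192.168.1.1:/srv/home/adam /home/adam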
