You know the drill. You need to set up a mobile datacenter that you move around from one place to another. When you arrive on location, they hand you a single ethernet cable with DHCP on it to connect to the internet. Your mission:
-Have multiple VLANs and be able to create new ones
-Route between those VLANs
-Have internet access on all your VMs
-Be able to plug the demo rack in anywhere without any modification.
Previously we had a "hardware" router: an IBM blade running CentOS with the tg3 Broadcom drivers, doing simple forwarding. It's not too hard to set up, but you get a shipload of extra services you don't need, plus you lose a whole blade just for routing.
The solution: Vyatta. It is an open source router that aims to compete with the Cisco CLI. It's easy and it's free ... or at least the software is; the documentation isn't.
So this is how we set up the VM:
-eth0: an E1000 ethernet adapter, placed in VLAN 4095. In vSphere this passes all incoming packets on the vSwitch to the VM without removing the VLAN tags, essentially creating a trunk between your vSwitch and your VM
-eth1: an E1000 ethernet adapter, placed in VLAN 12. This VLAN is presented as a single port on one of our physical switches. This is where you plug in your uplink
-a 2 GB hard disk
Installing Vyatta is simple. Download the live CD, boot from it, and log in with root : vyatta. Once you are logged in, execute
$ install-system
Follow the wizard and reboot. Congrats, you have a brand new virtual router.
After the reboot, log in with user vyatta and password vyatta. I executed the following
$ configure
This puts you into Vyatta's configure mode ... like configure terminal on Cisco
$ set interfaces ethernet eth0 address 192.168.114.254/24
$ set interfaces ethernet eth0 vif 15 address 192.168.115.254/24
$ set interfaces ethernet eth0 vif 16 address 192.168.116.254/24
$ set interfaces ethernet eth1 address dhcp
The first line handles untagged traffic. The second and third commands configure the other VLANs (15 and 16): each creates a virtual adapter (vif) on eth0, and the vif number defines the VLAN tag. The last command configures eth1 for DHCP.
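To make the addressing plan behind those commands explicit, here is a small sketch using Python's ipaddress module (the subnets and the .254 router convention are taken from the commands above; nothing here is Vyatta-specific):

```python
import ipaddress

# One subnet per VLAN; the router takes .254 in each, as configured above.
plan = {
    "eth0 (untagged)": "192.168.114.0/24",
    "eth0 vif 15":     "192.168.115.0/24",
    "eth0 vif 16":     "192.168.116.0/24",
}

for iface, cidr in plan.items():
    net = ipaddress.ip_network(cidr)
    router = net.network_address + 254  # the router address on this VLAN
    print(f"{iface}: network {net}, router {router}/24")
```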
$ set system host-name vRouter
$ set system domain-name mo.ilnk
$ set system name-server 192.168.115.1
$ set service ssh
The first two commands set the host name and DNS domain, the third sets the DNS server, and the last enables SSH.
$ set service nat rule 1 source address 192.168.114.0/24
$ set service nat rule 1 outbound-interface eth1
$ set service nat rule 1 type masquerade
$ set service nat rule 2 source address 192.168.115.0/24
$ set service nat rule 2 outbound-interface eth1
$ set service nat rule 2 type masquerade
$ set service nat rule 3 source address 192.168.116.0/24
$ set service nat rule 3 outbound-interface eth1
$ set service nat rule 3 type masquerade
These set up three NAT rules. The first line of each rule defines which source addresses should be NATed, the second defines the uplink interface, and the third enables masquerading. Masquerading rewrites each outgoing packet so it appears to originate from the uplink interface, masking your demo environment and giving you internet access anywhere, on every VLAN. Remember you'll need one rule per VLAN.
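Vyatta is Linux-based and programs netfilter under the hood, so the three rules translate to roughly the following iptables commands (a sketch; the exact chain and rule layout Vyatta generates may differ by version):

```shell
# Masquerade each VLAN's subnet on the way out of the uplink (eth1).
iptables -t nat -A POSTROUTING -s 192.168.114.0/24 -o eth1 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.168.115.0/24 -o eth1 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.168.116.0/24 -o eth1 -j MASQUERADE
```

MASQUERADE (rather than SNAT) is the right fit here precisely because eth1 gets its address from DHCP: it always uses whatever address the interface currently has.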
$ run renew dhcp interface eth1
This runs a DHCP renew on eth1, like running dhclient eth1 on many Linux boxes. It also sets the default route if your DHCP server announces one.
$ set service dhcp-server shared-network-name ETH0_16_POOL subnet 192.168.116.0/24 start 192.168.116.10 stop 192.168.116.200
$ set service dhcp-server shared-network-name ETH0_16_POOL subnet 192.168.116.0/24 default-router 192.168.116.254
$ set service dhcp-server shared-network-name ETH0_16_POOL subnet 192.168.116.0/24 dns-server 192.168.115.1
This defines a DHCP scope in VLAN 16 so that you can plug and play. You can create more than one pool per VLAN if you like; the subnet you specify determines the VLAN.
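It's worth sanity-checking a scope like this before committing: the lease range and the default router must all fall inside the declared subnet. A quick Python sketch using the values from the commands above:

```python
import ipaddress

# The VLAN 16 scope as defined above.
subnet = ipaddress.ip_network("192.168.116.0/24")
start = ipaddress.ip_address("192.168.116.10")
stop = ipaddress.ip_address("192.168.116.200")
router = ipaddress.ip_address("192.168.116.254")

# Everything must live inside the subnet, and the range must be ordered.
assert start in subnet and stop in subnet and router in subnet
assert start < stop

print(f"pool of {int(stop) - int(start) + 1} leases in {subnet}, gateway {router}")
```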
$ commit
$ save
$ exit
Commit essentially puts your work into effect. Save is like copy run start on Cisco: it writes the running config you committed to the boot config, so it survives a reboot.
You don't need to set up a default gateway because you get one from your DHCP lease. If you need a static IP instead, you can assign it the same way as you did on eth0, but then you'll also need the following command to set the router's default gateway
$ set system gateway-address 192.0.2.99
Remember to commit and save ;)
Just for the fun of it, we love to sing bom vyatta at work
2010/02/05
2010/02/01
Ping loss with ESX 4.0
A very short post, but it might be handy for people out there and for myself as a reference.
Friday I was at a customer site with the problem that they lost 5 pings to virtual machines every now and then. Unrelated to other events, it was hell to debug. What was also bizarre was that we didn't see any loss on the network (we checked statistics with ethtool and on the switches). The packets were simply delivered too late to the VM, which then of course replied too late and caused a ping timeout (we saw this behavior with a network sniffer).
We started to analyse the variables in play: what had changed since the network problems began? Well, the SAN had changed. Looking further in the logs (/var/log/messages) revealed a multipathing problem (SATP, PSP and NMP warnings). The client was migrating to a newer version of SVC, replacing the nodes in the process. Multipathing worked fine, but the old nodes were still showing up as dead paths. These were probably fixed paths (because SVC likes fixed), and the ESX hypervisor kept trying to fail back to the old paths. Rescanning the HBAs or rebooting the ESX server solved the problem (for now?)