Vagrant – Part IV – Network topology using Juniper and Cumulus

In this part of the series, I will show how you can use Vagrant to spin up a topology using Juniper vQFX and Cumulus VX.

For testing purposes, I need a topology running on my laptop.

Because I wanted to learn Cumulus, I decided to use Cumulus VX in these tests to get a first look at it.

This is the topology that you will see quite often in the future, as I plan to use it for different scenarios and use cases: two spines (one Cumulus, one Juniper) and two leaves (one Cumulus, one Juniper) connected in a full mesh, with two servers attached to each leaf.

The content of the Vagrantfile is shown below and makes use of what we saw in the first three parts of the series:

Vagrant.configure(2) do |config|

config.ssh.insert_key = false
config.vm.synced_folder '.', '/vagrant', disabled: true
config.vbguest.auto_update = false # requires the vagrant-vbguest plugin

# Cumulus leaf

config.vm.define "leaf" do |leaf|
   leaf.vm.box = "CumulusCommunity/cumulus-vx"
   leaf.vm.hostname = "CUMULUS-LEAF"
   # network to cumulus spine - swp1
   leaf.vm.network "private_network", virtualbox__intnet: "cumulus-l-cumulus-s", auto_config: false
   # network to juniper spine - swp2
   leaf.vm.network "private_network", virtualbox__intnet: "cumulus-l-juniper-s", auto_config: false
   # network to srv1 - swp3
   leaf.vm.network "private_network", virtualbox__intnet: "srv1-cumulus-l", auto_config: false
   # network to srv3 - swp4
   leaf.vm.network "private_network", virtualbox__intnet: "srv3-cumulus-l", auto_config: false
end

# Cumulus Spine

config.vm.define "spine" do |spine|
   spine.vm.box = "CumulusCommunity/cumulus-vx"
   spine.vm.hostname = "CUMULUS-SPINE"
   # network to cumulus leaf - swp1
   spine.vm.network "private_network", virtualbox__intnet: "cumulus-l-cumulus-s", auto_config: false
   # network to juniper leaf - swp2
   spine.vm.network "private_network", virtualbox__intnet: "juniper-l-cumulus-s", auto_config: false
end

# Juniper leaf

config.vm.define "vqfxpfe1" do |vqfxpfe1|
    vqfxpfe1.vm.box = 'juniper/vqfx10k-pfe'
    vqfxpfe1.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "vqfx_internal_pfe_re1"
    vqfxpfe1.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--cpuexecutioncap", "50"]
    end
end

config.vm.define "vqfxre1" do |vqfxre1|
    vqfxre1.vm.box = 'juniper/vqfx10k-re'
    vqfxre1.vm.network "forwarded_port", guest: 830, host: 8833 # NETCONF over SSH, reachable from the host on port 8833
    vqfxre1.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "vqfx_internal_pfe_re1"
    vqfxre1.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "vqfx_internal_reserved1"
    vqfxre1.vm.hostname = "JUNIPER-LEAF"
    # network to juniper spine - xe-0/0/0
    vqfxre1.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "juniper-l-juniper-s"
    # network to cumulus spine - xe-0/0/1
    vqfxre1.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "juniper-l-cumulus-s"
    # network to srv2 - xe-0/0/2
    vqfxre1.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "srv2-juniper-l"
    # network to srv4 - xe-0/0/3
    vqfxre1.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "srv4-juniper-l"
end

# Juniper Spine

config.vm.define "vqfxpfe2" do |vqfxpfe2|
    vqfxpfe2.vm.box = 'juniper/vqfx10k-pfe'
    vqfxpfe2.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "vqfx_internal_pfe_re2"
    vqfxpfe2.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--cpuexecutioncap", "50"]
    end
end

config.vm.define "vqfxre2" do |vqfxre2|
    vqfxre2.vm.box = 'juniper/vqfx10k-re'
    vqfxre2.vm.network "forwarded_port", guest: 830, host: 8834 # NETCONF over SSH, reachable from the host on port 8834
    vqfxre2.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "vqfx_internal_pfe_re2"
    vqfxre2.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "vqfx_internal_reserved2"
    vqfxre2.vm.hostname = "JUNIPER-SPINE"
    # network to juniper leaf - xe-0/0/0
    vqfxre2.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "juniper-l-juniper-s"
    # network to cumulus leaf - xe-0/0/1
    vqfxre2.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "cumulus-l-juniper-s"
end

# Server 1

config.vm.define "srv1" do |srv1|
    srv1.vm.box = "olbat/tiny-core-micro"
    srv1.ssh.shell = "sh -l" # scoped to this VM; Tiny Core has no bash
    srv1.vm.hostname = "SRV1"
    srv1.vm.network 'private_network', ip: "10.10.10.11", virtualbox__intnet: "srv1-cumulus-l"
end

# Server 2

config.vm.define "srv2" do |srv2|
    srv2.vm.box = "olbat/tiny-core-micro"
    srv2.ssh.shell = "sh -l"
    srv2.vm.hostname = "SRV2"
    srv2.vm.network 'private_network', ip: "10.10.10.12", virtualbox__intnet: "srv2-juniper-l"
end

# Server 3

config.vm.define "srv3" do |srv3|
    srv3.vm.box = "olbat/tiny-core-micro"
    srv3.ssh.shell = "sh -l"
    srv3.vm.hostname = "SRV3"
    srv3.vm.network 'private_network', ip: "10.10.20.11", virtualbox__intnet: "srv3-cumulus-l"
end

# Server 4

config.vm.define "srv4" do |srv4|
    srv4.vm.box = "olbat/tiny-core-micro"
    srv4.ssh.shell = "sh -l"
    srv4.vm.hostname = "SRV4"
    srv4.vm.network 'private_network', ip: "10.10.20.12", virtualbox__intnet: "srv4-juniper-l"
end
end

The Vagrantfile is built manually, which requires a lot of work and is prone to errors.

Still, considering the size of this topology, it is probably faster to do it like this.

One other way would be to use a provisioner (like Ansible) to deploy multiple (almost identical) VMs and spin them up with some predefined configuration. The repetitive server definitions could also be generated with a loop, as sketched below.
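As a minimal sketch of the loop idea (not what I actually used here), the four server blocks above could be collapsed into a hash plus a loop inside the Vagrant.configure block, reusing the same box, IP addresses and internal network names:

# Hypothetical sketch: generate the four server VMs from a hash
# instead of writing each define block by hand.
servers = {
  "srv1" => ["10.10.10.11", "srv1-cumulus-l"],
  "srv2" => ["10.10.10.12", "srv2-juniper-l"],
  "srv3" => ["10.10.20.11", "srv3-cumulus-l"],
  "srv4" => ["10.10.20.12", "srv4-juniper-l"],
}

servers.each do |name, (ip, intnet)|
  config.vm.define name do |srv|
    srv.vm.box = "olbat/tiny-core-micro"
    srv.ssh.shell = "sh -l"      # Tiny Core has no bash
    srv.vm.hostname = name.upcase
    srv.vm.network "private_network", ip: ip, virtualbox__intnet: intnet
  end
end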

Regarding the topology itself, it is pretty straightforward.

For the Juniper vQFX, you need to use this private network on the PFE VM:

vqfxpfe1.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "vqfx_internal_pfe_re1"

And these two on the RE VM:

vqfxre1.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "vqfx_internal_pfe_re1"
vqfxre1.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "vqfx_internal_reserved1"

The first network on each VM is used for the communication between the two, while the second network on the RE VM is reserved for other purposes. Note that each RE/PFE pair needs its own internal network names (vqfx_internal_pfe_re1 for the first pair, vqfx_internal_pfe_re2 for the second); otherwise the pairs would end up on the same segment.

Any other private networks can be used as “network ports” (xe-0/0/0, xe-0/0/1 and so on) and need to be defined on the RE VM:

vqfxre1.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "juniper-l-juniper-s"

It is very important to use the same virtualbox__intnet name on the two devices that you want directly connected.
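For example, the link between the Cumulus leaf (swp2) and the Juniper spine (xe-0/0/1) uses the internal network name "cumulus-l-juniper-s" on both ends:

leaf.vm.network "private_network", virtualbox__intnet: "cumulus-l-juniper-s", auto_config: false
vqfxre2.vm.network 'private_network', auto_config: false, nic_type: '82540EM', virtualbox__intnet: "cumulus-l-juniper-s"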

After the project is initialized and started with vagrant up, this is the status of all the VMs:

parau-mbp:aut parau$ vagrant status
Current machine states:

leaf                      running (virtualbox)
spine                     running (virtualbox)
vqfxpfe1                  running (virtualbox)
vqfxre1                   running (virtualbox)
vqfxpfe2                  running (virtualbox)
vqfxre2                   running (virtualbox)
srv1                      running (virtualbox)
srv2                      running (virtualbox)
srv3                      running (virtualbox)
srv4                      running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
parau-mbp:aut parau$

Let’s connect to the Juniper spine and configure the interface towards the Cumulus leaf:

parau-mbp:aut parau$ vagrant ssh vqfxre2
--- JUNOS 17.4R1.16 built 2017-12-19 20:03:37 UTC
{master:0}
vagrant@vqfx-re> edit

Entering configuration mode

{master:0}[edit]
vagrant@vqfx-re# set system host-name vqfx-2-re

{master:0}[edit]
vagrant@vqfx-re# commit
configuration check succeeds
commit complete


{master:0}[edit]
vagrant@vqfx-2-re# set interfaces xe-0/0/1.0 family inet address 192.168.10.1/30

{master:0}[edit]
vagrant@vqfx-2-re# delete interfaces xe-0/0/1.0 family inet dhcp

{master:0}[edit]
vagrant@vqfx-2-re# commit
configuration check succeeds
commit complete

{master:0}[edit]
vagrant@vqfx-2-re#
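
The new address can be double-checked from configuration mode with a standard operational command (output not shown here):

{master:0}[edit]
vagrant@vqfx-2-re# run show interfaces terse xe-0/0/1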

Now let's do the same on the Cumulus leaf, after which I should be able to ping the Juniper spine from it:

parau-mbp:aut parau$ vagrant ssh leaf

vagrant@cumulus-leaf:~$ sudo net add interface swp2 ip address 192.168.10.2/30
vagrant@cumulus-leaf:~$ sudo net commit

--- /etc/network/interfaces   2018-02-25 19:39:50.000000000 +0000
+++ /var/run/nclu/iface/interfaces.tmp  2018-04-23 09:23:18.009141999 +0000
@@ -3,10 +3,14 @@

 source /etc/network/interfaces.d/*.intf

 # The loopback network interface
 auto lo
 iface lo inet loopback

 # The primary network interface
 auto eth0
 iface eth0 inet dhcp

+
+auto swp2
+iface swp2
+    address 192.168.10.2/30

net add/del commands since the last 'net commit'
================================================

User    Timestamp                   Command
------  --------------------------  -------------------------------------------------
root    2018-04-23 09:23:04.316359  net add interface swp2 ip address 192.168.10.2/30
root    2018-04-23 09:23:18.009950  net commit

vagrant@cumulus-leaf:~$ sudo systemctl restart networking.service
vagrant@cumulus-leaf:~$ ping 192.168.10.1
PING 192.168.10.1 (192.168.10.1) 56(84) bytes of data.
64 bytes from 192.168.10.1: icmp_seq=1 ttl=64 time=101 ms
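
If the ping did not work, a quick way to confirm that the address was actually applied is NCLU's show command (output not shown here):

vagrant@cumulus-leaf:~$ net show interface swp2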

And that is all it takes to bring up a topology in Vagrant.

I hope you found this post interesting.

 

