Firefly Perimeter Cluster (vSRX) Setup on VMware ESX
As you might know, Firefly Perimeter, aka vSRX, is the virtual firewall that runs on VMware ESX and KVM, and an evaluation copy can be downloaded here. I believe it is great for testing most functionality, for example an SRX cluster (High Availability), which is the topic of this post. Here I will show:
- How you can install two firefly instances and operate them in a cluster.
- How you can set up redundancy groups
Installation of Firefly instances
First, download your evaluation copy. You must have a Juniper Networks account to download one. At the time of this writing, the current version is 12.1X46-D10.2. Once you have downloaded the OVA file to your disk, deploy it into your ESX server via File->Deploy OVF Template.
Give it a name, e.g. firefly00 for the first instance, continue, and accept whatever the wizard suggests. Repeat the deployment for the second instance (e.g. firefly01). Now you should have two firefly instances ready to be configured as below:
Configuring Firefly instances
After deploying the instances we must configure them for clustering. A default firefly instance requires 2GB of RAM and two CPUs and comes with two Ethernet interfaces, but we will need more for clustering because:
- ge-0/0/0 is used for management interface (can’t be changed)
- ge-0/0/1 is used for control link (can’t be changed)
- ge-0/0/2 is going to be used for fabric link (this is configurable)
Note: Although the minimum memory requirement is 2GB, I use 1GB for my testing purposes. It also works, but it isn't the recommended amount.
As we lose three interfaces to clustering, we will add six more interfaces to the two already configured ones (a maximum of 10 interfaces can be added; check the release notes for this limitation).
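Once you later boot the VMs, a quick way to confirm that Junos actually sees all the vNICs you added is to list the revenue ports from operational mode (just a sanity check; the number of ge- interfaces shown should match the number of network adapters on the VM):

> show interfaces terse | match ge-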
Add Internal ESX vSwitch
We will need an internal vSwitch on the ESX platform for our HA links (control and fabric). It doesn't need any physical adapter. You can follow Configuration->Add Networking->Virtual Machine->Create a vSphere standard switch to add a new internal switch. You should have something like below after the addition:
In my case, the virtual switch's name is vSwitch5. Now add two normal interfaces with no VLAN assigned.
Both interfaces should be identical in their Port Group properties. Then we need to increase the MTU. To do this, in the vSwitch5 Properties window click "vSwitch" under the Ports list and then click "Edit". Set the MTU to 9000 as below and apply. (Ignore the warning about no physical adapter being assigned; we don't need one for the HA interfaces.)
Assign Cluster Interfaces
We have configured the internal vSwitch and HA interfaces; now it is time to assign these to the instances.
Note: Adapter 3 will be fab0 for node0 and fab1 for node1
Below is a simple table showing how interfaces are assigned in ESX and Firefly
ESX to Firefly Interface Mapping
Network adapter 1 ---> ge-0/0/0
Network adapter 2 ---> ge-0/0/1
Network adapter 3 ---> ge-0/0/2
Network adapter 4 ---> ge-0/0/3
Network adapter 5 ---> ge-0/0/4
Network adapter 6 ---> ge-0/0/5
Network adapter 7 ---> ge-0/0/6
Network adapter 8 ---> ge-0/0/7
Pretty intuitive:)
My management interface VLAN is vlan4000_MGT. This is the VLAN through which I will connect to my VMs. We assigned adapters 2 and 3 to the control and fab ports on vSwitch5. Exactly the same port assignment must be done on the firefly01 VM too, as they will be in a cluster. Now it is time to boot the two instances.
After booting both firefly VMs, you will see the Amnesiac screen. There isn't any password yet and you can log in with the root username. From now on, cluster configuration is the same as on any branch SRX. To configure the cluster smoothly, follow the steps below on both nodes.
firefly00 (node0)
> conf
# delete interfaces
# delete security
# set system root-authentication plain-text-password
# commit and-quit
> set chassis cluster cluster-id 2 node 0 reboot
Note: As I already have another Firefly cluster, I have chosen 2 as the cluster id.
firefly01 (node1)
> conf
# delete interfaces
# delete security
# set system root-authentication plain-text-password
# commit and-quit
> set chassis cluster cluster-id 2 node 1 reboot
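By the way, if you ever need to take a node back out of cluster mode (for example to reuse it as a standalone VM), the reverse operation is also a single operational-mode command followed by a reboot. It isn't needed for this setup, but it is handy to know:

> set chassis cluster disable reboot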
Firefly Interface Configuration
At this point, you should have two firefly instances running, one showing {primary:node0} and the other {secondary:node1} on the prompt, but we still don't have management connectivity. We will do the cluster groups configuration and then access the VMs via IP instead of the console:
firefly00 (node0)
set groups node0 system host-name firefly00-cl2
set groups node0 interfaces fxp0 unit 0 family inet address 100.100.100.203/24
set groups node1 system host-name firefly01-cl2
set groups node1 interfaces fxp0 unit 0 family inet address 100.100.100.204/24
set apply-groups "${node}"
commit and-quit
After the commit, your configuration will also be synced to node1 automatically, which you will see on the CLI as well.
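If you want to double-check what each node actually inherits from its node-specific group, the display inheritance pipe is handy (run it on either node; the output differs per node because of apply-groups ${node}):

> show configuration groups
> show configuration interfaces fxp0 | display inheritance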
After this configuration, you should be able to reach your cluster nodes via their fxp0 interfaces. You don't need any security policy for these interfaces to connect. As you can see, I could SSH from my management network to the firefly00 node.
root@srx100> ssh root@100.100.100.203
The authenticity of host '100.100.100.203 (100.100.100.203)' can't be established.
ECDSA key fingerprint is 68:26:63:11:6d:63:91:7e:e7:69:d6:6e:01:b7:7b:b3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '100.100.100.203' (ECDSA) to the list of known hosts.
Password:
--- JUNOS 12.1X46-D10.2 built 2013-12-18 02:43:42 UTC
root@firefly00-cl2%
Redundancy Group Configuration
The topology that I am trying to achieve is below. The host debian1 will reach the Internet via the Firefly cluster. Its gateway is 10.12.1.20 (reth1), which belongs to redundancy group 1. There is only one traffic redundancy group, and once it fails, the cluster should fail over to node1. As you can see in the topology, the second node's interfaces start with ge-7/0/x once it is part of the cluster.
Chassis Cluster Config
set chassis cluster reth-count 2
set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 99
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 99
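By default, redundancy groups are non-preemptive, so even though node0 has the higher priority it won't automatically take RG1 back after a failover. If you want that behavior, you could optionally add preempt (I don't use it in this post):

set chassis cluster redundancy-group 1 preempt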
Redundant Interface Config
set interfaces reth0.0 family inet address 10.11.1.10/24
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces ge-0/0/3 gigether-options redundant-parent reth0
set interfaces ge-7/0/3 gigether-options redundant-parent reth0
set interfaces reth1.0 family inet address 10.12.1.20/24
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces ge-0/0/4 gigether-options redundant-parent reth1
set interfaces ge-7/0/4 gigether-options redundant-parent reth1
set routing-options static route 0/0 next-hop 10.11.1.1
Note: The cluster's default gateway is the SRX100 device (10.11.1.1).
Security zone and Policy Config
set security zones security-zone external interfaces reth0.0
set security zones security-zone internal interfaces reth1.0
set security zones security-zone internal host-inbound-traffic system-services all
set security policies from-zone internal to-zone external policy allow-all-internal match source-address any
set security policies from-zone internal to-zone external policy allow-all-internal match destination-address any
set security policies from-zone internal to-zone external policy allow-all-internal match application any
set security policies from-zone internal to-zone external policy allow-all-internal then permit
commit and-quit
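One thing not covered here is NAT. If your cluster is the device that should translate debian1's private address on the way out, a minimal interface-based source NAT sketch would look like this (rule-set and rule names are arbitrary):

set security nat source rule-set internet-out from zone internal
set security nat source rule-set internet-out to zone external
set security nat source rule-set internet-out rule snat-all match source-address 0.0.0.0/0
set security nat source rule-set internet-out rule snat-all then source-nat interface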
Check the Interfaces and cluster status
{primary:node0}
root@firefly00-cl2> show interfaces terse
Interface               Admin Link Proto    Local                 Remote
gr-0/0/0                up    up
ip-0/0/0                up    up
ge-0/0/2                up    up
ge-0/0/3                up    up
ge-0/0/3.0              up    up   aenet    --> reth0.0
ge-0/0/4                up    up
ge-0/0/4.0              up    up   aenet    --> reth1.0
ge-0/0/5                up    up
ge-0/0/6                up    up
ge-0/0/7                up    up
ge-7/0/2                up    up
ge-7/0/3                up    up
ge-7/0/3.0              up    up   aenet    --> reth0.0
ge-7/0/4                up    up
ge-7/0/4.0              up    up   aenet    --> reth1.0
ge-7/0/5                up    up
ge-7/0/6                up    up
ge-7/0/7                up    up
dsc                     up    up
fab0                    up    down
fab0.0                  up    down inet     30.33.0.200/24
fab1                    up    down
fab1.0                  up    down inet     30.34.0.200/24
fxp0                    up    up
fxp0.0                  up    up   inet     100.100.100.203/24
fxp1                    up    up
fxp1.0                  up    up   inet     129.32.0.1/2
                                   tnp      0x1200001
gre                     up    up
ipip                    up    up
lo0                     up    up
lo0.16384               up    up   inet     127.0.0.1           --> 0/0
lo0.16385               up    up   inet     10.0.0.1            --> 0/0
                                            10.0.0.16           --> 0/0
                                            128.0.0.1           --> 0/0
                                            128.0.0.4           --> 0/0
                                            128.0.1.16          --> 0/0
lo0.32768               up    up
lsi                     up    up
mtun                    up    up
pimd                    up    up
pime                    up    up
pp0                     up    up
ppd0                    up    up
ppe0                    up    up
reth0                   up    up
reth0.0                 up    up   inet     10.11.1.10/24
reth1                   up    up
reth1.0                 up    up   inet     10.12.1.20/24
st0                     up    up
tap                     up    up
{primary:node0}
root@firefly00-cl2> show chassis cluster status
Cluster ID: 2
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0                   100         primary        no       no
    node1                   99          secondary      no       no

Redundancy group: 1 , Failover count: 1
    node0                   0           primary        no       no
    node1                   0           secondary      no       no
Hmmm… there is something wrong. I don't see the priorities for RG1. Why? Let's check the cluster interfaces.
{primary:node0}
root@firefly00-cl2> show chassis cluster interfaces
Control link status: Up

Control interfaces:
    Index   Interface        Status   Internal-SA
    0       fxp1             Up       Disabled

Fabric link status: Down

Fabric interfaces:
    Name    Child-interface    Status
                               (Physical/Monitored)
    fab0
    fab0
    fab1
    fab1

Redundant-ethernet Information:
    Name         Status      Redundancy-group
    reth0        Up          1
    reth1        Up          1

Redundant-pseudo-interface Information:
    Name         Status      Redundancy-group
    lo0          Up          0
Aha… I forgot to configure the fabric links. I always forget to do something :) As you might remember from the beginning of the post, you can choose any interface you want for the fabric link, and I have chosen the ge-0/0/2 interfaces on both nodes.
Configure Fabric Link
set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-7/0/2
commit and-quit
Check the cluster status again
{primary:node0}
root@firefly00-cl2> show chassis cluster status
Cluster ID: 2
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0                   100         primary        no       no
    node1                   99          secondary      no       no

Redundancy group: 1 , Failover count: 2
    node0                   100         secondary      no       no
    node1                   99          primary        no       no

{primary:node0}
root@firefly00-cl2> show chassis cluster interfaces
Control link status: Up

Control interfaces:
    Index   Interface        Status   Internal-SA
    0       fxp1             Up       Disabled

Fabric link status: Up

Fabric interfaces:
    Name    Child-interface    Status
                               (Physical/Monitored)
    fab0    ge-0/0/2           Up / Up
    fab0
    fab1    ge-7/0/2           Up / Up
    fab1

Redundant-ethernet Information:
    Name         Status      Redundancy-group
    reth0        Up          1
    reth1        Up          1

Redundant-pseudo-interface Information:
    Name         Status      Redundancy-group
    lo0          Up          0
Yes, now we have fabric links configured and UP.
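If the control or fabric link ever looks suspicious, the cluster statistics are also worth a glance; the heartbeat counters on both links should keep increasing (a verification command only, run from either node):

> show chassis cluster statistics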
Now let's do a ping test from the debian1 host to the cluster's reth1 IP.
root@debian1:~# ping 10.12.1.20
PING 10.12.1.20 (10.12.1.20) 56(84) bytes of data.
From 10.12.1.10 icmp_seq=1 Destination Host Unreachable
From 10.12.1.10 icmp_seq=2 Destination Host Unreachable
From 10.12.1.10 icmp_seq=3 Destination Host Unreachable
Hmm, something isn't working. What have I forgotten? I must tell you that the number one mistake I make when working with ESX is the port assignment. Since I haven't assigned the interfaces to their respective VLAN port groups, ping doesn't work. Let's assign them.
As you can see, I assigned Adapter 4 (ge-0/0/3) to vlan2001 and Adapter 5 (ge-0/0/4) to vlan2002. These are the child links of reth0 and reth1 respectively.
Let’s try ping once again:
root@debian1:~# ping 10.12.1.20
PING 10.12.1.20 (10.12.1.20) 56(84) bytes of data.
64 bytes from 10.12.1.20: icmp_req=1 ttl=64 time=45.1 ms
64 bytes from 10.12.1.20: icmp_req=2 ttl=64 time=2.53 ms
64 bytes from 10.12.1.20: icmp_req=3 ttl=64 time=0.796 ms
^C
--- 10.12.1.20 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.796/16.160/45.154/20.514 ms
Heyyyy, it works!
Now ping 8.8.8.8 and see what the session table looks like:
{primary:node0}
root@firefly00-cl2> show security flow session protocol icmp
node0:
--------------------------------------------------------------------------
Total sessions: 0

node1:
--------------------------------------------------------------------------
Session ID: 1217, Policy name: allow-all-internal/4, State: Active, Timeout: 2, Valid
  In: 10.12.1.10/4 --> 8.8.8.8/12114;icmp, If: reth1.0, Pkts: 1, Bytes: 84
  Out: 8.8.8.8/12114 --> 10.12.1.10/4;icmp, If: reth0.0, Pkts: 1, Bytes: 84

Session ID: 1218, Policy name: allow-all-internal/4, State: Active, Timeout: 2, Valid
  In: 10.12.1.10/5 --> 8.8.8.8/12114;icmp, If: reth1.0, Pkts: 1, Bytes: 84
  Out: 8.8.8.8/12114 --> 10.12.1.10/5;icmp, If: reth0.0, Pkts: 1, Bytes: 84
Total sessions: 2
Hmm, the sessions are flowing through node1, which isn't what I wanted. Since preempt isn't configured, node0 doesn't take RG1 back automatically even though it has the higher priority, so let's fail over RG1 to node0 manually.
{primary:node0}
root@firefly00-cl2> request chassis cluster failover redundancy-group 1 node 0
node0:
--------------------------------------------------------------------------
Initiated manual failover for redundancy group 1

{primary:node0}
root@firefly00-cl2> show security flow session protocol icmp
node0:
--------------------------------------------------------------------------
Session ID: 288, Policy name: allow-all-internal/4, State: Active, Timeout: 2, Valid
  In: 10.12.1.10/179 --> 8.8.8.8/12114;icmp, If: reth1.0, Pkts: 1, Bytes: 84
  Out: 8.8.8.8/12114 --> 10.12.1.10/179;icmp, If: reth0.0, Pkts: 1, Bytes: 84

Session ID: 295, Policy name: allow-all-internal/4, State: Active, Timeout: 2, Valid
  In: 10.12.1.10/180 --> 8.8.8.8/12114;icmp, If: reth1.0, Pkts: 1, Bytes: 84
  Out: 8.8.8.8/12114 --> 10.12.1.10/180;icmp, If: reth0.0, Pkts: 1, Bytes: 84

Session ID: 296, Policy name: allow-all-internal/4, State: Active, Timeout: 4, Valid
  In: 10.12.1.10/181 --> 8.8.8.8/12114;icmp, If: reth1.0, Pkts: 1, Bytes: 84
  Out: 8.8.8.8/12114 --> 10.12.1.10/181;icmp, If: reth0.0, Pkts: 1, Bytes: 84
Total sessions: 3

node1:
--------------------------------------------------------------------------
Total sessions: 0
After the failover, we can see that packets are flowing through node0.
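Note that a manual failover leaves the "Manual failover" flag set on the redundancy group, so before you can do another manual failover you have to reset it:

> request chassis cluster failover reset redundancy-group 1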
In this post, I wanted to show how the virtual firefly SRX can be used on an ESX host. I haven't configured interface monitoring, as there doesn't seem to be much point in monitoring a virtual port that should be up at all times (or maybe there is a point I'm not aware of). You can also use IP monitoring to leverage the cluster failover functionality. I hope this post helps you get up to speed on firefly quickly. If you have any questions or anything to contribute to this post, don't hesitate!
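For completeness, here is roughly what IP monitoring would look like if you wanted RG1 to fail over when the upstream gateway stops responding. This is only a sketch: 10.11.1.1 is the gateway from the static route above, and 10.11.1.11 is an assumed spare address in the reth0 subnet for the secondary node to source its pings from.

set chassis cluster redundancy-group 1 ip-monitoring global-weight 255
set chassis cluster redundancy-group 1 ip-monitoring global-threshold 100
set chassis cluster redundancy-group 1 ip-monitoring family inet 10.11.1.1 weight 100
set chassis cluster redundancy-group 1 ip-monitoring family inet 10.11.1.1 interface reth0.0 secondary-ip-address 10.11.1.11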
Have a nice fireflying!
Genco.
Very nice!
Thanks a lot, nice post! Just one correction: the VM can support up to 10 interfaces, so in the firefly VM we can have ge-0/0/0 through ge-0/0/9.
Thanks for the feedback Red. I recalled that it was 8, but the release notes say it is 10 vNICs. I have updated the post with the release notes link too.
Hi, did you use VMware ESXi 5.5?
Hi Jorge,
I haven't used ESXi 5.5, and it is not in the list of supported hypervisor versions:
http://www.juniper.net/techpubs/en_US/firefly12.1x46-d10/topics/reference/general/security-virtual-system-requirement.html
Yes! The cluster part is not working in VMware ESX 5.5 🙁
Hi, I can't seem to see any ge-0/0/x interfaces after enabling cluster mode. Is there something I am missing in ESXi 5.1?
I only have fxp0 and fxp1. 🙁
Cheers,
Dennis
Hi Dennis,
– Do you have 12.1X46-D10.2 release?
– You have ESX 5.1 without any update? It should be default 5.1 as far as I recall.
– Have you followed the guide exactly? I don't recall having seen such an issue.
thanks.
I re-downloaded the image and tested again. Same version and same thing again; I can't see any of the ge-x/x/x interfaces. Are you able to see any ge-x/x/x interfaces when you are in cluster mode?
The only time I had missing ge-x interfaces was when I played with the HW configuration, especially the CPU config of the VM. You shouldn't be changing any of those settings, if you have done so. Apart from that, I don't recall anything. I am interested to look into this actually. I will try to check on my ESX server as well and send you my details to compare.
Nope, I didn't change any other settings apart from adding more VM network adapters. I am wondering if it has anything to do with the 60-day trial, since I got the file during January. I am now downloading the file again with another account to see if that makes a difference. It will be very interesting to find out what/why it happened. Let me bring up the new VM and I will copy the screen output and paste it here. 🙂
I think it's due to a memory limitation. Try to increase the memory by at least 512MB.
I tried it in VMware Workstation 10 and it is working fine… everything works.
root@srx0# run show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Status Internal-SA
0 fxp1 Up Disabled
Fabric link status: Up
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/2 Up / Up
fab0
fab1 ge-7/0/2 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Down 1
reth1 Down 1
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Please tell me how to set up the interfaces on VMware Workstation 10.
Wirat, VMware Workstation isn't officially supported and I haven't really used Workstation. I only have experience with ESX.
I downloaded the latest image, 12.1X46-D20.5, which works perfectly. I guess the earlier version may have bugs and is not supported yet.
Thanks for feedback Dennis. Good to learn this.
Hello Malik,
Can you explain the VMware Workstation configuration for a cluster, please? When I create the cluster, all of my ge interfaces disappear and the nodes are in hold status.
Hi,
Did you manage to solve that issue?
Hello,
Is it possible to create a cluster on VMware Workstation 10? If yes, how can I do it, please?
Unfortunately, I didn't try firefly on VMware Workstation, but according to commenter Malik on this post, it also works there.
It works on VM Player for clustering, and VMware Workstation would work as well, but I didn't try clustering there. Fusion works, though without clustering.
Bad news.. it didn't work on my Windows 8 machine… must be some drivers causing it not to work properly… 🙁
Finally, I found the problem!!!!! Duh… It's my stupidity that caused my clustering not to work.
Now, I can confirm that it will work with any VMware software, including VMware Fusion.
The solution is…. "DELETE" the default configuration when you first boot up the SRX, even if you ran "set chassis cluster cluster-id 1 node X reboot".
You have to remove the default config file by
1) edit
2) delete
3) set system root-authentication plain-text-password
4) enter ya password
5) re-enter ya password
6) commit and-quit
7) request system reboot
Repeat that on the other box.
To confirm it works,
1) show chassis cluster status
2) show chassis cluster interfaces
It should tell you if there is anything wrong with the clustering.
Lastly, make sure your network adapters are set exactly the same, i.e. bridging or the same customised adapter.
Cheers,
Dennis
Hi Dennis, what are the interface settings while deploying on VMware Workstation?
Hi HelpSeeker,
It depends on how many network adapters you assigned to your firefly in VMware Workstation. They'd be ge-0/0/X.
Hi,
I'm trying to create a cluster of Firefly on ESX 5.1 u1. I saw in the release notes that u2 was not working, but there is no mention of u1…
I removed all the configuration related to interfaces and did the set chassis cluster.
After the reboot I can see that the traffic on the control link is OK, but I don't see the interfaces of node1, and when I try to commit the configuration I get a "connection refused" on node1…
I’m getting mad …
Hi JD,
I don’t see that u1 is mentioned anywhere. I must say that I haven’t tested with u1.
Don't you see anything in the logs related to chassis clustering? You may also want to wait a bit: I believe the new release will have the fix for u2,
and if there is any issue with u1, it will probably have been fixed too.
Genco.
I just want to thank you for this useful post 🙂 It is very good.
You’re welcome Ahmed. I am glad that you have liked it.
I believe firefly should work with 5.5. Chassis clustering should work (I have not tried it though)
http://kb.juniper.net/InfoCenter/index?page=content&id=KB28884
That KB is about vGW under its new name, Firefly Host, which doesn't actually run JUNOS. 5.5 isn't in the supported list yet.
Hi all, any news on whether there is a new version of Firefly that supports ESXi 5.5?
I already have firefly running on ESXi 5.0; however, it is only in evaluation mode and it is expiring in the coming 2 days. I am going to lose everything I've done so far 🙁
any ideas?
The 12.1X47-D10 release has support for ESX 5.5 and was released a couple of days ago, but I don't know if it is available for eval.
Release notes for the new Firefly version 12.1X47:
http://www.juniper.net/techpubs/en_US/firefly12.1x47/information-products/topic-collections/firefly-perimeter/firefly-perimeter-release-notes.pdf
“VMware vSphere 5.5 supported in addition to VMware vSphere 5.0 and 5.1.”
I tried to set up the cluster but had no luck at all; the control interface is still down.
Make sure you delete all the default configuration before configuring it in cluster mode.
Otherwise, your control link will always be down. Hope that helps.
It worked for me as well on VMWare Workstation with windows 8.
Hello,
I have configured the cluster, but suddenly all the interfaces of node1 disappeared.
I mean, the interfaces on node0 were ge-0/0/0, ge-0/0/1, ge-0/0/2, etc. and on node1 they were ge-7/0/1, ge-7/0/2, etc., but right now I can only see the node0 interfaces (ge-0/0/0, ge-0/0/1, ge-0/0/2, etc.).
Can you help me?
root@fw-01# run show interfaces terse
Interface Admin Link Proto Local Remote
gr-0/0/0 up up
ip-0/0/0 up up
ge-0/0/2 up up
ge-0/0/2.0 up up aenet --> reth1.0
ge-0/0/3 up up
ge-0/0/3.0 up up aenet --> fab0.0
ge-0/0/4 up up
ge-0/0/5 up up
ge-0/0/6 up up
ge-0/0/7 up up
ge-0/0/7.0 up up aenet --> reth0.0
dsc up up
fab0 up up
fab0.0 up up inet 30.33.0.200/24
fxp0 up up
fxp0.0 up up inet 100.100.100.203/24
fxp1 up up
fxp1.0 up up inet 129.32.0.1/2
tnp 0x1200001
gre up up
ipip up up
irb up up
lo0 up up
lo0.16384 up up inet 127.0.0.1 --> 0/0
lo0.16385 up up inet 10.0.0.1 --> 0/0
10.0.0.16 --> 0/0
128.0.0.1 --> 0/0
128.0.0.4 --> 0/0
128.0.1.16 --> 0/0
lo0.32768 up up
lsi up up
mtun up up
pimd up up
pime up up
pp0 up up
ppd0 up up
ppe0 up up
reth0 up up
reth0.0 up up inet 192.168.88.26/24
reth1 up up
reth1.0 up up inet 172.19.1.1/24
st0 up up
tap up up
{primary:node0}[edit]
It’s the configuration
root@fw-01# show
## Last changed: 2014-11-17 20:04:32 UTC
version 12.1X47-D10.4;
groups {
node0 {
system {
host-name fw-01;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 100.100.100.203/24;
}
}
}
}
}
node1 {
system {
host-name fw-02;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 100.100.100.204/24;
}
}
}
}
}
}
apply-groups "${node}";
system {
root-authentication {
encrypted-password "$1$GF8gmp1s$iGUBPDdBiDt0VO7hVyqSJ0"; ## SECRET-DATA
}
services {
ssh;
web-management {
http {
interface ge-0/0/0.0;
}
}
}
syslog {
user * {
any emergency;
}
file messages {
any any;
authorization info;
}
file interactive-commands {
interactive-commands any;
}
}
license {
autoupdate {
url https://ae1.juniper.net/junos/key_retrieval;
}
}
}
chassis {
cluster {
reth-count 2;
redundancy-group 0 {
node 0 priority 100;
node 1 priority 99;
}
redundancy-group 1 {
node 0 priority 100;
node 1 priority 99;
}
}
}
interfaces {
ge-0/0/2 {
gigether-options {
redundant-parent reth1;
}
}
ge-0/0/7 {
gigether-options {
redundant-parent reth0;
}
}
ge-7/0/2 {
gigether-options {
redundant-parent reth1;
}
}
ge-7/0/7 {
gigether-options {
redundant-parent reth0;
}
}
fab0 {
fabric-options {
member-interfaces {
ge-0/0/3;
}
}
}
fab1 {
fabric-options {
member-interfaces {
ge-7/0/3;
}
}
}
reth0 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 192.168.88.26/24;
}
}
}
reth1 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 172.19.1.1/24;
}
}
}
}
routing-options {
static {
route 0.0.0.0/0 next-hop 192.168.88.1;
}
}
security {
policies {
from-zone trust to-zone untrust {
policy allow-all-internal {
match {
source-address any;
destination-address any;
application any;
}
then {
permit;
}
}
}
}
zones {
security-zone untrust {
host-inbound-traffic {
system-services {
all;
}
}
interfaces {
reth0.0;
}
}
security-zone trust {
host-inbound-traffic {
system-services {
all;
}
}
interfaces {
reth1.0;
}
}
}
}
Hello,
I am trying to deploy vsrx clustering in KVM but facing some issues. I was wondering if you have given it a try.
According to juniper, the new version 12.1X46-D20.5 can do clustering when using the virtio driver.
I have translated the steps of your tutorial to KVM but the instances don’t see each other:
I created 3 virtual networks through virt-manager: one for management, one for HA (control and fabric) and a third one for the reths.
I set the MTU to 9000 on the corresponding virbr-nic and virbr.
I set the brctl ageing to 1.
However, when I set up the cluster on both nodes and reboot, neither of them sees the other (i.e. both appear as primary).
"show chassis cluster status" lists the other node as 'lost'.
“show chassis cluster statistics” shows ‘failed to connect to the server after 1 retries…’
Any help will be really appreciated as I am working 100% in KVM.
Thanks,
Hi Jaime,
I must tell you that I haven't tested clustering on KVM yet. I would like to, but haven't done it yet. X47 is also available; have you tried with that?
Genco.
Hello,
Can we use one single control and fabric network for multiple Junos clusters?
i.e. HA control and HA fabric shared by clusters 1, 2, 3……
Or do I need a separate control and fabric network for each cluster?
Regards,
Ramakrishnan N
Rama, check this KB: http://kb.juniper.net/KB15911. You need separate connections for the data & control links; you aren't supposed to mix different clusters on a single data/control link. In my lab environment I sometimes do it, but I reckon it shouldn't be done in a real network.
Excellent article..
Facing 2 issues:
1. Getting frequent syslog errors like:
Message from syslogd@SRX-VM at Jan 22 21:46:14 …
SRX-VM SRX-VM Frame 4: sp = 0x5835bf10, pc = 0x806e715
Message from syslogd@SRX-VM at Jan 22 21:46:15 …
SRX-VM SRX-VM Frame 5: sp = 0x5835bf50, pc = 0x805a690
2. The devices hang at reboot/shutdown with these lines, and nothing happens:
syncing disks… All buffers synced.
Uptime: 2h9m4s
Normal Shutdown (no dump device defined)
I also see frequent failovers, but this may be a result of the low CPU power provided by my laptop.
Amandeep,
I don't recall having seen those messages on my VMs, except maybe after boot-ups, but you are using a laptop, on which you may expect unusual behavior, as the product has been tested only on ESX and CentOS KVM.
Genco.
Thank you for the confirmation..
My issue seems to be due to resource crunch.
I will try to move to CentOS KVM…
Thanks a lot for the very useful information.
Thank you for your feedback too Ahmet.
Thank you very much !
Sharing your information is very helpful.
Hi,
My VMs are running perfectly now, Thanks to your article.
However, I am not able to onboard the VM that is running in QEMU into Juniper NSM.
On analyzing, I found the issue as shown in the links below:
http://s27.postimg.org/xr1lw8nn7/qemu.png
http://s3.postimg.org/x92su60ur/player.png
The VM running in QEMU does not have a serial number.
Any idea what the issue could be and how to resolve it?
Thanks for a good article. One thing I noticed: the interface monitoring you mentioned is required for redundancy groups if you want them to fail over once an issue is discovered, e.g. a dead interface/next hop.
Interface monitoring makes sense if you have a hardware appliance, but in ESX, where the port is actually in software, it isn't supposed to go down (I believe) as long as you don't disconnect it manually or hit some software issue. IP monitoring is probably a better alternative.
Great post and thanks for sharing.
Sadly, Firefly Perimeter does not support LAG and Ethernet switching.
Thank you for your information. I also built an ESXi lab with vSRX; it's a great guide.
But when I try to practice VRRP, it seems it doesn't work. Can you check whether this function works or not?
vSRX supports VRRP, but I cannot get this protocol to work successfully.
The version I use is 12.1X47-D20.7 and I have disabled the security functions; I just use packet mode to try VRRP.
If you can reply to this question, I will appreciate it. Thank you.
I would recommend that you check the release notes of 12.1X46. There is an issue mentioned there for VRRP. I'm not sure whether this is what you are hitting or not.
I have everything set up but have a couple of questions.
1. From my internal machine I cannot ping 8.8.8.8, but from node0 I can ping 8.8.8.8 and I can ping the internal machine. Do you think this is a policy or a NAT problem?
2. Since this is a virtual machine, is there a way to install VMware Tools for better performance?
1. Can we have a config snippet? Make sure you have your source NAT configured as well as a security policy to permit the traffic. I used this setup to prepare for and pass my JNCIE-SEC.
2. VMware Tools really doesn't matter to me, because you can console to the devices through the serial port of the virtual machine.
Would it be best to post the whole config or just the output of show | display set? The latter seems shorter.
from the config mode, do:
sh sec | display set
Running that command showed me that I had not put my internal interface into the internal zone. Once I did that, everything worked. Now the better test: delete everything and restart to make sure I can get it working again. Great article. NO WAY I would have ever got this working without it. Thanks.
If I run into further problems I will be back. Thanks Leke and rtoodtoo
Gregory, as Leke said, it can be either NAT or the security policy, or, if you chose an IP not assigned to an interface, proxy-arp might be missing.
For the second, I also agree with Leke.
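If the NAT address isn't one of the interface addresses, the proxy-arp config would be along these lines (10.11.1.50 is just a placeholder, use your own pool address):

set security nat proxy-arp interface reth0.0 address 10.11.1.50/32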
I notice your CLI outputs are nicely coloured for clarity. Did your SSH client do this or did you do this by hand? If the SSH client does this, what are you using? It looks very neat and easy to read.
I got my vSRX cluster working nicely; this has enabled me to be sure of the procedure to swap out a real live cluster member. Thanks for your time, Simon.
Simon,
I think it is done by the WordPress plugin "Crayon Syntax Highlighter". I am not doing anything special other than wrapping the outputs in the plugin's tag.
thanks
Hello guys..
Does anyone know why ping isn't working from a Windows client directly connected to a reth sub-interface? It just says Destination host unreachable.
But when it is connected to a physical interface, it pings.
All system services are allowed, and promiscuous mode is set to accept on the ESXi vSwitch, but still no success.
Moreover, arp does not show any entry for those hosts.
These clients cannot ping reth1, their default gateway.
For the lab topology…please see the link below:
https://mega.nz/#!jlVWAIaD!u6xYqF0OSeDLMHoa0mLRHyezLT5rbamr7ns62jiZB6w
Waiting eagerly for your kind consideration.
Dear members……
Instead of having an EX switch between the cluster and the Debian host, I am just using a vSwitch from ESXi. Please help me urgently, because I cannot ping the reth interface from a Windows host.
Waiting eagerly for your kind consideration.
Regards.
Since you're using a vSwitch, which port group does your reth belong to? What interface type are you using on the vSRX VM, e1000 or vmxnet?
Can you post your configs from the vSRX and some screenshots of ESXi?
When VMs like this don't work, try this: in your vSwitch properties, enable promiscuous mode and, for good measure, forged transmits as well.
Also, is your reth interface in a security zone? Does that security zone permit ICMP/ping in?
To check basic connectivity, check the ARP cache of the host. If you have an IP address and MAC for the SRX, then the vSwitch is not the issue and it's something on the SRX. Most of the time people have not put an interface into a zone at all.
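On the SRX side, the corresponding checks (and, if needed, allowing ping into the zone) would be something like this, assuming the reth sits in a zone called internal:

> show arp no-resolve
> show security zones
# set security zones security-zone internal host-inbound-traffic system-services ping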
To configure this on VMware Workstation, simply add the interfaces in Bridged mode and follow the rest of this post; the cluster will come up. Kudos rtoodtoo for writing this tutorial!!
Vital information if you are working with vSRX 15.1X49 or newer: I was pulling my hair out for days trying to get my fabric interfaces online… my Juniper JSEC instructor couldn't even help me out.
http://www.juniper.net/techpubs/en_US/vsrx15.1x49/topics/reference/general/security-vsrx-interface-names.html
Although not relating exactly to clustering: I've downloaded the latest vSRX, 15.1X49-D15.4, and no interfaces other than ge-0/0/0 and ge-0/0/1 come up.
I wonder if this is a new restriction consistent with this VM's intended use of providing security inside a virtual environment, or am I missing something? I've added 7 interfaces in VMware and rebooted. Does anyone have any thoughts?
root> show configuration interfaces
ge-0/0/0 {
unit 0 {
family inet;
}
}
ge-0/0/1 {
unit 0 {
family inet;
}
}
ge-0/0/2 {
unit 0 {
family inet {
address 1.2.3.4/32;
}
}
}
ge-0/0/3 {
unit 0 {
family inet {
address 65.4.2.2/32;
}
}
}
ge-0/0/4 {
unit 0 {
family inet;
}
ge-0/0/0 up up
ge-0/0/0.0 up up inet
gr-0/0/0 up up
ip-0/0/0 up up
lsq-0/0/0 up up
lt-0/0/0 up up
mt-0/0/0 up up
sp-0/0/0 up up
sp-0/0/0.0 up up inet
inet6
sp-0/0/0.16383 up up inet
ge-0/0/1 up up
ge-0/0/1.0 up up inet
dsc up up
I was able to answer my own question: when adding extra interfaces they must be of the VMXNET 3 type; then the vSRX can see them.
Has anyone got the latest vSRX working with family ethernet-switching? It's in the CLI, but when you try to commit it fails with a message as below:
[edit interfaces ge-0/0/1]
root# show
unit 0 {
family ethernet-switching;
}
[edit interfaces ge-0/0/1]
root# commit check
[edit interfaces ge-0/0/1 unit 0]
'family'
family 'ethernet-switching' only valid in enhanced-switching mode
warning: Interfaces are changed from route mode to mix mode. Please request system reboot hypervisor or all nodes in the HA cluster!
error: configuration check-out failed
[edit interfaces ge-0/0/1]