[pox-dev] pox with flowvisor

Murphy McCauley murphy.mccauley at gmail.com
Thu Sep 24 19:48:22 PDT 2015


My first guess is that something is wrong with your slice configuration, but I'm the wrong person to ask about that (maybe try openflow-discuss?).

If I were trying to debug this, I might...

Capture all the OpenFlow traffic between the controllers and FlowVisor.

Write a simple controller for one slice that just installs a single table entry.

Start up a controller for the second slice that does *nothing*.

If the second controller starting up causes flows to be deleted, check the captured OpenFlow traffic.  There should only be a single flow-mod (for the first slice).  If there are others, or if you can identify the flow-mod that's clearing the tables, then the problem is on the POX side.  If you can't, the problem would seem to be on the FlowVisor side (or the switches somehow).
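
For the single-entry controller, something like the following minimal POX component should do.  This is only a sketch (the module name is whatever you like; I've borrowed the 10.0.0.3 <-> 10.1.0.3 match from your slice1 flowspace below):

  # forwarding/single_flow.py: install exactly one table entry per switch
  from pox.core import core
  from pox.lib.addresses import IPAddr
  import pox.openflow.libopenflow_01 as of

  log = core.getLogger()

  def _handle_ConnectionUp (event):
    fm = of.ofp_flow_mod()
    fm.priority = 100
    fm.match.dl_type = 0x0800                # IPv4
    fm.match.nw_src = IPAddr("10.0.0.3")     # slice1's flowspace
    fm.match.nw_dst = IPAddr("10.1.0.3")
    fm.actions.append(of.ofp_action_output(port = of.OFPP_FLOOD))
    event.connection.send(fm)
    log.info("Installed the single test entry on %s", event.connection)

  def launch ():
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)

For the do-nothing controller, just start POX with openflow.of_01 on the second slice's address and port and no forwarding component at all.  The capture itself can be done with tcpdump or Wireshark on the loopback interface, filtered on the two controller ports (10001 and 10002).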

Good luck.

-- Murphy

On Sep 24, 2015, at 7:34 PM, Vishlesh Patel <vishlesh.patel12 at gmail.com> wrote:

> Hi
> 
> My POX instances are running on different slices of FlowVisor: slice1 and slice2. Here are the details:
> 
> traffic between 10.0.0.3 and 10.1.0.3 is handled by slice1
> traffic between 10.0.0.4 and 10.1.0.4 is handled by slice2
> 
>  fvctl list-slice-info slice1
> Password:
> {
>   "admin-contact": "admin at slice1",
>   "admin-status": true,
>   "controller-url": "tcp:127.0.0.2:10001",
>   "current-flowmod-usage": 0,
>   "current-rate": 0,
>   "drop-policy": "exact",
>   "recv-lldp": false,
>   "slice-name": "slice1"
> }
> 
> fvctl list-slice-info slice2
> Password:
> {
>   "admin-contact": "admin at slice2",
>   "admin-status": true,
>   "controller-url": "tcp:127.0.0.3:10002",
>   "current-flowmod-usage": 0,
>   "current-rate": 0,
>   "drop-policy": "exact",
>   "recv-lldp": false,
>   "slice-name": "slice2"
> }
> 
> The POX controllers are started this way:
> ./pox.py openflow.of_01 --address=127.0.0.2 --port=10001 forwarding.new_controller1
> ./pox.py openflow.of_01 --address=127.0.0.3 --port=10002 forwarding.new_controller2
> 
> 
> I also set the core.openflow.clear_flows_on_connect = False attribute to disable deleting flow entries when it connects.
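> 
> For reference, roughly how I set it (a minimal sketch from inside my own component; I take clear_flows_on_connect to be the attribute name in stock POX):
> 
>   from pox.core import core
> 
>   def launch ():
>     def _keep_flows ():
>       # Keep existing table entries when a switch connection comes up
>       core.openflow.clear_flows_on_connect = False
>     # Wait for the openflow nexus to be registered before touching it
>     core.call_when_ready(_keep_flows, ["openflow"])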
> 
> Am I making any mistake here in configuring the flowspace?
> Here is the configured flowspace in FlowVisor:
>> Configured Flow entries:
>> {"force-enqueue": -1, "name": "dpid2-flow1", "slice-action": [{"slice-name": "slice1", "permission": 7}], "queues": [], "priority": 100, "dpid": "00:00:00:00:00:00:00:02", "id": 15, "match": {"wildcards": 3145983, "nw_src": "10.0.0.3", "nw_dst": "10.1.0.3"}}
>> {"force-enqueue": -1, "name": "dpid3-flow1", "slice-action": [{"slice-name": "slice1", "permission": 7}], "queues": [], "priority": 100, "dpid": "00:00:00:00:00:00:00:03", "id": 19, "match": {"wildcards": 3145983, "nw_src": "10.0.0.3", "nw_dst": "10.1.0.3"}}
>> {"force-enqueue": -1, "name": "dpid2-flow2", "slice-action": [{"slice-name": "slice1", "permission": 7}], "queues": [], "priority": 100, "dpid": "00:00:00:00:00:00:00:02", "id": 21, "match": {"wildcards": 3145983, "nw_src": "10.1.0.3", "nw_dst": "10.0.0.3"}}
>> {"force-enqueue": -1, "name": "dpid3-flow2", "slice-action": [{"slice-name": "slice1", "permission": 7}], "queues": [], "priority": 100, "dpid": "00:00:00:00:00:00:00:03", "id": 22, "match": {"wildcards": 3145983, "nw_src": "10.1.0.3", "nw_dst": "10.0.0.3"}}
>> {"force-enqueue": -1, "name": "dpid1-flow1", "slice-action": [{"slice-name": "slice1", "permission": 7}], "queues": [], "priority": 100, "dpid": "00:00:00:00:00:00:00:01", "id": 25, "match": {"wildcards": 3145983, "nw_src": "10.0.0.3", "nw_dst": "10.1.0.3"}}
>> {"force-enqueue": -1, "name": "dpid1-flow2", "slice-action": [{"slice-name": "slice1", "permission": 7}], "queues": [], "priority": 100, "dpid": "00:00:00:00:00:00:00:01", "id": 26, "match": {"wildcards": 3145983, "nw_src": "10.1.0.3", "nw_dst": "10.0.0.3"}}
>> {"force-enqueue": -1, "name": "dpid1-flow3", "slice-action": [{"slice-name": "slice2", "permission": 7}], "queues": [], "priority": 100, "dpid": "00:00:00:00:00:00:00:01", "id": 29, "match": {"wildcards": 3145983, "nw_src": "10.0.0.4", "nw_dst": "10.1.0.4"}}
>> {"force-enqueue": -1, "name": "dpid1-flow4", "slice-action": [{"slice-name": "slice2", "permission": 7}], "queues": [], "priority": 100, "dpid": "00:00:00:00:00:00:00:01", "id": 30, "match": {"wildcards": 3145983, "nw_src": "10.1.0.4", "nw_dst": "10.0.0.4"}}
>> {"force-enqueue": -1, "name": "dpid2-flow3", "slice-action": [{"slice-name": "slice2", "permission": 7}], "queues": [], "priority": 100, "dpid": "00:00:00:00:00:00:00:02", "id": 31, "match": {"wildcards": 3145983, "nw_src": "10.0.0.4", "nw_dst": "10.1.0.4"}}
>> {"force-enqueue": -1, "name": "dpid3-flow3", "slice-action": [{"slice-name": "slice2", "permission": 7}], "queues": [], "priority": 100, "dpid": "00:00:00:00:00:00:00:03", "id": 32, "match": {"wildcards": 3145983, "nw_src": "10.0.0.4", "nw_dst": "10.1.0.4"}}
>> {"force-enqueue": -1, "name": "dpid2-flow4", "slice-action": [{"slice-name": "slice2", "permission": 7}], "queues": [], "priority": 100, "dpid": "00:00:00:00:00:00:00:02", "id": 33, "match": {"wildcards": 3145983, "nw_src": "10.1.0.4", "nw_dst": "10.0.0.4"}}
>> {"force-enqueue": -1, "name": "dpid3-flow4", "slice-action": [{"slice-name": "slice2", "permission": 7}], "queues": [], "priority": 100, "dpid": "00:00:00:00:00:00:00:03", "id": 34, "match": {"wildcards": 3145983, "nw_src": "10.1.0.4", "nw_dst": "10.0.0.4"}}
>> 
> Please reply. 
> 
> Please suggest how I can stop the POX controller from deleting flow entries. If you know of a FlowVisor mailing list, please forward the link.
> 
> Best Regards,
> Vishlesh Patel
> M.S. Computer Engineering
> NYU Polytechnic School of Engineering
