Replies: 10 comments 19 replies
-
Each NApp that pushes flows, if configured to use a table other than 0, is expected to also install one or more flows in table 0 to forward packets to whichever table it's using. It's also desirable that the forwarding match from table 0 to its table "X" be as specific as possible whenever applicable, to minimize interference with other NApps that might also be using a different table. In a case where there are multiple flows with different types of matches, like mef_eline, maybe we'll need a more generalized match that tries to match all of them with a low priority (aligned with a priority that the NApp already uses). In your illustrated example, you're highlighting mef_eline using more than two tables. Theoretically, with OpenFlow, that should be doable, but in practice, from what I understand of the use cases we're trying to cover with the multi-table feature, from the perspective of a single NApp or feature, it's expected that it'll use only table 0 plus one other table, to simplify. Ultimately, I suggest you document how each of these NApps will build its matches when using a table other than 0. To get this started, I encourage you to begin with one of the simplest, which is
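To make the first point concrete, here's a minimal sketch of the table-0 branching entry such a NApp might install. The helper, the payload field names, and the `of_lldp` example match are assumptions modeled on the general shape of OpenFlow flow payloads, not the real flow_manager schema:

```python
# Hypothetical sketch: the table-0 entry a NApp configured for table 2
# might install. Field names are assumed, not the actual API.

def table0_forwarding_flow(target_table, match, priority):
    """Build a table-0 flow that sends matching packets to target_table.

    The match should be as specific as the NApp can make it, to avoid
    interfering with other NApps that also branch out of table 0.
    """
    return {
        "table_id": 0,
        "priority": priority,
        "match": match,
        "instructions": [
            {"instruction_type": "goto_table", "table_id": target_table}
        ],
    }

# e.g. of_lldp could match only the LLDP EtherType when branching to table 2
flow = table0_forwarding_flow(2, {"dl_type": 0x88CC}, priority=1000)
```

The narrower the table-0 match (here, a single EtherType), the less it competes with other NApps' table-0 entries.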
Ultimately, one of the advantages of also having this multi-table feature is that network engineers are allowed to partition their tables with the respective matching (with wildcard/mask or not) and action capabilities, to achieve a configuration that lets them maximize the number of supported flow entries. We're implementing this feature generically, but AmLight is driving it. @italovalcy is very experienced with how the pipeline and its partitioning work; he has presented some information about it before that we could cross-reference here to get an idea (please ask him for this info). Italo will be one of the reviewers of this discussion, so ultimately we can double-check that the matches/actions being used end up aligned with the expectations/use case.
-
I think we should only keep and maintain a single option in settings for which table to use; if it's different than 0, then the NApp can derive that it's using multi-table. Which tables get configured should be the responsibility of network engineers. The configured value should be a positive int from 0 to
On Kytos-ng version
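The single-setting idea above could be sketched roughly like this. The setting name `TABLE_ID` is hypothetical; the upper bound of 254 is an assumption taken from OpenFlow 1.3, where `OFPTT_MAX` is 254 and table id 255 is reserved as `OFPTT_ALL` (it may or may not be what was intended here):

```python
# Sketch of validating a single per-NApp table setting. OFPTT_MAX = 254
# comes from the OpenFlow 1.3 spec (255 is the reserved OFPTT_ALL);
# the TABLE_ID setting itself is hypothetical.

OFPTT_MAX = 254

def validate_table_id(table_id):
    """Return the table id if valid, raise ValueError otherwise."""
    if not isinstance(table_id, int) or not 0 <= table_id <= OFPTT_MAX:
        raise ValueError(f"table_id must be an int in [0, {OFPTT_MAX}]")
    return table_id

def uses_multi_table(table_id):
    """A NApp derives it's in multi-table mode when its table isn't 0."""
    return validate_table_id(table_id) != 0
```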
-
Since each NApp will only use table 0 plus one extra table, we get part of this guaranteed and simplified.
-
@Alopalao since
-
Hello @viniarck and @Alopalao, what a great discussion here! Please consider my half-bit contribution.

Goal: optimize resources for the switch match-action pipeline by 1) having more specific match fields and exact matches for each table (which allows more entries to be allocated to a table on Tofino); 2) splitting the packet processing pipeline into multiple tables and aggregating entries throughout the multiple tables (which makes it possible to have one entry on table 1 that corresponds to multiple entries on table 0, saving space).

The ideal solution would require us to define a very complex pipeline to be really optimized. We will not pursue the ideal solution now; that will be delivered as part of the migration to P4 (another ongoing project). For now, the best approach would be to optimize only what is easy to do and create the infrastructure to support multiple tables.

From my point of view, optimizing what is easy to do means choosing the NApps that have a very well-defined set of matches and actions, allowing us to create tables with an exact match for each one (see examples later). On the other hand, creating the infrastructure means creating table-miss entries to ensure the packet forwarding process works smoothly and the consistency routine does its job correctly. Other NApps that could benefit from this are BFD, OF-LLDP, Mirror, coloring, etc. Thus, it looks like what we actually need is:
Note that, as mentioned by @viniarck before, another approach would be decentralizing the configuration of the multi-table pipeline and delegating the creation of the flows to each NApp. My only concern with this option is that it makes management more complex.
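The aggregation idea in point 2 above ("one entry on table 1 that corresponds to multiple entries on table 0") can be illustrated with a sketch. The payloads below are hypothetical, just to show the shape of the saving; the ports and VLANs are invented:

```python
# Illustrative (hypothetical) payloads: several specific table-0 entries
# all branch to table 1, where a single shared entry makes the forwarding
# decision, instead of three fully-expanded entries in table 0.

# Table 0: one entry per inbound service, matching on port + VLAN.
table0_flows = [
    {"table_id": 0, "priority": 2000,
     "match": {"in_port": port, "dl_vlan": vlan},
     "instructions": [{"instruction_type": "goto_table", "table_id": 1}]}
    for port, vlan in [(1, 100), (2, 100), (3, 100)]
]

# Table 1: one shared entry for the common forwarding decision.
table1_flow = {
    "table_id": 1, "priority": 1000,
    "match": {"dl_vlan": 100},
    "actions": [{"action_type": "output", "port": 4}],
}
```

With a single table, the output action would have to be replicated in each of the three specific entries; here it lives in one place.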
-
Here's one example scenario, considering the following NApps and their match fields:
-
Looks like we are leaning toward a new NApp for flow operations. Its responsibilities are related to
From the requirements discussed, I see that the best option is to communicate back and forth with the NApp that wants to install a certain flow. Overall, this is just speculation on my part. To understand this better, how would the process work for this new NApp in terms of handling the data and communicating with
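One speculative way to picture that back-and-forth: the pipeline NApp rewrites each requested flow's `table_id` according to the configured pipeline before it reaches flow_manager. Everything here is hypothetical (the table mapping, the function, and the idea of standing in for KytosEvent traffic with a plain call):

```python
# Speculative sketch: a pipeline NApp assigns each requesting NApp's flows
# to the table reserved for it. The mapping and API are invented for
# illustration; in Kytos this would travel as events, not direct calls.

PIPELINE = {"mef_eline": 2, "of_lldp": 1, "coloring": 1}  # assumed mapping

def assign_table(napp, flow):
    """Place a requested flow in the table the pipeline reserves for napp,
    defaulting to table 0 for NApps outside the configured pipeline."""
    flow = dict(flow)  # don't mutate the caller's payload
    flow["table_id"] = PIPELINE.get(napp, 0)
    return flow
```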
-
The following was settled:
With this new information I made another diagram, but first a few things to clarify:
-
Closing this discussion. The PR landed in #359.
-
This discussion is for issue #186: Blueprint for OpenFlow multi-table pipeline processing
First, I wanted to confirm some concepts here before beginning the implementation.
Issue
Implement multi-table pipeline processing, which requires adjustments in mef_eline, of-lldp, and coloring. The concept is installing multiple flows in a switch across different table numbers, which will match packets as they traverse the pipeline. This will distribute the packets among different flows, allowing simultaneous processing.
![Screenshot from 2023-03-14 11-01-41](https://user-images.githubusercontent.com/55767214/225043326-5c3222a6-e123-4676-bec7-2dbd8a3a052d.png)
(The image is a simplified representation; maybe tables shouldn't be skipped?)
Specification
There are sections of the OpenFlow specification that describe the OpenFlow pipeline processing on switches. The most important parts, in my opinion, are the rules for pipeline consistency (section 5.1.1 and onward) and the required instruction, next-table-id (section 5.9).
Consistency
There are consistency rules to follow to ensure network functionality. A table-miss entry should be present in every flow table; an inconsistency here will likely cause some packets to be dropped.
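A table-miss entry, per the OpenFlow 1.3 spec, is the flow entry with priority 0 and an all-wildcard match. A sketch of one that sends unmatched packets to the controller (the payload field names and the `"controller"` port value are assumptions for illustration):

```python
# Hypothetical payload for a table-miss entry: priority 0 and an empty
# match, so it only fires when nothing else in the table matches.

def table_miss_flow(table_id):
    return {
        "table_id": table_id,
        "priority": 0,   # lowest priority: matched only as a fallback
        "match": {},     # wildcards every field
        "actions": [{"action_type": "output", "port": "controller"}],
    }

# One per flow table, as the consistency rules suggest:
misses = [table_miss_flow(t) for t in range(3)]
```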
Next table
According to the specification, the table number can only go up: a flow entry may only direct packets to a table with a number greater than its own.
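That monotonicity rule lends itself to a simple consistency check, which a future routine could run over installed flows. The flow payload shape below is assumed:

```python
# Check the spec's "table number always goes up" rule: every Goto-Table
# instruction must target a table strictly greater than the flow's own.
# The payload shape is assumed for illustration.

def goto_targets(flow):
    """Table ids targeted by the flow's goto_table instructions."""
    return [i["table_id"] for i in flow.get("instructions", [])
            if i.get("instruction_type") == "goto_table"]

def is_forward_only(flow):
    """True when every Goto-Table in the flow points to a later table."""
    return all(t > flow["table_id"] for t in goto_targets(flow))

ok = is_forward_only({"table_id": 0, "instructions": [
    {"instruction_type": "goto_table", "table_id": 2}]})
bad = is_forward_only({"table_id": 2, "instructions": [
    {"instruction_type": "goto_table", "table_id": 1}]})
```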
Thoughts: