Replies: 4 comments 1 reply
-
Hi, good question. In my view, "positive/negative" is a leftover from RPNI's name, as it was originally designed to learn DFAs, which have accepting (positive) and rejecting (negative) traces. For Moore and Mealy machines, a sequence does not end up in an accepting or rejecting state (where the only possible outputs are True and False); it ends up in some state, and an output is observed. So for Mealy machines (and Moore machines as well), you just need to pass sequences of the form `(input_sequence, output)`, where output is the output observed after the input sequence is executed. Important note: if you want to learn Mealy machines, your input set should be prefix-closed!
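For illustration, a prefix-closed data set of `(input_sequence, output)` pairs can be derived from full input/output traces by emitting one pair per prefix. The helper below is a hypothetical sketch, not part of the AALpy API:

```python
def mealy_samples_from_trace(inputs, outputs):
    """Turn one observed I/O trace into prefix-closed (input_sequence, output) pairs.

    inputs  -- sequence of inputs sent to the system, e.g. ('a', 'b', 'a')
    outputs -- the output observed after each input,   e.g. (0, 1, 0)
    """
    assert len(inputs) == len(outputs)
    # Every prefix of the trace becomes its own sample, so the resulting
    # data set is prefix-closed by construction.
    return [(tuple(inputs[:i + 1]), outputs[i]) for i in range(len(inputs))]

data = mealy_samples_from_trace(('a', 'b', 'a'), (0, 1, 0))
# data == [(('a',), 0), (('a', 'b'), 1), (('a', 'b', 'a'), 0)]
```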
-
Thank you for your response. When implementing RPNI for Mealy machines, was there a specific paper or reference material you followed, or did you implement it based on your own approach? Best Regards,
-
As far as I know, the extension of RPNI to Mealy machines has never been published. Assuming some familiarity with RPNI, it is actually quite simple.

In RPNI, a Prefix Tree Automaton (PTA) is created from the positive data; this is a DFA that accepts exactly the positive data. States of this PTA are then successively merged to obtain a more general model: merging two states induces a partitioning of the PTA states, which yields a new, smaller automaton. If the resulting automaton is not consistent with the negative sample (i.e., it accepts a trace from it), the merge is rejected. This is repeated until no more merges are possible, and we get the final automaton.

Now, if we take a closer look at the PTA, we can see that a non-accepting state does not actually mean that we reject at that point; it only means that we don't know whether we should accept or not. We could also create a similar PTA for true rejection from the negative sample, and while we're at it, we could use the same structure for both the positive and the negative data. As a result, we get a PTA formed from the positive and negative samples whose states carry ternary labels: accepting, rejecting, and unknown. When merging two states, we now have to check whether the partitioning is consistent, that is: no partition contains states with conflicting labels (rejecting and accepting in the same partition). We no longer have to check consistency with the negative sample separately, because it is already incorporated in the PTA.

In this setting, "accepting" and "rejecting" are just labels that are treated exactly the same, and adding another label does not really change anything. This means we have just generalized the method to Moore machines, which makes sense, since DFAs can be seen as Moore machines with binary output.

Speaking of conceptual similarities: we can think of Mealy machines as Moore machines where the "output" is a function mapping inputs to the actual outputs.
Two output functions (and therefore two states in a partition) are locally consistent if, for each input, they agree on the output or at least one function's output is unknown. With that, we have just generalized the algorithm to Mealy machines 🥳 I hope this helps.
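The local-consistency check described above can be sketched like this, with partial output functions as dicts and missing entries standing for unknown outputs (the names are illustrative, not AALpy's actual code):

```python
def locally_consistent(f, g):
    """Check whether two partial output functions (dicts input -> output) agree.

    Two states can share a partition block iff, for every input, their
    outputs are equal or at least one of them is still unknown (absent).
    """
    for i in f.keys() & g.keys():
        if f[i] != g[i]:  # both outputs known, but conflicting
            return False
    return True

def merge(f, g):
    """Combine two locally consistent output functions into one."""
    return {**f, **g}

# DFA/Moore labels are the special case of a single pseudo-input:
accepting = {'label': True}
rejecting = {'label': False}
unknown   = {}                    # label not yet observed

assert locally_consistent(accepting, unknown)        # unknown merges with anything
assert not locally_consistent(accepting, rejecting)  # True vs. False conflict

# Mealy states: output functions over actual inputs
s1 = {'a': 0, 'b': 1}
s2 = {'a': 0}             # output for 'b' still unknown
s3 = {'a': 1}
assert locally_consistent(s1, s2)
assert not locally_consistent(s1, s3)
assert merge(s1, s2) == {'a': 0, 'b': 1}
```

The same check covers DFAs, Moore machines, and Mealy machines, which is exactly the generalization argued above.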
-
Thank you.
-
Hi, I'm Sangjun Park, and I'm interested in your project.
Going through your project, I found the RPNI learning algorithm, which can learn a DFA from a positive and a negative set.
However, I'm curious about using it for Mealy machines. My question is: "Is it possible to define a negative set for a Mealy machine?"
I mean, for network packet traces, how can I define a negative set of network traces?
Best Regards,
Sangjun Park