[Linux/2.0.3] ubridge is extremely greedy when a cloud node with a loopback interface is linked to a qemu node #36
The 2 uBridge processes you see are for the cloud and the Ansible node. uBridge uses CPU power to receive and send packets. It makes sense that the one for the cloud is busy, since it must be receiving packets from your network. However, it doesn't make sense for your Ansible node. Can you isolate the issue to 1 or 2 nodes?
First off, here is the log
@grossmj
Yes, the cloud is powered by uBridge. There must be a lot of traffic on the loopback interface, which explains why uBridge takes resources to read it (this can be confirmed with a Wireshark capture on the lo interface). Indeed, the best way is to use a TAP interface instead.
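As an alternative to a Wireshark capture, on Linux the kernel's per-interface counters in /proc/net/dev can confirm whether lo is carrying traffic while the topology is supposedly idle. A minimal sketch; `parse_net_dev` and the sample text are illustrative, not part of GNS3 or ubridge:

```python
# Sketch: confirm traffic on the loopback interface by reading the
# kernel byte/packet counters in /proc/net/dev (Linux only).
# Field positions follow the standard /proc/net/dev layout:
# iface: rx_bytes rx_packets ... tx_bytes tx_packets ...

def parse_net_dev(text):
    """Return {iface: (rx_bytes, rx_packets, tx_bytes, tx_packets)}."""
    stats = {}
    for line in text.splitlines()[2:]:          # skip the two header lines
        iface, _, data = line.partition(":")
        fields = data.split()
        if len(fields) >= 16:
            stats[iface.strip()] = (int(fields[0]), int(fields[1]),
                                    int(fields[8]), int(fields[9]))
    return stats

# Illustrative sample; in practice read open("/proc/net/dev").read()
# twice a few seconds apart and compare the counters.
sample = """\
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo: 1234567   8910    0    0    0     0          0         0  1234567    8910    0    0    0     0       0          0
"""
print(parse_net_dev(sample)["lo"])  # -> (1234567, 8910, 1234567, 8910)
```

If the lo counters keep climbing while no node is running, something is generating the traffic.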
I will have a closer look.
I think it's "normal". On lo we have traffic between qemu and ubridge, and after that between the two ubridge processes. ubridge 2 = the cloud node connected to lo.
We can create a tap for lo, but we would still have the issue with other interfaces. We already create a tap if you try to connect to a bridge:
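The loop described above (qemu ↔ ubridge traffic, plus ubridge ↔ ubridge traffic, all on lo) can be sketched with a toy model: on lo, every injected frame is delivered back to every capture handle, so a frame forwarded by a bridge is re-captured and re-forwarded indefinitely. This is an illustrative simulation, not the real ubridge code; `simulate` and its `loopback` flag are hypothetical:

```python
# Toy model (assumption, not ubridge's actual implementation):
# a bridge attached to a loopback-like medium captures frames and
# re-injects what it forwards. On lo the injected frame is seen by
# the capture handle again, so one packet circulates forever and
# burns CPU; on a TAP-like interface it is forwarded once and done.

from collections import deque

def simulate(steps, loopback=True):
    """Count frames handled when a single frame appears on the medium."""
    medium = deque(["frame-0"])          # one real packet shows up
    handled = 0
    for _ in range(steps):
        if not medium:
            break
        frame = medium.popleft()
        handled += 1                     # bridge forwards the frame
        if loopback:
            medium.append(frame)         # lo echoes it back to the capture
    return handled

print(simulate(1000))                    # lo: the frame loops, 1000 handled
print(simulate(1000, loopback=False))    # tap: handled once, then idle
```

This is why switching the cloud node from lo to a TAP interface sidesteps the problem.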
@noplay yes, I think this is what is going on. I will do a quick test to confirm this.
I could not reproduce the issue with one ASAv (not running) attached to a "cloud" on the lo interface. I do not see any excess traffic. I would like to understand why you get a loop when using a loopback interface.
I actually can reproduce the issue (except for the high CPU usage). Somehow, when a uBridge process is attached to the loopback interface, ICMP packets are generated. I suspect an issue with libpcap.
Some more information about my setup:
@grossmj However, I do notice a few pings generated automatically and endlessly as soon as GNS3 is launched, before any project is loaded. There is also some non-ICMP traffic on port 9000, but I guess that one is expected.
GNS3 doesn't send pings. There is no code in the current GNS3/ubridge that generates ICMP packets. Do you see that after starting GNS3 when no topology is open?
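If in doubt whether the captured frames really are ICMP, the IPv4 protocol field settles it: it sits at byte 9 of the IPv4 header (per RFC 791), and ICMP is protocol number 1 (TCP is 6, UDP is 17). A minimal sketch; `ip_protocol` and the hand-built header are illustrative:

```python
# Sketch: classify a raw IPv4 packet by its protocol field
# (offset 9 in the IPv4 header, per RFC 791). ICMP = 1.

import struct

def ip_protocol(packet: bytes) -> int:
    """Return the protocol number of a raw IPv4 packet."""
    if len(packet) < 20 or packet[0] >> 4 != 4:
        raise ValueError("not an IPv4 packet")
    return packet[9]

# Hand-built 20-byte IPv4 header carrying ICMP (addresses are dummies,
# checksum left at zero for brevity).
icmp_header = struct.pack("!BBHHHBBH4s4s",
                          0x45, 0, 20, 0, 0, 64, 1, 0,
                          bytes([127, 0, 0, 1]), bytes([127, 0, 0, 1]))
print(ip_protocol(icmp_header))  # -> 1 (ICMP)
```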
GNS3 gui/server 2.0.3
ubridge 0.9.11
GNS3 is launched, but no project is open:
![before opening project](https://user-images.githubusercontent.com/13176858/27738467-b007fe16-5dab-11e7-8f2c-245a674d4fa4.jpg)
Opening an existing 2.0.3 project with a single non running IOS-XRv 6.1.2 qemu node:
![after opening an ios-xrv project](https://user-images.githubusercontent.com/13176858/27738537-e54ba866-5dab-11e7-860f-1ce4c91b6e11.jpg)
![process usage](https://user-images.githubusercontent.com/13176858/27738567-f6ea577a-5dab-11e7-9118-5a5da453ca1d.jpg)
![project](https://user-images.githubusercontent.com/13176858/27738905-2385971c-5dad-11e7-8498-220e6f55e9ef.jpg)
Checking which process needs so much CPU:
ubridge should NOT be running, since no node is currently running.
If I load another project with other type(s) of qemu node, such as ASAv or CSR 1000v, ubridge is not running.
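One way to quantify "which process needs so much CPU" is to sum the %CPU column of `ps aux` output for ubridge processes. A sketch; the helper and the sample output are illustrative, not taken from this report:

```python
# Sketch: sum the CPU share of all ubridge processes from `ps aux`
# output. The sample text below is made up for illustration.

def ubridge_cpu(ps_output):
    """Sum the %CPU column for lines whose command mentions ubridge."""
    total = 0.0
    for line in ps_output.splitlines()[1:]:     # skip the header row
        fields = line.split(None, 10)
        if len(fields) == 11 and "ubridge" in fields[10]:
            total += float(fields[2])           # %CPU is the third column
    return total

ps_sample = """\
USER  PID %CPU %MEM    VSZ   RSS TTY STAT START TIME COMMAND
root  901 48.3  0.1  20000  4000 ?   S    10:00 1:23 ubridge -H 127.0.0.1:41111
root  902 47.9  0.1  20000  4000 ?   S    10:00 1:21 ubridge -H 127.0.0.1:41112
user  950  0.3  0.5  90000 30000 ?   S    10:00 0:02 gns3server
"""
print(round(ubridge_cpu(ps_sample), 1))  # -> 96.2 (two busy ubridge processes)
```

A non-zero total while no node is running is exactly the symptom reported here.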