
Question about multi-link aggregation #2

Open
woriss opened this issue Apr 26, 2024 · 3 comments
Comments


woriss commented Apr 26, 2024

Does this project support aggregating bandwidth across multiple links, thereby improving both stability and throughput? Are there any examples of multi-link configurations available?

max-niederman (Owner) commented

At the moment it will not aggregate bandwidth, only provide address redundancy, but the current implementation was designed with the intent of making bandwidth aggregation easy to add in the future. It's one of the things I have planned for after I split the implementation into a daemon with an API that the CLI talks to, so that you can dynamically reconfigure the interface (for, e.g., roaming support).
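To make the distinction concrete, here is a minimal, hypothetical sketch (plain UDP sockets, not Centipede's actual code) contrasting redundancy, where every packet goes out on every link, with aggregation, where packets are spread across links:

```rust
// Illustrative sketch only, not Centipede's implementation.
use std::net::UdpSocket;

/// Address redundancy: every packet is sent on every link, so losing any one
/// link loses no packets, but you get no extra throughput.
fn send_redundant(links: &[UdpSocket], remote: &str, packet: &[u8]) -> std::io::Result<()> {
    for link in links {
        link.send_to(packet, remote)?;
    }
    Ok(())
}

/// Bandwidth aggregation: packets are spread across links (round-robin here),
/// so throughput can approach the sum of the links' capacities.
fn send_aggregated(links: &[UdpSocket], remote: &str, packets: &[&[u8]]) -> std::io::Result<()> {
    for (i, packet) in packets.iter().enumerate() {
        links[i % links.len()].send_to(packet, remote)?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Two "links"; in practice each socket would be bound to a different local interface.
    let links = [UdpSocket::bind("0.0.0.0:0")?, UdpSocket::bind("0.0.0.0:0")?];
    let packet = b"tunneled payload";
    send_redundant(&links, "127.0.0.1:9000", packet)?;
    send_aggregated(&links, "127.0.0.1:9000", &[packet, packet])?;
    Ok(())
}
```

(A real implementation would also need to handle reordering and per-link loss on the receive side, which redundancy largely sidesteps.)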

As for multi-link configuration examples, all you need to do is add more addresses to recv_addrs and peers[i].local_addrs. You can also add new remote addresses, but it's not necessary.
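For illustration, a multi-link setup might look roughly like this (a hypothetical sketch in TOML-like syntax; only recv_addrs, local_addrs, and the optional remote addresses come from the description above, and the actual file format and surrounding fields may differ):

```toml
# Hypothetical multi-link configuration sketch; the exact schema is an assumption.

# Receive tunnel traffic on two local links (e.g. ethernet and LTE).
recv_addrs = ["192.0.2.10:5000", "198.51.100.20:5000"]

[[peers]]
# Send to this peer from both local links...
local_addrs = ["192.0.2.10:5000", "198.51.100.20:5000"]
# ...and optionally list additional remote addresses for the peer (not required).
remote_addrs = ["203.0.113.5:5000"]
```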


woriss commented Apr 26, 2024

If bandwidth aggregation across multiple links could be achieved while also seamlessly handling NAT traversal, that would be fantastic. Glorytun might offer some insights in this regard. Multipath QUIC is also an emerging technology; I suggest considering data-stream balancing at the protocol level, leveraging MPTCP or Multipath QUIC, while Centipede focuses on the tunneling.

https://multipath-quic.org/
https://github.com/angt/glorytun

max-niederman (Owner) commented

I hadn't been thinking of it this way, but the current implementation actually already contains a generic transport protocol within Centipede. Right now, Centipede doesn't inspect the packets it tunnels at all; it just shuttles them between TUN devices on different machines using that generic transport protocol (although this will change very soon so that you can effectively have networks of more than two peers).
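Conceptually, the shuttle is nothing more than copying opaque packets between the TUN device and the transport. A hypothetical sketch (generic Read/Write stand-ins, not Centipede's real types, with the two directions serialized only to keep it short):

```rust
use std::io::{Read, Write};

/// Shuttle packets between a TUN-like device and a transport without ever
/// inspecting their contents. Both directions are serialized here purely for
/// brevity; a real implementation would drive them concurrently.
fn shuttle<D, T>(tun: &mut D, transport: &mut T) -> std::io::Result<()>
where
    D: Read + Write,
    T: Read + Write,
{
    let mut buf = [0u8; 1500]; // roughly one MTU-sized packet at a time
    loop {
        let n = tun.read(&mut buf)?;       // packet leaving this machine
        transport.write_all(&buf[..n])?;   // forwarded verbatim to the peer

        let n = transport.read(&mut buf)?; // packet arriving from the peer
        tun.write_all(&buf[..n])?;         // injected verbatim into the local TUN device
    }
}
```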

I think that, as you say, it would probably be a good idea to cleanly separate the transport and tunneling parts of the code. I wouldn't be opposed to using an existing multipathing transport protocol, but I'm skeptical of using MPQUIC because (unless I'm misunderstanding) its stream-based nature will introduce unnecessary head-of-line blocking in tunneled connections.
