Replies: 4 comments 4 replies
-
Answering this part that's most relevant to what I'm working on:
Yes, that's exactly how the first implementation of iroh-net's browser support will work, and how the last prototype works.
-
Regarding connection by node id: we have several built-in mechanisms to allow this, and you can also implement your own. The trait to configure discovery is Discovery. For now (this might change in the future, we are currently discussing it) an iroh-net endpoint has no discovery mechanisms preconfigured. The discovery mechanisms that we currently enable by default in iroh are based on signed DNS records, see DnsDiscovery and PkarrPublisher. If you use these two, you will use iroh infrastructure for discovery. We also have a fully peer-to-peer discovery mechanism, DhtDiscovery, which uses the BitTorrent mainline DHT and therefore does not require you or us to run any additional infrastructure. I wrote an entire blog post about this topic: https://www.iroh.computer/blog/iroh-global-node-discovery , and also gave some talks, e.g. https://youtu.be/uj-7Y_7p4Dg?t=1013
As for your actual question: I would not recommend racing different relays. I tried it at some point before we had node discovery, and there were some issues with internal state. Trying them one by one would work, but that would be painfully slow if you have a larger number of relays.
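For reference, wiring the DNS-based mechanisms into an endpoint looks roughly like this. This is a sketch against a recent iroh-net release; the exact builder methods and constructor signatures may differ between versions:

```rust
use iroh_net::{
    discovery::{dns::DnsDiscovery, pkarr::PkarrPublisher, ConcurrentDiscovery},
    key::SecretKey,
    Endpoint,
};

async fn endpoint_with_discovery() -> anyhow::Result<Endpoint> {
    let secret_key = SecretKey::generate();
    // Publish our own signed record and resolve other nodes' records
    // via the n0-hosted DNS/pkarr infrastructure.
    let discovery = ConcurrentDiscovery::from_services(vec![
        Box::new(PkarrPublisher::n0_dns(secret_key.clone())),
        Box::new(DnsDiscovery::n0_dns()),
    ]);
    let endpoint = Endpoint::builder()
        .secret_key(secret_key)
        .discovery(Box::new(discovery))
        // Bind to an OS-assigned port; newer releases may take no argument here.
        .bind(0)
        .await?;
    Ok(endpoint)
}
```

With this in place you can connect to another node by its NodeId alone and let discovery resolve the relay and direct addresses.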
-
From a network topology point of view that sounds good, I think. Regarding holding connections to thousands of clients: it depends :) If those connections are mostly idle you will be able to handle more of them than if they are constantly active. In general, however, we do not have much production experience with this scenario. In principle it should be fine, but we may need to make some optimisations. You probably want to make sure that your application protocol is designed in such a way that the servers can scale horizontally. If all servers need a full mesh to each other that might not be so easy, but since the protocol is unidirectional I guess you can fairly easily put aggregators into your server mesh, and that would give you a huge scaling boost.
-
Okay, it seems to me that iroh-net is indeed a good fit. Thanks to everyone for your timely answers. I will continue to experiment with the integration and get back to you when further questions arise. One question is still open though: are you still planning on achieving protocol stability this month?
-
Hello everyone,
I am another Fedimint (https://github.com/fedimint/fedimint) developer; we previously reached out to you about backwards compatibility in #2428.
I am currently experimenting with moving our networking from JSON-RPC over to iroh-net. So far I can say that working with iroh-net is a joy: everything just works right away, and it seems like a great fit for our project in many ways.
In this discussion I would like to explain our use case and check whether you agree that it is a good fit, or whether you already foresee issues that might come up in production, since our use case is quite different from the file-sharing scenario often used as an example. I also have a few concrete questions.
We have a client-server architecture where the server side is federated: a minimum of four and up to twenty servers collaborate to form one Byzantine fault tolerant system, which can make progress as long as two thirds of the servers are online, even if up to one third of the servers are malicious. The state of the servers is eventually consistent, meaning they achieve consensus on the same system state.
This leaves us with the following networking requirements: We need a client-server request-response layer where a response might take hours to be ready; we currently use JSON-RPC over WebSockets for this. The client needs to maintain one connection per server, so 4-20 connections. Furthermore, every server needs to broadcast messages to the other servers with low latency, so every server maintains a connection to every other server; we also use WebSockets here.
Right now every server has to configure its own domain and is therefore difficult to run on a local machine. It is also not just one big Fedimint that we run ourselves; the system is mostly run by much less technical people for their local communities. Therefore we need to make the setup as easy and foolproof as possible, and configuring a domain is a big hurdle here that prevents them from running on home servers like a Raspberry Pi. Furthermore, we want to cryptographically authenticate every response, so iroh-net solves several problems at once for us.
In my prototype I used bidirectional streams for the request-response calls and unidirectional streams to send messages between servers. For every incoming connection from a client the server spawns a task to handle it, and that task spawns a new task for every request made.
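A simplified sketch of that accept loop (not the exact prototype code; `handle_request` and `handle_peer_message` stand in for our application logic, and the exact iroh-net re-export paths may differ by version):

```rust
use iroh_net::endpoint::{RecvStream, SendStream};

async fn handle_request(_send: SendStream, _recv: RecvStream) {
    // Application-specific request handling; the response may take hours.
}

async fn handle_peer_message(_recv: RecvStream) {
    // Application-specific handling of broadcast messages from other servers.
}

async fn run_server(endpoint: iroh_net::Endpoint) -> anyhow::Result<()> {
    while let Some(connecting) = endpoint.accept().await {
        let connection = connecting.await?;
        // One task per incoming connection.
        tokio::spawn(async move {
            loop {
                tokio::select! {
                    // Client requests arrive on bidirectional streams; a task per
                    // request keeps a slow response from blocking the connection.
                    Ok((send, recv)) = connection.accept_bi() => {
                        tokio::spawn(handle_request(send, recv));
                    }
                    // Inter-server broadcast messages arrive on unidirectional streams.
                    Ok(recv) = connection.accept_uni() => {
                        tokio::spawn(handle_peer_message(recv));
                    }
                    // The connection was closed or failed.
                    else => break,
                }
            }
        });
    }
    Ok(())
}
```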
Do you see any issues with that approach?
Do you see any issues with the server holding connections to a thousand clients at the same time?
Furthermore, we would prefer to be able to connect to just a public key without any other non-static information. In my prototype I use a helper function to connect to a public key; in production we would run our own relays in addition to a hardcoded list, but probably no more than 8 or so in total. As we only trust the relays for availability, a hardcoded list of them is fine for us.
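For illustration, a minimal version of such a helper might look like the following (a sketch only: the relay handling, ALPN, and type paths are placeholders, not our actual configuration):

```rust
use iroh_net::{relay::RelayUrl, Endpoint, NodeAddr, NodeId};

// Placeholder ALPN for this example.
const ALPN: &[u8] = b"example/fedimint/0";

async fn connect_by_node_id(
    endpoint: &Endpoint,
    node_id: NodeId,
    relays: &[RelayUrl],
) -> anyhow::Result<iroh_net::endpoint::Connection> {
    // Try the hardcoded relays one by one; if a discovery mechanism is
    // configured on the endpoint, it can also resolve the node id without
    // any relay hint.
    for relay in relays {
        let addr = NodeAddr::new(node_id).with_relay_url(relay.clone());
        if let Ok(conn) = endpoint.connect(addr, ALPN).await {
            return Ok(conn);
        }
    }
    anyhow::bail!("could not reach {node_id} via any configured relay")
}
```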
Do you see any problems with that approach?
Finally, do you still intend to achieve protocol stability in October? And if we were to run the client in the browser, would the connections to multiple servers all be multiplexed over one WebSocket connection to a relay?