[Xorp-hackers] Thrifted XrlSender

Bruce Simpson bms at incunabulum.net
Mon Nov 9 06:57:52 PST 2009


Bruce Simpson wrote:
>    I'm still not 100% happy with how the XrlPFSender cache mechanism 
> works. Because we're holding a pointer to something whose lifecycle is 
> managed somewhere else, there really isn't any other answer than the 
> one you've already suggested.
> ...
>
>    In Thrift, the binary blobs themselves can be decoupled from where 
> they go. There is a potential chicken-and-egg problem if we support 
> multiple TProtocol types, where we would need to know the sender 
> before the stubs create the blob; if we just speak TBinaryProtocol to 
> everything, we don't have this problem, but it does mean we can't just 
> tell the XORP RPC endpoints 'speak JSON to this guy', 'speak XML-RPC 
> to this guy', 'speak AMQP to this guy' etc.
>
>    So the idea of caching the transport we'd prefer to transmit from 
> is still one that bears further scrutiny, even in a re-spin. The 
> difference is that, for maximum flexibility, Thrifted XRL stubs would 
> actually want to see XrlPFSender's equivalent upfront, before 
> XrlSender::send() is even called.

I'm thinking the best way forward here is to assume the use of 
TBinaryProtocol in all situations.
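
To make that concrete, here is a minimal sketch of what "TBinaryProtocol 
everywhere" buys us, against the Thrift C++ bindings (exact header paths 
and shared_ptr flavour vary between Thrift releases). The stub can 
serialize a call into a memory transport before it knows anything about 
the peer; the "resolve"/"target" message and field names below are 
invented for illustration, not a real XORP interface.

    #include <vector>
    #include <boost/shared_ptr.hpp>
    #include <thrift/protocol/TBinaryProtocol.h>
    #include <thrift/transport/TBufferTransports.h>

    using namespace apache::thrift::protocol;
    using namespace apache::thrift::transport;

    // Render a call into a standalone binary blob: no socket, and no
    // knowledge of the peer, because every peer speaks TBinaryProtocol.
    std::vector<uint8_t>
    render_call()
    {
        boost::shared_ptr<TMemoryBuffer> mem(new TMemoryBuffer());
        TBinaryProtocol prot(mem);

        prot.writeMessageBegin("resolve", T_CALL, 1 /* seqid */);
        prot.writeStructBegin("resolve_args");
        prot.writeFieldBegin("target", T_STRING, 1);
        prot.writeString(std::string("fea"));
        prot.writeFieldEnd();
        prot.writeFieldStop();
        prot.writeStructEnd();
        prot.writeMessageEnd();

        uint8_t* data;
        uint32_t size;
        mem->getBuffer(&data, &size);
        return std::vector<uint8_t>(data, data + size);  // the blob we queue
    }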

    As libxipc currently stands, calls through the XrlSender::send() 
interface need not know the destination endpoint first; they can be 
temporarily buffered whilst the FinderClient lookup completes. That 
lookup is asynchronous, and its completion is dispatched from the 
EventLoop.

    We need to buffer the outgoing RPC call at this point. In a language 
where the call stack is independent of the object stack, e.g. Python, 
it's easier to use a continuation to await the result of a pending 
operation. However, in C++ we can't easily split the XRL output 
marshaling like this, so we buffer it.
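
As a sketch of that buffering, with hypothetical names throughout 
(PendingSender, RenderedCall, and the resolve() shape are not libxipc's 
actual API, and boost::bind stands in for XORP's own callback 
machinery): send() parks the fully marshaled blob until the Finder 
lookup completes, and never blocks on resolution itself.

    #include <list>
    #include <map>
    #include <string>
    #include <vector>
    #include <boost/bind.hpp>
    #include <boost/function.hpp>

    // Assumed shape of the Finder lookup, only to make the sketch
    // self-contained; the real FinderClient interface differs.
    struct FinderClient {
        typedef boost::function<void (bool success)> ResolveCB;
        void resolve(const std::string& target, const ResolveCB& cb);
    };

    typedef std::vector<uint8_t> RenderedCall;

    class PendingSender {
    public:
        explicit PendingSender(FinderClient& finder) : _finder(finder) {}

        // Queue the rendered call; start a lookup only for the first
        // call to a given target.
        void send(const std::string& target, const RenderedCall& blob) {
            bool first = _pending[target].empty();
            _pending[target].push_back(blob);
            if (first) {
                // Asynchronous: the completion below is dispatched
                // from the EventLoop, never from inside send().
                _finder.resolve(target,
                    boost::bind(&PendingSender::resolve_done,
                                this, target, _1));
            }
        }

    private:
        void resolve_done(const std::string& target, bool success);

        FinderClient& _finder;    // resolves target names to endpoints
        std::map<std::string, std::list<RenderedCall> > _pending;
    };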

    Now, the most likely source of performance issues with XRL is the 
intermediate representation. With Thrift, there is no intermediate 
representation: what we buffer is what we transmit.
    If we stick to using TBinaryProtocol, we need only render the blob 
into a buffer and dispatch that blob when the FinderClient lookup 
completes, which requires no change to the existing logic.
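
The completion path then needs nothing Thrift-specific: the buffered 
blob goes to the freshly resolved transport verbatim. Continuing the 
hypothetical sketch above (transport_for() is an invented helper; any 
connected Thrift TTransport would do):

    using apache::thrift::transport::TTransport;

    // Invented helper: maps a resolved target name to a connected
    // transport for that endpoint.
    boost::shared_ptr<TTransport>
    transport_for(const std::string& target);

    // On Finder completion, flush every buffered blob to the resolved
    // endpoint unchanged; there is no re-encoding step.
    void
    PendingSender::resolve_done(const std::string& target, bool success)
    {
        std::list<RenderedCall>& q = _pending[target];
        if (!success) {
            q.clear();              // real code would report the error
            return;
        }
        boost::shared_ptr<TTransport> t = transport_for(target);
        for (std::list<RenderedCall>::iterator i = q.begin();
             i != q.end(); ++i) {
            t->write(&(*i)[0], i->size());  // the blob is the wire format
        }
        t->flush();
        q.clear();
    }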

    Using TBinaryProtocol shouldn't be an impediment to future 
scalability or feature additions. AMQP is designed with relaying binary 
blobs in mind. A change of representation is only really useful if we 
need to interact with the processes using some other protocol, and 
there are better, more appropriate ways to do that.


