[Xorp-hackers] Boost use

Eddie Kohler kohler at cs.ucla.edu
Tue Apr 27 07:27:33 PDT 2010


Hey!

I am not a Xorp developer but I used to be.  For what it's worth, from the 
outside:

* Boost is a very well-designed and respected library.  It isn't a fanboy 
library or some jQuery plugin.  And well-designed C++ libraries are 
often, unfortunately, difficult to read -- try looking at an STL 
implementation.  So Ben's main objections don't really land for me.  But:

* Boost is a very LARGE library, and using all of its parts would be a mistake.
  I agree with Ben that improving XRL IPC is a better path than using Boost 
shared-memory magic.

* In my opinion, the most widely agreed-upon parts of Boost are shared_ptr, 
Math, MPI, and Thread.  As someone suggested, using the parts of Boost 
accepted for C++0x would not be a bad path.
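
For example, the shared_ptr piece is essentially what C++0x standardizes as
std::shared_ptr, so code written against it now should carry over.  A minimal
sketch (RouteEntry is a made-up type, purely for illustration):

#include <boost/shared_ptr.hpp>
#include <iostream>
#include <string>

// Made-up payload type, just so the example has something to point at.
struct RouteEntry {
    std::string prefix;
    explicit RouteEntry(const std::string& p) : prefix(p) {}
};

int main() {
    // boost::shared_ptr is the reference-counted pointer C++0x adopts
    // as std::shared_ptr; copying a handle bumps the count.
    boost::shared_ptr<RouteEntry> a(new RouteEntry("10.0.0.0/8"));
    boost::shared_ptr<RouteEntry> b = a;

    std::cout << a->prefix << " use_count=" << a.use_count() << std::endl;
    return 0;
}   // last handle goes out of scope here; RouteEntry is deleted exactly once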

Eddie


Ben Greear wrote:
> On 04/27/2010 03:23 AM, Bruce Simpson wrote:
> 
>>>> Do you honestly expect a developer under pressure to deliver rapidly
>>>> not to make the mistake of misreading a & symbol?
>>> There are all sorts of ways to write broken code, and bugs will go into
>>> the system, but we can fix the bugs, and we can review the code as it goes
>>> in to check for that sort of thing.
>>>
>>> I do expect developers to know the difference between value and ref, but if
>>> they don't, I'm more than happy to explain it to them and/or fix the
>>> bugs myself.
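
To make the '&' point concrete: the two toy functions below differ by exactly
one character, and that character decides whether the callee edits a private
copy or the caller's object (FibTable is a made-up name, sketch only):

#include <vector>

typedef std::vector<int> FibTable;   // stand-in for some table type

// Pass by value: 't' is a private copy, so the caller's table is untouched.
void add_route_copy(FibTable t)  { t.push_back(42); }

// Pass by reference: with one '&', 't' aliases the caller's table.
void add_route_ref(FibTable& t)  { t.push_back(42); }

int main() {
    FibTable fib;
    add_route_copy(fib);   // fib is still empty afterwards
    add_route_ref(fib);    // fib now holds one entry
    return fib.size() == 1 ? 0 : 1;
}
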
>> So this use of XORP is largely an educational exercise,
>> vs delivering a production router suite?
> 
> I am merely offering to help review code and help explain mistakes
> to people as I find them, regardless of their experience.
> 
> I would hope that others will review my code and return the favour,
> as I still make mistakes too.
> 
>> As you know, my views on this are strong:
>> we should have been looking to limit the use of C++ from the outset,
>> where it isn't actually needed in the system--
> 
> It's never too late to start making things better, and adding something
> like boost, which appears to use every C++ trick in the book, isn't going
> in the right direction, in my opinion.
> 
>> Templates are not going away; they are not an automatic foot-shooting
>> device, as many may well believe. Think of C++ as a surgeon's scalpel.
>>
>> However, time is always against us. My argument regarding the typo still stands.
> 
> I'll use templates when needed; I just don't prefer them when other techniques
> are easily available.
> 
> With regard to ref-ptr v/s boost, would there be any difference between passing
> a boost smart pointer by ref v/s val and passing a xorp ref-ptr by ref v/s val?
> If I understood the boost logic correctly, you get the same basic behaviour with both.
> 
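Answering that inline: assuming xorp's ref-ptr is, like boost::shared_ptr, a
reference-counted handle, the trade-off should indeed be the same for both.
A by-value pass copies the handle and bumps the count; a const-reference pass
just borrows the caller's handle.  A rough sketch with boost::shared_ptr
(Peer is a made-up type):

#include <boost/shared_ptr.hpp>

struct Peer { int asnum; };                    // made-up type for illustration

typedef boost::shared_ptr<Peer> PeerPtr;

// By value: the handle itself is copied, so the count goes up on entry
// and back down on return.
long by_value(PeerPtr p)        { return p.use_count(); }

// By const reference: no copy and no count traffic; the callee just
// borrows the caller's handle for the duration of the call.
long by_ref(const PeerPtr& p)   { return p.use_count(); }

int main() {
    PeerPtr p(new Peer());
    long with_copy    = by_value(p);   // sees a count of 2
    long without_copy = by_ref(p);     // sees a count of 1
    return (with_copy == 2 && without_copy == 1) ? 0 : 1;
}
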
>> A single-threaded binary is even less likely to scale to multicore--
>> this is in effect advocating ditching the architecture -- ain't gonna
>> happen--
> 
> Considering the current cost of IPC, you would be very likely to increase
> performance by consolidating the xorp processes into a single binary.  I'm
> not suggesting we do this, because I like that the individual modules are
> relatively simple and easy to debug; but just adding more processes is not
> always a performance gain.
> 
>>> The one part of bgp/harness logic that seems like it could feed MRTD
>>> data uses non-batched XRL commands, so it's definitely going to be slow.
>>> But I might be able to load the MRTD data on one xorp, then have it peer
>>> with another and test that the peers sync.
>> That is probably a better solution--
>> once again, sorry that the BGP harness couldn't be fixed on the XORP, Inc.
>> account; my former client's focus shifted
> 
> I now have it mostly sorted out in xorp.ct.  As soon as we get 1.7 out and I'm
> cleared to submit, I'll push it upstream.  A few bugs crept into bgp since
> the tests were disabled (or started being ignored), but I'm fixing or working
> around them as I find them; nothing huge so far.
> 
>>> I tested with the xorp XRL sender/receiver test.  I also looked
>>> at the code in bgp, and it all seems serialized (one XRL request
>>> outstanding at a time).  That means select, send, (process wake on the
>>> receiver, including flushing the cache), select, receive, process data,
>>> and send the result, for every XRL request.
>>> The original sender does no further work until it gets the response
>>> back, and the receiver does no work until it gets the next request.
>>> Batching would be a huge improvement here.
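
A toy model of why batching matters (the numbers below are invented, not
measurements): if every XRL costs one IPC round trip, then N one-at-a-time
requests pay that latency N times, while batches of B pay it roughly N/B
times.

#include <iostream>

int main() {
    // Invented figures, just to show the shape of the win.
    const double rtt_us   = 200.0;    // assumed cost of one IPC round trip
    const int    requests = 10000;    // XRLs to send
    const int    batch    = 64;       // assumed requests per batch

    double serialized_us = double(requests) * rtt_us;
    double batched_us    = double((requests + batch - 1) / batch) * rtt_us;

    std::cout << "one-at-a-time: " << serialized_us / 1e6 << " s of latency\n"
              << "batched:       " << batched_us    / 1e6 << " s of latency\n";
    return 0;
}
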
>> Yes, it should lead to some improvement,
>> and was implemented on the XORP, Inc. corporate branch, to my knowledge.
>>
>> There's nothing fundamentally wrong with how libxipc interfaces to other
>> code--
>> but what is an issue is that XRL is largely its own thing,
>> and was a mend-and-make-do solution at the time (before better alternatives came along)
> 
> "Better" is all hand waving until someone posts some actual performance numbers
> and/or shows a patch with significant code clean up.
> 
> In a volunteer project such as xorp has become, you cannot force people to use
> a particular method of development.  If someone posts patches and such, then
> whoever or whatever group is in charge can accept or decline the patches based
> on merit.  I have ongoing plans to continue to post patches and improve xorp,
> and I see no need for boost in my plans.  If you expect to make ongoing contributions
> to xorp, then you can post patches that use boost and show the actual advantages.
> But, if you do NOT have any plans to contribute in the near future, then don't
> expect others to write code using _your_ preferred methodologies.
> 
> Thanks,
> Ben
> 


