RFC: make static/dynamic analysis tools great again

You can jump to the “Proposal and RFC” section below if the motivation is too
long to read.

This list is usually used for non-technical decisions, but I think it would be
great to discuss an issue we have right now and a proposal on how to improve it.

Currently Coverity and other static analysis tools are tricked by the fact that
we don’t initialize class members. We rely on either the object being in a BSS
section (and thus zeroed by the compiler/OS) or being allocated with the new
operator (we have this file in place for that:
which globally replaces the new operator).
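For context, here is a minimal sketch of what such a global replacement can look like (the actual file in the tree may differ in details; the panic behavior on allocation failure is an assumption):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <new>

// Global *replacement* (not override) of operator new: every dynamic
// allocation in the program goes through calloc(), so heap objects come
// back zero-filled even when the class never initializes its members.
void *operator new(std::size_t size)
{
    if (size == 0) {
        size = 1;  // operator new must return a unique non-null pointer
    }
    void *p = std::calloc(size, 1);
    if (p == nullptr) {
        std::abort();  // assumption: a flight controller would panic here
    }
    return p;
}

void operator delete(void *p) noexcept
{
    std::free(p);
}
```

The default `operator new[]`/`operator delete[]` forward to these, so array allocations are zeroed as well.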

I always knew about this “issue” and tried to fix it a few times by initializing
the members, but this increases the binary size, which is particularly bad for
PX4 on old Pixhawk 1 boards that are almost out of flash space.

However, one problem I hadn’t run into yet is that recent versions of valgrind are
hit by the problem of globally replacing the new operator. I’m explicitly saying
replacing rather than overriding since this is different from the C++ override
mechanism. Valgrind hits this problem because it needs to replace the new operator
in order to track memory allocations… just like it does with malloc/calloc and other
functions. The difference is that malloc/calloc live in libc, so valgrind can
simply wrap the calls rather than replace them. For the new operator, since it’s
embedded in the binary, it can’t. This was my rough diagnosis of the situation
after a quick look at valgrind’s source code. You can check the tons of
"use of uninitialized memory" warnings if you use valgrind > 3.11.

Besides being a problem for static/dynamic analysis tools, it’s also a problem for
managing the project: there’s currently no way to forbid objects from being
created on the stack. With the policy that “we don’t initialize members to zero in
order not to increase the flash size”, this can create subtle and difficult-to-track
bugs. One of the reasons they are difficult to track is that the tools that would
report the misuse were rendered useless, or at least hard to use, by this policy.

I reached out to Coverity support once to try to overcome this using their
modelling tools, and the feedback I got was:

    "This type of UNINIT_CTOR defect cannot be suppressed by modelling. They will
    have to either use Annotations in their code or explore some of the options
    of UNINIT_CTOR to see if something there might help."

So I lost hope of that ever happening.


Attached are 2 draft .cpp files. The requirements I had were:

  1. Do not increase binary size
  2. Allow statically allocated objects
  3. Allow dynamically allocated objects
  4. Check that valgrind doesn’t explode
  5. Make the “we do not zero objects” behavior an opt-in thing: if
    we don’t explicitly provide a mechanism in the class definition/declaration,
    people should use the initialization methods; this forces people to think about
    whether the object can be allocated on the stack



This would be something to change in AP_InertialSensor, AP_Baro, or any other object
that is made static in AP_HAL_xxxx/AP_HAL_Class.cpp.

The #define NEW 1 is there just to be able to quickly switch from the current
implementation to the new one.

A bonus point here is to forbid copy construction, which we have in some of our classes.
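For reference, with C++11 that bonus point is just a matter of deleting the copy operations, so accidental copies fail at compile time (the class name here is made up for illustration):

```cpp
#include <cassert>
#include <type_traits>

// Deleting the copy constructor and copy assignment makes any attempt
// to copy this class a compile-time error.
class NonCopyable {
public:
    NonCopyable() = default;
    NonCopyable(const NonCopyable &) = delete;
    NonCopyable &operator=(const NonCopyable &) = delete;
};
```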

As for size:

$ arm-none-eabi-g++ -DNEW=0 -std=gnu++11 -c -Os -o static-allocation.old.o static-allocation.cpp
$ arm-none-eabi-g++ -DNEW=1 -std=gnu++11 -c -Os -o static-allocation.new.o static-allocation.cpp
$ arm-none-eabi-size static-allocation.*.o
   text    data     bss     dec     hex filename
    124       4      16     144      90 static-allocation.new.o
    124       4      16     144      90 static-allocation.old.o



Instead of defining a global new operator, we let those objects that are supposed
to be dynamically allocated override the new operator themselves. I created a base
class "CallocAllocator" that can be inherited from, but there are multiple ways to
achieve the same result.
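A minimal sketch of what the attached CallocAllocator draft could look like (everything except the CallocAllocator name is made up for illustration):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <new>

// Only classes meant for dynamic allocation inherit these class-level
// operators; their storage is zeroed before the constructor runs.
class CallocAllocator {
public:
    void *operator new(std::size_t size)
    {
        void *p = std::calloc(size ? size : 1, 1);
        if (p == nullptr) {
            std::abort();  // assumption: real code would handle failure
        }
        return p;
    }
    void operator delete(void *p) { std::free(p); }
};

// Opting in: this class may be heap-allocated and its members come back
// zeroed even though none of them has an initializer.
class ExampleSensor : public CallocAllocator {
public:
    int sample_count;   // deliberately not initialized in-class
    float offset;
};
```

Since `operator new` is an implicitly static member, `new ExampleSensor` picks up the inherited allocator automatically.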

This actually reduces the size. I haven’t investigated why yet, and I think at the
end of a complete conversion of the codebase the size will actually fluctuate a
little bit up or down.

$ size  test-new-overide.*.o
   text    data     bss     dec     hex filename
    149       0       0     149      95 test-new-overide.new.o
    165       0       0     165      a5 test-new-overide.old.o


As of now I don’t see any drawbacks except the non-trivial amount of manual work
to do the conversion. In the end I think it will be much better, though, and I’m
pretty confident Coverity will also be happy with the changes, allowing us to make
better use of it.


Lucas De Marchi

thanks Lucas!

I think the test-new-override.cpp is missing from your post; I’ve linked it here:


I think this is an interesting approach. The really annoying thing about this problem is that gcc is supposed to put zero-initialised variables in the bss already, and there is an option for it not to do that: -fno-zero-initialized-in-bss

For some reason that doesn’t seem to work. I’ve just done a few experiments, and it looks like it does work correctly for simple test cases. Maybe there is something about our build environment that is preventing zero data from going in the bss?

I just tried zeroing some variables in the AP_AdvancedFailsafe class, and it increased the text size. Ahh, I guess it has to work that way: if you initialise a variable in a class then the compiler has to add code to the constructor, as it will need to run when the object is declared on the stack. That increases flash size.

So your approach just makes this more efficient: instead of having code to initialise each variable separately in the constructor, it just calls the parent new in CallocAllocator.

ok, I think this makes sense to me. I assume we would hide it a bit more so it wouldn’t be so intrusive in the code?

hmm… BSS is zeroed, either in the binary (flash) or when the kernel maps the process memory and zeroes out those pages. When generating the code for the constructor, the compiler doesn’t know that there will never be an object created on the heap or stack, so it does need to generate code to set the members to zero. The linker could know that, but I don’t think it is implemented.
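For what it’s worth, the zeroing of statics isn’t just a BSS accident: the C++ standard guarantees zero-initialization of all static storage duration objects before any other initialization. A tiny demonstration (the type and names are made up):

```cpp
#include <cassert>

// A class in the style discussed here: no member initializers at all.
struct Telemetry {
    int packets;
    float rate;
};

// Static storage duration: the language guarantees these members are
// zero-initialized before anything else runs, which is why BSS-resident
// singletons "work" without initializers. A stack instance of the same
// type would have indeterminate members.
Telemetry g_telem;

int telemetry_packets() { return g_telem.packets; }
```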

I’m not sure I like hiding it. It makes the code more verbose, but it also forces people to think about whether that object will be on the stack, on the heap or in BSS. Just inheriting from CallocAllocator, or providing a static method for static initialization, is a good compromise I think.

This is far from my expertise… but from my point of view, improving our compatibility with analysis tools is good. My only concern is usability for new or normal users who don’t ask themselves too many questions about coding. I mean that this will need to be documented so everyone can understand why we are doing it, and it has to be simple to use!

Yes, bss is zeroed; that works fine. What doesn’t always work is that initialised data with a value of zero is put into the bss so it doesn’t consume flash. That is supposed to happen according to the gcc docs.
I think I understand why now, though: gcc does in fact move zero-initialised symbols from data into bss, but when we initialise a class member in the header we aren’t creating zero data, we are creating text in the constructor. That can’t be moved to bss.

Thanks, I edited the post to add links to them.

@lucasdemarchi I’m not sure I fully grasp the consequences of these changes yet, but here are some comments.

That’s just not going to work. 90% (or more) of people will do what you tell them without understanding when the object will be on the stack, on the heap or in BSS - or why that should change their decision about what to do. That means much more work reviewing PRs to make sure people did the right thing.

I’m not sure I get this requirement. Are you saying it is up to people to decide when to initialize variables or not?

This would be optional, right?

I think it’s much easier to say “hey, you are inheriting from CallocAllocator, so don’t need to set it to 0” than “don’t need to initialize to 0, there’s a hidden feature that makes sure it is, trust me” :wink:

I’ve explained the current hidden allocator over and over and tried to make it compatible with static tools. I closed PRs that were “fixing a bug of missing initializations”, and I commented on PRs regarding that. If we make the new one hidden too, we will continue to have the same problems.

You have 3 options: make it a statically allocated object, a dynamic one with our allocator, or follow C++ and initialize all members, even if to 0.
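For reference, the third option is just plain C++11 in-class initializers; a hypothetical example (names made up):

```cpp
#include <cassert>

// Option 3: initialize every member explicitly, even if to 0, using
// C++11 in-class initializers. This is safe for stack, heap and static
// storage alike, at the cost of constructor code and thus flash space.
class ExampleWidget {
public:
    int count() const { return _count; }
    float gain() const { return _gain; }
private:
    int _count = 0;
    float _gain = 0.0f;
};
```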

Well, I just say “you don’t need to initialize to 0”; people don’t understand why, and there is no reason to explain it. And the issue is that you are only talking about classes that will be changed to inherit from CallocAllocator - what about new classes? People just won’t know what to do and a reviewer will have to pay much more attention. Also, when someone introduces code creating a new object, you have to look up whether that class is prepared to be static, dynamic or both.

I understand wanting to make it work with static tools, but I think we’ll get broken code way faster with this new approach.

we will never know if we can’t try

True, I’m just afraid :smile:

What about if we added the CallocAllocator inheritance but also kept the global new() like we have now? The reason is that I am concerned that if someone forgets the inheritance then memory won’t be zeroed and we get a hard-to-track-down bug. The overhead of the global new() is pretty low, I think.

The plan is to leave it there as long as we need it.

@lucasdemarchi Just in case there is any doubt, this was given the go-ahead. I’m still afraid of the consequences, but progress was never made through cowardice :grin: So feel free to start making changes; this is a great time to do it, master is going through lots of refactoring.

@OXINARF I’m playing with it inside ArduPilot now. The static part works fine with valgrind, but coverity still complains :(. I’m trying to figure out alternatives.