Bug 484780 - "Class has virtual method but non-virtual destructor" warning should have a quick fix
Summary: "Class has virtual method but non-virtual destructor" warning should have a q...
Status: NEW
Alias: None
Product: CDT
Classification: Tools
Component: cdt-codan
Version: Next
Hardware: All
OS: All
Importance: P3 normal
Target Milestone: ---
Assignee: CDT Codan Inbox CLA
QA Contact: Elena Laskavaia CLA
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2015-12-21 16:20 EST by Nathan Ridge CLA
Modified: 2016-08-17 13:09 EDT
CC: 2 users

See Also:


Attachments

Description Nathan Ridge CLA 2015-12-21 16:20:02 EST
The quick fix should be "add a virtual destructor" if the class doesn't have an explicitly declared destructor, or "make destructor virtual" if it does.
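
For illustration (hypothetical class names), the two fixes would look roughly like this:

  // Before: virtual method, no user-declared destructor -> warning.
  class Shape {
  public:
      virtual void draw();
  };

  // "Add a virtual destructor" would insert, for example:
  //     virtual ~Shape() = default;
  //
  // If the class had already declared "~Shape();", "make destructor virtual"
  // would change that declaration to "virtual ~Shape();".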
Comment 1 Stefan Heinzmann CLA 2016-08-12 18:04:12 EDT
There is a third possibility: Make the destructor protected.

The warning actually is misguided, because it triggers on the wrong condition.

Rationale:
The problem that a virtual destructor is designed to avoid is this: if you call a nonvirtual destructor through a pointer or reference to the base class, only the base class destructor is called, even if the object is of a derived class.

Note that this problem doesn't depend on the presence of a virtual function. Having a nonvirtual destructor can lead to trouble even when there's no virtual function at all. The real factor is whether the derived class's destructor does important work that must not be skipped when destroying an instance of the derived class. The phenomenon is called "slicing" (only the base class slice of the object is destroyed).
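
A minimal sketch of the underlying problem, with hypothetical types:

  #include <string>

  struct Base {
      ~Base() {}                  // non-virtual destructor
  };

  struct Derived : Base {
      std::string name;           // adds cleanup work of its own
  };

  void destroy(Base* p) {
      delete p;                   // undefined behavior if *p is a Derived:
  }                               // only ~Base() is considered, so ~Derived()
                                  // (and the string's destructor) never runs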

The problem can be circumvented by introducing a virtual destructor in the base class, but that's only one of several possibilities. Another is to prevent the destructor from being called through a pointer or reference to the base; making it protected is the obvious way to achieve that. Hence the warning shouldn't be displayed when the nonvirtual base class destructor is protected.
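
For example (same hypothetical names as above):

  struct Base {
  protected:
      ~Base() {}                  // non-virtual, but protected
  };

  struct Derived : Base {};

  void destroy(Base* p) {
      // delete p;                // ill-formed: ~Base() is inaccessible here,
  }                               // so the dangerous call cannot even compile

  void destroyDerived(Derived* p) {
      delete p;                   // fine: destruction starts from Derived
  }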

Even better: The warning should be displayed when deriving from the class, not for the base class as such. It is the deriving that introduces the problem. Some classes aren't safe to derive from, but that may be intentional.
Comment 2 Nathan Ridge CLA 2016-08-12 18:20:06 EDT
(In reply to Stefan Heinzmann from comment #1)
> There is a third possibility: Make the destructor protected.

Good point, we should have that as a quick fix option.

> Even better: The warning should be displayed when deriving from the class,
> not for the base class as such. It is the deriving that introduces the
> problem. Some classes aren't safe to derive from, but that may be
> intentional.

Let me try to understand what's being proposed. Assume in all cases Base has a nonvirtual public destructor.

  1) Base has a virtual method, there are no derived classes
      ==> no warning

  2) Base has a virtual method, has derived class Derived
      ==> give a warning, but at the declaration of Derived,
          not at the declaration of Base

  3) Base has no virtual methods, has derived class Derived
      ==> give a warning at the declaration of Derived
      
Assuming I got that right, my thoughts on them:

(1) and (2) seem fairly reasonable. One potential downside I can see, is that if you're writing a library, you may be expecting your clients to derive from your base without having any derived classes yourself. In such a case, only the clients would see the warning, but the required fix is on the library's side.

(3) sounds like it would flag an enormous amount of harmless non-polymorphic hierarchies.
Comment 3 Stefan Heinzmann CLA 2016-08-17 10:41:58 EDT
(In reply to Nathan Ridge from comment #2)

> (1) and (2) seem fairly reasonable. One potential downside I can see, is
> that if you're writing a library, you may be expecting your clients to
> derive from your base without having any derived classes yourself. In such a
> case, only the clients would see the warning, but the required fix is on the
> library's side.

That's a fair comment, but the actual problem here is that the correct interpretation would have to make assumptions about what the library writer wanted to achieve. In this case, you would want to know whether the library writer intended the class to be derived from or not. There are quite a few libraries out there (including the standard library) which contain classes that aren't meant to be derived from, yet that intent isn't encoded in the source.

C++11 and later make it possible to encode this properly using means of the language, but not all libraries use those features, perhaps because they want to remain backwards compatible.
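
The language feature presumably meant here is the C++11 "final" specifier; a small sketch:

  class NotForDerivation final {   // C++11: the intent is now encoded in source
  public:
      ~NotForDerivation() {}       // public, non-virtual, and harmless, because
  };                               // no class can derive from this one

  // class Oops : NotForDerivation {};   // error: base is marked 'final'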

> (3) sounds like it would flag an enormous amount of harmless non-polymorphic
> hierarchies.

That's true, but they aren't quite as harmless as they may seem, since slicing will occur whenever there should be a destructor call on the base class, and the derived class has a nontrivial destructor.

It may be quite involved for the parser to figure out when there's danger. There isn't, when the derived class has a trivial destructor. In other words, when the derived class doesn't add any members with a non-trivial destructor, and doesn't add a nontrivial destructor of its own. In those cases the warning would have to be suppressed. That's probably what I would attempt, provided the parser tracks for each type, whether it has a nontrivial destructor.
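
Roughly, the distinction would be (hypothetical types):

  #include <type_traits>
  #include <vector>

  struct Base {};

  struct Tag : Base {};                 // adds nothing that needs cleanup

  struct Risky : Base {
      std::vector<int> data;            // member with a nontrivial destructor
  };

  // Tag adds no cleanup of its own; destroying it as a Base loses nothing.
  static_assert(std::is_trivially_destructible<Tag>::value, "");
  // Risky does; destroying it as a Base would skip the vector's destructor.
  static_assert(!std::is_trivially_destructible<Risky>::value, "");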

It would of course be very cool if there was a way to detect slicing, i.e. the warning could be generated on the destructor call that would lead to slicing, but that's going to be tough.
Comment 4 Nathan Ridge CLA 2016-08-17 11:39:47 EDT
(In reply to Stefan Heinzmann from comment #3)
> (In reply to Nathan Ridge from comment #2)
> 
> > (1) and (2) seem fairly reasonable. One potential downside I can see, is
> > that if you're writing a library, you may be expecting your clients to
> > derive from your base without having any derived classes yourself. In such a
> > case, only the clients would see the warning, but the required fix is on the
> > library's side.
> 
> That's a fair comment, but the actual problem here is that the correct
> interpretation would have to make assumptions on what the library writer
> wanted to achieve. In this case, you would want to know if the library
> writer intended the class to be derived from or not. There are quite a few
> libraries out there (including the standard library) which contain classes
> that aren't meant to be derived from, yet it isn't encoded in the source.

The premise for (1) and (2) was "Base has a virtual method". That's a pretty strong signal that Base is meant to be derived from.

> > (3) sounds like it would flag an enormous amount of harmless non-polymorphic
> > hierarchies.
> 
> That's true, but they aren't quite as harmless as they may seem, since
> slicing will occur whenever there should be a destructor call on the base
> class, and the derived class has a nontrivial destructor.
> 
> It may be quite involved for the parser to figure out when there's danger.
> There isn't, when the derived class has a trivial destructor. In other
> words, when the derived class doesn't add any members with a non-trivial
> destructor, and doesn't add a nontrivial destructor of its own. In those
> cases the warning would have to be suppressed. That's probably what I would
> attempt, provided the parser tracks for each type, whether it has a
> nontrivial destructor.

The parser can figure out whether a class has a trivial destructor. However, I'm afraid that there would be a lot of false positives even for classes with nontrivial destructors, in hierarchies that are not meant to be used polymorphically (e.g. consider cases where a class inherits from a "tag" class, or other non-polymorphic uses of inheritance).

I think the presence or absence of a virtual function in the base class is a good enough signal for whether the base class is meant to be used polymorphically.

> It would of course be very cool if there was a way to detect slicing, i.e.
> the warning could be generated on the destructor call that would lead to
> slicing, but that's going to be tough.

Yes, this is the direction in which compilers have been moving. For example, recent versions of clang include -Wdelete-non-virtual-dtor (which warns on problematic deletes only) in -Wall, but not -Wnon-virtual-dtor (which is similar to CDT's warning).

For CDT, a bunch of additional infrastructure would need to be put into place for it to be able to detect problematic deletes in general. The particular sticking point is deletes that occur inside template functions where the type of the pointer being deleted is dependent on template parameters. In such a case, the delete can be problematic for some instantiations of the template and not for others.
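
A sketch of that sticking point, with hypothetical names:

  template <typename T>
  void dispose(T* p) {
      delete p;                       // problematic for some T, fine for others
  }

  struct Plain { ~Plain() {} };
  struct Poly  { virtual void f() {} };     // virtual method, non-virtual destructor
  struct PolyDerived : Poly { ~PolyDerived() {} };

  void user() {
      dispose(new Plain);             // this instantiation is harmless
      dispose<Poly>(new PolyDerived); // this one deletes a PolyDerived through
  }                                   // a Poly*, the problematic case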

  - To detect the problematic deletes, CDT would need to be able to instantiate
    the bodies of template functions. Infrastructure for doing so is being 
    developed in bug 490475, although for now the intention is to only use it 
    for purposes of C++14 constexpr evaluation.

  - To report a problematic delete in an instantiated context, CDT would have
    to indicate the location of the delete, as well as the call site that
    triggered the problematic instantiation (and if that's inside a template
    as well, then the call site that triggered *that* instantiation, and so
    on - basically, every element of the instantiation backtrace that a 
    compiler would output). CDT currently has no UI for this sort of thing.

Once these pieces of infrastructure are put into place, we can consider making this warning more accurate the ways compilers have.
Comment 5 Stefan Heinzmann CLA 2016-08-17 12:27:30 EDT
On further reflection, perhaps a workable solution would be this:

1) A warning should be generated for a class definition, when the class is abstract (has pure virtual functions), has a public non-virtual destructor, and is not declared final. Such a class can't be instantiated directly, so its only purpose can be to be derived from, hence the warning is appropriate to flag up the dangerous design. Of course, an abstract class which is declared final is a different kind of mistake, since it can't be used at all.
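
In code, rule 1 would fire on something like this (hypothetical):

  struct Interface {              // abstract, so its only purpose is to be derived from
      virtual void run() = 0;
      ~Interface() {}             // public and non-virtual -> warn on this class
  };

  struct Guarded {                // abstract, but the destructor is protected:
      virtual void run() = 0;     // no warning under rule 1
  protected:
      ~Guarded() {}
  };

  struct Odd final {              // abstract *and* final: unusable, a different
      virtual void run() = 0;     // kind of mistake, not this warning
  };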

2) A warning should be generated for a (potentially) polymorphic destructor call to a non-virtual destructor. There are many destructor calls which are sure to be non-polymorphic, and those don't trigger the warning. This rule catches cases when the base class isn't abstract, and may not even have virtual functions at all. This includes the harmless cases referred to above. Those are only flagged up when calling the destructor in a dangerous way.

The difficulty would presumably be how to determine whether a destructor call is potentially polymorphic or not. Many cases are simple: The destructor call of a member object (in another class or an array) is always non-polymorphic. The destructor calls of static or automatic variables are non-polymorphic. You would have to treat destructor calls through a reference or a pointer as polymorphic, except when it can be determined with certainty where the object was constructed, and what its type is.
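
For instance (hypothetical types, with D adding a nontrivial destructor to B):

  #include <string>

  struct B { ~B() {} };
  struct D : B { std::string s; };

  struct Holder { B member; };        // member subobject: non-polymorphic

  void f() {
      B automatic;                    // automatic variable: non-polymorphic
      static B s;                     // static variable: non-polymorphic
      B array[4];                     // array elements: non-polymorphic

      B* p = new D;
      delete p;                       // through a pointer: potentially
  }                                   // polymorphic, so this is the risky call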

These rules would trigger a warning in cases when there's a polymorphic destructor call even though there are no derived classes. Figuring out whether there are any derived classes would require collecting information from the entire source code of the application, which might be impossible in the case of dynamically linked code. So the warning may flag up cases that don't occur in practice, but I consider this acceptable.

One of the tricky cases would be this: you dynamically allocate an object with a non-virtual destructor and manage it through a std::unique_ptr allocated on the stack. The object's destructor is therefore called through the smart pointer when the smart pointer goes out of scope. You can effectively be sure of the actual type of the object, because its construction site is nearby. Ascertaining safe destruction, however, would need a rather thorough analysis of the intervening code, to see whether the actual type of the object managed by the smart pointer could change before destruction. It is probably not worth going that far. The destruction is through a pointer, hence it is potentially polymorphic, hence it can be considered risky and deserving of a warning.
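
That case in code (hypothetical):

  #include <memory>
  #include <string>

  struct B { ~B() {} };
  struct D : B { std::string s; };

  void g() {
      std::unique_ptr<B> p(new D);    // the construction site is right here, so
                                      // the dynamic type is knowable in principle
  }                                   // but destruction still goes through a B*,
                                      // so it would be treated as potentially
                                      // polymorphic and flagged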

> The parser can figure out whether a class has a trivial destructor. However,
> I'm afraid that there would be a lot of false positives even for classes
> with nontrivial destructors, in hierarchies that are not meant to be used
> polymorphically (e.g. consider cases where a class inherits from a "tag"
> class, or other non-polymorphic uses of inheritance).

My rules would trigger a warning only if trying to destroy an object through a pointer to the tag class, which I would consider unsafe and worthy of a warning. Have you got an example which would produce a warning according to my proposed rules even though there is no danger? It may well be the lack of my imagination, but none has occurred to me so far.

>   - To report a problematic delete in an instantiated context, CDT would have
>     to indicate the location of the delete, as well as the call site that
>     triggered the problematic instantiation (and if that's inside a template
>     as well, then the call site that triggered *that* instantiation, and so
>     on - basically, every element of the instantiation backtrace that a 
>     compiler would output). CDT currently has no UI for this sort of thing.

That's a major sticking point. You are right, the warning would have to produce output that would mimic the compiler output in order to be able to track back to the point of instantiation. I don't understand why this is a UI issue, however. Wouldn't it just make the warning text longer, similar to what a compiler produces? The parser itself must have some infrastructure to track back through template instantiations, otherwise it would mishandle many other, more important things.
Comment 6 Stefan Heinzmann CLA 2016-08-17 12:50:13 EDT
(In reply to Stefan Heinzmann from comment #5)

> My rules would trigger a warning only if trying to destroy an object through
> a pointer to the tag class, which I would consider unsafe and worthy of a
> warning. Have you got an example which would produce a warning according to
> my proposed rules even though there is no danger? It may well be the lack of
> my imagination, but none has occurred to me so far.

I have to retract this. A case where a plain C-style struct is destroyed through a pointer would trigger the warning, which is probably a very common case in code that uses C++ as a better C. So yes, maybe the only way is to treat classes without any virtual functions as exempt from my second rule.
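
For example (hypothetical):

  struct Point { int x, y; };         // plain C-style struct, no virtual functions

  void release(Point* p) {
      delete p;                       // rule 2 as stated would warn here, even
  }                                   // though nothing ever derives from Point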

One would want to detect cases where objects are being destroyed through a pointer in some part of the code, and at the same time the object's class is used as a base class in other parts of the code. That would require global code analysis, which may be impossible. Too bad.

So perhaps for the time being the only workable rule would be my first one, which is a slight change of the current behavior in that the warning only fires when the destructor is public.

Whether the second rule can be turned into something that produces sensible results is questionable at this point.
Comment 7 Nathan Ridge CLA 2016-08-17 13:09:28 EDT
(In reply to Stefan Heinzmann from comment #5)
> > The parser can figure out whether a class has a trivial destructor. However,
> > I'm afraid that there would be a lot of false positives even for classes
> > with nontrivial destructors, in hierarchies that are not meant to be used
> > polymorphically (e.g. consider cases where a class inherits from a "tag"
> > class, or other non-polymorphic uses of inheritance).
> 
> My rules would trigger a warning only if trying to destroy an object through
> a pointer to the tag class, which I would consider unsafe and worthy of a
> warning. Have you got an example which would produce a warning according to
> my proposed rules even though there is no danger? It may well be the lack of
> my imagination, but none has occurred to me so far.

I was referring to your earlier proposal of flagging a warning at the declaration of the derived class.

The newer proposal of flagging delete sites doesn't have this problem (except for the caveat you mentioned), but as discussed, it will take some work for CDT to implement.

> That's a major sticking point. You are right, the warning would have to
> produce output that would mimic the compiler output in order to be able to
> track back to the point of instantiation. I don't understand why this is a
> UI issue, however. Wouldn't it just make the warning text longer, similar to
> what a compiler produces?

You're right that we could just stick the instantiation backtrace into the text of the warning. I was just thinking that an IDE user might want to be able to click on elements of the backtrace and be navigated to the corresponding call sites, and that would require new UI.