I understand that virtual functions come with the overhead of an extra indirection (a vtable lookup) when calling a method. However, I guess that with modern architectural speed it is almost negligible.

  1. Is there any particular reason why all functions in C++ aren't virtual, as they are in Java?
  2. From my understanding, declaring a function virtual inside the base class is both sufficient and required. But when I write a parent class, I may not know in advance which methods will be overridden. Does that mean that, while writing a child class, someone might have to go back and edit the parent class? That seems inconvenient and sometimes impossible. (See the sketch after this list.)
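
To make point 2 concrete, here is a minimal sketch (the class names are made up): the `virtual` keyword has to appear in the base class, because only functions declared virtual there are dispatched dynamically through a base pointer or reference.

```cpp
#include <iostream>
#include <memory>

struct Animal {
    virtual void speak() const { std::cout << "...\n"; }  // overridable: dispatched at run time
    void walk() const { std::cout << "walking\n"; }       // non-virtual: bound at compile time
    virtual ~Animal() = default;
};

struct Dog : Animal {
    void speak() const override { std::cout << "woof\n"; }  // participates in dynamic dispatch
    void walk() const { std::cout << "trotting\n"; }        // merely hides Animal::walk
};

int main() {
    std::unique_ptr<Animal> a = std::make_unique<Dog>();
    a->speak();  // prints "woof"    -- virtual call, resolved through the vtable
    a->walk();   // prints "walking" -- non-virtual call, resolved from the static type
}
```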

Update:
Summarizing from Jon Skeet's answer below:

It is a trade-off between explicitly making someone realize that they are inheriting functionality (which has potential risks of its own; see Jon's answer) plus a potential small performance gain, at the price of less flexibility, more code changes, and a steeper learning curve.

Some other reasons from other answers:

Virtual functions can't (in general) be inlined, because inlining happens at compile time while the target of a virtual call is only known at run time. This has a performance impact whenever your functions would benefit from inlining.
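
A rough sketch of that difference, assuming a typical compiler (modern optimizers can sometimes devirtualize, so treat this as the common case rather than a guarantee); the class names are invented:

```cpp
struct Counter {
    virtual int next(int x) { return x + 1; }
};

struct FastCounter {
    int next(int x) { return x + 1; }  // non-virtual: trivially inlinable
};

int sum_virtual(Counter& c, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += c.next(i);   // call through the vtable; usually cannot be inlined
    return s;
}

int sum_direct(FastCounter& c, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += c.next(i);   // statically bound; typically inlined down to s += i + 1
    return s;
}
```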

There may be other reasons as well, which I would like to learn about and summarize.

There are reasons for controlling which methods are virtual beyond performance. While I don't actually make most of my methods final in Java, I probably should... unless a method is designed to be overridden, it probably shouldn't be virtual, IMO.

Designing for inheritance can be tricky - in particular, it means you have to document much more about what might call a method and what it might call. Imagine you have two virtual methods, and one calls the other - that has to be documented, otherwise someone could override the "called" method with an implementation that calls the "calling" method, unknowingly creating a stack overflow (or infinite loop if there's tail call optimisation). At that point you've also got less flexibility in your implementation - you can't switch it round later on.
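
A small C++ sketch of the hazard described here (hypothetical class names): format() is implemented in terms of formatLine(), but nothing in the interface says so, and an innocent-looking override that calls back into format() recurses forever.

```cpp
#include <string>

class Formatter {
public:
    virtual ~Formatter() = default;

    // Undocumented detail: format() is implemented in terms of formatLine().
    virtual std::string format(const std::string& s) { return formatLine(s) + "\n"; }
    virtual std::string formatLine(const std::string& s) { return s; }
};

class BadFormatter : public Formatter {
public:
    // The author didn't know format() calls formatLine(), so this override
    // recurses: format() -> formatLine() -> format() -> ... until the stack overflows.
    std::string formatLine(const std::string& s) override {
        return "[" + format(s) + "]";
    }
};
```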

Note that C# is a language similar to Java in many ways, but it decided to make methods non-virtual by default. Some people aren't keen on this, but I certainly welcome it - and I'd actually prefer that classes were non-inheritable by default too.

Essentially, it comes down to this advice from Josh Bloch: design for inheritance or prohibit it.
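
In C++ terms (since C++11), that advice can be expressed directly in code with the `final` specifier; a minimal sketch with made-up names:

```cpp
class Widget final {           // "prohibit it": no one can derive from Widget at all
    // ...
};

class Base {
public:
    virtual void step() {}     // "design for inheritance": a documented extension point
    virtual ~Base() = default;
};

class Derived : public Base {
public:
    void step() final {}       // still overrides Base::step, but closes it to further overriding
};
```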

  1. One of the main C++ principles is: you only pay for what you use ("the zero-overhead principle"). If you don't need the dynamic dispatch mechanism, you shouldn't pay for its overhead. (See the sketch after this list.)

  2. As the author of the base class, you have to decide which methods should be allowed to be overridden. If you're writing both classes, go ahead and refactor what you need. But it works this way because there has to be a way for the author of the base class to control how it is used.
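
A minimal sketch of point 1 (the exact numbers are implementation-defined, but on a typical 64-bit implementation this prints something like 4 vs 16):

```cpp
#include <iostream>

struct Plain {
    int x;
    void get() {}              // non-virtual: no per-object cost
};

struct Polymorphic {
    int x;
    virtual void get() {}      // virtual: each object typically carries a vtable pointer
};

int main() {
    // The second number is usually larger by (at least) one pointer, plus padding --
    // the per-object price of dynamic dispatch, on top of the indirection per call.
    std::cout << sizeof(Plain) << " vs " << sizeof(Polymorphic) << "\n";
}
```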

However, I guess that with modern architectural speed it is almost negligible.

This assumption is wrong, and it is, I guess, the main reason behind this decision.

Consider the case of inlining. C++'s sort function performs much faster than C's otherwise comparable qsort in some situations, because it can inline its comparator argument, while C cannot (due to the use of function pointers). In extreme cases, this can mean performance differences of around 700% (Scott Meyers, Effective STL).
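
A sketch of the comparison Meyers describes (actual speedups vary widely with compiler, optimization level, and data):

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Comparator passed to qsort: always an indirect call through a function pointer.
int cmp_int(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

void sort_both(std::vector<int>& v) {
    // C-style: the comparator cannot be inlined into qsort's already-compiled code.
    std::qsort(v.data(), v.size(), sizeof(int), cmp_int);

    // C++-style: std::sort is a template instantiated with the lambda's type,
    // so the comparison typically gets inlined into the sorting loop.
    std::sort(v.begin(), v.end(), [](int x, int y) { return x < y; });
}
```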

The same is true for virtual functions. We've had similar discussions before; for example, Is there any reason to use C++ instead of C, Perl, Python, etc.?