Lately, my job duties have involved a bit of C++ programming. Lucky for me, it’s mostly writing new code, as opposed to hacking on legacy code. C++ programs, in my experience, tend to have a low hackability factor, meaning that they generally require you to understand a lot about how the program works, and how your changes will fit into the architecture, compared to what making changes to a similar C program would require. Generally, a function in C can be examined in isolation much more readily than a C++ class method can be. The C++ method will require you, all too often, in my experience, to understand how the whole class works, how the classes it depends on work, and so on.
But, I’m still a beginner when it comes to C++. I did lots of object oriented programming with Turbo Pascal 5.5 and later, back in the ’80s, but since then, it’s been a diet of pretty much strictly C.
One thing I’m finding to be a bit odd about C++ is all the little things it does behind the scenes. Mainly, constructors and destructors, and especially the copy constructor, which isn’t exactly behind the scenes — you may have to write it, without being explicitly told that you need to write it. For instance, if your object contains pointers which get allocated at certain points, and deallocated in destructors, well, if you want to use the STL containers, e.g. vectors, or maps, or what have you, then you had better write a copy constructor (and a matching assignment operator) which allocates fresh memory in the copy, and copies the contents those pointers point to. This was a bit of a worrisome eye opener. In C, you’re used to being in complete control of memory allocation. The compiler won’t allocate anything for you automatically, or deallocate anything for you automatically. So, you know, if you’ve got an allocation problem, it’s because of something you did. With C++, there are these implicitly called constructors and destructors, which, I’m sure, once you get used to them, are fine. But, in the meantime, I’ve got this nagging doubt: am I aware of every implicit allocation and deallocation, and how they all interact? I feel much less confident about the correctness of my program. About 1000x less confident. I don’t know what I don’t know.
It also strikes me that this object oriented model imposes a penalty in complexity. For example, in C, it is easy to add “helper functions” to decrease the complexity of any given function — basically pulling some middle part of a function out and making it its own separate function. In C++, this is more difficult. You generally have to add a new member function, perhaps private, change the header file, etc. It feels like a much bigger change than the corresponding change made in C. A bigger lift, more effort required.
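Here is a small illustration of that refactor (the `Parser` class and its helper are hypothetical, invented for this sketch). In C, `is_comment` would just be a file-local `static` function in the .c file; in idiomatic C++, the helper lands in the class declaration, so the header changes and everything that includes it recompiles.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// parser.h part -- the newly extracted private helper has to be declared
// inside the class, so the header changes along with the .cpp file.
class Parser {
public:
    Parser() : count_(0) {}
    void process(const std::vector<std::string>& lines);
    int count() const { return count_; }

private:
    static bool is_comment(const std::string& line);  // the extracted helper
    int count_;
};

// parser.cpp part
bool Parser::is_comment(const std::string& line) {
    return !line.empty() && line[0] == '#';
}

void Parser::process(const std::vector<std::string>& lines) {
    for (std::size_t i = 0; i < lines.size(); ++i) {
        if (!is_comment(lines[i])) {
            ++count_;  // count the non-comment lines
        }
    }
}
```

To be fair, when the helper doesn’t need access to private members, C++ does allow the C-style move — a free function in an unnamed namespace in the .cpp — but the member-function route above is what the class-centric style usually steers you toward.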
Also, code reuse — in its simplest form — is discouraged by C++. In C, since functions aren’t so tightly coupled with the data structures they work on, as they are with C++, it’s generally easy to take a chunk of code from a C program and re-use it, in cut-n-paste fashion, in another C program. If you try this with C++, you find that you have to either a) extensively edit the function, including its interface, in order to extricate it from the class within which it resides, or b) discover that it relies on other classes to such a degree that trying to reuse it requires you to suck in all these other (for your new purposes) unrelated classes and header files in an ever-exploding network. At this point, you realize the C++ code you’re really trying to reuse is useful only as pseudo code — a guideline for your own re-write. Now, I can hear all the C++ programmers whining that code reuse shouldn’t be about cutting and pasting. I’ve got Donald Knuth on my side though:
I also must confess to a strong bias against the fashion for reusable code. To me, “re-editable code” is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace. — Donald Knuth
Also, I’ve noticed that about half the complexity of the program seems to be artificial — invented complexity introduced by the language. You spend a lot of time trying to figure out how you’re going to model the problem — even a simple problem — as objects in C++, designing interfaces, and so on. All of this is invented complexity, added on top of the intrinsic complexity of the problem you’re trying to solve.
Maybe it’s my long years of C programming which have biased me, but I find myself agreeing with Linus Torvalds. Here’s an interesting Google search: Linus Torvalds c++
Edit to add: And, I have seen enough C++ across different platforms to know that the STL, the “standard” template library, and Boost are anything but standard. C++, despite being nearly 20 years old now, is a very immature language, by which I mean, the basic features, and the basic libraries C++ programs depend on, are still, even today, in a state of flux. Imagine if, in C, “#include <stdio.h>” could not be depended on. Imagine if some vaguely esoteric language feature, like, say, the “?” operator, were not universally present. Imagine if the preprocessor kept changing how it handled the #define macro. That’s a bit like what C++ was like just two or three years ago. It is getting better, but it’s really still not there. C++, despite being nearly 20, still acts like an 11-year-old, a child. You can’t depend on it.
Edit again: I should add, none of my recent experience with C++ has made me regret the (admittedly over-the-top, tongue-in-cheek) joke which I invented some time ago. It goes like this:
I don’t think the guy who invented C++ knows the difference between “increment” and “excrement.”