50 Years Later: There's Still No Silver Bullet

Andrey Vikt. Stolyarov
November 17, 2025

to the memory of prof. Fred Brooks,
who passed away three years ago

A long, long time ago, in the prehistoric year 1975, when computers were still relatively large, programs were relatively small, and even the very profession of computer programmer was pretty exotic, Frederick Phillips Brooks Jr. published his brilliant The Mythical Man-Month. Twenty years later, the anniversary edition of the book included several new chapters, one of them named 'No Silver Bullet', which had initially been published as a separate paper in 1986.

Now that another 30 years have passed, in the middle of the insane 2025, some people have had the unpleasant duty of admitting that, once a development team starts to use various 'artificial intelligence' instruments, the effect is not exactly what they hoped for. The overall amount of code committed to repositories grows, measured in those 'lines of code', but at the same time similar tasks take more and more time to get solved. The overall productivity of the team, in any reasonable sense, hence falls notably, despite those repositories full of committed code. Another silver bullet candidate didn't work.

What's interesting here is that a lot of people seem not to give a damn about this observation. After all, those 'AI servants' do write some code, don't they? Why would we bother doing a job that can be delegated to computers?

There are a lot of obvious things to notice. The code 'written' by an LLM (which in fact has nothing to do with any intelligence, artificial or otherwise) often doesn't work at all and almost never works exactly as expected; it always takes a lot of time to double-check and correct; and, finally, its maintainability is so low that, once someone needs to make any changes, it is almost certain the code will have to be rewritten from scratch. This is not to mention the fact that the code may turn out to be a copy of something from the training set, either slightly modified or even verbatim, thus violating someone's copyright.

Guess what all these 'AI enthusiasts' do about all this? Well, they ignore it.

The so-called artificial intelligence has attracted a lot of attention, with some speakers even predicting that all computer programmers will lose their jobs soon; it is most likely all this hype that prevented the public from completely ignoring the fantastic productivity of artificial intelligence, which turned out to be negative. The problem is that AI is far from being the only silver bullet candidate in the history of computer programming. None of them really works. Brooks himself, in his 1986 'No Silver Bullet' essay, expressed some hope in, first, Ada and other high-level languages, and, second, the paradigm of object-oriented programming, at the same time dismissing any hopes related to 'graphical (or visual) programming', 'automatic programming' and — guess what — artificial intelligence (yes, back in 1986), as well as 'program verification', 'environments and tools', and some other sources of desperate hopes of the time. Four decades later, we know Ada was a dead end; none of the later programming languages made any significant breakthrough either. With object-oriented programming, things are not so obvious, but it would be weird to insist (now, in 2025) that OOP really fits the silver bullet role. I'll return to this later.

Anyway, AI is only notable here because its negative efficiency can't be ignored... well, at least not by everyone. Actually, that may only be because the AI-related panic has gone far beyond the IT community. Surprisingly enough, programmers prefer to look the other way when the time comes to recognize another global failure.

To better understand what I mean here, let me share some of my own experience. I wrote my first program in September 1988, as a 13-year-old schoolboy. A Soviet coffin-like computer, the DVK-1, which the school had at the time, became my first. It was a horrible thing, with neither networking nor peripherals, so there was nowhere to get any software from, nor any possibility to save the work for future sessions. Certainly a variety of peripherals for these machines existed, but the school had none of them. A primitive Basic interpreter sat in the computer's ROM, so one could write a small program and enjoy running it until it was time to switch the machine off, and then the program would vanish. No matter what, it was love at first sight; I firmly decided to become a computer programmer, and I did. Two years later I managed to get more or less regular access to a PC-compatible machine; well, it was another Soviet coffin-like masterpiece, named ES-1840, with two 5" floppy drives and a CGA display which, despite its name, was only capable of showing four shades of gray, but the machine was able to run MSDOS; for me, in the now-distant 1990, it was a real difference. I only had two to three hours a week on that machine, not too much for a little fanatic. Anyway, it was possible to learn Pascal using these machines; Turbo Pascal 5.5 provided object-oriented features, and, surprisingly, I understood what it was all about; there was nobody around who could explain the paradigm to me, as at the time object-oriented programming was still kind of exotic.

In 1992 I entered Lomonosov Moscow State University, and in 1995, being a third-year student, I got my first job as a programmer. The project I was invited to join was in C++, which I didn't know at that moment, but I understood object-oriented programming, was familiar with Borland Pascal's version of the Object Windows Library (whose interface was almost the same for Borland C++), more or less knew the Win16 API and, being a student, didn't expect a big salary, so they accepted me easily. It only took a couple of weeks for me to switch from Pascal to C++: I had some experience with plain C, and I knew what to expect from the object-oriented part of the language, so it wasn't a difficult switch for a neophyte fond of writing programs.

Borland C++ 4.0 shipped with some STL-like containers (not STL, strictly speaking, but very similar). We took a look at them, and we disliked the whole thing. I don't know the exact motivation of my teammates, but for me this dislike had obvious reasons: I had just started to understand C++ templates, and at that moment I hated the very idea of all the needed stuff being already written, leaving no room for my own template-based code. It was obvious to me how to implement my own containers, and I wished to do it. And, well, one day I did.

I even tried to use what I coded. At the time, I had two C++ projects at hand: the one at work, written in Borland C++, and the other was the very first version of InteLib, implemented for GCC under Linux. So I placed my brand new collection of container templates in both projects and tried to make some use of them. Colleagues at work didn't mind, but they didn't use my containers, just as they didn't use those from the Borland C++ library. On the other hand, I soon started to realize it is not very convenient to synchronize the version of my template collection between two projects; when one day another project appeared, a strong impression formed in my mind that I'd better not bother adding my templates there. Besides that, I soon realized there might be some problems related to intellectual property: the company I worked for could claim rights on all the code I wrote for them, and although they never did, the idea of ending up with my other project's code being non-distributable from the legal point of view was, well, not very comfortable. This legal problem has nothing to do with the general matter being discussed here: after all, standard libraries, as well as freely distributed ones, cause no trouble of this kind, so one can say “hey, just don't write them on your own, and you'll have no legal troubles”. The only reason to mention the problem here is that it caused me to consider the possibility of getting rid of these templates, at least in my pet project.

So one day I just sat down and rewrote the code of the pet project without those containers. Surprisingly, it took only a couple of hours, much, much less than the time I had spent implementing the containers. Still surprised, I gave getting rid of them in the other project, at work, a try; it took a bit longer, but, again, much less than the time the containers took to implement and debug. And, what's even more important, that reimplementation allowed me to remove my container collection from the project, so the project itself became several files smaller. Don't underestimate this: yes, you (and your colleagues working on the same project) do constantly waste extra time because of each and every extra file in the project. Just pay some attention to this matter and you'll (surprisingly?) realize I'm right; damn, every file in your working directory takes your time whenever you issue the ls command or look at the file list in any other way (hey, wait... you don't use the desktop metaphor for handling the working directory with your code, do you? okay, okay, that would take even more time and workload, but I still hope you don't). Not a very serious overhead each time, but how many times a day do you, one way or another, deal with the directory listing? Even opening one of your source files in the editor involves it, unless you remember exactly how every file is named and type the names on the keyboard (complete names, I mean, using no completion mechanism, as any completion inevitably involves directory listings); but it's hard for me to believe you really do it this way.

I don't exactly remember the year when the whole story took place, but it was either 1996 or 1997. I have never used generic containers of any kind since then; personal experience is the best teacher.

Sometimes people around tried to convince me I was wrong, as surely there's no point in doing, again and again, a job someone has already done for you. But I knew what to respond, and as my overall programming experience grew, I had more and more things to say to the proponents of that "don't reinvent the wheel" principle.

First of all, despite the strong and widespread faith to the contrary, containers in fact don't (and can't) save you any significant amount of coding time. Some might get pissed off by such an obviously wrong statement, but it is not wrong, it is right, and I'm now going to show this clearly.

Look, the most popular containers in STL are probably vector and list; now think carefully about what you could actually ever need from them.

With the vector template, I know only one feature which is really useful and often needed: the possibility to grow the size of the array whenever you need to put another element into it. Heh, with push_back (errr... you don't use the insert method with vectors very often, do you?), but not with the indexing operator (didn't you ever feel it is damn inconvenient to have to add that if/size/resize before almost every use of the square brackets? no? okay, never mind). The same thing is achieved manually by storing, along with the pointer to the array, its current allocated size and the count of slots actually used, typically in a structure if it is intended to be passed around, or just by defining three fields in the private part of the class where the array is needed. Plus, you'll need the ResizeMyDamnArray function or method. The structure takes four lines, the function adds another six to ten lines, depending on the strategy, and these figures include headers and the lines that consist of a single curly bracket. All the code is obvious, easy to type and easy to read. Actually, if you need more than two or three minutes for it, or have any difficulties besides the actual typing, you definitely should pay attention to this and get some extra training.

Ah, I forgot the initialization. Well, personally I don't mind adding a default constructor to a stand-alone structure, and even a destructor, but if you're a purist, you might want to define a stand-alone function like InitTheDamnArray. BTW, if you work inside a class and need a resizeable array there, you won't even need this, because both the constructor and the destructor are probably already there; you only need to add a couple of words to your initializer list, and a couple of lines... errr... or, more likely, just one line to the destructor.
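
Just to be concrete, here is a minimal sketch of the kind of thing I mean; the struct name, the int item type and the doubling growth strategy are mine, purely for illustration:

// a hand-rolled growable array: a pointer plus two counters
struct IntArray {
    int *items;
    int used;        // slots actually filled
    int allocated;   // slots available
    IntArray() : items(0), used(0), allocated(0) {}
    ~IntArray() { delete [] items; }
};

void ResizeMyDamnArray(IntArray &a, int newsize)
{
    int *p = new int[newsize];
    for (int i = 0; i < a.used; i++)
        p[i] = a.items[i];
    delete [] a.items;
    a.items = p;
    a.allocated = newsize;
}

// typical use at the point of insertion:
//     if (a.used >= a.allocated)
//         ResizeMyDamnArray(a, a.allocated ? a.allocated * 2 : 16);
//     a.items[a.used++] = x;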

Yes, instead of all those 10 to 14 lines of code, you can "simply" write something like vector<YourItemType> v(MY_VEC_START_SIZE); and be glad you saved those two or three minutes. But it's too early to celebrate. First, you now need to type that little word vector here, there and everywhere. Not a big deal, okay. Next, you need to be careful not to pass your vector to a function by value accidentally (errr... you don't pass them by value, do you? if you do, consider changing profession, and I'm gravely serious here). Chances are you'll have to use iterators instead of simple indices; well, yes, you can use indices, but the library's design pushes you towards iterators, gently but nevertheless firmly. Heh, iterators aren't a big deal either? Seriously? Now recall the keyword auto in its new meaning. If iterators weren't a big deal, nobody would ever need this nonsense, which effectively lets you prevent the compiler from detecting obvious but sometimes dangerous errors.

Now one more thing to recall: we create (define, construct, whatever) arrays much, much less often than we access them, one way or another. Hey, you've just saved three minutes, right? The longer your array lives in the codebase, the more of that "saved" time your codebase will take back with all those small not-a-big-deals. It's simply a matter of time for the score to become negative.

With the list template, the situation is even worse. In most cases, you only need a single pointer to handle a linked list, initialized with the null address; the situation when you need a second pointer to the end of the list is fairly common too, but not that common. Doubly linked lists are rare, believe it or not; BTW, do you remember that list represents the doubly-linked-list abstraction, with all its overhead? Yes, there's that forward_list, but it only appeared in C++11, and not many C++ programmers even know it is there. Damn, some of them don't even understand the difference, because they have never worked with real lists. Well, what about operations? In most cases you only need to invent something to replace push_front and pop_front... errr... wait a moment, personally I don't remember when I last did even that. The reality is merciless: for a particular list, chances are you either add new items in only one place of your code, or remove them in only one place, or even don't remove them at all until the time comes to dispose of the whole list. I doubt you can save even two minutes with that forward_list<YourItemType>, and it is almost certain your codebase will not let the positive score of saved time last long.
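
Again, a minimal sketch of what such a hand-rolled list amounts to (the int payload and the names are illustrative):

// a singly linked list: a single head pointer is the whole "container"
struct Node {
    int data;
    Node *next;
};

void PushFront(Node *&lst, int x)    // for the rare case you need it
{
    Node *tmp = new Node;
    tmp->data = x;
    tmp->next = lst;
    lst = tmp;
}

void DisposeList(Node *&lst)         // dispose of the whole list at once
{
    while (lst) {
        Node *tmp = lst;
        lst = lst->next;
        delete tmp;
    }
}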

Certainly I realize the last several paragraphs won't convince an average C++ programmer, but this is only the beginning. So far we are only typing the code in the text editor, and doing nothing else; and we only use vectors and lists, so the code we type is perhaps very simple. But typing the code is not what we do most of the time, right? And the code doesn't have to remain that simple.

Well, STL is not limited to those vectors and lists, right? Sure. Things like an unordered map, implemented with hash tables, would take more time to code and test than a simple vector or list. Furthermore, if we consider tree-based data structures such as the rope (not present in STL, nor in the standard C++ library, but supported by the GCC version of the standard C++ library, as well as by the STLport library), then, certainly, they are sophisticated enough that we can't say their manual implementation would take a negligible amount of time (as it definitely does for arrays and lists). What's important here is that you don't need them very often. Actually, the vast majority of programmers never, during their whole professional career, face a problem where using the rope structure makes sense. It is almost impossible to find a project in which there's a need for two ropes having elements of different types, and this renders any generic implementation of the rope structure blatantly meaningless; the same is true for almost all situations where a tree-like data structure is to be introduced. You rarely need a red-black tree, and it is almost impossible that you'll really need two of them (with different item types) within a single project. With hash tables, the situation is more or less similar: although they are needed more often than trees, not every program needs them, and it is a really uncommon thing to face, within a single project, a real need for two hash tables with the same implementation structure and hash function but different types of stored items.

But, well, what's the problem here? Nothing forces us to use data structures we don't need, even if they are readily available from the library, right? Wrong. The fact that you only need to write an appropriate #include directive and a variable definition like map<int, string> mymap; provokes their use whenever their set of available methods looks more or less convenient for the situation at hand. Unfortunately, people too often do this kind of thing without thinking much. In particular, the map container is often used for storing 10–15 key-value pairs; too often, much more often than we might want. Used this way, it works slower than linear search, by orders of magnitude; heh, don't you guys remember that map is implemented with a self-balancing tree, like a red-black tree or an AVL tree? If you have ever implemented these trees manually, you know how much complicated code it takes. It is like using a sports car to drive 50 meters: while you open the door, sit behind the wheel, adjust the mirrors, start the engine, unpark the car, drive, park the car, kill the engine, leave the car and lock its doors, I'll already be there, on foot. And just as a sports car is not a cheap toy, a balanced tree implementation is not cheap either: each and every variable of the map type will cost you extra time on each build, it will bloat your executable binary, and it will definitely slow down your program. It only makes sense to use self-balancing trees to store a lot of items, like 1000 of them or more. But when you don't need to implement it on your own, the temptation is too strong to resist.
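
To make the point about those 10–15 pairs concrete, here is an illustrative sketch (not a benchmark, and the names are mine) of the plain-array alternative:

// for a dozen key-value pairs, a plain array and linear search will do
struct Pair {
    int key;
    const char *value;
};

const char *Lookup(const Pair *tab, int n, int key)
{
    for (int i = 0; i < n; i++)
        if (tab[i].key == key)
            return tab[i].value;
    return 0;                        // not found
}

// versus pulling in the whole self-balancing tree machinery:
//     #include <map>
//     std::map<int, std::string> mymap;   // a red-black tree under the hood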

Let's get back to those simple vectors and lists. Even if you manage to save a few minutes of coding, chances are you'll lose that many hours, or even that many days, on debugging, and another several days to several weeks on maintenance. Doubtful, heh? Just recall those three-to-five-line type names you often see in the debugger. But, again, this is not the real story.

The real story starts from the fact that those "generic containers" are classes, which means they are not transparent. Yes, this is exactly what we expect from all classes, and containers are no exception here. They are opaque. Just like classes must be. We are supposed to use only the public methods, paying no attention to what's under the hood. It's a very useful property of classes. But, folks, when it comes to containers, this lack of transparency is simply horrible. We put items into containers and get them back, we iterate through containers, but we don't see what's inside.

In my own practice there was a situation when a list was accidentally getting changed while a loop traversed it; actually, the loop was searching for items that met certain conditions, and for each item found, a method was called to notify the object that a certain event had occurred; sometimes, but not very often, that method indirectly, through several levels of function calls, invoked a method of the collection which tried to remove the item from exactly the same list. It took several days to learn how to reproduce the situation in a more or less stable manner, which allowed me to locate the loop. However, it took another day or two to notice that the pointers within the list items were changing unexpectedly, right under my hands, and to understand which particular chain of calls led to this. It was only possible to notice the unexpected changes because I actually saw the pointers; yes, the list was definitely implemented manually. It scares me even to think how much time I'd have wasted on that desperate debugging if an opaque container had been used to store those items, and opaque iterators (as there's no other option for STL lists) had been used to traverse it.
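
Just to make the shape of that trap concrete, here is a minimal reconstruction of it in STL terms; the names and the trigger condition are made up, this is not the actual project code:

#include <list>

std::list<int> items;

void Remove(int x)                  // invalidates any iterator pointing at x
{
    items.remove(x);
}

void Notify(int x)                  // stands in for several levels of calls
{
    if (x % 7 == 0)                 // "sometimes, but not very often"
        Remove(x);                  // ...it ends up changing the same list
}

void Scan()
{
    for (std::list<int>::iterator it = items.begin();
         it != items.end(); ++it)
    {
        Notify(*it);                // may erase the very node *it refers to,
    }                               // making the following ++it undefined
}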

While we're at it, let me mention another source of wasted programmer time. It took a lot of years even for me to realize this, but, well, the time spent on code rebuilds matters, too. One of my recent projects, Thalassa CMS, contains circa 50K lines of code (a quantity of this kind is more or less meaningless, but I don't have anything better); it is written in C++, and on my main workstation (produced, BTW, in 2016) with make -j it rebuilds from scratch in less than two seconds. Less than. Two. Seconds. Or in five seconds without -j. For a simple comparison, Bitcoin Core contains something like 450K lines of code; well, that's about ten times more, but it takes no less than 15 minutes to build on the same machine with the same compiler. Well, without -j, as with it the thing doesn't build at all. And also not taking into account the damn ./configure. That's 180 times longer, heh, compared to the five seconds for Thalassa without -j. Look again: ten times more code builds 180 times longer.

Unfortunately, Bitcoin Core's building speed is common for C++. I don't think you can easily find a C++ program nowadays which is in active use, and builds in less than a minute, even on fast computers. Thanks to template containers.

Builds are done very often during the active coding stage. Damn, I have gained more time on the builds alone than I could potentially have saved with generic containers, even if you convinced me to believe they can save anything. And you won't, as I'd rather believe my own experience than the well-known mainstream industry superstition.

There's one more thing to mention on this topic. Generic containers effectively prevent a programmer from developing problem-oriented data structures. You can't have containers for all possible situations available from libraries, as nobody can predict all possible situations, and even if someone could, the documentation for such a library would alone take a lifetime to read and more or less understand.

So, there's a limited set of precreated generic data structures, and heavy use of them inevitably leads to stereotyped thinking: instead of considering how the data should be represented in memory, one starts to consider how it can be squeezed into the available types of containers. Heh, suppose we need a directed graph. I'd represent each of its vertices with an object which stores an array of pointers to other objects, each pointer representing the appropriate arc, and optionally I'd also introduce an array of pointers to all vertices, just in order not to let them get lost. But guess what these STL addicts do? Okay, set<pair<Vertex, Vertex> > is perhaps one of the most harmless ideas they come up with. Taking into account the fact that set is effectively a tree, guess how fast it will work if, e.g., we need to find a path between two given vertices.
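
Here's a rough sketch of the representation I mean; the names, the int payload and the growth strategy are illustrative, nothing more:

// each vertex owns a growable array of pointers to the targets of its arcs
struct Vertex {
    int payload;           // whatever the problem actually needs
    Vertex **adj;          // outgoing arcs
    int adj_used;
    int adj_allocated;
};

void AddArc(Vertex *from, Vertex *to)
{
    if (from->adj_used >= from->adj_allocated) {
        int n = from->adj_allocated ? from->adj_allocated * 2 : 4;
        Vertex **p = new Vertex*[n];
        for (int i = 0; i < from->adj_used; i++)
            p[i] = from->adj[i];
        delete [] from->adj;
        from->adj = p;
        from->adj_allocated = n;
    }
    from->adj[from->adj_used++] = to;
}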


The standard C++ library (including its STL part) is not the only library in the world which is supposed (and believed) to be useful, but effectively turns out to be harmful for any project relying on it. Those who believe in the power of libraries usually say "hey, don't reinvent the wheel yet another time", meaning that, once someone else has already implemented something we need, why should we waste our time implementing the same thing again. This idea is too obvious to be true. Honestly speaking, the very concept of libraries seems to be another silver bullet candidate that fired in the wrong direction.

This time let me start with my own positive experience with a library. Several of my projects use the well-known md5 hash for different purposes (and if you feel the need to tell me how obsolete and broken md5 is, then please don't: you can't tell me anything new on this subject). I quickly found an appropriate implementation of md5 on the Internet; according to the comments in the files, the code was written by Colin Plumb in 1993, and no copyright is claimed on it. If I recall everything right, I first used it in 2009, and since then I have carried that particular implementation from project to project. It definitely saved me a lot of time.

Now the interesting part comes. This implementation of md5 has the following important properties:

  • it consists of exactly two files: the header (md5.h) and the implementation (md5.c);
  • only a C compiler is needed to compile it, and it compiles in a fraction of a second;
  • it only needs memcpy and memset from the standard library, and there's nothing else it depends on;
  • the code implements a well-known algorithm, and the implementation is known to be correct, so it is not expected to demand any maintenance;
  • the algorithm itself, despite being well-known, is complicated enough so that it would be really hard for me to implement it correctly;
  • the API is obvious: there's a structure named MD5Context and three functions to work with it — MD5Init initializes the context, MD5Update processes another portion of the data to be hashed, and MD5Final actually builds the hash value; should I ever want to implement the thing on my own, I'd have exactly the same API, only maybe I'd choose different names;
  • the licensing status of the code is the most permissive: it is put to public domain by its author.

Even with such a simple thing, I had to make some corrections to the code right off. In particular, I added one more function which takes the hash of a given string, in one portion; unfortunately, this led to the code becoming dependent on one more 'standard' function, this time strlen. I also removed some C99-isms which someone else had introduced earlier; I'm sure they couldn't have been there from the beginning, as the original code was written long before that committee-made catastrophe. Finally, I corrected the coding style a bit. None of these fixes was very difficult to make, and since then I simply use the module. In some of my projects, I isolate it in a separate directory and build it as a library, which is then used by the main program; in some others, I didn't bother with that, simply placing the module and its header into the main directory along with my own modules.
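
For illustration, here is roughly what such a string-hashing helper might look like; the function name here is made up, and the signatures of MD5Init, MD5Update and MD5Final are assumed to be the classic ones from Plumb's code:

#include <string.h>
#include "md5.h"

// hypothetical convenience wrapper: hash a whole C string in one portion
void md5_of_string(const char *s, unsigned char digest[16])
{
    struct MD5Context ctx;
    MD5Init(&ctx);
    MD5Update(&ctx, (const unsigned char *)s, strlen(s));
    MD5Final(digest, &ctx);
}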

Now let's admit: the properties enumerated above are not very typical for an average software library. It is far more common to see libraries consisting of dozens (or hundreds, or thousands) of modules, with their own approaches to building, and often with such a list of external dependencies that rebuilding the thing from its sources becomes, errr... an adventure (if not a nightmare). The internals of a given library may turn out to be so complicated (let's better say messy) that it could be easier to implement all the functionality from scratch than to understand how all the mess actually works; certainly, such a mess will always demand maintenance, but an average library user has no choice other than to wait for a newer version to be published. The licensing policy of a library may force you to do really strange things, like keeping your project dynamically linked. Finally, the API of a more or less large library will likely turn out to be anything but obvious; in most cases one has to learn how to use yet another library, and for many libraries around there are whole books devoted to them. Well, learning curves exist for almost everything, but for more or less 'serious' libraries (you know what I mean) learning curves cannot be ignored. Experience with a particular library often becomes a crucial factor for people applying for a job, and this means that sometimes you have to hire someone to handle the damn library. Clearly, libraries are not free, even those of them that are free.

And one more thing. If a library is anything more complex than a handful of obvious functions in a single module, it will never fully fit your project. It may look not too bad. It may be much better than the other existing options. Sometimes people may tell you a particular library is excellent, and it won't be a lie in the sense that they're really impressed. But there will always be those small moments here and there when you realize that, had this or that been implemented a bit differently, it would have been more convenient.

Let's be honest, sometimes using a library is the only option. The thing implemented by a library may turn out to be something you simply can't do yourself. When you work alone, you might lack the required knowledge, and if you work within a team, there may be nobody on the team capable of solving a particular problem. As for me, I could perhaps implement md5 on my own, but when it comes to asymmetric cryptography and elliptic curves, I could either pick a suitable library or give up on the project at once.

Let's continue to be honest, though. The situation of having no other option is pretty common, but more often you actually do have the other option: there's a clear possibility to write all the code on your own, keeping in mind what you really need and how it all should be organized to fit your custom needs, rather than the imaginary needs of an abstract (non-existent) 'user', which is the only thing authors of libraries have in mind when they make decisions about their architecture and API. The thing you do on your own will definitely fit the particular project's needs, and in most cases it will be simpler than a third-party library, often by orders of magnitude (heh, have you ever adopted a library and then used all the features it had?). But... well, it takes some time to implement everything. Maybe a week. Maybe a year. And at this point someone tells you that familiar thing: hey, don't waste your time reinventing the wheel. It is even possible you tell this to yourself.

So you decide to use another library, being sure it will save you a lot of time. Heh, don't worry, your codebase will immediately start striking back, even before you actually get started with the library.

First of all, you have to learn how to deal with the library. In some marginal cases this — the learning alone! — will take more time and effort than you would need to write all the necessary code. Okay, not all cases are that marginal, but such cases definitely exist, and the fact that this doesn't stop all those "don't reinvent the wheel" folks clearly shows that common sense is generally dead within the IT industry.

By the way, nobody is perfect at learning, and there are flaws in documentation; unlike programming languages, 'standard' libraries and, e.g., system calls, the library you decided to rely on most likely has a far smaller community of users, so the knowledge base available for it will not be as large and comprehensive as you'd like. Chances are you'll misunderstand a lot, and once you manage to get everything working, it will perform at a level far below your expectations, just because the authors of the library didn't intend it to be used exactly the way you use it.

If, again, the library in question is anything but half a dozen functions in a single module, you'll inevitably notice some inconveniences in its API. You'll hardly ever try to adapt the library to your real needs, as in most cases this would clearly be harder than writing your own code instead of the library (which you're already committed not to do). But having no possibility to adapt the library to your project, you'll have to adapt the project to the library. People often don't even notice this, and it definitely is worth noticing. You can never tell how much time you will lose this way and how much better your program could have been if only you didn't have to adapt to the library's limits.

The next thing to take into consideration is that the library becomes another external dependency of your project. Yes, this definitely matters. Even for you it means you must deploy the library on each workstation where anyone (you, your colleagues, or anyone else) works with the code of your project. Besides, you have to instruct those who only build your code, such as your users and, in many cases, the maintainers of packages for different environments, on how to deal with the library.

Well, there's one thing you can do, and obviously it is the right thing to do, but nobody actually does it: you can include the library's source code along with your sources, so at least those innocent people don't have to acquire and deploy it on their own. However, this obvious solution has its own problems: the library may turn out to be a real nightmare to build from source.

And now one more interesting thing. The library may eventually become unsupported by its authors, or they may come up with a 'newer and better' version, incompatible with what you used in your project (heh, remember that case of GTK2 vs. GTK3?).

The time you saved by using a library instead of implementing some of its features on your own is a one-time gain, although it may be quite significant. However, the time you and your poor users lose will add up forever, or, well, as long as your program exists. In the long run, the score will inevitably turn negative one day. The only hope is that your program is not going to live that long, but I somehow dislike this sort of hope. What about you?

Curiously, Brooks himself had certain hopes for libraries. In the original No Silver Bullet paper, one of the sections is headed Buy versus Build; in this section, the author explains that a software solution available for purchase is almost always cheaper than the programmers' labor needed "to build afresh", and this, according to the author, is true not only for complete programs, but for modules as well.

From the perspective of almost four decades later, what we see might look quite surrealistic for the 1980s. There's almost no such thing as a "market" of software libraries now, simply because there are free libraries around for almost everything you can imagine. There are often several competing libraries for the same problem. However, this bullet is perhaps not very silver either; indeed, what we really have is dependency hell instead of a breakthrough. People around tend to deny the reality, but the fact is that typically it only makes sense to use another library in your project when you have absolutely no other choice. Otherwise, avoiding the library is what saves you resources, not using it.


Let's now get back to the hope Brooks had for Ada and other high-level languages. Since then, Ada has proven to be one of the greatest failures in the history of programming, but what about other higher-level programming languages? Isn't it much simpler to write programs in languages like Java, C#, Ruby, Python, you name it?

Well, yes, but only in the sense that one needs less knowledge to do so. Surprisingly, people still write programs in C++ and even in plain C, and, trust me, it is not because they can't learn Python. But the opposite is often true: people write in Python because they either can't or don't want to learn something decent.

Having 25 years of teaching experience (I started in February 2000), what I can say for sure is that it takes no less than two years to raise a plain C programmer, and no less than three to four years to grow a decent C++ specialist. BTW, you'd better not start with C, and if you try to start right with C++, your game is lost. As far as my experience shows, the only language safe enough to start with is Pascal, even though you won't ever need it in practice; it is only learned to grasp the elementary notions of computer programming and then switch to C. But all this is a completely different story.

On the other hand, it only takes two or three months of more or less intense training to make someone write code in Python. Definitely not everyone can do that, but for those who can, it is not really a problem to start coding in Python. No pointers, no integer overflows, no need to understand how strings are stored in memory — there are generally not too many things to worry about; just get to understand variables, assignments, loops and, well, functions — and you're done.

Many times I have had to retrain former Pythonists in their first university year, so that they could switch to other languages, and yes, it is hard. Much harder than teaching students who have never tried programming before. But this is a completely different story, too.

There are a lot of people around who know both C and Python (or C++ and Python), and, guess what, they don't typically write in Python. Well, sometimes they do. That's when what they need is a short program consisting of two or three pages of code, or even less. Whenever the program is supposed to be any longer than that, it will likely be written in Python only if the programmer knows nothing else. And on closer inspection, there's nothing here to be surprised by.

Short (well, very short) programs are easier to write in "super-high-level" languages like Python just because Python already has (as built-in features) a lot of things one has to create manually when working in C (or even C++, even with STL). But in a more or less large program you just prepare all those things you need, and once you're done with them, they are at your service for as long as you work on the particular project. So, pretty soon after you start the project, you find yourself in a situation where Python's rich features wouldn't make your life any easier. On the other hand, the higher that notorious 'level' of the language is, the more often you have to fight the language to achieve what you intend, not what the language tries to push you into. So here we see, once again, a situation we're already familiar with: the decision to use Python instead of C or C++ saves you some time and effort at the beginning, but this stage is soon over; meanwhile, everything in the world has its price, nothing comes for free, and the losses, unlike the profits, continue to accumulate forever (well, as long as your codebase exists), and at a certain moment the score drops below zero. With Python vs. C, this happens quickly.

So, if an employer hires several highly qualified programmers and asks them to write in Python, they'll likely respond that they'd better use something more reasonable for the same task. However, there's another possibility for the employer: if he thinks he's clever enough and hence is really committed to saving a lot by just letting the thing be done in Python rather than those "overcomplicated" languages like C or C++ (well, C isn't complicated at all, it is primitive, unlike Python, which really is complicated, though a lot of people can't see this) — anyway, if the employer is really committed to this kind of savings, there's one more option: to hire people who write in Python. By the way, they are cheaper and easier to hire. A lot of employers who do things this way are absolutely sure they know the most important secret of software development.

Certainly this doesn't work either. First of all, Python programmers may be slightly cheaper than the higher qualified folks who write in C or C++, but they are still computer programmers and they are sure they must be paid accordingly. The employer should perhaps be glad if he manages to pay them 30–40% less than what C wizards agree to work for. Now let's recall one of the key observations Fred Brooks made in his 'The Mythical Man-Month': the best programmers on your team may deliver ten times more value than some of the others. Python programmers are basically those who didn't make it through pointers, address arithmetic and other "heavy matters" of elementary coding, and they are generally those who studied for several months instead of several years. Do you expect miracles from them? If so, then — don't. Cheap coders are certainly capable of writing code. They can write a lot of it, measured in lines. The problem is that a lot of lines of code is not what the customer needs, as the customer actually needs his problem solved, and the number of lines has nothing to do with that. If a ten-thousand-line piece of code gets rewritten completely four or five times, this, in itself, won't deliver any value. Well, you get what you pay for. If you pay monkeys for writing code, you'll get a lot of monkey code.

If languages like Python didn't exist, a lot of people who now work as 'computer programmers' would likely never have become programmers and would perhaps never get paid for writing code. Just one more thing to understand: this would be good, not bad. One way or another, instruments that require less qualification basically let underqualified people get job positions they don't have sufficient knowledge for. Such instruments certainly don't work as a silver bullet. They rather work as a cast-iron ball and chain.


So, here are the two directions in which people tend to look for a kind of silver bullet, both actually fruitless. The first is to do one's best to reuse whatever solutions are available around, repeating like a mantra "don't reinvent the wheel", ignoring the reality, which is that the existing 'solution' doesn't really fit and that reusing it is far from being free of charge. The second is to bury the actual computer programming under a pile of high-level abstractions so that lower-skilled people can write code. Somehow.

There's a quintessence of both approaches, known as 'web programming' — perhaps the most disgusting and outrageous thing in the modern IT.

The very idea of executing, on someone else's device, code which the device's owner never installed is absolutely intolerable, to the extent that it should be considered a serious crime, just like any other trespass on someone else's property; but let's leave this aside for now.

'Web programming' in its modern form is based on, first, the assumption that browsers 'already have' a lot of things for web programmers to use, and, second, the use of scripting (that is, ultra-high-level) programming languages such as JavaScript, PHP and the like, with Python or Ruby being the very best cases, in the hope that they'll reduce the required amount of labor. The blind faith in both things is so strong that even custom business applications are often implemented as a web site deployed inside the corporate local network, not accessible from outside, which means there's in fact no need to access the system through a web-based interface — nothing besides that deeply held belief that a 'web application' is somehow easier to develop than a traditional client-server system with a client implemented without the so-called 'web technologies'.

In reality this belief turns out to be irrational and simply wrong. Every programmer who has experience in developing both web-based interfaces and traditional GUI programs knows perfectly well that it is much easier to create input forms with an appropriate widget library than to typeset them in HTML, and that it is much easier to handle the data and to react to the user's actions properly when there's no need to split your implementation into client-side and server-side parts, using weird methods to tie requests together into a single session. After all, the Web was designed for things totally different from such a, frankly speaking, misuse, and it always takes a lot of extra effort to achieve a goal with a technology which simply doesn't fit.

And one more thing. A properly implemented client program can ship as a single stand-alone executable binary. You don't need a browser to run it. And you definitely don't need a browser of exactly the version that the app has graciously agreed to support.

To be clear, this text is written in 2025. On my main workstation, I have Palemoon 28.9.1, Firefox 78.15 and even Chromium 90.0.4430.212, whatever that means (actually I hate having Chromium, but a lot of sites don't open in other browsers at all). Interfaces to all those 'artificial idiots' are unable to work with any of them. I tried chatgpt.com, chatgpt.org, grok.com, copilot.microsoft.com, gemini.google.com, perplexity.ai and some others, I don't even remember which. Some of them show a blank page, others show the input form pretending they're ready to work, but then either show some irrelevant error messages in response to any request or just silently refuse to work. The only significant thing here is the unanimity with which they all refuse to work. There are a lot of competing sites, but that is of no help: none of them works, even though the interfaces as such are trivial, basically consisting of a single input field.

I never let any software updates get into my working environment, as a matter of principle; I only switch to newer software when I've got a new (well, typically not very new) computer and install a fresh operating system on it. Hence, effectively I'm unable to try those LLMs on my own. Okay, I don't need them anyway, so this webmonkey nonsense has perhaps saved me a lot of time. However, the overall trend looks discouraging. Users need to switch to newer and newer versions of browsers just because a lot of sites on the Internet are made by complete idiots. Newer versions of browsers take more and more computational power — generally for nothing. Indeed, even an i386-based computer produced at the end of the 1980s is perfectly capable of rendering a form with a single input field and sending the text back and forth through the Internet (hell, guys, I myself have some experience of running a server on a 386 PC, back in 1995 or 1996); but it takes a computer many thousands of times more powerful just to start a 'modern' browser.

Browsers became the only thing demanding 'modern' gear. Look, I still actively use a Lenovo S10 netbook, produced in 2008, with a 32-bit Atom processor running at 1.6 GHz and only 2 GB of RAM. It acts as an SMTP speaker, sending and receiving emails through an OpenVPN-based tunnel, so I can have as many email accounts on it as I want (and I do have seven of them). Typically I access its system remotely from my main workstation and use mutt to handle my mail. I also run an XMPP client on it, GUI-based, thanks to the X window system's network transparency. Besides, it stores my CVS repository, as a lot of my projects are still in CVS, and I also maintain clones of all my git repos on it. I take this computer with me on trips, so I always have access to my communications (and archives of them), my repositories and actually to everything I need while away from home. Its form factor (heh, both size and weight) is well suited for travelling. It perfectly runs C/C++ compilers and the LaTeX typesetting system, allowing me to work on my software projects as well as my writings. When I get tired, I'm able to watch movies on it — yes, yes, mplayer runs smoothly, the machine has more than enough resources for that. Sometimes I need to do some work with maps and GPS tracks, and I use viking for that; yes, it runs on this good old computer, causing no problems at all.

Furthermore, I personally don't use 'office' software, because I hate the very idea of WYSIWYG, but, well, there are a lot of degenerates around used to sending everyone weird things like text documents in the .doc/.docx format, spreadsheets like .xls/.xlsx and the like, so sometimes I have to deal with this kind of crap. Would it surprise you that LibreOffice runs fine on my netbook, too?

Among all the things I do with computers, there are two I can't do with this one. The first of them is video editing. Whenever I manage to film another video for my videoblog, I need something more powerful to prepare the final cut, like my main workstation, and, okay, this makes sense, as rendering videos is really demanding of computational power, for obvious reasons. And the second thing is, guess what — the damn web browsing. Actually, the machine runs browsers, but they tend to hang on 'modern' web sites. Generally I can access my own sites, as they are made without client-side scripting, so the browser only has to render static HTML; and some other sites I occasionally come across.

Not too many people around really do video editing very often, so perhaps browsers are the only type of software for which an average computer user needs a computer newer than something 15+ years old. Had the industry not turned down this rotten path near the end of the 2000s, almost all the computers sold in shops around would have had no reason to be produced at all. Just think about this.

Browsers effectively form another 'abstraction layer' between the user-visible application and the rest of what the computer has, and this layer is horribly inefficient. A lot of people around play various browser games; typically those are the kind of games an average game console from the mid-1990s could run, yet, being executed within a browser, they take all the power of a 'modern' desktop computer. Together with the troubles I faced when I wanted to try LLMs (explained above), this makes a good picture of what's going on around.

All those 'web technologies' obviously cost the users too much, and all this would be intolerable even if they could save some labor and effort for programmers. But they don't even do that; scroll back for the explanation.


The last silver bullet candidate I mentioned at the beginning was the paradigm of object-oriented programming. This one, to my mind, differs from the things discussed earlier; at least I actively use the OOP paradigm, and I believe it really makes a lot of things a bit easier to achieve. There's only one small problem: the programmer has to be really capable of thinking in this paradigm to achieve anything positive. And yes, that's a problem. On closer inspection, this problem is not so small.

By the way, OOP is far from being anything like a silver bullet even for those who really use it right. By the notion of 'silver bullet' Brooks originally meant a technology (or maybe something else) that would raise programmers' productivity tenfold. In my experience, OOP saves several percent of the effort; it is hard to measure its real impact, but, well, 10%, to my mind, is a fairly optimistic estimate. OOP does not reduce the volume of code to be written; it only makes it easier to decompose the task at hand into well-separated subsystems, thus reducing its overall complexity. This is actually a lot in itself, as complexity is perhaps the most important and evil enemy of programmers. But OOP only works when applied right, and it may produce serious harm instead of profit when turned into yet another cargo cult.

To show how easily it may be cargo-culted, let me start by noting that OOP is often (well, very often) confused with (and mixed up with) a completely different paradigm, named abstract data types. Support for both paradigms in programming languages is based on exactly the same thing — a compound data structure whose internals can't be accessed directly, but only through a fixed set of procedures (functions, methods, whatever). E.g., in C++ both paradigms are supported with the keywords class, private and public (with the keywords protected and friend surprisingly having nothing to do with either of the two paradigms; did you know this?). Since we've mentioned C++, one of the most popular object-oriented instruments, let's continue with the statement that constructors (including the special-role constructors, that is, default constructors, copy constructors and cast constructors), destructors and those member functions with the keyword operator in their names, importantly including the assignment operator (which lets you control how assignment is actually performed on objects of the type), have absolutely nothing to do with the very notion of object-oriented programming; it is all about ADTs, not OOP.
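
To make the distinction concrete, here is an illustrative sketch, not taken from any particular codebase: the first class is pure ADT machinery, the second pair is what actually involves OOP.

// pure ADT: encapsulation, a constructor, an overloaded operator;
// nothing object-oriented about it yet
class Money {
    long cents;
public:
    Money(long c = 0) : cents(c) {}
    Money operator+(const Money &m) const { return Money(cents + m.cents); }
    long Cents() const { return cents; }
};

// OOP proper: a base-class interface, inheritance, and a virtual method
// whose implementation is chosen at run time
class Shape {
public:
    virtual ~Shape() {}
    virtual double Area() const = 0;
};

class Circle : public Shape {
    double r;
public:
    Circle(double rad) : r(rad) {}
    double Area() const { return 3.14159265358979 * r * r; }
};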

What is OOP, then? Well, whenever inheritance is used, we can perhaps be sure it's OOP, but this doesn't answer the question, and, BTW, there are definitely tons of truly object-oriented code that use no inheritance. I have to admit I don't know the right answer. At least I don't have a short answer, and I doubt one can ever exist. I've seen a lot of attempts to give one, and every time it was just another failure.

Well, definitions are mostly useless, pointless and often misleading, at least outside of mathematics. But what if we ask another question: how do I do object-oriented programming right?

This time it is more or less clear there cannot be any short answer, but the reality is much worse: long answers to this question don't exist either. Well, I'm wrong, they do exist. A lot of them. The only problem is that there are no useful answers to this question. Some of the attempts to address it are useless but at least harmless, like the well-known Object-Oriented Analysis and Design with Applications by Grady Booch, known for a lot of funny pictures; some others are devastatingly harmful, like Design Patterns by that 'Gang of Four' (well... I once saw a team leader crying 'why the hell am I seeing that XFactoryFactory again?!'; and as for the 'Singleton' pattern, it may well be used as a criterion for deciding who on your team should get fired first, as effectively this 'pattern' amounts to creating a big bad global variable). Of all I have ever seen, no useful books on object-oriented programming exist, and perhaps there will never be any.

Furthermore, having 20+ years of teaching experience, I'm almost sure it is impossible to teach OOP in a class. I have tried many times. The problem, as I see it, is that it is impossible to understand what OOP is all about for anyone who has no real programming experience; students typically only have experience of writing small and practically useless programs, and even most teachers don't have the experience that would allow them to fully understand the matter. Furthermore, it is virtually impossible to come up with an example task which is both simple enough to be explained in class and, on the other hand, complicated enough to show how the paradigm of OOP actually works. Teachers often invent illustrative examples of a special kind, best described by the phrase "how to solve a simple problem in a weirdly overcomplicated fashion". Well, here's something I saw recently:

$appendIterator = new AppendIterator();
$appendIterator->append(new LimitIterator(new InfiniteIterator(new ArrayIterator([" ", "*"])), null, 8));
$appendIterator->append(new ArrayIterator([PHP_EOL]));
$appendIterator->append(new LimitIterator(new InfiniteIterator(new ArrayIterator(["*", " "])), null, 8));
$appendIterator->append(new ArrayIterator([PHP_EOL]));

foreach(new LimitIterator(new InfiniteIterator($appendIterator), null, 8 * 9) as $bot) {
    echo $bot;
};

Okay, let's leave aside the fact that this is PHP, which should never be used for any task at all, as well as the fact that any attempt to support OOP in a scripting language looks too idiotic to tolerate. Okay, never mind. But guess what this horrible code does? It prints something like a chessboard. The following code (written in the same miserable language) does exactly the same:

for ($x = 0; $x < 4; $x++) {
  echo " * * * *\n";
  echo "* * * * \n";
}

Well, as for me, I'd write it even simpler:

echo " * * * *\n* * * * \n * * * *\n* * * * \n * * * *\n* * * * \n * * * *\n* * * * \n";

Ah, sorry, it doesn't fit on the line. The horrible iterator-based code above doesn't fit on the line either, but let's be good:

$cb = " * * * *\n* * * * \n";
echo "$cb$cb$cb$cb";

Let's also admit that the following version has the right to exist, too, as perhaps it is the simplest to read and understand:

echo " * * * *\n";
echo "* * * * \n";
echo " * * * *\n";
echo "* * * * \n";
echo " * * * *\n";
echo "* * * * \n";
echo " * * * *\n";
echo "* * * * \n";

What's wrong with the iterator-based code is basically that it is complicated, while there's much simpler and more obvious code that does the same thing. Okay, perhaps it is possible to find a situation in which all those iterators may be appropriate; doubtful, but possible. What is almost impossible is to pick an example task simple enough to be explained in a class or within a lecture for which any solution involving iterators won't look like the ramblings of a madman.

What's most terrible here is that students, being presented with examples of this kind, become convinced this is the way programming should be done. That's not just wrong, that's exactly the opposite of the obvious truth: in programming, we've got more than enough complexity, and what we must do is fight complexity, not grow it. In particular, for any task the simplest possible solution must always be selected.

So, here is what makes teaching OOP almost impossible: for any example task simple enough to be explained to the students, the simplest possible solution will likely include nothing like OOP. Checkmate.

I can't tell for sure where new object-oriented programmers come from. I only know some obvious things regarding this matter.

First, I (personally) am capable of OOP. I actively use it and I gain from it. Even though nobody ever taught me. It was long ago, but from what I remember, I just took a look at the object-oriented features of Turbo Pascal 5.5 and somehow understood everything at once. Don't ask me how I did it, I don't know.

Second, a lot of professional programmers don't understand OOP, and nothing can help in this case. They may be very good programmers. They may be capable of things I can't do. They may generally be better programmers than me. But if a particular person, being a professional programmer, doesn't understand what OOP is and what it is for, there's no way to change this.

The third thing is my teaching experience. I have seen students who got object-oriented programming easily, as if they had long been looking for something like that and I merely handed them the thing they wanted. Yes, this happens sometimes. Rarely. And it very much looks like, had they not had me as a teacher, they would have got to OOP somewhere else, inevitably; I only made their way to the goal a bit shorter.

On the other hand, the vast majority of my students never really understood why the hell they might need all that mess with classes, methods and especially that damn inheritance. They did the training tasks I gave them and passed their finals, only to never return to OOP again.

I saw a notable number of students who liked the idea of protecting implementation details with the words class and private, but lost their way when it came to inheritance and virtual methods. They didn't really need OOP; what they liked was the idea of abstract data types (ADTs). A very good thing, too. But not OOP.
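
To make the distinction concrete, here is a minimal C++ sketch (the classes are invented purely for illustration): the first fragment is a perfectly respectable ADT, with its representation hidden behind private but with no inheritance and no virtual methods anywhere; only the second fragment, where the method that actually runs is chosen at run time, is OOP in any meaningful sense.

// An ADT: the representation is hidden, but there's no
// inheritance and no late binding involved.
class Stack {
    enum { capacity = 256 };
    int data[capacity];
    int count;
public:
    Stack() : count(0) {}
    void push(int x) { if (count < capacity) data[count++] = x; }
    int pop() { return data[--count]; }
    bool empty() const { return count == 0; }
};

// OOP proper: the caller deals with an abstract Shape, and the
// method that actually runs is chosen at run time (late binding).
class Shape {
public:
    virtual double area() const = 0;
    virtual ~Shape() {}
};

class Circle : public Shape {
    double r;
public:
    explicit Circle(double r) : r(r) {}
    double area() const { return 3.1415926535 * r * r; }
};

class Square : public Shape {
    double side;
public:
    explicit Square(double s) : side(s) {}
    double area() const { return side * side; }
};

// Works for any shapes, including ones written years later.
double total_area(Shape **shapes, int n)
{
    double sum = 0;
    for (int i = 0; i < n; i++)
        sum += shapes[i]->area();
    return sum;
}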

The conclusion is sad but true: there may be one programmer out of several dozen who can really do OOP, and the rest have no chance to become capable of it, ever.

Now let's see what happens with OOP as a silver bullet candidate. Some people who make decisions in software companies believe that OOP may make programmers more efficient, so they demand that programmers use OOP. Furthermore, they demand that recruiters list OOP as a key requirement for open positions. People around, seeing this, decide they need to become capable of OOP, no matter how much they may dislike it. Professors of software engineering and computer science in various colleges and universities see the demand too, so they come up with courses intended to teach OOP. Generally, it becomes a more or less universal belief that everyone who wants to be a programmer must know OOP. Nobody gives a damn about the fact that OOP cannot be taught; actually, no one agrees to recognize it. Especially the professors, who believe they must teach students OOP because it is widely demanded, can't admit that they simply don't understand what OOP is and therefore couldn't teach it even if such teaching were possible at all. They may, in good faith, believe that they do understand OOP. Indeed, nothing and nobody is around to show them they're wrong.

Transformation into a cargo cult in such a situation is completely inevitable. A lot of people believe that OOP is when you write x.f() instead of f(x). Even more people are absolutely sure it's containers and iterators that make OOP. Some even think OOP is when you throw exceptions. Argh, just scroll back a bit and take another look at those weird iterators from PHP. The statement that this crap has nothing to do with OOP might surprise a lot of people.
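
Just to spell the first misconception out, here is a trivial C++ sketch (the names are made up for illustration): the two fragments below are the same procedural code, differing only in call syntax; neither involves inheritance, virtual methods or any late binding, so neither has anything to do with OOP.

#include <cstdio>

// Plain procedural style: f(x).
struct point { double x, y; };
void print_point(const point &p) { std::printf("(%g, %g)\n", p.x, p.y); }

// The same thing dressed up as x.f(); the dot changes nothing,
// there's still no late binding anywhere.
struct Point {
    double x, y;
    void print() const { std::printf("(%g, %g)\n", x, y); }
};

int main()
{
    point a = { 1, 2 };
    print_point(a);
    Point b = { 3, 4 };
    b.print();
    return 0;
}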

There's one thing about cargo cults which always remains true, no matter what happens around: a cargo cult never works, it only wastes resources. Of all the known 'design patterns', object factory is perhaps the one most rarely needed, and at the same time it seems to be the most popular among design-pattern fans. The key is likely that, with a certain effort, this pattern can be pushed into almost any program; indeed, whenever you need to create some objects, it is not technically a problem to define a factory class that will do so. There may be absolutely no need for it, but nothing prevents another cargo cultist from practicing the cult. Some of the cultists don't even realize that, according to the Gang of Four, a factory class must have children, otherwise it is senseless; many times I have seen a class with the word 'factory' in its name and no virtual methods at all. It is even more common to see an inheritance hierarchy of some classes, and another hierarchy that carefully duplicates the first, with the word 'factory' added to the class names. Instead of just using the new operator, authors of this type of code always call the appropriate factory's method, explicitly. They can't answer the question of why they don't just use new.
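
To illustrate the difference, here is a minimal C++ sketch of both situations (the class names are invented): the first 'factory' has no children and no virtual methods, so it is nothing but a ceremonial wrapper around new; the second is roughly what the Gang of Four actually describe, where the whole point is that the calling code doesn't know which concrete class it ends up instantiating.

class Widget {};

// The cargo-cult version: no children, no virtual methods;
// it adds nothing except one more call around new.
class WidgetFactory {
public:
    Widget *create() const { return new Widget(); }
};

// What the pattern is actually about: the factory is abstract,
// its children decide which concrete class gets created, and the
// calling code neither knows nor cares which one it is.
class Button {
public:
    virtual ~Button() {}
};
class Win32Button : public Button {};
class MotifButton : public Button {};

class GuiFactory {
public:
    virtual Button *create_button() const = 0;
    virtual ~GuiFactory() {}
};

class Win32Factory : public GuiFactory {
public:
    Button *create_button() const { return new Win32Button(); }
};

class MotifFactory : public GuiFactory {
public:
    Button *create_button() const { return new MotifButton(); }
};

// This function works with whatever factory it is handed.
Button *make_default_button(const GuiFactory &f)
{
    return f.create_button();
}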

Generally, with those design patterns, almost any code becomes an unreadable mess, with a lot more code dedicated to the cargo cult than to actually solving the problem. People who really understand how to do OOP properly don't need any patterns; and for the rest, patterns can't help. There are no prosthetics for the brain. By the way, UML is just another attempt to replace some missing parts of the brain with a prosthesis. It never works. It just can't.

So, instead of a silver bullet we, again, have an iron ball and chain, this time with the words Design Patterns engraved on it. OOP is great, but it only works for people who understand it (don't ask me how). Most people don't. Even worse, most good programmers don't. If you gather a team of, say, ten programmers, you're lucky if one of them really understands OOP. So you'd better not force the team into using OOP. The one who understands will benefit from objects and classes, but all the others' attempts to program with objects will be harmful to the project, and harm is generally much easier to do than good. And especially, if someone on the team proposes to play with UML diagrams, don't allow it.


Perhaps it's time to draw some conclusions. All the silver bullet candidates I have tried to discuss in this writing share one common property: they are all about reducing time, effort, labor intensity, whatever, at the cost of adding complexity in one way or another. Meanwhile, in computer programming, as well as in IT in general, complexity is the enemy. If you agree to add more of it, no matter why, no matter what you're trying to gain, the game is lost.

  • When you use a library of any significant size, the whole program becomes obviously more complicated than if you implemented, on your own, exactly the features you need; the time you spend on that implementation doesn't matter in the long run, and everything you 'saved' by using a library instead will be taken back by the additional complexity the library introduces.
  • When you use higher-level languages, you effectively program a fantasy, non-existent machine instead of the real computer; the complexity is added to your project by the layer that emulates this fantasy machine, and yes, this layer is heavy enough to take back everything you gain, and much more.
  • If you add a whole virtual universe in which your programs are to run, like web browsers with their client-side code execution, the game is lost long before the first program of the kind even starts, just because of the cost of implementing that universe. Actually, modern web browsers are a huge waste of computer users' money, even though the browsers themselves mostly come free of charge.
  • If you use something that only a select few understand, others will turn it into a cargo cult, doing much more harm than the limited good gained by the few.

If a silver bullet were possible, we'd surely find out we can't afford the silver.

However, there's nothing to regret here. We don't have perpetual motion machines, we don't have antigravity, teleportation or time machines. We don't have wings, so we can't fly like birds. These are not problems, because a problem is something that can be solved.

The global IT industry is in a deplorable state now, and I have the audacity to say I know the reason why. There is a single source of all the troubles, and it is complexity. So here's the real problem: how to stop adding more and more complexity in a hopeless attempt to gain something, and how to get rid of all the complexity that has already been added in this way.