Building software is like building a house. Although it may appear otherwise, there are certain decisions that cannot be undone. What’s worse, you will be living with the consequences of those decisions for a long time. They will have a tremendous impact on how your business scales and how development proceeds. Correcting premature (or simply bad) decisions is usually expensive and time-consuming.
For over ten years, I have been working with business representatives, product owners and great people with impressive ideas. What do you think is the most common statement I hear from clients, irrespective of scale, geographical or temporal context?
‘We don’t care as long as it works’.
At first glance, it may appear to be a perfectly reasonable approach. A business has an idea and pays engineers to make it come true. Owners are concerned with the results, and technicians are concerned with the technological aspects, because that is exactly what they are being paid for. There are two assumptions hidden here that we have to rethink:
- The owner should not be troubled unless the software does not work.
- The owner should not be technologically involved.
The first point refers to the product, the second to the process.
Let’s address the first issue using some analogies. Every car does exactly the same job – it transports us to a destination point. It works. This function, and some fundamental properties like having four wheels, is something that all cars have in common. So, why are there such big differences in everything else, especially price? Everyone knows the answer to that question. This is also the case when it comes to software. Your software may work. It may work now, but possibly not after future amendments. It may work, but slowly. It may work sometimes, but then fail. It may work worse than current or future competitors on the market. You may think it works, but what makes you so sure?
When you invest a lot of time, work and even more money into your ideas, it is essential that you know. Imagine you are buying a house. Even though everything looks fine, you check the things that are not visible at first sight: the structural condition, foundations, installations, legal status and potential issues. No one says, “Well, it looks good from here, I’ll buy it”. The checks become even more detailed and rigorous when it comes to commercial property. That’s why it is in your interest, as a future product owner, to ask questions about technological issues.
This brings us to the second issue: the product owner’s involvement in the technological process. If you are a product owner (or maybe just the concept owner for now), you should be aware of one very important fact:
You are the one who knows more than anyone else about your product.
This fact has two crucial consequences for the next stages of development. The first is that you are the primary source of truth for your development team when it comes to business logic and direction. You should always be available to address any questions or doubts about how things should work. The second is that you don’t know everything, so you need to build a development team that complements your knowledge. If you are not a very technical person, you should definitely think about hiring a CTO (or at the very least a good software architect). You still need to be part of the decision-making process (remember, this is your product), but you can’t work against your developers, or be passive or ambivalent.
Moving forward, let’s take a look at the things you should consider, understand, discuss with your architect and, finally, approve.
Performance
Performance is the first issue that has to be addressed when developing software. It is an umbrella term for quite a lot of factors. Intuitively, it is a set of measurements of how the software operates, such as the following (a small measurement sketch comes after the list):
- How many requests can be handled per second.
- How long a request or query takes.
- How long it takes to start up.
- How much of the computer’s resources (CPU, GPU, RAM) are consumed.
- How much storage space is consumed.
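To make these factors a bit less abstract, here is a minimal sketch of how a few of them can be observed from inside a Node.js process; `slowQuery` is a hypothetical stand-in for a real request handler or database query, not something from this article.

```typescript
// Minimal sketch: observing latency, memory and startup time from inside a Node.js process.
import { performance } from 'node:perf_hooks';

// Hypothetical stand-in for a real database query or request handler.
async function slowQuery(): Promise<number[]> {
  return new Promise((resolve) => setTimeout(() => resolve([1, 2, 3]), 50));
}

async function main() {
  // How long a single query takes.
  const t0 = performance.now();
  await slowQuery();
  console.log(`query latency: ${(performance.now() - t0).toFixed(1)} ms`);

  // How much memory the process is currently using.
  const { rss, heapUsed } = process.memoryUsage();
  console.log(`RSS: ${(rss / 1e6).toFixed(1)} MB, heap used: ${(heapUsed / 1e6).toFixed(1)} MB`);

  // Rough proxy for startup cost: seconds since the process started.
  console.log(`process uptime (includes startup): ${process.uptime().toFixed(2)} s`);
}

main();
```

Numbers like these only become meaningful when you compare them across languages, frameworks and versions, which is exactly what the next two points are about.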
Using different languages gives vastly different performance results.
C++ is the fastest non-assembly language, full stop. Ruby is a slow and time-consuming language, full stop. Java needs more hardware resources than Go, full stop. These are not opinions but measurable facts.
Using different frameworks gives significantly different performance results.
Performance differences grounded in how different languages abstract and execute are not the only consideration. The same language may still produce a vast range of results depending on the frameworks, libraries or patterns used. For example, among Node.js frameworks, Fastify can handle almost three times more requests per second than Hapi, and Moleculer can handle five times more remote action calls than Seneca.
I would like to make a small digression regarding benchmarks. Please keep in mind that benchmark results may differ across versions, environments or even micro-optimisations. Thus, a benchmark of particular versions in a particular context is always a useful piece of information, but never definitive.
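If you want a feel for how sensitive such numbers are, a tiny benchmark is easy to reproduce. Below is a minimal sketch, assuming Fastify and the autocannon load-testing tool are available; the route is trivial on purpose, and whatever numbers you get apply only to your machine, your versions and your handler.

```typescript
// Minimal Fastify server to benchmark locally (illustrative only).
import Fastify from 'fastify';

const app = Fastify();

app.get('/', async () => ({ hello: 'world' }));

app.listen({ port: 3000 }).then(() => {
  console.log('listening on http://localhost:3000');
  // In another terminal, run a short load test, for example:
  //   npx autocannon -c 100 -d 10 http://localhost:3000
  // Then swap the framework, the Node version or the handler and compare —
  // the absolute numbers will move, which is exactly the point of this digression.
});
```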
Now we reach the point where things get more complex. Performance is not one singular factor; which factors matter depends on how things are done in the real world. Consider a scenario where you need to perform pure CPU computations. It is commonly known that Java is faster than Node at this, so you should choose Java over Node. However, let’s look further into the context. The application is built around a microservices architecture and is cloud-native. The CPU computations are rarely performed and are not that heavy, so we encapsulate this logic into a separate service that starts on demand. A Node microservice takes around 1 second to start up, while a Java one needs around 10 seconds. Java will still give us faster CPU computation, but that factor is now irrelevant. The startup-time factor becomes more important, because it may decide whether the operation is performed at all or whether the business action is dropped due to the delay. That could end up costing you a customer.
Sometimes performance has to yield to feature completeness. Compared to almost any other event broker, NATS has much better performance. However, Kafka or RabbitMQ are much more complex solutions, and you may be right to choose one of them because of features missing from NATS (like replication or persistent queues).
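To illustrate both sides of that trade-off, here is a minimal sketch of core NATS publish/subscribe using the nats.js client, assuming a NATS server is running locally on the default port; the subject name is made up. It shows how little ceremony NATS needs, and also why the missing durability matters: with core NATS, a message published while no subscriber is listening is simply gone.

```typescript
// Minimal core NATS publish/subscribe sketch (assumes a local NATS server on port 4222).
import { connect, StringCodec } from 'nats';

async function main() {
  const nc = await connect({ servers: 'localhost:4222' });
  const sc = StringCodec();

  // Subscribe to a made-up subject and log incoming messages.
  const sub = nc.subscribe('orders.created');
  (async () => {
    for await (const msg of sub) {
      console.log('received:', sc.decode(msg.data));
    }
  })();

  // Fire-and-forget publish: no persistence, no replication in core NATS.
  nc.publish('orders.created', sc.encode(JSON.stringify({ id: 1 })));

  await nc.flush();
  await nc.drain();
}

main();
```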
In summary, performance is very important but should always be considered in relative terms and in a particular context.
Development
Despite all the deliberations in the previous section, we may still conclude that C++ is the fastest language, full stop, as previously stated. No matter the context, everything written in C++ will be faster than its counterpart written in another language. This raises a question:
If C++ is the fastest language then why do people choose other options?
All of the possible answers are related to development. The same qualities that make C++ the most performant also make it inefficient to write. Because C++ offers hardly any abstraction, development is slower, more verbose, more repetitive and simply more difficult.
How fast do I need my ideas to get developed?
The answer is always “fast”, irrespective of scale, geographical or temporal context. This is why, in a lot of situations, it is perfectly reasonable to sacrifice 40% of the performance to make development five times faster. In the end, no one will notice the difference between a 1 ms and a 2 ms response time.
It may be cheaper to pay for more servers than for developers.
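As a back-of-envelope illustration, with deliberately made-up numbers that you should replace with your own, the comparison tends to look like this:

```typescript
// Hypothetical numbers only – plug in your own hosting and salary costs.
const extraServers = 3;              // servers added to offset a slower but faster-to-develop stack
const serverCostPerMonth = 150;      // USD per server per month (assumption)
const developerCostPerMonth = 9000;  // fully loaded cost of one developer per month (assumption)

const extraHardwareCost = extraServers * serverCostPerMonth;  // 450 USD/month
const extraDeveloperCost = 0.5 * developerCostPerMonth;       // half a developer-month of slower delivery

console.log({ extraHardwareCost, extraDeveloperCost });
// With these assumptions, the hardware overhead is an order of magnitude cheaper.
```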
There is a very common and very difficult problem that a lot of projects run into as development progresses:
‘When we started, developing new features was very fast. Now, it takes ages to develop a new feature or even to make amendments.’
Of course, it is often caused by an unqualified team of developers. However, as you already know, things are not as easy as they appear. If you choose a poorly designed framework that lacks abstractions, separation of concerns, functional composition and code organisation, even the best developer can end up in a big mess. This is called an unbalanced code complexity curve. For example, frameworks like Express.js have a linear (or even exponential) code complexity curve – it is very easy and quick to start with, but then development gets much more complicated and less efficient. What we want is a logarithmic curve offering predictable, stable growth: slower at the start, when you need to put all of the foundations in place, but then it gets easier and easier. A great example of this is Nest.js, which is actually built on top of Express.js but solves a lot of the problems people used to have with Express.
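To make the contrast concrete, here is a minimal sketch of the structure Nest.js enforces out of the box; the controller, service and route names are hypothetical, and it assumes the usual Nest dependencies (@nestjs/core, @nestjs/common, @nestjs/platform-express) plus decorator support enabled in tsconfig. The point is not the feature itself but that routing, dependency injection and separation of concerns come with the framework, instead of being conventions each team has to reinvent on top of Express.

```typescript
// Minimal Nest.js application in a single file (names are illustrative).
import { Controller, Get, Injectable, Module } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';

// Business logic lives in an injectable service, separated from HTTP concerns.
@Injectable()
class UsersService {
  findAll() {
    return [{ id: 1, name: 'Ada' }];
  }
}

// The controller only handles routing and delegates to the service.
@Controller('users')
class UsersController {
  constructor(private readonly users: UsersService) {}

  @Get()
  findAll() {
    return this.users.findAll();
  }
}

// The module wires controllers and providers together explicitly.
@Module({ controllers: [UsersController], providers: [UsersService] })
class AppModule {}

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000); // GET http://localhost:3000/users
}
bootstrap();
```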
Code complexity also has a tremendous impact on testing. Badly designed code causes a lot of testing problems. Firstly, it is hard to come up with proper test cases if the code is vague and hard to understand. Secondly, if the separation of concerns is done poorly, it is hard to mock dependencies and isolate side effects. In the worst-case scenario, the code may simply be untestable.
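To show what good separation of concerns buys you at test time, here is a minimal sketch; `OrderService` and `PaymentGateway` are hypothetical names, not from any specific project. Because the side effect (calling an external payment provider) sits behind an interface, the business rule can be tested with a trivial fake instead of a real network call.

```typescript
// Minimal sketch of dependency injection making side effects easy to isolate in tests.

interface PaymentGateway {
  charge(amountCents: number): Promise<boolean>;
}

// The business rule depends on an abstraction, not on a concrete payment provider.
class OrderService {
  constructor(private readonly payments: PaymentGateway) {}

  async checkout(amountCents: number): Promise<'paid' | 'rejected'> {
    if (amountCents <= 0) return 'rejected';
    const ok = await this.payments.charge(amountCents);
    return ok ? 'paid' : 'rejected';
  }
}

// In a test, the external dependency is replaced with a trivial fake:
const fakeGateway: PaymentGateway = { charge: async () => true };
const service = new OrderService(fakeGateway);

service.checkout(1000).then((result) => {
  console.assert(result === 'paid', 'expected a successful checkout');
});
```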
Responsibility
Besides the purely technological issues related to development, there is another aspect, which I like to call “being responsible”.
Technological decisions have real-world consequences.
Let’s start with the obvious – you have to find developers for your team. The difficulty of this may vary based on your decisions. It is easier to hire a Node developer than a Rust developer. It is easier to replace a Java developer than a C++ developer. It is easier to find a Rails developer than a Hemera developer.
The next thing to consider is the stability of the technology you choose. You have to be sure that the version you’re using is production-ready, since the last thing you want is unexpected bugs in a framework that no one knows how to fix.
You also don’t want to commit to tools that will be completely changed in the next version, instantly making your whole codebase outdated.
You should always prefer solutions with an active and growing community. “Small and growing” is usually better for a new project than “large and shrinking”. You can always check what the community looks like on GitHub, in package repositories, on Stack Overflow and so on.
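One quick, if crude, way to gauge this for Node.js packages is the public npm downloads endpoint; the sketch below assumes Node 18+ (built-in fetch), and the package names are just examples.

```typescript
// Minimal sketch: compare monthly download counts for a few npm packages.
async function monthlyDownloads(pkg: string): Promise<number> {
  const res = await fetch(`https://api.npmjs.org/downloads/point/last-month/${pkg}`);
  if (!res.ok) throw new Error(`npm API returned ${res.status} for ${pkg}`);
  const body = (await res.json()) as { downloads: number };
  return body.downloads;
}

async function main() {
  for (const pkg of ['fastify', 'express']) {
    console.log(pkg, await monthlyDownloads(pkg));
  }
}

main();
```

Download counts are only one signal, of course; the trend over time, open issues and release cadence tell you just as much.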
These issues add up to what we may call the maturity of the technology. Sometimes, maturity may be even more important than performance. Let’s compare two graph-oriented databases, Neo4j and Dgraph. The first is a mature solution with broad support: plenty of published books, conferences and organised events. The latter is a more modern approach which is faster, since it is written in Go instead of Java. You should always investigate whether you need that extra “oomph” in performance at the cost of less support. Maybe you should choose the more conservative option…
Summary
As I hope you can see, there are no universal solutions when it comes to technology. However, there are right and wrong choices. That is what makes the Software Architect role both indispensable and difficult. Our job is to carry out honest research, weigh a great many factors and help you make the best decision.