Every building needs a foundation, and every piece of software needs an architecture: a pattern that defines what the system is and how its parts fit together.
Mark Richards is a Boston-based software architect who has been thinking about how software should work for more than 30 years. His free book, Software Architecture Patterns, focuses on five architectures that are commonly used in software systems.
Here are Richards’ five main architectures, broken down into a quick list of their strengths, weaknesses, and best use cases. Often the best way to hold your code together is to choose a single architecture; sometimes it’s better to write each piece of code in whatever style suits it best.
People call computer science a science, but in practice it is just as much an art.
Layered (n-tier) architecture
This model divides big problems into smaller, more manageable parts that can be handled by different teams. The claim is a bit of a self-fulfilling prophecy: several of the most popular and best-known software frameworks were built with this structure in mind, so many of the applications built with them naturally come out layered.
The code is arranged so that data enters at the top layer and works its way down, layer by layer, until it reaches the bottom, which is usually a database. Along the way, each layer has a specific job, such as checking the data for consistency or reformatting values so they stay uniform. It’s common for different programmers to work independently on different layers.
The Model-View-Controller (MVC) structure, shown in the diagram below, is the standard approach to building software in most popular web frameworks, and it is clearly layered.
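A minimal sketch of the idea, with illustrative layer names that are not taken from any particular framework:

```python
# A minimal layered-architecture sketch: each layer has one job and
# only talks to the layer directly below it.

def presentation_layer(raw_request: dict) -> str:
    # Top layer: accept input and format the final response.
    result = business_layer(raw_request["user_id"])
    return f"Orders for user {raw_request['user_id']}: {result}"

def business_layer(user_id: int) -> list:
    # Middle layer: validate and apply business rules.
    if user_id < 0:
        raise ValueError("user_id must be non-negative")
    return persistence_layer(user_id)

def persistence_layer(user_id: int) -> list:
    # Bottom layer: fetch from storage (a dict stands in for a database).
    fake_db = {7: ["book", "lamp"]}
    return fake_db.get(user_id, [])

print(presentation_layer({"user_id": 7}))
```

Because each function knows only about the layer below it, any one of them can be rewritten, say, swapping the dict for a real database, without touching the others.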
The main benefit of a layered architecture is separation of concerns: each layer needs to think only about its own role. That means:

- It’s easy to assign separate “jobs” to different people.
- Layers can be updated and enhanced independently.
- It suits some data science and AI applications, because the successive layers clean and prepare the data before the final analysis.
Properly layered architectures have isolation layers that aren’t affected by changes in other layers, which makes refactoring easier. They can also contain open layers, such as a service layer of shared services that is normally accessed only from the business layer but can be bypassed for speed.
The hardest parts for the architect are breaking down the tasks and deciding how to split them into layers. Once that’s done, it’s easy to separate the layers and assign them to different programmers.
The approach has some drawbacks:
Source code that is left unorganised, with modules lacking clear roles or relationships, can devolve into a “big ball of mud” that is hard to understand.
Code can also fall into what some developers call the “sinkhole anti-pattern”: much of it ends up doing nothing but passing data from layer to layer without applying any logic.
Keeping the layers separate is a goal of this architecture, but that very separation can make it hard to understand the system as a whole without understanding every part.
Coders can also skip past layers, creating tight couplings and a logical mess full of complex interdependencies.
Monolithic deployment is often unavoidable, which means even small changes can require redeploying the entire application.
It’s best for:

- Applications that need to be built quickly
- Enterprise or business applications that need to mirror traditional IT departments and processes
- Teams with inexperienced developers who don’t yet understand other architectures
- Applications requiring strict maintainability and testability standards
- Data pipelines built for data science in languages like R and Python
Event-driven architecture
Many programs spend most of their lives waiting for something to happen. That’s especially true of computers that work directly with humans, but it’s also common in areas like networks: the machines sit idle, waiting for work to arrive.
The event-driven architecture makes writing software for these jobs easier. It builds a central unit that accepts all incoming data and then delegates it to separate modules, each of which handles a particular type. Each piece of incoming data is called an “event,” and it is passed to the code assigned to that type.
A web browser is a familiar example: many different kinds of events occur, and the browser makes sure that only the right code sees the right events. Each module reacts only to the events that are relevant to it. This is very different from the layered architecture, where all data typically passes through all of the layers.
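The dispatching idea can be sketched in a few lines; the event types and handler names here are invented for illustration:

```python
# A minimal event-driven sketch: a central dispatcher routes each
# event to the handlers registered for its type.

handlers = {}

def on(event_type):
    # Decorator that registers a handler for one event type.
    def register(func):
        handlers.setdefault(event_type, []).append(func)
        return func
    return register

@on("click")
def handle_click(event):
    return f"clicked at {event['x']},{event['y']}"

@on("keypress")
def handle_key(event):
    return f"key: {event['key']}"

def dispatch(event):
    # Only the modules registered for this event type ever see it.
    return [h(event) for h in handlers.get(event["type"], [])]

print(dispatch({"type": "click", "x": 3, "y": 4}))
```

Note that the handlers never call each other; all coordination flows through the dispatcher, which is what keeps the modules independent.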
All in all, event-driven architectures:

- Adapt well to complex, often chaotic environments
- Scale easily
- Are easy to extend when new event types appear
- Suit some of the newer cloud models built around functions that run only when invoked
The approach has some drawbacks:
Testing can be complex if the modules can affect one another. Individual modules can be tested in isolation, but the interactions between them can only be tested in a fully working system.
Error handling can be difficult to structure, especially when several modules must handle the same events.
When modules fail, the central unit must have a backup plan.
Messaging overhead can slow down processing, especially when the central unit must buffer messages that arrive in bursts.
Developing a systemwide data structure for events can be complex when the events have very different needs.
Maintaining a transaction-based mechanism for consistency is difficult because the modules are so decoupled and independent.
It’s best for:
- Asynchronous systems with asynchronous data flow
- Applications where individual data blocks interact with only a few of the many modules
- Applications that run only occasionally; the newer cloud function-as-a-service models charge only when a function is invoked, so a mostly idle app costs next to nothing
Microkernel (or plug-in) architecture
Some applications have a core set of operations or features that are used again and again in different patterns, depending on the task. The Eclipse development environment is a well-known example: it will open files, annotate them, edit them, and start up background processes. The tool is famous for doing all of these jobs with Java code and then, when a button is pushed, compiling the code and running it.
In this case, the basic routines for displaying and editing a file are part of the microkernel. The Java compiler is just an extra part bolted on to support the core features. Other programmers have extended Eclipse to develop code for other languages with other compilers. Many don’t use the Java compiler at all, but they all use the same basic routines for editing and annotating files.
The extra features layered on top of the core are often called plug-ins, which is why many people call this a “plug-in architecture” instead.
Richards gives an example from the insurance business to show how this works: “Claims processing is necessarily complicated, but the steps themselves aren’t. What makes it hard are all of the rules.”
The solution is to push basic tasks, such as asking for a name or checking on payment, into the microkernel. These can be developed and tested independently. Then the different business units can write plug-ins for the different types of claims by knitting the rules together with calls to the core services in the kernel.
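A toy sketch of the pattern, assuming a hypothetical claims-processing kernel (all the names here are illustrative, not from any real system):

```python
# A minimal microkernel sketch: the kernel offers core services, and
# plug-ins register themselves to handle specific claim types.

class Kernel:
    def __init__(self):
        self.plugins = {}

    # A core service that every plug-in can call.
    def lookup_name(self, claim):
        return claim.get("name", "unknown")

    def register(self, claim_type, plugin):
        # Handshake: the plug-in tells the kernel it is installed and ready.
        self.plugins[claim_type] = plugin

    def process(self, claim):
        # Delegate the claim to whichever plug-in handles its type.
        plugin = self.plugins[claim["type"]]
        return plugin(self, claim)

def auto_claim(kernel, claim):
    # Plug-in: business rules for one claim type, built on kernel services.
    return f"auto claim filed for {kernel.lookup_name(claim)}"

kernel = Kernel()
kernel.register("auto", auto_claim)
print(kernel.process({"type": "auto", "name": "Alice"}))
```

A new claim type can be supported by writing one more function and registering it; the kernel itself never changes.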
Many operating systems today, including Linux, are said to have a kernel-style architecture, although the number of features in the kernel (the so-called size) is the subject of extensive debate. Whether small microkernels or bigger, more elaborate macrokernels of a similar style are better remains an open argument.
The approach has some drawbacks:
Deciding what belongs in the microkernel is often an art. It should hold the code that is used most often.
The plug-ins must include a fair amount of handshaking code so the microkernel knows each one is installed and ready to work.
Modifying the microkernel can be very difficult or even impossible once a number of plug-ins depend on it; the only solution is to modify the plug-ins too.
Choosing the right granularity for the kernel functions is difficult to do up front and almost impossible to change later.
It’s best for:
- Tools used by a wide variety of people
- Applications with a clear split between basic routines and higher-order rules
- Applications with a fixed set of core routines and a dynamic set of rules that must be updated frequently
Microservices architecture
Software can be like a kitten: cute and fun when it’s small, but hard to steer and resistant to change once it grows big. The microservices architecture is designed to help developers keep their babies from growing up to be unwieldy, monolithic, and inflexible.
Instead of building one big program, the goal is to create a number of different tiny programs, with another little program sitting on top that combines the data from all of them.
Richards points to Netflix’s UI as an example: go to your iPad and look at it, he says, and you’ll find that the ratings for movies you’ve already seen, the recommendations, the what’s-next list, and the accounting information are all tracked by separate services and served up independently.
Netflix, like any microservice application, is effectively a collection of dozens of smaller websites that simply present themselves as a single service.
This approach is similar in spirit to the event-driven and microkernel approaches, but it’s used mainly when the different tasks can be easily separated. In many cases, different tasks require different amounts of processing and vary in how they are used.
Netflix’s servers are pushed hardest on Friday and Saturday nights, so they must be ready to scale up. The servers that track DVD returns, by contrast, do most of their work during the week, just after the day’s mail is delivered.
By implementing these as separate services, Netflix’s cloud can scale each one up and down independently as demand shifts.
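A toy sketch of the aggregation idea, with plain functions standing in for networked services (all service names here are invented, and a real system would make these separate processes behind HTTP APIs):

```python
# A minimal microservices sketch: each tiny "service" answers one
# question, and a thin aggregator stitches their answers into one page.

def ratings_service(user):
    return {"rated": ["Movie A"]}

def recommendations_service(user):
    return {"recommended": ["Movie B", "Movie C"]}

def queue_service(user):
    return {"up_next": ["Movie D"]}

def homepage(user):
    # The little program on top that combines data from all the services.
    page = {}
    for service in (ratings_service, recommendations_service, queue_service):
        page.update(service(user))
    return page

print(homepage("alice"))
```

Because each service owns its own data and logic, any one of them can be rewritten, redeployed, or scaled on its own without the others noticing.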
The microservices approach has some drawbacks:
The services must be largely independent, or interactions between them can unbalance the cloud.
The tasks in some applications aren’t easy to break down into separate units.
Performance can suffer when a job is spread across different microservices, because the costs of communication can be significant; some jobs in AI and data processing don’t decompose well into small parts.
Too many microservices can confuse users, since some parts of a web page may appear long after others.
This method is best for:
- Websites with a lot of small parts
- Corporate data centers with well-defined boundaries
- Rapidly developing new businesses and web applications
- Development teams that are spread out, often across the globe, working on the same project
Space-based architecture
Many applications are built around a database, and they function well as long as the database keeps up. But when usage spikes and the database falls behind because it’s writing a log of every transaction, the entire site fails.
A space-based architecture avoids this collapse by splitting up both the processing and the storage of information among many servers that can also act as backups for one another. The data is spread out across the nodes, and so is the responsibility for answering calls.
Some architects use the vaguer term “cloud architecture” for this design. The name “space-based” refers to the “tuple space” of the users’ work, which is cut up to divide the work among the nodes.
“It’s all in memory,” Richards said. “The space-based architecture supports things that have sudden spikes by eliminating the database.”
With everything stored in RAM, many jobs run much faster, and spreading out the storage along with the processing can simplify many basic tasks.
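A minimal sketch of the partitioning idea, with in-process dictionaries standing in for the RAM of separate nodes (all names here are illustrative):

```python
# A minimal space-based sketch: user data is partitioned across nodes
# by hashing the user id, so both storage and processing are spread out.

NUM_NODES = 3
nodes = [{} for _ in range(NUM_NODES)]  # each dict stands in for one node's RAM

def node_for(user_id: str) -> int:
    # Deterministic partitioning: the same user always lands on the same node.
    return hash(user_id) % NUM_NODES

def put(user_id, value):
    nodes[node_for(user_id)][user_id] = value

def get(user_id):
    return nodes[node_for(user_id)].get(user_id)

put("alice", {"cart": ["lamp"]})
print(get("alice"))
```

No central database sits in the path: every read and write goes straight to the node that owns that slice of the tuple space, which is why load spreads evenly as traffic spikes.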
The space-based approach has some drawbacks:
Transactional support is more difficult with RAM databases.
Generating enough load to test the whole system can be hard, though the individual nodes can be tested on their own.
Developing the skill to cache the data for speed without corrupting multiple copies is difficult.
Some kinds of analysis also become harder. Computations that must span the entire dataset, such as finding an average or running a statistical analysis, have to be split into subjobs, spread across all of the nodes, and then combined when they finish.
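The split-and-combine step can be sketched like this, with plain lists standing in for data held on separate nodes:

```python
# Computing an average across data that lives on several nodes:
# each node produces a partial result, and the partials are combined.

nodes = [
    [4, 8, 6],     # values held in node 0's memory
    [10, 2],       # node 1
    [5, 5, 5, 5],  # node 2
]

# Each node computes a partial (sum, count) over its own data...
partials = [(sum(values), len(values)) for values in nodes]

# ...and the partials are combined once all the subjobs finish.
total = sum(s for s, _ in partials)
count = sum(c for _, c in partials)
print(total / count)
```

The same shape, local partials followed by a global combine, works for any computation that decomposes this way; ones that don’t are exactly the analyses this architecture makes painful.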
It’s best for:

- High-volume data like click streams and user logs
- Well-understood workloads with sections that need different amounts of computation; one part of the tuple space can get powerful machines with plenty of RAM, while another gets by with much less
- Low-value data that can occasionally be lost without big consequences (in other words, not bank transactions)
- Social networks