torwig/backend-cheats

πŸ“ƒ White paper for Backend developers

This repository is a visual cheatsheet covering the main topics in backend development. All material is divided into chapters that include different topics. There are three main parts to each topic:

  • Visual part - various images/tables/cheatsheets for better understanding (not every topic has one). All pictures and tables are made from scratch, specifically for this repository.
  • Summary - a very brief summary with a list of key terms and concepts. The terms are hyperlinked to the appropriate section on Wikipedia or a similar reference resource.
  • References to sources - resources where you can find complete information on a particular issue (hidden under a spoiler that opens when clicked). Where possible, the most authoritative sources are listed, or those that explain the material in the simplest and most comprehensible way.

🌐 Available translations: English, Russian

πŸ›  The repository is under active development, so it is constantly updated and supplemented (see roadmap).

🀝 If you want to help the project, feel free to send your issues or pull requests.

πŸŒ™ For a better experience, enable dark theme.

Contents

Network & Internet

The Internet is a worldwide system that connects computer networks around the world into a single network for storing and transferring information. It was originally developed for the military, but soon it was adopted by universities, and later by private companies, which began to organize networks of providers offering Internet access services to ordinary citizens. By early 2020, the number of Internet users exceeded 4.5 billion.

  • How the Internet works

    Internet

    Your computer does not have direct access to the Internet. Instead, it has access to your local network, to which other devices are connected via a wired (Ethernet) or wireless (Wi-Fi) connection. The organizer of such a network is a special minicomputer: the router. This device connects you to your Internet Service Provider (ISP), which in turn is connected to other higher-level ISPs. All these interactions make up the Internet, and your messages always transit through different networks before reaching the final recipient.

    • Host

      Any device that is on any network.

    • Server

      A special computer on the network that serves requests from other computers.

    Network topologies

πŸ”— References
  1. πŸ“„ How does the Internet work? – MDN
  2. πŸ“Ί How does the internet work? (Full Course) – YouTube
  3. πŸ“Ί What is a Server? Servers vs Desktops Explained – YouTube
  4. πŸ“Ί Network Topology – YouTube
  5. πŸ“Ί Network Topologies (Star, Bus, Ring, Mesh, Ad hoc, Infrastructure, & Wireless Mesh Topology) – YouTube
  • What is a domain name

    Domain name

    Domain Names are human-readable addresses of web servers available on the Internet. They consist of parts (levels) separated from each other by a dot. Each of these parts provides specific information about the domain name: for example, the country, the service name, the localization, etc.

    • Who owns domain names

      ICANN is the organization behind the distributed domain registration system. It accredits companies that want to sell domains, which creates a competitive domain market.

    • How to buy a domain name

      A domain name cannot be bought forever. It is leased for a certain period of time. It is better to buy domains from accredited registrars (you can find them in almost any country).

πŸ”— References
  1. πŸ“„ What is a Domain Name? – MDN
  2. πŸ“Ί A Beginners Guide to How Domain Names Work! – YouTube
  • IP address

    IPv4-IPv6

    IP address is a unique numeric address that is used to recognize a particular device on the network.

    • Levels of visibility
      • External and publicly accessible IP address that belongs to your ISP and is used to access the Internet by hundreds of other users.
      • The IP address of your router in your ISP's local network, the same IP address from which you access the Internet.
      • The IP address of your computer in the local (home) network created by the router, to which you can connect your devices. Typically, it looks like 192.168.XXX.XXX.
      • The internal IP address of the computer, inaccessible from the outside and used only for communication between the running processes. It is the same for everyone - 127.0.0.1 or just localhost.
    • Port

      One device (computer) can run many applications that use the network. To deliver incoming network data to the right application, a special number called a port is used. That is, each running process on a computer that uses a network connection has its own port.

    • IPv4

      Version 4 of the IP protocol. It was developed in 1981 and limits the address space to about 4.3 billion (2^32) possible unique addresses.

    • IPv6

      Over time, address space allocation began to happen at a much faster rate, forcing the creation of a new version of the IP protocol able to store more addresses. IPv6 can provide 2^128 (an enormous number of) unique addresses.
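The idea of IP addresses and ports can be seen directly from the sockets API. A minimal Python sketch (standard library only): binding to port 0 asks the OS to assign any free ephemeral port on the loopback address.

```python
import socket

# Bind a TCP socket to the loopback address; port 0 asks the OS
# to pick any free ephemeral port for us.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
host, port = sock.getsockname()

print(host)         # 127.0.0.1 - the internal loopback address
print(port > 1023)  # True - ephemeral ports lie above the well-known range
sock.close()
```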

πŸ”— References
  1. πŸ“Ί IP addresses. Explained – YouTube
  2. πŸ“Ί Public IP vs. Private IP and Port Forwarding (Explained by Example) – YouTube
  3. πŸ“Ί Network Ports Explained – YouTube
  4. πŸ“Ί What is IP address and types of IP address - IPv4 and IPv6 – YouTube
  5. πŸ“Ί IP Address - IPv4 vs IPv6 Tutorial – YouTube
  6. πŸ“„ IP Address Subnet Cheat Sheet – freeCodeCamp
  • What is DNS

    DNS

    DNS (Domain Name System) is a decentralized Internet address naming system that allows you to create human-readable alphabetic names (domain names) corresponding to the numeric IP addresses used by computers.

    • Structure of DNS

      DNS consists of many independent nodes, each of which stores only the data that falls within its area of responsibility.

    • DNS Resolver

      A server, usually located close to your Internet Service Provider, that looks up addresses by domain name and caches them (temporarily stores them for quick retrieval on future requests).

    • DNS record types
      • A record - associates the domain name with an IPv4 address.
      • AAAA record - links a domain name with an IPv6 address.
      • CNAME record - redirects to another domain name.
      • and others - MX record, NS record, PTR record, SOA record.
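Name resolution can also be triggered from code via the system resolver. A small Python sketch (standard library only); "localhost" is used here so the example works without Internet access - substitute a real domain to see actual A/AAAA records:

```python
import socket

# Ask the system resolver for the addresses behind a name.
infos = socket.getaddrinfo("localhost", None)
addresses = {info[4][0] for info in infos}
print(addresses)  # typically {'127.0.0.1'} or {'127.0.0.1', '::1'}
```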
πŸ”— References
  1. πŸ“„ What is DNS? Domain Name System explained – freeCodeCamp
  2. πŸ“Ί DNS (Domain Name System) explained. Types of Domain Name Servers – YouTube
  3. πŸ“Ί DNS as Fast As Possible – YouTube
  4. πŸ“„ All about DNS records – Cloudflare
  5. πŸ“Ί DNS records explained (playlist) – YouTube
  • Web application design

    Modern web applications consist of two parts, Frontend and Backend, which together implement a client-server model.

    The tasks of the Frontend are:

    • Implementation of the user interface (appearance of the application)
      • A special markup language HTML is used to create web pages.
      • CSS style language is used to style fonts, layout of content, etc.
      • JavaScript programming language is used to add dynamics and interactivity.
        As a rule, these tools are rarely used in their pure form, as so-called frameworks and preprocessors exist for more convenient and faster development.
    • Creating functionality for generating requests to the server

      These are usually different types of input forms that can be conveniently interacted with.

    • Receiving data from the server and then processing it for display to the client

    Tasks of the Backend:

    • Handling client requests

      Checking for permissions and access, all sorts of validations, etc.

    • Implementing business logic

      A wide range of tasks can be implied here: working with databases, information processing, computation, etc. This is, so to speak, the heart of the Backend world. This is where all the important and interesting stuff happens.

    • Generating a response and sending it to the client
πŸ”— References
  1. πŸ“„ Front-End vs. Back-End explained
  2. πŸ“Ί Everything You NEED to Know About WEB APP Architecture – YouTube
  • Browsers and how they work

    Browser

    Browser is a client which can be used to send requests to a server for files which can then be used to render web pages. In simple terms, a browser can be thought of as a program for viewing HTML files, which can also search for and download them from the Internet.

    • Working Principle

      Query handling, page rendering, and the tabs feature (each tab has its own process to prevent the contents of one tab from affecting the contents of the other).

    • Extensions

      Allow you to change the browser's user interface, modify the contents of web pages, and modify the browser's network requests.

    • Chrome DevTools

      An indispensable tool for any web developer. It allows you to analyze all possible information related to web pages, monitor their performance, logs and, most importantly for us, track information about network requests.

πŸ”— References
  1. πŸ“„ How browsers work – MDN
  2. πŸ“„ How browsers work: Behind the scenes of modern web browsers – web.dev
  3. πŸ“„ Inside look at modern web browser – Google
  4. πŸ“Ί What is a web browser? – YouTube
  5. πŸ“Ί Anatomy of the browser 101 (Chrome University 2019) – YouTube
  6. πŸ“Ί Chrome DevTools - Crash Course – YouTube
  7. πŸ“Ί Demystifying the Browser Networking Tab in DevTools – YouTube
  8. πŸ“Ί 21+ Browser Dev Tools & Tips You Need To Know – YouTube
  • VPN and Proxy

    Proxy & VPN

    The use of VPNs and Proxy is quite common in recent years. With the help of these technologies, users can get basic anonymity when surfing the web, as well as bypass various regional blockages.

    • VPN (Virtual Private Network)

      A technology that allows you to become a member of a private network (similar to your local network), where requests from all participants go through a single public IP address. This allows you to blend in with the general mass of requests from other participants.

      • Simple procedure for connection and use.
      • Reliable traffic encryption.
      • There is no guarantee of 100% anonymity, because the owner of the network knows the IP-addresses of all participants.
      • VPNs are useless for dealing with multi-accounts and some programs because all accounts operating from the same VPN are easily detected and blocked.
      • Free VPNs tend to be heavily loaded, resulting in unstable performance and slow download speeds.
    • Proxy (proxy server)

      A proxy is a special server on the network that acts as an intermediary between you and the destination server you intend to reach. When you are connected to a proxy server all your requests will be performed on behalf of that server, that is, your IP address and location will be substituted.

      • The ability to use an individual IP address, which allows you to work with multi-accounts.
      • Stability of the connection due to the absence of high loads.
      • Connection via proxy is provided in the operating system and browser, so no additional software is required.
      • There are proxy varieties that provide a high level of anonymity.
      • The unreliability of free solutions, because the proxy server can see and control everything you do on the Internet.
πŸ”— References
  1. πŸ“„ What is VPN? How It Works, Types of VPN – kaspersky.com
  2. πŸ“Ί VPN (Virtual Private Network) Explained – YouTube
  3. πŸ“Ί What Is a Proxy and How Does It Work? – YouTube
  4. πŸ“Ί What is a Proxy Server? – YouTube
  5. πŸ“Ί Proxy vs. Reverse Proxy (Explained by Example) – YouTube
  6. πŸ“Ί VPN vs Proxy Explained Pros and Cons – YouTube
  • Hosting

    Hosting

    Hosting is a special service provided by hosting providers, which allows you to rent space on a server (which is connected to the Internet around the clock), where your data and files can be stored. There are different options for hosting, where you can use not only the disk space of the server, but also the CPU power to run your network applications.

    • Virtual hosting

      One physical server that distributes its resources to multiple tenants.

    • VPS/VDS

      Virtual servers that emulate the operation of a separate physical server and are available for rent to the client with maximum privileges.

    • Dedicated server

      Renting a full physical server with full access to all resources. As a rule, this is the most expensive service.

    • Cloud hosting

      A service that uses the resources of several servers. When renting, the user pays only for the actual resources used.

    • Colocation

      A service that gives the customer the opportunity to install their equipment on the provider's premises.

πŸ”— References
  1. πŸ“„ What is Web Hosting? – namecheap.com
  2. πŸ“Ί What is Web Hosting and How Does It Work? – YouTube
  3. πŸ“Ί Different Hosting Types Explained – YouTube
  4. πŸ“„ Awesome Hosting – GitHub
  • OSI network model

    β„– | Level | Used protocols
    --- | --- | ---
    7 | Application layer | HTTP, DNS, FTP, POP3
    6 | Presentation layer | SSL, SSH, IMAP, JPEG
    5 | Session layer | APIs, Sockets
    4 | Transport layer | TCP, UDP
    3 | Network layer | IP, ICMP, IGMP
    2 | Data link layer | Ethernet, MAC, HDLC
    1 | Physical layer | RS-232, RJ45, DSL

    OSI (The Open Systems Interconnection model) is a set of rules describing how different devices should interact with each other on the network. The model is divided into 7 layers, each of which is responsible for a specific function. All this is to ensure that the process of information exchange in the network follows the same pattern and all devices, whether it is a smart fridge or a smartphone, can understand each other without any problems.

    • Physical layer

      At this level, bits (ones/zeros) are encoded into physical signals (current, light, radio waves) and transmitted further by wire (Ethernet) or wirelessly (Wi-Fi).

    • Data link layer

      Physical signals from layer 1 are decoded back into ones and zeros, errors and defects are corrected, and the sender and receiver MAC addresses are extracted.

    • Network layer

      This is where traffic routing and the generation of IP packets take place.

    • Transport layer

      The layer responsible for data transfer. There are two important protocols:

      • TCP is a protocol that ensures reliable data transmission. TCP guarantees data delivery and preserves the order of the messages. This has an impact on the transmission speed. This protocol is used where data loss is unacceptable, such as when sending mail or loading web pages.
      • UDP is a simple protocol with fast data transfer. It does not use mechanisms to guarantee the delivery and ordering of data. It is used e.g. in online games where partial packet loss is not crucial, but the speed of data transfer is much more important. Also, requests to DNS servers are made through UDP protocol.
    • Session layer

      Responsible for opening and closing communications (sessions) between two devices. Ensures that the session stays open long enough to transfer all necessary data, and then closes quickly to avoid wasting resources.

    • Presentation layer

      Transmission, encryption/decryption and data compression. This is where data that comes in the form of zeros and ones are converted into desired formats (PNG, MP3, PDF, etc.)

    • Application layer

      Allows the user's applications to access network services such as database query handler, file access, email forwarding.
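The difference between the two transport protocols is easy to feel in code. A loopback sketch in Python (standard library only): UDP just fires a datagram at an address, while TCP establishes a connection first.

```python
import socket

# --- UDP: connectionless, fire-and-forget datagrams ---
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 0))                  # OS picks a free port
udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_send.sendto(b"ping", udp_recv.getsockname()) # no connection needed
udp_data, _ = udp_recv.recvfrom(1024)
print(udp_data)  # b'ping'
udp_send.close(); udp_recv.close()

# --- TCP: a connection is established first (three-way handshake) ---
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0)); srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()
cli.sendall(b"hello")
tcp_data = conn.recv(1024)
print(tcp_data)  # b'hello'
cli.close(); conn.close(); srv.close()
```

Over loopback the UDP datagram arrives reliably, but on a real network nothing would guarantee its delivery or ordering, whereas TCP would retransmit lost segments.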

πŸ”— References
  1. πŸ“„ Layers of OSI Model – geeksForGeeks
  2. πŸ“Ί The OSI Model - Explained by Example – YouTube
  3. πŸ“Ί TCP vs UDP Crash Course – YouTube
  • HTTP Protocol

    HTTP (HyperText Transfer Protocol) is the most important protocol on the Internet. It is used to transfer data of any format. The protocol itself works according to a simple principle: request -> response.

    • Structure of HTTP messages

      HTTP messages consist of a header section containing metadata about the message, followed by an optional message body containing the data being sent.

    HTTP

    • Headers

      Additional service information that is sent with the request/response.
      Common headers: Host, User-Agent, If-Modified-Since, Cookie, Referer, Authorization, Cache-Control, Content-Type, Content-Length, Last-Modified, Set-Cookie, Content-Encoding.

    • Request methods

      Main: GET, POST, PUT, DELETE.
      Others: HEAD, CONNECT, OPTIONS, TRACE, PATCH.

    • Response status codes

      Each response from the server has a special numeric code that characterizes the state of the sent request. These codes are divided into 5 main classes:

      • 1xx - Service information
      • 2xx - Successful request
      • 3xx - Redirect to another address
      • 4xx - Client side error
      • 5xx - Server side error
    • HTTPS

      Same HTTP, but with encryption support. Your apps should use HTTPS to be secure.

    • Cookie

      The HTTP protocol does not provide the ability to save information about the status of previous requests and responses. Cookies are used to solve this problem. Cookies allow the server to store information on the client side that the client can send back to the server. For example, cookies can be used to authenticate users or to store various settings.

    • CORS (Cross origin resource sharing)

      A technology that allows one domain to securely receive data from another domain.

    • CSP (Content Security Policy)

      A special header that allows you to recognize and eliminate certain types of web application vulnerabilities.

    • Evolution of HTTP
      • HTTP 1.0: Uses separate connections for each request/response, lacks caching support, and has plain text headers.
      • HTTP 1.1: Introduces persistent connections, pipelining, the Host header, and chunked transfer encoding.
      • HTTP 2: Supports multiplexing, header compression, server push, and a binary framing format.
      • HTTP 3: Built on QUIC, offers improved multiplexing, reliability, and better performance over unreliable networks.
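The request -> response principle can be sketched with Python's standard library alone: start a tiny local HTTP server in a background thread and inspect the status code and headers of a GET response (no Internet access needed).

```python
import http.client
import http.server
import threading

# Tiny local HTTP server on an OS-assigned port.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# request -> response: send GET and read the status line and headers.
conn = http.client.HTTPConnection(host, port)
conn.request("GET", "/", headers={"User-Agent": "demo"})
resp = conn.getresponse()
status = resp.status
content_type = resp.getheader("Content-Type")
print(status)        # 200 - a 2xx code means the request succeeded
print(content_type)  # a response header describing the body format
conn.close()
server.shutdown()
```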
πŸ”— References
  1. πŸ“„ How HTTP Works and Why it's Important – freeCodeCamp
  2. πŸ“„ Hypertext Transfer Protocol (HTTP) – MDN
  3. πŸ“Ί Hyper Text Transfer Protocol Crash Course – YouTube
  4. πŸ“Ί Full HTTP Networking Course (5 hours) – YouTube
  5. πŸ“„ HTTP vs HTTPS – What's the Difference? – freeCodeCamp
  6. πŸ“Ί HTTP Cookies Crash Course – YouTube
  7. πŸ“Ί Cross Origin Resource Sharing (Explained by Example) – YouTube
  8. πŸ“Ί When to use HTTP GET vs POST? – YouTube
  9. πŸ“Ί How HTTP/2 Works, Performance, Pros & Cons and More – YouTube
  10. πŸ“Ί HTTP/2 Critical Limitation that led to HTTP/3 & QUIC – YouTube
  11. πŸ“Ί 304 Not Modified HTTP Status (Explained with Code Example and Pros & Cons) – YouTube
  12. πŸ“Ί What is the Largest POST Request the Server can Process? – YouTube
  • TCP/IP stack

    TCP/IP

    Compared to the OSI model, the TCP/IP stack has a simpler architecture. In general, the TCP/IP model is more widely used and practical, and the OSI model is more theoretical and detailed. Both models describe the same principles, but differ in the approach and protocols they include at their levels.

    • Link layer

      Defines how data is transmitted over the physical medium, such as cables or wireless signals.
      Protocols: Ethernet, Wi-Fi, Bluetooth, Fiber optic.

    • Internet Layer

      Routing data across different networks. It uses IP addresses to identify devices and routes data packets to their destination.
      Protocols: IP, ARP, ICMP, IGMP

    • Transport Layer

      Data transmission between two devices. It uses protocols such as TCP (reliable but slower) and UDP (fast but unreliable).

    • Application Layer

      Provides services to the end user, such as web browsing, email, and file transfer. It interacts with the lower layers of the stack to transmit data over the network.
      Protocols: HTTP, FTP, SMTP, DNS, SNMP.

πŸ”— References
  1. πŸ“„ What is the TCP/IP Model? Layers and Protocols Explained – freeCodeCamp
  2. πŸ“Ί What is TCP/IP? – YouTube
  3. πŸ“Ί How TCP really works. Three-way handshake. TCP/IP Deep Dive – YouTube
  • Network problems

    Problems

    The quality of networks, including the Internet, is far from ideal. This is due to the complex structure of networks and their dependence on a huge number of factors. For example, the stability of the connection between the client device and its router, the quality of service of the provider, the power and performance of the server, the physical distance between the client and the server, etc.

    • Latency

      The time it takes for a data packet to travel from sender to receiver. It depends largely on the physical distance.

    • Packet loss

      Not all packets traveling over the network can reach their destination. This happens most often when using wireless networks or due to network congestion.

    • Round Trip Time (RTT)

      The time it takes for the data packet to reach its destination + the time to respond that the packet was received successfully.

    • Jitter

      Delay fluctuations, unstable ping (for example, 50ms, 120ms, 35ms...).

    • Packet reordering

      The IP protocol does not guarantee that packets are delivered in the order in which they are sent.
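Round Trip Time can be measured directly. A Python sketch that times a tiny message echoed over the loopback interface; real networks add distance-dependent delay and jitter on top of this:

```python
import socket
import time

# Local echo setup: client and "server" in one process over loopback.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0)); srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

start = time.perf_counter()
cli.sendall(b"ping")
conn.sendall(conn.recv(1024))    # echo the message back
reply = cli.recv(1024)
rtt_ms = (time.perf_counter() - start) * 1000

print(reply)                     # b'ping'
print(f"RTT: {rtt_ms:.3f} ms")   # loopback RTT is tiny compared to real networks
cli.close(); conn.close(); srv.close()
```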

πŸ”— References
  1. πŸ“„ Understanding latency – MDN
  2. πŸ“Ί What is latency? What affects latency? – YouTube
  3. πŸ“Ί Basics of network bandwidth, latency, and jitter – YouTube
  4. πŸ“Ί Round Trip Time (RTT) – YouTube
  5. πŸ“Ί What Causes Packet Loss and How to Eliminate It In Your Network – YouTube
  • Network diagnostics

    Traceroute

    • Traceroute

      A procedure that lets you trace the route of a packet you send: which nodes, with which IP addresses, it passes through before reaching its destination. Tracing can be used to identify problems in computer networks and to examine/analyze the network.

    • Ping scan

      The easiest way to check whether a server is up and responding.

    • Checking for packet loss

      Due to dropped connections, not all packets sent over the network reach their destination.

    • Wireshark

      A powerful program with a graphical interface for analyzing all traffic that passes through the network in real time.
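Beyond the standard tools above, a crude reachability check (in the spirit of ping, but over TCP) is easy to script. A Python sketch, demonstrated against a local listening socket so it works offline:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Can we open a TCP connection to host:port within the timeout?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: a local listening socket stands in for a remote server.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0)); srv.listen(1)
host, port = srv.getsockname()

up = is_reachable(host, port)
srv.close()
down = is_reachable(host, port)

print(up)    # True - something was listening
print(down)  # False - the connection is now refused
```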

πŸ”— References
  1. πŸ“Ί How does traceroute work? – YouTube
  2. πŸ“Ί Traceroute (tracert) Explained - Network Troubleshooting – YouTube
  3. πŸ“Ί Nmap - Host Discovery With Ping Sweep – YouTube
  4. πŸ“Ί Internet Troubleshooting - Pathping Packet Loss – YouTube
  5. πŸ“Ί Wireshark crash course (playlist) – YouTube

PC device

  • Main components (hardware)

    • Motherboard

      The most important PC component to which all other elements are connected.

      • Chipset - a set of electronic components that is responsible for the communication between all motherboard components.
      • CPU socket - socket for mounting the processor.
      • VRM (Voltage Regulator Module) – module that converts the incoming voltage (usually 12V) to a lower voltage to run the processor, integrated graphics, memory, etc.
      • Slots for RAM.
      • Expansion slots PCI-Express - designed for connection of video cards, external network/sound cards.
      • Slots M.2 / SATA - designed to connect hard disks and SSDs.
    • CPU (Central processing unit)

      The most important device that executes instructions (program code). Processors only work with ones and zeros, so all programs are ultimately a set of binary code.

      • Registers - the fastest memory in a PC, has an extremely small capacity, is built into the processor and is designed to temporarily store the data being processed.
      • Cache - slightly less fast memory, which is also built into the processor and is used to store a copy of data from frequently used cells in the main memory.
      • Processors can have different architectures. Currently, the most common are the x86 architecture (desktop and laptop computers) and ARM (mobile devices as well as the latest Apple computers).
    • RAM (Random-access memory)

      Fast, low capacity memory (4-16GB) designed to temporarily store program code, as well as input, output and intermediate data processed by the processor.

    • Data storage

      Large capacity memory (256GB-1TB) designed for long-term storage of files and installed programmes.

    • GPU (Graphics card)

      A separate card that translates and processes data into images for display on a monitor. This device is also called a discrete graphics card. Usually needed for those who do 3D modelling or play games.
      Built-in graphics card is a graphics card built into the processor. It is suitable for daily work.

    • Network card

      A device that receives and transmits data from other devices connected to the local network.

    • Sound card

      A device that allows you to process sound, output it to other devices, record it with a microphone, etc.

    • Power supply unit

      A device designed to convert the AC voltage from the mains to DC voltage.

πŸ”— References
  1. πŸ“„ Everything You Need to Know About Computer Hardware
  2. πŸ“„ Putting the "You" in CPU: explainer how your computer runs programs, from start to finish
  3. πŸ“Ί What does what in your computer? Computer parts Explained – YouTube
  4. πŸ“Ί Motherboards Explained – YouTube
  5. πŸ“Ί The Fetch-Execute Cycle: What's Your Computer Actually Doing? – YouTube
  6. πŸ“Ί How a CPU Works in 100 Seconds // Apple Silicon M1 vs Intel i9 – YouTube
  7. πŸ“Ί Arm vs x86 - Key Differences Explained – YouTube
  • Operating system design

    OS

    Operating system (OS) is a comprehensive software system designed to manage a computer's resources. With operating systems, people do not have to deal directly with the processor, RAM or other parts of the PC.

    OS can be thought of as an abstraction layer that manages the hardware of a computer, thereby providing a simple and convenient environment for user software to run.

    • Main features
      • RAM management (space allocation for individual programs)
      • Loading programs into RAM and their execution
      • Execution of requests from user's programs (inputting and outputting data, starting and stopping other programs, freeing up memory or allocating additional memory, etc.)
      • Interaction with input and output devices (mouse, keyboard, monitor, etc.)
      • Interaction with storage media (HDDs and SSDs)
      • Providing a user interface (console shell or graphical interface)
      • Logging of software errors (saving logs)
    • Additional functions (may not be available in all OSs)
      • Organising multitasking (simultaneous execution of several programs)
      • Delimiting access to resources for each process
      • Inter-process communication (data exchange, synchronisation)
      • Protecting the operating system itself against other programs and the actions of the user
      • Providing multi-user mode and differentiating rights between different OS users (admins, guests, etc.)
    • OS kernel

      The central part of the operating system which is used most intensively. The kernel is constantly in memory, while other parts of the OS are loaded into and unloaded from memory as needed.

    • Bootloader

      The system software that prepares the environment for the OS to run: it puts the hardware in the right state, prepares the memory, loads the OS kernel into it, and hands control over to the kernel.

    • Device drivers

      Special software that allows the OS to work with a particular piece of equipment.

πŸ”— References
  1. πŸ“„ What is an OS? Operating System Definition for Beginners – freeCodeCamp
  2. πŸ“„ Windows vs MacOS vs Linux – Operating System Handbook – freeCodeCamp
  3. πŸ“Ί Operating Systems: Crash Course Computer Science – YouTube
  4. πŸ“Ί Operating System Basics – YouTube
  5. πŸ“Ί Operating System in deep details (playlist) – YouTube
  6. πŸ“„ Awesome Operating System Stuff – GitHub
  • Processes and threads

    Process

    • Process

      A kind of container in which all the resources needed to run a program are stored. As a rule, the process consists of:

      • Executable program code
      • Input and output data
      • Call stack (order of instructions for execution)
      • Heap (a structure for storing intermediate data created during the process)
      • Segment descriptor
      • File descriptor
      • Information about the permissions (privileges) granted to the process
      • Processor status information
    • Thread

      An entity in which sequences of program actions (procedures) are executed. Threads are within a process and use the same address space. There can be multiple threads in a single process, allowing multiple tasks to be performed. These tasks, thanks to threads, can exchange data, use shared data or the results of other tasks.
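That threads share one address space is easy to demonstrate. In this Python sketch, four threads update the same counter, with a lock preventing lost updates caused by a race:

```python
import threading

# Threads of one process share the same address space:
# all workers see and update the same counter variable.
counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:       # prevent a lost-update race between threads
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

print(counter)  # 40000 - every thread's updates landed on the shared variable
```

Without the lock, `counter += 1` (a read-modify-write) could interleave between threads and the final value would often fall short of 40000.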

πŸ”— References
  1. πŸ“Ί Difference Between Process and Thread – YouTube
  2. πŸ“Ί How Do CPUs Use Multiple Cores – YouTube
  3. πŸ“Ί What is Hyper Threading Technology – YouTube
  • Concurrency and parallelism

    Concurrency-parallelism

    • Parallelism

      The ability to perform multiple tasks simultaneously using multiple processor cores, where each individual core performs a different task.

    • Concurrency

      The ability to perform multiple tasks, but using a single processor core. This is achieved by dividing tasks into separate blocks of commands which are executed in turn, but switching between these blocks is so fast that for users it seems as if these processes are running simultaneously.
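A sketch of the distinction using Python's concurrent.futures: a thread pool interleaves tasks (concurrency); in CPython, swapping ThreadPoolExecutor for ProcessPoolExecutor would run the tasks in separate processes on separate cores (parallelism).

```python
from concurrent.futures import ThreadPoolExecutor

def task(n: int) -> int:
    return sum(range(n))  # a stand-in for real work

numbers = [10_000, 20_000, 30_000, 40_000]

# Concurrency: the four tasks are interleaved on a pool of worker threads.
# (In CPython the GIL keeps pure-Python threads on one core; replacing
# ThreadPoolExecutor with ProcessPoolExecutor gives true parallelism.)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, numbers))

print(results[0] == sum(range(10_000)))  # True - same result, concurrent execution
```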

πŸ”— References
  1. πŸ“„ Concurrency, parallelism, and the many threads of Santa Claus – freeCodeCamp
  2. πŸ“Ί Concurrency vs Parallelism – YouTube
  3. πŸ“Ί Concurrency is not Parallelism by Rob Pike – YouTube
  • Inter-process communication

    A mechanism that allows threads of one process, or of different processes, to exchange data. The processes can run on the same computer or on different computers connected by a network. Inter-process communication can be done in different ways.

    • File

      The easiest way to exchange data. One process writes data to a certain file, another process reads the same file and thus receives data from the first process.

    • Signal (IPC)

      Asynchronous notification of one process about an event which occurred in another process.

    • Network socket

      In particular, IP addresses and ports are used for communication between computers over the TCP/IP protocol stack. This address-and-port pair defines a socket (the endpoint corresponding to the address and port).

    • Semaphore

      A counter on which only two operations can be performed: increment and decrement (and at 0 the decrement operation blocks).

    • Message passing & Message queue
    • Pipelines

      Redirecting the output of one process to the input of another (similar to a pipe).
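Pipes are available from any language through the OS API. A minimal Python sketch using os.pipe; both ends are held by one process here for brevity, whereas after a fork the parent and child would each keep one end:

```python
import os

# The OS hands us two file descriptors: bytes written to write_fd
# can be read back from read_fd.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello through the pipe")
os.close(write_fd)            # closing signals end-of-data to the reader

data = os.read(read_fd, 1024)
os.close(read_fd)
print(data)  # b'hello through the pipe'
```

The shell's `|` operator does exactly this wiring for you, connecting one process's stdout to the next process's stdin.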

πŸ”— References
  1. πŸ“„ Interprocess Communications – Microsoft
  2. πŸ“Ί Interprocess Communication – YouTube
  3. πŸ“Ί Inter Process Communication – YouTube

Linux Basics

Operating systems based on Linux kernel are the standard in the world of server development, since most servers run on such operating systems. Using Linux on servers is profitable because it is free and open source, secure and works fast on cheap hardware.

There are a huge number of Linux distributions (preinstalled software bundles) to suit all tastes. One of the most popular is Ubuntu. This is where you can start your dive into server development.

Install Ubuntu on a separate PC or laptop. If this is not possible, you can use VirtualBox, a program that lets you run other operating systems on top of your main OS. You can also run a container from the Ubuntu Docker image (Docker is a separate topic covered in this repository).

  • Working with shell

    Shell (or console, terminal) is a program used to operate and control a computer by entering text commands. Servers generally have no graphical interface (GUI), so you will definitely need to learn how to work with a shell. There are many Unix shells, but most Linux distributions come with Bash by default.

    • Basic commands for navigating the file system
      ls # list directory contents
      cd [PATH] # go to specified directory
      cd .. # move to a higher level (to the parent directory)
      touch [FILE] # create a file
      cat > [FILE] # enter text into the file (overwrite)
      cat >> [FILE] # enter text at the end of the file (append)
      cat/more/less [FILE] # to view the file contents
      head/tail [FILE] # view the first/last lines of a file
      pwd # print path to current directory
      mkdir [NAME] # create a directory
      rmdir [NAME] # delete a directory
      cp [FILE] [PATH] # copy a file or directory
      mv [FILE] [PATH] # moving or renaming
      rm [FILE] # delete a file (add -r to delete a directory)
      find [PATH] -name [PATTERN] # search the file system by name
      du [FILE] # output file or directory size
      grep [PATTERN] [FILE] # print lines that match patterns
    • Commands for help information
      man [COMMAND] # allows you to view a manual for any command
      apropos [STRING] # search for a command with a description that has a specified word
      man -k [STRING] # similar to the command above
      whatis [COMMAND] # a brief description of the command
    • Super user rights

      Analogous to running as administrator in Windows.

      sudo [COMMAND] # executes a command with superuser privileges
    • Text editor

      Learn any of them in order to read and edit files freely through the terminal. The easiest is nano, a middle ground is micro, and the most advanced is Vim.

πŸ”— References
  1. πŸ“„ 31 Linux Commands Every Ubuntu User Should Know
  2. πŸ“„ The Linux Command Handbook – freeCodeCamp
  3. πŸ“„ A to Z: List of Linux commands
  4. πŸ“Ί The 50 Most Popular Linux & Terminal Commands – YouTube
  5. πŸ“Ί Nano Editor Fundamentals – YouTube
  6. πŸ“Ί Vim Tutorial for Beginners – YouTube
  7. πŸ“„ Awesome Terminals – GitHub
  8. πŸ“„ Awesome CLI-apps – GitHub
  • Package manager

    The package manager is a utility that allows you to install/update software packages from the terminal.

    Linux distributions can be divided into several groups depending on which package manager they use: apt (in Debian-based distributions), RPM (the Red Hat package management system) and Pacman (the package manager in Arch-like distributions).

    Ubuntu is based on Debian, so it uses apt (advanced packaging tool) package manager.

    • Basic commands
      apt install [package] # install the package
      apt remove [package] # remove the package, but keep the configuration
      apt purge [package] # remove the package along with the configuration
      apt update # update information about new versions of packages
      apt upgrade # update the packages installed in the system
      apt list --installed # list of packages installed on the system
      apt list --upgradable # list of packages that need to be updated
      apt search [package] # searching for packages by name on the network
      apt show [package] # package information
    • aptitude

      Interactive console utility for easy viewing of packages to install, update and uninstall them.

    • Repository management

      Package managers typically work with software repositories. These repositories contain a collection of software packages that are maintained and provided by the distribution's community or official sources.

      add-apt-repository [repository_url] # add a new repository
      add-apt-repository --remove [repository_url] # remove a repo
          # don't forget to run apt update after these operations
      /etc/apt/sources.list # a file containing the list of configured repository links
      /etc/apt/sources.list.d # a directory containing files for third-party repos
    • dpkg

      Low-level tool to install, build, remove and manage Debian packages.

πŸ”— References
  1. πŸ“Ί Linux Crash Course - The apt Command – YouTube
  2. πŸ“Ί Linux Package Management | Debian, Fedora, and Arch Linux – YouTube
  3. πŸ“„ sudo apt-get update vs upgrade – What is the Difference? – freeCodeCamp
  4. πŸ“„ Repositories in Ubuntu
  • Bash scripts

    You can use scripts to automate the sequential input of any number of commands. In Bash you can create different conditions (branching), loops, timers, etc. to perform all kinds of actions related to shell input.

    • Basics of Bash Scripts

      The most basic and frequently used features such as: variables, I/O, loops, conditions, etc.

    • Practice

      Solve challenges on sites like HackerRank and Codewars. Start using Bash to automate routine activities on your computer. If you're already a programmer, create scripts to build your project, apply settings, and so on.

    • ShellCheck script analysis tool

      It will point out possible mistakes and teach you best practices for writing really good scripts.

    • Additional resources

      Repositories such as awesome bash and awesome shell have entire collections of useful resources and tools to help you develop even more skills with Bash and shell in general.

πŸ”— References
  1. πŸ“„ Shell Scripting for Beginners – freeCodeCamp
  2. πŸ“Ί Bash Scripting Full Course 3 Hours – YouTube
  3. πŸ“„ HackerRank challenges for Bash with solutions
  • Users, groups and permissions

    Linux-based operating systems are multi-user: several people can run many different applications at the same time on the same computer. For the system to "recognize" a user, the user must log in, so each user has a unique name and a secret password.

    • Working with users
      useradd [name] [flags] # create a new user
      passwd [name] # set a password for the user
      usermod [name] [flags] # edit a user
      usermod -L [name] # block a user
      usermod -U [name] # unblock a user
      userdel [name] [flags] # delete a user
      su [name] # switch to other user
    • Working with groups
      groupadd [group] [flags] # create a group
      groupmod [group] [flags] # edit group
      groupdel [group] [flags] # delete group
      usermod -a -G [groups] [user] # add a user to groups
      gpasswd --delete [user] [groups] # remove a user from groups
    • System files
      /etc/passwd # a file containing basic information about users
      /etc/shadow # a file containing encrypted passwords
      /etc/group # a file containing basic information about groups
      /etc/gshadow # a file containing encrypted group passwords

    Linux makes it possible to divide privileges between users, restrict access to unwanted files or features, control the actions available to services, and much more. There are only three kinds of permissions - read, write and execute - and three categories of users they can be applied to: the file's owner, the file's group and everyone else.

    chmod

    • Basic commands for working with rights
      chown <user> <file> # changes the owner and/or group for the specified files
      chmod <rights> <file> # changes access rights to files and directories
      chgrp <group> <file> # changes the group of the specified files
    • Extended rights SUID and SGID, sticky bit
    • ACL (Access control list)

      An advanced subsystem for managing access rights.

πŸ”— References
  1. πŸ“„ Managing Users, Groups and Permissions in Linux
  2. πŸ“„ Linux User Groups Explained – freeCodeCamp
  3. πŸ“Ί Linux Users and Groups – YouTube
  4. πŸ“„ An Introduction to Linux Permissions – Digital Ocean
  5. πŸ“„ File Permissions in Linux – How to Use the chmod Command – freeCodeCamp
  6. πŸ“Ί Understanding File & Directory Permissions – YouTube
  • Working with processes

    Linux processes can be described as containers in which all information about the state of a running program is stored. If a program hangs, you will need process-management skills to find it and restart or kill it.

    • Basic Commands
      ps # display a snapshot of the processes of all users
      top # real-time task manager
      [command] & # run the process in the background (without occupying the shell)
      jobs # list of processes running in the background
      fg [JOB_ID] # bring a background job to the foreground
       # press [Ctrl+Z] to suspend the current foreground process
      bg [JOB_ID] # resume a suspended job in the background
      kill [PID] # terminate the process by PID
      killall [program] # terminate all processes related to the program
πŸ”— References
  1. πŸ“„ How to Show Process Tree in Linux
  2. πŸ“„ How to Manage Linux Processes – freeCodeCamp
  3. πŸ“„ How To Use ps, kill, and nice to Manage Processes in Linux – Digital Ocean
  4. πŸ“Ί Linux processes, init, fork/exec, ps, kill, fg, bg, jobs – YouTube
  • Working with SSH

    SSH allows remote access to another computer's terminal. In the case of a personal computer, this may be needed to solve an urgent problem, and in the case of a server, it is generally the primary method of connection.

    • Basic commands
      apt install openssh-server # installing SSH (out of the box almost everywhere)
      service ssh start # start SSH
      service ssh stop # stop SSH
      ssh -p [port] [user]@[remote_host] # connecting to a remote machine via SSH
    • Passwordless login
      ssh-keygen -t rsa # RSA key generation for passwordless login
      ssh-copy-id -i ~/.ssh/id_rsa [user]@[remote_host] # copying a key to a remote machine
    • Config files
      /etc/ssh/sshd_config # ssh server global config
      ~/.ssh/config # per-user ssh client config
      ~/.ssh/authorized_keys # file with saved public keys
πŸ”— References
  1. πŸ“„ What the hell is SSH?
  2. πŸ“Ί Learn SSH In 6 Minutes - Beginners Guide to SSH Tutorial – YouTube
  3. πŸ“Ί SSH Crash Course | With Some DevOps – YouTube
  4. πŸ“„ SSH config file for OpenSSH client
  5. πŸ“„ Awesome SSH – GitHub
  • Network utils

    For Linux there are many built-in and third-party utilities to help you configure your network, analyze it and fix possible problems.

    • Simple utils
      ip address # show info about IPv4 and IPv6 addresses of your devices
      ip monitor # real time monitor the state of devices
      ifconfig # configure the network adapter and IP protocol settings
      traceroute <host> # show the route taken by packets to reach the host
      tracepath <host> # trace the path to a host, discovering the MTU along the way
      ping <host> # check connectivity to host
      ss -at # show all TCP sockets (-lt for listening sockets only)
      dig <host> # show info about the DNS name server
      host <host | ip-address> # show the IP address of a specified domain
      mtr <host | ip-address> # combination of ping and traceroute utilities
      nslookup # query Internet name servers interactively
      whois <host> # show info about domain registration
      ifplugstatus # detect the link status of a local Linux ethernet device
      iftop # show bandwidth usage
      ethtool <device name> # show details about your ethernet device
      nmap # tool to explore and audit network security
      bmon # bandwidth monitor and rate estimator
      firewalld # add, configure and remove firewall rules
      iperf # perform network performance measurement and tuning
      speedtest-cli # check your network download/upload speed
      wget <link> # download files from the Internet
    • tcpdump

      A console utility that allows you to intercept and analyze all network traffic passing through your computer.

    • netcat

      A utility for reading from and writing to network connections using TCP or UDP. It supports port scanning, file transfer and port listening, and, like any listening server, it can even be used as a backdoor.

    • iptables

      A user-space utility that lets you configure the IP packet filter rules of the Linux kernel firewall, implemented as different Netfilter modules. The filters are organized in tables, which contain chains of rules for how to treat network traffic packets.

    • curl

      Command-line tool for transferring data using various network protocols.

πŸ”— References
  1. πŸ“„ 21 Basic Linux Networking Commands You Should Know
  2. πŸ“„ Using tcpdump Command in Linux to Analyze Network
  3. πŸ“Ί tcpdump - Traffic Capture & Analysis – YouTube
  4. πŸ“Ί tcpdumping Node.js server – YouTube
  5. πŸ“„ Beginner’s guide to Netcat for hackers
  6. πŸ“„ Iptables Tutorial
  7. πŸ“„ An intro to cURL: The basics of the transfer tool
  8. πŸ“Ί Basic cURL Tutorial – YouTube
  9. πŸ“Ί Using curl better - tutorial by curl creator Daniel Stenberg – YouTube
  10. πŸ“„ Awesome console services – GitHub
  • Task scheduler

    cron

    Schedulers allow you to flexibly manage the delayed running of commands and scripts. Linux has a built-in cron scheduler that can be used to easily perform necessary actions at certain intervals.

    • Main commands
      crontab -e # edit the crontab file of the current user
      crontab -l # output the contents of the current schedule file
      crontab -r # deleting the current schedule file
    • Files and directories
      /etc/crontab # base config
      /etc/cron.d/ # a dir with crontab files used to manage the entire system
      
       # dirs where you can store scripts that runs:
      /etc/cron.daily/ # every day
      /etc/cron.weekly/ # every week
      /etc/cron.monthly/ # every month
πŸ”— References
  1. πŸ“„ How to schedule and manage tasks using crontab – dev.to
  2. πŸ“Ί Cron Jobs For Beginners | Linux Task Scheduling – YouTube
  3. πŸ“„ How to Check Crontab logs in Linux
  • System logs

    Log files are special text files that contain all information about the operation of a computer, program, or user. They are especially useful when bugs and errors occur in the operation of a program or server. It is recommended to periodically review log files, even if nothing suspicious happens.

    • Main log files
      /var/log/syslog or /var/log/messages # information about the kernel,
      # various services detected, devices, network interfaces, etc.
      /var/log/auth.log or /var/log/secure # user authorization information
      /var/log/faillog # failed login attempts
      /var/log/dmesg # information about device drivers
      /var/log/boot.log # operating system boot information
      /var/log/cron # cron task scheduler report
    • lnav utility

      Designed for easy viewing of log files (highlighting, reading different formats, searching, etc.)

    • Log rotation with logrotate

      Allows you to configure automatic deletion (cleaning) of log files so as not to clog memory.

    • The journald daemon

      Collects data from all available sources and stores it in a binary format for convenient and dynamic control.

πŸ”— References
  1. πŸ“Ί Linux Crash Course - Understanding Logging – YouTube
  2. πŸ“Ί Linux Monitoring and Logging – YouTube
  3. πŸ“„ 3 ways to watch logs in real time in Linux
  4. πŸ“„ Analyzing logs in Linux with journalctl command
  5. πŸ“„ Linux File Structure Explained
  • Main issues with Linux

    • Software installation and package management issues
    • Problems with drivers

      All free Linux drivers are built right into its kernel. Therefore, everything should work "out of the box" after installing the system (problems may occur with brand new hardware which has just been released on the market). Drivers whose source code is closed are considered proprietary and are not included in the kernel but are installed manually (like Nvidia graphics drivers).

    • File system issues
      • Check disk space availability using the df command and ensure that critical partitions are not full.
      • Use the fsck command to check and repair file system inconsistencies.
      • In case of data loss or accidental deletion, utilize data recovery tools like extundelete or testdisk.
    • Performance and resource management
      • Check system resource usage, including CPU, memory, and disk space, using free, df, or du commands.
      • Identify resource-intensive processes using tools like top, htop, or systemd-cgtop.
      • Disable unnecessary startup services or background processes to improve performance.
    • Network connectivity issues
    • Problems with kernel

      A kernel panic can occur, for example, due to an error when mounting the root file system. The best remedy here is the skill of reading logs to find the problem (the dmesg command).

πŸ”— References
  1. πŸ“Ί Linux Drivers Explained – YouTube
  2. πŸ“Ί How Do Linux Kernel Drivers Work? – YouTube

General knowledge

  • Numeral systems

    A numeral system is a set of symbols and rules for denoting numbers. In computer science, it is customary to distinguish four main number systems: binary, octal, decimal, and hexadecimal. This is due primarily to their use in different areas of programming.

    • Binary number

      The most important system for computing technology. Its use is justified by the fact that the logic of the processor is based on only two states (on/off, open/closed, true/false, yes/no, high/low).

    Binary

    • Octal

      It is used e.g. in Linux systems to grant access rights.

    Octal

    • Decimal

      A system that is easy to understand for most people.

    • Hexadecimal

      The letters A, B, C, D, E, F are additionally used for recording. It is widely used in low-level programming and computer documentation because the minimum addressable memory unit is an 8-bit byte, the values of which are conveniently written in two hexadecimal digits.

    Hex

    • Translation between different number systems

      You can try an online converter for a better understanding.
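Translation between the four systems can also be sketched with Python's built-in conversion functions:

```python
# A binary literal: 0b101010 is 42 in decimal
n = 0b101010

# Convert a decimal number into strings in other bases
assert bin(42) == "0b101010"   # binary
assert oct(42) == "0o52"       # octal
assert hex(42) == "0x2a"       # hexadecimal

# Parse strings in a given base back into an integer
assert int("101010", 2) == 42
assert int("52", 8) == 42
assert int("2a", 16) == 42

print(n)
```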

πŸ”— References
  1. πŸ“Ί Number Systems Introduction - Decimal, Binary, Octal & Hexadecimal – YouTube
  2. πŸ“„ Number System in Maths – GeeksForGeeks
  • Logical connective

    Logical connectives are widely used in programming to handle boolean values (true/false or 1/0). The result of a boolean expression is itself a value of boolean type.

    AND

    | a | b | a AND b |
    |---|---|---------|
    | 0 | 0 | 0 |
    | 0 | 1 | 0 |
    | 1 | 0 | 0 |
    | 1 | 1 | 1 |

    OR

    | a | b | a OR b |
    |---|---|--------|
    | 0 | 0 | 0 |
    | 0 | 1 | 1 |
    | 1 | 0 | 1 |
    | 1 | 1 | 1 |

    XOR

    | a | b | a XOR b |
    |---|---|---------|
    | 0 | 0 | 0 |
    | 0 | 1 | 1 |
    | 1 | 0 | 1 |
    | 1 | 1 | 0 |
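The truth tables above can be reproduced with Python's boolean and bitwise operators (a minimal sketch; `^` is XOR on integers):

```python
# Print each row of the AND / OR / XOR truth tables
for a in (0, 1):
    for b in (0, 1):
        print(a, b, a and b, a or b, a ^ b)

# XOR is true exactly when the operands differ
assert (1 ^ 0) == 1
assert (1 ^ 1) == 0
```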
πŸ”— References
  1. πŸ“Ί Logical Operators βˆ’ Negation, Conjunction & Disjunction – YouTube
  2. πŸ“Ί Logical Operators βˆ’ Exclusive OR – YouTube
  • Data structures

    Data structures are containers in which data is stored according to certain rules. Depending on these rules, the data structure will be effective in some tasks and ineffective in others. Therefore, it is necessary to understand when and where to use this or that structure.

    • Array

      A data structure for storing elements of the same type, where each element is accessible by its sequential index.

    Array

    • Linked list

      A data structure where all elements, in addition to the data, contain references to the next and/or previous element. There are 3 varieties:

      • A singly linked list is a list where each element stores a link to the next element only (one direction).
      • A doubly linked list is a list where the items contain links to both the next item and the previous one (two directions).
      • A circular linked list is a variant of the doubly linked list in which the last element points to the first and the first to the last.

    Linked list

    • Stack

      Structure where data storage works on the principle of last in - first out (LIFO).

    Stack

    • Queue

      Structure where data storage is based on the principle of first in - first out (FIFO).

    Queue

    • Hash table

      In other words, an associative array. Each element is accessed by a corresponding key; the element's storage location is computed from the key by a hash function.

    Hash Table

    • Tree

      A structure with a hierarchical model: a set of linked elements (nodes), where each node can have child nodes.

    Tree

    • Heap

      Similar to a tree, but in a heap the item with the largest key is the root node (max-heap). It may also be the other way around, with the smallest key at the root - then it is a min-heap.

    Heap

    • Graph

      A structure that is designed to work with a large number of links.

    Graph
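Several of the structures above come ready-made in most languages. A minimal Python sketch of a stack, a queue and a hash table:

```python
from collections import deque

# Stack: last in, first out (LIFO)
stack = []
stack.append(1)
stack.append(2)
stack.append(3)
assert stack.pop() == 3        # the most recently pushed item leaves first

# Queue: first in, first out (FIFO)
queue = deque()
queue.append(1)
queue.append(2)
queue.append(3)
assert queue.popleft() == 1    # the oldest item leaves first

# Hash table: key -> value lookup, O(1) on average
ages = {"alice": 30, "bob": 25}
assert ages["bob"] == 25
```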

πŸ”— References
  1. πŸ“Ί 10 Key Data Structures We Use Every Day – YouTube
  2. πŸ“Ί CS50 2022 - Lecture about Data Structures – YouTube
  3. πŸ“Ί Data Structures Easy to Advanced Course – YouTube
  4. πŸ“„ Free courses to learn data structures and algorithms in depth – freeCodeCamp
  5. πŸ“„ Data Structures: collection of topics – GeeksForGeeks
  6. πŸ“„ JavaScript Data Structures and Algorithms – GitHub
  7. πŸ“„ Go Data Structures – GitHub
  • Basic algorithms

    Algorithms refer to sets of sequential instructions (steps) that lead to the solution of a given problem. Throughout human history, a huge number of algorithms have been invented to solve certain problems in the most efficient way. Accordingly, the correct choice of algorithms in programming will allow you to create the fastest and least resource-intensive solutions.

    There is a very good book about algorithms for beginners – Grokking algorithms. You can start learning a programming language in parallel with reading it.

    • Binary search

      A highly efficient search algorithm for sorted lists: it halves the search range at every step, giving O(log n) time.

    • Selection sort

      At each step of the algorithm, the minimum element is searched for and then swapped with the current iteration element.

    • Recursion

      A technique where a function calls itself on a smaller subproblem until it reaches a base case. Recursion-based solutions can look very elegant, but deep recursion consumes stack space and can lead to stack overflow, so it should be used with care.

    • Bubble sort

      At each iteration neighboring elements are sequentially compared, and if the order of the pair is wrong, the elements are swapped.

    • Quicksort

      An efficient divide-and-conquer sorting algorithm: it partitions the list around a pivot element and recursively sorts the parts.

    • Breadth-first search

      Finds the shortest paths (by number of edges) from a given vertex to all other vertices of an unweighted graph.

    • Dijkstra's algorithm

      Finds the shortest paths, and their lengths, from a given vertex to all other vertices of a weighted graph with non-negative edge weights.

    • Greedy algorithm

      An algorithm that at each step makes locally the best choice in the hope that the final solution will be optimal.
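The first of the algorithms above, binary search, can be sketched in a few lines of Python:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # halve the search range each step
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1           # target is in the right half
        else:
            hi = mid - 1           # target is in the left half
    return -1

assert binary_search([1, 3, 5, 7, 9, 11], 7) == 3
assert binary_search([1, 3, 5, 7, 9, 11], 4) == -1
```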

πŸ”— References
  1. πŸ“„ Code for the book Grokking Algorithms – GitHub
  2. πŸ“Ί Algorithms and Data Structures Tutorial – YouTube
  3. πŸ“„ Largest open-source algorithm library
  4. πŸ“Ί Sorting Algorithms Explained Visually – YouTube
  • Algorithm complexity

    BigO

    In the world of programming there is a special notation, Big O. It describes how the complexity of an algorithm grows with the amount of input data. Big O estimates how many actions (steps/iterations) it takes to execute the algorithm, always showing the worst-case scenario.

    • Main types of complexity
      • Constant O(1) – the fastest.
      • Logarithmic O(log n)
      • Linear O(n)
      • Linearithmic O(n log n)
      • Quadratic O(n^2)
      • Exponential O(2^n)
      • Factorial O(n!) – the slowest.
    • Time complexity

      When you know in advance on which machine the algorithm will be executed, you can measure the execution time of the algorithm. Again, on very good hardware the execution time of the algorithm can be quite acceptable, but the same algorithm on weaker hardware can run for hundreds of milliseconds or even a few seconds. Such delays will be very noticeable if your application handles user requests over the network.

    • Space complexity

      In addition to time, you need to consider how much memory an algorithm uses. This is important when you are working with limited memory resources.
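A small Python illustration of how complexity choices show up in practice: a membership test on a list is O(n) (it scans element by element), while the same test on a set is O(1) on average (a single hash lookup).

```python
# Build the same data in two structures with different lookup costs
data_list = list(range(100_000))
data_set = set(data_list)

needle = 99_999
assert needle in data_list   # O(n): walks the list element by element
assert needle in data_set    # O(1) on average: hash lookup
```

For a one-off lookup the difference is negligible, but inside a loop over many lookups the O(n) version dominates the running time.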

πŸ”— References
  1. πŸ“„ Big O Algorithm Complexity cheatsheet
  2. πŸ“Ί Big O Notation - Full Course – YouTube
  • Data storage formats

    Different file formats can be used to store and transfer data over the network. Text formats are human-readable, which makes them suitable for configuration files, for example. But transferring data in text formats over the network is not always rational, because they are larger than equivalent binary representations.
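The size difference can be sketched in Python with the standard json and struct modules: the same four integers serialized as human-readable JSON text versus packed as raw 32-bit binary values.

```python
import json
import struct

values = (1, 2, 3, 4)

# Text form: human-readable JSON
text = json.dumps({"values": list(values)})

# Binary form: four little-endian 32-bit integers packed back to back
binary = struct.pack("<4i", *values)

assert len(binary) == 16               # 4 ints * 4 bytes each
assert len(text) > len(binary)         # the text form is larger
assert struct.unpack("<4i", binary) == values
```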

    • Text formats

    • Binary formats

    • Image formats

      • JPEG (Joint Photographic Experts Group)

        It is best suited for photographs and complex images with a wide range of colors. JPEG images can achieve high compression ratios while maintaining good image quality, but repeated editing and saving can result in loss of image fidelity.

      • PNG (Portable Network Graphics)

        It is a lossless compression format that supports transparency. It is commonly used for images with sharp edges, logos, icons, and images that require transparency. PNG images can have a higher file size compared to JPEG, but they retain excellent quality without degradation during repeated saves.

      • GIF (Graphics Interchange Format)

        Used for simple animations and low-resolution images with limited colors. It supports transparency and can be animated by displaying a sequence of frames.

      • SVG (Scalable Vector Graphics)

        XML-based vector image format defined by mathematical equations rather than pixels. SVG images can be scaled to any size without losing quality and are well-suited for logos, icons, and graphical elements.

      • WebP

        Modern image format developed by Google. It supports both lossy and lossless compression, providing good image quality with smaller file sizes compared to JPEG and PNG. WebP images are optimized for web use and can include transparency and animation.

    • Video formats

      • MP4 (MPEG-4 Part 14)

        Widely used video format that supports high-quality video compression, making it suitable for streaming and storing videos. MP4 files can contain both video and audio.

      • AVI (Audio Video Interleave)

        Is a multimedia container format developed by Microsoft. It can store audio and video data in a single file, allowing for synchronized playback. However, they tend to have larger file sizes compared to more modern formats.

      • MOV (QuickTime Movie)

        Is a video format developed by Apple for use with their QuickTime media player. It is widely used with Mac and iOS devices. MOV files can contain both video and audio, and they offer good compression and quality, making them suitable for editing and professional use.

      • WEBM

        Best for videos embedded on your personal or business website. It is lightweight, loads quickly and streams easily.

    • Audio formats

      • MP3 (MPEG-1 Audio Layer 3)

        The most popular audio format known for its high compression and small file sizes. It achieves this by removing some of the audio data that may be less perceptible to the human ear. Suitable for music storage, streaming, and sharing.

      • WAV (Waveform Audio File Format)

        Is an uncompressed audio format that stores audio data in a lossless manner, resulting in high-quality sound reproduction. WAV files are commonly used in professional audio production and editing due to their accuracy and fidelity. However, they tend to have larger file sizes compared to compressed formats.

      • AAC (Advanced Audio Coding)

        Is a widely used audio format known for its efficient compression and good sound quality. It offers better sound reproduction at lower bit rates compared to MP3. AAC files are commonly used for streaming music, online radio, and mobile devices, as they deliver good audio quality while conserving bandwidth and storage.

πŸ”— References
  1. πŸ“Ί Data Formats: XML, JSON, and YAML – YouTube
  2. πŸ“Ί Serialization formats: JSON and Protobuf – YouTube
  3. πŸ“Ί Protocol Buffers Crash Course – YouTube
  4. πŸ“Ί Explaining Image File Formats – YouTube
  5. πŸ“Ί What's the difference between a JPEG, PNG, GIF, etc...? – YouTube
  • Text encodings

    Computers work only with numbers, or more precisely, only with 0 and 1. It is already clear how to convert numbers from different number systems to binary. But you can't do that with text. That's why special tables called encodings were invented, in which text characters are assigned numeric equivalents.

    • ASCII (American standard code for information interchange)

      The simplest encoding created specifically for the American alphabet. Consists of 128 characters.

    • Unicode

      This is an international character table that, in addition to the English alphabet, contains the alphabets of almost all countries. It can hold more than a million different characters (the table is currently incomplete).

    • UTF-8 (Unicode Transformation Format)

      UTF-8 is a variable-length encoding that can represent any Unicode character using one to four bytes.

    • UTF-16

      Its main difference from UTF-8 is that its structural unit is not one byte but two. That is, in UTF-16 any Unicode character is encoded with either two or four bytes.
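A small Python sketch of these encodings in action: ASCII characters take one byte in UTF-8, other characters take more, and in UTF-16 every character below takes two bytes.

```python
text = "hi €"   # three ASCII characters plus the euro sign (U+20AC)

utf8 = text.encode("utf-8")
utf16 = text.encode("utf-16-le")

# In UTF-8, ASCII characters cost 1 byte each, the euro sign costs 3
assert len("hi ".encode("utf-8")) == 3
assert len("€".encode("utf-8")) == 3

# In UTF-16, each of these characters costs 2 bytes
assert len(utf16) == 2 * len(text)

# Decoding restores the original text exactly
assert utf8.decode("utf-8") == text
```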

πŸ”— References
  1. πŸ“Ί Unicode, in friendly terms: ASCII, UTF-8 and more – YouTube
  2. πŸ“„ Understanding the ASCII Table
  3. πŸ“Ί Unicode Encoding! UTF-32, UCS-2, UTF-16, & UTF-8! – YouTube

Programming Language

At this stage you have to choose one programming language to study. There is plenty of information on various languages in the Internet (books, courses, thematic sites, etc.), so you should have no problem finding information.

Below is a list of languages that, in my opinion, are a good fit for backend development (⚠️ others, including people more competent in this matter, may disagree).

  • Python

    A very popular language with a wide range of applications. Easy to learn due to its simple syntax.

  • JavaScript

    No less popular, and practically the only language for full-fledged web development. Thanks to the Node.js platform, in recent years it has been gaining popularity in backend development as well.

  • Go

    A language created at Google specifically for high-load server development. Minimalistic syntax, high performance and a rich standard library.

  • Kotlin

    A kind of modern version of Java. Simpler and more concise syntax, better type-safety, built-in tools for multithreading. One of the best choices for Android development.

Find a good book or online tutorial in English in this repository. There is a large collection for different languages and frameworks.

Look for a special awesome repository - a resource that contains a huge number of useful links to materials for your language (libraries, cheat sheets, blogs and other various resources).

  • Classification of programming languages

    There are many programming languages. They are all created for a reason. Some languages may be very specific and used only for certain purposes. Also, different languages may use different approaches to writing programs. They may even run differently on a computer. In general, there are many different classifications, which would be useful to understand.

    • Depending on language level
      • Low level languages

        As close as possible to machine code: complex to write, but maximally performant. As a rule, low-level languages provide access to all of the computer's resources.

      • High-level languages

        They have a fairly high level of abstraction, which makes them easy to write and easy to use. As a rule, they are safer because they do not provide access to all of the computer's resources.

    • Depending on implementation
      • Compilation

        Allows you to convert the source code of a program to an executable file.

      • Interpretation

        The source code of a program is translated and immediately executed (interpreted) by a special interpreter program.

      • Virtual machine

        In this approach, the program is compiled not into machine code but into machine-independent low-level code called bytecode. This bytecode is then executed by a virtual machine.

    • Depending on the programming paradigm
      • Imperative

        Focuses on describing the steps to solve a problem through a sequence of statements or commands.

      • Declarative

        Focuses on describing what the program should do, rather than how it should do it. Examples of declarative languages include SQL and HTML.

      • Functional

        Based on the idea of treating computation as the evaluation of mathematical functions. It emphasizes immutability, avoiding side effects, and using higher-order functions. Examples of functional languages include Haskell, Lisp, and Clojure.

      • Object-Oriented

        Revolves around creating objects that contain both data and behavior, with the goal of modeling real-world concepts. Examples of object-oriented languages include Java, Python, and C++.

      • Concurrent

        Focused on handling multiple tasks or threads at the same time, and is used in systems that require high performance and responsiveness. Examples of concurrent languages include Go and Erlang.

πŸ”— References
  1. πŸ“„ Classifying Programming Languages
  2. πŸ“Ί What are the Types of Programming Languages? – YouTube
  3. πŸ“Ί Functional Programming in 40 Minutes – YouTube
  4. πŸ“Ί The Art of Functional Programming – YouTube
  • Language Basics

    By basics we mean the fundamental ideas present in every language.

    • Variables and constants

      Are names assigned to a memory location in the program to store some data.

    • Data types

      Define the kind of data that can be stored in a variable. The main data types are integers, floating-point numbers, characters, strings, and booleans.

    • Operators

      Used to perform operations on variables or values. Common operators include arithmetic operators, comparison operators, logical operators, and assignment operators.

    • Flow control

      Loops, if-else conditions, switch-case statements.

    • Functions

      Are blocks of code that can be called multiple times in a program. They allow for code reusability and modularization. Functions are an important concept for understanding the scope of variables.

    • Data structures

      Special containers in which data are stored according to certain rules. Main data structures are arrays, maps, trees, graphs.

    • Standard library

      This refers to the language's built-in features for manipulating data structures, working with the file system, network, cryptography, etc.

    • Error handling

      Used to handle unexpected events that can occur during program execution.

    • Regular expressions

      A powerful tool for working with strings. Be sure to familiarize yourself with it in your language, at least on a basic level.

    • Modules

      Writing the code of the whole program in one file is not at all convenient. It is much more readable to break it up into smaller modules and import them into the right places.

    • Package Manager

      Sooner or later, there will be a desire to use third-party libraries.
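
    To make these basics concrete, here is a compact (illustrative, not exhaustive) Python sketch touching most of them; all names and values are made up:

```python
import re  # regular expressions from the standard library

PI = 3.14159           # a constant (upper-case by convention)

def describe(name: str, age: int) -> str:
    """A function: a reusable block of code with its own local scope."""
    if age >= 18:                      # flow control: a condition
        status = "adult"
    else:
        status = "minor"
    return f"{name}: {status}"

people = {"Alex": 30, "Kim": 12}       # a data structure (map/dictionary)
lines = [describe(n, a) for n, a in people.items()]

try:                                   # error handling
    bad = 1 / 0
except ZeroDivisionError:
    bad = None

match = re.search(r"\d+", "agent 007") # regex: find a run of digits
```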

    After mastering the minimal base for writing the simplest programs, there is not much point in continuing to learn without having specific goals (without practice, everything will be forgotten). You need to think of/find something that you would like to create yourself (a game, a chatbot, a website, a mobile/desktop application, whatever). For inspiration, check out these repositories: Build your own x and Project based learning.

    At this point, the most productive part of learning begins: You just look for all kinds of information to implement your project. Your best friends are Google, YouTube, and Stack Overflow.

πŸ”— References
  1. πŸ“Ί CS50 2022 – Harvard University's course about programming – YouTube
  2. πŸ“Ί Harvard CS50’s Web Programming with Python and JavaScript – YouTube
  3. πŸ“„ Free Interactive Python Tutorial
  4. πŸ“Ί Harvard CS50’s Introduction to Programming with Python – YouTube
  5. πŸ“Ί Python Tutorial for Beginners – YouTube
  6. πŸ“„ Python cheatsheet – Learn X in Y minutes
  7. πŸ“„ Python cheatsheet – quickref.me
  8. πŸ“„ Free Interactive JavaScript Tutorial
  9. πŸ“Ί JavaScript Programming - Full Course – YouTube
  10. πŸ“„ The Modern JavaScript Tutorial
  11. πŸ“„ JavaScript cheatsheet – Learn X in Y minutes
  12. πŸ“„ JavaScript cheatsheet – quickref.me
  13. πŸ“„ Go Tour – learn most important features of the language
  14. πŸ“Ί Learn Go Programming - Golang Tutorial for Beginners – YouTube
  15. πŸ“„ Go cheatsheet – Learn X in Y minutes
  16. πŸ“„ Go cheatsheet – quickref.me
  17. πŸ“„ Learn Go by Examples
  18. πŸ“„ Get started with Kotlin
  19. πŸ“Ί Learn Kotlin Programming – Full Course for Beginners – YouTube
  20. πŸ“„ Kotlin cheatsheet – Learn X in Y minutes
  21. πŸ“„ Kotlin cheatsheet – devhints.io
  22. πŸ“„ Learn Regex step by step, from zero to advanced
  23. πŸ“„ Projectbook – The Great Big List of Software Project Ideas
  • Object-oriented programming

    OOP is one of the most successful and convenient approaches for modeling real-world things. This approach combines several very important principles which allow you to write modular, extensible, and loosely coupled code.

    • Understanding Classes

      A class can be understood as a custom data type (a kind of template) that describes the structure of the objects created from it. Classes can contain properties (named fields which store data of a particular type) and methods (functions that can access and manipulate those properties).

    • Understanding objects

      An object is a specific implementation of a class. If, for example, the name property with type string is described in a class, the object will have a specific value for that field, for example "Alex".

    • Inheritance principle

      Ability to create new classes that inherit properties and methods of their parents. This allows you to reuse code and create a hierarchy of classes.

    • Encapsulation principle

      Ability to hide certain properties/methods from external access, leaving only a simplified interface for interacting with the object.

    • Polymorphism principle

      The ability to implement the same method differently in descendant classes.

    • Composition over inheritance

      Often the principle of inheritance can complicate and confuse your program if you do not think carefully about how to build the future hierarchy. That is why there is an alternative (more flexible) approach called composition. In particular, Go language lacks classes and many OOP principles, but widely uses composition.

    • Dependency injection (DI)

      Dependency injection is a popular OOP pattern that allows objects to receive their dependencies (other objects) from the outside rather than creating them internally. It promotes loose coupling between classes, making code more modular, maintainable, and easier to test.
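
    The principles above can be sketched together in one short Python example (all class names here are invented for illustration):

```python
class Animal:
    """A class: a template describing properties and methods."""
    def __init__(self, name: str):
        self._name = name              # underscore: encapsulation by convention

    def speak(self) -> str:            # method meant to be overridden
        return "..."

class Dog(Animal):                     # inheritance: Dog is-an Animal
    def speak(self) -> str:            # polymorphism: same method, new behavior
        return f"{self._name} says woof"

class Logger:
    def log(self, msg: str) -> str:
        return f"[log] {msg}"

class Kennel:
    """Composition + dependency injection: the logger is passed in
    from the outside rather than created inside the class."""
    def __init__(self, logger: Logger):
        self._logger = logger
        self._dogs: list[Dog] = []

    def add(self, dog: Dog) -> str:
        self._dogs.append(dog)
        return self._logger.log(dog.speak())
```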

πŸ”— References
  1. πŸ“Ί Intro to Object Oriented Programming - Crash Course – YouTube
  2. πŸ“„ OOP Meaning – What is Object-Oriented Programming? – freeCodeCamp
  3. πŸ“Ί OOP in Python (CS50 lecture) – YouTube
  4. πŸ“„ OOP tutorial from Python docs
  5. πŸ“Ί OOP in JavaScript: Made Super Simple – YouTube
  6. πŸ“„ OOP in Go by examples
  7. πŸ“Ί Object Oriented Programming is not what I thought - Talk by Anjana Vakil – YouTube
  8. πŸ“Ί The Flaws of Inheritance (tradeoffs between Inheritance and Composition) – YouTube
  9. πŸ“Ί Dependency Injection, The Best Pattern – YouTube
  • Server development

    • Understand sockets

      A socket is an endpoint of a two-way communication link between two programs running over a network. You need to know how to create, connect, send, and receive data over sockets.

    • Running local TCP, UDP and HTTP servers

      These protocols are the most important, you need to understand the intricacies of working with each of them.

    • Serving static files

      You need to know how to host HTML pages, images, PDF documents, music/video files, etc.

    • Routing

      Creation of endpoints (URLs) which will call the appropriate handler on the server when accessed.

    • Processing requests

      As a rule, HTTP handlers have a special object which receives all information about the user's request (headers, method, request body, query parameters, and so on).

    • Processing responses

      Sending an appropriate message to a received request (HTTP status and code, response body, headers, etc.)

    • Error handling

      You should always be prepared for the possibility that something will go wrong: the user will send incorrect data, the database will not perform the operation, or an unexpected error will simply occur in the application. It is necessary for the server not to crash, but to send a response with information about the error.

    • Middleware

      An intermediate component between the application and the server. It is used for authentication, validation, caching, request logging, and so on.

    • Sending requests

      Often, within one application, you will need to access another application over the network. That's why it's important to be able to send HTTP requests using the built-in features of the language.

    • Template processor

      Is a special module that uses a more convenient syntax to generate HTML based on dynamic data.
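
    As a minimal illustration of routing, response processing, and error handling, here is a toy HTTP server using only Python's standard library (the route and port are arbitrary; a real service would normally use a framework such as Django, Express, Gin, or Ktor):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # routing: choose a handler based on the URL path
        if self.path == "/ping":
            self._reply(200, {"message": "pong"})
        else:
            # error handling: answer with a proper status instead of crashing
            self._reply(404, {"error": "not found"})

    def _reply(self, status: int, payload: dict) -> None:
        body = json.dumps(payload).encode()
        self.send_response(status)                      # HTTP status line
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                          # response body

    def log_message(self, fmt, *args):                  # silence default logging
        pass

# To run locally: HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```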

πŸ”— References
  1. πŸ“„ Learn Django – Python-based web framework
  2. πŸ“Ί Python Django 7 Hour Course – YouTube
  3. πŸ“„ A curated list of awesome things related to Django – GitHub
  4. πŸ“Ί Python Web Scraping for Beginners – YouTube
  5. πŸ“Ί Build servers in pure Node.js – YouTube
  6. πŸ“„ Node.js HTTP Server Examples – GitHub
  7. πŸ“„ Learn Express – web framework for Node.js
  8. πŸ“Ί Express.js 2022 Course – YouTube
  9. πŸ“„ A curated list of awesome Express.js resources – GitHub
  10. πŸ“„ How to build servers in Go
  11. πŸ“Ί Golang server development course – YouTube
  12. πŸ“„ Web services in Go – GitBook
  13. πŸ“„ List of libraries for working with network in Go – GitHub
  14. πŸ“„ Learn Ktor – web framework for Kotlin
  15. πŸ“Ί Ktor - REST API Tutorials – YouTube
  16. πŸ“„ Kotlin for server side
  • Asynchronous programming

    Asynchronous programming is an efficient way to write programs with a large number of I/O (input/output) operations. Such operations may include reading files, querying a database or a remote server, reading user input, and so on. In these cases the program spends a lot of time waiting for external resources to respond, and asynchronous programming allows it to perform other tasks while waiting.

    • Callback

      This is a function that is passed as an argument to another function and is intended to be called by that function at a later time. The purpose of a callback is to allow the calling function to continue executing while the called function performs a time-consuming or asynchronous task. Once the task is complete, the called function invokes the callback, passing it any necessary data as arguments.

    • Event-driven architecture (EDA)

      A popular approach to writing asynchronous programs. The logic of the program is to wait for certain events and process them as they arrive. This can be useful in web applications that need to handle a large number of concurrent connections, such as chat applications or real-time games.

    • Asynchronous in particular languages
      • In Python, asynchronous programming can be done using the asyncio module, which provides an event loop and coroutine-based API for concurrency. There are also other third-party libraries like Twisted and Tornado that provide asynchronous capabilities.
      • In JavaScript, asynchronous programming is commonly achieved through the use of promises, callbacks, async/await syntax and the event loop.
      • Go has built-in support for concurrency through goroutines and channels, which allow developers to write asynchronous code that can communicate and synchronize across multiple threads.
      • Kotlin provides coroutines, which are similar to JavaScript's async/await and Python's asyncio, and can be used with a variety of platforms and frameworks.
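
    A minimal Python sketch of the idea: two simulated I/O operations wait concurrently, so the total time is roughly the longest delay rather than the sum of both:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # stands in for a slow I/O call (database query, HTTP request, ...)
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    # gather() schedules both coroutines at once and waits for both
    return await asyncio.gather(fetch("a", 0.2), fetch("b", 0.2))

results = asyncio.run(main())   # ["a done", "b done"]
```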
πŸ”— References
  1. πŸ“Ί Synchronous vs Asynchronous Applications (Explained by Example) – YouTube
  2. πŸ“„ Async IO in Python: A Complete Walkthrough
  3. πŸ“„ Asynchronous Programming in JavaScript – Guide for Beginners – freeCodeCamp
  4. πŸ“„ A roadmap for asynchronous programming in JavaScript
  5. πŸ“Ί Master Go Programming With These Concurrency Patterns – YouTube
  6. πŸ“Ί Kotlin coroutines: new ways to do asynchronous programming – YouTube
  • Multitasking

    Computers today have processors with several physical and virtual cores, and on server machines their number can reach hundreds. It would be good to use all of these available resources to the fullest, for maximum application performance. That is why modern server development cannot do without multitasking and parallelism.

    • How it works

      Multitasking refers to the concurrent execution of multiple threads of control within a single program. A thread is a lightweight process that runs within the context of a process, and has its own stack, program counter, and register set. Multiple threads can share the resources of a single process, such as memory, files, and I/O devices. Each thread executes independently and can perform a different task or part of a task.

    • Multitasking types
      • Cooperative multitasking: each program or task voluntarily gives up control of the CPU to allow other programs or tasks to run. Each program or task is responsible for yielding control to other programs or tasks at appropriate times. This approach requires programs or tasks to be well-behaved and to avoid monopolizing the CPU. If a program or task does not yield control voluntarily, it can cause the entire system to become unresponsive. Cooperative multitasking was commonly used in early operating systems and is still used in some embedded systems or real-time operating systems.
      • Preemptive multitasking: operating system forcibly interrupts programs or tasks at regular intervals to allow other programs or tasks to run. The operating system is responsible for managing the CPU and ensuring that each program or task gets a fair share of CPU time. This approach is more robust than cooperative multitasking and can handle poorly behaved programs or tasks that do not yield control. Preemptive multitasking is used in modern operating systems, such as Windows, macOS, Linux, and Android.
    • Main problems and difficulties
      • Race conditions: When multiple threads access and modify shared data concurrently, race conditions can occur, resulting in unpredictable behavior or incorrect results.
      • Deadlocks: Occur when two or more threads are blocked waiting for resources that are held by other threads, so that none of them can proceed.
      • Debugging: Multitasking programs can be difficult to debug due to their complexity and non-deterministic behavior. You need to use advanced debugging tools and techniques, such as thread dumps, profilers, and logging, to diagnose and fix issues.
    • Synchronizing primitives

      Needed to securely exchange data between different threads.

      • Semaphore: It is essentially a counter that keeps track of the number of available resources and can block threads or processes that try to acquire more than the available resources.
      • Mutex: (short for mutual exclusion) allows only one thread or process to access the resource at a time, ensuring that there are no conflicts or race conditions.
      • Atomic operations: operations that are executed as a single, indivisible unit, without the possibility of interruption or interference by other threads or processes.
      • Condition variables: allows threads to wait for a specific condition to be true before continuing execution. It is often used in conjunction with a mutex to avoid busy waiting and improve efficiency.
    • Working with a particular language
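
    A small Python illustration of a mutex protecting a shared counter (the thread count and iteration numbers are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()   # a mutex: one thread in the critical section at a time

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:        # without the lock this read-modify-write could race
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 40_000; with a race it could end up lower
```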
πŸ”— References
  1. πŸ“Ί Multithreading Code - Computerphile – YouTube
  2. πŸ“Ί Threading vs multiprocessing in Python – YouTube
  3. πŸ“Ί When is NodeJS Single-Threaded and when is it Multi-Threaded? – YouTube
  4. πŸ“Ί How to use Multithreading with "worker threads" in Node.js? – YouTube
  5. πŸ“Ί Concurrency in Go – YouTube
  6. πŸ“Ί Kotlin coroutines – YouTube
  7. πŸ“„ Multithreading in practice – GitHub
  • Advanced Topics

    • Garbage collector

      A process that has made high-level languages very popular - it allows the programmer not to worry about memory allocation and freeing. Be sure to familiarize yourself with the subtleties of its operation in your own language.

    • Debugger

      Handy tool for analyzing program code and identifying errors.

    • Compilers, interpreters and virtual machines

      Depending on what your language uses, you can explore in detail the process of converting your code to machine code (a set of zeros and ones). As a rule, compilation/interpretation/virtualization processes consist of several steps. By understanding them you can optimize your programs for faster builds and efficient execution.

πŸ”— References
  1. πŸ“Ί Garbage Collection (Mark & Sweep) – YouTube
  2. πŸ“Ί How to Use a Debugger - Debugger Tutorial – YouTube
  3. πŸ“„ Understanding The Python Interpreter – medium
  4. πŸ“„ How Node.js works - JavaScript runtime environment – freeCodeCamp
  5. πŸ“„ How Compilers Work
  6. πŸ“„ The Magic Behind Compilers – medium
  7. πŸ“„ Overview of the compiler in Go – medium
πŸ”— References
  1. πŸ“„ KISS, SOLID, YAGNI And Other Fun Acronyms
  2. πŸ“Ί Naming Things in Code – YouTube
  3. πŸ“Ί Why You Shouldn't Nest Your Code – YouTube
  4. πŸ“Ί Why you shouldn't write comments in your code – YouTube
  5. πŸ“Ί How principled coders outperform the competition – YouTube
  6. πŸ“Ί Uncle Bob SOLID principles – YouTube
  7. πŸ“„ SOLID Principles explained in Python – medium
  8. πŸ“„ SOLID Principles in JavaScript – freeCodeCamp
  9. πŸ“„ Google style guides – GitHub

Databases

Databases (DB) – a set of data that are organized according to certain rules (for example, a library is a database for books).

Database management system (DBMS) is a software that allows you to create a database and manipulate it conveniently (perform various operations on the data). An example of a DBMS is a librarian. He can easily and efficiently work with the books in the library: give out requested books, take them back, add new ones, etc.

  • Database classification

    Databases can differ significantly from each other and therefore have different areas of application. To understand which database is suitable for a particular task, it is necessary to understand the classification.

    • Relational DB

      These are repositories where data is organized as a set of tables (with rows and columns). Interactions between data are organized on the basis of links between these tables. This type of database provides fast and efficient access to structured information.

    • Object-oriented DB

      Here data is represented as objects with a set of attributes and methods. Suitable for cases where you need high-performance processing of data with a complex structure.

    • Distributed DB

      Composed of several parts located on different computers (servers). Such databases may completely exclude information duplication, or fully duplicate it in each distributed copy (as a blockchain does, for example).

    • NoSQL

      Stores and processes unstructured or weakly structured data. This type of database is subdivided into subtypes: key-value stores, document-oriented, columnar, and graph databases.

πŸ”— References
  1. πŸ“„ Comparing database types: how database types evolved to meet different needs
  2. πŸ“„ SQL vs NoSQL Database – A Complete Comparison
  3. πŸ“Ί 7 Database Paradigms – YouTube
  • Relational database

    The most popular relational databases: MySQL, PostgreSQL, MariaDB, Oracle. A special language SQL (Structured Query Language) is used to work with these databases. It is quite simple and intuitive.

    • SQL basics

      Learn the basic cycle of creating/receiving/updating/deleting data. Everything else as needed.

    • Merging tables
      • Querying data from multiple tables

        Operator JOIN; Combinations with other operators; JOIN types.

      • Relationships between tables

        References from one table to another; foreign keys.

    • Subquery Expressions

      Query inside another SQL query.

    • Indexes

      Data structure that allows you to quickly determine the position of the data of interest in the database.

    • Transactions

      Sequences of commands that must be executed completely, or not executed at all.

      • Command START TRANSACTION
      • Commands COMMIT and ROLLBACK
    • Working with a programming language

      To do this, you need to install a database driver (adapter) for your language. (For example psycopg2 for Python, node-postgres for Node.js, pgx for Go)

    • ORM (Object-Relational Mapping) libraries

      Writing SQL queries in code is difficult. It is easy to make mistakes and typos in them, because they are just strings that are not validated in any way. To solve this problem there are so-called ORM libraries, which let you execute SQL queries as if you were simply calling methods on an object. Unfortunately, even with them not everything is smooth: the queries these libraries generate "under the hood" are not always optimal in terms of performance, so be prepared to work with plain SQL as well as with an ORM.
      Popular ORMs: SQLAlchemy for Python, Prisma for Node.js, GORM for Go.

    • Optimization and performance
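
    Several of these ideas can be shown compactly with Python's built-in sqlite3 module standing in for a full DBMS (the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id      INTEGER PRIMARY KEY,
        item    TEXT NOT NULL,
        user_id INTEGER REFERENCES users(id)         -- foreign key
    );
    CREATE INDEX idx_orders_user ON orders(user_id); -- index for fast lookups
""")

# transaction: both inserts are committed together or not at all
with conn:
    conn.execute("INSERT INTO users (id, name) VALUES (1, 'Alex')")
    conn.execute("INSERT INTO orders (item, user_id) VALUES ('book', 1)")

# JOIN: combine rows from the two related tables
rows = conn.execute("""
    SELECT u.name, o.item
    FROM orders o
    JOIN users u ON u.id = o.user_id
""").fetchall()                                      # [('Alex', 'book')]
```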
πŸ”— References
  1. πŸ“Ί SQL Crash Course - Beginner to Intermediate – YouTube
  2. πŸ“Ί SQL Tutorial for Beginners (and Technical Interview Questions Solved) – YouTube
  3. πŸ“Ί SQL Tutorial - Full Database Course for Beginners – YouTube
  4. πŸ“Ί MySQL - The Basics. Learn SQL in 23 Easy Steps – YouTube
  5. πŸ“„ MySQL command-line client commands
  6. πŸ“Ί Learn PostgreSQL Tutorial - Full Course for Beginners – YouTube
  7. πŸ“„ Postgres Cheat Sheet
  8. πŸ“Ί Database Indexing Explained (with PostgreSQL) – YouTube
  9. πŸ“„ SQL Indexing and Tuning e-Book
  10. πŸ“Ί What is a Database transaction? – YouTube
  11. πŸ“Ί SQL Server Performance Essentials – Full Course – YouTube
  12. πŸ“Ί ORM: The Good, the Great, and the Ugly – YouTube
  13. πŸ“Ί I Would Never Use an ORM, by Matteo Collina – YouTube
  14. πŸ“„ Awesome SQL – GitHub
  • MongoDB

    MongoDB is a popular NoSQL database that stores data in flexible, JSON-like documents, allowing for dynamic and scalable data structures. It offers high performance, horizontal scalability, and a powerful query language, making it a preferred choice for modern web applications.

    • Basic commands

      Learn the basic cycle of creating/reading/updating/deleting data. Everything else as needed.

    • Aggregations

      MongoDB provides a powerful aggregation framework for performing complex queries and calculations. Learn how to use aggregation pipelines.

    • Working with Indexes

      Indexing is an important concept in MongoDB for improving performance.

    • Working with a programming language

      For this you need to install MongoDB driver for your language.

    • Best practices

      Learn best practices for schema design, indexing, and query optimization. Read up on these to ensure your applications are performant and scalable.

    • Scaling

      Learn about scaling to handle large datasets and high traffic. MongoDB provides sharding and replica sets for scaling horizontally and vertically.
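
    For a feel of the query style, here is an illustrative mongosh session against a hypothetical orders collection (names and values are invented; it requires a running MongoDB server):

```javascript
// mongosh session (hypothetical "orders" collection)
db.orders.insertMany([
  { item: "book", qty: 2, price: 10 },
  { item: "pen",  qty: 5, price: 1  },
  { item: "book", qty: 1, price: 10 },
])

// aggregation pipeline: group orders by item and sum the revenue
db.orders.aggregate([
  { $match: { qty: { $gt: 0 } } },          // filter stage
  { $group: { _id: "$item",
              revenue: { $sum: { $multiply: ["$qty", "$price"] } } } },
  { $sort: { revenue: -1 } },
])

// index on a frequently queried field
db.orders.createIndex({ item: 1 })
```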

πŸ”— References
  1. πŸ“Ί MongoDB in 100 Seconds – YouTube
  2. πŸ“Ί MongoDB Crash Course 2022 – YouTube
  3. πŸ“„ MongoDB β€” Complete Guide
  4. πŸ“„ MongoDB Cheat Sheet
  5. πŸ“Ί MongoDB Tutorial For Beginners (playlist) – YouTube
  6. πŸ“„ Awesome MongoDB – GitHub
  • Redis

    Redis is a fast data storage working with key-value structures. It can be used as a database, cache, message broker or queue.

    • Data types

      String / Bitmap / Bitfield / List / Set / Hash / Sorted set / Geospatial / HyperLogLog / Stream

    • Basic operations
      SET key "value" # setting the key with the value "value"
      GET key # retrieve a value from the specified key
      SETNX key "data" # set the value only if the key does not already exist
      MSET key1 "1" key2 "2" key3 "3" # setting multiple keys
      MGET key1 key2 key3 # getting values for several keys at once
      DEL key # remove the key-value pair
      INCR someNumber # increase the numeric value by 1
      DECR someNumber # decrease the numeric value by 1
      EXPIRE key 1000 # set a key life timer of 1000 seconds
      TTL key # get information about the lifetime of the key-value pair
          # -1 the key exists, but has no expiration date
          # -2 the key does not exist
          # <another number> key lifetime in seconds
      SETEX key 1000 "value" # consolidation of commands SET and EXPIRE
    • Transactions

      MULTI β€” start recording commands for the transaction.
      EXEC β€” execute the recorded commands.
      DISCARD β€” delete all recorded commands.
      WATCH β€” command that provides execution only if other clients have not changed the value of the variable. Otherwise EXEC will not execute the written commands.
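
      An illustrative redis-cli session (the key name and values are arbitrary):

```
> MULTI                  # start recording commands
OK
> SET balance 100
QUEUED                   # commands are queued, not executed yet
> DECRBY balance 30
QUEUED
> EXEC                   # run all queued commands atomically
1) OK
2) (integer) 70
```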

πŸ”— References
  1. πŸ“Ί Redis in 100 Seconds – YouTube
  2. πŸ“Ί Redis In-Memory Database Crash Course – YouTube
  3. πŸ“Ί Redis Course - In-Memory Database Tutorial – YouTube
  4. πŸ“Ί Redis Crash Course - Transactions – YouTube
  5. πŸ“Ί Python and Redis Tutorial - Caching API Responses – YouTube
  6. πŸ“Ί Top 5 Redis Use Cases – YouTube
  7. πŸ“„ How To Run Transactions in Redis – Digital Ocean
  8. πŸ“„ Redis cheatsheet – QuickRef
  9. πŸ“„ Awesome Redis – GitHub
  • ACID Requirements

    ACID is an acronym consisting of the names of the four main properties that guarantee the reliability of transactions in the database.

    • Atomicity

      Guarantees that the transaction will be executed completely or not executed at all.

    • Consistency

      Ensures that each successful transaction captures only valid results (any inconsistencies are excluded).

    • Isolation

      Guarantees that one transaction cannot affect the other in any way.

    • Durability

      Guarantees that the changes made by the transaction are saved.
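
    Atomicity is easy to demonstrate with Python's sqlite3 module: the failed transfer below is rolled back, leaving both (hypothetical) accounts untouched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

# atomicity: a failed transfer leaves both balances untouched
try:
    with conn:  # opens a transaction; rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE name = 'alice'")
        cur = conn.execute("SELECT balance FROM accounts WHERE name = 'alice'")
        if cur.fetchone()[0] < 0:
            raise ValueError("insufficient funds")   # abort the transaction
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE name = 'bob'")
except ValueError:
    pass

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
# balances == {'alice': 100, 'bob': 0}: the partial update was rolled back
```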

πŸ”— References
  1. πŸ“Ί ACID Transactions (Explained by Example) – YouTube
  2. πŸ“Ί Relational Database Atomicity Explained By Example – YouTube
  3. πŸ“Ί ACID Properties in DBMS With Examples | In-depth Explanation – YouTube
  4. πŸ“„ How SQLite Helps You Do ACID
  • Designing databases

    Database design is a very important topic that is often overlooked. A well-designed database will ensure long-term scalability and ease of data maintenance. There are several basic steps in database design:

    • Definition of entities

      An entity is an object, concept, or event that has its own set of attributes. For example, if you're designing a database for a library, entities might include books, authors, publishers, and borrowers.

    • Define the attributes to each entity

      Each entity has a set of specific attributes. For example, attributes of a book might include its title, author, ISBN, and publication date. Each attribute has a specific data type, be it a string, an integer, a boolean, and so on.

    • Add constraints

      Attribute values may have certain constraints. For example, a string attribute may be required to be unique or limited to a maximum number of characters.

    • Define relationships

      Entities can be linked to one another by one of three types of relationship: one-to-one, one-to-many, or many-to-many. For example, a book might have one or more authors, and an author might write one or more books. You can represent these relationships by creating a foreign key in one table that references the primary key in another table.

    • Normalization

      It is the process of splitting data into separate, related tables. Normalization eliminates data redundancy and thus avoids data-integrity violations when data changes.

    • Optimize for performance

      Create indexes on frequently queried columns, tune the database configuration, and optimize the queries that you use to access the data.
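
    The steps above can be sketched for the library example using Python's sqlite3 (the concrete columns and sample data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only if asked
conn.executescript("""
    -- entities: authors and books, with a one-to-many relationship
    CREATE TABLE authors (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE books (
        id        INTEGER PRIMARY KEY,
        isbn      TEXT NOT NULL UNIQUE,    -- constraint: unique attribute
        title     TEXT NOT NULL,
        author_id INTEGER NOT NULL REFERENCES authors(id)  -- relationship
    );
""")
conn.execute("INSERT INTO authors (id, name) VALUES (1, 'Some Author')")
conn.execute("INSERT INTO books (isbn, title, author_id) VALUES ('978-1-0000-0000-1', 'Some Book', 1)")

# the foreign-key constraint rejects a book whose author does not exist
try:
    conn.execute("INSERT INTO books (isbn, title, author_id) VALUES ('x', 'Ghost', 99)")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```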

πŸ”— References
  1. πŸ“Ί How to Create a Database Design From an Idea – YouTube
  2. πŸ“Ί Database Design Course - Learn how to design and plan a database for beginners – YouTube
  3. πŸ“Ί 7 Database Design Mistakes to Avoid (With Solutions) – YouTube
  4. πŸ“„ Dbdiagram – simple tool to draw ER diagrams

API development

API (Application Programming Interface) is an interface describing a certain set of rules by which different programs (applications, bots, websites...) can interact with each other. With API calls you can execute certain functions of a program without knowing how it works internally.

When developing server applications, different API formats can be used, depending on the tasks and requirements.

  • REST API

    REST (Representational State Transfer) is an architectural approach that describes a set of rules for organizing server application code so that systems can easily exchange data and the application can be easily scaled. When building a REST API, HTTP protocol methods are widely used.

    Basic rules for writing a good REST API:

    • Using HTTP methods

      As a rule, a single URL route is used to work on a particular data model (e.g. for users - /api/user). To perform different operations (get/create/edit/delete), this route must implement handlers for the corresponding HTTP methods (GET/POST/PUT/DELETE).

    • Use of plural names

      For example, the URL to retrieve a single user by id looks like this: /users/42, and to retrieve all users like this: /users.

    • Sending the appropriate HTTP response codes

      The most commonly used: 200, 201, 204, 304, 400, 401, 403, 404, 405, 410, 415, 422, 429.

    • Versioning

      Over time you may want or need to fundamentally change the way your REST API service works. To avoid breaking applications using the current version, you can leave it where it is and implement the new version over a different URL route, e.g. /api/v2.

πŸ”— References
  1. πŸ“„ What Is Restful API? – AWS
  2. πŸ“Ί What is REST API? – YouTube
  3. πŸ“Ί APIs for Beginners 2023 - How to use an API (Full Course) – YouTube
  4. πŸ“Ί Build Web APIs with Python – Django REST Framework Course – YouTube
  5. πŸ“Ί Build an API from Scratch with Node.js Express – YouTube
  6. πŸ“Ί Build REST API on Vanilla Node.js – YouTube
  7. πŸ“Ί Build a Rest API with GoLang – YouTube
  8. πŸ“Ί Spring Kotlin - Building a Rest API Tutorial – YouTube
  9. πŸ“„ REST API design full guide – GitHub
  10. πŸ“„ Awesome REST – GitHub
  • GraphQL

    GraphQL is a query language and server-side runtime for APIs that allows you to retrieve and modify data from a server using a single URL endpoint. It provides several benefits, including the ability to retrieve only the data you need (reducing traffic consumption), aggregation of data from multiple sources and a strict type system for describing data.

    • Schema and types

      Learn how to describe data using GraphQL schema and general types.

    • Queries and Mutations

      Queries are used to retrieve data from a server, while Mutations are used to modify (create, update or delete) data on a server.

    • Resolvers

      Resolvers are functions that determine how to retrieve the data for a particular field in the GraphQL schema.

    • Data sources

      Are places where you retrieve data from, such as databases or APIs. Data sources are connected to the GraphQL server through resolvers.

    • Performance optimization
    • Best Practices
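The resolver idea can be illustrated with a toy sketch in plain Python (this is not a real GraphQL library; the `DB`, `RESOLVERS`, and `execute` names are invented for illustration). The point is that the server returns only the fields the client asked for:

```python
# Toy illustration of GraphQL-style field resolution (not a real GraphQL library).
# The "query" is just a list of requested fields; each field is backed by a resolver.

DB = {"user": {"id": 1, "name": "Alice", "email": "alice@example.com"}}  # "data source"

RESOLVERS = {
    "id":    lambda root: root["id"],
    "name":  lambda root: root["name"],
    "email": lambda root: root["email"],
}

def execute(requested_fields):
    """Return only the fields the client asked for - the core GraphQL idea."""
    root = DB["user"]
    return {field: RESOLVERS[field](root) for field in requested_fields}

print(execute(["id", "name"]))  # {'id': 1, 'name': 'Alice'} - no over-fetching
```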
πŸ”— References
  1. πŸ“Ί What Is GraphQL? REST vs. GraphQL – YouTube
  2. πŸ“„ Why use GraphQL?
  3. πŸ“„ Learn GraphQL from zero to production
  4. πŸ“Ί Python with GraphQL tutorial – YouTube
  5. πŸ“Ί Modern GraphQL with Node.js Crash Course – YouTube
  6. πŸ“Ί GraphQL in Go - GQLGen Tutorial – YouTube
  7. πŸ“„ Awesome list of GraphQL – GitHub
  • WebSockets

    WebSockets is an advanced technology that allows you to open a persistent bidirectional network connection between the client and the server. With its API you can send a message to the server and receive a response without making an HTTP request, thereby implementing real-time communication.

    The basic idea is that you do not need to poll the server for new information: once the connection is established, the server itself pushes new data to connected clients as soon as that data is available. WebSockets are widely used to create chat rooms, online games, trading applications, etc.

    • Opening a web socket

      Sending an HTTP request with a specific set of headers: Connection: Upgrade, Upgrade: websocket, Sec-WebSocket-Key, Sec-WebSocket-Version.

    • Connection states

      CONNECTING, OPEN, CLOSING, CLOSED.

    • Events

      Open, Message, Error, Close.

    • Connection closing codes

      1000, 1001, 1006, 1009, 1011, etc.
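The opening handshake can be illustrated by how the server derives the `Sec-WebSocket-Accept` response header from the client's `Sec-WebSocket-Key`, as defined in RFC 6455 (the sample key/accept pair below is the one given in the RFC itself):

```python
# Server side of the WebSocket opening handshake (RFC 6455):
# Sec-WebSocket-Accept = base64(SHA-1(client_key + fixed GUID))
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # constant defined in RFC 6455

def websocket_accept(sec_websocket_key: str) -> str:
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# Sample key/accept pair taken from RFC 6455:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Returning this header (together with status 101 Switching Protocols) proves to the client that the server understood the WebSocket upgrade request.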

πŸ”— References
  1. πŸ“Ί A Beginner's Guide to WebSockets – YouTube
  2. πŸ“Ί WebSockets Crash Course - Handshake, Use-cases, Pros & Cons and more – YouTube
  3. πŸ“„ Introducing WebSockets - Bringing Sockets to the Web
  4. πŸ“Ί WebSockets with Python tutorial – YouTube
  5. πŸ“Ί WebSockets with Node.js tutorial – YouTube
  6. πŸ“Ί WebSockets with Go tutorial – YouTube
  7. πŸ“„ Awesome WebSockets – GitHub
  • RPC (Remote Procedure Call)

    RPC is essentially a function call to a remote server with a set of specific arguments, which returns a response, usually encoded in a certain format such as JSON or XML. There are several protocols that implement RPC.

    • XML-based protocols

      There are two main protocols: XML-RPC and SOAP (Simple Object Access Protocol).
      They are considered deprecated and are not recommended for new projects because they are heavyweight and complex compared to newer alternatives such as REST, GraphQL and more modern RPC protocols.

    • JSON-RPC

      A protocol with a very simple specification. All requests and responses are serialized in JSON format.

      • A request to the server includes: method - the name of the method to be invoked; params - an object or array of values to be passed as parameters to the defined method; id - an identifier used to match the response with the request.
      • A response includes: result - data returned by the invoked method; error - an error object, or null on success; id - the same as in the request.
    • gRPC

      RPC framework developed by Google. A service is defined using Protocol Buffers, a language-agnostic binary serialization format, from which client and server code can be generated for various programming languages.
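The JSON-RPC exchange described above can be sketched in a few lines of Python. The in-process `handle` function and its method table stand in for a real server; the request/response fields follow the shape described in the bullets:

```python
# A sketch of a JSON-RPC round trip handled in-process (no real server involved).
import json

METHODS = {"subtract": lambda a, b: a - b}  # illustrative method table

def handle(raw_request: str) -> str:
    req = json.loads(raw_request)
    try:
        result = METHODS[req["method"]](*req["params"])
        resp = {"jsonrpc": "2.0", "result": result, "error": None, "id": req["id"]}
    except Exception as exc:
        resp = {"jsonrpc": "2.0", "result": None,
                "error": {"code": -32603, "message": str(exc)}, "id": req.get("id")}
    return json.dumps(resp)

request = json.dumps({"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1})
print(handle(request))  # the response carries result 19 and the matching id 1
```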

πŸ”— References
  1. πŸ“Ί What is RPC? gRPC Introduction – YouTube
  2. πŸ“„ Learning gRPC with an Example
  3. πŸ“Ί gRPC Crash Course - Modes, Examples, Pros & Cons and more – YouTube
  4. πŸ“Ί This is why gRPC was invented – YouTube
  5. πŸ“Ί gRPC with Python - microservice complete tutorial – YouTube
  6. πŸ“Ί Implementing a gRPC client and server in Typescript with Node.js – YouTube
  7. πŸ“Ί Build a gRPC server with Go - Step by step tutorial – YouTube
  8. πŸ“„ Awesome gRPC – GitHub
  • WebRTC

    WebRTC is an open-source project for streaming data (video, audio) in a browser. WebRTC operation is based on a peer-to-peer connection; however, there are implementations that allow you to organize complex group sessions. For example, the video-calling service Google Meet makes extensive use of WebRTC.

πŸ”— References
  1. πŸ“Ί WebRTC Crash Course – YouTube
  2. πŸ“„ Everything You Ever Wanted To Know About WebRTC
  3. πŸ“„ HTTP, WebSocket, gRPC or WebRTC: Which Communication Protocol is Best For Your App?

Software

  • Git version control system

    Git is a special system for managing the history of changes to source code. Any change made under Git can be saved, allowing you to roll back (revert) to a previously saved copy of the project. Git is currently the de facto standard for development.

πŸ”— References
  1. πŸ“Ί Git It? How to use Git and Github – YouTube
  2. πŸ“Ί Git and GitHub for Beginners - Crash Course – YouTube
  3. πŸ“Ί 13 Advanced (but useful) Git Techniques and Shortcuts – YouTube
  4. πŸ“„ Understanding Git through images – dev.to
  5. πŸ“„ Learn git concepts, not commands – GitHub
  6. πŸ“„ Git Cheat Sheet – 50 Git Commands You Should Know – freeCodeCamp
  7. πŸ“„ Git Commit Patterns – dev.to
  8. πŸ“„ Collection of .gitignore templates – GitHub
  • Docker

    Docker is a special program that allows you to run isolated sandboxes (containers) with different preinstalled environments (a specific operating system, a database, etc.). The containerization technology that Docker provides is similar to virtual machines, but unlike virtual machines, containers use the host OS kernel, which requires far fewer resources.

    • Docker image

      A special fixed template that contains a description of the environment for running an application (OS, source code, libraries, environment variables, configuration files, etc.). Images can be downloaded from the official registry (Docker Hub) and used as a base for your own.

    • Docker container

      An isolated environment created from an image. It is essentially a running process on a computer which internally contains the environment described in the image.

    • Console commands
      docker pull [image_name] # Download the image
      docker images  # List of available images
      docker run [image_id] # Running a container based on the selected image
          # Some flags for the run command:
          -d # Run the container in the background (detached mode)
          --name [name] # Name the container
          --rm # Remove the container after stopping
          -p [local_port]:[port_inside_container] # Port forwarding
      docker build [path_to_Dockerfile] # Creating an image based on a Dockerfile
      docker ps # List of running containers
      docker ps -a # List of all containers
      docker stop [id/container_name] # Stop the container
      docker start [id/container_name] # Start an existing container
      docker attach [id/container_name] # Connect to the container console
      docker logs [id/container_name] # Output the container logs
      docker rm [id/container_name] # Delete container
      docker container prune # Delete all containers
      docker rmi [image_id] # Delete image
    • Instructions for Dockerfile

      Dockerfile is a file with a set of instructions and arguments for creating images.

      FROM [image_name] # Setting a base image
      WORKDIR [path] # Setting the root directory inside the container
      COPY [path_relative_to_Dockerfile] [path_in_container] # Copying files
      ADD [path] [path] # Similar to the command above
      RUN [command] # A command executed while the image is being built
      CMD ["command"] # The default command executed when the container starts (can be overridden)
      ENV KEY="VALUE" # Setting Environment Variables
      ARG KEY=VALUE # Setting variables to pass to Docker during image building
      ENTRYPOINT ["command"] # The command that always runs when the container starts
      EXPOSE port/protocol # Indicates the need to open a port
      VOLUME ["path"] # Creates a mount point for working with persistent storage
    • Docker-compose

      A tool for defining and running multi-container Docker applications. It allows you to define all the services that make up your application in a single file and then start and stop them with a single command. In a sense, it is a Dockerfile for an entire multi-container application.
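As a sketch, a docker-compose.yml for a hypothetical application-plus-database stack might look like this (all service names, images, ports and variables below are examples, not a prescription):

```yaml
# docker-compose.yml - hypothetical two-service stack (names/images illustrative)
services:
  app:
    build: .                 # build the image from the Dockerfile in this directory
    ports:
      - "8080:8080"          # host_port:container_port
    environment:
      - DB_HOST=db           # services reach each other by service name
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=example
    volumes:
      - db-data:/var/lib/postgresql/data   # persistent storage for the database
volumes:
  db-data:
```

With this file in place, `docker compose up -d` starts both services and `docker compose down` stops them.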

πŸ”— References
  1. πŸ“Ί Learn Docker in 7 Easy Steps - Full Beginner's Tutorial – YouTube
  2. πŸ“Ί Never install locally – YouTube
  3. πŸ“Ί Docker Crash Course Tutorial (playlist) – YouTube
  4. πŸ“„ The Ultimate Docker Cheat Sheet
  5. πŸ“Ί Docker Compose Tutorial – YouTube
  6. πŸ“Ί Docker networking – everything you need to know – YouTube
  7. πŸ“„ Awesome Docker – GitHub
  8. πŸ“„ What Is a Dockerfile And How To Build It – Best Practices – Spacelift
  • Postman/Insomnia

    When creating a server application, you need to test that it works. This can be done in different ways. One of the easiest is to use the console utility curl, but that is only good for very simple applications. It is much more efficient to use dedicated testing software with a user-friendly interface and all the necessary functionality for creating collections of requests.

    • Postman

      A very popular and feature-rich program. It has everything you might need and more: from simple collection creation to spinning up mock servers. The basic functionality of the application is free.

    • Insomnia

      Not as popular, but a very nice tool. Insomnia's interface is minimalist and clear. It has less functionality, but everything you need: collections, variables, work with GraphQL, gRPC, WebSocket, etc. Third-party plugins can be installed.

πŸ”— References
  1. πŸ“Ί What is Postman? How to use Postman? Tool For Beginners – YouTube
  2. πŸ“Ί Postman Beginner's Course - API Testing – YouTube
  3. πŸ“Ί Postman API Test Automation for Beginners – YouTube
  4. πŸ“Ί Insomnia API Client Tutorial – YouTube
  5. πŸ“Ί Insomnia Tutorial: API Design, Testing and Collaboration – YouTube
  • Web servers

    Web server

    A web server is a program designed to handle incoming HTTP requests. In addition, it can keep logs (e.g. of errors), perform authentication and authorization, store rules for file processing, etc.

    • What is it for?

      Not all languages have a built-in web server (e.g. PHP). Therefore, to run web applications written in such languages, a third-party one is needed.
      A single server (virtual or dedicated) can run several applications, but it has only one external IP address. A configured web server solves this problem by redirecting incoming requests to the right applications.

    • Popular web servers

      Nginx – the most popular at the moment.
      Apache – also popular, but already giving up its position.
      Caddy – a fairly young web server with great potential.
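The port-forwarding role described above can be sketched with a simplified, hypothetical Nginx configuration that routes requests on one public address to two local applications (the domain, ports and paths are placeholders):

```nginx
# Hypothetical reverse-proxy setup: one public IP, two apps behind it
server {
    listen 80;
    server_name example.com;

    location /api/ {
        proxy_pass http://127.0.0.1:3000;   # backend application
        proxy_set_header Host $host;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;   # second application (e.g. frontend)
    }
}
```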

πŸ”— References
  1. πŸ“Ί What are web servers and how do they work – YouTube
  2. πŸ“Ί Web Server Concepts and Examples – YouTube
  3. πŸ“Ί The NGINX Crash Course – YouTube
  4. πŸ“Ί Nginx Server Complete Course – YouTube
  5. πŸ“„ 6 Best Courses to learn Nginx in depth – medium
  6. πŸ“„ NGINX: Advanced Load Balancer, Web Server, & Reverse Proxy – dev.to
  7. πŸ“„ Awesome NGINX – GitHub
  • Message brokers

    Message queue

    When creating a large-scale backend system, the problem of communication between a large number of microservices may arise. In order not to complicate existing services (establish a reliable communication system, distribute the load, provide for various errors, etc.) you can use a separate service, which is called a message broker (or message queue).

    The broker takes on the responsibility for creating a reliable and fault-tolerant communication system between services (it performs balancing, guarantees delivery, monitors recipients, maintains logs, buffering, etc.)

    A message is simply a piece of data in an agreed format that a service sends to the broker and the broker delivers to consumers.

    • RabbitMQ - specializes in message queuing and supports various messaging patterns, including publish/subscribe and point-to-point communication.
    • Apache Kafka - excels in handling large-scale, real-time data streams and offers high throughput, fault tolerance, and horizontal scalability.
    • NATS - known for its simplicity, speed, and lightweight design, making it ideal for building fast and efficient distributed systems.
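The core producer/consumer pattern behind all of these brokers can be sketched with an in-process queue (real brokers add persistence, delivery guarantees, routing and fault tolerance on top of this idea; the names below are illustrative):

```python
# The publish/consume idea in miniature, using a thread-safe in-process queue.
import queue
import threading

broker = queue.Queue()   # stands in for a broker's message queue
received = []

def consumer():
    while True:
        msg = broker.get()      # blocks until a message is available
        if msg is None:         # sentinel value used to stop the consumer
            break
        received.append(msg)    # "process" the message

worker = threading.Thread(target=consumer)
worker.start()

# Producer side: services just publish and move on (fire-and-forget decoupling)
broker.put({"event": "user_registered", "user_id": 42})
broker.put(None)
worker.join()
print(received)  # [{'event': 'user_registered', 'user_id': 42}]
```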
πŸ”— References
  1. πŸ“Ί What is a Message Queue and When should you use Messaging Queue Systems – YouTube
  2. πŸ“Ί What is a Message Queue? – YouTube
  3. πŸ“„ Understanding RabbitMQ – medium
  4. πŸ“Ί RabbitMQ course (playlist) – YouTube
  • Ngrok

    Ngrok is a tool for creating public tunnels on the Internet that allows local network applications (web servers, websites, bots, etc.) to be accessible from outside.

    • How does it work?

      Ngrok creates a temporary public URL that can be used to access your local server from the Internet. Once Ngrok is started, you get access to a console where you can monitor requests and responses, and configure additional features such as authentication and encryption.

    • What to use it for?

      For example, to test web sites and APIs, to demonstrate running applications on a local server, to access local network applications over the Internet without having to set up a router, firewall, proxy server, etc.

πŸ”— References
  1. πŸ“Ί Expose Local WebSocket, HTTP and HTTPS WebServers to the Public Internet with Ngrok – YouTube
  • AI tools

    Artificial intelligence systems have made an incredible leap recently. Every day there are more and more tools that can write code for you, generate documentation, do code reviews, help you learn new technologies, and so on. Many people are still skeptical about the capabilities and the quality of content that AI creates, but even now these tools can save a lot of time and resources and increase the productivity of any developer.

    • ChatGPT

      The highest-quality LLM at the moment. It works like a regular chat bot and has no problem understanding human speech in several languages.

    • Bard

      Developed by Google as an alternative and direct competitor to ChatGPT.

    • GitHub Copilot

      AI-powered code completion tool developed by GitHub in collaboration with developers of ChatGPT. It integrates with popular code editors and provides real-time suggestions and completions for code as you write.

    • Tabnine

      An alternative to GitHub Copilot that provides context-sensitive code suggestions based on patterns it learns from millions of publicly available code repositories.

πŸ”— References
  1. πŸ“„ Awesome ChatGPT Prompts – GitHub
  2. πŸ“Ί ChatGPT Tutorial for Developers - 38 Ways to 10x Your Productivity – YouTube
  3. πŸ“Ί GitHub Copilot in 7 Minutes – YouTube

Security

  • Web application vulnerabilities

    • Cross-site scripting (XSS)

      An attack that allows an attacker to inject malicious code through a website into the browsers of other users.

    • SQL injection

      An attack is possible if the user input that is passed to the SQL query is able to change the meaning of the statement or add another query to it.

    • Cross-site request forgery (CSRF)

      When a site uses a POST request to perform a transaction, the attacker can forge a form, such as in an email, and send it to the victim. The victim, who is an authorized user interacting with this email, can then unknowingly send a request to the site with the data that the attacker has set.

    • Clickjacking

      The principle is based on placing an invisible layer on top of the visible web page. The page the attacker wants is loaded into this layer, and the control (button or link) needed to perform the malicious action is aligned with the visible link or button the user is expected to click.

    • Denial of Service (DoS attack)

      A hacker attack that overloads the server running the web application by sending a huge number of requests.

    • Man-in-the-Middle attack

      A type of attack in which an attacker gets into the chain between two (or more) communicating parties to intercept a conversation or data transmission.

    • Incorrect security configuration

      Using default configuration settings can be dangerous because they are common knowledge. For example, a common vulnerability is that network administrators leave the default login and password admin:admin.
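The SQL injection risk described above, and its standard fix (parameterized queries), can be demonstrated with Python's built-in sqlite3 module (the table and inputs are invented for the demo):

```python
# SQL injection demo: string concatenation vs. a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

malicious = "nobody' OR '1'='1"

# VULNERABLE: user input is concatenated straight into the SQL string,
# so the injected OR '1'='1' changes the meaning of the statement.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % malicious).fetchall()
print(len(vulnerable))  # 2 - the injected condition matches every row

# SAFE: the driver passes the value as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
print(len(safe))        # 0 - no user is literally named "nobody' OR '1'='1"
```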

πŸ”— References
  1. πŸ“Ί 7 Security Risks and Hacking Stories for Web Developers – YouTube
  2. πŸ“„ Top 10 Web Application Security Risks
  3. πŸ“Ί Web App Vulnerabilities - DevSecOps Course for Beginners – YouTube
  4. πŸ“Ί DDoS Attack Explained – YouTube
  5. πŸ“Ί Securing Web Applications – MIT lecture – YouTube
  6. πŸ“Ί Scan for Vulnerabilities on Any Website Using Nikto – YouTube
  7. πŸ“Ί OWASP API Security Top 10 Course – YouTube
  • Environment variables

    Often your applications use various tokens (e.g. to access a third-party paid API), logins and passwords (to connect to a database), secret keys for signatures, and so on. This data must not be available to outsiders, so it should never be left in the program code. Environment variables exist to solve this problem.

    • The .env file

      A special file in which you can store all environment variables.

    • Parsing the .env file

      Variables can be passed to a program via command-line arguments. To do the same with a .env file, you need to use a dedicated library for your language.

    • Storage and transfer .env files

      Learn how to upload .env files to hosting services, and remember that such files must never be committed to remote repositories; do not forget to add them to the exclusions in the .gitignore file.
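As a sketch of what such a parsing library does under the hood, here is a minimal .env parser using only the standard library (real projects should prefer a maintained library such as python-dotenv; the variable names and values are invented):

```python
# Minimal .env parsing: KEY=VALUE lines become environment variables.
import os

def load_env(text: str) -> None:
    """Parse KEY=VALUE lines and export them via os.environ."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip().strip('"')

load_env('# secrets - never commit this file\nDB_PASSWORD="s3cret"\nAPI_TOKEN=abc123')
print(os.environ["DB_PASSWORD"])  # s3cret
```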

πŸ”— References
  1. πŸ“Ί How to use environment variables in a Python script – YouTube
  2. πŸ“Ί Configure Node.js Environment Variables for Local Development & Production – YouTube
  3. πŸ“Ί GoLang Environment Variables – YouTube
  • Hashing

    Hashing

    Cryptographic algorithms based on hash functions are widely used for network security.

    • Hashing

      The process of converting an array of information (from a single letter to an entire literary work) into a unique short string of characters (called a hash). Moreover, if you change even one character in the information array, the new hash will differ dramatically.
      Hashing is an irreversible process: the original data cannot be recovered from the resulting hash.

    • Checksums

      Hashes can be used as checksums that serve as proof of data integrity.

    • Collisions

      Cases where hashing different sets of information results in the same hash.

    • Salt (in cryptography)

      A random string of data, which is added to the input data before hashing, to calculate the hash. This is necessary to make brute-force hacking more difficult.

    Popular hashing algorithms: MD5 (obsolete), SHA-1 (deprecated), SHA-256, SHA-3, bcrypt, scrypt, Argon2.
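The properties described above (determinism, the avalanche effect, salting) can be demonstrated with Python's hashlib. Note this is an illustration only: for real password storage, use a dedicated KDF such as bcrypt, scrypt or Argon2 instead of plain SHA-256:

```python
# Core hashing properties demonstrated with SHA-256.
import hashlib
import os

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

h1 = sha256(b"hello")
h2 = sha256(b"hellp")          # one character changed
print(h1 != h2)                # True - the digests differ dramatically (avalanche effect)

salt = os.urandom(16)          # random salt, stored alongside the hash
salted_hash = sha256(salt + b"password123")
print(len(salted_hash))        # 64 hex characters = 256 bits
```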

πŸ”— References
  1. πŸ“Ί What is Hashing? Hash Functions Explained Simply – YouTube
  2. πŸ“Ί Passwords & hash functions (Simply Explained) – YouTube
  3. πŸ“Ί Hashing Algorithms and Security - Computerphile – YouTube
  4. πŸ“Ί SHA: Secure Hashing Algorithm - Computerphile – YouTube
  5. πŸ“Ί How secure is 256 bit security? – YouTube
  • Authentication and authorization

    Authentication is a procedure that is usually performed by comparing the password entered by the user with the password stored in the database. It often also includes identification, a procedure for recognizing the user by their unique identifier (usually a login or email); this is needed to know exactly which user is being authenticated.

    Authorization - the procedure of granting access rights to a certain user to perform certain operations. For example, ordinary users of the online store can view products and add them to cart. But only administrators can add new products or delete existing ones.

    • Basic Authentication

      The simplest authentication scheme where the username and password of the user are passed in the Authorization header in unencrypted (base64-encoded) form. It is relatively secure when using HTTPS.

    • SSO (Single Sign-On)

      Technology that allows you to move from one service to another (not related to the first) without having to log in again.

    • OAuth / OAuth 2.0

      An authorization protocol that allows you to sign in to various applications using accounts from popular services (Google, Facebook, GitHub, etc.)

    • OpenID

      An open standard that allows you to create a single account for authenticating to multiple unrelated services.

    • JWT (Json Web Token)

      An authentication standard based on access tokens. Tokens are created by the server, signed with a secret key and transmitted to the client, who then uses the token to verify their identity.
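As an illustration of the JWT idea, here is a sketch that assembles and verifies an HS256 token using only the standard library (production code should use a vetted library such as PyJWT; the secret key and payload here are placeholders):

```python
# Sketch of JWT (HS256): header.payload.signature, each part base64url-encoded,
# with the signature an HMAC-SHA256 over "header.payload" using a server secret.
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"   # placeholder signing key

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str) -> bool:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)   # constant-time comparison

token = make_jwt({"sub": "42", "role": "user"})
print(verify_jwt(token))                              # True
print(verify_jwt(token.rsplit(".", 1)[0] + ".AAAA"))  # False - signature tampered
```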

πŸ”— References
  1. πŸ“Ί HTTP Basic Authentication explained – YouTube
  2. πŸ“Ί What Is Single Sign-on (SSO)? How It Works – YouTube
  3. πŸ“Ί OAuth 2 explained in very simple terms – YouTube
  4. πŸ“Ί OpenID Connect explained – YouTube
  5. πŸ“Ί What Is JWT and Why Should You Use JWT – YouTube
  • SSL/TLS

    SSL (Secure Socket Layer) and TLS (Transport Layer Security) are cryptographic protocols that allow secure transmission of data between two computers on a network. TLS is the successor to SSL; the two work in essentially the same way, and the names are often used interchangeably. SSL itself is considered obsolete, although it is still used to support older devices.

    • Certificate Authority (CA)

      TLS/SSL uses digital certificates issued by a certificate authority. One of the most popular is Let’s Encrypt.

    • Certificate configuration and installation

      You need to know how to generate certificates and install them properly to make your server work over HTTPS.

    • Handshake process

      To establish a secure connection between the client and the server, a special process must take place which includes the exchange of secret keys and information about encryption algorithms.
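A small runnable illustration of the client side of this: Python's ssl module builds a default context that enforces certificate validation and hostname checking during the handshake (no network access is needed just to inspect it):

```python
# Inspecting the TLS settings Python applies to the client side of a handshake.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy SSL/early-TLS versions

print(ctx.verify_mode == ssl.CERT_REQUIRED)    # True - the server cert must validate
print(ctx.check_hostname)                      # True - the hostname must match the cert
```

Wrapping a socket with `ctx.wrap_socket(sock, server_hostname=...)` is what actually triggers the handshake.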

πŸ”— References
  1. πŸ“Ί SSL, TLS, HTTPS Explained – YouTube
  2. πŸ“Ί Transport Layer Security, TLS 1.2 and 1.3 (Explained by Example) – YouTube
  3. πŸ“Ί Let's Encrypt Explained: Free SSL – YouTube
  4. πŸ“Ί How to Install a Free SSL Certificate with Let's Encrypt – YouTube

Testing

Testing is the process of verifying that all parts of a program behave as expected. Covering the product with the proper amount of tests allows you to quickly check later whether anything in the application broke after adding new functionality or changing old functionality.

  • Unit Tests

    The simplest kind of tests. As a rule, about 70-80% of all tests are unit tests. "Unit" means that not the whole system is tested, but its small, separate parts (functions, methods, components, etc.) in isolation from the others. Any external dependencies are usually replaced with mocks.

    • What are the benefits of unit tests?

      To give you an example, let's imagine a car. Its "units" are the engine, brakes, dashboard, etc. You can check them individually before assembly and, if necessary, replace or repair them. But if you assemble the car without having tested the units and it does not go, you will have to disassemble everything and check every part.

    • What do I need to start writing unit tests?

      As a rule, the language's standard library is enough to write quality tests. But for more convenient and faster test writing, it is better to use third-party tools (for example, pytest for Python, Jest for JavaScript, or testify for Go).
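A minimal self-contained example using Python's built-in unittest, where the "unit" is one small function tested in isolation (the `apply_discount` function is invented for illustration):

```python
# One function, three unit tests: typical case, boundary case, invalid input.
import unittest

def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner().run(suite)
print(result.wasSuccessful())  # True
```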

πŸ”— References
  1. πŸ“Ί Software Testing Explained in 100 Seconds – YouTube
  2. πŸ“„ How to write your first Unit Test – medium
  3. πŸ“Ί Testing JavaScript with Cypress – Full Course – YouTube
  4. πŸ“Ί How To Write Unit Tests For Existing Python Code – YouTube
  5. πŸ“Ί Learn How to Test your JavaScript Application – YouTube
  6. πŸ“Ί GoLang Unit Testing and Mock Testing Tutorial – YouTube
  • Integration tests

    Integration testing involves testing individual modules (components) in conjunction with others (that is, in integration). What was covered by a stub during Unit testing is now an actual component or an entire module.

    • Why is it needed?

      Integration tests are the next step after units. Having tested each component individually, we cannot yet say that the basic functionality of the program works without errors. Potentially, there may still be many problems that will only surface after the different parts of the program interact with each other.

    • Strategies for writing integration tests
      • Big Bang: most of the developed modules are connected together to form either the whole system or most of it. If everything works, you can save a lot of time this way.
      • Incremental approach: two or more logically related modules are connected, and then more modules are gradually added until the whole system is tested.
      • Bottom-up approach: each module at a lower level is tested together with the modules of the next higher level until all modules have been tested.
πŸ”— References
  1. πŸ“Ί Unit testing vs integration testing – YouTube
  2. πŸ“Ί PyTest REST API Integration Testing with Python – YouTube
  3. πŸ“„ Integration Testing – Software testing fundamentals
  • E2E tests

    Testing pyramid

    End-to-end tests imply checking the operation of the entire system as a whole. In this type of testing, the environment is implemented as close to real-life conditions as possible. We can draw the analogy that a robot sits at the computer and presses the buttons in the specified order, as a real user would do.

    • When to use?

      E2E tests are the most complex type of test. They take a long time to write and to execute because they involve the whole application. So if your application is small (e.g. you are the only one developing it), writing unit tests and some integration tests will probably be enough.

πŸ”— References
  1. πŸ“„ What is End-to-End Testing and When Should You Use It? – freeCodeCamp
  2. πŸ“Ί End to End Testing - Explained – YouTube
  3. πŸ“Ί Testing Node.js Server with Jest and Supertest – YouTube
  4. πŸ“Ί End to End - Test Driven Development (TDD) to create a REST API in Go – YouTube
  5. πŸ“Ί How to test HTTP handlers in Go – YouTube
  6. πŸ“„ Awesome Testing – GitHub
  • Load testing

    When you create a large application that needs to serve a large number of requests, you need to test its ability to withstand heavy loads. There are many utilities for creating artificial load.

    • JMeter

      User-friendly interface, cross-platform, multi-threading support, extensibility, excellent reporting capabilities, support for many protocols for queries.

    • LoadRunner

      It has an interesting feature of virtual users, who do something with the application under test in parallel. This allows you to understand how the work of some users actively doing something with the service affects the work of others.

    • Gatling

      A very powerful tool oriented to more experienced users. The Scala programming language is used to describe the scripts.

    • Taurus

      A whole framework for easier work on top of JMeter, Gatling and others. JSON or YAML is used to describe tests.
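The idea behind all of these tools can be sketched as a toy load generator: fire many concurrent requests and report latency percentiles. Here the target is a dummy in-process function (`fake_request` is invented so the sketch runs anywhere without a live server):

```python
# Toy load generator: N concurrent "requests", then latency percentiles.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request() -> float:
    """Stand-in for an HTTP call; returns the observed latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)                 # simulates network + server processing time
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(lambda _: fake_request(), range(100)))

print(f"requests: {len(latencies)}")
print(f"p50: {statistics.median(latencies) * 1000:.1f} ms")
print(f"p95: {statistics.quantiles(latencies, n=20)[-1] * 1000:.1f} ms")
```

Real tools report the same kind of percentile breakdown, plus throughput and error rates, against a live system.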

πŸ”— References
  1. πŸ“Ί Getting started with API Load Testing (Stress, Spike, Load, Soak) – YouTube
  2. πŸ“„ How to Load Test: A developer’s guide to performance testing – medium
  • Regression testing

    Regression testing is a type of testing aimed at detecting errors in already tested portions of the source code.

    • Why use it?

      Statistically, the same bugs reappear in code quite frequently. And, most interestingly, the patches/fixes issued for them also stop working over time. Therefore it is considered good practice, when fixing a bug, to create a test for it and run it regularly on subsequent modifications.

πŸ”— References
  1. πŸ“„ What Is Regression Testing? Definition, Tools, Method, And Example
  2. πŸ“Ί Regression testing – What, Why, When, and How to Run It? – YouTube
  3. πŸ“Ί Top-5 Tools for Regression Testing – YouTube

Deployment (CI/CD)

  • Cloud services

    Before you can deploy your code, you need to decide where to host it. You can rent your own server or use the services of cloud providers, which offer extensive functionality for process automation, monitoring, load balancing, data storage, and so on.

    • AWS (Amazon Web Services)

      Provides a wide range of services for computing, storage, database management, networking, security, and more. AWS is one of the oldest and most established cloud service providers.

    • Google Cloud

      It is known for its focus on machine learning and artificial intelligence, as well as its integration with other Google services like Google Analytics and Google Maps.

    • Microsoft Azure

      Azure is known for its integration with other Microsoft services like Office 365 and Dynamics 365, as well as its support for a wide range of programming languages and frameworks.

    • Digital Ocean

      This service provides virtual private servers (VPS) for developers and small businesses. It is also known for its simplicity and ease of use, as well as its competitive pricing.

    • Heroku

      Heroku is known for its ease of use and integration with popular development tools like Git, as well as its support for multiple programming languages and frameworks. It was a very popular choice for open-source projects while it had a free plan (it costs money now).

    As a rule, all of these services have an intuitive simple interface, detailed documentation, as well as many video tutorials on YouTube.

πŸ”— References
  1. πŸ“Ί Big Vs Small Public Cloud Providers – YouTube
  2. πŸ“Ί Top 50+ AWS Services Explained in 10 Minutes – YouTube
  3. πŸ“Ί AWS Certified Cloud Practitioner Certification Course – YouTube
  4. πŸ“„ Awesome AWS (list of libraries, open source repos, guides, blogs) – GitHub
  5. πŸ“Ί Google Cloud Associate Cloud Engineer Course – YouTube
  6. πŸ“„ Awesome Google Cloud Platform – GitHub
  7. πŸ“Ί Microsoft Azure Fundamentals Certification Course – YouTube
  8. πŸ“Ί Full DigitalOcean Crash Course – YouTube
  9. πŸ“„ Awesome Digital Ocean – GitHub
  • Container orchestration

    Container orchestration is the process of automating the deployment, scaling, and maintenance of containerized applications across a cluster of machines.

    • Docker in production

      The easiest way to manage containers is to use Docker directly, following a list of rules to keep your applications stable and safe in a production environment.

      • Store your Docker images in a private registry to prevent unauthorized access and ensure security.
      • Use secure authentication mechanisms for access to your Docker registry and implement security measures such as firewall rules to limit access to your Docker environment.
      • Keep the size of your containers as small as possible by minimizing the number of unnecessary packages and dependencies.
      • Use separate containers for different services (e.g. application server, database, cache, metrics, etc.).
      • Use Docker volumes to store persistent data such as database files, logs, and configuration files.
    • Docker swarm

      Docker's native orchestration tool for managing and scaling containers, automating tasks such as container updates, recovery, traffic balancing, service discovery and so on.

    • Kubernetes (K8s)

      A very popular orchestration platform that can work with a variety of container runtimes, including Docker. Kubernetes offers a more comprehensive feature set than Docker Swarm, including advanced scheduling, storage orchestration, and self-healing capabilities.

πŸ”— References
  1. πŸ“„ How To Optimize Docker Images for Production – Digital Ocean
  2. πŸ“„ Docker Compose in production
  3. πŸ“„ Top 8 Docker Best Practices for using Docker in Production – dev.to
  4. πŸ“Ί Best practices around creating a production web app with Docker and Docker Compose – YouTube
  5. πŸ“Ί Docker Swarm Tutorial – YouTube
  6. πŸ“„ Awesome Swarm – GitHub
  7. πŸ“„ Kubernetes VS Docker Swarm – What is the Difference?
  8. πŸ“„ Kubernetes Roadmap
  9. πŸ“„ Kubernetes Learning Roadmap – GitHub
  10. πŸ“Ί Docker Containers and Kubernetes Fundamentals – Full Hands-On Course – YouTube
  11. πŸ“Ί Kubernetes Course - Full Beginners Tutorial (Containerize Your Apps!) – YouTube
  12. πŸ“„ Awesome Kubernetes Resources – GitHub
  • Automation tools

    Automation tools and services streamline building, testing, and deploying code changes, and integrate with other tools in the development ecosystem (code repositories, issue trackers, monitoring systems) to provide a more comprehensive development workflow.

    • Github Actions

      A CI/CD tool built into the GitHub platform, which enables developers to automate workflows for their repositories. A great choice if you already use GitHub. There is a large number of pre-built actions, and one of the most useful features is the ability to trigger workflows based on various events, such as pull requests or other repository activity.
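
      For illustration, a minimal workflow sketch that runs tests on every push and pull request (the dependency file and test command assume a hypothetical Python project):

```yaml
# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # pre-built action: fetch the repo
      - uses: actions/setup-python@v5      # pre-built action: install Python
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest
```
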

    • Jenkins

      Highly configurable and extensible open source tool with a large ecosystem of plugins available to customize its functionality. Jenkins can be used in various environments, including on-premise, cloud-based and hybrid setups.

    • Circle CI

      It is a cloud-based CI/CD platform designed to be fast and easy to set up, with a focus on developer productivity. CircleCI integrates with various cloud services, such as AWS, Google Cloud and Microsoft Azure. You can also host it on your own infrastructure.

    • Travis CI

      It is also a cloud-based CI/CD platform that integrates easily with GitHub or Bitbucket. Travis CI supports multiple programming languages and frameworks, and it can also be self-hosted on your own infrastructure.

πŸ”— References
  1. πŸ“Ί GitHub Actions: The Full Course - Learn by Doing (playlist) – YouTube
  2. πŸ“„ Awesome GitHub Actions – GitHub
  3. πŸ“Ί Learn Jenkins! Complete Jenkins Course - Zero to Hero – YouTube
  4. πŸ“Ί CircleCI Tutorial for Beginners | Learn CircleCI In 30 Minutes – YouTube
  5. πŸ“Ί Travis CI Complete Tutorial for DevOps Engineers – YouTube
  • Monitoring and logs

    Logs capture detailed information about events, errors, and activities within your applications, facilitating troubleshooting and debugging processes. They provide a historical record of system behavior, allowing you to investigate issues, understand root causes, and improve overall system reliability and stability.

    • Libraries for your lang

      The easiest way to log an application is to use the tools of the standard language library or third-party packages. For example, in Python you can use logging module or Loguru. In Node.js – Winston, Pino. And in Go – log package, Logrus.

    • Loki

      Designed to collect log data from various sources and provide fast searching and filtering capabilities.

    • Graylog

      Comprehensive log management platform that also centralizes log data from different sources. Graylog offers features like log ingestion, indexing, searching, and analysis.

    • ELK Stack (Elasticsearch, Logstash, Kibana)

      Is a combination of three open-source tools used for log management and analysis. Elasticsearch is a distributed search and analytics engine that stores and indexes logs. Logstash is a log ingestion and processing pipeline that collects, filters, and transforms log data. Kibana is a web interface that allows you to search, visualize, and analyze logs stored in Elasticsearch.

    Metrics help track key performance indicators, resource utilization, and system behavior, enabling you to identify bottlenecks, optimize performance, and ensure efficient resource allocation.

    • Prometheus

      Open-source monitoring system that can collect metrics data from various sources. It employs a pull-based model, periodically scraping targets to collect metrics. The collected data is stored in a time-series database, allowing for powerful querying and analysis. Prometheus provides a flexible query language and a user-friendly interface to visualize and monitor metrics. It also includes an alerting system to define and trigger alerts based on specified rules and thresholds.
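
      A minimal scrape configuration sketch for Prometheus' pull-based model (the job name and target address are placeholders; the application is assumed to expose a `/metrics` endpoint):

```yaml
# prometheus.yml
global:
  scrape_interval: 15s           # how often Prometheus pulls metrics

scrape_configs:
  - job_name: my-service
    static_configs:
      - targets: ["localhost:8080"]   # scraped at http://localhost:8080/metrics
```
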

    • Grafana

      Tool for visualization and monitoring. It allows you to create visually appealing dashboards and charts to analyze and monitor metrics data from various sources, including databases and monitoring systems like Prometheus and InfluxDB.

    • InfluxDB

      Time-series database designed specifically for storing and querying metrics and events data. Offers a simple and flexible query language to extract valuable insights from the stored data. With its focus on time-series data, InfluxDB allows for easy aggregation, downsampling, and retention policies.

πŸ”— References
  1. πŸ“Ί Grafana Loki a log aggregation system for everything – YouTube
  2. πŸ“Ί Graylog guide to getting started log management – YouTube
  3. πŸ“Ί Overview of the Elastic Stack (formerly ELK stack) – YouTube
  4. πŸ“„ Awesome Elasticsearch – GitHub
  5. πŸ“Ί How Prometheus Monitoring works – YouTube
  6. πŸ“„ Awesome Prometheus – GitHub
  7. πŸ“Ί Server Monitoring: Prometheus and Grafana Tutorial – YouTube
  8. πŸ“Ί InfluxDB: Overview, Key Concepts and Demo – YouTube

Optimization

  • Profiling

    Profiling is a program performance analysis, which reveals bottlenecks where the highest CPU and/or memory load occurs.
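
    A small sketch of this idea using Python's built-in `cProfile` and `pstats` modules (the profiled function is a deliberately naive example):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """A deliberately naive function to profile."""
    total = 0
    for i in range(n):
        total += i * i
    return total

# Collect profiling data only around the code of interest
profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Report the most expensive functions by cumulative time
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

    The report shows, per function, how many times it was called and how much time was spent in it, pointing directly at the bottlenecks.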

πŸ”— References
  1. πŸ“Ί Optimize Your Python Programs: Code Profiling with cProfile – YouTube
  2. πŸ“Ί A New Way to Profile Node.js – YouTube
  3. πŸ“Ί Go (Golang) Profiling Tutorial – YouTube
  4. πŸ“„ Awesome utilities for performance profiling – GitHub
  • Benchmarks

    Benchmark (in software) is a tool for measuring the execution time of program code. As a rule, the measurement is done by running the same code (or a certain part of it) many times, after which the average time is calculated; a benchmark can also report the number of operations performed and the amount of memory allocated.

    There are benchmarks to measure the performance of networked applications, where you can get detailed information about the average request processing time, the maximum number of supported connections, data transfer rates and so on (see list of HTTP benchmarks).
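
    In Python, for example, such micro-measurements can be done with the standard `timeit` module; the snippet below compares two hypothetical string-building implementations:

```python
import timeit

def concat_loop(n):
    """Build a string by repeated concatenation."""
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_join(n):
    """Build the same string with str.join."""
    return "".join(str(i) for i in range(n))

# Run each candidate many times and take the best of several repeats,
# which reduces noise from other processes on the machine
loop_time = min(timeit.repeat(lambda: concat_loop(1000), number=200, repeat=3))
join_time = min(timeit.repeat(lambda: concat_join(1000), number=200, repeat=3))

print(f"loop: {loop_time:.4f}s  join: {join_time:.4f}s")
```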

πŸ”— References
  1. πŸ“Ί Premature Optimization – YouTube
  2. πŸ“Ί Professional Benchmarking in Python – YouTube
  3. πŸ“Ί JavaScript tips β€” Measuring performance using console.time – YouTube
  4. πŸ“Ί Go (Golang) Benchmark Tutorial – YouTube
  • Caching

    Caching is one of the most effective solutions for optimizing the performance of web applications. With caching, you can reuse previously received resources (static files), thereby reducing latency, network traffic, and the time it takes to fully load content.

    CDN

    • CDN (Content Delivery Network)

      A system of servers located around the world that store duplicate copies of static content and deliver it much faster to users in close geographical proximity. Using a CDN also reduces the load on the origin server.

    • Browser-based (client-side) caching

      Based on loading pages and other static data from the local cache. To control this, the server sends the client special headers: Cache-Control, Expires, ETag and Last-Modified (and responds with the 304 Not Modified status when the cached copy is still valid).

    • Memcached

      A daemon program that implements high-performance RAM caching based on key-value pairs. Unlike Redis it cannot be a reliable and long-term storage, so it is only suitable for caches.
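
      The key-value model behind Memcached can be illustrated with a toy in-process cache with per-entry expiration (a sketch of the idea only, not a Memcached client):

```python
import time

class TTLCache:
    """Toy key-value cache with per-entry expiration."""

    def __init__(self):
        self._store = {}  # key -> (value, expiration timestamp)

    def set(self, key, value, ttl_seconds=60):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return None
        return value

cache = TTLCache()
cache.set("user:42", {"name": "Alice"}, ttl_seconds=30)
print(cache.get("user:42"))  # {'name': 'Alice'}
```

      Memcached applies the same set/get-with-TTL model, but shared over the network between many application servers.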

πŸ”— References
  1. πŸ“Ί How Caching Works? | Why is Caching Important? – YouTube
  2. πŸ“Ί Basic Caching Techniques Explained – YouTube
  3. πŸ“Ί HTTP Caching with E-Tags - (Explained by Example) – YouTube
  4. πŸ“Ί What Is A CDN? How Does It Work? – YouTube
  5. πŸ“Ί Everything you need to know about HTTP Caching – YouTube
  6. πŸ“Ί Memcached Architecture - Crash Course with Docker, Telnet, NodeJS – YouTube
  • Load balancing

    When the application code is fully optimized, server capacity is reaching its limit and the load keeps growing, you have to resort to clustering and balancing mechanisms. The idea is to combine groups of servers into clusters, where the load is distributed between them using special methods and algorithms, called balancing.

    • Balancing at the network level
      • DNS Balancing. Several IP addresses are assigned to a single domain name, and the server that receives a given request is chosen by the Round Robin algorithm.
      • Building an NLB (Network Load Balancing) cluster. Used to manage two or more servers as one virtual cluster.
      • Balancing by territory. An example is the Anycast routing method.
    • Balancing on the transport level

      Communication with the client terminates at the balancer, which acts as a proxy: it communicates with the servers on its own behalf, passing information about the client in additional data and headers. Example – HAProxy.

    • Balancing at the application level

      The balancer analyzes client requests and redirects them to different servers depending on the nature of the requested content. Examples are the Upstream module in Nginx (which is responsible for balancing) and pgpool for PostgreSQL (it can, for example, distribute read requests to one server and write requests to another).

    • Balancing algorithms
      • Round Robin. Each request is sent in turn to each server (first to the first, then to the second and so on in a circle).
      • Weighted Round Robin. Improved algorithm Round Robin, which also takes into account the performance of the server.
      • Least Connections. Each subsequent request is sent to the server with the smallest number of supported connections.
      • Destination Hash Scheduling. The server that processes the request is selected from a static table based on the recipient's IP address.
      • Source Hash Scheduling. The server that will process the request is selected from the table by the sender's IP address.
      • Sticky Sessions. Requests are distributed based on the user's IP address. Sticky Sessions assumes that requests from the same client will be routed to the same server rather than bouncing around in a pool.
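
    The Round Robin and Least Connections algorithms above can be sketched in a few lines of Python (server names are arbitrary; a real balancer tracks connections per backend in a similar way):

```python
import itertools

class RoundRobin:
    """Send each request to the next server in a circle."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class LeastConnections:
    """Send each request to the server with the fewest active connections."""

    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        self.active[server] -= 1  # call when the connection closes

rr = RoundRobin(["a", "b", "c"])
print([rr.pick() for _ in range(4)])  # ['a', 'b', 'c', 'a']

lc = LeastConnections(["a", "b"])
print(lc.pick(), lc.pick())  # each server has the fewest connections in turn
```
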
πŸ”— References
  1. πŸ“Ί What is a Load Balancer? – YouTube
  2. πŸ“Ί Learn Load Balancing right now – YouTube
  3. πŸ“Ί Load Balancing with NGINX – YouTube
  4. πŸ“Ί Load Balancers in depth – YouTube

Documentation

  • Markdown

    A standard in the development world. An incredibly simple, yet powerful markup language for describing your projects. As a matter of fact, the resource you are reading right now is written with Markdown.

    • Markdown cheatsheet

      A cheatsheet on all the syntactic possibilities of the language.

    • Awesome Markdown

      A collection of various resources for working with Markdown.

    • Awesome README

      A collection of beautiful README.md files (this is the main file of any repository on GitHub that uses Markdown).

    • Markdown for your notes

      Markdown is not only used for writing documentation. This incredible tool is great for learning: creating digital notes. Personally, I use the Obsidian editor for outlining new material.

πŸ”— References
  1. πŸ“Ί How To Write a USEFUL README On Github – YouTube
  2. πŸ“Ί Obsidian As A Second Brain: The ULTIMATE Tutorial – YouTube
  • Documentation inside code

    For every modern programming language there are special tools which allow you to write documentation directly in the program code. So you can read the description of methods, functions, structures and so on right inside your IDE. As a rule, this kind of documentation is done in the form of ordinary comments, taking into account some syntactic peculiarities.

    • Why do you need it?

      To make your work and the work of other developers easier. In the long run this will save more time than repeatedly digging through the code to figure out how everything works, what parameters to pass to functions, or what methods a class has. Over time you will inevitably forget your own code, so documentation you have already written will be useful to you personally.

    • What does it take to get started?

      For each language, it's different. Many have their own well-established approaches: for example, docstrings in Python, JSDoc in JavaScript and godoc comments in Go.
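
      For instance, a Python docstring (here in the Google style, with hypothetical names) lives right under the function signature and is picked up by IDEs and documentation generators:

```python
def fetch_user(user_id: int, timeout: float = 5.0) -> dict:
    """Fetch a user record by its identifier.

    Args:
        user_id: Primary key of the user to look up.
        timeout: How long to wait for a response, in seconds.

    Returns:
        A dict with the user's fields.
    """
    # Stub body for illustration only
    return {"id": user_id}

# The docstring is available at runtime, to help() and to doc generators
print(fetch_user.__doc__.splitlines()[0])  # Fetch a user record by its identifier.
```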

πŸ”— References
  1. πŸ“Ί How To Use Developer Documentation – YouTube
  2. πŸ“Ί How to use JSDoc - Basics & Introduction – YouTube
  3. πŸ“Ί Godocs - Effortless documentation for your go packages – YouTube
  • API Documentation

    Clear, easy-to-understand documentation allows other users to learn and use your product faster. Writing documentation from scratch is a tedious process, so there are common specifications and auto-generation tools to solve this problem.

    • OpenAPI

      A specification that describes how the API should be documented so that it is readable by humans and machines alike.

    • Swagger

      A set of tools that allows you to create convenient API documentation based on the OpenAPI specification.

    • Swagger UI

      A tool that allows you to automatically generate interactive documentation, which you can not only read but also actively interact with (by sending HTTP requests).

    • Swagger editor

      A kind of playground in which you can write documentation and immediately see the result of the generated page. You can use YAML or JSON format file for this.

    • Swagger codegen

      Allows you to automatically create API client libraries, server stubs and documentation.
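
      To give a feel for the specification, here is a minimal OpenAPI 3.0 document sketch describing a single hypothetical endpoint; it can be pasted into the Swagger editor to see the generated page:

```yaml
openapi: 3.0.3
info:
  title: Example API        # placeholder metadata
  version: 1.0.0
paths:
  /users/{id}:
    get:
      summary: Get a user by id
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested user
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: integer
                  name:
                    type: string
```
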

πŸ”— References
  1. πŸ“Ί REST API and OpenAPI: It’s Not an Either/Or Question – YouTube
  2. πŸ“Ί Swagger API documentation with Django REST Framework – YouTube
  3. πŸ“Ί NodeJS Swagger API Documentation Tutorial Using Swagger JSDoc – YouTube
  4. πŸ“Ί Golang Microservices: REST APIs - OpenAPI / Swagger – YouTube
  • Static generators

    Over time, when your project grows and has many modules, one README page on GitHub may not be enough. It will be appropriate to create a separate site for your project's documentation. You don't need to learn how to build one, because there are many generators for creating nice-looking and handy documentation.

    • GitBook

      Probably the most popular documentation generator using GitHub/Git and Markdown.

    • Docusaurus

      Open-source generator from Facebook (Meta).

    • MkDocs

      A simple and widely customizable Markdown documentation generator.

    • Slate

      Minimalistic documentation generator for REST API.

    • Docsify

      Another simple, light and minimalistic static generator.

    • Astro

      A generator with a modern and advanced design.

    • mdBook

      A static generator from the developers of the Rust language.

    • And others...
πŸ”— References
  1. πŸ“Ί Build a Markdown Documentation Site with Docusaurus (Step-by-Step) – YouTube
  2. πŸ“Ί Create template layouts for your HTML with Astro SSG – YouTube

Building architecture

  • Architectural patterns

    • Layered

      Used to structure programs that can be decomposed into groups of subtasks, each of which is at a particular level of abstraction. Each layer provides services to the next higher layer.

    • Client-server

      The server component will provide services to multiple client components. Clients request services from the server and the server provides relevant services to those clients.

    • Master-slave

      The master component distributes the work among identical slave components, and computes a final result from the results which the slaves return.

    • Pipe-filter

      Each processing step is enclosed within a filter component. Data to be processed is passed through pipes. These pipes can be used for buffering or for synchronization purposes.
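
      In Python this pattern maps naturally onto generators, where each filter pulls data from the previous stage through an implicit pipe (a toy sketch):

```python
def source(items):
    """Source: emit raw items into the pipeline."""
    yield from items

def only_even(items):
    """Filter: pass through even numbers only."""
    for n in items:
        if n % 2 == 0:
            yield n

def squared(items):
    """Filter: transform each item."""
    for n in items:
        yield n * n

# Each stage is connected to the next; data flows lazily through the pipes
pipeline = squared(only_even(source(range(10))))
print(list(pipeline))  # [0, 4, 16, 36, 64]
```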

    • Broker pattern

      A broker component is responsible for the coordination of communication among components.

    • Peer-to-peer

      Peers may function both as clients, requesting services from other peers, and as servers, providing services to other peers; a peer can change its role dynamically over time.

    • Event-bus

      Has four major components: event source, event listener, channel and event bus. Sources publish messages to particular channels on an event bus, and listeners subscribe to the channels they are interested in, receiving every message published there.
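
      A toy in-process sketch of those four components (real message brokers such as RabbitMQ add persistence, delivery guarantees and network transport on top of this idea):

```python
from collections import defaultdict

class EventBus:
    """Routes published messages to the listeners of each channel."""

    def __init__(self):
        self._listeners = defaultdict(list)  # channel -> callbacks

    def subscribe(self, channel, callback):
        self._listeners[channel].append(callback)

    def publish(self, channel, message):
        for callback in self._listeners[channel]:
            callback(message)

bus = EventBus()
received = []                                    # the event listener's inbox
bus.subscribe("orders", received.append)         # listener joins the "orders" channel
bus.publish("orders", {"id": 1, "total": 9.99})  # source publishes to the channel
print(received)  # [{'id': 1, 'total': 9.99}]
```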

    • Model-view-controller

      Separate internal representations of information from the ways information is presented to, and accepted from, the user.

    • Blackboard

      Useful for problems for which no deterministic solution strategies are known.

    • Interpreter

      Used for designing a component that interprets programs written in a dedicated language.

πŸ”— References
  1. πŸ“„ 10 Common Software Architectural Patterns in a nutshell
  2. πŸ“Ί 10 Architecture Patterns Used In Enterprise – YouTube
πŸ”— References
  1. πŸ“„ Design Patterns Cheat Sheet
  2. πŸ“Ί 10 Design Patterns Explained in 10 Minutes – YouTube
  3. πŸ“Ί Design Patterns with examples in Python – YouTube
  4. πŸ“Ί Design Patterns with examples in JavaScript – YouTube
  5. πŸ“Ί Design Patterns with examples in Go – YouTube
  • Monolithic and microservice architecture

    Monolith and microservices

    A monolith is a complete application that contains a single code base (written in a single technology stack and stored in a single repository) and has a single entry point to run the entire application. This is the most common approach for building applications alone or with a small team.

    • Advantages:
      • Ease of development (everything in one style and in one place).
      • Ease of deployment.
      • Easy to scale at the start.
    • Disadvantages:
      • Increasing complexity (as the project grows, the entry threshold for new developers increases).
      • Time to assemble and start up is growing.
      • It becomes harder to add new functionality that affects existing functionality.
      • It is difficult (or impossible) to apply new technologies.

    A microservice is also a complete application with a single code base. But, unlike a monolith, such an application is responsible for only one functional unit. That is, it is a small service that solves only one task, but does it well.

    • Advantages:
      • Each individual microservice can have its own technology stack and be developed independently.
      • Easy to add new functionality (just create a new microservice).
      • A lower entry threshold for new developers.
      • Shorter build and startup times.
    • Disadvantages:
      • The complexity of implementing interaction between all microservices.
      • More difficult to operate than several copies of a monolith.
      • Complexity of performing transactions.
      • Changes affecting multiple microservices must be coordinated.
πŸ”— References
  1. πŸ“Ί What are Microservices? – YouTube
  2. πŸ“Ί Microservices Explained and their Pros & Cons – YouTube
  3. πŸ“Ί Microservice Architecture and System Design with Python & Kubernetes – Full Course – YouTube
  4. πŸ“Ί NodeJS Microservices Full Course - Event-Driven Architecture with RabbitMQ – YouTube
  5. πŸ“Ί Building Microservices in Go (playlist) – YouTube
  6. πŸ“„ Awesome Microservices: collection of principles and technologies – GitHub
  7. πŸ“„ Patterns for Microservices
  • Horizontal and vertical scaling

    Horizontal and vertical scaling

    Over time, when the load on your application starts to grow (more users come, new functionality appears and, as a consequence, more CPU time is involved), it becomes necessary to increase the server capacity. There are 2 main approaches for this:

    • Vertical scaling

      It means increasing the capacity of the existing server. For example, this may include increasing the size of RAM, installing faster storage or increasing its volume, as well as the purchase of a new processor with a high clock frequency and/or a large number of cores and threads. Vertical scaling has its own limit, because we cannot increase the capacity of a single server for a long time.

    • Horizontal scaling

      The process of deploying new servers. This approach requires building a robust and scalable architecture that allows you to distribute the logic of the entire application across multiple physical machines.

πŸ”— References
  1. πŸ“Ί System Design: What is Horizontal vs Vertical Scaling? – YouTube
  2. πŸ“„ Vertical vs. Horizontal Scaling: Which one to choose

Additional and similar resources

Made with β™₯
LICENSE 2022-2023
