
The only five backend side projects you need

Updated: Jun 13

Spend five minutes on a Google search for side projects for backend developers and you'll be inundated with suggestions: easy projects, difficult ones, supposedly the hardest ones out there. You get the idea; there are hundreds of articles on the subject.


This one is a bit different.


In this blog, we're going to talk about five types of projects that backend developers should build. The way we've selected them is also a bit different from the usual approach.


We are a job board. So, these are projects that are meant to impress recruiters.


The way we’ve arrived at these projects is by creating a huge list of skills that companies are looking for right now and then picking projects that show off these skills.


So, for each project, you’ll see why the recruiter would like the project, the skills you need for it, a few important things to know, a broad how-to guide, and a couple of example projects that you could try.


Let’s get started.


Backend projects that will help you in your job hunt:

Distributed Data Processing System

What will recruiters like about this project?

  • Real-world Relevance: As data continues to grow exponentially, distributed data processing becomes increasingly pertinent. A project of this nature illustrates your ability to tackle modern, large-scale data issues.

  • Advanced Skills Demonstration: This project allows you to showcase your competence in distributed systems, big data, and related technologies - key attributes for backend developers.

  • Scalability Expertise: Creating a distributed data processing system shows that you understand how to develop scalable systems, a trait admired by recruiters.

  • Fault Tolerance: As these systems handle potential node failures, you can highlight your ability to design robust, reliable systems.


Skills needed for this project

  • Distributed Systems Knowledge: You must be familiar with the principles and challenges of distributed systems, including data consistency and partition tolerance.

  • Algorithmic Proficiency: Efficient handling of large data sets will require strong skills in data structures and algorithms.

  • Big Data Technology Familiarity: Experience with Apache Hadoop, Spark, Kafka, and similar technologies is necessary for implementing this project.

  • Programming Skills: Proficiency in a language commonly used for data processing, such as Java, Python, or Scala.


Things to know

The heart of distributed data processing lies in dealing with large data volumes by breaking them down and processing subsets in parallel. This approach provides immense improvements in processing speed and fault tolerance, though it makes keeping data consistent across nodes harder.


Data Partitioning involves dividing data into smaller, more manageable chunks or partitions, which are then distributed across multiple nodes for parallel processing.


Parallel Processing refers to the concurrent execution of computations, either across different CPU cores or across multiple machines, to reduce computation time.


Fault Tolerance is a system's ability to continue operating correctly even if some of its components fail. In distributed data processing, this means the system can continue processing data even if a node fails.


Consistency in distributed systems ensures all nodes agree on the state of the data. It's particularly challenging in distributed data processing due to simultaneous processing across different data partitions.


Technologies involved in distributed data processing include Apache Hadoop, used for storing and processing large data sets; Spark, which enables in-memory processing for faster performance; and Kafka for real-time data ingestion and processing.


Project Implementation

  • Define the Problem: Choose a specific problem that requires processing a large data set. The problem should be complex enough to warrant the use of distributed data processing.

  • Identify the Data Source: This could be a public dataset, simulated data, or data generated from other sources like IoT devices.

  • Design the System Architecture: Sketch out the architecture of your system. Identify where the data will be ingested from, how it will be partitioned, and how it will be processed.

  • Select the Technologies: Decide on the technologies and tools you'll use for data ingestion, partitioning, processing, and storing the results. Apache Hadoop, Spark, and Kafka are common choices, but feel free to explore others.

  • Implement Data Ingestion: Set up your system to consume data from your chosen source. This could involve setting up Kafka topics and producers or creating scripts to read data from a database or a file system.

  • Implement Data Processing: Develop the logic for partitioning and processing your data. This could involve setting up MapReduce jobs in Hadoop or creating Spark jobs.

  • Implement Data Storage and Retrieval: Once your data has been processed, you need to store the results in a meaningful way. This could involve writing the results back to a database or a file system, or visualizing them with a tool like Tableau or Power BI.

  • Test the System: Test your system to ensure it works as expected. This could involve running test data through the system and validating the output.

  • Optimize and Refine: After initial testing, identify areas for improvement. This could involve optimizing the data partitioning and processing logic or improving the system's fault tolerance.


Example 1: Log Analysis System

In this project, you create a distributed system that ingests logs from various sources, processes them, and provides insights. (A short code sketch of the processing step follows this list.)

  • Data source: Logs from different services of a hypothetical application.

  • Ingestion: Use a distributed messaging system like Kafka to ingest logs in real time.

  • Processing: Use a processing system like Apache Spark to process and analyze the data. For example, you could calculate the number of logs per service, identify the most common error messages, detect anomalies, etc.

  • Storage: Store the processed data in a distributed file system like Hadoop HDFS.

  • Output: Create a simple dashboard to visualize the insights derived from the processed data.
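
To make the processing step more concrete, here's a minimal sketch of it in PySpark Structured Streaming. It assumes a Kafka broker on localhost:9092, a hypothetical "app-logs" topic where each message looks like "service|level|message", and that the Spark Kafka connector package is available; treat it as a starting point, not a finished pipeline.

```python
# Minimal log-analysis sketch: Kafka -> Spark -> running counts per service/level.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("log-analysis").getOrCreate()

# Ingest: read the raw log stream from the (assumed) "app-logs" Kafka topic.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "app-logs")
       .load())

# Parse: split "service|level|message" into columns.
logs = (raw.selectExpr("CAST(value AS STRING) AS line")
        .withColumn("service", F.split("line", r"\|").getItem(0))
        .withColumn("level", F.split("line", r"\|").getItem(1)))

# Process: keep a running count of log lines per service and level.
counts = logs.groupBy("service", "level").count()

# Output: write to the console for now; swap this sink for HDFS or a dashboard-facing store.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```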


Example 2: Real-time Twitter Sentiment Analysis

In this project, you're building a system that fetches tweets in real time, processes them to understand the sentiment behind each tweet, and stores the analysis for future use. (A sketch of the sentiment step follows this list.)

  • Data source: Twitter's Streaming APIs for real-time tweets.

  • Ingestion: Use Twitter API to ingest tweets in real time.

  • Processing: Use Apache Flink or Storm to process the tweets and perform sentiment analysis using a machine-learning model (like a pre-trained model from the Natural Language Toolkit).

  • Storage: Store the processed data in a NoSQL database like Apache Cassandra.

  • Output: Develop a simple web interface that displays the live sentiment analysis, for instance, the percentage of positive, negative, and neutral tweets.
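
As a hedged sketch of the sentiment step only: the processing bullet mentions a model from the Natural Language Toolkit, so this uses NLTK's VADER analyzer on tweet texts that are assumed to already be arriving as plain strings (the Twitter ingestion and the Flink/Storm wiring are left out).

```python
# Classify incoming tweet texts and keep running sentiment totals.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of VADER's lexicon
analyzer = SentimentIntensityAnalyzer()

def classify(tweet_text: str) -> str:
    """Return 'positive', 'negative' or 'neutral' for one tweet."""
    score = analyzer.polarity_scores(tweet_text)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

# Feed each incoming tweet through classify() and keep the totals the dashboard would show.
totals = {"positive": 0, "negative": 0, "neutral": 0}
for tweet in ["Loving the new release!", "This outage is awful."]:
    totals[classify(tweet)] += 1
print(totals)
```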


Suggested: Questions that matter — 10 questions for every backend developer (and how to answer them)


Microservices-based E-commerce Platform

What will recruiters like about this project?

  • Real-world Applications: E-commerce platforms are ubiquitous and provide a tangible use case for the microservices architecture. This project provides a practical, relatable context to apply your skills.

  • Scalability & Flexibility: Building a microservices-based system demonstrates your ability to create flexible, scalable applications, a must-have skill for modern back-end developers.

  • Understanding of Microservices: This project showcases your grasp of microservices and related technologies, a popular architectural style many recruiters look for.

  • Portfolio Differentiator: The complexity and real-world applicability of an e-commerce platform will make your portfolio stand out to potential employers.

Skills needed for this project

  • Microservices Understanding: A strong grasp of the principles and advantages of the microservices architectural style is essential.

  • Familiarity with Containerization & Orchestration: Knowledge of Docker and Kubernetes, or similar tools, will be required for containerizing services and managing service deployments.

  • Proficiency in APIs: Experience in designing RESTful APIs or gRPC services for communication between services is necessary.

  • Database Design: Each microservice will have its own database, requiring a good understanding of database design principles and techniques.

  • Understanding of Load Balancing and Service Discovery: These are important aspects of maintaining performance and reliability in a microservices architecture.


Things to know

A microservices architecture involves decomposing an application into loosely coupled services, where each service is responsible for a single business capability.


Microservices allow for independent development, deployment, scaling, and failure isolation, which can significantly enhance the flexibility and reliability of applications.


An API gateway is a server that acts as an entry point into a microservices-based application, providing a single point of interaction for the client and routing requests to appropriate services.
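
As a toy illustration of that idea (not a production gateway), the sketch below is a single FastAPI app that acts as the entry point and forwards GET requests to internal services; the service names and URLs are assumptions. In practice you'd usually reach for an off-the-shelf gateway such as Kong or NGINX, which adds authentication, rate limiting, and retries on top of this routing.

```python
# Toy API gateway: one public entry point that routes requests to internal services.
import httpx
from fastapi import FastAPI, Request, Response

app = FastAPI(title="api-gateway")

# Map of URL prefixes to the (hypothetical) internal services that own them.
ROUTES = {
    "users": "http://user-service:8001",
    "products": "http://product-service:8002",
    "orders": "http://order-service:8003",
}

@app.get("/{service}/{path:path}")
async def proxy(service: str, path: str, request: Request) -> Response:
    base = ROUTES.get(service)
    if base is None:
        return Response(status_code=404, content=b"unknown service")
    async with httpx.AsyncClient() as client:
        upstream = await client.get(f"{base}/{path}", params=dict(request.query_params))
    return Response(content=upstream.content,
                    status_code=upstream.status_code,
                    media_type=upstream.headers.get("content-type"))
```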


Service discovery allows services to find and communicate with each other without hard-coded addresses, crucial for systems where services may be dynamically relocated.


Load balancing involves distributing network traffic across multiple servers to ensure no single server bears too much demand, enhancing system performance and resilience.


Key technologies involved in microservices-based applications include Docker for containerization of services, Kubernetes for container orchestration, RESTful APIs or gRPC for inter-service communication, and various database technologies for data persistence.


Project Implementation

  • Identify the Services: Start by identifying the business capabilities your application requires (e.g., User Management, Product Catalog, Shopping Cart, Order Processing, Payment Handling, etc.) and map them to individual services.

  • Design the APIs: Each service will need to expose APIs for communication. Determine the operations each service needs to provide and design your APIs accordingly.

  • Choose Your Technologies: Decide on the technologies you'll use for each component. This could include programming languages, database technologies, and tools for containerization and orchestration.

  • Containerize Your Services: Once you've developed your services, containerize them using Docker or a similar tool. This encapsulates each service with its environment, enhancing portability and consistency.

  • Implement Service Discovery: Choose a service discovery method compatible with your chosen orchestration tool. This allows your services to communicate with each other dynamically.

  • Implement Load Balancing: Depending on your orchestration tool, you may need to set up load balancing to distribute network traffic efficiently across your services.

  • Test Your Application: Test your application as a whole, as well as individual services. This will help ensure each service works correctly and the system works as expected.


Example 1: Online bookstore

In this project, you're creating a distributed system for an online bookstore. The system should be able to handle a variety of tasks such as inventory management, user management, order processing, and payment processing.


  • User Service: Handles user registration, login, and profile management. It could use JWT (JSON Web Token) for user authentication.

  • Inventory Service: Manages the book inventory. It's responsible for tasks like adding new books to the inventory, updating the stock, etc.

  • Order Service: Handles all the tasks related to orders, such as creating a new order, updating the order status, and so on.

  • Payment Service: Processes payments. It could integrate with a third-party payment gateway for actual payment processing.

These services could be containerized using Docker and orchestrated with Kubernetes. You could use RESTful APIs or gRPC for communication between services.
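
Here's a minimal sketch of what one of these services could look like as a standalone REST API, using FastAPI with an in-memory dictionary standing in for the service's own database; the endpoint paths and fields are illustrative.

```python
# Inventory Service sketch: a self-contained REST API for managing books.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="inventory-service")

class Book(BaseModel):
    isbn: str
    title: str
    stock: int

books: dict[str, Book] = {}  # stand-in for the service's own database

@app.post("/books")
def add_book(book: Book) -> Book:
    """Add a new book to the inventory."""
    books[book.isbn] = book
    return book

@app.get("/books/{isbn}")
def get_book(isbn: str) -> Book:
    """Look up a book; the Order Service would call this over HTTP."""
    if isbn not in books:
        raise HTTPException(status_code=404, detail="Book not found")
    return books[isbn]

# Run with: uvicorn inventory_service:app --port 8001
# Package it in its own Docker image and deploy it independently of the other services.
```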


Example 2: Fashion retailer

For a more complex project, consider a microservices-based system for a hypothetical fashion retailer.


This platform could include features such as product management, cart management, order management, user management, and search functionality.

  • User Service: Handles user registration, login, and profile management.

  • Product Service: Manages all tasks related to products such as adding new products, updating product details, etc.

  • Cart Service: Manages the shopping cart. It allows users to add items to the cart, update quantities, and remove items from the cart.

  • Order Service: Handles order creation, tracking, and history.

  • Search Service: Provides product search functionality. This could use Elasticsearch to enable full-text search and filter capabilities.


Each service could have its own database to achieve data decoupling and ensure microservices are loosely coupled.


Communication between services could be achieved through RESTful APIs, or an event-driven approach could be used with a message broker like RabbitMQ or Kafka.
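
If you take the event-driven route, the sketch below shows the Order Service publishing an "order created" event to RabbitMQ with the pika client; the queue name and payload are made up for illustration, and a consumer (say, the Product Service adjusting stock) would subscribe to the same queue.

```python
# Publish an "order created" event that other services consume asynchronously.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="order.created", durable=True)

event = {"order_id": "1234", "items": [{"sku": "TSHIRT-M", "qty": 2}]}
channel.basic_publish(
    exchange="",
    routing_key="order.created",
    body=json.dumps(event),
    properties=pika.BasicProperties(delivery_mode=2),  # mark the message persistent
)
connection.close()
```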


Suggested: The most comprehensive guide to getting a backend developer job in 2023


A real-time analytics engine

What will recruiters like about this project?

  • Real-time Processing Relevance: As industries move toward real-time data insights for making business decisions, having a real-time analytics project in your portfolio showcases your readiness to work on current technologies.

  • Complex System Understanding: This project exhibits your ability to build and maintain complex systems, a quality sought after by many recruiters.

  • Data Stream Processing: Knowledge of stream processing and real-time analytics is becoming increasingly important in many fields, from finance to IoT, enhancing your attractiveness as a candidate.

  • Visualization Skills: A critical part of analytics is making data accessible and understandable. This project demonstrates your ability to present data in a clear, visual format.


Skills needed for this project

  • Understanding of Stream Processing: You'll need a solid grasp of concepts like event time, processing time, and windowing.

  • Familiarity with Real-time Processing Tools: Knowledge of Apache Storm, Flink, or similar technologies is essential.

  • Database Skills: Understanding of database technologies, especially those suited for real-time data, like Elasticsearch, is required.

  • Data Visualization Skills: You should be comfortable with data visualization tools like Kibana.


Things to know

Stream Processing involves handling data in real-time as it arrives, rather than batch processing where data is collected over a period and processed together. This enables near-instant insights and responses, critical in areas like fraud detection or system monitoring.


Windowing in stream processing refers to the division of continuous data streams into discrete chunks (windows) based on specific criteria, such as time or the number of events.


Understanding Event time vs Processing time is key in stream processing. Event time is when an event actually occurred, while processing time is when the event is processed by the system. Handling the discrepancy between these two can be a complex but crucial part of stream processing.
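
A plain-Python sketch makes the distinction concrete: the tumbling-window counter below groups events by the timestamp each event carries (event time), so an event that arrives late still lands in the window it belongs to. A real system would use Flink's or Storm's windowing operators, which also handle watermarks and very late events.

```python
# Count events per 60-second tumbling window, keyed by event time.
from collections import defaultdict
from datetime import datetime, timezone

WINDOW_SECONDS = 60
counts: dict[int, int] = defaultdict(int)

def window_start(event_ts: float) -> int:
    """Align an event timestamp to the start of its 60-second window."""
    return int(event_ts // WINDOW_SECONDS) * WINDOW_SECONDS

# Each event carries its own timestamp (event time); arrival order may differ.
events = [
    {"ts": 1700000005.0, "page": "/home"},
    {"ts": 1700000062.0, "page": "/checkout"},
    {"ts": 1700000010.0, "page": "/home"},  # arrives late, still counted in the right window
]

for event in events:
    counts[window_start(event["ts"])] += 1

for start, n in sorted(counts.items()):
    print(datetime.fromtimestamp(start, tz=timezone.utc), n, "events")
```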


Key technologies used in real-time analytics include Apache Storm or Flink for stream processing, Elasticsearch for storing and querying real-time data, and Kibana for data visualization.


Project Implementation

  • Identify Your Data Source: This could be a live feed from a social media platform, a stream of log data from a web server, or any other source of real-time data.

  • Design the System Architecture: Identify where the data will be ingested from, how it will be processed, and where it will be stored. Plan how you will visualize the data.

  • Select the Technologies: Choose which technologies to use for each part of your system. This could include Apache Storm or Flink for processing, Elasticsearch for data storage, and Kibana for visualization.

  • Implement Data Ingestion: Set up your system to consume data from your source. This may involve setting up listeners or data collectors.

  • Implement Data Processing: Implement your stream processing logic. This might include filtering, aggregating, or analyzing data as it arrives.

  • Implement Data Storage and Retrieval: Store your processed data in a way that supports efficient real-time queries. Depending on your chosen technology, this may involve setting up Elasticsearch indices or similar structures.

  • Implement Data Visualization: Set up visualizations for your real-time data. This could include time-series graphs, pie charts, or other visual formats that provide insight into your data.

  • Test Your System: Run your system with test data to verify that it works correctly. This should cover individual components and the system as a whole.


Example 1: Real-time hashtag tracker

Here, you're creating a real-time analytics system that ingests tweets from Twitter's streaming API, tracks the usage of specific hashtags, and presents the data in a real-time dashboard. (A short sketch of the counting step follows this list.)

  • Data Ingestion: Use Twitter's streaming API to ingest tweets in real time.

  • Stream Processing: Use Apache Flink or Storm to process the incoming tweets, filter them by hashtags, and count the occurrence of each hashtag.

  • Data Storage: Use a database like Redis or Cassandra to store the hashtag counts.

  • Real-time Dashboard: Use a visualization tool like Kibana or Grafana to display the hashtag counts in real-time.
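
Here's the short sketch mentioned above: the counting and storage steps using the redis-py client, assuming tweet texts already arrive as plain strings; the key name and regex are illustrative.

```python
# Count hashtags in a Redis sorted set and expose a "top N" query for the dashboard.
import re
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
HASHTAG_RE = re.compile(r"#\w+")

def process_tweet(text: str) -> None:
    """Increment a counter for every hashtag found in the tweet."""
    for tag in HASHTAG_RE.findall(text.lower()):
        r.zincrby("hashtag_counts", 1, tag)

def top_hashtags(n: int = 10):
    """What the dashboard would query: the n most used hashtags with their counts."""
    return r.zrevrange("hashtag_counts", 0, n - 1, withscores=True)

process_tweet("Shipping day! #backend #devjobs")
print(top_hashtags())
```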

Example 2: Real-time web analytics

For this project, you could build a system that collects, processes, and visualizes data from a website in real-time, similar to Google Analytics but focused on real-time data.


  • Data Collection: Use JavaScript on your website to send data about each user's actions to your analytics system. This could include data like page views, time on page, and click events.

  • Stream Processing: Use a stream processing technology like Apache Storm or Flink to process the data in real time. This could include tasks like sessionization (grouping pageviews into sessions), counting pageviews, and tracking user paths through the site.

  • Data Storage: Store the processed data in a suitable database like Elasticsearch.

  • Real-time Dashboard: Use a tool like Kibana to display the data in real-time. This could include information like the current number of active users, the most viewed pages, and the most common user paths.


Suggested: The ultimate guide to writing a fantastic developer resume (with examples)


Machine Learning Deployment Pipeline

What will recruiters like about this project?

  • Modern Skillset: Building an ML deployment pipeline exposes you to the cutting-edge intersection of software engineering and machine learning, a highly sought-after skillset.

  • System Design: Creating a deployment pipeline demands a high-level understanding of system design and efficiency, crucial attributes for any backend developer.

  • Operational ML Experience: Understanding how machine learning models go from development to deployment and maintenance is vital in modern software organizations. This project showcases your competency in this area.

  • Scalability: A well-designed deployment pipeline should be scalable and robust, demonstrating your ability to handle large workloads and complex systems.


Skills needed for this project

  • Machine Learning Basics: Familiarity with the basics of training models and hyperparameter tuning.

  • Deployment Knowledge: Understanding of deploying machine learning models in a production environment.

  • Versioning and Monitoring: Skills in version control, model tracking, and monitoring.

  • Familiarity with Relevant Tools: Working knowledge of tools such as TensorFlow, PyTorch, Kubeflow, and Seldon Core.


Things to know

Training models is the process of learning patterns from data. Machine learning frameworks like TensorFlow or PyTorch can be used for this purpose.


Hyperparameter tuning involves adjusting the parameters of the learning algorithm to improve the model's performance.
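
As a small, hedged example of what tuning looks like in code (scikit-learn is used here for brevity rather than TensorFlow or PyTorch), a grid search cross-validates every combination of hyperparameters and reports the best one:

```python
# Grid-search two random-forest hyperparameters with 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 5, 10]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```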


Versioning is important in machine learning to keep track of different versions of datasets, features, and models. It allows developers to reproduce experiments and roll back to previous models if needed.


Deployment is the process of integrating a machine learning model into an existing production environment where it can take in input data and return output.


Monitoring involves keeping an eye on the deployed models, checking if they're working as expected, and retraining them if necessary.


Key technologies involved in the machine learning deployment pipeline are machine learning libraries like TensorFlow or PyTorch, pipeline tools like Kubeflow, and model-serving frameworks like Seldon Core.


Project Implementation

  • Identify the Problem: Start by deciding what problem your machine learning model will solve. This could be a classic prediction problem, a classification problem, or something else.

  • Collect and Preprocess Data: Gather the data you'll use to train your model. Preprocess it to make it suitable for use in your chosen machine learning library.

  • Train a Model: Use your chosen machine learning library to train a model on your data. This will involve choosing a learning algorithm and possibly tuning its hyperparameters.

  • Version Your Model: Use version control to keep track of your model, your training data, and any hyperparameters or preprocessing steps you used.

  • Deploy Your Model: Use a model serving tool to deploy your model in a way that it can be used to make predictions. This might involve setting up a web service or integrating it with an existing system.

  • Set Up Monitoring: Once your model is deployed, set up a way to monitor its performance. This should involve collecting some form of feedback on the model's predictions so that you can know if and when it starts to degrade.

  • Implement Retraining: Set up a way to retrain your model on new data. This might be done periodically, or it might be triggered by your monitoring system if the model's performance degrades.


Example 1: Predictive Maintenance System

This isn't a very complicated project. You design a machine learning deployment pipeline for a predictive maintenance system, which uses ML models to predict the likelihood of machinery failures based on historical data. (A minimal end-to-end sketch follows this list.)

  • Model Training: Develop a predictive model using a machine learning framework like TensorFlow or PyTorch, and train it with historical data of machinery failures.

  • Model Validation and Selection: Validate the performance of your model using suitable metrics and techniques, and choose the best-performing model.

  • Model Serving: Use a tool like TensorFlow Serving or Seldon Core to serve your model, allowing it to receive prediction requests.

  • Continuous Training and Deployment: Implement a pipeline for continuous training and deployment. This might involve retraining your model with new data, comparing the performance with the currently deployed model, and replacing the old model if the new one performs better.
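
Here's the minimal end-to-end sketch mentioned above, covering train, version, and the hand-off to serving. It uses scikit-learn and joblib instead of a full TensorFlow/Kubeflow stack, and the sensor features, the synthetic failure rule, and the file layout are all assumptions.

```python
# Train a failure-prediction model, save it as a versioned artifact, note the serving step.
from pathlib import Path

import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for historical sensor data: [temperature, vibration, hours_since_service]
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
y = (X[:, 1] + 0.5 * X[:, 2] > 1.0).astype(int)  # 1 = failed soon after these readings

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# "Versioning": each trained model gets its own artifact path, tracked alongside
# the data snapshot and hyperparameters used, so any version can be reproduced.
version = "v1"
artifact = Path("models") / f"failure_model_{version}.joblib"
artifact.parent.mkdir(exist_ok=True)
joblib.dump(model, artifact)

# Serving: a tool like TensorFlow Serving or Seldon Core (or a small FastAPI app)
# loads this artifact and calls model.predict_proba(...) for each prediction request.
```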


Example 2: Fraud Detection System

For a more challenging project, you could create a deployment pipeline for a fraud detection system. This system could use machine learning models to identify potentially fraudulent transactions in real time.

  • Model Training: Train a model to identify fraudulent transactions using a machine learning framework like TensorFlow or PyTorch. You'll likely need to address challenges like the imbalanced classes in your training data (since most transactions are not fraudulent).

  • Model Validation and Selection: Validate your model's performance and select the best model based on appropriate metrics (considering both precision and recall may be important in this context).

  • Model Serving: Serve your model with a tool like TensorFlow Serving or Seldon Core so it can receive and respond to prediction requests.

  • Continuous Training and Deployment: Design a pipeline for retraining the model with new data, validating its performance, and deploying the updated model if it performs better than the current one.


Suggested: What are your weaknesses? What should developers say?


Blockchain-based Decentralized Application (DApp)

What will recruiters like about this project?

  • Emerging Technologies: Blockchain technology is an emerging field with increasing demand. Building a DApp is an excellent way to demonstrate your ability to learn and utilize cutting-edge technology.

  • Understanding of Decentralization: This project demonstrates your understanding of the concepts and benefits of decentralization, which is critical in blockchain applications.

  • Security Skills: Developing a DApp requires a solid understanding of security, as blockchain applications need to be secure by design.

  • Complex System Understanding: Like the other projects, this one also proves your ability to design and manage complex systems, but with the added twist of the decentralized blockchain environment.


Skills needed for this project

  • Blockchain Knowledge: Understanding of blockchain basics, consensus mechanisms, and smart contracts.

  • Programming Skills: Familiarity with blockchain-specific languages like Solidity, along with client libraries such as Web3.js for interacting with the chain.

  • Backend Development: Ability to design and implement the backend for a decentralized application.

  • Data Management: Knowledge of decentralized storage systems like IPFS.


Things to know

Consensus mechanisms are methods used in blockchain to agree on a single version of the blockchain's state. They are central to blockchain's decentralization and security.


Smart contracts are self-executing contracts with the agreement directly written into lines of code, stored on the blockchain.


Decentralized applications (DApps) are applications that run on a P2P network of computers rather than a single computer, leveraging blockchain technology.


Key technologies involved in developing DApps include the Ethereum blockchain, the Solidity programming language for writing smart contracts, Web3.js for interacting with the Ethereum blockchain, and IPFS for decentralized storage.


Project Implementation

  • Identify a Use Case: Decide what kind of application you want to build. This could be a decentralized voting system, a token exchange, or any other application that could benefit from blockchain's transparency and security.

  • Design Your Smart Contracts: Figure out what functionality should be encoded into smart contracts. This will generally involve any operations that need to be trustless and transparent.

  • Write Your Smart Contracts: Use Solidity to code your smart contracts. Make sure to follow best practices for security and efficiency.

  • Test Your Smart Contracts: Before deploying them to the blockchain, thoroughly test your smart contracts. This is crucial because once they are deployed, they cannot be modified.

  • Deploy Your Smart Contracts: After testing, deploy your smart contracts to the Ethereum blockchain.

  • Develop Your Application Backend: While much of your application's functionality will be on the blockchain, you'll likely still need a traditional backend for certain tasks. Develop this backend to interact with your smart contracts using Web3.js.

  • Design and Implement Your Frontend: Create a user interface for your application. This should provide users with a way to interact with your smart contracts.

  • Test Your Application: Make sure to thoroughly test all components of your application before releasing it.


Example 1: Decentralized Marketplace

In this project, you can create a peer-to-peer marketplace where buyers and sellers can trade goods or services directly, without the need for an intermediary. (A short backend-to-contract sketch follows this list.)

  • Smart Contract: Write a smart contract in Solidity for the Ethereum blockchain. The contract should include functions for listing a new item for sale, buying an item, and withdrawing funds.

  • Frontend Application: Use Web3.js or ethers.js to interact with the smart contract from a frontend application. This could be a simple web app where users can browse items for sale, list new items, and make purchases.

  • IPFS: Use InterPlanetary File System (IPFS) for storing item images and descriptions in a decentralized way.
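
Earlier sections mention Web3.js; the same idea from a Python backend uses the web3.py client, as in the sketch below. The node URL, contract address, ABI file, and the listItem/itemCount functions are all assumptions standing in for your own deployed marketplace contract (names follow web3.py v6).

```python
# Backend-to-contract sketch: list an item on the (hypothetical) marketplace contract.
import json

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # local dev node, e.g. Hardhat or Ganache

with open("Marketplace.abi.json") as f:  # ABI produced when you compiled the contract
    abi = json.load(f)

contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # your deployed contract address
    abi=abi,
)

seller = w3.eth.accounts[0]  # an unlocked dev-node account

# Call the hypothetical listItem(name, priceWei) function; this sends a transaction.
tx_hash = contract.functions.listItem("Used textbook", w3.to_wei(0.05, "ether")).transact(
    {"from": seller}
)
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("listed in block", receipt.blockNumber)

# Reads are free: fetch the item count via a (hypothetical) view function.
print("items listed:", contract.functions.itemCount().call())
```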


Example 2: Decentralized Voting System

For a more socially impactful project, consider building a decentralized voting system. This could use the blockchain to ensure transparency and prevent vote tampering.

  • Smart Contract: Write a smart contract in Solidity for the voting process. This could include functions for registering candidates, casting a vote, and declaring results.

  • Frontend Application: Develop a frontend application that interacts with the blockchain. This could allow users to register to vote, view the list of candidates, cast their vote, and view the current results.

  • Identity Verification: Implement a form of identity verification to ensure that each person can only vote once. This could be based on Ethereum addresses, or you could explore more sophisticated methods.


Suggested: Soft Skills for developers — the non-technical skills employers are looking for


Conclusion

So, there we are — the five types of backend developer projects that’ll help you land a job.


Now, you don't have to do all five types. In fact, doing even one or two of these projects really well will demonstrate to potential employers that you're highly skilled.


Remember, that’s the goal here — to show recruiters that you’re an expert in your field. So, instead of starting all these projects at once, pick one, and then do it amazingly well. Great job opportunities won’t be far behind.


If you’re already looking for high-paying backend developer jobs, check out Simple Job Listings. Every job we list is verified, usually pays really well, and is always remote.


Visit Simple Job Listings and find amazing remote developer jobs. Good luck!

