
Cloud Decision Document — Reference Architecture

Cloud Infrastructure Decisions for a Lagos Logistics Startup

Running a business today requires expert hands and a clear direction for the organisation. Every business is different, and because of that, the technology powering it needs to be tailored to its specific requirements.

This document maps out the cloud infrastructure needs of a 30-person logistics startup based in Lagos, Nigeria. It evaluates three cloud service models (IaaS, PaaS, and SaaS) across three workloads and makes informed recommendations based on a defined set of assumptions.

Walk with me.

The Three Workloads

The company has three workloads under evaluation: a customer-facing website, an internal HR system, and a data analytics pipeline.

For each workload, three cloud deployment options are presented (one per service model) using real products to ground the comparison in practical terms.

Customer-Facing Website

IaaS

AWS EC2

The team would spin up a virtual machine, deploy the application, and configure the environment to match their needs. The provider handles the physical servers and network infrastructure. The team is responsible for the operating system, security patches, runtime, application code, scaling, and deployment pipeline. The main advantage is full ownership of the system and data, with low upfront costs. The tradeoff is the time and cost involved in building and maintaining this infrastructure. Without a dedicated DevOps engineer, this becomes a significant burden for a small team.
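To make the "team configures the environment" responsibility concrete, the sketch below builds the kind of request a team might pass to EC2's RunInstances API (via boto3's `ec2.run_instances`). The AMI ID, instance type, and tag values are placeholder assumptions, not details from this document.

```python
# Hypothetical sketch of an EC2 launch request for the website VM.
# A real call would pass this dict to boto3: ec2_client.run_instances(**request).

def website_vm_request(ami_id: str, instance_type: str = "t3.small") -> dict:
    """Build the parameters for a single web-server instance."""
    return {
        "ImageId": ami_id,              # e.g. an Ubuntu LTS AMI for the region
        "InstanceType": instance_type,  # sized by the team, not the provider
        "MinCount": 1,
        "MaxCount": 1,
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "customer-website"}],
        }],
    }

request = website_vm_request("ami-0123456789abcdef0")
print(request["InstanceType"])  # the team, not the platform, picks the size
```

Note that this is only the first step: the OS hardening, runtime installation, deployment pipeline, and scaling rules described above all still sit with the team.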

PaaS

Render

The team would push their application code to the platform and let it handle the rest. Render takes care of security, automatic scaling, and server configuration. The provider manages the operating system, runtime, and infrastructure. The main advantage is fast deployment with minimal infrastructure knowledge required. The tradeoff is moderate vendor lock-in and a higher cost per unit compared to IaaS, which starts to matter when traffic grows consistently high.

SaaS

Webflow

The team can build and launch a website quickly without needing deep technical skills. The platform handles all backend setup, configuration, and hosting, down to the infrastructure level. The team only manages content, design choices, and integrations within what the platform allows. The tradeoff is that custom backend logic, such as booking systems or real-time shipment tracking, cannot be built inside Webflow. This limits how far the business can grow its web features without eventually moving to a different solution.

Internal HR System

IaaS

AWS EC2

The team would spin up a virtual machine and take full ownership of the environment. This includes configuring the database, managing backups, and handling security. The team can deploy their own HR codebase or use an open-source option like Odoo HR. The main advantage is that employee data never leaves the company's infrastructure, which is important given the sensitivity of HR records. The tradeoff is that this setup demands significant technical effort. For a team without a dedicated DevOps engineer, the ongoing maintenance cost in time and skill is a real concern.

PaaS

Odoo.sh

The team gets a managed platform built specifically for deploying and customising Odoo applications. It includes features like staging servers, custom deployment routes, automated backups, and a guaranteed 99.9% uptime backed by a support team. The provider handles the operating system, runtime, and infrastructure. The main advantage is a quick setup with low maintenance overhead for certified apps. The tradeoff is moderate vendor lock-in. Because the data lives on a third-party platform, teams with strict data residency requirements need to verify whether Odoo.sh meets those obligations before committing.

SaaS

Odoo Online

The team creates an account, configures it for the company, and starts using it right away. The platform handles hosting, maintenance, support, backups, and upgrades automatically. Apps are built, maintained, and updated by the Odoo team in the background. The tradeoff is very limited room for customisation and full dependence on Odoo for service delivery. For a 30-person company with standard HR needs, this simplicity is often worth those limitations. However, teams must carefully consider data residency rules and vendor lock-in risk before choosing this option, particularly given Nigeria's Data Protection Act requirements around how employee data is stored and processed.

Data Analytics Pipeline

IaaS

AWS EC2 (full self-managed stack)

The team would spin up a virtual machine and build the entire pipeline from scratch. A Python script would handle data extraction on a schedule. A custom transformation layer using a library like pandas would reshape the data. A self-managed PostgreSQL instance would store the processed data. A self-hosted Grafana instance would handle visualisation. Every component runs on infrastructure the team fully controls. The main advantage is maximum flexibility at the lowest raw compute cost. The tradeoff is a heavy operational burden. This setup realistically requires a dedicated cloud data engineer, which adds significant hiring and salary cost for a 30-person startup.
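To show what "building the pipeline from scratch" involves at its smallest, here is a minimal extract-transform-load sketch using pandas for the transformation step. The sample data and column names are illustrative assumptions, and sqlite3 stands in for the self-managed PostgreSQL instance so the sketch is self-contained.

```python
import sqlite3
import pandas as pd

# Extract: in production a scheduled script would pull from the operational
# data source; a small in-memory sample stands in for the raw export here.
raw = pd.DataFrame({
    "shipment_id": [1, 2, 3, 4],
    "status": ["delivered", "delivered", "failed", "delivered"],
    "delivery_hours": [24.0, 48.0, None, 30.0],
})

# Transform: reshape with pandas, e.g. keep completed shipments and
# normalise delivery time to days.
processed = (
    raw[raw["status"] == "delivered"]
    .assign(delivery_days=lambda df: df["delivery_hours"] / 24.0)
    .drop(columns=["delivery_hours"])
)

# Load: write to the warehouse table (sqlite3 standing in for the
# self-managed PostgreSQL instance named in the text).
conn = sqlite3.connect(":memory:")
processed.to_sql("shipments_clean", conn, index=False)

loaded = pd.read_sql("SELECT COUNT(*) AS n FROM shipments_clean", conn)
print(int(loaded["n"][0]))  # 3 delivered shipments survive the transform
```

Even this toy version hints at the burden: scheduling, retries, schema changes, backups, and the Grafana layer are all extra work the team would own on IaaS.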

PaaS

AWS Glue + dbt Cloud + Redshift + Metabase Cloud

Using a combination of managed services, the team gets a managed pipeline at each stage. AWS Glue acts as the entry point, crawling data sources like S3 or RDS to handle extraction. dbt Cloud handles transformation, where the team writes SQL logic and the platform runs it on a schedule. Amazon Redshift handles loading as a managed data warehouse that scales automatically on a pay-for-what-you-use basis. Metabase Cloud handles visualisation by connecting directly to the warehouse. The provider manages the underlying servers and operating system across all four tools. The tradeoff is moderate vendor lock-in across multiple platforms, and teams must confirm that each provider meets their data residency obligations.

SaaS

Google Looker Studio

The team connects existing data sources and builds dashboards without managing any infrastructure. The platform handles data ingestion, storage, and visualisation in one place. The main advantage is speed and simplicity. The significant limitation is that the tool supports only basic filtering and aggregation and cannot run custom business logic. For example, if the logistics company needs to calculate a custom delivery efficiency score or apply proprietary pricing rules, Looker Studio cannot support that. This option works best when analytics needs are straightforward enough to fit within the platform's built-in capabilities.
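For contrast, the kind of custom metric described above becomes a few lines of code once the team controls the transformation layer. The scoring formula and column names below are entirely made up for illustration; they are not metrics defined in this document.

```python
import pandas as pd

# Hypothetical "delivery efficiency score": on-time rate weighted against
# average delay. Both the formula and the columns are illustrative.
shipments = pd.DataFrame({
    "promised_hours": [24, 24, 48, 48],
    "actual_hours": [20, 30, 40, 60],
})

# Share of shipments delivered within the promised window.
on_time = (shipments["actual_hours"] <= shipments["promised_hours"]).mean()

# Average lateness in hours, counting early deliveries as zero delay.
avg_delay = (
    (shipments["actual_hours"] - shipments["promised_hours"])
    .clip(lower=0)
    .mean()
)

# Score out of 100: start from the on-time rate, subtract a delay penalty.
efficiency_score = round(100 * on_time - 2 * avg_delay, 1)
print(efficiency_score)
```

This is exactly the class of proprietary logic that sits comfortably in a transformation layer like dbt or pandas but outside what a dashboard-only tool is built for.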

Comparison Table

The table below summarises the key trade-off dimensions across all three service models.

| Category | IaaS | PaaS | SaaS |
| --- | --- | --- | --- |
| Cost model | Low (pay as you go for raw compute) | Medium (pay per usage with a platform premium) | Fixed (subscription per seat or tier) |
| Time to deploy | High (significant setup required) | Medium (platform handles infrastructure setup) | Low (ready to use immediately) |
| OS patches | Hands-on (team manages all patching) | Hands-off (provider handles OS updates) | Hands-off (no OS visibility at all) |
| Scaling | Manual (team configures auto-scaling rules) | Automatic (platform scales with demand) | Automatic (included in the service) |
| Vendor lock-in risk | Low (standard VMs are portable) | Medium (platform conventions vary) | High (data and workflows wrap around one product) |

Note: PaaS is worth considering when the goal is to offload infrastructure security while using pre-configured, audited services such as AWS Glue or Redshift to meet standard compliance frameworks like GDPR or NDPA more efficiently.

Recommendations

Assumptions

These recommendations are made under the following assumptions: the company is a 30-person logistics startup based in Lagos, Nigeria; the team has no dedicated DevOps engineer; the operating environment is cost-conscious; and each workload requires custom application logic but does not need infrastructure-level control. All recommendations are subject to revision if these conditions change.

Workload 1

Customer-Facing Website

Recommended: PaaS via Render

The customer-facing website is the primary interface between the logistics startup and its clients. It requires reliable uptime, the ability to handle unpredictable traffic spikes during peak booking periods, and custom backend logic for features like shipment tracking and booking management. These functional requirements rule out a pure content platform like Webflow, which cannot support application-level logic. At the same time, the team has no dedicated DevOps engineer, meaning any solution that requires ongoing server management would consume engineering time that is better spent building product features. The need for custom logic and limited infrastructure capacity defines the core constraint this recommendation must address.

PaaS via Render directly satisfies both constraints. The platform manages the operating system, runtime, SSL termination, and automatic scaling, removing the infrastructure burden from the team entirely. The team retains full ownership of the application code, which means the custom booking and tracking logic the business needs can be built and deployed without architectural compromise. The main tradeoff is moderate vendor lock-in around Render's deployment conventions. This is acceptable at this stage because the application logic itself remains portable to another platform if the need arises. This recommendation would be revisited if sustained traffic growth made PaaS per-unit costs significantly higher than the labour cost of managing IaaS infrastructure directly, but that threshold is unlikely for a company at this size.

Workload 2

Internal HR System

Recommended: PaaS via Odoo.sh

The internal HR system handles sensitive employee data including salaries, contracts, and personal records. This data sensitivity, combined with Nigeria's Data Protection Act obligations around how personal data is stored and processed, makes data residency a hard constraint rather than a preference. A fully managed SaaS option like Odoo Online would offer the fastest setup and lowest maintenance burden, but it places employee data entirely on third-party infrastructure with no guarantee of local data residency. IaaS via EC2 would provide complete data sovereignty but would require the team to manage the operating system, database, backups, and security patches without a dedicated DevOps engineer. Neither extreme fits the company's situation cleanly. The workload needs a middle path that provides meaningful control over data without imposing unsustainable operational overhead.

PaaS via Odoo.sh provides that balance. The platform manages the underlying infrastructure including automated backups, 99.9% uptime guarantees, and OS-level maintenance, while the team retains control over the application layer and where data is hosted. Because Odoo.sh allows deployment configuration and supports custom routes and staging environments, the team can make informed decisions about data residency in a way that pure SaaS does not allow. The tradeoff is moderate vendor lock-in within the Odoo ecosystem. This recommendation assumes the team can confirm that Odoo.sh's hosting options satisfy NDPA requirements. If that confirmation cannot be obtained, the recommendation shifts to IaaS via EC2 with an open-source HRMS, accepting the operational cost in exchange for unambiguous data control.

Workload 3

Data Analytics Pipeline

Recommended: PaaS via AWS Glue + dbt Cloud + Redshift + Metabase Cloud

The analytics pipeline processes operational data from across the logistics business to produce insights for internal decision-making. Its traffic pattern is batch-like, running on a schedule rather than constantly, and the data volume is expected to grow as the company scales. Most importantly, the pipeline requires custom transformation logic. A logistics operation needs to calculate metrics specific to its business such as delivery efficiency rates, route performance, and customer activity patterns that cannot be expressed through the basic filtering and aggregation tools that SaaS analytics platforms offer. A pure SaaS option like Google Looker Studio is therefore insufficient at the data transformation stage, regardless of its convenience at the visualisation layer. IaaS via EC2 could technically support the full pipeline but would require a dedicated data engineer to build and maintain four separate system components, which is not a realistic investment for a company at this stage.

The PaaS stack of AWS Glue, dbt Cloud, Amazon Redshift, and Metabase Cloud addresses the requirements at each stage of the pipeline without requiring the team to manage any underlying infrastructure. AWS Glue handles scheduled extraction from data sources like S3 and RDS in a serverless environment. dbt Cloud runs the custom transformation logic the team writes in SQL, on a schedule, without a server to provision or maintain. Amazon Redshift provides a managed data warehouse that scales automatically and, in its serverless configuration, bills only for the compute actually used, keeping costs proportional to usage. Metabase Cloud connects to the warehouse and allows the team to build and share dashboards without additional infrastructure. The primary tradeoff is vendor lock-in across multiple AWS and third-party services, and the team must confirm that data residency across these platforms meets NDPA obligations. This recommendation would be reconsidered if the data volume grew large enough that the cost of managed services exceeded the cost of building and running a self-managed pipeline on IaaS.