Serverless Architecture Guide

When functions-as-a-service makes sense—and when it doesn't.

11 min read Architecture Guide
Kasun Wijayamanna

Serverless computing abstracts infrastructure entirely—you deploy code, the cloud runs it. No servers to manage, no capacity planning, no patching. Pay only for execution time. It sounds ideal, but serverless isn't right for every workload.

What Is Serverless?

Functions as a Service (FaaS)

The core of serverless: small units of code (functions) triggered by events. Providers include AWS Lambda, Azure Functions, and Google Cloud Functions. Functions execute in response to triggers and terminate when done.
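As a sketch of the model, here is a minimal Lambda-style handler in Python. The `(event, context)` signature follows AWS Lambda's convention; the event shape here is a hypothetical example, since real shapes depend on the trigger.

```python
import json

def handler(event, context):
    """Invoked once per trigger by the platform; terminates on return."""
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The platform, not your code, decides when and how often this runs.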

Managed Services

Serverless extends beyond FaaS to managed services: serverless databases (DynamoDB, Cosmos DB), API gateways, message queues. The provider handles all infrastructure.

Event-Driven

Serverless architectures are typically event-driven. Functions respond to events: HTTP requests, file uploads, database changes, scheduled triggers, queue messages.

Benefits

No Infrastructure Management

No servers to provision, scale, patch, or monitor. Focus entirely on application logic. Reduced operational burden.

Automatic Scaling

Functions scale automatically with demand. From zero to thousands of concurrent executions. No capacity planning.

Pay Per Use

Pay only for invocations and execution time, billed in fine-grained increments (per millisecond on AWS Lambda, for example). No cost when idle. Excellent for variable workloads.
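A back-of-envelope cost model makes the pay-per-use tradeoff concrete. The prices below are placeholder assumptions, not current provider rates; check your provider's pricing page before relying on the numbers.

```python
# Assumed illustrative prices, NOT real provider rates.
PRICE_PER_MILLION_INVOCATIONS = 0.20  # USD, assumed
PRICE_PER_GB_SECOND = 0.0000166667    # USD, assumed

def monthly_cost(invocations, avg_duration_ms, memory_gb):
    """Rough monthly bill: request charge + compute (GB-seconds) charge."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * memory_gb
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_INVOCATIONS
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost
```

Note the key property: zero invocations means a zero bill, which is what makes serverless attractive for idle-heavy workloads.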

Faster Time to Market

Less infrastructure setup. Faster development cycles. Focus on features, not operations.

Good Serverless Use Cases

  • Event processing: file uploads, data transformations
  • API backends with variable traffic
  • Scheduled tasks and cron jobs
  • Webhooks and integrations
  • Prototypes and MVPs

Challenges and Limitations

Cold Starts

Functions that haven't run recently must be initialised before handling a request: the cold start. This can add hundreds of milliseconds, which is a problem for latency-sensitive applications.

Execution Limits

Functions have execution time limits (typically 15 minutes maximum). Memory and payload size limits. Not suitable for long-running processes.

Statelessness

Functions are stateless—state must be stored externally. Database, cache, or storage. Adds complexity for stateful workflows.
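The pattern looks like this: the function holds nothing between invocations, and any counter or session lives in an external store. The dict-backed store below is a stand-in for DynamoDB, Redis, or similar.

```python
class ExternalStore:
    """In-memory stand-in for an external store (DynamoDB, Redis, etc.)."""
    def __init__(self):
        self._data = {}
    def get(self, key, default=0):
        return self._data.get(key, default)
    def put(self, key, value):
        self._data[key] = value

store = ExternalStore()

def handler(event, context):
    # All state round-trips through the store; the function itself is stateless.
    count = store.get("visits") + 1
    store.put("visits", count)
    return {"visits": count}
```

Every read and write becomes a network call in production, which is the complexity the section above refers to.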

Testing and Debugging

Local development is harder. Distributed tracing across functions is complex. Debugging production issues requires good observability.

Vendor Lock-in

Serverless services are highly provider-specific. Moving between providers requires significant rework.

Cost at Scale

Pay-per-use is great for variable workloads. For consistent, high-volume workloads, dedicated compute can be cheaper.

Common Patterns

API Backend

API Gateway routes requests to Lambda functions. Each endpoint or function group is a separate function. Scales automatically with request volume.
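A single-function variant of this pattern can be sketched as a small dispatcher. The `routeKey` field loosely follows API Gateway's HTTP API payload format ("METHOD /path"); the route handlers are hypothetical.

```python
import json

def get_users(event):
    return {"users": ["alice", "bob"]}  # placeholder data

def create_user(event):
    body = json.loads(event.get("body") or "{}")
    return {"created": body.get("name")}

ROUTES = {
    "GET /users": get_users,
    "POST /users": create_user,
}

def handler(event, context):
    route = ROUTES.get(event.get("routeKey"))
    if route is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(route(event))}
```

The alternative, one function per endpoint, trades a larger deployment footprint for independent scaling and permissions per route.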

Event Processing

Functions triggered by events: S3 file uploads, SQS messages, database changes. Process data, trigger workflows, integrate systems.
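For the file-upload case, the handler's job is mostly unpacking the notification. The `Records` structure below mirrors S3's event notification format, simplified; `process_object` is a hypothetical placeholder for the real work.

```python
def process_object(bucket, key):
    """Placeholder for real processing: resize, transform, index, etc."""
    return f"processed s3://{bucket}/{key}"

def handler(event, context):
    results = []
    # One event can carry multiple records; process each uploaded object.
    for record in event.get("Records", []):
        s3 = record["s3"]
        results.append(process_object(s3["bucket"]["name"], s3["object"]["key"]))
    return results
```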

Scheduled Tasks

Amazon EventBridge (formerly CloudWatch Events) or equivalent triggers functions on a schedule. Reports, cleanup, synchronisation. Replaces cron jobs on servers.

Stream Processing

Process streaming data from Kinesis, Event Hubs, or Pub/Sub. Real-time analytics, event sourcing, log processing.
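A stream handler typically decodes a batch of records and aggregates. Kinesis delivers payloads base64-encoded; the event shape below mirrors the Lambda–Kinesis integration, simplified, and the `value` field is an assumed payload key.

```python
import base64
import json

def handler(event, context):
    total = 0
    for record in event.get("Records", []):
        # Kinesis payloads arrive base64-encoded; decode then parse.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        total += payload.get("value", 0)  # assumed payload field
    return {"sum": total}
```

Batch size and retry behaviour are configured on the event source mapping, not in the function itself.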

When Not to Use Serverless

  • Long-running processes: Execution time limits don't fit
  • Consistent high load: Reserved compute may be cheaper
  • Low-latency requirements: Cold starts are problematic
  • Complex local state: Statelessness adds overhead
  • GPU workloads: Limited or no GPU support

Cost Considerations

| Scenario | Serverless cost efficiency |
| --- | --- |
| Low, variable traffic | Excellent: pay only for use |
| Spiky traffic | Good: scales without pre-provisioning |
| Consistent moderate load | Moderate: compare to reserved compute |
| Consistent high load | Often worse: dedicated compute cheaper |
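The last row can be made concrete with a break-even sketch: at high, steady load the per-invocation cost of serverless eventually exceeds a flat-rate server. Both prices below are illustrative assumptions, not real rates.

```python
SERVERLESS_COST_PER_MILLION = 1.03  # assumed USD: requests + compute
SERVER_MONTHLY_COST = 50.0          # assumed USD: one reserved instance

def cheaper_option(monthly_invocations_millions):
    """Compare linear pay-per-use cost against a flat monthly server cost."""
    serverless = monthly_invocations_millions * SERVERLESS_COST_PER_MILLION
    return "serverless" if serverless < SERVER_MONTHLY_COST else "server"
```

Under these assumed numbers the crossover sits near 50 million invocations a month; the real break-even depends entirely on your function's duration, memory, and provider pricing.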

Summary

Serverless is powerful for event-driven, variable workloads where you want minimal operational overhead. It eliminates infrastructure management and provides automatic scaling and pay-per-use pricing.

But serverless isn't universal. Cold starts, execution limits, and cost at scale require careful consideration. Evaluate each workload: serverless for variable, event-driven work; containers or VMs for consistent, long-running processes.