erick.africa

Senior Software Engineer · 2021 → present · Uganda & Kenya

SafeBoda — identity, dispatch, and event processing at ride-hailing scale

Core contributor across identity, dispatch, payments, notifications, and internal ops. Led the rewrite of the IAM service from Node.js to Elixir/Phoenix; extended the real-time dispatch engine handling thousands of ride requests daily; built the Kafka event processor powering driver earnings, loyalty, and SOS coordination.

Role: Senior Software Engineer
Years: 2021 → present
Region: Uganda · Kenya
Stack: Elixir · NestJS · Kafka · GraphQL · Postgres

The IAM rewrite — Node.js → Elixir/Phoenix

The legacy auth service was a Node.js monolith with session-fixation vulnerabilities, slow login latency, and growing test debt. The rewrite moved authentication to Elixir/Phoenix with a GraphQL surface (Absinthe), JWT + TOTP, and token blacklisting backed by Nebulex over Redis.
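The blacklisting idea is simple even though the production plumbing (Nebulex over Redis, in Elixir) is not: on logout, record the token's `jti` until the token would have expired anyway; on every verify, reject blacklisted ids. A minimal concept sketch in TypeScript, with an in-memory `Map` standing in for the Redis-backed cache:

```typescript
// Concept sketch of token blacklisting. The real service is Elixir/Phoenix
// with Nebulex over Redis; an in-memory Map with lazy expiry stands in here.
type Entry = { expiresAt: number };

class TokenBlacklist {
  private entries = new Map<string, Entry>();

  // Blacklist a token's jti until the token itself would have expired,
  // so the blacklist never grows past the set of live tokens.
  revoke(jti: string, tokenExpUnixMs: number): void {
    this.entries.set(jti, { expiresAt: tokenExpUnixMs });
  }

  // A token is rejected if its jti is blacklisted and not yet expired.
  isRevoked(jti: string, now: number = Date.now()): boolean {
    const entry = this.entries.get(jti);
    if (!entry) return false;
    if (entry.expiresAt <= now) {
      this.entries.delete(jti); // lazy eviction, like a cache TTL
      return false;
    }
    return true;
  }
}

const blacklist = new TokenBlacklist();
blacklist.revoke("jti-123", Date.now() + 60_000);
console.log(blacklist.isRevoked("jti-123")); // true
console.log(blacklist.isRevoked("jti-999")); // false
```

Scoping the entry's lifetime to the token's own expiry is what makes a TTL cache the right storage: nothing needs a background sweeper.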

The hardest part wasn't the rewrite — it was the cutover. We needed zero-downtime migration while users were authenticating in real time. The strategy:

  • Dual-write phase. Both services wrote to the same Postgres; reads still hit Node.
  • Shadow-read phase. Elixir served reads in parallel; we compared outputs in logs and triaged divergence.
  • Cutover. 5% of traffic, then 25%, then 100% over a week.
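The cutover percentages above need one property: a given user must stay pinned to one backend for the whole rollout, or their session flip-flops mid-login. A hedged sketch of that routing decision, using a deterministic hash of the user id rather than a random draw (the hash and function names are illustrative, not the production code):

```typescript
// Percentage-based cutover routing. Hashing the stable user id (instead
// of rolling dice per request) pins each user to one backend throughout
// the rollout, so sessions never bounce between Node and Elixir.
function bucket(userId: string): number {
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple deterministic hash
  }
  return h % 100; // bucket in [0, 99]
}

function routeToElixir(userId: string, rolloutPercent: number): boolean {
  return bucket(userId) < rolloutPercent;
}

// Ramping 5% -> 25% -> 100% only ever *adds* users to the new service;
// no one who already moved gets routed back.
console.log(routeToElixir("rider-42", 100)); // true
console.log(routeToElixir("rider-42", 0));   // false
```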

Result: auth latency dropped, the session-fixation vulnerability was closed, and the Phoenix service became the foundation for downstream identity features (KYC, fraud signals, multi-tenant device tracking).

Real-time dispatch — finite state machines + Phoenix Channels

Dispatch is the heart of any ride-hailing platform. A trip's lifecycle is a state machine — requested, matching, assigned, en_route, started, completed, with several failure branches. Modeling this as an explicit FSM in Elixir made the system easier to reason about and faster to debug when things broke.
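The explicit-FSM idea can be sketched as a transition table: legal moves are data, and anything not in the table is rejected loudly instead of silently corrupting trip state. A minimal TypeScript sketch (state names from the prose; the event names and failure branches here are illustrative assumptions — the production version is Elixir):

```typescript
// Trip lifecycle as an explicit transition table.
type TripState =
  | "requested" | "matching" | "assigned"
  | "en_route" | "started" | "completed"
  | "cancelled" | "no_driver";

type TripEvent =
  | "match" | "assign" | "driver_departs" | "pickup"
  | "drop_off" | "cancel" | "match_timeout";

const transitions: Record<TripState, Partial<Record<TripEvent, TripState>>> = {
  requested: { match: "matching", cancel: "cancelled" },
  matching:  { assign: "assigned", cancel: "cancelled", match_timeout: "no_driver" },
  assigned:  { driver_departs: "en_route", cancel: "cancelled" },
  en_route:  { pickup: "started", cancel: "cancelled" },
  started:   { drop_off: "completed" },
  completed: {},   // terminal
  cancelled: {},   // terminal
  no_driver: {},   // terminal
};

// Illegal events throw instead of silently corrupting state.
function step(state: TripState, event: TripEvent): TripState {
  const next = transitions[state][event];
  if (!next) throw new Error(`illegal transition: ${state} + ${event}`);
  return next;
}

let trip: TripState = "requested";
const happyPath: TripEvent[] = ["match", "assign", "driver_departs", "pickup", "drop_off"];
for (const e of happyPath) trip = step(trip, e);
console.log(trip); // "completed"
```

The debugging win is that every bad event produces an error naming the exact state/event pair, rather than a trip quietly ending up in an impossible state.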

The dispatch service is stateless across Phoenix nodes — trip state lives in Postgres, broadcast lives in Phoenix Channels with PubSub for cross-node fan-out. The matching funnel processes thousands of ride requests daily.

Event processor — Kafka + NestJS

Driver earnings, loyalty points, automated suspension, and SOS coordination all flow off a Kafka topic. A NestJS consumer reads events, transforms, and routes to downstream services. BullMQ provides reliable async delivery for the slow side-effects (push, SMS, ledger entries).
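The consume-transform-route step reduces to a routing table keyed on event type, with a dead-letter path for anything unrecognized. A hedged sketch in TypeScript — the event names, handler shape, and queue names are hypothetical; the real consumer is a NestJS service reading Kafka, with BullMQ carrying the slow side-effects:

```typescript
// Sketch of event routing in the consumer. Handlers here just return the
// downstream queue/job they would enqueue, keeping the sketch self-contained.
interface TripEventMsg {
  type: string;
  driverId: string;
  payload: Record<string, unknown>;
}

type Handler = (e: TripEventMsg) => string;

// Routing table: event type -> downstream handler. Hypothetical names.
const handlers: Record<string, Handler> = {
  "trip.completed": (e) => `earnings:${e.driverId}`,
  "trip.rated":     (e) => `loyalty:${e.driverId}`,
  "trip.sos":       (e) => `sos:${e.driverId}`,
};

// Unknown event types go to a dead-letter path rather than being dropped,
// so schema drift between producers and this consumer is visible.
function route(e: TripEventMsg): string {
  const handler = handlers[e.type];
  return handler ? handler(e) : `dead-letter:${e.type}`;
}

console.log(route({ type: "trip.completed", driverId: "d1", payload: {} }));
// "earnings:d1"
```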

Why NestJS for this and not Elixir? Pragmatism — the existing consumer was Node, the team had Node ergonomics, and the workload is I/O-bound rather than concurrency-heavy. The Elixir IAM service and the Node Kafka consumer talk to each other through well-defined REST + Kafka contracts; neither cares about the other's runtime.

Notifications across millions of monthly deliveries

Push (FCM), SMS (Africa's Talking), WhatsApp, and email — all funneled through a single fan-out service with queue-backed scheduling. Campaign scheduling supports per-tenant rate limits and retry with exponential backoff.
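The retry half of that scheduling is standard exponential backoff with a cap: double the delay per attempt so a struggling downstream (an SMS gateway, say) isn't hammered, and cap it so retries never drift into hours. A sketch with illustrative base/cap values (the production numbers aren't stated in this writeup):

```typescript
// Retry delay with exponential backoff and a cap. baseMs and capMs here
// are illustrative, not the production configuration.
function backoffMs(attempt: number, baseMs = 500, capMs = 60_000): number {
  // attempt 0 -> base, attempt 1 -> 2*base, attempt n -> 2^n * base, capped
  return Math.min(capMs, baseMs * 2 ** attempt);
}

console.log(backoffMs(0));  // 500
console.log(backoffMs(3));  // 4000
console.log(backoffMs(20)); // 60000 (capped)
```

In practice a little random jitter is usually added on top so a burst of simultaneous failures doesn't retry in lockstep.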

Internal ops portal — Vue.js

Driver/rider management, trip monitoring, financial reconciliation, KYC workflow, surge pricing controls, and dispatch configuration. The ops team uses it daily; speed of iteration matters more than visual polish.

Phase 2 of this case study — embedded dispatch FSM with a live map. Drop a pin, watch a simulated ride transition through the state machine via Phoenix Channels.