Twinder Microservice
Event‑driven backend built to absorb traffic spikes while keeping latency consistent across a social matching workload.
View source on GitHub ↗

Role
Backend Engineer
Highlights
- Scale: Java microservices processed ~500M daily requests, with a Redis caching layer and AWS EC2 autoscaling sustaining performance under load (cache-aside sketch below).
- Streaming: A Kafka cluster with batch compression and partitioning fanned work out to multiple consumers, cutting dispatch time from 200 ms to 100 ms (producer config below).
- CQRS + Sharding: A CQRS storage design accelerated reads and writes; a sharded MongoDB cluster delivered 99.98% uptime and scaled smoothly under peak load (CQRS sketch below).
- Perf testing: Multi‑threaded clients and JMeter drove 2M requests to benchmark improvements: throughput 2k → 6.6k req/s, latency 200 ms → 30 ms (load client sketch below).
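The Redis layer from the Scale highlight follows a standard cache-aside pattern: read the cache first, fall back to the database on a miss, then repopulate with a short TTL. A minimal sketch using the Jedis client; the host, key names, and TTL are illustrative rather than the production values:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

// Cache-aside sketch: Redis first, database on a miss, then re-cache with a TTL.
public class MatchCache {
    private final JedisPool pool = new JedisPool("localhost", 6379);

    public String matchesFor(String userId) {
        String key = "matches:" + userId;
        try (Jedis jedis = pool.getResource()) {
            String cached = jedis.get(key);
            if (cached != null) {
                return cached;                        // cache hit: no database round trip
            }
            String fresh = loadFromDatabase(userId);  // cache miss: query the backing store
            jedis.setex(key, 60, fresh);              // 60 s TTL keeps hot keys reasonably fresh
            return fresh;
        }
    }

    private String loadFromDatabase(String userId) {
        return "[]";  // placeholder for the real MongoDB read
    }
}
```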
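The batch compression and partitioning from the Streaming highlight come down to producer settings: records linger briefly, each batch is compressed, and keying by user pins a user's swipes to one partition so consumers in a group split the load deterministically. A minimal sketch; the broker address, topic name, and tuning values are assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SwipeProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Batch compression: let records accumulate briefly, then compress each batch.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by swiper id keeps one user's events on one partition.
            producer.send(new ProducerRecord<>("swipes", "user-123",
                    "{\"swipee\":\"user-456\",\"like\":true}"));
        }
    }
}
```

Keying by user also keeps each user's events ordered within a partition, which lets downstream projections stay consistent without cross-partition coordination.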
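The CQRS highlight separates an append-only command path from a denormalized read projection. A sketch with the MongoDB Java driver; the database, collection, and field names are hypothetical:

```java
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import org.bson.Document;

// CQRS sketch: commands append to an event collection; queries read a
// precomputed projection that a background worker keeps up to date.
public class SwipeStore {
    private final MongoCollection<Document> swipeEvents;  // command side (sharded)
    private final MongoCollection<Document> matchStats;   // query side projection

    public SwipeStore(String uri) {
        MongoDatabase db = MongoClients.create(uri).getDatabase("twinder");
        this.swipeEvents = db.getCollection("swipe_events");
        this.matchStats = db.getCollection("match_stats");
    }

    // Write path: append-only insert, no read-modify-write contention.
    public void recordSwipe(String swiper, String swipee, boolean like) {
        swipeEvents.insertOne(new Document("swiper", swiper)
                .append("swipee", swipee)
                .append("like", like));
    }

    // Read path: a single indexed lookup against the denormalized projection.
    public Document statsFor(String userId) {
        return matchStats.find(Filters.eq("user", userId)).first();
    }
}
```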
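The perf-testing highlight used multi-threaded clients alongside JMeter; below is a stripped-down version of such a client. The endpoint, payload, and thread counts are illustrative placeholders, not the benchmark configuration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class LoadClient {
    public static void main(String[] args) throws Exception {
        int threads = 64;
        int requestsPerThread = 1_000;
        HttpClient client = HttpClient.newHttpClient();
        LongAdder ok = new LongAdder();
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                HttpRequest req = HttpRequest.newBuilder(URI.create("http://localhost:8080/swipe/left/1/2"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString("{\"comment\":\"load test\"}"))
                        .build();
                for (int i = 0; i < requestsPerThread; i++) {
                    try {
                        HttpResponse<Void> resp = client.send(req, HttpResponse.BodyHandlers.discarding());
                        if (resp.statusCode() / 100 == 2) ok.increment();
                    } catch (Exception ignored) {
                        // failed requests simply don't count toward throughput
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("sent %d, ok %d, throughput %.0f req/s%n",
                threads * requestsPerThread, ok.sum(), ok.sum() / seconds);
    }
}
```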
Architecture
The system ingests swipe events at high volume, emitting them to Kafka topics. A set of workers transforms events, updating projections and pushing match candidates into Redis. RabbitMQ is used for targeted fan‑out to user‑specific queues for real‑time updates.
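A simplified view of one worker in this pipeline: consume swipe events from Kafka, update the Redis projection, and fan the update out to a user-specific RabbitMQ queue. Topic, key, and queue names here are assumptions, not the service's actual identifiers:

```java
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import redis.clients.jedis.Jedis;

public class SwipeWorker {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "swipe-workers");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Jedis redis = new Jedis("localhost", 6379);
             Connection rabbit = factory.newConnection();
             Channel channel = rabbit.createChannel()) {

            consumer.subscribe(List.of("swipes"));
            while (true) {
                ConsumerRecords<String, String> events = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> event : events) {
                    String userId = event.key();
                    // Update the projection: newest match candidates at the head of the list.
                    redis.lpush("candidates:" + userId, event.value());
                    // Fan out to a user-specific queue for real-time delivery.
                    String queue = "updates." + userId;
                    channel.queueDeclare(queue, false, false, true, null);
                    channel.basicPublish("", queue, null, event.value().getBytes(StandardCharsets.UTF_8));
                }
            }
        }
    }
}
```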
Notes
If you want to learn more or see code, reach out—happy to walk through design choices and tradeoffs.