
Building a FinTech Platform with Go Microservices as a Beginner

By Avik Mukherjee  |  Apr 25, 2026 · 14 min read · Updated Apr 25, 2026

Most microservices tutorials stop at two services and a shared database. I wanted to build something I could actually explain end-to-end — transactions that debit one account and credit another atomically, fraud scoring happening asynchronously, audit logs written to object storage, and emails firing on completion. All of it wired together locally with Docker Compose.

This post covers what I built, the design decisions I made (and later changed), and the bugs that only surfaced when I ran the whole thing together.

What I Was Trying to Build#

A local fintech backend that covers the realistic surface area of a payments system:

  • Users register and authenticate with JWT
  • Each user can open multiple accounts (savings, current) with independent balances
  • Transfers debit the sender, credit the receiver, and record the transaction — atomically enough that a partial failure refunds the sender
  • Every completed transaction gets scored for fraud
  • Notifications go out on completion or failure
  • An immutable audit trail lands in object storage

The goal was not to ship to production. The goal was to build something where each layer would break in realistic ways so I could understand the failure modes.

Architecture Overview#

code
Client
  │
  ▼
Nginx (API Gateway :8080)
  │  rate-limits /api/v1/auth/*
  ├──► user-service:3001
  ├──► account-service:3002
  └──► transaction-service:3003
             │
             │ Kafka: transactions.events
             ▼
      ┌──────┼──────────────┐
      ▼      ▼              ▼
  fraud   notification   audit
  :3004    :3005          :3006
                │
                │ Kafka: fraud.alerts
                ▼
        notification + audit

Six services, three PostgreSQL databases (one per stateful service), Kafka as the event bus, MinIO for audit storage, and Mailhog to catch outbound emails locally.

Each HTTP-facing service has its own JWT middleware. Services communicate internally via a shared secret header (X-Internal-Secret) rather than a service mesh — pragmatic for a local setup, something you would replace with mTLS in production.

The Transaction Saga#

The trickiest part of the whole system is the transfer: you need to debit one account and credit another, and if either half fails, you need to undo what you already did.

I implemented a simplified saga over HTTP:

code
1. Write transaction record (status: pending)
2. Debit sender   → POST /internal/accounts/balance  (amount: -x)
3. Credit receiver → POST /internal/accounts/balance  (amount: +x)
4. Update status: completed
5. Publish transaction.completed to Kafka (fire-and-forget)

If step 3 fails after step 2 succeeded, the service issues a compensating credit back to the sender before marking the transaction failed:

go
if err := s.callBalanceUpdate(ctx, tx.ToAccountID, tx.Amount, "credit-"+tx.ID); err != nil {
    // Compensate: refund the sender
    s.callBalanceUpdate(ctx, tx.FromAccountID, tx.Amount, "refund-"+tx.ID)
    return fmt.Errorf("credit failed: %w", err)
}

Each call to account-service uses an idempotency key derived from the transaction ID (debit-{txid}, credit-{txid}, refund-{txid}), so retries do not double-apply. This is not a distributed transaction — if the process crashes between steps 2 and 3 with no retry logic, you have a partial failure. For this project that tradeoff was acceptable. A real system would use a persistent saga log or an outbox pattern.

The client-facing idempotency key works at the transaction layer. Submitting the same key twice never produces a second charge; the duplicate request is rejected:

bash
# Second call with the same idempotency_key
{"error":"duplicate transaction: idempotency_key already used"}
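The shape of that check can be sketched with an in-memory store. This is illustrative: a real service would enforce it with a unique constraint on the idempotency_key column so the check and the insert happen atomically in the database.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// ErrDuplicate mirrors the error string in the response above.
var ErrDuplicate = errors.New("duplicate transaction: idempotency_key already used")

// IdempotencyStore is an in-memory sketch of the dedup check; the names
// are illustrative, not the project's actual code.
type IdempotencyStore struct {
	mu   sync.Mutex
	seen map[string]string // idempotency_key -> transaction ID
}

func NewIdempotencyStore() *IdempotencyStore {
	return &IdempotencyStore{seen: make(map[string]string)}
}

// Claim records the key for txID, or fails if the key was already used.
func (s *IdempotencyStore) Claim(key, txID string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, used := s.seen[key]; used {
		return ErrDuplicate
	}
	s.seen[key] = txID
	return nil
}

func main() {
	store := NewIdempotencyStore()
	fmt.Println(store.Claim("abc-123", "tx-1")) // prints <nil>
	fmt.Println(store.Claim("abc-123", "tx-2")) // prints the duplicate error
}
```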

Event-Driven Pipeline#

Once a transaction completes (or fails), the transaction-service publishes to transactions.events:

json
{
  "transaction_id": "3faaed59-...",
  "from_account_id": "...",
  "to_account_id": "...",
  "amount": 100,
  "currency": "USD",
  "status": "completed",
  "event_type": "transaction.completed",
  "occurred_at": "2026-04-25T10:58:06Z"
}

Three services consume this topic independently with separate consumer group IDs. The fraud-service also publishes to fraud.alerts, which notification-service and audit-service additionally consume.

The publish from the transaction-service is intentionally fire-and-forget in a goroutine — the HTTP response to the client does not wait on Kafka:

go
go func() {
    if err := s.producer.PublishTransactionEvent(context.Background(), event); err != nil {
        log.Printf("[transaction-service] kafka publish failed: %v", err)
    }
}()

If Kafka is down, the transaction still completes. The event is lost. A production system would use an outbox table and a CDC connector, or at minimum retry with backoff before giving up. For a local dev environment, losing an event on a Kafka blip is fine.

Fraud Detection#

The fraud engine is rule-based and deliberately simple:

  • Large transaction: single transfer exceeds a configurable threshold (default $100,000)
  • Round amount: suspiciously round numbers often indicate test or bot transactions
  • High velocity: more than N transfers per minute from the same account (tracked in memory, so it resets on restart)

Each rule contributes a score. If the total exceeds a threshold, the transaction is flagged and a fraud.alerts event is published. Notification-service picks that up and sends a separate fraud alert email.

code
[fraud-consumer] tx=893a4e3f score=55 flagged=true reasons=[large_transaction round_amount]
[fraud-consumer] fraud alert published for tx=893a4e3f score=55
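The rule engine boils down to a handful of checks that each add to a score. Here is a sketch; the per-rule weights and the flag threshold are illustrative values I picked, not the project's real configuration:

```go
package main

import (
	"fmt"
	"math"
)

// Illustrative weights and thresholds; only the $100,000 default comes
// from the post.
const (
	largeTxThreshold = 100_000
	flagThreshold    = 50
)

type Scorer struct {
	velocity     map[string]int // transfers per account in the current window, in memory
	maxPerMinute int
}

func NewScorer(maxPerMinute int) *Scorer {
	return &Scorer{velocity: make(map[string]int), maxPerMinute: maxPerMinute}
}

// Score applies each rule and returns the total plus the reasons that fired.
func (s *Scorer) Score(fromAccount string, amount float64) (int, []string) {
	score, reasons := 0, []string{}
	if amount > largeTxThreshold {
		score += 30
		reasons = append(reasons, "large_transaction")
	}
	if amount >= 100 && math.Mod(amount, 100) == 0 {
		score += 25
		reasons = append(reasons, "round_amount")
	}
	s.velocity[fromAccount]++ // a real window would expire entries after a minute
	if s.velocity[fromAccount] > s.maxPerMinute {
		score += 40
		reasons = append(reasons, "high_velocity")
	}
	return score, reasons
}

func main() {
	s := NewScorer(5)
	score, reasons := s.Score("893a4e3f", 200_000)
	fmt.Println(score, reasons, score >= flagThreshold) // prints: 55 [large_transaction round_amount] true
}
```

Because the scorer holds velocity state in a plain map, it only works inside a single consumer goroutine; sharing it across goroutines would need a mutex, and surviving restarts would need external storage.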

Scoring happens entirely in the consumer goroutine, which means latency does not affect the payment flow. The tradeoff is that fraud detection is always after the fact — you cannot block a transaction based on the fraud score with this architecture. For pre-authorization checks you would need a synchronous call from the transaction-service before committing.

Audit Logs#

The audit-service writes every consumed event as a JSON file to MinIO under a structured path:

code
audit-logs/
  transaction/
    2026/04/25/
      38d0c251-28c8-4e10-bd78-5bbf8466b1db.json
  fraud_alert/
    2026/04/25/
      d5b7108b-943b-4ae5-b8d2-28558e9a911b.json

The payload is stored verbatim — no transformation, no filtering. This makes the audit log a faithful copy of every event that passed through the system. You can replay it, diff it against the database, or use it as a source of truth for compliance questions.

Each record gets a UUID. The original code used time.Now().UnixNano() as the ID, which looks fine until you have two events arrive within the same nanosecond under load and one silently overwrites the other in MinIO.

Dead-Letter Topics#

The first version of the fraud and audit consumers had a comment that read:

go
// In production, route to a dead-letter topic after N retries.
continue

In practice that meant a failing message was skipped without ever committing its offset: a single malformed or unprocessable message would be fetched again on every restart, fail again, and repeat, reprocessed indefinitely.

I replaced that with a retry counter per message key. After three failures the message is published to a dead-letter topic (fraud.dead-letter, audit.dead-letter) and committed so the consumer moves forward:

go
c.retries[msgKey]++
if c.retries[msgKey] >= maxRetries {
    c.sendToDLQ(ctx, msg, err)        // original payload plus error headers
    delete(c.retries, msgKey)         // reset so a redelivered key starts fresh
    if err := c.reader.CommitMessages(ctx, msg); err != nil {
        log.Printf("commit after DLQ failed: %v", err)
    }
}

The DLQ message carries source-topic and error headers so you can diagnose what went wrong without losing the original payload.

The Bug That Only Appeared End-to-End#

After wiring all six services together, the fraud and audit consumers started up correctly and logged their startup messages, but never processed a single event. The transaction-service was publishing. Kafka had the messages. Notification-service was consuming them fine. Fraud and audit were just... silent.

Checking the consumer group state:

bash
kafka-consumer-groups --describe --group fraud-service-group
# (empty — no committed offsets)
 
kafka-consumer-groups --describe --group notification-service-group-transactions.events
# CURRENT-OFFSET: 3  LAG: 0

Notification was caught up. Fraud had never committed anything.

The difference turned out to be the kafka-go version. Notification-service used v0.4.51. Fraud and audit used v0.4.47. There is a consumer group partition assignment bug in v0.4.47 where the consumer joins the group, shows as Stable, but never gets assigned any partitions — so FetchMessage blocks indefinitely.

Bumping to v0.4.51 in both go.mod files and rebuilding fixed it immediately.

This is the kind of bug that never shows up in unit tests, does not surface in a service tested in isolation, and only appears when you run the full stack and watch consumer group offsets.

Nginx as API Gateway#

Nginx handles routing and is the only entry point from the host machine. Each upstream maps to one service:

nginx
upstream account_service {
  server account-service:3002;
}
 
location ~ ^/api/v1/accounts(/|$) {
  proxy_pass http://account_service;
  ...
}

The regex ~ ^/api/v1/accounts(/|$) matches both /api/v1/accounts (list/create) and /api/v1/accounts/{id} (get by ID). A plain location /api/v1/accounts/ with a trailing slash would silently 404 on the non-slash form — something I found out the hard way during testing.

Auth endpoints are rate-limited to 10 requests per second per IP with a burst of 20:

nginx
limit_req_zone $binary_remote_addr zone=auth_limit:10m rate=10r/s;
 
location /api/v1/auth/ {
  limit_req zone=auth_limit burst=20 nodelay;
  limit_req_status 429;
  ...
}

The nodelay flag means burst requests are served immediately rather than queued, and anything beyond the burst gets a 429 right away.

One thing that tripped me up: after rebuilding containers, Nginx caches upstream IPs at startup. When the service containers get new IPs on rebuild, Nginx keeps trying the old ones and returns 502. The fix is docker restart api-gateway to force re-resolution. In production you would use a service discovery mechanism or configure Nginx's resolver directive to re-resolve DNS dynamically.

Secrets and Configuration#

The initial version had "internal-secret-change-me" hardcoded as a string literal in the account-service handler, and "default-secret" as the JWT secret fallback in every service config. Neither was wired through docker-compose.

The fix was straightforward — add JWT_SECRET and INTERNAL_SECRET to docker-compose with sensible defaults using variable substitution:

yaml
environment:
  JWT_SECRET: ${JWT_SECRET:-fintech-jwt-secret}
  INTERNAL_SECRET: ${INTERNAL_SECRET:-fintech-internal-secret}

Override them for a real deployment by setting the variables in the environment or a .env file. The fallback values mean docker-compose up still works out of the box.

What I Would Do Differently#

Outbox pattern for Kafka publishing. The current fire-and-forget publish means events can be lost if Kafka is unavailable at the moment of a successful transaction. An outbox table written in the same database transaction as the transfer, with a separate relay process, would give you at-least-once delivery without coupling the HTTP response to Kafka availability.

Pre-authorization fraud check. The current fraud scoring is post-hoc. For high-risk transactions you want a synchronous score before the transfer commits. That means either a blocking call from transaction-service to fraud-service, or embedding a lightweight rule check directly in the transaction-service before the saga starts.

Saga log for crash recovery. If the transaction-service process dies between debit and credit, there is currently no recovery path. A persistent saga log in the database would let you resume or compensate on restart.

Service mesh or mTLS for internal traffic. The X-Internal-Secret header is fine for a local setup but it is a shared secret you have to rotate across every service simultaneously. mTLS with short-lived certificates per service is a cleaner model.

Running It Yourself#

bash
git clone https://github.com/Avik-creator/golang-microservices-v1
cd fintech-platform
docker compose up -d --build
 
# Register and log in
curl -X POST http://localhost:8080/api/v1/auth/register \
  -H "Content-Type: application/json" \
  -d '{"email":"you@example.com","password":"password123","full_name":"Your Name"}'
 
# Watch the pipeline
docker logs -f fraud-service
docker logs -f audit-service
docker logs -f notification-service
 
# View emails at http://localhost:8025
# View Kafka topics at http://localhost:9000
# View audit logs in MinIO at http://localhost:9001

Fire a transfer and within a few seconds you will see the fraud score, the MinIO write, and the email confirmation land simultaneously across three independent consumers.

Let's Keep Talking#

Building this end-to-end connected a lot of things I had only understood in isolation — why idempotency keys matter, where the saga pattern breaks down, what a stalled Kafka consumer actually looks like in practice.

If you are building something similar and want to compare notes on the parts that are harder than they look, I would like to hear from you.


Feedback welcome. Call out mistakes. I would rather be corrected than stay wrong.

Reach me on GitHub, X, Peerlist, or LinkedIn.