Data Engineering
Streaming and batch pipelines, incremental models, data quality, and lineage-aware processing.
Data Platforms • Backend • Cloud
Big Data Engineer | Platform Engineer
I design and run reliable data platforms for analytics and product use cases. Most of my work sits at the intersection of backend services, streaming pipelines, and cloud infrastructure.
Big Data and Streaming Engineer with 5+ years of experience building large-scale data systems. I focus on robust ingestion, incremental processing, and production-grade orchestration with Spark, Kafka, and Airflow on AWS.
Streaming and batch pipelines, incremental models, data quality, and lineage-aware processing.
Python/Go services, APIs, event-driven patterns, fault-tolerant jobs, and production observability.
AWS-native architectures with orchestration, CI/CD integration, and infrastructure-minded system design.
Python · Data signals · Risk controls
Backend-oriented trading signal engine that consumes OHLC (open/high/low/close) market data across multiple timeframes and outputs explainable BUY/SELL/NO_TRADE decisions with stop-loss and take-profit (SL/TP) levels.
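The engine's actual rules aren't shown here; as a sketch of the output contract, here is a toy moving-average decision function. The `Candle` type, the SMA rule, and the range-based SL/TP spacing are all illustrative stand-ins, not the real strategy:

```python
from dataclasses import dataclass

@dataclass
class Candle:
    open: float
    high: float
    low: float
    close: float

def decide(candles: list[Candle], rr_mult: float = 1.5) -> dict:
    """Toy rule: last close vs. simple moving average, with SL/TP
    spaced by a crude volatility proxy (average candle range)."""
    closes = [c.close for c in candles]
    sma = sum(closes) / len(closes)
    rng = sum(c.high - c.low for c in candles) / len(candles)
    last = candles[-1].close
    if last > sma + 0.1 * rng:
        return {"signal": "BUY", "sl": last - rr_mult * rng,
                "tp": last + rr_mult * rng,
                "reason": f"close {last} above SMA {sma:.2f}"}
    if last < sma - 0.1 * rng:
        return {"signal": "SELL", "sl": last + rr_mult * rng,
                "tp": last - rr_mult * rng,
                "reason": f"close {last} below SMA {sma:.2f}"}
    return {"signal": "NO_TRADE", "reason": "close inside SMA band"}
```

Every decision carries a `reason` string, which is what makes the output explainable rather than a bare label.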
Backend APIs · Data workflows · Reliability
End-to-end auction platform side project focused on backend architecture and data lifecycle: listings, bids, event handling, and reporting.
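A minimal sketch of the bid event flow, assuming simple dict-shaped events; the event names (`listing_created`, `bid_placed`) and field layout are hypothetical, not the platform's real schema:

```python
from dataclasses import dataclass, field

@dataclass
class Listing:
    listing_id: str
    reserve: float
    high_bid: float = 0.0
    bids: list = field(default_factory=list)

class AuctionState:
    """Applies auction events in order; rejects invalid bids."""

    def __init__(self):
        self.listings: dict[str, Listing] = {}

    def apply(self, event: dict) -> bool:
        if event["type"] == "listing_created":
            lid = event["listing_id"]
            self.listings[lid] = Listing(lid, event["reserve"])
            return True
        if event["type"] == "bid_placed":
            lst = self.listings.get(event["listing_id"])
            # reject bids on unknown listings or non-improving amounts
            if lst is None or event["amount"] <= lst.high_bid:
                return False
            lst.high_bid = event["amount"]
            lst.bids.append(event["amount"])
            return True
        return False  # unknown event type

    def report(self) -> dict:
        # reporting view: highest bid and reserve status per listing
        return {lid: {"high_bid": l.high_bid,
                      "met_reserve": l.high_bid >= l.reserve}
                for lid, l in self.listings.items()}
```

Keeping state changes behind a single `apply` method makes the bid rules easy to test and replay.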
This is the Medallion architecture pattern I use in my own work to turn raw events into trusted, analytics-ready datasets and production-facing outputs.
Raw events from APIs, trackers, and logs.
Validation, enrichment, dedupe, and session logic.
Analytics-ready models powering decisions.
Dashboards, APIs, and product features.
Reliable ingestion first. No assumptions, no dropped edge cases.
Schema checks, late-arrival handling, and deterministic transforms.
Domain-ready datasets for experiments, forecasting, and KPIs.
Low-latency access paths that teams can depend on every day.
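The layered flow above can be sketched in a few lines. This is a deliberately small illustration, assuming dict-shaped events; the field names (`event_id`, `user`, `ts`, `value`) are placeholders, not a real schema:

```python
def to_silver(bronze: list[dict]) -> list[dict]:
    """Bronze -> Silver: schema check, dedupe, deterministic ordering."""
    seen: set = set()
    silver = []
    for e in bronze:
        # schema check: drop records missing required fields
        if not {"event_id", "user", "ts", "value"} <= e.keys():
            continue
        # dedupe on event_id (first occurrence wins)
        if e["event_id"] in seen:
            continue
        seen.add(e["event_id"])
        silver.append(e)
    # deterministic transform: stable order regardless of arrival order
    return sorted(silver, key=lambda e: e["ts"])

def to_gold(silver: list[dict]) -> dict:
    """Silver -> Gold: analytics-ready aggregate (total value per user)."""
    totals: dict = {}
    for e in silver:
        totals[e["user"]] = totals.get(e["user"], 0) + e["value"]
    return totals
```

Sorting by event time in the Silver step is also where late-arriving records fall back into place before aggregation.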
I am open to data platform, backend, and cloud engineering roles.
Contact Me