I'm a Data Engineer who builds data systems that power decisions
My work focuses on designing pipelines — batch and streaming — that transform raw events into reliable datasets for analytics and machine learning. I combine modern data stack tools (dbt, Airflow, Delta Lake, Kafka, Databricks) with a business-first mindset to deliver insights, models, and dashboards that drive impact.
I believe the best data engineering isn't just about moving data efficiently — it's about enabling organizations to make better decisions faster. Whether it's real-time fraud detection, customer segmentation, or operational analytics, I build systems that turn data into competitive advantage.
Technical Expertise
Modern data stack tools chosen for reliability, scalability, and developer productivity
Languages: Python, SQL
Orchestration: Airflow
Transformation: dbt
Processing: Spark
Streaming: Kafka
Platform: Databricks
Integration: Fivetran
Analytics: DuckDB
Cloud: Azure
Infrastructure: Docker, Kubernetes
Professional Experience
Building data systems that drive business impact across different scales and industries
Senior Data Engineer
Saturn Laboratories
Design and deploy scalable analytics systems for e-commerce and IoT platforms. Engineer data pipelines processing 2TB/day with focus on real-time analytics, financial modeling, and automated data governance.
Key Achievements:
- Built real-time analytics system for food e-commerce platform, improving decision-making speed by 40%
- Engineered scalable IoT analytics pipeline with 35% latency reduction and 99.5% uptime
- Developed financial data lakehouse with predictive models for risk detection and portfolio stress testing
- Optimized ETL/ELT pipelines on Azure Databricks with Delta Lake, improving query performance by 30%
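At its core, the real-time analytics work above is windowed aggregation over an event stream. A minimal sketch of that pattern, with plain Python standing in for Kafka plus Spark Structured Streaming, and with hypothetical event names:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, event_name) pairs into fixed-size tumbling windows.

    Illustrative only: a production pipeline would do this with a
    streaming engine's window operator, not an in-memory dict.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, name in events:
        # Align each event to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][name] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical e-commerce events: (epoch seconds, event name).
events = [(0, "add_to_cart"), (12, "checkout"),
          (61, "add_to_cart"), (65, "add_to_cart")]
print(tumbling_window_counts(events, 60))
# {0: {'add_to_cart': 1, 'checkout': 1}, 60: {'add_to_cart': 2}}
```

The same tumbling-window shape maps directly onto `groupBy(window(...))` in Spark once the events arrive from Kafka.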
Data Engineer
Amazon
Architected robust data pipelines for climate initiative analytics using AWS cloud services. Built scalable data models and optimized performance for real-time IoT telemetry processing.
Key Achievements:
- Improved reporting cycle time by 80% through automated validation workflows using AWS Glue and Spark
- Optimized Redshift query performance by more than 40% through advanced SQL techniques and sort/distribution key tuning
- Built comprehensive KPI visualizations in AWS QuickSight and Power BI for climate initiatives
- Established performance benchmarks and quality standards for cross-functional decision making
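The validation workflows mentioned above follow a simple contract: each incoming row either passes every rule or is quarantined with a reason. A hedged sketch of that check in plain Python (field names and bounds are hypothetical; in practice this logic ran as a Glue/Spark job before warehouse loads):

```python
def validate_rows(rows, required_fields, numeric_bounds):
    """Split rows into (valid, rejected) using two rule types:
    required fields must be present and non-empty, and numeric
    fields must fall inside their [lo, hi] bounds.
    """
    valid, rejected = [], []
    for row in rows:
        errors = [f for f in required_fields if row.get(f) in (None, "")]
        for field, (lo, hi) in numeric_bounds.items():
            value = row.get(field)
            if value is not None and not (lo <= value <= hi):
                errors.append(field)
        (rejected if errors else valid).append(row)
    return valid, rejected

# Hypothetical IoT telemetry rows.
rows = [
    {"device_id": "a1", "temp_c": 21.5},
    {"device_id": "", "temp_c": 19.0},      # missing required field
    {"device_id": "b2", "temp_c": 999.0},   # out of bounds
]
valid, rejected = validate_rows(
    rows, required_fields=["device_id"], numeric_bounds={"temp_c": (-40, 85)}
)
print(len(valid), len(rejected))  # 1 2
```

Quarantining instead of dropping keeps rejected rows auditable, which is what makes fully automated reporting trustworthy.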
Data Engineer
Ernst & Young
Led design and implementation of enterprise data systems for fraud detection and compliance. Integrated Azure cloud services with focus on data governance, metadata management, and strategic analytics.
Key Achievements:
- Reduced processing times by 50% for fraud detection golden datasets using Azure Databricks and Spark
- Integrated Azure Purview for automated metadata cataloging and CCPA/GDPR compliance monitoring
- Created 20+ strategic data models with comprehensive documentation and quality metrics
- Established data governance practices including ETL documentation and quality assurance protocols
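The quality metrics attached to those data models usually reduce to a handful of computed numbers per dataset. A minimal sketch of two of the most common ones, completeness per column and key uniqueness, in plain Python (the column names are illustrative):

```python
def quality_metrics(rows, key_field):
    """Compute per-column completeness (share of non-null, non-empty
    values) and uniqueness of a designated key column."""
    total = len(rows)
    columns = {c for row in rows for c in row}
    completeness = {
        c: sum(1 for r in rows if r.get(c) not in (None, "")) / total
        for c in columns
    }
    keys = [r.get(key_field) for r in rows]
    uniqueness = len(set(keys)) / total
    return {"completeness": completeness, "uniqueness": uniqueness}

rows = [
    {"case_id": 1, "analyst": "a"},
    {"case_id": 1, "analyst": None},   # duplicate key, missing value
    {"case_id": 2, "analyst": "b"},
]
print(quality_metrics(rows, "case_id"))
```

Publishing these numbers alongside each golden dataset turns "is this table trustworthy?" from a judgment call into a threshold check.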
My Approach
Core principles that guide how I design and build data systems
Business Impact First
Every pipeline, model, and dashboard should solve a real business problem. I focus on delivering measurable value, not just technical sophistication.
Reliability & Scale
Data systems must be dependable. I build with monitoring, testing, and graceful failure handling to ensure business continuity.
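Graceful failure handling in practice often starts with something as small as retries with exponential backoff plus a hook for the monitoring system. A minimal sketch (the `on_failure` callback is a stand-in for whatever alerting or metrics sink a real pipeline would use):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01, on_failure=print):
    """Call fn, retrying with exponential backoff on any exception.

    on_failure is invoked once per failed attempt so transient errors
    still show up in monitoring; the last failure re-raises so the
    orchestrator (e.g. Airflow) can mark the task failed.
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:
            on_failure(f"attempt {attempt} failed: {exc}")
            if attempt == attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Wrapping flaky external calls (APIs, object stores, warehouse connections) this way keeps a single transient error from failing an entire pipeline run, while still surfacing every failure to monitoring.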
Modern Stack Adoption
I stay current with data engineering best practices, adopting proven tools that improve developer productivity and system performance.
Let's Build Something Together
Whether you need to scale your data infrastructure, implement real-time analytics, or build ML pipelines, I'd love to discuss how we can solve your data challenges.