
About me

I’m a Data and Backend Engineer with 12+ years of experience building scalable data systems, automation platforms, and high-impact data products. I specialize in designing end-to-end data ecosystems—from large-scale data collection and processing to enrichment, validation, and delivery—enabling businesses to make reliable, data-driven decisions.
My work focuses on solving complex data challenges using modern engineering, cloud infrastructure, and AI-powered automation. I’ve built high-volume web scraping systems, robust ETL pipelines, and intelligent APIs that power data platforms handling millions of records across diverse industries.
Beyond engineering, I bring strong ownership and leadership—driving architecture decisions, improving system reliability, and mentoring teams to deliver scalable and maintainable solutions.

Core Expertise

  • End-to-End Data Engineering Systems – Designing complete data ecosystems from collection → ingestion → standardization → enrichment → delivery
  • Scalable Web Scraping & Data Extraction – Building high-volume, distributed scraping systems for complex and dynamic data sources
  • Data Standardization & Quality Engineering – Creating validation frameworks, rule engines, and QA automation layers to ensure clean, reliable data
  • High-Volume Data Pipelines (ETL & Reverse ETL) – Developing robust pipelines for structured and unstructured data across multiple platforms
  • Data Integration & API Engineering – Building and integrating systems with CRMs and analytics platforms (Salesforce, HubSpot, Google Analytics, etc.)
  • Automation Systems & Workflow Engineering – Designing intelligent automation frameworks for continuous data processing and operations
  • AI-Powered Data Systems & Agents – Developing AI-driven tools for research, enrichment, similarity scoring, and decision automation
  • Data Enrichment & Identity Resolution – Creating systems for entity matching, taxonomy mapping, and deep data enrichment
  • Platform Architecture & Scalability – Building scalable, reliable, and observable data platforms in cloud environments
  • Data Governance & Observability – Ensuring data accuracy, consistency, monitoring, and system reliability at scale
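To give a flavor of the quality-engineering work above, here is a minimal sketch of a record-validation rule engine in Python. All names (`RuleEngine`, `add_rule`, `validate`) are hypothetical and chosen for illustration; production systems of this kind would add rule severities, batch execution, and reporting.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of a validation rule engine: each rule is a named
# predicate over a record; validate() returns the names of failed rules.
@dataclass
class RuleEngine:
    rules: dict[str, Callable[[dict], bool]] = field(default_factory=dict)

    def add_rule(self, name: str, predicate: Callable[[dict], bool]) -> None:
        self.rules[name] = predicate

    def validate(self, record: dict) -> list[str]:
        # Collect the name of every rule the record violates.
        return [name for name, ok in self.rules.items() if not ok(record)]

engine = RuleEngine()
engine.add_rule("has_email", lambda r: "@" in r.get("email", ""))
engine.add_rule("has_company", lambda r: bool(r.get("company")))

failures = engine.validate({"email": "a@b.com", "company": ""})
# failures lists only the violated rule: ["has_company"]
```

Keeping rules as named predicates makes the QA layer declarative: new checks are registered without touching the pipeline code that calls `validate()`.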

Experience

iCustomer – Principal Data Engineer (Jun 2023 – Feb 2026)

  • Led end-to-end planning, execution, and delivery of large-scale data engineering and automation initiatives.
  • Architected high-volume web scraping and data extraction systems for complex and distributed data sources.
  • Built robust automation frameworks for continuous, reliable data ingestion, validation, and transformation.
  • Designed and implemented scalable ETL and Reverse ETL pipelines for structured and unstructured data.
  • Developed rule engines, QA automation layers, and data standardization APIs to ensure data accuracy and governance.
  • Built and maintained scalable REST APIs for data enrichment, identity resolution, and workflow automation.
  • Delivered integration pipelines with platforms like Salesforce, HubSpot, Mailchimp, Amplitude, and Google Analytics.
  • Architected and enhanced a Visitor Intelligence Engine using behavioral signals, event tracking, and enrichment data.
  • Led development of AI-driven automation tools using prompt-engineered agents for research and data enrichment.
  • Built custom developer tools using MCP to streamline engineering workflows and AI-powered automation.
  • Owned data quality, governance, observability, and reliability across the entire data ecosystem.

Terminus – Senior Lead, DataCloud Engineering (Aug 2021 – Jun 2023)

  • Led teams across web scraping, data extraction, transformation, and QA automation.
  • Designed advanced scraping pipelines processing millions of global records.
  • Built industry-specific prospect databases using social media, directories, and deep web sources.
  • Developed Python tools, APIs, and automation scripts improving efficiency and data accuracy.
  • Evaluated and validated datasets to ensure accuracy, integrity, and consistency.
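High-volume extraction pipelines like those described above typically cap concurrency so they stay fast without overwhelming sources. Below is a minimal asyncio sketch of that pattern; `fetch` is a stand-in for a real HTTP call, and the names and limits are assumptions for illustration.

```python
import asyncio

# Hedged sketch of bounded-concurrency extraction: a semaphore caps the
# number of in-flight requests, so throughput scales without flooding
# the target. fetch() fakes network I/O for the sake of the example.
async def fetch(url: str) -> str:
    await asyncio.sleep(0)           # placeholder for real network I/O
    return f"<html>{url}</html>"     # fake payload

async def scrape_all(urls: list[str], limit: int = 10) -> list[str]:
    sem = asyncio.Semaphore(limit)

    async def bounded(url: str) -> str:
        async with sem:              # at most `limit` concurrent fetches
            return await fetch(url)

    # gather() preserves input order in its results.
    return await asyncio.gather(*(bounded(u) for u in urls))

pages = asyncio.run(scrape_all([f"https://example.com/{i}" for i in range(3)]))
```

The semaphore is the whole trick: swapping `fetch` for a real client (e.g. one from the Requests/Scrapy ecosystem listed below) keeps the concurrency control unchanged.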

Zylotech – Data Operations Manager (Aug 2015 – Jul 2021)

  • Managed the strategic planning and execution of data operations, including designing, implementing, and optimizing data processing workflows.
  • Ensured data integrity, led cross-functional teams, implemented quality control measures, and facilitated the integration of new data technologies.
  • Designed, deployed, and maintained robust web scraping systems across diverse datasets, and performed data cleaning, exploration, and transformation.
  • Provided technical leadership and conducted rigorous code reviews to uphold best practices.
  • Streamlined data processing workflows, authored quality control checks, and maintained alerting systems to ensure data integrity.

Denave – Research Analyst (Oct 2013 – Jan 2015)

  • Managed research and lead-generation projects for Microsoft, Cisco, CtrlS Datacenters, GetIT, and Yash Technologies.
  • Conducted IT infrastructure research, cloud demo lead generation, and customer onboarding support.
  • Provided technical support for Microsoft Lumia smartphone users.

My skills include, but are not limited to:

Languages & Frameworks

Python, JavaScript, VBA, HTML, CSS, WordPress, Selenium, BeautifulSoup, Scrapy, FastAPI, Requests, Asyncio, Pandas, Polars, Streamlit, Airflow, Docker, Excel

Cloud & Infra (AWS)

EC2, Lambda, S3, ECR, CloudWatch, IAM, RDS, Step Functions, ECS, Glue, EventBridge, KMS

Database & Warehouse

PostgreSQL, BigQuery, Snowflake, SQLite, Pinecone, MySQL

Awards

  • 2023 – Duck Tape Award (Terminus)
  • 2019 – Z-Innovator Award (Zylotech)
  • 2016 – Winners Award (Zylotech)
  • 2015 – Spotlight of the Year Award (Zylotech)