Senior Data Engineer - Cloud ETL

Job Description


Software Developers at IBM are the backbone of our strategic initiatives to design, code, test, and provide industry-leading solutions that make the world run today: planes and trains take off on time, bank transactions complete in the blink of an eye, and the world remains safe because of the work our software developers do. Whether you are working on projects internally or for a client, software development is critical to the success of IBM and our clients worldwide. At IBM, you will use the latest software development tools, techniques, and approaches, and work with leading minds in the industry to build solutions you can be proud of.

Your Role and Responsibilities
As a Data Engineer - Cloud ETL, you are expected to be functionally knowledgeable in data domain areas, with a strong emphasis on data architecture around ETL using cloud-native PaaS services. As a Data Engineer in our Data and AI team, you will provide support for data management and ETL for one or more projects, assist in defining the scope and sizing of work, and work on proof-of-concept development. You will support the team in providing data engineering solutions based on the business problem, integrating data with third-party services, and designing and developing complex ETL/ELT pipelines for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative, high-quality solutions, and participate in pre-sales and various pursuits focused on our clients' business needs.

You will also contribute in a variety of roles, including thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation. You will sharpen your leading-edge solution, consultative, and business skills through the diversity of work across multiple industry domains.


  • Be responsible for requirements gathering, system design, data modeling, ETL design and development, production enhancements, and support and maintenance
  • Support the build of complex data processing pipelines in Java/Python using Apache Beam, AWS Glue, or Databricks to read and write data to and from data warehouses and lakes on the cloud
  • Write generic ETL flow templates, and manage job scheduling using cloud-native services
  • Develop and maintain complex ETL/ELT pipelines, data models, and standards for data integration and data warehousing projects, from sources to sinks, on SaaS/PaaS platforms such as Snowflake, Databricks, Azure Synapse, Redshift, and BigQuery
  • Develop scalable, secure, and optimized data transformation pipelines and integrate them with downstream sinks
  • Develop and maintain documentation of data flows, data models, integrations, pipelines, etc.
  • Support teams in providing technical solutions from a data-flow design and architecture perspective, ensure the right direction, and propose resolutions to potential data pipeline problems
  • Develop proofs of concept (PoCs) of key technology components for project stakeholders
  • Collaborate with other members of the project team (architects, data engineers) to support delivery of additional project components (such as API interfaces, search, and visualization)
  • Evaluate ETL/ELT tools on the market and create points of view (PoVs) on their performance against customer requirements
  • Work within an Agile delivery / DevOps methodology to deliver proofs of concept and production implementations in iterative sprints
  • Assist in driving improvements to the big data and cloud data warehouse technology stack, with a focus on the user's digital experience as well as application performance and security, to meet the needs of the business and customers now and in the future
  • Support technical investigations and proofs of concept, both individually and as part of a team, including being hands-on with code, to make technical recommendations
  • Create documentation covering architecture principles, design patterns and examples, and technology roadmaps and future planning
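The "generic ETL flow template" responsibility above can be illustrated with a minimal sketch in Python: a reusable extract/transform/load skeleton into which any source, transformation, and sink can be wired. All names here (`run_etl`, the sample records, the SQLite sink standing in for a cloud warehouse) are illustrative assumptions, not part of any specific project codebase.

```python
import sqlite3
from typing import Callable, Iterable

def run_etl(
    extract: Callable[[], Iterable[dict]],
    transform: Callable[[dict], dict],
    load: Callable[[Iterable[dict]], int],
) -> int:
    """Generic ETL flow template: compose any source, transform, and sink."""
    return load(transform(rec) for rec in extract())

# --- Illustrative source, transform, and sink (all hypothetical) ---
def extract_sample() -> Iterable[dict]:
    # A real pipeline would read from a warehouse/lake connector here.
    yield {"id": 1, "amount": "10.5"}
    yield {"id": 2, "amount": "20.0"}

def normalize(rec: dict) -> dict:
    # Cast string amounts to floats before loading.
    return {"id": rec["id"], "amount": float(rec["amount"])}

def load_sqlite(rows: Iterable[dict]) -> int:
    # SQLite stands in for a cloud warehouse sink (Snowflake, BigQuery, ...).
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE fact (id INTEGER, amount REAL)")
    cur = con.executemany("INSERT INTO fact VALUES (:id, :amount)", list(rows))
    con.commit()
    return cur.rowcount

loaded = run_etl(extract_sample, normalize, load_sqlite)
print(loaded)  # number of rows loaded
```

The same skeleton generalizes: swapping `extract_sample` for an Apache Beam read or `load_sqlite` for a Snowflake writer changes only the plugged-in callables, not the flow itself.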

Required Technical and Professional Expertise
ETL Developer with 7 years of experience and the following skills:

  • Good hands-on experience in SQL
  • Good experience in Data Modelling techniques
  • Good experience in design & implementation of Enterprise Data Warehouse solutions
  • Hands-on expertise in at least one ETL project on hyperscalers (AWS, Azure, GCP)
  • Experience in any of the following solutions - AWS Glue, Dataflow, Datafusion, Databricks, Azure Data Factory
  • Exposure to Big Data and Cloud Data Warehouse technologies - Redshift, BigQuery, Synapse, Snowflake
  • Experience in ETL technologies (more than one of Informatica, DataStage, SAS, Azure Data Factory, Talend, Matillion)
  • Experience in BI technologies (more than one of Cognos, BO, Power BI, Tableau)
  • Experience in databases (one or more of Teradata, Exadata, Netezza, Oracle)
  • Metadata management and data governance experience
  • Hands-on experience in Python, Scala, Spark, and Kafka

Preferred Technical and Professional Expertise

  • DevOps - CI/CD implementations
  • Framework Development and Automation Techniques
  • Experience implementing data catalogues and data lakes
  • Experience in data management solution development, with strong experience in SQL and NoSQL databases

UG: Any Graduate - Any Specialization
