IBM Big Data Engineer Jobs

Here is a great opportunity for fresh graduates to grow their careers at IBM. IBM is hiring for the Big Data Engineer role in Pune, India, a golden opportunity for recent graduates.

ABOUT IBM 

IBM’s greatest invention is the IBMer. We believe that through the application of intelligence, reason and science, we can improve business, society and the human condition, bringing the power of an open hybrid cloud and AI strategy to life for our clients and partners around the world.

At IBM, we pride ourselves on being an early adopter of artificial intelligence, quantum computing and blockchain. Now it’s time for you to join us on our journey to being a responsible technology innovator and a force for good in the world.

IBM Consulting is IBM’s consulting and global professional services business, with market-leading capabilities in business and technology transformation. With deep expertise in many industries, we offer strategy, experience, technology, and operations services to many of the most innovative and valuable companies in the world. Our people are focused on accelerating our clients’ businesses through the power of collaboration. We believe in the power of technology responsibly used to help people, partners and the planet.

JOB DESCRIPTION - IBM Big Data Engineer Jobs

Job Details

  • State: Maharashtra
  • City: Pune
  • Category: Technical Specialist
  • Education: Bachelor’s Degree
  • Employment Type: Full-Time
  • Contract Type: Regular
  • Company: IBM India Private Limited
  • Req ID: 624893BR

Your Role and Responsibilities

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines and workflows and implementing solutions that tackle clients’ needs.

Your Primary Responsibilities

  • Design, build, optimize, and support new and existing data models and ETL processes based on our clients’ business requirements (a minimal sketch follows this list).
  • Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
  • Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
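
To give a rough sense of what a source-to-target ETL pipeline like this can look like in practice, here is a minimal PySpark sketch. It is an illustration only: the S3 paths, table layout, column names, and business rule are hypothetical placeholders, not details from this posting.

```python
# Minimal source-to-target ETL sketch in PySpark.
# All paths, columns, and rules below are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("source_to_target_etl").getOrCreate()

# Extract: read raw source data from a hypothetical landing zone.
orders = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-bucket/landing/orders/")
)

# Transform: apply example business rules (dedupe, typing, derived column).
cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("net_amount", F.col("amount") - F.col("discount"))
    .filter(F.col("net_amount") > 0)
)

# Load: write the curated target as partitioned Parquet.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/")
)

spark.stop()
```

The same extract-transform-load shape applies whether the job runs on AWS Glue, EMR, or a plain Spark cluster; only the session setup and the I/O locations change.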

Required Technical and Professional Expertise

  • Developed PySpark code for AWS Glue jobs and for EMR; worked on scalable distributed data systems using the Hadoop ecosystem on AWS EMR and the MapR distribution.
  • Developed Python and PySpark programs for data analysis; good working experience using Python to develop a custom framework for generating rules.
  • Developed Hadoop streaming jobs using Python to integrate applications with Python API support.
  • Developed Python code to gather data from HBase and designed solutions implemented with PySpark; used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.
  • Rewrote some Hive queries in Spark SQL to reduce overall batch time (see the sketch after this list).
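
As a concrete, hedged illustration of the last two bullets, the sketch below reads a Hive metastore table through a Hive-enabled SparkSession (the modern successor to the older HiveContext) and expresses the same aggregation both as Spark SQL and as DataFrame operations. The sales_db.transactions table and all column names are invented for this example.

```python
# Sketch: running a former Hive query through the Spark SQL engine.
# The sales_db.transactions table and its columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hive_to_spark_sql")
    .enableHiveSupport()  # lets Spark read Hive metastore tables
    .getOrCreate()
)

# The original Hive query, now executed by Spark SQL.
daily_totals = spark.sql("""
    SELECT region, to_date(event_ts) AS event_date, SUM(amount) AS total
    FROM sales_db.transactions
    GROUP BY region, to_date(event_ts)
""")

# Equivalent DataFrame formulation of the same aggregation.
df = spark.table("sales_db.transactions")
daily_totals_df = (
    df.groupBy("region", F.to_date("event_ts").alias("event_date"))
      .agg(F.sum("amount").alias("total"))
)

# Write the result back through the metastore.
daily_totals_df.write.mode("overwrite").saveAsTable("sales_db.daily_totals")

spark.stop()
```

Running a former Hive query through Spark SQL like this is typically what reduces batch time: the plan executes in memory on Spark rather than compiling down to MapReduce jobs.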

Preferred Technical and Professional Expertise

  • Understanding of DevOps.
  • Experience in building scalable end-to-end data ingestion and processing solutions.
  • Experience with object-oriented and/or functional programming languages, such as Python, Java, or Scala.

APPLY LINK: APPLY NOW
