IBM Recruitment 2023 | Data Engineer | B.E/ B.Tech/ M.E/ M.Tech

The official advertisement for the IBM off-campus drive for the Data Engineer role has been published on the company's website. B.E/ B.Tech/ M.E/ M.Tech candidates can apply for this job. The company is recruiting candidates for the Kochi location. Necessary details such as eligibility and the application process are given below. If candidates have any doubt regarding this recruitment process, they should read the official advertisement.

About the Company: IBM integrates technology and expertise, providing infrastructure, software (including market-leading Red Hat), and consulting services for clients as they pursue the digital transformation of the world’s mission-critical businesses.

IBM generated around 61 billion U.S. dollars in revenue in 2022, an increase of over 3 billion U.S. dollars from the previous year. The firm’s yearly revenue has trended downward over the past decade, having previously exceeded 100 billion U.S. dollars.

IBM off-campus drive 2023

  • Company name: IBM
  • Website: ibm.com
  • Job Position: Data Engineer
  • Location: Kochi
  • Job Type: Full time
  • Experience: Freshers
  • Qualification: B.E/ B.Tech/ M.E/ M.Tech
  • Batch: 2019/ 2020/ 2021/ 2022/ 2023
  • Salary: Up to 12 LPA (Expected)

Skills Required:

  • Design, build, optimize, and support new and existing data models and ETL processes based on our client’s business requirements.
  • Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
  • Coordinate data access and security to enable data scientists and analysts to easily access data whenever needed.
  • Develop PySpark code for AWS Glue jobs and EMR; work on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution.
  • Develop Python and PySpark programs for data analysis; good working experience with Python to develop a custom framework for generating rules (similar to a rules engine).
  • Develop Hadoop streaming jobs using Python for integrating Python API-supported applications.
  • Develop Python code to gather data from HBase and design solutions implemented with PySpark; use Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.
  • Rewrite Hive queries in Spark SQL to reduce overall batch time (a minimal sketch of this kind of work follows this list).
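
The bullets above center on PySpark, AWS Glue/EMR, Hive, and Spark SQL. For candidates unfamiliar with that stack, here is a minimal PySpark sketch of the kind of ETL work described: read a table, apply a business transformation with the DataFrame API, and express a legacy Hive aggregation in Spark SQL. The table and column names (sales_raw, amount, fx_rate, region) are hypothetical placeholders, not part of the job description; a real Glue or EMR job would typically read from the Glue Data Catalog or S3 instead.

from pyspark.sql import SparkSession, functions as F

# enableHiveSupport() lets Spark read and write Hive-managed tables.
spark = (
    SparkSession.builder
    .appName("etl-sketch")
    .enableHiveSupport()
    .getOrCreate()
)

raw = spark.table("sales_raw")  # hypothetical source table

# Business transformation via the DataFrame API: drop bad rows, derive a column.
clean = (
    raw.filter(F.col("amount") > 0)
       .withColumn("amount_usd", F.col("amount") * F.col("fx_rate"))
)

# The aggregation a legacy Hive query might perform, rewritten in Spark SQL.
clean.createOrReplaceTempView("sales_clean")
summary = spark.sql(
    "SELECT region, SUM(amount_usd) AS total_usd "
    "FROM sales_clean GROUP BY region"
)

# Write the result back as a Hive-managed table.
summary.write.mode("overwrite").saveAsTable("sales_summary")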

Preferred Technical and Professional Expertise:

  • Understanding of DevOps.
  • Experience in building scalable end-to-end data ingestion and processing solutions
  • Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala

How to Apply for This Job?

  • Click on the Apply Here button given below.
Apply Here 

If you have opened the link and filled out the application form, write "Done" in the comments. Thank you for visiting our site, freejobsalerrt.com
