Career

Exp – 7+

Key Responsibilities
  • Develop Big Data applications using Spark (Scala-Spark or PySpark), Hadoop, Hive and/or Kafka, HBase, MongoDB
  • Apply an understanding of Machine Learning models
  • Apply working knowledge of Public Cloud / Private Cloud platforms
MUST-HAVE
  • Total IT / development experience of 7+ years
  • Experience developing Big Data applications in Spark (Scala-Spark and PySpark) on Hadoop, Hive and/or Kafka, HBase, MongoDB
  • Deep knowledge of Spark-Scala and PySpark libraries to solve and debug complex data engineering problems
  • Experience developing sustainable, data-driven solutions using current and next-generation data technologies to drive our business and technology strategies
  • Exposure to deploying on Cloud platforms
  • At least 3 years of experience designing and developing Data Pipelines for Data Ingestion or Transformation using Scala-Spark
  • At least 3 years of development experience with the following Big Data frameworks: File Formats (Parquet, AVRO, ORC), Resource Management, Distributed Processing and RDBMS
  • At least 4 years of experience developing applications in Agile with Monitoring, Build Tools, Version Control, Unit Testing, TDD, CI/CD and Change Management to support DevOps
  • At least 6 years of development experience with Data technologies
Location: Bengaluru / Chennai / Pune | Experience: 4-7 Yrs | Compensation: Best in the industry | Notice: Immediate to 15 Days
Exp – 4+

Key Responsibilities
  • Develop Big Data applications using Spark (Scala-Spark or PySpark) on Hadoop, Hive and/or Kafka, HBase, MongoDB
MUST-HAVE
  • Total IT / development experience of 3+ years
  • Experience developing Big Data applications in Spark (Scala-Spark and PySpark) on Hadoop, Hive and/or Kafka, HBase, MongoDB
  • Deep knowledge of Scala-Spark and PySpark libraries to solve and debug complex data engineering problems
  • Experience developing sustainable, data-driven solutions using current and next-generation data technologies to drive our business and technology strategies
  • Exposure to deploying on Cloud platforms
  • At least 2 years of experience designing and developing Data Pipelines for Data Ingestion or Transformation using Spark-Scala/PySpark
  • At least 2 years of development experience with the following Big Data frameworks: File Formats (Parquet, AVRO, ORC), Resource Management, Distributed Processing and RDBMS
  • At least 2 years of experience developing applications in Agile with Monitoring, Build Tools, Version Control, Unit Testing, Unix Shell Scripting, TDD, CI/CD and Change Management to support DevOps
GOOD-TO-HAVE
  • Banking domain knowledge
  • Hands-on experience migrating a SAS toolset / statistical models to Machine Learning models
  • Experience with Machine Learning models and use cases in Banking Risk, Fraud or Digital Marketing
  • ETL / Data Warehousing, SQL and Data Modelling experience prior to Big Data experience
Location: Bengaluru / Chennai / Pune | Experience: 4-7 Yrs | Compensation: Best in the industry | Notice: Immediate to 15 Days

We are not hiring at the moment, but we will be back soon. In the meantime, you can submit your profile using the application form,
and we will reach out if we have a suitable opening for you.

Job Application Form

Take just two minutes to complete our resume builder and make the first step toward a challenging and rewarding career with us. The form captures key information about your background and experience, helping us streamline the selection process.

    Only .pdf and .docx files are accepted, with a maximum size of 2 MB

    • Submitting your CV does not guarantee a job.

    • Your profile will be reviewed.

    • If suitable, our team will contact you.

    • By sharing your contact details, you consent to being contacted.

    Phone

    +971-552534865
    Address

    SAZKO Solutions,

    Dubai: IFZA Business Park, Dubai Silicon Oasis, Building A1, Dubai 342001, UAE

    Malaysia: V05-03-03A, Signature 1, Lingkaran SV, Sunway Velocity, 55100 Kuala Lumpur, Malaysia