Job Description:
Job Details:
Must-have skills: Spark, Scala
Nice-to-have skills: Spark Streaming, Hadoop, Hive, SQL, Sqoop, Impala

Detailed Job Description
• At least 8 years of experience with, and strong knowledge of, the Scala programming language.
• Able to write clean, maintainable, and efficient Scala code following best practices.
• Good knowledge of fundamental data structures and their usage.
• At least 8 years of experience designing and developing large-scale, distributed data processing pipelines using Apache Spark and related technologies (see the illustrative sketch after this list).
• Expertise in Spark Core, Spark SQL, and Spark Streaming.
• Experience with Hadoop, HDFS, Hive, and other big data technologies.
• Familiarity with data warehousing and ETL concepts and techniques.
• Expertise in database concepts and SQL/NoSQL operations.
• UNIX shell scripting is an added advantage for scheduling and running application jobs.
• At least 8 years of experience in project development life cycle activities and maintenance/support projects.
• Work in an Agile environment and participate in daily scrum standups, sprint planning, reviews, and retrospectives.
• Understand project requirements and translate them into technical solutions that meet the project's quality standards.
• Ability to work in a team in a diverse, multi-stakeholder environment and collaborate with upstream/downstream functional teams to identify, troubleshoot, and resolve data issues.
• Strong problem-solving and analytical skills.
• Excellent verbal and written communication skills.
• Experience with, and desire to work in, a global delivery environment.
• Stay up to date with new technologies and industry trends in development.
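For illustration only, the sketch below shows the kind of Spark batch job in Scala this role describes: reading raw data from HDFS, aggregating it with Spark SQL functions, and writing partitioned Parquet output for downstream Hive/Impala consumption. The object name OrdersDailyAggregate, the HDFS paths, and the column names (order_ts, customer_id, amount) are all hypothetical placeholders, not taken from any actual project.

    // Minimal sketch of a Spark batch pipeline in Scala; all names and
    // paths below are hypothetical, chosen only to illustrate the pattern.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object OrdersDailyAggregate {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("orders-daily-aggregate") // hypothetical app name
          .getOrCreate()

        import spark.implicits._

        // Hypothetical input: raw order events stored as Parquet on HDFS.
        val orders = spark.read.parquet("hdfs:///data/raw/orders")

        // Aggregate revenue per customer per day with Spark SQL functions.
        val daily = orders
          .withColumn("order_date", to_date($"order_ts"))
          .groupBy($"customer_id", $"order_date")
          .agg(sum($"amount").as("daily_revenue"))

        // Write results partitioned by date so downstream Hive/Impala
        // queries can prune partitions efficiently.
        daily.write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("hdfs:///data/curated/orders_daily")

        spark.stop()
      }
    }

In practice a job like this would be packaged with sbt, submitted via spark-submit, and scheduled from a UNIX shell script or workflow tool, which is where the shell-scripting and job-scheduling skills listed above come in.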