The ideal candidate is results-driven, self-motivated, and a strong developer with an interest in working on data processing systems involving large datasets, low latency, and high concurrency. You should have a deep interest in, or prior knowledge of, distributed data processing architectures such as Hadoop, MapReduce, distributed queues, real-time stream processing frameworks like Spark, and NoSQL data stores. You must have a flair and passion for solving hard problems involving large data sets.
Work on a team employing agile practices to meet delivery dates and other goals.
Develop data processing frameworks that handle large volumes of streaming data with very low latency.
Take ownership of features from prototype/mockups and design documents through to acceptance testing.
Be committed to continuous learning, experimenting with, and applying cutting-edge techniques, technologies, and software paradigms.
Participate in the design and implementation of analytics products using a combination of technologies.
A fast learner with experience across several programming languages and technologies, and the ability to quickly become an expert in new technologies and approaches.
1-2 years of experience in a development role
Bachelor's degree in Computer Science or Engineering from a reputable institution.
Solid grasp of data structures, algorithms, and software design.
Excellent analytical, critical-thinking, and reasoning skills.
A high degree of expertise in one or more programming languages, such as Java.
Working knowledge of a distributed data processing framework.
Some exposure to AWS services.
Excellent English communication skills.