You will help shape what data-driven organizations of the future look like, creating new lines of thinking across a diverse range of clients and situations. You will engage constantly with our clients and team members as they architect new systems and strategies for extracting, transforming, and optimizing data flows from complex, sometimes disparate, sources.
Required Experience and Educational Qualifications:
Bachelor's degree in computer science, information systems, or a closely related field.
Advanced knowledge of the Hadoop stack, with prior experience in Hive, Pig, HBase, Impala, and Sqoop.
Advanced knowledge of database maintenance and administration using MS SQL Server.
Hands-on experience with the Microsoft Azure or Amazon EC2 cloud platforms.
Highly motivated, with the ability to work effectively with people at all levels of an organization.
Experience with relational database systems (RDBMSs) such as Oracle Database, IBM DB2, or MySQL.
1–5 years of experience in a software engineering or IT infrastructure role.
Advanced knowledge of object-oriented programming, distributed systems, and software design principles.
Strong programming experience in Java, Python, and R.
Experience with NoSQL systems such as MongoDB, Redis, or Cassandra.