The Opportunity

Near is looking for a DevOps Engineer. The main duties are to perform day-to-day activities and to support the business’s data centers, software, and application platforms that serve the entire business. It is a demanding role that requires the candidate to work with cross-functional teams to diagnose complex issues across the various platforms.

The role demands extensive programming experience, a track record of supporting the business’s sites, software, and applications in a production and support capacity, superior troubleshooting skills, and extensive knowledge of monitoring and alerting mechanisms.

Tasks include:

  • Manage large-scale production environments and mission-critical infrastructure.
  • Handle the stability, automation, scalability, deployment, monitoring, alerting, and security of our tech infrastructure, ensuring maximum availability.
  • Manage distributed big data systems composed of Kafka, Hadoop, Spark, Hive, Flink, Storm, MongoDB, Elasticsearch, and Cassandra, as well as cloud services such as S3 and EMR.
  • Work closely with software and big data developers and other engineering teams to ensure the infrastructure can serve current and future needs.
  • Set up monitoring systems, and create and maintain run-books.
  • Participate in 24x7 on-call support on a rotational basis as needed.
  • Influence, create, and contribute to the automation platform.
  • Take complete ownership of assigned modules and execute them to completion.


Required skills

  • Strong understanding of the security, transport, and application layers.
  • Prior experience setting up instances in data center and cloud environments, preferably AWS.
  • Excellent working knowledge of Unix operating systems (CentOS preferred) and very good system troubleshooting skills.
  • Experience working with web, internet, load-balancing, and big data technologies.
  • Skilled in administering big data ecosystems (Spark, Kafka, Airflow) and NoSQL databases such as MongoDB.
  • Proficient with monitoring tools such as Nagios, Graphite, Cacti, and Ganglia.
  • Experience working with database technologies: Redis, MySQL, MongoDB, and Cassandra.
  • Must have experience with configuration management tools: Puppet, Chef, Fabric, and Ansible.
  • Must have experience in at least one of the following programming languages: Python, Ruby, or Perl (Python preferred).
  • Experience with the software engineering lifecycle and handling deployments at large scale.
  • Hands-on experience with continuous integration/continuous delivery tools (Jenkins, Nexus, Maven, Ant); must be able to own releases.


Qualifications

  • Bachelor’s or Master’s degree in engineering.
  • 2-5 years of overall experience in development operations.
  • Exceptional problem-solving, analytical, and organisational skills with a detail-oriented attitude.
  • Passion for learning new technologies.

Apply to join us