Near is looking for a DevOps Engineer to handle day-to-day operations and support the company’s data centers, software, and application platforms that serve the entire business. It is a demanding role that requires working with cross-functional teams to diagnose complex issues across the various platforms.
The ideal candidate has extensive programming experience and can support the business’s sites, software, and applications in a production and support capacity. The candidate should also have superior troubleshooting skills and knowledge of monitoring and alerting mechanisms.
This is a great opportunity to be part of one of the fastest-growing Enterprise SaaS companies in the world.
- Manage a large-scale production environment and mission-critical infrastructure.
- Handle stability, automation, scalability, deployment, monitoring, alerting, and security, and ensure maximum availability of Near’s tech infrastructure.
- Manage distributed big data systems built on Kafka, Hadoop, Spark, Hive, Flink, Storm, MongoDB, Elasticsearch, and Cassandra, as well as cloud services such as S3 and EMR.
- Work closely with software and big data developers and other engineering teams to ensure the infrastructure can serve current and future needs.
- Set up monitoring systems; create and maintain runbooks.
- Participate in 24x7 on-call support on a rotational basis as needed.
- Influence, create and contribute to the automation platform.
- Take complete ownership of assigned modules and execute them.
Skills and Requirements
- Bachelor’s or Master’s degree (B.Tech/M.Tech).
- 4–6 years of overall experience in DevOps.
- Strong understanding of the security, transport, and application layers.
- Prior experience setting up instances in data center and cloud environments, preferably AWS.
- Excellent knowledge of Unix operating systems (CentOS preferred) and very good system troubleshooting skills.
- Experience working with web, internet, load-balancing, and big data technologies.
- Strong administration skills across big data ecosystems - Spark, Kafka, Airflow - and NoSQL databases such as MongoDB.
- Proficient with monitoring tools such as Nagios, Graphite, Cacti, and Ganglia.
- Experience working with database technologies - Redis, MySQL, MongoDB, and Cassandra.
- Must have experience with configuration management tools - Puppet, Chef, Fabric, and Ansible.
- Must have experience in at least one scripting language - Python, Ruby, or Perl - Python preferred.
- Experience with the software engineering lifecycle and handling large-scale deployments.
- Hands-on experience with Continuous Integration/Continuous Delivery tools such as Jenkins, Nexus, Maven, and Ant, with the ability to own releases.
- Exceptional problem-solving, analytical, and organisational skills with a detail-oriented attitude.
- Passion for learning new technologies.