Location: Remote (Based in Vietnam)
Hyperion360 is a software outsourcing company. We have helped many companies build and manage remote teams of dedicated full-time software engineers responsible for products that have generated billions in revenue. At Hyperion360, we offer flexibility in working hours and working location (fully remote). You will work closely with the client to refine feature design and functionality.
Summary
Our client is at the forefront of innovating solutions to safeguard one of the most critical pieces of infrastructure: the electrical grid. Recognizing the grid's pivotal role in daily life, their mission is to enhance grid reliability and safety through advanced monitoring and analysis systems. Their cutting-edge technology uses high-precision sensor arrays to continuously assess the electrical and mechanical behavior of grid assets, enabling potential faults to be identified and mitigated preemptively.
By bringing together top-tier expertise in data science, engineering, and technology development, they have created a system proven with major utilities to reduce customer outage durations and bolster safety measures. As the demand for power grows, their commitment is to protect today's grid while building the grid of tomorrow.
What you’ll do:
You will be responsible for constructing a robust data pipeline to facilitate event streaming, improving SLAs, stability, and reliability. Your work will enable seamless data flows so that data engineers can focus on algorithms and business logic. Responsibilities include developing core components of a Kafka-based event streaming system and contributing to the continuous improvement of our data infrastructure.
- Transform the data infrastructure into an event-driven architecture using Kafka and related technologies.
- Develop a Python-based producer to generate and stream synthetic sensor data to Kafka topics (see the producer sketch after this list).
- Create a Python-based consumer that reads events, performs data enrichment using a cache or databases, and reinjects the enriched data into another Kafka topic (see the enrichment consumer sketch after this list).
- Build an end-of-pipeline consumer to process and analyze streamed events.
- Collaborate with cross-functional teams to integrate event streaming solutions into existing infrastructure.
- Contribute to the setup and maintenance of infrastructure as code (IaC) and CI/CD pipelines using ArgoCD and Kubernetes.
- Optimize data workflows and ensure their reliability and efficiency.
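To make the producer responsibility concrete, here is a minimal sketch assuming the kafka-python client; the broker address, topic name, and sensor schema are illustrative placeholders, not the client's actual setup.

```python
# Minimal synthetic sensor-data producer (sketch, assuming kafka-python).
import json
import random
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def synthetic_reading(sensor_id: str) -> dict:
    """Generate one fake grid-sensor measurement."""
    return {
        "sensor_id": sensor_id,
        "timestamp": time.time(),
        "voltage": random.gauss(230.0, 2.0),   # volts
        "current": random.gauss(10.0, 0.5),    # amps
        "vibration": random.random(),          # normalized 0-1
    }

if __name__ == "__main__":
    while True:
        reading = synthetic_reading(f"sensor-{random.randint(1, 50)}")
        # Key by sensor id so readings from one sensor land in one partition.
        producer.send(
            "raw-sensor-events",
            key=reading["sensor_id"].encode("utf-8"),
            value=reading,
        )
        time.sleep(0.1)
```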
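And a matching sketch of the enrichment consumer, again assuming kafka-python plus a Redis cache for lookups; the topic names, cache layout, and metadata fields are likewise assumptions made for illustration.

```python
# Enrichment consumer sketch: read raw events, attach cached asset metadata,
# and re-inject the enriched events into a downstream topic.
import json

import redis
from kafka import KafkaConsumer, KafkaProducer

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

consumer = KafkaConsumer(
    "raw-sensor-events",
    bootstrap_servers="localhost:9092",
    group_id="enrichment-service",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def enrich(event: dict) -> dict:
    """Attach asset metadata looked up from the cache."""
    metadata = cache.hgetall(f"asset:{event['sensor_id']}")
    # On a cache miss, a real implementation would fall back to the database
    # and backfill the cache; here the event is simply tagged as unenriched.
    event["asset_metadata"] = metadata or {"status": "unknown"}
    return event

for message in consumer:
    enriched = enrich(message.value)
    # Re-inject the enriched event for end-of-pipeline consumers.
    producer.send("enriched-sensor-events", value=enriched)
```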
Your background and skills will include:
- Minimum of 5 years of experience in a Software Engineer role, with a focus on the technologies listed in this posting.
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- Proven experience as a senior Python engineer with a strong focus on event streaming.
- Proficiency with Kafka for building event-streaming applications.
- Familiarity with Jupyter notebooks.
- Experience with CI/CD tools and infrastructure management, particularly with ArgoCD and Kubernetes.
- Strong problem-solving skills and the ability to work collaboratively in a fast-paced environment.
- Excellent communication skills to effectively work with cross-functional teams.
- Full professional proficiency in English.
- Experience with Apache Flink is a plus.
- Experience with Elasticsearch is a plus.
- Experience with Databricks or Snowflake is a plus.