We are the Big Data Platform team, responsible for building the company's core data infrastructure and computing platform. Our mission is to provide stable and efficient data processing capabilities that support data-driven decision-making and intelligent applications across the business. Here, you will work with technologies for processing massive datasets and contribute to building key data pipelines.
Key Responsibilities
- Design data models and develop / optimize ETL processes for both batch and real-time data warehouses.
- Participate in building and maintaining large-scale data computing and storage platforms (e.g., Hadoop / Spark / Flink).
- Develop data products, ensure data quality, and improve the stability and efficiency of data services.
- Understand business requirements, and design / develop data applications and solutions for various business scenarios.
- Continuously optimize data architecture and processing pipelines to solve performance challenges in big data computing.
Requirements
- Bachelor's degree or above in Computer Science, Software Engineering, Mathematics, or a related field.
- Proficient in SQL and at least one programming language (e.g., Java / Scala / Python).
- Understanding of common components in the big data ecosystem (e.g., Hadoop, Hive, Spark, Kafka, Flink, Doris).
- Solid foundation in data structures and algorithms, with a strong interest in solving challenging technical problems.
- Good team player with strong communication skills, a high sense of responsibility, and self-motivation.
- Project experience or contributions to open-source projects in related areas are a plus.
- Open to candidates graduating between January 2025 and August 2026.
- Fluent in both English and Chinese.
Only shortlisted candidates will be contacted.