Data Engineer Job Responsibilities:
• Develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity.
• Collaborates with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
• Implements processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
• Writes unit/integration tests, contributes to engineering wiki, and documents work.
• Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
• Works closely with a team of frontend and backend engineers, product managers, and analysts.
• Designs data integrations and a data quality framework.
• Designs and evaluates open source and vendor tools for data lineage.
• Works closely with all business units and engineering teams to develop a strategy for long-term data platform architecture.
• Builds algorithms and prototypes.
• Combines raw information from different sources.
• Explores ways to enhance data quality and reliability.
• Identifies opportunities for data acquisition.
• Develops analytical tools and programs.
Requirements and skills:
• Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
• 5+ years of experience in a Data Engineer role or in a similar role
• Technical expertise with data models, data mining, and segmentation techniques
• Knowledge of programming languages (e.g. Java and Python)
• Hands-on experience with SQL database design
• Strong numerical and analytical skills
Experience using the following software/tools:
• Experience with big data tools: Hadoop, Spark, Kafka, etc.
• Experience with relational SQL and NoSQL databases, including MySQL and MongoDB
• Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
• Experience with AWS cloud services: EC2, EMR, RDS, Redshift
• Experience with stream-processing systems: Storm, Spark Streaming, etc.
• Experience with object-oriented/functional scripting languages: Python, JavaScript (Node.js)
• Experience designing, building, and maintaining data processing systems
• Experience working with either a MapReduce or an MPP system at any scale
Perks and Benefits:
• Flexible hours
• Remote working
• Medical benefits allowance
• Laptop allowance
• Public holidays as per Singapore law, with flexibility