Elyon International, 1/8/2024

We are looking for a Data Engineer to build a scalable data platform. As a Data Engineer, you will be part of a team that builds the data transport, collection, and storage layers, and exposes services that make data a first-class citizen. You will help architect, build, and launch scalable data pipelines to support growing data processing and analytics needs. You'll have ownership of our core data pipeline that powers top-line metrics. You will also use your data expertise to help evolve data models in several components of the data stack. Your efforts will give teams such as Analytics, Data Science, Marketplace, and many others access to business and user-behavior insights drawn from huge amounts of data. You will report to a Data Engineering Manager.

Responsibilities:
- Own the core company data pipeline, scaling up the data processing flow to meet rapid data growth
- Evolve the data model and data schema based on business and engineering needs
- Implement systems tracking data quality and consistency
- Develop tools supporting self-service data pipeline management (ETL)
- Tune SQL and Spark jobs to improve data processing performance

Requirements:
- 5+ years of relevant professional experience
- Strong skills in a scripting language (Python preferred; Ruby, Bash)
- Strong experience with Spark
- 2+ years of experience with workflow management tools (Airflow preferred; Oozie, Azkaban, UC4)
- Experience with the Hadoop (or similar) ecosystem (Spark and Presto preferred; Yarn, HDFS, Hive, Pig, HBase, Parquet)
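To give a flavor of the "data quality and consistency" responsibility, here is a minimal illustrative sketch in Python (the posting's preferred scripting language). The function names and thresholds are hypothetical, not part of the role description; real pipelines would typically run checks like these inside a workflow tool such as Airflow.

```python
# Hypothetical examples of simple data-quality checks; names and the 20%
# drift threshold are illustrative assumptions, not from the posting.

def check_nulls(rows, required_fields):
    """Return indices of rows missing any required field."""
    bad = []
    for i, row in enumerate(rows):
        if any(row.get(f) is None for f in required_fields):
            bad.append(i)
    return bad

def row_count_drift(today_count, yesterday_count, tolerance=0.2):
    """Flag a run whose row count moved more than `tolerance`
    (20% by default) relative to the previous run."""
    if yesterday_count == 0:
        return today_count != 0
    return abs(today_count - yesterday_count) / yesterday_count > tolerance
```

Checks like these would normally be scheduled per pipeline run, with failures surfaced to the owning team rather than silently dropped.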