Client: Airbnb
Title: Data Engineers. Remote…
JD:
Expertise:
5-9+ years of relevant industry experience with a BS/Masters, or 2+ years with a PhD
Experience with distributed processing technologies and frameworks, such as Hadoop, Spark, Kafka, and distributed storage systems (e.g., HDFS, S3)
Demonstrated ability to analyze large data sets to identify gaps and inconsistencies, provide data insights, and advance effective product solutions
Expertise with ETL schedulers such as Apache Airflow, Luigi, Oozie, AWS Glue, or similar frameworks
Solid understanding of data warehousing concepts and hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and columnar databases (e.g., Redshift, BigQuery, HBase, ClickHouse)
Excellent written and verbal communication skills
A Typical Day:
Design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, financial details, and external data feeds.
Develop data models that enable the efficient analysis and manipulation of data for merchandising optimization. Ensure data quality, consistency, and accuracy.
Build scalable data pipelines (SparkSQL & Scala) leveraging the Airflow scheduler/executor framework (see the sketch after this list)
Collaborate with cross-functional teams, including Data Scientists, Product Managers, and Software Engineers, to define data requirements, and deliver data solutions that drive merchandising and sales improvements.
Contribute to the broader Data Engineering community at Airbnb, influencing tooling and standards that improve culture and productivity.
Improve code and data quality by leveraging and contributing to internal tools to automatically detect and mitigate issues.
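For illustration, a minimal sketch of the kind of Airflow-scheduled SparkSQL pipeline described in this list, assuming Airflow 2.x and PySpark; the DAG, table, and column names are hypothetical, not Airbnb's actual pipelines:

    # Hypothetical daily SparkSQL pipeline on Airflow 2.x.
    # DAG and table names are illustrative, not Airbnb's real ones.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator
    from pyspark.sql import SparkSession

    def build_daily_metrics(ds: str, **_) -> None:
        # Aggregate one day of raw listing events into a metrics partition.
        spark = SparkSession.builder.appName("daily_merch_metrics").getOrCreate()
        spark.sql(f"""
            INSERT OVERWRITE TABLE analytics.daily_merch_metrics
            PARTITION (ds = '{ds}')
            SELECT listing_id,
                   COUNT(*)                 AS views,
                   SUM(CAST(booked AS INT)) AS bookings
            FROM raw.listing_events
            WHERE ds = '{ds}'
            GROUP BY listing_id
        """)
        spark.stop()

    with DAG(
        dag_id="daily_merch_metrics",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(
            task_id="build_daily_metrics",
            python_callable=build_daily_metrics,
        )

In practice the Spark job would typically be submitted to a cluster (e.g., via a SparkSubmitOperator) rather than run inside the Airflow worker; it is compressed here to show the shape of the work.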
Skill Sets: Python, SQL (expert level), Spark and Scala (intermediate).
Skills
Not every Data Engineer will have all of these skills, but we expect most Data Engineers to be strong in a significant number of them to be successful at Airbnb.
Data Product Management
- Effective at building partnerships with business stakeholders, engineers and product to understand use cases from intended data consumers
- Able to create & maintain documentation to support users in understanding how to use tables/columns
Data Architecture & Data Pipeline Implementation
- Experience creating and evolving dimensional data models & schema designs to structure data for business-relevant analytics (see the sketch after this list)
- Strong experience using an ETL framework (e.g., Airflow) to build and deploy production-quality ETL pipelines
- Experience ingesting and transforming structured and unstructured data from internal and third-party sources into dimensional models
- Experience with dispersal of data to OLTP stores (e.g., MySQL, Cassandra, HBase) and fast analytics solutions
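As a minimal illustration of the dimensional modeling bullets above, a PySpark sketch that shapes raw booking events into a star-schema fact table joined to a date dimension; every table and column name here is an illustrative assumption, not an actual Airbnb schema:

    # Hypothetical star-schema build: raw bookings -> fact table.
    # Table/column names are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("bookings_dim_model").getOrCreate()

    raw = spark.table("raw.bookings")             # one row per booking event
    dim_date = spark.table("warehouse.dim_date")  # surrogate-keyed date dimension

    fact_bookings = (
        raw
        .withColumn("booking_date", F.to_date("created_at"))
        .join(dim_date, F.col("booking_date") == dim_date.calendar_date)
        .select(
            dim_date.date_key,   # FK into dim_date
            "listing_id",        # FK into a listing dimension
            "guest_id",          # FK into a guest dimension
            F.col("amount_usd").alias("gross_booking_value"),
        )
    )

    fact_bookings.write.mode("overwrite").saveAsTable("warehouse.fact_bookings")

Keeping the fact table narrow (foreign keys plus measures) and pushing descriptive attributes into dimensions is what keeps downstream merchandising analytics cheap to join and aggregate.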
Data Systems Design
- Strong understanding of distributed storage and compute (S3, Hive, Spark)
- Knowledge of distributed system design, such as how MapReduce and distributed data processing work at scale
- Basic understanding of OLTP systems like Cassandra, HBase, Mussel, Vitess, etc.
Coding
- Experience building batch data pipelines in Spark
- Expertise in SQL
- General Software Engineering (e.g., proficiency coding in Python, Java, or Scala)
- Experience writing data quality unit and functional tests (see the test sketch after this list)
- Proficiency in Salesforce and understanding of its data structure. (Optional)
- Knowledge of Salesforce Bulk Operators. (Optional)
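To illustrate the data quality testing bullet above, a minimal pytest sketch; the transform under test (dedupe_bookings) and its contract are hypothetical, assuming local PySpark and pytest are available:

    # Hypothetical data-quality unit test with pytest + local PySpark.
    # The transform under test (dedupe_bookings) is an illustrative assumption.
    import pytest
    from pyspark.sql import SparkSession

    @pytest.fixture(scope="session")
    def spark():
        return (
            SparkSession.builder
            .master("local[1]")
            .appName("dq-tests")
            .getOrCreate()
        )

    def dedupe_bookings(df):
        # Keep one row per booking_id (illustrative contract).
        return df.dropDuplicates(["booking_id"])

    def test_dedupe_bookings(spark):
        df = spark.createDataFrame(
            [(1, "listing-a"), (1, "listing-a"), (2, "listing-b")],
            ["booking_id", "listing_id"],
        )
        result = dedupe_bookings(df)
        assert result.count() == 2                               # duplicates collapsed
        assert result.filter("booking_id IS NULL").count() == 0  # no null keys

Tests like this usually run in CI against a local SparkSession, with heavier functional checks (row counts, null rates, freshness) enforced in the pipeline itself.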