Software Data Engineer
Job Description
Summary
The people here at Apple don't just create products - they create the kind of wonder that's revolutionized entire industries. It's the diversity of those people and their ideas that inspires the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts.
Apple is seeking an experienced, detail-minded data engineer to join our worldwide business development and strategy team. If you are someone who looks forward to solving complex business problems and is excited about this opportunity, please reach out to us.
Description
You will architect, develop, and test large-scale data solutions that provide efficient analytical and reporting capabilities across Apple’s global and regional sales and finance teams. You will develop highly scalable data pipelines to load data from various source systems, using Apache Airflow to orchestrate, schedule, and monitor the workflows. You will build generic, reusable solutions that scale, drawing on a variety of technologies and frameworks to solve our complex business requirements. You will also be expected to understand existing solutions, fine-tune them, and support them as needed. Data quality is our goal, and we expect you to meet our high standards for data and software quality. We are a rapidly growing team with plenty of interesting technical and business challenges to solve. We seek a self-starter who learns fast, adapts well to changing requirements, and works well with cross-functional teams.
Minimum Qualifications
- 8+ years of hands-on data modeling and data engineering experience
- Strong expertise in dimensional modeling and data warehousing
- Database design and development experience with relational or MPP databases such as Postgres, Oracle, Teradata, or Vertica
- Experience designing and developing custom ETL pipelines using SQL and scripting languages (Python/Shell/Golang)
- Proficiency in advanced SQL and performance tuning
- Hands-on experience with big data platforms such as Spark, Dremio, Hadoop, MapReduce, and Hive
- Experience with Java, Scala, and Python
- Experience with cloud computing platforms such as AWS and Google Cloud
- Experience working with APIs
- Ability to learn and adapt to new tools and technologies
- Analytical and mathematical mind, capable of evaluating and solving complex problems
- Ability to work individually or as part of a team
- Ability to learn quickly in a fast-paced environment
- Excellent oral and written communication skills
Preferred Qualifications
- Familiarity with version control systems, CI/CD practices, testing, and database/software migration tools
- Experience with real-time data processing using Apache Kafka or Spark Streaming is a big plus