The International Rescue Committee (IRC) responds to the world’s worst humanitarian crises, helping to restore health, safety, education, economic wellbeing, and power to people devastated by conflict and disaster. Founded in 1933 at the call of Albert Einstein, the IRC is one of the world’s largest international humanitarian non-governmental organizations (INGOs), at work in more than 40 countries and 29 U.S. cities, helping people survive, reclaim control of their futures, and strengthen their communities. A force for humanity, IRC employees deliver lasting impact by restoring safety, dignity, and hope to millions. If you’re a solutions-driven, passionate change-maker, come join us in positively impacting the lives of millions of people worldwide for a better future.
Background/IRC Summary
Technology and Operations supports the organization’s work by providing reliable and scalable solutions for the IRC’s offices around the world. The Data Team at IRC is responsible for designing and delivering the IRC’s global data strategy, along with the systems and products that support it.
Job Summary
The Data Engineer will play a pivotal role in developing, maintaining, and optimizing our data infrastructure, with a focus on data integration, ETL/ELT processes, and the management of data warehouses and lakehouses.
The successful candidate will be responsible for building and maintaining data pipelines across Databricks, Synapse, and Fabric environments, with tools such as DBT and Databricks pipelines. Key responsibilities include ensuring system readiness in terms of security, performance, and health; completing data loads; and implementing data models that support various business domains. This hands-on role demands strong technical expertise alongside excellent communication and collaboration skills.
Major Responsibilities
- Develop Python, SQL, and PySpark-based applications and data flows within Databricks.
- Build and maintain data pipelines using DBT and Databricks, ensuring efficient and scalable data processes (a minimal illustrative sketch follows this list).
- Design and implement real-time and batch data processing systems to support analytics, reporting, and business needs.
- Monitor and analyze data pipelines for performance, reliability, cost, and efficiency.
- Proactively address any issues or bottlenecks to ensure smooth operations.
- Identify opportunities for process improvements, including redesigning pipelines and optimizing data delivery mechanisms.
- Manage and maintain Azure cloud services and monitor alerts to ensure system availability and performance.
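For illustration only, the sketch below shows the kind of PySpark batch transformation this role involves on Databricks. All table and column names (raw.donations, analytics.donations_daily, amount_usd, and so on) are hypothetical placeholders, not references to actual IRC data structures.

```python
# Illustrative sketch: a minimal PySpark batch pipeline of the kind this role
# might build on Databricks. All table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("donations_daily_rollup").getOrCreate()

# Read a raw source table (hypothetical name).
raw = spark.table("raw.donations")

# Clean and aggregate: one row per country per day.
daily = (
    raw.filter(F.col("amount_usd") > 0)
       .withColumn("donation_date", F.to_date("created_at"))
       .groupBy("country_code", "donation_date")
       .agg(
           F.sum("amount_usd").alias("total_usd"),
           F.countDistinct("donor_id").alias("unique_donors"),
       )
)

# Persist to a curated Delta table (hypothetical name) for downstream
# analytics and reporting.
(daily.write.format("delta")
      .mode("overwrite")
      .partitionBy("donation_date")
      .saveAsTable("analytics.donations_daily"))
```

A comparable DBT model would express the same aggregation as a SQL SELECT, with DBT handling materialization and dependency ordering.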
Minimum Requirements
- Demonstrated ability to write SQL scripts (required).
- Some experience with Python (strong plus).
- Exposure to Databricks (significant advantage).
- Experience working in a cloud environment (strong plus).
- Experience with DBT Core or DBT Cloud (major plus).
- Ability to quickly learn and absorb existing and new data structures.
- Excellent interpersonal and communication skills (both written and verbal).
- Ability to work independently and collaboratively within a team.
Preferred Additional Requirements
- Experience with cloud platforms (Azure preferred).
- Databricks Data Engineer Certification or similar.
- Software development experience.
Key Working Relationships
- Data Team
- Enterprise system owners and the technical and analytics teams they lead
Position Reports to: Data Architect
Position Supervises: None
Travel Requirements
- Special projects may require travel to facilitate hands-on learning that will contribute to our data strategy efforts.
Working Environment
- Remote. Location is flexible but must be within an approved IRC country of operation.