Job Title: Data Engineer
Organisation: Raising The Village (RTV)
Duty Station: Mbarara, Uganda
About the Organisation:
Raising The Village International (RTV) is a Canadian non-profit organization focused on ending extreme poverty by eliminating immediate barriers of scarcity, nurturing income-generation activities, and building local capacity, while moving communities toward economic self-sufficiency. Raising The Village is a fast-growing organization: our East Africa and North American teams work together to lift communities out of ultra-poverty in last-mile villages. We operate at the intersection of direct implementation and advanced data analytics to inform progress, decision-making, and impact.
Job Summary: The Data Engineer will play a crucial role in the VENN department by designing, building, and maintaining scalable data pipelines, ensuring efficient data ingestion, storage, transformation, and retrieval. The role involves working with large-scale structured and unstructured data, optimizing workflows, and supporting analytics and decision-making.
The ideal candidate will have deep expertise in data pipeline orchestration, data modeling, data warehousing, and batch/stream processing. They will work closely with cross-functional teams to ensure data quality, governance, and security while enabling advanced analytics and AI-driven insights to support Raising The Village’s mission to eradicate ultra-poverty.
Key Duties and Responsibilities:
Data Pipeline Development & Orchestration
- Design, develop, and maintain scalable ETL/ELT pipelines for efficient data movement and transformation.
- Develop and maintain workflow orchestration for automated data ingestion and transformation.
- Implement real-time and batch data processing solutions using appropriate frameworks and technologies.
- Monitor, troubleshoot, and optimize pipelines for performance and reliability.
Data Architecture & Storage
- Build and optimize data architectures, warehouses, and lakes to support analytics and reporting.
- Work with both cloud and on-prem environments to leverage appropriate storage and compute resources.
- Implement and maintain scalable and flexible data models that support business needs.
Data Quality, Security & Governance
- Ensure data integrity, quality, security, and compliance with internal standards and industry best practices.
- Support data governance activities, including metadata management and documentation to enhance usability and discoverability.
- Collaborate on defining and enforcing data access policies across the organization.
Cross-functional Collaboration & Solutioning
- Work closely with cross-functional teams (analytics, product, programs) to understand data needs and translate them into technical solutions.
- Support analytics and AI teams by providing clean, accessible, and well-structured data.
Innovation & Continuous Improvement
- Research emerging tools, frameworks, and data technologies that align with RTV’s innovation goals.
- Contribute to DevOps workflows, including CI/CD pipeline management for data infrastructure.
Qualifications, Skills and Experience:
- Education: Bachelor’s degree in Computer Science, Data Engineering, or a related field; a Master’s degree is a plus.
- Experience: 4+ years of hands-on work in data engineering, building and maintaining data pipelines.
- Programming: Strong in SQL and Python—you can clean, process, and move data like a pro.
- Data Tools: Experience using workflow tools like Airflow, Prefect, or Kestra.
- Data Transformation: Comfortable working with transformation tools such as dbt or Dataform.
- Data Systems: Hands-on with data lakes and data warehouses—you’ve worked with tools like BigQuery, Snowflake, Redshift, or S3.
- APIs: Able to build and work with APIs (e.g., REST, GraphQL) to share and access data.
- Processing: Know your way around batch processing tools like Apache Spark and real-time tools like Kafka or Flink.
- Data Design: Good understanding of data modeling, organization, and indexing to keep things fast and efficient.
- Databases: Familiar with both relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB) databases.
- Cloud: Experience with major cloud platforms like AWS, Google Cloud, or Azure.
- DevOps: Know your way around Docker, Terraform, Git, and CI/CD tools for smooth deployments and testing.
Skills & Abilities:
- Strong ability to design, implement, and optimize scalable data pipelines.
- Experience with data governance, security, and privacy best practices.
- Ability to work collaboratively and engage with diverse stakeholders.
- Strong problem-solving and troubleshooting skills.
- Ability to effectively manage conflicting priorities in a fast-paced environment.
- Strong writing skills for technical reports and process documentation.
How to Apply:
All qualified and interested candidates should apply online at the link below.