This is a remote, project-based role for data engineering professionals with 1–3 years of work experience. You will complete tasks similar to those performed in data engineering, data infrastructure, ETL development, or analytics engineering roles. Work runs over the next 2–3 weeks, is asynchronous, and is assigned on a project-by-project basis, with an expected commitment of 10–20 hours per week for the projects you accept. This position offers highly competitive pay, exposure to real-world data pipeline and infrastructure challenges, and an excellent opportunity to strengthen your resume.
Commitment: 10–20 hours/week | Pay: $60–$100/hr | Type: Contract, project-based
Responsibilities
- Document technical solutions, data flows, and pipeline architectures
- Collaborate asynchronously on data integration and warehousing projects
- Build and deploy data infrastructure using cloud services and modern tools
- Troubleshoot and resolve data pipeline failures and performance issues
Required Qualifications
- 1–3 years of professional experience in data engineering, ETL development, or related fields
- Strong proficiency in one or more of the following: dbt, Kafka, Spark, Snowflake, Redshift
- Bachelor's degree in Computer Science, Engineering, Information Systems, or related technical field
- Strong problem-solving skills and ability to work independently on technical tasks
Why Apply
- Fantastic Pay & Growth Opportunities – project-based pay starts at an estimated $60/hour and can go up to $100/hour
- Flexible Time Commitment – Work on your schedule while gaining valuable experience
- Startup Exposure – Work directly with an early-stage Y Combinator-backed company, gaining hands-on experience that sets you apart
- Portfolio Building – Gain experience with modern data architectures and tools