Senior Data Engineer

Valukoda has a client looking for a contract-to-hire Senior Data Engineer; the role can be fully remote anywhere in the US. The Senior Data Engineer will implement strategies aimed at acquiring data and promoting the development of new insights across the business. The Senior Data Engineer owns and extends the business's data pipeline through the collection, storage, processing, and transformation of large datasets.

Duties and Responsibilities:

  • Contribute to the continual improvement of the business's data platforms through observation and well-researched knowledge
  • Proficient with programming in at least one programming language (Python, Bash scripting, etc.)
  • Proficient with horizontally distributed database technologies
  • Strong ETL proficiency using GUI-based tools or code-based patterns
  • Proficient with data-modeling principles
  • Keeps track of industry best practices and trends, and applies that knowledge to take advantage of process and system improvement opportunities
  • Experience building and managing data warehouses to fulfill reporting requirements
  • Excellent verbal and written communication
  • Results-driven individual and strategic thinker
  • Proactive, requiring minimal supervision; highly organized
  • Ability to handle multiple tasks and meet tight deadlines
  • Can communicate effectively and simplify complexity
  • Comfortable working in a collaborative setting and with senior departmental leadership
  • Ability to combine experience, knowledge, perspective, and awareness to make sound business decisions
  • Strong problem-solving skills
  • Adaptability and flexibility in changing situations
  • Passion for delivering compelling solutions that exceed client expectations

Requirements and Qualifications:

  • Understanding of data storage strategies and optimizing read/write trade-offs
  • Familiarity with “Lambda Architecture” and streaming / batch data
  • Understands data design implications and communicates risks/trade-offs
  • Understands algorithmic complexity and “Big O” notation
  • Understands data partitioning strategies
  • Understands clustering and parallelism strategies
  • Understands MapReduce and the popular technologies built around it
  • Understands DAGs and their use in ETL
  • Experience with Amazon Redshift, RDS (PostgreSQL), S3, Kinesis, DynamoDB, and related AWS data products
  • Bachelor’s degree in Computer Science, Mathematics, Statistics or a related field
  • 5+ years of related experience

Please email to apply.