Analytics Engineer (Contractor)

The team is seeking an experienced analytics engineer to help grow our modern analytics stack by building data transformation pipelines, reusable data models, visualizations, and queries for the company and our customers.

As a member of our Product Science team, you will establish yourself as a key expert and evangelist on our data, working cross-functionally to support data self-service and generate reusable data models and assets. The data assets created by this team reliably power insights about our business in support of key initiatives. You’ll develop customer-facing data products using SQL that become part of the platform itself.

This role balances understanding what makes our business and our customers’ businesses tick, applying data modeling, generating analysis and insights, and telling the data story. We’re looking for someone who communicates and collaborates effectively with others, and is passionate about working with data.

What you’ll do:

  • Collaborate and build trust with business teams within the company as well as our customers to understand data requirements and achieve successful analytics outcomes.
  • Translate data requirements into performant SQL models using dbt (data build tool).
  • Be our eyes and ears on dbt OSS and contribute significantly to our dbt code base. You will own our dbt improvements roadmap, including implementing new features and best practices that you learn from the dbt community.
  • Take ownership of our data quality - you will iterate on our existing data test suites written in dbt to ensure quality of our data products.
  • Take ownership of the health of our in-warehouse SQL transformations - if you find a model that takes a long time to run, you will suggest improvements, discuss potential solutions, and implement them.
  • Work within our data orchestration platform - Most of our data transformation task invocations are managed by Dagster, which is written in Python, so you are expected to contribute to our Dagster code base regularly as you write new dbt transformations.
  • Maintain our data catalog/data documentation - We are customer zero for our own product. We love using our data catalog to make collaboration around data easier for our internal users. You are expected to maintain our data catalog including documenting our data models and lineage.
  • Develop BI data reports, visualizations, and queries to support measuring our KPIs and supporting the success of our SaaS business.
  • Participate in evaluating and implementing new data tools - We believe in relentlessly modernizing our data platform, and evaluating and implementing new tools is part of that effort. You will lend your voice and expertise as we go through this journey.

Our data stack:

  • Segment as our customer data platform
  • AWS cloud services and Stitch/Fivetran for extraction and load jobs
  • Dagster for workflow orchestration
  • Snowflake and dbt (data build tool) for in-warehouse transformations
  • Our own product for data catalog, governance, and collaboration
  • Tableau as our data serving and BI layer
  • CircleCI for CI/CD

Experience and capabilities you have:

  • 2+ years of core analytics engineering experience writing production-grade data warehouse transformation jobs using SQL and dbt (data build tool).
  • 2+ years of experience working with modern data warehouses such as Snowflake, BigQuery, or Redshift.
  • 2+ years of experience working with infrastructure-as-code tools such as Terraform.
  • Strong data modeling experience using star schema or other data modeling patterns.
  • Strong data testing experience using tools such as dbt tests and Great Expectations.
  • Strong communication and presentation skills with the ability to explain concepts and conclusions around data and insights in a clear, concise, and compelling way.

Big pluses:

  • Experience writing data pipelines using Python.
  • Experience working with cloud infrastructure such as AWS/GCP/Azure.
  • Experience with Operational Analytics or Reverse ETL tools such as Hightouch, Census, etc.
  • Experience working with streaming data infrastructure such as AWS Kinesis, Kafka, Materialize, etc.
  • Experience working in SaaS or enterprise software companies in the data or analytics space.