REMOTE opportunity with a FULL suite of benefits
Title: Senior Data Engineer
Location: Phoenix, AZ
You will be a project leader and independent contributor on a fast-growing Data Engineering team pursuing a vision of analytics-driven mining. Your expertise in data engineering and software engineering will enable and empower our organization to build and deploy data-driven solutions to production. We understand that our data does not reach its full potential until it is analyzed and its insights are effectively communicated to the enterprise. You will work in close collaboration with mining operations, subject matter experts, data scientists, and software engineers to develop advanced, highly automated data products. You will be a champion of DataOps and agile practices, actively participating in project teams to drive value.
- Agile Project Work: Work as a project leader in cross-functional, geographically distributed agile teams of highly skilled data engineers, software/machine learning engineers, data scientists, DevOps engineers, designers, product managers, technical delivery teams, and others to continuously innovate analytic solutions.
- Design, develop, and review real-time and bulk data pipelines from a variety of sources (streaming data, APIs, data warehouses, messages, images, video, etc.) while also coaching junior team members.
- Ensure the project team is following established design patterns for data ingest, transformation, and egress
- Develop documentation of Data Lineage and Data Dictionaries to create a broad awareness of the enterprise data model and its applications
- Apply best practices within DataOps (Version Control, PR-Based Development, Schema Change Control, CI/CD, Deployment Automation, Test Automation, Shift-Left Security, Loosely Coupled Architectures, Monitoring, Proactive Notifications)
- Problem Solving/Project Leadership: Provide thought leadership in problem solving to enrich possible solutions by constructively challenging paradigms and actively soliciting other opinions. Actively participate in R&D initiatives
- Architecture: Utilize modern cloud technologies and employ best practices from DevOps/DataOps to produce enterprise-quality production Python and SQL code with minimal errors. Identify code optimization opportunities during code review sessions, direct their implementation, and proactively pull in external experts as needed.
- Self-Development: Flexibly seek out new work or training opportunities to broaden experience. Independently research latest technologies and openly discuss applications within the department.
- Perform other duties as requested.
- Bachelor’s degree in engineering, computer science, an analytical field (Statistics, Mathematics, etc.), or a related discipline and five (5) years of relevant work experience
- Master’s degree in engineering, computer science, an analytical field (Statistics, Mathematics, etc.), or a related discipline and three (3) years of relevant work experience
- Ph.D. in engineering, computer science, an analytical field (Statistics, Mathematics, etc.), or a related discipline and one (1) year of relevant work experience
- Strong experience in at least three of the following areas:
- Knowledgeable Practitioner of SQL development with experience designing high quality, production SQL codebases
- Knowledgeable Practitioner of Python development with experience designing high quality, production Python codebases
- Knowledgeable Practitioner in data engineering, software engineering, and ML systems architecture
- Knowledgeable Practitioner of data modeling
- Experience applying software development best practices to data engineering projects using Python and SQL, including Version Control, PR-Based Development, Schema Change Control, CI/CD, Deployment Automation, Test-Driven Development/Test Automation, Shift-Left Security, Loosely Coupled Architectures, Monitoring, and Proactive Notifications
- Data science experience in data wrangling, model selection, model training, model validation (e.g., Operational Readiness Evaluator and Model Development and Assessment Framework), and deployment at scale
- Working knowledge of Azure Stream Architectures, DBT, Schema Change tools, Data Dictionary tools, Azure Machine Learning Environment, GIS Data
- Working knowledge of Software Engineering and Object-Oriented Programming principles
- Working knowledge of Distributed Parallel Processing Environments such as Spark or Snowflake
- Working knowledge of problem solving/root cause analysis on Production workloads
- Working knowledge of Agile, Scrum, and Kanban
- Working knowledge of workflow orchestration using tools such as Airflow, Prefect, Dagster, or similar tooling
- Working knowledge of CI/CD and automation tools such as Jenkins or Azure DevOps
- Experience with containerization tools such as Docker
- Strong verbal and written communication skills in English
- Our client promotes a drug/alcohol-free work environment through the use of mandatory pre-employment drug testing and on-going random drug testing as allowed by applicable state laws