Data Engineer

  • Location: Irving, Texas, 75039
  • Job Type: Contract

Posted 7 months ago

Terrific 12+ month contract opportunity in Irving, TX for a Data Engineer. This role will support the development, and eventual ownership, of a new system: the Safety and Soundness Unified Reporting and Monitoring (SSURM) system. The engineer will work with existing resources to form the core team that builds this platform from the ground up, providing creative solutions in a Scrum/Agile work environment. Once the initial 1.0 version is built, this person will become one of two primary core owners of the platform, and a support and development team will be built around their expertise. This is an exciting opportunity to work on an important new platform that will have a major impact on the reporting and monitoring technology for the safety and soundness (technology risk) business and the future architecture in this area.
Responsibilities include:
•    Act as the subject matter expert on data pipelines for the DevOps-focused team and external stakeholders.
•    Analyze, code, test, and implement data solutions and controls.
•    Build a close relationship with clients and stakeholders to understand the use cases for the platform and prioritize work accordingly.
•    Evaluate data sourcing to the new platform and build the data models and sourcing structure to support it. This role will become a key owner of the safety and soundness technology platform as it evolves along with the development team.
•    Work with business stakeholders as end consumers of the data to ensure we are meeting their requirements.
•    Contribute to the team’s strategy around development and deployment of best practices.
Requirements include:
•    5+ years of experience building solutions to improve or replace manual data sourcing processes.
•    5+ years of experience building solutions with machine learning, graph analytics, and other advanced analytics techniques.
•    In-depth knowledge of data pre-processing, feature engineering and modeling.
•    In-depth knowledge of scalable model deployment, model performance monitoring, and modeling pipeline automation.
•    Hands-on experience with Python/PySpark/Scala and basic libraries for machine learning.
•    Knowledge of Agile (Scrum) development methodology preferred.
•    Prefer hands-on experience with XGBoost, TensorFlow, scikit-learn, PySpark, and Spark GraphX.
•    Prefer experience working with Anaconda, Jupyter notebooks, MongoDB, and Oracle DB.
•    Prefer proficiency in Java or Python programming with prior Apache Beam/Spark experience.
•    Prefer Continuous Integration / Scrum experience.
•    Prefer experience in consumer banking, financial services, or risk controls domains.
•    Basic knowledge of the Hadoop ecosystem and Big Data technologies is a plus (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr).