
Big Data Developer

  • Location: Tampa, Hillsborough County, Florida, 33610
  • Job Type: Contract

Posted 2 months ago

Long-term contract opportunity in Tampa, FL for a Big Data Developer. The role requires hands-on design, building, and implementation of robust, scalable applications that fulfill business requirements, as well as creating and developing the technical design solutions that meet both technical and business needs.
 
Responsibilities:
  • Design, develop and implement robust and scalable data pipelines on Big Data technology stack.
  • Perform defect analysis and support offshore development teams in resolving defects/production issues.
  • Perform platform upgrades across SDLC environments from an application standpoint.
  • Coordinate and work with platform engineering team, support teams and architects to triage technical issues and identify resolutions.
  • Triage production issues and ensure business-as-usual (BAU) operations.
  • Proactively notify stakeholders of risks, bottlenecks, problems, issues, and concerns.
  • Comply with the company’s system development lifecycle and information security requirements.
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation.
  • Design, develop, and enhance the existing framework.
  • Lead technical brainstorming on the best approaches to tackle system changes.
  • Contribute ideas to the evolution of the system architecture.
  • Contribute ideas to the refinement of the team development tools and processes.
  • Manage time and changing priorities in a dynamic environment.
  • Provide quick turnaround to software issues and management requests.
  • Assimilate key issues and concepts and come up to speed quickly.
  • Develop prototypes and proof of concepts for the selected solutions.
 
Qualifications:
  • 5 to 8 years of application development experience through full lifecycle.
  • Bachelor’s degree (in Science, Computers, Information Technology or Engineering).
  • Strong experience in Hadoop, Hive, SQL, Spark with solid understanding of ETL/Data Pipelines.
  • Experience with the Autosys job scheduler.
  • Architecture design experience.
  • Prior ETL development experience in building data warehouses and data pipelines.
  • Command of and hands-on experience with Hadoop, Hive, Spark, Impala, Sqoop, and other technologies in Cloudera’s CDH distribution.
  • Prior experience building solutions and reusable components on Big Data platforms.
  • Prior experience designing and developing data ingestion processes, data quality checks, and reconciliation rules.
  • Strong knowledge of performance tuning in Hadoop ecosystems.
  • Experience in defining architecture and technical design for use cases in Hadoop ecosystems.
  • Experience with Red Hat Linux and UNIX/Bash shell scripting.
  • Strong knowledge of and experience with Python and PySpark preferred.
  • Strong knowledge of and experience with software release lifecycle management tools such as Bitbucket and Jenkins.
  • Machine learning experience.
  • Anti-money laundering (AML) domain knowledge.
 
About our client
 
Our client stands as one of the world’s most global banks and a trusted brand with over 200 years of continuously evolving financial services. Its teams provide unique insights to more than 200 million clients and enable progress all over the world.
 
While growing your career, you will work alongside some of the smartest minds in the industry who are excited to share their knowledge and to learn from you. From analysts to architects, developers to data scientists, our client’s employees operate as one team where each voice is heard, and each perspective is appreciated.