Terrific Long-Term Contract Opportunity with a FULL suite of benefits!
As one of the largest financial institutions in the world, our client has been around for over 150 years and is continuously innovating in today's digital age. If you want to work for a company that's not only a household name, but also truly cares about satisfying customers' financial needs and helping people succeed financially, apply today.
Position: Hadoop Engineer
Location: Fremont, California, 94538
Term: 24 months
Key Responsibilities include:
- Provide primary production support for near-real-time data ETL jobs within large data warehouse applications. Support and maintain processes that improve the availability, scalability, SLAs, and efficiency of the Core Services platform and applications.
- Independently troubleshoot and analyze complex production problems related to data, network file delivery, server, and application issues, and provide recovery solutions. Drive and participate in postmortems to prevent repeat incidents.
- Solve problems in the Core Services systems and build out automation to prevent problem recurrence, with the goal of automating the response to all non-exceptional service conditions.
- Collaborate with business analysts and ETL development team members to resolve complex data design issues and provide optimal solutions that meet business requirements and benefit system performance.
- Collaborate with channel teams, including offshore team members, and work flexible hours as required to achieve a global follow-the-sun support model.
- Identify and implement operational best practices and process improvements in the following functional areas: audit, risk, change, event detection, incident management, problem management, vulnerability management, technical refresh, and tools and application support.
- Encourage and participate in knowledge sharing as necessary.
- Develop and foster a positive relationship with team members, team leads and business partners.
- Develop and update documentation, departmental technical procedures and user guides.
- Be willing to work non-standard business hours on an on-call basis in a 24x7x365 environment.
- Implement best practices for production environments.
Required Qualifications:
- 5+ years of experience supporting or designing complex ETL production environments
- 5+ years of experience with databases such as Oracle, DB2, SQL Server, or Teradata
- 5+ years of experience with Big Data or Hadoop tools such as Spark, Hive, Kafka, and MapR
- 5+ years of Ab Initio/Informatica experience
- 5+ years of UNIX experience
- 5+ years of experience with Autosys or another job scheduling tool
- 3+ years of PAC2000 or ServiceNow experience
- Good verbal, written, and interpersonal communication skills
- A BS/BA degree or higher in science or technology
- Knowledge and understanding of incident management, including gathering impacts and analyzing data
- Strong organizational, multi-tasking, and prioritizing skills
- Advanced problem solving and technical troubleshooting capabilities
- 3+ years of Teradata experience
- 2+ years of business intelligence experience
- 3+ years of Scripting experience in Perl, Shell or Python
- 2+ years of experience with analysis, data science, data modeling, or business analysis
Work Schedule:
- Flexibility to address incidents as needed, 24 hours a day
- Flexibility to frequently be on call beyond normal working hours
- Ability to work on call as assigned