Term: Direct-Hire
Location: Fully Remote
Description/Responsibilities
- Work as part of a team to develop cloud data and analytics solutions, adoption templates, guidance, internal consulting, and development patterns
- Participate in development of projects that involve cloud data warehouses, data as a service, business intelligence solutions and cloud functions
- Writing and reviewing code in C#, T-SQL, and Python involving complex software engineering concepts (locking, parallelism, object-oriented design, eventual consistency, and encryption)
- Developing, reviewing, and consulting on modern solutions using the Azure Stack (ADLS, Azure Synapse, CosmosDB, Azure Functions, Azure SQL, Azure Event Hub, JSON/AVRO/Parquet formats)
- Hands-on experience in Azure as a software engineer/developer using C# and cloud databases (CosmosDB, Azure Synapse, Azure SQL Databases) – Mandatory
- Good understanding of Azure PaaS development services and ability to build data-driven templates and guide business IT teams in assessing the performance and scalability of their solutions
- Understanding of advanced software development techniques, design practices, retry logic implementation, encryption, file formats, ADLS, test frameworks, ADO.NET and Entity Framework
- Solid understanding of Azure Functions and various triggers
- Good understanding of GitHub, Azure DevOps, Build Pipelines, Deployment Pipelines, and Microsoft Visual Studio tools
- Experience with Azure Event Hub and other messaging platforms (such as Kafka)
- Experience working in an Agile (Scrum) environment as part of a scrum team
- Programming experience in C#, T-SQL
- Hands-on experience with the Azure stack, with good knowledge of Azure Key Vault and Azure Active Directory and an overall understanding of DevOps and Azure Data Lake
- Hands-on experience in Azure Databricks, Azure Data Factory, Azure Synapse – Mandatory
- Good understanding of Azure Databricks platform and ability to build data analytics solutions to support the required performance & scale
- Experience working with Spark clusters and using the Spark engine effectively for data transformations, creating notebooks with PySpark/SQL scripts
- Ability to read and write data to Azure Data Lake or Blob Storage from Databricks
- Good understanding of and experience with Delta Lake (Apache Delta) in Azure Databricks, building data pipelines for file-, database-, and API-based ingestion using the available activities, as well as Data Flows for data transformation
- Good knowledge of global parameters, triggers, linked services, datasets, and ARM templates
- Experience creating tables and building dimensional models
- Experience as Data Engineer in Azure Big Data Environment
- Expertise in ETL tools (e.g., SSIS)
- Expertise in implementing data warehousing solutions
- Programming experience in Python, T-SQL
- Hands-on experience with the Azure stack, with good knowledge of Azure Key Vault and Azure Active Directory and a basic understanding of DevOps and SSIS
- Experience with performance tuning and optimizing long-running jobs
- Demonstrated analytical and problem-solving skills, particularly those that apply to a big data environment
- Good understanding of Modern Data Warehouse and Data Warehousing concepts
- Proficient with a source code control system (e.g., Azure DevOps, GitHub)
- Excellent written and verbal skills (English)
- Flexible; able to work as a team player or as an individual contributor