Job Details
Azure Data Engineer
- About 24 days ago
- Fixnhour Escrow Protection
Role: Azure Data Engineer
Experience: 6+ years
Relevant Experience: 4+ years
Duration: 3 months, extendable
Location: Work from home
Skills: Databricks, SQL, PySpark
Databricks certification (good to have)
6+ years in Databricks and Azure cloud technologies (must be a core developer)
Azure Databricks and Azure Data Factory Expertise:
- Demonstrate proficiency in designing, implementing, and optimizing data workflows using Azure Databricks and Azure Data Factory.
- Configure and manage data pipelines within the Azure cloud environment (a sketch of a typical workflow step follows).
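As a rough illustration only: a minimal PySpark sketch of a Databricks step that reads files landed by an Azure Data Factory copy activity and writes curated Delta output. The storage account, container names, and paths are hypothetical placeholders, and Delta support is assumed from the Databricks runtime.

```python
from pyspark.sql import SparkSession

# On Databricks a SparkSession is provided; getOrCreate() reuses it.
spark = SparkSession.builder.appName("adf-databricks-sketch").getOrCreate()

# Hypothetical ADLS Gen2 locations; "mystorageacct", "raw", and
# "curated" are placeholder names for illustration.
raw_path = "abfss://raw@mystorageacct.dfs.core.windows.net/sales/"
curated_path = "abfss://curated@mystorageacct.dfs.core.windows.net/sales/"

# Read the CSVs landed by an Azure Data Factory copy activity.
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv(raw_path))

# Persist as Delta for downstream consumers (assumes a runtime where
# the Delta format is available, as on Databricks).
df.write.format("delta").mode("overwrite").save(curated_path)
```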
PySpark Proficiency:
- Possess a strong command of PySpark for data processing and analysis.
- Develop and optimize PySpark code for efficient, scalable data transformations (see the sketch below).
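For context, a typical transformation of the kind this bullet describes: typed columns plus a per-key aggregate. The sample data is invented purely for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-transform-sketch").getOrCreate()

# Small in-memory sample standing in for a real source table.
orders = spark.createDataFrame(
    [("A1", "2024-01-05", 120.0),
     ("A1", "2024-01-07", 80.0),
     ("B2", "2024-01-06", 200.0)],
    ["customer_id", "order_date", "amount"])

# Cast the date column, then aggregate per customer.
summary = (orders
           .withColumn("order_date", F.to_date("order_date"))
           .groupBy("customer_id")
           .agg(F.sum("amount").alias("total_amount"),
                F.count("*").alias("order_count")))

summary.show()
```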
Big Data Experience:
- Showcase hands-on experience working with big data technologies and frameworks.
- Troubleshoot and optimize data processing tasks on large datasets (two standard techniques are sketched below).
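Two standard Spark optimizations relevant here are broadcasting a small dimension table to avoid shuffling the large side of a join, and repartitioning on the grouping key before a wide aggregation. These are general Spark techniques, not anything specific to this posting; the data below is a toy stand-in.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-tuning-sketch").getOrCreate()

# Toy stand-ins: a "large" fact table and a small dimension table.
facts = spark.range(0, 1_000_000).withColumn("country_id", F.col("id") % 3)
dims = spark.createDataFrame(
    [(0, "US"), (1, "IN"), (2, "DE")], ["country_id", "country"])

# Broadcasting the small side avoids shuffling the large table.
joined = facts.join(F.broadcast(dims), "country_id")

# Repartition on the grouping key to balance the wide aggregation.
result = (joined
          .repartition(8, "country")
          .groupBy("country")
          .agg(F.count("*").alias("row_count")))

result.show()
```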
Data Pipeline Development:
- Design, implement, and maintain end-to-end data pipelines for various data sources and destinations.
- Ensure data quality, integrity, and reliability throughout the entire pipeline (a minimal quality gate is sketched below).
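One common way to enforce the quality requirement inside a PySpark pipeline is to split valid rows from rejects before loading, so bad records are quarantined rather than silently propagated. A minimal sketch; the sample rows and output paths are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("quality-gate-sketch").getOrCreate()

# Invented sample data; one row violates the "email required" rule.
customers = spark.createDataFrame(
    [(1, "alice@example.com"), (2, None), (3, "carol@example.com")],
    ["customer_id", "email"])

# Quarantine rows that fail the rule instead of failing the load.
valid = customers.filter(F.col("email").isNotNull())
rejects = customers.filter(F.col("email").isNull())

# Placeholder output paths for the curated and quarantine zones.
valid.write.mode("overwrite").parquet("/tmp/curated/customers")
rejects.write.mode("overwrite").parquet("/tmp/quarantine/customers")
```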
Extraction, Ingestion, and Consumption Frameworks:
- Develop frameworks for efficient data extraction, ingestion, and consumption.
- Apply data-integration best practices to ensure seamless data flow across the organization (one common framework shape is sketched below).
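A common shape for such a framework is metadata-driven ingestion: a registry of sources drives one generic loader, so new feeds are onboarded by adding configuration rather than code. The registry below is a hypothetical in-code stand-in; in practice it often lives in a control table or in ADF pipeline parameters, and the paths are placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingestion-framework-sketch").getOrCreate()

# Hypothetical source registry; real frameworks usually keep this in
# a control table or pipeline parameters rather than in code.
SOURCES = [
    {"name": "customers", "path": "/tmp/landing/customers", "format": "json"},
    {"name": "orders", "path": "/tmp/landing/orders", "format": "parquet"},
]

def ingest(source: dict) -> None:
    """Read one registered source and persist it to the curated zone."""
    df = spark.read.format(source["format"]).load(source["path"])
    df.write.mode("overwrite").parquet(f"/tmp/curated/{source['name']}")

for src in SOURCES:
    ingest(src)
```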
Collaboration and Communication:
- Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions.
- Communicate effectively with stakeholders to gather and clarify data-related requirements.
SKILLS
- Python