Job Title: Data Architect
Location: Atlanta, GA
Duration: 12 Months
Job Description:
The Department of Early Care & Learning (DECAL) is seeking an experienced Data Architect to design and implement enterprise data solutions using Microsoft Fabric and Azure Databricks for integration with state-level systems. This role will focus on creating scalable data architecture that enables seamless data flow between IES Gateway and our analytics platform. The ideal candidate will have deep expertise in modern data architecture, with specific experience in Microsoft's data platform and Delta Lake architecture.
Work Location & Attendance Requirements:
• Must be physically located in Georgia
• On-site: Tuesday to Thursday, per manager's discretion
• Mandatory in-person meetings:
   o All Hands
   o Enterprise Applications
   o On-site meetings
   o DECAL All Staff
• Work arrangements subject to management's decision
Key Responsibilities:
Data Architecture:
- Design end-to-end data architecture leveraging Microsoft Fabric's capabilities.
- Design data flows within the Microsoft Fabric environment.
- Implement OneLake storage strategies.
- Configure Synapse Analytics workspaces.
- Establish Power BI integration patterns.
Integration Design:
- Architect data integration patterns between IES Gateway and the analytics platform using Azure Databricks and Microsoft Fabric.
- Design Delta Lake architecture for IES Gateway data.
- Implement medallion architecture (Bronze/Silver/Gold layers).
- Create real-time data ingestion patterns.
- Establish data quality frameworks.
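The medallion (Bronze/Silver/Gold) pattern named above can be sketched in a few lines. This is a minimal pure-Python illustration of the layering idea only; in the actual role each layer would be a Delta Lake table processed by Spark on Azure Databricks, and the record fields (`grant_id`, `county`, `award`) are invented for the example.

```python
def to_bronze(raw_rows):
    """Bronze: land raw records as-is, tagging each with its source."""
    return [dict(row, _source="ies_gateway") for row in raw_rows]

def to_silver(bronze_rows):
    """Silver: cleanse and conform - drop rows missing a key, normalize text."""
    return [
        {**row, "county": row["county"].strip().title()}
        for row in bronze_rows
        if row.get("grant_id") is not None
    ]

def to_gold(silver_rows):
    """Gold: aggregate to a reporting-ready shape (total award per county)."""
    totals = {}
    for row in silver_rows:
        totals[row["county"]] = totals.get(row["county"], 0) + row["award"]
    return totals

raw = [
    {"grant_id": 1, "county": " fulton ", "award": 1000},
    {"grant_id": 2, "county": "Fulton", "award": 500},
    {"grant_id": None, "county": "DeKalb", "award": 250},  # rejected in Silver
]
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {'Fulton': 1500}
```

The data-quality framework lives at the Silver boundary: records that fail validation never reach the curated layers.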
Lakehouse Architecture:
- Implement modern data lakehouse architecture using Delta Lake, ensuring data reliability and performance.
Data Governance:
- Establish data governance frameworks incorporating Microsoft Purview for data quality, lineage, and compliance.
- Implement row-level security.
- Configure Microsoft Purview policies.
- Establish data masking for sensitive information.
- Design audit logging mechanisms.
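As a sketch of the masking requirement above: in the target environment this would be enforced through Purview policies and dynamic data masking rather than application code, and the field names (`ssn`, `dob`) are illustrative only.

```python
MASKED_FIELDS = {"ssn", "dob"}

def mask_record(record, visible_suffix=4):
    """Replace sensitive string values with 'X', keeping the last few characters."""
    masked = {}
    for key, value in record.items():
        if key in MASKED_FIELDS and isinstance(value, str):
            keep = value[-visible_suffix:]
            masked[key] = "X" * (len(value) - len(keep)) + keep
        else:
            masked[key] = value
    return masked

print(mask_record({"name": "A. Smith", "ssn": "123-45-6789"}))
# {'name': 'A. Smith', 'ssn': 'XXXXXXX6789'}
```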
Pipeline Development:
- Design scalable data pipelines using Azure Databricks for ETL/ELT processes and real-time data integration.
Performance Optimization:
- Implement performance tuning strategies for large-scale data processing and analytics workloads.
- Optimize Spark configurations.
- Implement partitioning strategies.
- Design caching mechanisms.
- Establish monitoring frameworks.
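The partitioning strategy mentioned above amounts to routing records into a fixed number of buckets by key, the idea behind Spark's `repartition()`/`partitionBy()`, so related rows co-locate and large scans can prune. A plain-Python sketch, with the bucket count and key field chosen arbitrarily for illustration:

```python
import zlib

def partition_for(key, num_partitions=8):
    """Deterministic bucket for a key (crc32 is stable across runs, unlike hash())."""
    return zlib.crc32(str(key).encode()) % num_partitions

def partition_records(records, key_field, num_partitions=8):
    buckets = {i: [] for i in range(num_partitions)}
    for rec in records:
        buckets[partition_for(rec[key_field], num_partitions)].append(rec)
    return buckets

records = [{"grant_id": i, "award": i * 10} for i in range(100)]
buckets = partition_records(records, "grant_id")
assert sum(len(b) for b in buckets.values()) == 100  # every record lands somewhere
```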
Security Framework:
- Design and implement security patterns aligned with federal and state requirements for sensitive data handling.
Education:
Bachelor's degree in Computer Science or a related field.
Experience:
- 6+ years of experience in data architecture and engineering.
- 2+ years hands-on experience with Azure Databricks and Spark.
- Recent experience with Microsoft Fabric platform.
Technical Skills:
Microsoft Fabric Expertise:
- Data Integration: Combining and cleansing data from various sources.
- Data Pipeline Management: Creating, orchestrating, and troubleshooting data pipelines.
- Analytics Reporting: Building and delivering detailed reports and dashboards to derive meaningful insights from large datasets.
- Data Visualization Techniques: Representing data graphically in impactful and informative ways.
- Optimization and Security: Optimizing queries, improving performance, and securing data.
Azure Databricks Experience:
- Apache Spark Proficiency: Utilizing Spark for large-scale data processing and analytics.
- Data Engineering: Building and managing data pipelines, including ETL (Extract, Transform, Load) processes.
- Delta Lake: Implementing Delta Lake for data versioning, ACID transactions, and schema enforcement.
- Data Analysis and Visualization: Using Databricks notebooks for exploratory data analysis (EDA) and creating visualizations.
- Cluster Management: Configuring and managing Databricks clusters for optimized performance (e.g., autoscaling and automatic termination).
- Integration with Azure Services: Integrating Databricks with other Azure services like Azure Data Lake, Azure SQL Database, and Azure Synapse Analytics.
- Machine Learning: Developing and deploying machine learning models using Databricks MLflow and other tools.
- Data Governance: Implementing data governance practices using Unity Catalog and Microsoft Purview.
Programming & Query Languages:
- SQL: Proficiency in SQL for querying and managing databases, including SELECT statements, JOINs, subqueries, and window functions.
- Python: Using Python for data manipulation, analysis, and scripting, including libraries such as Pandas, NumPy, and PySpark.
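A small self-contained example of the SQL skill set named above: a window function ranking awards within each county, run against an in-memory SQLite database (the table and column names are invented for illustration; the same query runs on Fabric or Databricks SQL endpoints).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE grants (grant_id INTEGER, county TEXT, award INTEGER);
    INSERT INTO grants VALUES
        (1, 'Fulton', 1000), (2, 'Fulton', 500), (3, 'DeKalb', 750);
""")

# RANK() restarts per county (PARTITION BY) and orders by award size.
rows = conn.execute("""
    SELECT county, grant_id, award,
           RANK() OVER (PARTITION BY county ORDER BY award DESC) AS rnk
    FROM grants
    ORDER BY county, rnk
""").fetchall()

for county, grant_id, award, rnk in rows:
    print(county, grant_id, award, rnk)
# DeKalb 3 750 1
# Fulton 1 1000 1
# Fulton 2 500 2
```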
Data Modeling:
- Dimensional modeling
- Real-time data modeling patterns
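To illustrate the dimensional-modeling requirement: a minimal star schema with one fact table joined to two dimensions, again in in-memory SQLite. The schema and data are hypothetical, not taken from any DECAL system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_provider (provider_key INTEGER PRIMARY KEY, name TEXT, county TEXT);
    CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE fact_payment (
        provider_key INTEGER REFERENCES dim_provider(provider_key),
        date_key INTEGER REFERENCES dim_date(date_key),
        amount INTEGER
    );
    INSERT INTO dim_provider VALUES (1, 'ABC Learning', 'Fulton'), (2, 'Kid Care', 'DeKalb');
    INSERT INTO dim_date VALUES (20240101, 2024, 1), (20240201, 2024, 2);
    INSERT INTO fact_payment VALUES (1, 20240101, 900), (1, 20240201, 100), (2, 20240101, 400);
""")

# Typical star-schema query: slice the fact table by dimension attributes.
rows = conn.execute("""
    SELECT p.county, d.year, SUM(f.amount)
    FROM fact_payment f
    JOIN dim_provider p ON p.provider_key = f.provider_key
    JOIN dim_date d ON d.date_key = f.date_key
    GROUP BY p.county, d.year
    ORDER BY p.county
""").fetchall()
print(rows)  # [('DeKalb', 2024, 400), ('Fulton', 2024, 1000)]
```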
Soft Skills:
- Strong analytical and problem-solving abilities
- Excellent communication skills for technical and non-technical audiences
- Experience working with government stakeholders
Preferred Experience:
- Azure DevOps
- Infrastructure as Code (Terraform)
- CI/CD for data pipelines
- Data mesh architecture
Certifications (preferred):
- Microsoft Azure Data Engineer Associate
- Databricks Data Engineer Professional
- Microsoft Fabric certifications (as they become available)
Project-Specific Requirements:
- Experience designing data architectures for grant management systems
- Knowledge of federal/state compliance requirements for data handling
- Understanding of financial data processing requirements
- Experience with real-time integration patterns
This position requires strong expertise in modern data architecture with specific focus on Microsoft's data platform. The successful candidate will play a crucial role in designing and implementing scalable data solutions that enable efficient data processing and analytics for state-level grant management and reporting systems.