12+ Hours of Video Instruction
Any IT professional who works with data inside Microsoft Azure should prepare for and take Microsoft Exam DP-203 to demonstrate their knowledge of Azure data engineering. Azure data engineers help stakeholders understand data through exploration, and they build and maintain secure, compliant data-processing pipelines by using different tools and techniques. These professionals use various Azure data services and languages to store and produce cleansed and enhanced datasets for analysis.
Description
Microsoft MVP and Microsoft Certified Azure Solutions Architect Tim Warner walks you through what to expect on the DP-203 Data Engineering on Microsoft Azure exam. The new Azure certifications align to industry job roles; earning Azure certification both validates your specific Azure skillset and increases your value in today's IT job market.
Azure data engineers help ensure that data pipelines and data stores are high-performing, efficient, organized, and reliable, given a set of business requirements and constraints. They deal with unanticipated issues swiftly and minimize data loss. They also design, implement, monitor, and optimize data platforms to meet data pipeline needs.
This training course covers every Exam DP-203 objective in a friendly and logical way.
Skill Level
What You Will Learn
After completing this video, you will be able to:
Who Should Take This Course
Course Requirements
More about Microsoft Press
Microsoft Press creates IT books and references for all skill levels across the range of Microsoft technologies.
https://www.microsoftpressstore.com/
About Pearson Video Training
Pearson publishes expert-led video tutorials covering a wide selection of technology topics designed to teach you the skills you need to succeed. These professional and personal technology videos feature world-leading author instructors published by your trusted technology brands: Addison-Wesley, Cisco Press, Pearson IT Certification, Sams, and Que. Topics include IT Certification, Network Security, Cisco Technology, Programming, Web Development, Mobile Development, and more. Learn more about Pearson Video training at http://www.informit.com/video.
Introduction
Module 1: Design & Implement Data Storage
Lesson 1: Design a Data Storage Structure
1.1 Design an Azure Data Lake solution
1.2 Recommend file types for storage
1.3 Recommend file types for analytical queries
1.4 Design for efficient querying
Lesson 2: Design for Data Pruning
2.1 Design a folder structure that represents levels of data transformation
2.2 Design a distribution strategy
2.3 Design a data archiving solution
Lesson 3: Design a Partition Strategy
3.1 Design a partition strategy for files
3.2 Design a partition strategy for analytical workloads
3.3 Design a partition strategy for efficiency/performance
3.4 Design a partition strategy for Azure Synapse Analytics
3.5 Identify when partitioning is needed in Azure Data Lake Storage Gen2
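Lesson 3's partition designs map directly onto a PySpark write. A minimal sketch, assuming a running Spark session; the storage account, container, and column names are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("partition-demo").getOrCreate()

    # Hypothetical source path in an ADLS Gen2 container.
    df = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/sales/")

    # Derive year/month columns and write one folder per partition, a common
    # layout that lets analytical queries prune whole directories in ADLS Gen2.
    (df.withColumn("year", F.year("order_date"))
       .withColumn("month", F.month("order_date"))
       .write.mode("overwrite")
       .partitionBy("year", "month")
       .parquet("abfss://curated@mydatalake.dfs.core.windows.net/sales/"))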
Lesson 4: Design the Serving Layer
4.1 Design star schemas
4.2 Design slowly changing dimensions
4.3 Design a dimensional hierarchy
4.4 Design a solution for temporal data
4.5 Design for incremental loading
4.6 Design analytical stores
4.7 Design metastores in Azure Synapse Analytics & Azure Databricks
Lesson 5: Implement Physical Data Storage Structures
5.1 Implement compression
5.2 Implement partitioning
5.3 Implement sharding
5.4 Implement different table geometries with Azure Synapse Analytics pools
5.5 Implement data redundancy
5.6 Implement distributions
5.7 Implement data archiving
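The distribution and table-geometry objectives in Lesson 5 center on Synapse dedicated SQL pool DDL. A sketch that issues that DDL from Python over pyodbc; the server, database, credentials, and table are hypothetical:

    import pyodbc

    # Hypothetical connection string to a Synapse dedicated SQL pool.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=myworkspace.sql.azuresynapse.net;DATABASE=mypool;"
        "UID=sqladmin;PWD=<password>"
    )

    # Hash-distribute the fact table on its most common join key; hash,
    # round-robin, and replicated are the Synapse table geometries this
    # objective covers.
    conn.cursor().execute("""
        CREATE TABLE dbo.FactSales
        (
            SaleKey     BIGINT NOT NULL,
            CustomerKey INT    NOT NULL,
            Amount      DECIMAL(18, 2)
        )
        WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX);
    """)
    conn.commit()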
Lesson 6: Implement Logical Data Structures
6.1 Build a temporal data solution
6.2 Build a slowly changing dimension
6.3 Build a logical folder structure
6.4 Build external tables
6.5 Implement file & folder structures for efficient querying and data pruning
Lesson 7: Implement the Serving Layer
7.1 Deliver data in a relational star schema
7.2 Deliver data in Parquet files
7.3 Maintain metadata
7.4 Implement a dimensional hierarchy
Module 2: Design & Develop Data Processing
Lesson 8: Ingest & Transform Data
8.1 Transform data by using Apache Spark
8.2 Transform data by using Transact-SQL
8.3 Transform data by using Data Factory
8.4 Transform data by using Azure Synapse Pipelines
8.5 Transform data by using Stream Analytics
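As a taste of objective 8.1, here is a minimal PySpark transformation, assuming a Spark session and a hypothetical orders dataset:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("transform-demo").getOrCreate()

    # Hypothetical raw orders dataset.
    orders = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/orders/")

    # A typical transform: filter bad rows, derive a column, aggregate.
    daily_revenue = (orders
        .where(F.col("status") == "complete")
        .withColumn("order_date", F.to_date("order_timestamp"))
        .groupBy("order_date")
        .agg(F.sum("amount").alias("revenue")))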
Lesson 9: Work with Transformed Data
9.1 Cleanse data
9.2 Split data
9.3 Shred JSON
9.4 Encode & decode data
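Shredding JSON (objective 9.3) usually means parsing a string column against a schema and exploding nested arrays into rows. A sketch assuming a hypothetical DataFrame raw_df with a JSON string column named payload:

    from pyspark.sql import functions as F
    from pyspark.sql.types import ArrayType, DoubleType, StringType, StructField, StructType

    # Hypothetical schema for the JSON documents held in the payload column.
    schema = StructType([
        StructField("device_id", StringType()),
        StructField("readings", ArrayType(DoubleType())),
    ])

    # Parse the JSON string into a struct, then explode the nested array so
    # each reading becomes its own row.
    shredded = (raw_df
        .withColumn("doc", F.from_json("payload", schema))
        .select("doc.device_id", F.explode("doc.readings").alias("reading")))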
Lesson 10: Troubleshoot Data Transformations
10.1 Configure error handling for the transformation
10.2 Normalize & denormalize values
10.3 Transform data by using Scala
10.4 Perform data exploratory analysis
Lesson 11: Design a Batch Processing Solution
11.1 Develop batch processing solutions by using Data Factory, Data Lake, Spark, Azure Synapse Pipelines, PolyBase, & Azure Databricks
11.2 Create data pipelines
11.3 Design & implement incremental data loads
11.4 Design & develop slowly changing dimensions
11.5 Handle security & compliance requirements
11.6 Scale resources
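Objective 11.3's incremental loads are often built on a high-watermark pattern. A simplified sketch, assuming a Spark session; the paths, columns, and hard-coded watermark are hypothetical stand-ins for values normally kept in a control table:

    from pyspark.sql import functions as F

    # Last successful watermark, normally read from a control table.
    last_watermark = "2024-01-01T00:00:00"

    source = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/customers/")
    changed = source.where(F.col("modified_at") > F.lit(last_watermark))

    # Append only the changed rows, then capture the new watermark for the next run.
    changed.write.mode("append").parquet(
        "abfss://curated@mydatalake.dfs.core.windows.net/customers/")
    new_watermark = changed.agg(F.max("modified_at")).first()[0]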
Lesson 12: Develop a Batch Processing Solution
12.1 Configure the batch size
12.2 Design & create tests for data pipelines
12.3 Integrate Jupyter/Python notebooks into a data pipeline
12.4 Handle duplicate data
12.5 Handle missing data
12.6 Handle late-arriving data
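Handling duplicate and missing data (12.4 and 12.5) comes down to a few DataFrame calls. A sketch assuming a hypothetical DataFrame df with order_id, quantity, and customer_id columns:

    # One cleansing pass over a hypothetical orders DataFrame.
    clean = (df
        .dropDuplicates(["order_id"])        # keep one row per business key
        .na.fill({"quantity": 0})            # default missing quantities to zero
        .na.drop(subset=["customer_id"]))    # rows without a customer are unusable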
Lesson 13: Configure a Batch Processing Solution
13.1 Upsert data
13.2 Regress to a previous state
13.3 Design & configure exception handling
13.4 Configure batch retention
13.5 Revisit batch processing solution design
13.6 Debug Spark jobs by using the Spark UI
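Upserting (13.1) is commonly implemented with a Delta Lake MERGE. A sketch that assumes the delta-spark package, a Delta table at a hypothetical path, and a hypothetical updates DataFrame of changed rows:

    from delta.tables import DeltaTable

    target = DeltaTable.forPath(
        spark, "abfss://curated@mydatalake.dfs.core.windows.net/customers/")

    # Update rows that match on the business key; insert everything else.
    (target.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())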
Lesson 14: Design a Stream Processing Solution
14.1 Develop a stream processing solution by using Stream Analytics, Azure Databricks, & Azure Event Hubs
14.2 Process data by using Spark structured streaming
14.3 Monitor for performance & functional regressions
14.4 Design & create windowed aggregates
14.5 Handle schema drift
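A windowed aggregate with a watermark (the territory of 14.4 and 14.5) looks like this in Spark structured streaming. The sketch reads Event Hubs through its Kafka-compatible endpoint; the namespace and topic are hypothetical, and the SASL authentication options are omitted for brevity:

    from pyspark.sql import functions as F

    events = (spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "myeventhubns.servicebus.windows.net:9093")
        .option("subscribe", "telemetry")
        .load())

    # Five-minute tumbling windows; the watermark bounds how late data may
    # arrive, and the checkpoint lets the query recover after interruptions.
    counts = (events
        .withWatermark("timestamp", "10 minutes")
        .groupBy(F.window("timestamp", "5 minutes"))
        .count())

    query = (counts.writeStream.outputMode("update")
        .option("checkpointLocation", "abfss://chk@mydatalake.dfs.core.windows.net/telemetry/")
        .format("console")
        .start())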
Lesson 15: Process Data in a Stream Processing Solution
15.1 Process time series data
15.2 Process across partitions
15.3 Process within one partition
15.4 Configure checkpoints/watermarking during processing
15.5 Scale resources
15.6 Design & create tests for data pipelines
15.7 Optimize pipelines for analytical or transactional purposes
Lesson 16: Troubleshoot a Stream Processing Solution
16.1 Handle interruptions
16.2 Design & configure exception handling
16.3 Upsert data
16.4 Replay archived stream data
16.5 Design a stream processing solution
Lesson 17: Manage Batches and Pipelines
17.1 Trigger batches
17.2 Handle failed batch loads
17.3 Validate batch loads
17.4 Manage data pipelines in Data Factory/Synapse Pipelines
17.5 Schedule data pipelines in Data Factory/Synapse Pipelines
17.6 Implement version control for pipeline artifacts
17.7 Manage Spark jobs in a pipeline
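Triggering a pipeline run on demand (17.1) can be done with the azure-mgmt-datafactory SDK. A sketch with hypothetical subscription, resource group, factory, and pipeline names:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Kick off an on-demand pipeline run and capture its run ID for monitoring.
    run = client.pipelines.create_run(
        resource_group_name="rg-data",
        factory_name="adf-prod",
        pipeline_name="CopySalesDaily",
        parameters={"windowStart": "2024-01-01"},
    )
    print(run.run_id)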
Module 3: Design & Implement Data Security
Lesson 18: Design Security for Data Policies
18.1 Design data encryption for data at rest and in transit
18.2 Design a data auditing strategy
18.3 Design a data masking strategy
18.4 Design for data privacy
Lesson 19: Design Security for Data Standards
19.1 Design a data retention policy
19.2 Design to purge data based on business requirements
19.3 Design Azure RBAC & POSIX-like ACL for Data Lake Storage Gen2
19.4 Design row-level & column-level security
Lesson 20: Implement Data Security Protection
20.1 Implement data masking
20.2 Encrypt data at rest & in motion
20.3 Implement row-level & column-level security
20.4 Implement Azure RBAC
20.5 Implement POSIX-like ACLs for Data Lake Storage Gen2
20.6 Implement a data retention policy
20.7 Implement a data auditing strategy
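Setting POSIX-like ACLs on Data Lake Storage Gen2 (20.5) is exposed through the azure-storage-file-datalake SDK. A sketch with a hypothetical account, file system, directory, and Azure AD object ID:

    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(
        account_url="https://mydatalake.dfs.core.windows.net",
        credential=DefaultAzureCredential(),
    )
    directory = service.get_file_system_client("curated").get_directory_client("sales")

    # Owner keeps rwx; one Azure AD object ID gets read/execute on the folder.
    directory.set_access_control(
        acl="user::rwx,group::r-x,other::---,"
            "user:00000000-0000-0000-0000-000000000000:r-x")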
Lesson 21: Implement Data Security Access
21.1 Manage identities, keys, & secrets across different data platforms
21.2 Implement secure endpoints (private & public)
21.3 Implement resource tokens in Azure Databricks
21.4 Load a DataFrame with sensitive information
21.5 Write encrypted data to tables or Parquet files
21.6 Manage sensitive information
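For managing keys and secrets (21.1), the azure-keyvault-secrets SDK keeps credentials out of notebooks and pipeline code. A sketch with a hypothetical vault and secret name:

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # Resolve the secret at runtime instead of embedding it in code.
    client = SecretClient(
        vault_url="https://kv-data-prod.vault.azure.net",
        credential=DefaultAzureCredential(),
    )
    conn_str = client.get_secret("storage-connection-string").value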
Module 4: Monitor & Optimize Data Storage & Data Processing
Lesson 22: Monitor Data Storage
22.1 Implement logging used by Azure Monitor
22.2 Configure monitoring services
22.3 Measure performance of data movement
22.4 Monitor & update statistics about data across a system
22.5 Monitor data pipeline performance
22.6 Measure query performance
Lesson 23: Monitor Data Processing
23.1 Monitor cluster performance
23.2 Understand custom logging options
23.3 Schedule & monitor pipeline tests
23.4 Interpret Azure Monitor metrics & logs
23.5 Interpret a Spark Directed Acyclic Graph (DAG)
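Interpreting Azure Monitor logs (23.4) often starts with a KQL query issued through the azure-monitor-query SDK. A sketch with a hypothetical workspace ID; the ADFPipelineRun table assumes Data Factory diagnostics are routed to the workspace:

    from datetime import timedelta
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())

    # Pull recent failed Data Factory pipeline runs from the last day.
    response = client.query_workspace(
        workspace_id="<workspace-id>",
        query="ADFPipelineRun | where Status == 'Failed' | take 20",
        timespan=timedelta(days=1),
    )
    for table in response.tables:
        for row in table.rows:
            print(row)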
Lesson 24: Tune Data Storage
24.1 Compact small files
24.2 Rewrite user-defined functions (UDFs)
24.3 Handle skew in data
24.4 Handle data spill
24.5 Tune shuffle partitions
24.6 Find shuffling in a pipeline
24.7 Optimize resource management
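Compacting small files and tuning shuffle partitions (24.1 and 24.5) are short PySpark operations. A sketch with hypothetical paths and target counts:

    # Right-size shuffle parallelism for the cluster, then rewrite many small
    # files as fewer large ones; coalesce avoids a full shuffle.
    spark.conf.set("spark.sql.shuffle.partitions", "64")

    small_files = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/events/")
    (small_files
        .coalesce(16)
        .write.mode("overwrite")
        .parquet("abfss://curated@mydatalake.dfs.core.windows.net/events/"))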
Lesson 25: Optimize & Troubleshoot Data Processing
25.1 Tune queries by using indexers
25.2 Tune queries by using cache
25.3 Optimize pipelines for analytical or transactional purposes
25.4 Optimize pipeline for descriptive versus analytical workloads
25.5 Troubleshoot failed Spark jobs
25.6 Troubleshoot failed pipeline runs
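Finally, tuning queries with cache (25.2) in Spark: a sketch assuming a hypothetical DataFrame df that several downstream queries reuse:

    # Materialize the cache once, reuse it across queries, release it when done.
    df.cache()
    df.count()
    top_sellers = df.orderBy("revenue", ascending=False).limit(10)
    by_region = df.groupBy("region").count()
    df.unpersist()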