Last Updated on December 13, 2020
Develop solutions that use blob storage is part of the Develop for Azure storage topic area, which carries a weight of 10-15% in the exam. This training post is designed to give readers a better understanding of the topic.
Disclaimer: This article alone will not get you through the Microsoft Azure AZ-204 exam, but it provides good insight into the areas within these topics. Labs and hands-on work are essential to passing most Microsoft Azure exams.
Develop solutions that use blob storage:
move items in blob storage between storage accounts or containers
Azure Storage overview
In this section, we will be looking at an overview of all the types of Azure storage available:
Files
- Fully managed file shares in the cloud
- SMB and REST access
- “Lift and shift” legacy apps
- Sync with on-premises
Blobs
- Highly scalable, REST-based cloud object store
- Block blobs: Sequential file I/O
- Page blobs: Random-write pattern data
- Append blobs
Tables
- Massive auto-scaling NoSQL store
- Dynamic scaling based on load
Queues
- Reliable queues at scale for cloud services
- Decouple and scale components
- Message visibility
An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.
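As a small sketch of that unique namespace, each service in a storage account is reachable at a predictable public endpoint derived from the account name. The account name `mystorageacct` below is hypothetical.

```python
# Each Azure storage account exposes its services under a unique namespace:
# https://<account-name>.<service>.core.windows.net

def service_endpoints(account_name: str) -> dict:
    """Build the default public endpoint for each Azure Storage service."""
    return {
        service: f"https://{account_name}.{service}.core.windows.net"
        for service in ("blob", "file", "queue", "table")
    }

endpoints = service_endpoints("mystorageacct")
print(endpoints["blob"])  # https://mystorageacct.blob.core.windows.net
```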
Azure Blob storage
Object storage solution in the cloud
Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data.
Note: Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.
Blob storage is designed for:
- Serving images or documents directly to a browser
- Storing files for distributed access
- Streaming video and audio
- Writing to log files
- Storing data for backup and restore disaster recovery, and archiving
- Storing data for analysis by an on-premises or Azure-hosted service
Accessible via an HTTP/HTTPS API
With HTTP/HTTPS, objects in blob storage can be accessed globally. Blobs can be accessed via the Azure Storage REST API, Azure PowerShell, the Azure CLI, or an Azure Storage client library.
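To make the REST access concrete, here is a sketch of how a raw `Get Blob` request could be assembled (without sending it). The account, container, and blob names are hypothetical, and a real request would also need an `Authorization` header or a SAS token appended to the URL.

```python
# Sketch: assemble the pieces of a Get Blob REST request.
# A real call also requires authorization (shared key signature or SAS token).

API_VERSION = "2019-12-12"  # one of the Blob service REST API versions

def build_get_blob_request(account: str, container: str, blob: str) -> dict:
    """Return the method, URL, and headers for a Get Blob REST call."""
    url = f"https://{account}.blob.core.windows.net/{container}/{blob}"
    headers = {"x-ms-version": API_VERSION}
    return {"method": "GET", "url": url, "headers": headers}

req = build_get_blob_request("mystorageacct", "images", "logo.png")
print(req["url"])  # https://mystorageacct.blob.core.windows.net/images/logo.png
```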
Note: Client libraries are available for different programming languages, including .NET, Java, Python, JavaScript/Node.js, Go, PHP, and Ruby.
Azure Blob storage resource hierarchy

Storage account: This refers to an account that you can use to access Azure Storage. In terms of hierarchy, it’s the parent account of the container and blob resources.
Container: This refers to a collection of objects that are grouped together and accessed by using the same base Uniform Resource Identifier (URI).
Blob: This refers to an object, typically a media file or a disk, that is stored in the container.
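All three levels of this hierarchy are visible in every blob URL, so a small parser makes the structure concrete. The URL below is a hypothetical example.

```python
from urllib.parse import urlparse

# The hierarchy appears directly in the URI:
# https://<storage-account>.blob.core.windows.net/<container>/<blob>

def parse_blob_url(url: str) -> dict:
    """Split a blob URL into its storage account, container, and blob name."""
    parsed = urlparse(url)
    account = parsed.netloc.split(".")[0]                 # storage account name
    container, _, blob = parsed.path.lstrip("/").partition("/")
    return {"account": account, "container": container, "blob": blob}

parts = parse_blob_url("https://mystorageacct.blob.core.windows.net/media/video.mp4")
print(parts)  # {'account': 'mystorageacct', 'container': 'media', 'blob': 'video.mp4'}
```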
Blob types

Block blobs
- Composed of blocks of data
- Ideal for storing text and binary files; each block can be up to 100 MB
- Blocks of a large blob can be uploaded in parallel, then committed with a single write operation
- A single block blob can include up to 50,000 blocks
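The block and blob limits above imply some simple upload arithmetic. This sketch plans a block upload against those limits; the default 4 MiB block size is an illustrative choice, not a fixed requirement.

```python
import math

MAX_BLOCK_SIZE = 100 * 1024 * 1024   # 100 MB per block, per the limits above
MAX_BLOCK_COUNT = 50_000             # blocks per block blob

def plan_block_upload(blob_size: int, block_size: int = 4 * 1024 * 1024) -> int:
    """Return how many blocks a blob of blob_size bytes needs at block_size."""
    if block_size > MAX_BLOCK_SIZE:
        raise ValueError("block size exceeds the 100 MB per-block limit")
    blocks = math.ceil(blob_size / block_size)
    if blocks > MAX_BLOCK_COUNT:
        raise ValueError("blob needs more than 50,000 blocks; use larger blocks")
    return blocks

# A 1 GiB blob uploaded in 4 MiB blocks needs 256 blocks.
print(plan_block_upload(1 * 1024 * 1024 * 1024))  # 256
```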
Append blobs
Append blobs include the following characteristics:
- They are composed of blocks
- They are optimized for append operations
- They are ideal for performant logging
Page blobs
- Composed of 512-byte pages
- Similar to hard disk storage
- Ideal for virtual hard disks
- Pages are created by initializing the page blob and specifying its size
- Writes must stay within 512-byte page boundaries
- Writes to page blobs commit immediately
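The 512-byte page boundary rule can be checked with simple modular arithmetic, as this sketch shows.

```python
PAGE_SIZE = 512  # page blobs are composed of 512-byte pages

def is_valid_page_write(offset: int, length: int) -> bool:
    """A page-blob write must start and end on 512-byte page boundaries."""
    return offset % PAGE_SIZE == 0 and length % PAGE_SIZE == 0 and length > 0

print(is_valid_page_write(0, 512))    # True
print(is_valid_page_write(100, 512))  # False: offset is not page-aligned
print(is_valid_page_write(512, 300))  # False: length is not a whole page
```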
Blob events

Azure Storage events allow applications to react to the creation and deletion of blobs using modern serverless architectures, without the need for complicated code or expensive and inefficient polling services. Instead, events are pushed through Azure Event Grid to subscribers such as Azure Functions, Azure Logic Apps, or even your own custom HTTP listener, and you only pay for what you use.
Blob storage events are reliably sent to the Event Grid service, which provides reliable delivery to your applications through rich retry policies and dead-letter delivery.
Common Blob storage event scenarios include image or video processing, search indexing, or any file-oriented workflow. Asynchronous file uploads are a good fit for events. When changes are infrequent, but your scenario requires immediate responsiveness, event-based architecture can be especially efficient.
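As a sketch of such a subscriber, the handler below routes a blob event to a file-oriented workflow step. The event is a hand-written, abbreviated sample modelled on the `Microsoft.Storage.BlobCreated` event shape; a real subscriber would receive the full Event Grid payload.

```python
# Abbreviated, hand-written sample of a Blob storage event as delivered by
# Azure Event Grid (real events carry more fields, e.g. id and eventTime).
sample_event = {
    "eventType": "Microsoft.Storage.BlobCreated",
    "subject": "/blobServices/default/containers/images/blobs/photo.jpg",
    "data": {"url": "https://mystorageacct.blob.core.windows.net/images/photo.jpg"},
}

def handle_blob_event(event: dict) -> str:
    """Route a storage event to a file-oriented workflow step."""
    blob_url = event["data"]["url"]
    if event["eventType"] == "Microsoft.Storage.BlobCreated":
        return f"index new blob: {blob_url}"       # e.g. trigger search indexing
    if event["eventType"] == "Microsoft.Storage.BlobDeleted":
        return f"remove from index: {blob_url}"
    return "ignored"

print(handle_blob_event(sample_event))
```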
Develop solutions that use blob storage:
set and retrieve properties and metadata
Containers and blobs support custom metadata, represented as HTTP headers. Metadata headers can be set on a request that creates a new container or blob resource, or on a request that explicitly sets metadata on an existing resource.
Metadata headers are name/value pairs. The format for the header is:
x-ms-meta-name:string-value
Beginning with version 2009-09-19, metadata names must adhere to the naming rules for C# identifiers. Metadata names preserve the case with which they were created, but are case-insensitive when set or read. If two or more metadata headers with the same name are submitted for a resource, the Blob service returns status code 400 (Bad Request). The total size of all metadata name/value pairs can be up to 8 kilobytes (KB). Metadata name/value pairs are valid HTTP headers, and so they adhere to all restrictions governing HTTP headers.
Metadata on a blob or container resource can be retrieved or set directly, without returning or altering the content of the resource. Note that metadata values can only be read or written in full; partial updates are not supported. Setting metadata on a resource overwrites any existing metadata values for that resource.
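The header format and naming rules above can be sketched as a small helper that turns a metadata dict into `x-ms-meta-*` request headers, rejecting names that are not valid C# identifiers and enforcing the 8 KB total limit.

```python
import re

# Metadata names must follow C# identifier naming rules; each pair becomes
# an x-ms-meta-<name>: <value> request header.
CSHARP_IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")
MAX_METADATA_BYTES = 8 * 1024  # total size of all name/value pairs

def metadata_headers(metadata: dict) -> dict:
    """Build x-ms-meta-* headers from a name/value dict, validating the rules."""
    headers = {}
    total = 0
    for name, value in metadata.items():
        if not CSHARP_IDENTIFIER.match(name):
            raise ValueError(f"invalid metadata name: {name!r}")
        total += len(name) + len(value)
        headers[f"x-ms-meta-{name}"] = value
    if total > MAX_METADATA_BYTES:
        raise ValueError("metadata exceeds the 8 KB total size limit")
    return headers

print(metadata_headers({"category": "images", "owner": "team_a"}))
```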
Develop solutions that use blob storage:
implement data archiving and retention
Lifecycle management
Rule-based automation for data tiering and retention management:
- Rules run once a day at the storage account level
- Supports:
  - General-purpose v2 storage accounts
  - Blob storage accounts
  - Premium block blob storage accounts (lifecycle management supports only deletion)
- Prefix filters enable targeting of containers or sets of blobs
Develop solutions that use blob storage:
implement hot, cool, and archive storage
Storage tier

Azure Storage offers different storage tiers, which allow you to store Blob storage object data in the most cost-effective manner.
The available tiers include:
Premium storage is a performance tier optimized for mission-critical high-performance applications. This is currently available only for Block blob storage.
The following three access tiers are currently available in the Standard performance tier, for GPv2 account types.
- Hot storage is optimized for storing data that is accessed frequently.
- Cool storage is optimized for storing data that is infrequently accessed and stored for at least 30 days.
- Archive storage is optimized for storing data that is rarely accessed and stored for at least 180 days, with flexible latency requirements (on the order of hours).
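The three access tiers map naturally onto access frequency, which this sketch encodes as a simple selection function. The thresholds mirror the minimum recommended storage durations above; a real tiering decision would also weigh retrieval costs and latency.

```python
def choose_access_tier(days_since_last_access: int) -> str:
    """Pick a Standard-tier access tier from how recently the data was used."""
    if days_since_last_access < 30:
        return "Hot"       # frequently accessed data
    if days_since_last_access < 180:
        return "Cool"      # infrequently accessed, stored at least 30 days
    return "Archive"       # rarely accessed, stored at least 180 days

print(choose_access_tier(7))    # Hot
print(choose_access_tier(45))   # Cool
print(choose_access_tier(365))  # Archive
```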
Storage tier pricing



Example of lifecycle management flows

Consider a scenario where data gets frequent access during the early stages of the lifecycle but only occasionally after two weeks. Beyond the first month, the dataset is rarely accessed.
In this scenario, hot storage is best during the early stages. Cool storage is most appropriate for occasional access. Archive storage is the best tier option after the data ages more than a month.
Policy Example

| Parameter name | Parameter type | Required |
| --- | --- | --- |
| name | String | True |
| enabled | Boolean | False |
| type | An enum value | True |
| definition | An object that defines the lifecycle rule | True |
A policy must include at least one rule. You can define up to 100 rules in a policy.
A rule name can include up to 256 alphanumeric characters, and the name is case-sensitive. It must be unique within a policy.
Each rule definition is made up of a filter set and an action set.
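Putting the rule parameters, filter set, and action set together, this sketch assembles a lifecycle management policy as JSON. The rule name, `logs/` prefix, and day thresholds are hypothetical, chosen to match the scenario above (cool after two weeks, archive after a month).

```python
import json

# Sketch of a lifecycle management policy: one rule with a filter set
# (blob types + prefix) and an action set (tiering by age since modification).
policy = {
    "rules": [
        {
            "name": "ageBasedTiering",        # up to 256 chars, case-sensitive
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["logs/"],  # target a container or blob set
                },
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 14},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 30},
                    }
                },
            },
        }
    ]
}

print(json.dumps(policy, indent=2))
```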
More topics on Develop for Azure storage:
Develop solutions that use Cosmos DB storage
Microsoft Azure AZ-204 exam topics:
If you have covered the current topics in Develop for Azure storage then you can have a look at the other topic areas:
Develop Azure compute solutions (25-30%)
Implement Azure security (15-20%)
Monitor, troubleshoot, and optimize Azure solutions (10-15%)
Connect to and consume Azure services and third-party services (25-30%)
View the full AZ-204 exam content documentation from Microsoft.