
(Cross-Post) Windows Azure’s Flat Network Storage and 2012 Scalability Targets


Update April 2014 – All storage accounts have been upgraded to Gen 2 hardware, so all storage accounts in production now have the scalability targets shown here. Please refer to the MSDN link for the latest Azure Storage scalability targets.

Earlier this year, we deployed a flat network for Windows Azure across all of our datacenters to create Flat Network Storage (FNS) for Windows Azure Storage. We used a flat network design in order to provide very high bandwidth network connectivity for storage clients. This new network design and resulting bandwidth improvements allows us to support Windows Azure Virtual Machines, where we store VM persistent disks as durable network attached blobs in Windows Azure Storage. Additionally, the new network design enables scenarios such as MapReduce and HPC that can require significant bandwidth between compute and storage.

From the start of Windows Azure, we decided to separate customer VM-based computation from storage, allowing each of them to scale independently, making it easier to provide multi-tenancy, and making it easier to provide isolation. To make this work for the scenarios we need to address, a quantum leap in network scale and throughput was required. This resulted in FNS, where the Windows Azure Networking team (under Albert Greenberg) along with the Windows Azure Storage, Fabric and OS teams made and deployed several hardware and software networking improvements.

The move to new storage hardware and to a high-bandwidth network accounts for the most significant improvements in our second generation (Gen 2) storage compared to our first generation (Gen 1) hardware, as outlined below:

Storage SKU | Storage Node Network Speed | Networking Between Compute and Storage | Load Balancer | Storage Device Used for Journaling
Gen 1 | 1 Gbps | Hierarchical Network | Hardware Load Balancer | Hard Drives
Gen 2 | 10 Gbps | Flat Network | Software Load Balancer | SSDs

The deployment of our Gen 2 SKU, along with software improvements, provides significant bandwidth between compute and storage using a flat network topology. The specific implementation of our flat network for Windows Azure is referred to as the “Quantum 10” (Q10) network architecture. Q10 provides a fully non-blocking, fully meshed 10 Gbps network with an aggregate backplane in excess of 50 Tbps of bandwidth for each Windows Azure datacenter. Another major improvement in reliability and throughput is the move from a hardware load balancer to a software load balancer. Finally, the storage architecture and design described here has been tuned to fully leverage the new Q10 network to provide flat network storage for Windows Azure Storage.

With these improvements, we are pleased to announce an increase in the scalability targets for Windows Azure Storage, where all new storage accounts are created on the Gen 2 hardware SKU. These new scalability targets apply to all storage accounts created after June 7th, 2012. Storage accounts created before this date have the prior scalability targets described here. Unfortunately, we do not offer the ability to migrate storage accounts, so only storage accounts created after June 7th, 2012 have these new scalability targets.

To find out the creation date of your storage account, you can go to the new portal, click on the storage account, and see the creation date on the right in the quick glance section as shown below:

[Image: storage account creation date shown in the portal quick glance section]

Storage Account Scalability Targets

By the end of 2012, we will have finished rolling out the software improvements for our flat network design. This will provide the following scalability targets for a single storage account created after June 7th, 2012.

  • Capacity – Up to 200 TBs
  • Transactions – Up to 20,000 entities/messages/blobs per second
  • Bandwidth for a Geo Redundant storage account
    • Ingress - up to 5 gigabits per second
    • Egress - up to 10 gigabits per second
  • Bandwidth for a Locally Redundant storage account
    • Ingress - up to 10 gigabits per second
    • Egress - up to 15 gigabits per second

Storage accounts have geo-replication on by default to provide what we call Geo Redundant Storage. Customers can turn geo-replication off to use what we call Locally Redundant Storage, which results in a discounted price relative to Geo Redundant Storage and higher ingress and egress targets (by end of 2012) as described above. For more information on Geo Redundant Storage and Locally Redundant Storage, please see here.

Note, the actual transaction and bandwidth targets achieved by your storage account will very much depend upon the size of objects, access patterns, and the type of workload your application exhibits. To go above these targets, a service should be built to use multiple storage accounts, and partition its blob containers, tables, queues and objects across those storage accounts. By default, a single Windows Azure subscription gets 20 storage accounts. However, you can contact customer support to get more storage accounts if you need to store more data than that (e.g., petabytes).
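For illustration only, here is a minimal sketch of that idea; the account names, connection strings, and hashing scheme are hypothetical, and any stable mapping from object name to account would work equally well:

// Hypothetical sketch: spread objects across several storage accounts by hashing the object name.
// String.GetHashCode() is not stable across processes or .NET versions; a real service should use
// a stable hash (e.g. FNV or MD5) so the same name always maps to the same account.
string[] connectionStrings = new string[]
{
    "DefaultEndpointsProtocol=https;AccountName=myapp0;AccountKey=<key0>",
    "DefaultEndpointsProtocol=https;AccountName=myapp1;AccountKey=<key1>",
    "DefaultEndpointsProtocol=https;AccountName=myapp2;AccountKey=<key2>"
};

CloudStorageAccount PickAccountFor(string objectName)
{
    // Mask the sign bit so the index is always non-negative.
    int index = (objectName.GetHashCode() & 0x7FFFFFFF) % connectionStrings.Length;
    return CloudStorageAccount.Parse(connectionStrings[index]);
}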

Partition Scalability Targets

Within a storage account, all of the objects are grouped into partitions as described here. Therefore, it is important to understand the performance targets of a single partition for our storage abstractions, which are (the below Queue and Table throughputs were achieved using an object size of 1KB):

  • Single Queue – all of the messages in a queue are accessed via a single queue partition. A single queue is targeted to be able to process:
    • Up to 2,000 messages per second
  • Single Table Partition – a table partition consists of all of the entities in a table with the same partition key value, and usually tables have many partitions. The throughput target for a single table partition is:
    • Up to 2,000 entities per second
    • Note, this is for a single partition, and not a single table. Therefore, a table with good partitioning can process up to 20,000 entities/second, which is the overall account target described above.
  • Single Blob – the partition key for blobs is the “container name + blob name”, therefore we can partition blobs down to a single blob per partition to spread out blob access across our servers. The target throughput of a single blob is:
    • Up to 60 MBytes/sec

The above throughputs are the high end targets. What can be achieved by your application very much depends upon the size of the objects being accessed, the operation types (workload) and the access patterns. We encourage all services to test the performance at the partition level for their workload.

When your application reaches the limit of what a partition can handle for your workload, it will start to receive “503 Server Busy” or “500 Operation Timeout” responses. When this occurs, the application should use exponential backoff for retries. The exponential backoff allows the load on the partition to decrease and smooths out spikes in traffic to that partition.
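As a rough sketch only (not the SDK’s built-in retry policies; the helper name and retry counts are made up, and it assumes the .NET storage client’s StorageException type), an exponential backoff retry around a storage call might look like:

// Illustrative only: retry an operation on 500/503 responses with exponentially growing delays.
public static T ExecuteWithBackoff<T>(Func<T> operation, int maxRetries = 5)
{
    TimeSpan delay = TimeSpan.FromSeconds(1);
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            return operation();
        }
        catch (StorageException ex)
        {
            int status = ex.RequestInformation.HttpStatusCode;
            if (attempt >= maxRetries || (status != 500 && status != 503))
            {
                throw; // not retriable, or out of attempts
            }
            Thread.Sleep(delay);
            delay = TimeSpan.FromSeconds(delay.TotalSeconds * 2); // back off: 1s, 2s, 4s, ...
        }
    }
}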

In summary, we are excited to announce our first step towards providing flat network storage. We plan to continue to invest in improving bandwidth between compute and storage as well as increase the scalability targets of storage accounts and partitions over time.

Brad Calder and Aaron Ogus
Windows Azure Storage


Announcing Storage Client Library 2.1 RTM & CTP for Windows Phone


We are pleased to announce that the storage client for .NET 2.1 has RTM’d. This release includes several notable features such as Async Task methods, IQueryable for Tables, buffer pooling support, and much more. In addition, we are releasing the CTP of the storage client for Windows Phone 8. With the existing support for Windows Runtime, clients can now leverage Windows Azure Storage via a consistent API surface across multiple Windows platforms. As usual, all of the source code is available via GitHub (see the resources section below). You can download the latest binaries via the following NuGet packages:

Nuget – 2.1 RTM

Nuget – 2.1 For Windows Phone and Windows RunTime (Preview)

Nuget – 2.1 Tables Extension library for Non-JavaScript Windows RunTime apps (Preview)

The remainder of this blog will cover some of the new features and scenarios in additional detail and provide supporting code samples. As always, we appreciate your feedback, so please feel free to add comments below.

Fundamentals

For this release we focused heavily on fundamentals by dramatically expanding test coverage, and building an automated performance suite that let us benchmark performance behaviors across various high scale scenarios.

Here are a few highlights for the 2.1 release:

  • Over 1000 publicly available Unit tests covering every public API
  • Automated Performance testing to validate performance impacting changes
  • Expanded Stress testing to ensure data correctness under massive loads
  • Key performance improving features that target memory behavior and shared infrastructure (more details below)

Performance

We are always looking for ways to improve the performance of client applications by improving the storage client itself and by exposing new features that better allow clients to optimize their applications. In this release we have done both and the results are dramatic.

For example, below are the results from one of the test scenarios we execute, where a single XL VM round trips 30 blobs of 256 MB each simultaneously (7.5 GB in total). As you can see, there are dramatic improvements in both latency and CPU usage compared to SDK 1.7 (CPU drops almost 40% while latency is reduced by 16.5% for uploads and 23.2% for downloads). Additionally, you may note the actual latency improvements between 2.0.5.1 and 2.1 are only a few percentage points. This is because we have successfully removed the client from the critical path, resulting in an application that is now entirely dependent on the network. Further, while we have improved performance in this scenario, CPU usage has dropped another 13% on average compared to SDK 2.0.5.1.

 

[Image: upload/download latency and CPU comparison across SDK versions]

This is just one example of the performance improvements we have made. For more on performance as well as best practices, please see the TechEd presentation in the Resources section below.

Async Task Methods

Each public API now exposes an Async method that returns a task for a given operation. Additionally, these methods support pre-emptive cancellation via an overload which accepts a CancellationToken. If you are running under .NET 4.5, or using the Async Targeting Pack for .NET 4.0, you can easily leverage the async / await pattern when writing your applications against storage.
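For example, a small sketch (inside an async method, assuming container is an existing CloudBlobContainer and the file path is made up) of uploading a blob with the new Task-based methods and a CancellationToken:

// Upload a blob with the Task-returning API and allow the call to be cancelled.
using (var cts = new CancellationTokenSource(TimeSpan.FromMinutes(5)))
using (var fileStream = File.OpenRead(@"C:\data\payload.bin"))
{
    CloudBlockBlob blob = container.GetBlockBlobReference("payload.bin");

    // Requires .NET 4.5 or the Async Targeting Pack for .NET 4.0 to use await.
    await blob.UploadFromStreamAsync(fileStream, cts.Token);
}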

Buffer Pooling

For high scale applications, Buffer Pooling is a great strategy to allow clients to re-use existing buffers across many operations. In a managed environment such as .NET, this can dramatically reduce the number of cycles spent allocating and subsequently garbage collecting semi-long lived buffers.

To address this scenario, each Service Client now exposes a BufferManager property of type IBufferManager. This property allows clients to leverage a given buffer pool with any objects associated with that service client instance. For example, all CloudTable objects created via CloudTableClient.GetTableReference() would make use of the associated service client’s BufferManager. The IBufferManager is patterned after the BufferManager in System.ServiceModel.dll to allow desktop clients to easily leverage an existing implementation provided by the framework. (Clients running on other platforms such as Windows Runtime or Windows Phone may implement a pool against the IBufferManager interface.)

For desktop applications to leverage the built-in BufferManager provided by System.ServiceModel.dll, a simple adapter is required:

using Microsoft.WindowsAzure.Storage;
using System.ServiceModel.Channels;

public class WCFBufferManagerAdapter : IBufferManager
{
    private int defaultBufferSize = 0;

    public WCFBufferManagerAdapter(BufferManager manager, int defaultBufferSize)
    {
        this.Manager = manager;
        this.defaultBufferSize = defaultBufferSize;
    }

    public BufferManager Manager { get; internal set; }

    public void ReturnBuffer(byte[] buffer)
    {
        this.Manager.ReturnBuffer(buffer);
    }

    public byte[] TakeBuffer(int bufferSize)
    {
        return this.Manager.TakeBuffer(bufferSize);
    }

    public int GetDefaultBufferSize()
    {
        return this.defaultBufferSize;
    }
}

With this in place, an application can now specify a shared buffer pool across any resources associated with a given service client by simply setting the BufferManager property.

BufferManager mgr = BufferManager.CreateBufferManager([MaxBufferPoolSize], [MaxBufferSize]);

serviceClient.BufferManager = new WCFBufferManagerAdapter(mgr, [MaxBufferSize]);

Multi-Buffer Memory Stream

During the course of our performance investigations we have uncovered a few performance issues with the MemoryStream class provided in the BCL (specifically regarding Async operations, dynamic length behavior, and single byte operations). To address these issues we have implemented a new Multi-Buffer memory stream which provides consistent performance even when length of data is unknown. This class leverages the IBufferManager if one is provided by the client to utilize the buffer pool when allocating additional buffers. As a result, any operation on any service that potentially buffers data (Blob Streams, Table Operations, etc.) now consumes less CPU, and optimally uses a shared memory pool.

.NET MD5 is now default

Our performance testing highlighted a slight performance degradation when utilizing the FISMA-compliant native MD5 implementation compared to the built-in .NET implementation. As such, for this release the .NET MD5 is now used by default; any clients requiring FISMA compliance can re-enable the native implementation as shown below:

CloudStorageAccount.UseV1MD5 = false;

New Range Based Overloads

In 2.1, Blob upload APIs include an overload which allows clients to upload only a given range of a byte array or stream to the blob. This feature allows clients to avoid pre-buffering data prior to uploading it to the storage service. Additionally, there are new download range APIs for both streams and byte arrays that allow efficient, fault-tolerant range downloads without the need to buffer any data on the client side.
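As a hedged sketch (assuming blob is an existing CloudBlockBlob; the buffer sizes and offsets are arbitrary), the range-based overloads can be used roughly as follows:

byte[] data = new byte[4 * 1024 * 1024];
// ... fill data ...

// Upload only the first 1 MB of the buffer (start index 0, count 1 MB) without copying it first.
blob.UploadFromByteArray(data, 0, 1024 * 1024);

// Download only bytes 512..1023 of the blob directly into a stream.
using (var ms = new MemoryStream())
{
    blob.DownloadRangeToStream(ms, 512, 512);
}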

Client Tracing

The 2.1 release implements .NET Tracing, allowing users to enable log information regarding request execution and REST requests (See below for a table of what information is logged). Additionally, Windows Azure Diagnostics provides a trace listener that can redirect client trace messages to the WADLogsTable if users wish to persist these traces to the cloud.

Logged Data

Each log line will include the following data:

  • Client Request ID: Per request ID that is specified by the user in OperationContext
  • Event: Free-form text

As part of each request the following data will be logged to make it easier to correlate client-side logs to server-side logs:

  • Request:
    • Request Uri
  • Response:
    • Request ID
    • HTTP status code

Trace Levels

Off – Nothing will be logged.

Error – If an exception cannot or will not be handled internally and will be thrown to the user, it will be logged as an error.

Warning – If an exception is caught and handled internally, it will be logged as a warning. The primary use case for this is the retry scenario, where an exception is not thrown back to the user so that the operation can be retried. This can also happen in operations such as CreateIfNotExists, where we handle the 404 error silently.

Informational – The following info will be logged:

  • Right after the user calls a method to start an operation, request details such as URI and client request ID will be logged.
  • Important milestones such as Sending Request Start/End, Upload Data Start/End, Receive Response Start/End, Download Data Start/End will be logged to mark the timestamps.
  • Right after the headers are received, response details such as request ID and HTTP status code will be logged.
  • If an operation fails and the storage client decides to retry, the reason for that decision will be logged along with when the next retry is going to happen.
  • All client-side timeouts will be logged when the storage client decides to abort a pending request.

Verbose – The following info will be logged:

  • String-to-sign for each request
  • Any extra details specific to operations (this is up to each operation to define and use)


Enabling Tracing

A key concept is the opt-in / opt-out model that the client provides for tracing. In typical applications it is customary to enable tracing at a given verbosity for a specific class. This works fine for many client applications; however, for cloud applications that are executing at scale, this approach may generate much more data than is required by the user. As such, we have provided an opt-in model for tracing which allows clients to configure listeners at a given verbosity, but only log specific requests if and when they choose. Essentially, this design provides the ability for users to perform “vertical” logging across layers of the stack targeted at specific requests, rather than “horizontal” logging which would record all traffic seen by a specific class or layer.

To enable tracing in .NET you must add a trace source for the storage client to the app.config and set the verbosity:

<system.diagnostics>
  <sources>
    <source name="Microsoft.WindowsAzure.Storage">
      <listeners>
        <add name="myListener"/>
      </listeners>
    </source>
  </sources>
  <switches>
    <add name="Microsoft.WindowsAzure.Storage" value="Verbose"/>
  </switches>

Then add a listener to record the output; in this case we will simply record it to application.log
 
<sharedListeners>
  <add name="myListener"
       type="System.Diagnostics.TextWriterTraceListener"
       initializeData="application.log"/>
</sharedListeners>

The application is now set to log all trace messages created by the storage client up to the Verbose level. However, if a client wishes to enable logging only for specific clients or requests they can further configure the default logging level in their application by setting OperationContext.DefaultLogLevel and then opt-in any specific requests via the OperationContext object:
 
// Disable Default Logging
OperationContext.DefaultLogLevel = LogLevel.Off;

// Configure a context to track my upload and set logging level to verbose
OperationContext myContext = new OperationContext() { LogLevel = LogLevel.Verbose };
blobRef.UploadFromStream(stream, myContext);

With client-side tracing used in conjunction with storage logging, clients can now get a complete view of their application from both the client and server perspectives.

Blob Features

Blob Streams

In the 2.1 release, we improved the blob streams that are created by the OpenRead and OpenWrite APIs of CloudBlockBlob and CloudPageBlob. The write stream returned by OpenWrite can now upload much faster when the parallel upload functionality is enabled by keeping the number of active writers at a certain level. Moreover, the return type has changed from Stream to a new type named CloudBlobStream, which is derived from Stream. CloudBlobStream offers the following new APIs:

public abstract ICancellableAsyncResult BeginCommit(AsyncCallback callback, object state);
public abstract ICancellableAsyncResult BeginFlush(AsyncCallback callback, object state);
public abstract void Commit();
public abstract void EndCommit(IAsyncResult asyncResult);
public abstract void EndFlush(IAsyncResult asyncResult);

Flush already exists in Stream itself, so CloudBlobStream only adds an asynchronous version. However, Commit is a completely new API that allows the caller to commit before disposing the Stream. This allows much easier exception handling during commit and also the ability to commit asynchronously.

The read stream returned by OpenRead does not have a new type, but it now has true synchronous and asynchronous implementations. Clients can now get the stream synchronously via OpenRead or asynchronously using [Begin|End]OpenRead. Moreover, after the stream is opened, all synchronous calls such as querying the length or the Read API itself are truly synchronous, meaning that they do not call any asynchronous APIs internally.
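A brief sketch (assuming blockBlob is an existing CloudBlockBlob) of writing through the new stream type and committing explicitly:

using (CloudBlobStream writeStream = blockBlob.OpenWrite())
{
    byte[] buffer = Encoding.UTF8.GetBytes("hello, blob streams");
    writeStream.Write(buffer, 0, buffer.Length);

    // Committing explicitly surfaces any upload exception here, rather than inside Dispose.
    writeStream.Commit();
}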

Table Features

IgnorePropertyAttribute

When persisting POCO objects to Windows Azure Tables, in some cases clients may wish to omit certain client-only properties. In this release we are introducing the IgnorePropertyAttribute to give clients an easy way to ignore a given property during serialization and de-serialization of an entity. The following snippet illustrates how to ignore the FirstName property of an entity via the IgnorePropertyAttribute:

public class Customer : TableEntity
{
    [IgnoreProperty]
    public string FirstName { get; set; }
}

Compiled Serializers

When working with POCO types, previous releases of the SDK relied on reflection to discover all applicable properties for serialization / de-serialization at runtime. This process was both repetitive and computationally expensive. In 2.1 we are introducing support for Compiled Expressions, which allow the client to dynamically generate a LINQ expression at runtime for a given type. This allows the client to do the reflection process once and then compile a lambda at runtime which can handle all future reads and writes of a given entity type. In performance micro-benchmarks this approach is roughly 40x faster computationally than the reflection-based approach.

All compiled expressions for read and write are held in static concurrent dictionaries on TableEntity. If you wish to disable this feature, simply set TableEntity.DisableCompiledSerializers = true;

Serialize 3rd Party Objects

In some cases clients wish to serialize objects for which they do not control the source, for example framework objects or objects from 3rd party libraries. In previous releases clients were required to write custom serialization logic for each type they wished to serialize. In the 2.1 release we are exposing the core serialization and de-serialization logic for any CLR type. This allows clients to easily persist and read back entities for types that do not derive from TableEntity or implement the ITableEntity interface. This pattern can also be especially useful when exposing DTO types via a service, as the client will no longer be required to maintain two entity types and marshal between them.

A general purpose adapter pattern can be used which will allow clients to simply wrap an object instance in generic adapter which will handle serialization for a given type. The example below illustrates this pattern:

using System;
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class EntityAdapter<T> : ITableEntity where T : new()
{
    public EntityAdapter()
    {
        // If you would like to work with objects that do not have a default ctor you can use (T)Activator.CreateInstance(typeof(T));
        this.InnerObject = new T();
    }

    public EntityAdapter(T innerObject)
    {
        this.InnerObject = innerObject;
    }

    public T InnerObject { get; set; }

    /// <summary>
    /// Gets or sets the entity's partition key.
    /// </summary>
    /// <value>The partition key of the entity.</value>
    public string PartitionKey { get; set; } // TODO: Must implement logic to map PartitionKey to object here!

    /// <summary>
    /// Gets or sets the entity's row key.
    /// </summary>
    /// <value>The row key of the entity.</value>
    public string RowKey { get; set; } // TODO: Must implement logic to map RowKey to object here!

    /// <summary>
    /// Gets or sets the entity's timestamp.
    /// </summary>
    /// <value>The timestamp of the entity.</value>
    public DateTimeOffset Timestamp { get; set; }

    /// <summary>
    /// Gets or sets the entity's current ETag. Set this value to '*' in order to blindly overwrite an entity as part of an update operation.
    /// </summary>
    /// <value>The ETag of the entity.</value>
    public string ETag { get; set; }

    public virtual void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
    {
        TableEntity.ReadUserObject(this.InnerObject, properties, operationContext);
    }

    public virtual IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
    {
        return TableEntity.WriteUserObject(this.InnerObject, operationContext);
    }
}

The following example uses the EntityAdapter pattern to insert a DTO object directly to the table via the adapter:
 
table.Execute(TableOperation.Insert(new EntityAdapter<CustomerDTO>(customer)));
 
Further I can retrieve this entity back via:
 
testTable.Execute(TableOperation.Retrieve<EntityAdapter<CustomerDTO>>(pk, rk)).Result;

Note, the Compiled Serializer functionality will be utilized for any types serialized or deserialized via TableEntity.[Read|Write]UserObject.

Table IQueryable

In 2.1 we are adding IQueryable support for the Table Service layer on desktop and phone. This will allow users to construct and execute queries via LINQ similar to WCF Data Services; however, this implementation has been specifically optimized for Windows Azure Tables and NoSQL concepts. The snippet below illustrates constructing a query via the new IQueryable implementation:

var query = from ent in currentTable.CreateQuery<CustomerEntity>()
            where ent.PartitionKey == "users" && ent.RowKey == "joe"
            select ent;

The IQueryable implementation transparently handles continuations, and has support to add RequestOptions, OperationContext, and client-side EntityResolvers directly into the expression tree. Additionally, since this makes use of existing infrastructure, optimizations such as IBufferManager, Compiled Serializers, and Logging are fully supported.

Note, to support IQueryable projections the type constraint on TableQuery of ITableEntity, new() has been removed. Instead, any TableQuery objects not created via the new CloudTable.CreateQuery<T>() method will enforce this constraint at runtime.

Conceptual model

We are committed to backwards compatibility; as such, we strive to introduce as few breaking changes as possible for existing clients. Therefore, in addition to supporting the new IQueryable mode of execution, we continue to support the 2.x “fluent” mode of constructing queries via the Where, Select, and Take methods. However, these modes are not strictly interoperable while constructing queries as they store data in different forms.

Aside from query construction, a key difference between the two modes is that the IQueryable interface requires that the query object be able to execute itself, as compared to the previous model of executing queries via a CloudTable object. A brief summary of these two modes of execution is listed below:

Fluent Mode (2.0.x)

  • Queries are created by directly calling a constructor
  • Queries are executed against a CloudTable object via ExecuteQuery[Segmented] methods
  • EntityResolver specified in execute overload
  • Fluent methods Where, Select, and Take are provided

IQueryable Mode (2.1+)

  • Queries are created by an associated table, i.e. CloudTable.CreateQuery<T>()
  • Queries are executed by enumerating the results, or by Execute[Segmented] methods on TableQuery
  • EntityResolver specified via LINQ extension method Resolve
  • IQueryable Extension Methods provided : WithOptions, WithContext, Resolve, AsTableQuery

The examples below illustrate various scenarios in the two modes:

 

Construct Query

Fluent Mode:
TableQuery<ComplexEntity> stringQuery = new TableQuery<ComplexEntity>();

IQueryable Mode:
TableQuery<ComplexEntity> query = (from ent in table.CreateQuery<ComplexEntity>()
                                   select ent);

Filter

Fluent Mode:
q.Where(TableQuery.GenerateFilterCondition("val", QueryComparisons.GreaterThanOrEqual, 50));

IQueryable Mode:
TableQuery<ComplexEntity> query = (from ent in table.CreateQuery<ComplexEntity>()
                                   where ent.val >= 50
                                   select ent);

Take

Fluent Mode:
q.Take(5);

IQueryable Mode:
TableQuery<ComplexEntity> query = (from ent in table.CreateQuery<ComplexEntity>()
                                   select ent).Take(5);

Projection

Fluent Mode:
q.Select(new List<string>() { "A", "C" });

IQueryable Mode:
TableQuery<ProjectedEntity> query = (from ent in table.CreateQuery<ComplexEntity>()
                                     select new ProjectedEntity() { a = ent.a, b = ent.b, c = ent.c … });

Entity Resolver

Fluent Mode:
currentTable.ExecuteQuery(query, resolver);

IQueryable Mode:
TableQuery<ComplexEntity> query = (from ent in table.CreateQuery<ComplexEntity>()
                                   select ent).Resolve(resolver);

Execution

Fluent Mode:
currentTable.ExecuteQuery(query);

IQueryable Mode:
foreach (ProjectedPOCO ent in query)
< OR >
query.AsTableQuery().Execute(options, opContext);

Execution Segmented

Fluent Mode:
TableQuerySegment<Entity> seg = currentTable.ExecuteQuerySegmented(query, continuationToken, options, opContext);

IQueryable Mode:
TableQuery<ComplexEntity> query = (from ent in table.CreateQuery<ComplexEntity>()
                                   select ent).AsTableQuery().ExecuteSegmented(token, options, opContext);

Request Options

Fluent Mode:
currentTable.ExecuteQuery(query, options, null);

IQueryable Mode:
TableQuery<ComplexEntity> query = (from ent in table.CreateQuery<ComplexEntity>()
                                   select ent).WithOptions(options);
< OR >
query.AsTableQuery().Execute(options, null);

Operation Context

Fluent Mode:
currentTable.ExecuteQuery(query, null, opContext);

IQueryable Mode:
TableQuery<ComplexEntity> query = (from ent in table.CreateQuery<ComplexEntity>()
                                   select ent).WithContext(opContext);
< OR >
query.AsTableQuery().Execute(null, opContext);

 

Complete Query

The query below illustrates many of the supported extension methods and returns an enumerable of string values corresponding to the “Name” property on the entities.

var nameResults = (from ent in currentTable.CreateQuery<POCOEntity>()
where ent.Name == "foo"
select ent)
.Take(5)
.WithOptions(new TableRequestOptions())
.WithContext(new OperationContext())
.Resolve((pk, rk, ts, props, etag) => props["Name"].StringValue);

Note the three extension methods which allow a TableRequestOptions, an OperationContext, and an EntityResolver to be associated with a given query. These extensions are available by including a using statement for the Microsoft.WindowsAzure.Storage.Tables.Queryable namespace.

The extension method .AsTableQuery() is also provided; however, unlike the WCF implementation, it is no longer mandatory. It simply allows clients more flexibility in query execution by providing additional methods for execution such as Task, APM, and segmented execution methods.

Projection

In traditional LINQ providers, projection is handled via the select new keywords, which essentially perform two separate actions. The first is to analyze any properties that are accessed and send them to the server so that it returns only the desired columns; this is considered server-side projection. The second is to construct a client-side action which is executed for each returned entity, essentially instantiating it and populating its properties with the data returned by the server; this is considered client-side projection. In the implementation released in 2.1, we have allowed clients to separate these two different types of projections by allowing them to be specified separately in the expression tree. (Note, you can still use the traditional approach via select new if you prefer.)

Server Side Projection Syntax

For the simple scenario where you wish to filter the properties returned by the server, a convenient helper is provided. This does not provide any client-side projection functionality; it simply limits the properties returned by the service. Note, by default PartitionKey, RowKey, Timestamp, and ETag are always requested to allow for subsequent updates to the resulting entity.

IQueryable<POCOEntity> projectionResult = from ent in currentTable.CreateQuery<POCOEntity>()
select TableQuery.Project(ent, "a", "b");

This has the same effect as writing the following, but with improved performance and simplicity:

IQueryable<POCOEntity> projectionResult = from ent in currentTable.CreateQuery<POCOEntity>()
select new POCOEntity()
{
PartitionKey = ent.PartitionKey,
RowKey = ent.RowKey,
Timestamp = ent.Timestamp,
a = ent.a,
b = ent.b
};

Client Side Projection Syntax with resolver

For scenarios where you wish to perform custom client-side processing during deserialization, the EntityResolver is provided to allow the client to inspect the data prior to determining its type or return value. This essentially provides an open-ended hook for clients to control deserialization in any way they wish. The example below performs both a server-side and a client-side projection, projecting into a concatenated string of the “FirstName” and “LastName” properties.

IQueryable<string> fullNameResults = (from ent in currentTable.CreateQuery<POCOEntity>()
                                      select TableQuery.Project(ent, "FirstName", "LastName"))
                                      .Resolve((pk, rk, ts, props, etag) => props["FirstName"].StringValue + props["LastName"].StringValue);

The EntityResolver can read the data directly off the wire, which avoids the step of de-serializing the data into the base entity type and then selecting out the final result from that “throw away” intermediate object. Since EntityResolver is a delegate type, any client-side projection logic can be implemented here (see the NoSQL section here for a more in-depth example).

Type-Safe DynamicTableEntity Query Construction

The DynamicTableEntity type allows clients to interact with schema-less data in a simple, straightforward way via a dictionary of properties. However, constructing type-safe queries against schema-less data presents a challenge when working with the IQueryable interface and LINQ in general, as all queries must be of a given type which contains relevant type information for its properties. For example, let’s say I have a table that has both customers and orders in it. If I wished to construct a query that filters on columns across both types of data, I would need to create some dummy CustomerOrder super entity which contains the union of properties between the Customer and Order entities.

This is not ideal, and this is where the DynamicTableEntity comes in. The IQueryable implementation has provided a way to check for property access via the DynamicTableEntity Properties dictionary in order to provide for type-safe query construction. This allows the user to indicate to the client the property it wishes to filter against and its type. The sample below illustrates how to create a query of type DynamicTableEntity and construct a complex filter on different properties:

TableQuery<DynamicTableEntity> res = from ent in table.CreateQuery<DynamicTableEntity>()
where ent.Properties["customerid"].StringValue == "customer_1" ||
ent.Properties["orderdate"].DateTimeOffsetValue > startDate
select ent;

In the example above, the IQueryable implementation was smart enough to infer that the client is filtering on the “customerid” property as a string and on “orderdate” as a DateTimeOffset, and it constructed the query accordingly.

Windows Phone Known Issue

The current CTP release contains a known issue where in some cases calling HttpWebRequest.Abort() may not result in the HttpWebRequest’s callback being called. As such, it is possible when cancelling an outstanding request the callback may be lost and the operation will not return. This issue will be addressed in a future release.

Summary

We are continuously making improvements to the developer experience for Windows Azure Storage and very much value your feedback. Please feel free to leave comments and questions below.

 

Joe Giardino

 

Resources

Getting the most out of Windows Azure Storage – TechEd NA ‘13

Nuget – 2.1 RTM

Nuget – 2.1 For Windows Phone and Windows RunTime (Preview)

Nuget – 2.1 Tables Extension library for Non-JavaScript Windows RunTime apps (Preview)

Github

2.1 Complete Changelog


Windows Azure Storage Emulator 2.2.1 Preview Release with support for “2013-08-15” version


We are excited to announce the release of a preview update to the Windows Azure Storage Emulator that supports the newly announced features for version “2013-08-15” such as CORS, JSON, etc.

The Windows Azure Storage Emulator 2.2.1 Preview Release MSI package can be found here.

Installation steps

This is a preview release and requires that Windows Azure SDK 2.2 already be installed. The installer does not replace the Windows Azure Storage Emulator 2.2 binaries automatically. Instead, it will drop the binaries under a temporary folder: "%ProgramFiles%\Windows Azure Storage Emulator 2.2.1\devstore" on a 32-bit OS or "%ProgramFiles(x86)%\Windows Azure Storage Emulator 2.2.1\devstore" on a 64-bit OS. We took this approach of not overwriting because of the “preview” nature of this emulator; it allows you to easily revert back to the previous emulator if required, without uninstalling and reinstalling SDK 2.2. We therefore recommend backing up the Windows Azure Storage Emulator 2.2 binaries before replacing them with the new binaries. A readme.txt file with the detailed manual steps will open after the MSI installation is complete.

Please note that this is a preview version and any feedback will be greatly appreciated. Please feel free to leave any comments at the end of this post or at the Windows Azure Storage Forum.

Michael Roberson, Jean Ghanem

 

For your convenience, below are the post-MSI-installation instructions from the readme.txt file that is available once you install:

[README.TXT]

Windows Azure Storage Emulator 2.2.1 Preview

------------------------------------

PREREQUISITES

Windows Azure SDK 2.2 must already be installed from http://www.microsoft.com/en-us/download/details.aspx?id=40893

SETUP

To use version 2.2.1, follow these steps:

  1. Ensure that Windows Azure SDK 2.2 is installed. The Windows Azure Storage Emulator 2.2.1 will not work unless SDK version 2.2 is installed.
  2. Shut down the Windows Azure Storage Emulator if it is currently running.
  3. Copy all files from the following path:
    • For 32-bit OS: "%ProgramFiles%\Windows Azure Storage Emulator 2.2.1\devstore"
    • For 64-bit OS: "%ProgramFiles(x86)%\Windows Azure Storage Emulator 2.2.1\devstore"

to the following path:

"%ProgramFiles%\Microsoft SDKs\Windows Azure\Emulator\devstore"

If prompted, choose to replace the existing files with the new ones.

UNINSTALLATION

Windows Azure Storage Emulator 2.2.1 maintains backward compatibility with version 2.2, so reverting back to version 2.2 is unnecessary in most cases. To revert anyway, reinstall the Windows Azure SDK 2.2 emulator package from the following website:

http://www.windowsazure.com/en-us/downloads/

Microsoft Azure Storage Release –Append Blob, New Azure File Service Features and Client Side Encryption General Availability


We are excited to announce new capabilities in the Azure Storage Service and updates to our Storage Client Libraries. We have a new blob type, Append Blob, as well as a number of new features for the Azure File Service. In detail, we are adding the following:

1. Append Blob with a new AppendBlock API

A new blob type, the append blob, is now available. All writes to an append blob are added sequentially to the end of the blob, making it optimal for logging scenarios. Append blobs support an Append Block operation for adding blocks to the blob. Once a block is added with Append Block, it is immediately available to be read; no further commits are necessary. The block cannot be modified once it has been added.

Please read Getting Started with Blob Storage for more details.
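As a rough sketch using the .NET client (assuming blobClient is an existing CloudBlobClient; the container and blob names are made up), appending log lines might look like:

CloudBlobContainer container = blobClient.GetContainerReference("logs");
container.CreateIfNotExists();

CloudAppendBlob appendBlob = container.GetAppendBlobReference("app.log");
if (!appendBlob.Exists())
{
    // Create the append blob once; CreateOrReplace would reset an existing blob.
    appendBlob.CreateOrReplace();
}

// Each append is added to the end of the blob and is readable immediately.
appendBlob.AppendText("event at " + DateTimeOffset.UtcNow + Environment.NewLine);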

2. Azure File Service

A number of new features are available for the Azure File Service (in preview, with technical support available), including server-side copy file, file SAS, share size quota, Get/Set Directory Metadata, and CORS support.

Check out our Azure Files Preview Update blog to learn more. Also, read the How to use Azure File storage with PowerShell and .NET getting started guide to learn how to use these new features.

If you’re not familiar with CORS or SAS signatures, you’ll find the following documentation helpful:

3. Client-Side Encryption

We are also announcing general availability for the .NET client-side encryption capability that has been in preview since April. In addition to enabling encryption of Blobs, Tables and Queues we also have support for Append Blobs. Please read Get Started with Client-Side Encryption for Microsoft Azure Storage for more details.
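As a hedged sketch of the encryption pattern (assuming blob is an existing CloudBlockBlob; RsaKey from the Azure Key Vault extensions is just one possible IKey implementation, and the key identifier and file path are made up):

// Create a client-held key and wrap it in an encryption policy; data is encrypted
// on the client before it ever leaves the machine.
RsaKey key = new RsaKey("private:key1");
BlobEncryptionPolicy policy = new BlobEncryptionPolicy(key, null);
BlobRequestOptions options = new BlobRequestOptions { EncryptionPolicy = policy };

using (var stream = File.OpenRead(@"C:\data\secret.txt"))
{
    // Pass the encryption-enabled request options on the upload call.
    blob.UploadFromStream(stream, null, options, null);
}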

4. Azure Storage Client Library and Tooling Updates

We have also released new versions of our .NET, Java, C++, Node.js, and Android client libraries which provide support for the new 2015-02-21 storage version. For tooling, we've released new versions of AzCopy. Check out Getting Started with the AzCopy Command-Line Utility to learn more. We've also released Storage updates to Azure PowerShell and Azure CLI.

 

We hope you will find these features useful. As always, please let us know if you have any further questions either via forum or comments on this post.

Thanks!

Azure Storage Team

Azure Files Preview Update


At Build 2015 we announced that technical support is now available for Azure Files customers with technical support subscriptions. We are pleased to announce several additional updates for the Azure Files service which have been made in response to customer feedback. Please check them out below:

New REST API Features

Server Side Copy File

Copy File allows you to copy a blob or file to a destination file within the Storage account or across different Storage accounts all on the server side. Before this update, performing a copy operation with the REST API or SMB required you to download the file or blob and re-upload it to its destination.
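For illustration, here is a sketch against the .NET client; it assumes sourceFile and destFile are existing CloudFile references (in the same or different accounts), and overload names may differ slightly from your library version:

// Start a server-side copy; the data never flows through the client.
string copyId = destFile.StartCopy(sourceFile);

// The copy runs in the background; poll the destination to observe its progress.
destFile.FetchAttributes();
Console.WriteLine("Copy status: " + destFile.CopyState.Status);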

File SAS

You can now provide access to file shares and individual files by using SAS (shared access signatures) in REST API calls.
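A hedged example (assuming share is an existing CloudFileShare; the file name and expiry are made up) of generating a read-only SAS for a single file:

CloudFile file = share.GetRootDirectoryReference().GetFileReference("report.pdf");

// Grant read access to this one file for the next hour.
SharedAccessFilePolicy policy = new SharedAccessFilePolicy
{
    Permissions = SharedAccessFilePermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
};

string sasToken = file.GetSharedAccessSignature(policy);
Uri fileSasUri = new Uri(file.Uri + sasToken);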

Share Size Quota

Another new feature for Azure Files is the ability to set the “share size quota” via the REST API. This means that you can now set limits on the size of file shares. When the sum of the sizes of the files on the share exceeds the quota set on the share, you will not be able to increase the size of the files in the share.
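A possible sketch (assuming fileClient is an existing CloudFileClient and that the quota is surfaced through the share properties in the .NET client; the share name and size are made up):

CloudFileShare share = fileClient.GetShareReference("myshare");
share.FetchAttributes();

// Quota is expressed in gigabytes; writes that would push the share past it are rejected.
share.Properties.Quota = 10;
share.SetProperties();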

Get/Set Directory Metadata

The new Get/Set Directory Metadata operation allows you to get/set all user-defined metadata for a specified directory.

CORS Support

Cross-Origin Resource Sharing (CORS) has been supported in the Blob, Table, and Queue services since November 2013. We are pleased to announce that CORS will now be supported in Files.

Learn more about these new features by checking out the Azure Files REST API documentation.

Library and Tooling Updates

The client libraries that support these new features are .NET (desktop), Node.js, Java, Android, ASP.NET 5, Windows Phone, and Windows Runtime. Azure PowerShell and Azure CLI also support all of these features except for Get/Set Directory Metadata. In addition, the newest version of AzCopy now uses the server-side copy file feature.

If you’d like to learn more about using client libraries and tooling with Azure Files, a great way to get started is to check out our tutorial for using Azure Files with PowerShell and .NET.

As always, if you have any feature requests please let us know by submitting your ideas to Azure Storage Feedback.

Thanks!

Azure Storage Team


AzCopy – Introducing Append Blob, File Storage Asynchronous Copying, File Storage Share SAS, Table Storage data exporting to CSV and more


We are pleased to announce that AzCopy 3.2.0 and AzCopy 4.2.0-preview are now released! These two releases introduce the following new features:

Append Blob

Append Blob is a new Microsoft Azure Storage blob type which is optimized for fast append operations, making it ideal for scenarios where data must be added to an existing blob without modifying the existing contents of that blob (e.g. logging, auditing). For more details, please go to Introducing Azure Storage Append Blob.

Both AzCopy 3.2.0 and 4.2.0-preview will include the support for Append Blob in the following scenarios:

  • Download Append Blob, same as downloading a block or page blob
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer /Dest:C:\myfolder /SourceKey:key /Pattern:appendblob1.txt
  • Upload Append Blob, add option /BlobType:Append to specify the blob type
AzCopy /Source:C:\myfolder /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:key /Pattern:appendblob1.txt /BlobType:Append
  • Copy Append Blob, there is no need to specify the /BlobType
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer1 /Dest:https://myaccount.blob.core.windows.net/mycontainer2 /SourceKey:key /DestKey:key /Pattern:appendblob1.txt

Note that when uploading or copying append blobs whose names already exist in the destination, AzCopy will prompt with an “overwrite or skip” message. Trying to overwrite a blob with the same name but a mismatched blob type will fail. For example, AzCopy will report a failure when overwriting a Block Blob with an Append Blob.

AzCopy does not include support for appending data to an existing append blob, and if you are using an older version of AzCopy, the download and copy operations will fail with the following error message when the source container includes an Append Blob.

Error parsing the source location “[the source URL specified in the command line]”: The remote server returned an error: (409) Conflict. The type of a blob in the container is unrecognized by this version.

 

File Storage Asynchronous Copy (4.2.0 only)

The Azure Storage File Service adds several new features with Storage Service REST version 2015-02-21; please find more details at Azure Storage File Preview Update.

In the previous version, AzCopy 4.1.0, we introduced synchronous copy for Blob and File; now AzCopy 4.2.0-preview includes support for the following File Storage asynchronous copy scenarios.

Unlike synchronous copy, which simulates the copy by downloading the blobs from the source storage endpoint to local memory and then uploading them to the destination storage endpoint, File Storage asynchronous copy is a server-side copy which runs in the background. You can get the copy status programmatically; please find more details at Server Side Copy File.

  • Asynchronous copying from File Storage to File Storage
AzCopy /Source:https://myaccount1.file.core.windows.net/myfileshare1/ /Dest:https://myaccount2.file.core.windows.net/myfileshare2/ /SourceKey:key1 /DestKey:key2 /S
  • Asynchronous copying from File Storage to Block Blob
AzCopy /Source:https://myaccount1.file.core.windows.net/myfileshare/ /Dest:https://myaccount2.blob.core.windows.net/mycontainer/ /SourceKey:key1 /DestKey:key2 /S
  • Asynchronous copying from Block/Page Blob Storage to File Storage
AzCopy /Source:https://myaccount1.blob.core.windows.net/mycontainer/ /Dest:https://myaccount2.file.core.windows.net/myfileshare/ /SourceKey:key1 /DestKey:key2 /S

Note that asynchronous copying from File Storage to Page Blob is not supported.

 

File Storage Share SAS (Preview version 4.2.0 only)

Besides File asynchronous copy, another new File Storage feature, ‘File Share SAS’, is supported in AzCopy 4.2.0-preview as well.

Now you can use option /SourceSAS and /DestSAS to authenticate the file transfer request.

AzCopy /Source:https://myaccount1.file.core.windows.net/myfileshare1/ /Dest:https://myaccount2.file.core.windows.net/myfileshare2/ /SourceSAS:SAS1 /DestSAS:SAS2 /S

For more details about File Storage share SAS, please visit Azure Storage File Preview Update.

 

Export Table Storage entities to CSV (Preview version 4.2.0 only)

AzCopy has allowed end users to export Table entities to local files in JSON format since the 4.0.0 preview version; now you can specify the new option /PayloadFormat:<JSON | CSV> to export data to CSV files. Without specifying this new option, AzCopy will export Table entities to JSON files.

AzCopy /Source:https://myaccount.table.core.windows.net/myTable/ /Dest:C:\myfolder\ /SourceKey:key /PayloadFormat:CSV

Besides the data files with the .csv extension that will be found in the location specified by the parameter /Dest, AzCopy will generate a schema file with the extension .schema.csv for each data file.

Note that AzCopy does not include support for importing CSV data files; you can use the JSON format to export/import as you did in previous versions of AzCopy.

 

Specify the manifest file name when exporting Table entities (Preview version 4.2.0 only)

AzCopy requires end users to specify the option /Manifest when importing table entities. In previous versions, the manifest file name was decided by AzCopy during the export and looked like “myaccount_mytable_timestamp.manifest”, so users needed to find the name in the destination folder before writing the import command line.

Now you can specify the manifest file name during the export via the option /Manifest, which should bring more flexibility and convenience to your import scenarios.

AzCopy /Source:https://myaccount.table.core.windows.net/myTable/ /Dest:C:\myfolder\ /SourceKey:key /Manifest:abc.manifest

 

Enable FIPS compliant MD5 algorithm

By default, AzCopy uses the .NET MD5 implementation to calculate the MD5 when copying objects. Now we include support for a FIPS-compliant MD5 setting to fulfill the security requirements of some scenarios.

You can create an app.config file named “AzCopy.exe.config” with the property “AzureStorageUseV1MD5” and place it alongside AzCopy.exe.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="AzureStorageUseV1MD5" value="false"/>
  </appSettings>
</configuration>

For property “AzureStorageUseV1MD5”

  • true – The default value; AzCopy will use the .NET MD5 implementation.
  • false – AzCopy will use the FIPS-compliant MD5 algorithm.

Note that FIPS-compliant algorithms are disabled by default on your Windows machine. You can type secpol.msc in your Run window and check this switch at “Security Settings -> Local Policies -> Security Options -> System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing”.

 

Reference

Azure Storage File Preview Update

Microsoft Azure Storage Release –Append Blob, New Azure File Service Features and Client Side Encryption General Availability

Introducing Azure Storage Append Blob

Enable FISMA MD5 setting via Microsoft Azure Storage Client Library for .NET

Getting Started with the AzCopy Command-Line Utility

As always, we look forward to your feedback.

Microsoft Azure Storage Team

Issue in Azure Storage Client Library 5.0.0 and 5.0.1 preview in AppendBlob functionality


An issue in the Azure Storage Client Library 5.0.0 for .NET and in the Azure Storage Client Library 5.0.1 preview for .NET was recently discovered. It impacts the Windows desktop and phone targets. The details of the issue are as follows:

When CloudAppendBlob.AppendTextAsync(), the method to append a string of text to an append blob asynchronously, is invoked with either only the content parameter specified or only the content and CancellationToken parameters specified, the call will overwrite the blob content instead of appending to it. Other synchronous and asynchronous invocations that append a string of text to an append blob (CloudAppendBlob.AppendText(), CloudAppendBlob.AppendTextAsync()) do not manifest the issue.

The Azure Storage team has hotfixes available for both releases. The hotfixes have updated versions 5.0.2 and 5.0.3-preview respectively. If you had installed either the Azure Storage Client Library 5.0.0 for .NET or the Azure Storage Client Library 5.0.1 preview for .NET, please make sure to update your references to the corresponding package. You can install these versions from:

  1. The Visual Studio NuGet Package Manager UI.
  2. The Package Manager console using the following command (the released version for instance): Install-Package WindowsAzure.Storage -Version 5.0.2
  3. The NuGet gallery web page that houses the package: here for the released version and here for the preview version.

Please note the following:

  1. The older versions will be unlisted in the Visual Studio NuGet Package Manager UI.
  2. If you attempt to launch the web page that contained the original package, you may encounter a 404 error.
  3. We recommend that you not install the older versions through the Package Manager console so that you don’t run into the issue.

Thank you for your support of Azure Storage. We look forward to your continued feedback.

Microsoft Azure Storage Team


Introducing the Azure Storage Client Library for iOS (Public Preview)


We are excited to announce the public preview of the Azure Storage Client Library for iOS!

Having a client library for iOS is essential to providing a complete mobile story for developers. With this release, developers can now take advantage of Azure Storage on all major mobile platforms: Windows Phone, iOS, Android, and Xamarin.

Currently, this library supports iOS 9, iOS 8 and iOS 7 and can be used with both Objective-C and Swift. This library also supports the latest Azure Storage service version 2015-02-21.

With this being the first release, we want to make sure we’re taking advantage of the wealth of knowledge provided by the iOS developer community. For this reason, we’ll be releasing block blob support first, with the goal of soliciting feedback and better understanding additional scenarios you would like to see supported.

Please check out How to use Blob Storage from iOS to get started. You can also download the sample app to quickly see the use of Azure Storage in an iOS application.

As always, if you have any feature requests please let us know by submitting your ideas to Azure Storage Feedback.

We’d also like to give a special thanks to all those who joined our preview program and contributed their ideas and suggestions.

Thanks!

Azure Storage Team

(Cross-Post) Introducing Azure Storage Data Movement Library Preview


Since AzCopy was first released, a large number of customers have requested programmatic access to AzCopy. We are pleased to announce a new open-sourced Azure Storage data movement library for .NET (DML for short). This library is based on the core data movement framework that powers AzCopy. The library is designed for high-performance, reliable and easy Azure Storage data transfer operations enabling scenarios such as:
•    Uploading, downloading and copying data between Microsoft Azure Blob and File Storage
•    Migrating data from other cloud providers such as AWS S3 to Azure Blob Storage
•    Backing up Azure Storage data

Here is a sample demonstrating how to upload a blob; you can find more samples on GitHub.

using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
// Include the New Azure Storage Data Movement Library
using Microsoft.WindowsAzure.Storage.DataMovement;
 
// Setup the storage context and prepare the object you need to upload
string storageConnectionString = "myStorageConnectionString";
CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer blobContainer = blobClient.GetContainerReference("mycontainer");
blobContainer.CreateIfNotExists();
string sourcePath = "path\\to\\test.txt";
CloudBlockBlob destBlob = blobContainer.GetBlockBlobReference("myblob");
 
// Use the interfaces from the new Azure Storage Data Movement Library to upload the blob
// Setup the number of the concurrent operations
TransferManager.Configurations.ParallelOperations = 64;
 
// Setup the transfer context and track the upload progress
TransferContext context = new TransferContext();
context.ProgressHandler = new Progress<TransferProgress>((progress) =>
{
    Console.WriteLine("Bytes uploaded: {0}", progress.BytesTransferred);
});
 
// Upload a local blob
var task = TransferManager.UploadAsync(
    sourcePath, destBlob, null, context, CancellationToken.None);
task.Wait();

The Azure Storage Data Movement Library has the same performance as AzCopy and exposes the core functionality of AzCopy. You can install the first preview of the library from NuGet or download the source code from GitHub. The initial version (0.1.0) of this library includes the following capabilities (a rough download sketch using a few of them follows the list):
•    Support data transfer for Azure Storage abstraction: Blob
•    Support data transfer for Azure Storage abstraction: File
•    Download / Upload / Copy single object
•    Control the number of concurrent operations
•    Synchronous and asynchronous copying
•    Define the suffix of the user agent
•    Set the content type
•    Set the Access Condition to conditionally copy objects, for example to copy only objects changed since a certain date
•    Validate content MD5
•    Download specific blob snapshot
•    Track transfer progress: bytes transferred, number of success/fail/skip files
•    Recover (Set/Get transfer checkpoint)
•    Transfer Error handling (transfer exception and error code)
•    Client-Side Logging
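
As a rough illustration of a few of these capabilities, a download with progress tracking and checkpoint-based resume might look like the sketch below. This is not from the original post; it reuses the account, container, and namespaces from the upload sample above, and the TransferContext checkpoint members shown in comments are assumptions based on the feature list, so verify them against the version you install.

// Download the blob uploaded above to a local file, reporting progress as it goes.
CloudBlockBlob sourceBlob = blobContainer.GetBlockBlobReference("myblob");
TransferContext downloadContext = new TransferContext();
downloadContext.ProgressHandler = new Progress<TransferProgress>((progress) =>
{
    Console.WriteLine("Bytes downloaded: {0}", progress.BytesTransferred);
});

var downloadTask = TransferManager.DownloadAsync(
    sourceBlob, "path\\to\\downloaded.txt", null /* options */, downloadContext, CancellationToken.None);

// If the transfer is interrupted, the last checkpoint (assumed members, per the
// "Recover (Set/Get transfer checkpoint)" item above) could be used to resume later:
// TransferCheckpoint checkpoint = downloadContext.LastCheckpoint;
// TransferContext resumeContext = new TransferContext(checkpoint);

downloadTask.Wait();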

DML is an open source project, and we welcome contributions from the community. In particular, we are interested in extensions to our samples to help make them more robust. Together with the release of version 0.1.0, we have created the following samples; for more details, please visit the GitHub Readme.md.

•    Upload/Download/Copy an Azure Storage Blob
•    Migrate data from AWS S3 to Azure Blob Storage

Next Steps
We will continue to invest in both AzCopy and the Data Movement Library. In the next releases of the Data Movement Library, we will add support for more advanced features, including:
•    Download / Upload / Copy directory (Local file directory, blob virtual directory, File share directory)
•    Transfer directory in recursive mode or flat mode
•    Specify the file pattern when copying files and directories
•    Download Snapshots under directories

As always, we look forward to your feedback.

Microsoft Azure Storage Team


Client-Side Encryption in Java Client Library for Microsoft Azure Storage – Preview


We are excited to announce preview availability of the client side encryption feature in the Azure Storage Java Client Library. This preview enables you to encrypt and decrypt your data inside client applications before uploading to and after downloading from Azure Storage. The feature is available for Blobs, Queues and Tables. We also support integration with Azure Key Vault in order to let you store and manage your keys. We recently made Client-side encryption generally available in the Storage .Net library and now we are happy to provide the same capability in the Java client library as a preview.

Why use client-side encryption?

Client-side encryption is helpful in scenarios where customers want to encrypt data at the source, such as encrypting surveillance data from cameras before uploading it to Storage. In this scenario, the user controls the keys, and the Azure Storage service never sees the keys used for cryptographic operations. Because the library is open source and available on GitHub, you can also inspect exactly how it encrypts your data to ensure that it meets your standards.

Benefits of the Java Client Library

We wanted to provide a library that would accomplish the following:

  • Implement Security Best Practices.  This library has been reviewed for security so that you can use it with confidence. Encrypted data is not decipherable even if the storage account keys are compromised. Additionally, we’ve made it simple and straightforward for users to rotate keys themselves; multiple keys are supported during the key rotation timeframe.
  • Interoperability across languages and platforms.  Many users use more than one of our client libraries. Given our goal to use the same technical design across implementations, data encrypted using the .NET library can be decrypted using the Java library and vice versa.  Support for other languages is planned for the future. Similarly, we support cross platform encryption. For instance, data encrypted in the Windows platform can be decrypted in Linux and vice versa.
  • Design for Performance.  We’ve designed the library for both throughput and memory footprint. We have used a technique where there is a fixed overhead so that your encrypted data will have a predictable size based on the original size.
  • Self-contained encryption – Every blob, table entity, or queue message has all encryption metadata stored in either the object or its metadata.  There is no need to get any additional data from anywhere else, except for the key you used.
  • Full blob uploads / full and range blob downloads: Uploading blobs in their entirety (for example, documents, photos, and videos) is supported. Some files, such as MP3s, are downloaded in ranges depending on the part to be played; to support this, range downloads are allowed and are handled entirely by the SDK.

How to use it?

Using client-side encryption is easy. The client library will internally take care of encrypting data on the client when uploading to Azure Storage, and automatically decrypts it when data is retrieved. All you need to do is specify the appropriate encryption policy and pass it to data upload/download APIs.

// Create the IKey used for encryption
RsaKey key = new RsaKey("private:key1" /* key identifier */);
 
// Create the encryption policy to be used for upload and download.
BlobEncryptionPolicy policy = new BlobEncryptionPolicy(key, null);
 
// Set the encryption policy on the request options.
BlobRequestOptions options = new BlobRequestOptions();
options.setEncryptionPolicy(policy);
 
// Upload the encrypted contents to the blob.
blob.upload(stream, size, null, options, null);
 
// Download and decrypt the encrypted contents from the blob.
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
blob.download(outputStream, null, options, null);

You can find more details and code samples in the Getting Started with Client-Side Encryption for Microsoft Azure Storage article.

Key considerations

  • This is a preview!  It should not be used for production data. Schema-impacting changes can be made, and data written with the first preview may not be readable in the GA version.
  • With client-side encryption, we support full uploads and full/range downloads only. As such, if you perform operations that update parts of an encrypted blob after you have written it, you may end up making it unreadable.
  • Avoid performing a SetMetadata operation on the encrypted blob or specifying metadata while creating a snapshot of an encrypted blob as this may render the blob unreadable. If you must update, then be sure to call the downloadAttributes method first to get the current encryption metadata, and avoid concurrent writes while metadata is being set.

We look forward to your feedback on design, ease of use and any additional scenarios you would like to tell us about.  This will enable us to deliver a great GA release of the library. While some requests for additional functionality may not be reflected in the first release, these will be strongly considered for the future.

Thank you.

Dinesh Murthy
Emily Gerner
Microsoft Azure Storage Team

Microsoft Azure Storage Service Version Removal Update: Extension to 2016


Summary

The Storage Service uses versioning to govern what operations are available, how a given request will be processed and what will be returned. In 2014, we announced that specific versions of the Microsoft Azure Storage Service would be removed on December 9th, 2015. Based on your feedback, we are now making the following changes with the details in the table below.

  1. We will delay the removal date for some REST API versions and the impacted client libraries. This includes all REST API versions 2009-07-17 and earlier. The effective date for this service removal is August 1st, 2016.
  2. We will indefinitely postpone the removal of versions 2011-08-18 and 2009-09-19, effective immediately. We intend to remove these versions at some point in the future, but not within the coming 12 months. The exact removal date will be communicated on this blog, with 12 months’ notice provided.
  3. We will begin using service version 2015-04-05 for SAS-authenticated and anonymous requests that do not include a specific version. However, we will begin rejecting any unversioned SharedKey/SharedKeyLite authenticated requests. The effective date for this change is August 1st, 2016.
  4. Finally, there is no change to support level and availability of versions 2012-02-12 and beyond.

Endpoint | Action | Effective
2008 (undocumented, but used for processing unversioned requests) | Removal | Aug 1, 2016
Version 2009-04-14 | Removal | Aug 1, 2016
Version 2009-07-17 | Removal | Aug 1, 2016
Version 2009-09-19 (.Net client library v1.5.1 uses this) | Postponed | N/A
Version 2011-08-18 (.Net client library v1.7 uses this) | Postponed | N/A
Versions 2012-02-12, 2013-08-15, 2014-02-14, 2015-02-21, 2015-04-05 | No change | N/A

Please plan and implement your application upgrades soon so you are not impacted when service versions are removed. Additionally, we encourage you to regularly update to the latest service version and client libraries so you get the benefit of the latest features. To understand the details of how this will impact you and what you need to do, please read on.

How will these changes manifest?

Explicitly Versioned Requests

Any request that is explicitly versioned to one of the removed versions, either through the HTTP x-ms-version request header or, for SAS requests, through the api-version parameter, will fail with an HTTP 400 (Bad Request) status code, just like a request made with an invalid version header.

SharedKey/SharedKeyLite Requests with no explicit version

For requests that were signed using the account’s shared key, if no explicit version is specified using HTTP x-ms-version, the request was previously processed with the undocumented 2008 version. Going forward, processing will fail with HTTP 400 (Bad Request) if the version is not explicitly specified.

SAS Requests with no “sv” parameter and no “x-ms-version”

Prior to version 2012-02-12, a SAS request did not specify a version in the “sv” parameter of the SAS token. The SAS token parameters of these requests were interpreted using the rules of the 2009-07-17 REST processing version. These requests will still work, but they will now be processed with the 2015-04-05 version. We advise you to ensure that you either send “x-ms-version” with a non-removed version or set a default version on your account.

Anonymous Requests with no explicit version

For any anonymous requests (with no authentication) with no version specified, the service assumes that the request is version agnostic. Effective August 1st 2016, anonymous requests will be processed with version 2015-04-05. The version used for anonymous requests may change again in the future.

Note that we make no guarantees about whether or not there will be breaking changes when unversioned requests are processed with a new service version. Instances of these requests include browser-initiated HTTP requests and HTTP requests without the service version specified that are made from applications not using Storage client libraries. If your application is unable to send an x-ms-version for anonymous requests (for example, from a browser), then you can set a default REST version for your account through Set Blob Service Properties, for the Blob service for instance.

Default Service Version

If Set Blob Service Properties (REST API) has been used to set the default version of requests to version 2009-09-19 or higher, the version set will be used. If default service version was set to a version that is now removed, that request is considered to be explicitly versioned, and will fail with “400 Bad Request”. If default service version was set to a version that is still supported, that version will continue to be used.
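
For reference, setting the default service version for the Blob service with the .Net client library might look like the following minimal sketch (not part of the original post); the connection string is a placeholder, and the version value is just an example of a currently supported version.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

// Read the current Blob service properties, set a supported default version, and write them back.
CloudStorageAccount account = CloudStorageAccount.Parse("myStorageConnectionString");
CloudBlobClient blobClient = account.CreateCloudBlobClient();

ServiceProperties properties = blobClient.GetServiceProperties();
properties.DefaultServiceVersion = "2015-04-05";   // example: any supported version
blobClient.SetServiceProperties(properties);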

Client Libraries

The latest versions of all of our client libraries and tools will not be affected by this announcement. However, the .Net client library v1.5.1 uses version 2009-09-19 and will be impacted when that version is eventually removed. If you are still using this library, please update to the latest .Net client library before the version is removed. For a list of .Net client libraries using the various REST endpoints, please visit https://msdn.microsoft.com/en-us/library/azure/dn744252.aspx. If you are using non-.Net libraries, then you should not be impacted. For more information, please look at the Minimum Supported Versions/Libraries/SDK’s section in this article.

Azure CloudDrive

If you are using Azure CloudDrive, then you are not impacted by this announcement since it uses REST Version 2009-09-19. We will have an announcement in the near future on CloudDrive migration.

What should I do?

To ensure that your application continues to work properly after removal of older versions, you should do the following things.

Check your application to find what versions it is using

The first thing to do is to determine what REST versions your application is using. If your application is under your control and you are aware of all components that call Azure Storage, then you can verify this by checking the components against the above list, or by inspecting your code if you have written your own code to make calls to storage.

As a stronger check, or if you are unsure which versions of the components have been deployed, you can enable logging, which will log the requests being made to your storage account. The logs have the request version used included, which can be used to find if any requests are being made using versions with planned removal.

Here is a sample log entry; in this case the request was an anonymous, unversioned GetBlob request that implicitly used the 2009-09-19 version (shown in the request-version field of the entry):

1.0;2011-08-09T18:52:40.9241789Z;GetBlob;AnonymousSuccess;200;18;10;anonymous;;myaccount;blob;"https://myaccount.blob.core.windows.net/thumbnails/lake.jpg?timeout=30000";"/myaccount/thumbnails/lake.jpg";a84aa705-8a85-48c5-b064-b43bd22979c3;0;123.100.2.10;2009-09-19;252;0;265;100;0;;;"0x8CE1B6EA95033D5";Friday, 09-Aug-11 18:52:40 GMT;;;;"8/9/2011 6:52:40 PM ba98eb12-700b-4d53-9230-33a3330571fc"

Similar to the above, you can look at log entries to identify any references to service versions that are being removed.
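
If you download the $logs blobs and want to scan the entries programmatically, a simple sketch like the following can flag requests that used a version slated for removal. This is not from the original post: the position of the request-version field is an assumption based on the sample entry above (the seventeenth semicolon-delimited field), so adjust it to match the analytics log format you actually have.

using System;
using System.IO;
using System.Linq;

// Versions scheduled for removal on August 1st, 2016.
string[] removedVersions = { "2009-04-14", "2009-07-17" };

foreach (string line in File.ReadLines("downloaded-analytics.log"))
{
    // Assumption: the request version is the seventeenth ';'-delimited field (index 16),
    // as in the sample entry above; quoted fields containing ';' would require a real parser.
    string[] fields = line.Split(';');
    if (fields.Length > 16 && removedVersions.Contains(fields[16]))
    {
        Console.WriteLine("Request made with a removed version: {0}", line);
    }
}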

What to change

If you find any log entries which show that a version to be removed is being used, you will need to find that component and either validate that it will continue to work (unversioned requests may continue to work as their implicit version will simply increase – see above), or take appropriate steps to change the version being used. Most commonly, one of the following two steps will be used:

  1. Change the version specified in the request. If you are using client libraries, you can accomplish this by migrating to a later version of the libraries/tools. When possible, migrate to the latest version to get the most improvements and fixes.
  2. Set the default service version to one of the supported versions now so that the behavior can be verified prior to removal. This only applies to anonymous requests with no explicit version.

When migrating your applications to newer versions, you should review the change lists linked above for each service version and test thoroughly to ensure that your application works properly after you’ve updated it. Please note that service version updates have included both syntactic breaks (the request receives a response that either fails or is formed very differently) and semantic breaks (the request receives a similar response that means something different).

Post migration validation

After migration, you should validate in the logs that you do not find any of the earlier versions being used. Make sure to check the logs over long enough durations of time to be sure that there are no tasks/workloads running rarely that would still use the older versions (scheduled tasks that run once per day, for example).

Conclusion

It is recommended that users begin their application upgrades now in order to avoid being impacted when the earlier service versions are removed on August 1st, 2016. Additionally, it is considered a best practice to explicitly version all requests made to the storage service. See MSDN for a discussion of versioning in Azure Storage and best practices.

Thank you.

Dinesh Murthy
Principal Program Manager
Microsoft Azure Storage

(Cross-Post) SAS Update: Account SAS Now Supports All Storage Services


Shared Access Signatures (SAS) enable customers to delegate access rights to data within their storage accounts without having to share their storage account keys. In late 2015 we announced a new type of SAS token called Account SAS that provided support for the Blob and File Services. Today we are pleased to announce that Account SAS is also supported for the Azure Storage Table and Queue services. These capabilities are available with Version 2015-04-05 of the Azure Storage Service.

Account SAS delegates access to resources in one or more of the storage services providing parity with the Storage account keys. This enables you to delegate access rights for creating and modifying blob containers, tables, queues, and file shares, as well as providing access to meta-data operations such as Get/Set Service Properties and Get Service Stats. For security reasons Account SAS does not enable access to permission related operations including “Set Container ACL”, “Set Table ACL”, “Set Queue ACL”, and “Set Share ACL”.

The code snippet below creates a new access policy used to issue a new Account SAS token for the Blob and Table services, including read, write, list, create, and delete permissions. The Account SAS token is configured to expire 24 hours from now.

SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy()
{
    Permissions = SharedAccessAccountPermissions.Read |
                  SharedAccessAccountPermissions.Write |
                  SharedAccessAccountPermissions.List |
                  SharedAccessAccountPermissions.Create |
                  SharedAccessAccountPermissions.Delete,

    Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.Table,

    ResourceTypes = SharedAccessAccountResourceTypes.Container | SharedAccessAccountResourceTypes.Object,

    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),

    Protocols = SharedAccessProtocol.HttpsOrHttp
};

// Create a storage account SAS token by using the above shared access account policy.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("YOUR CONNECTION STRING");
string sasToken = storageAccount.GetSharedAccessSignature(policy);
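
As a hypothetical follow-on (not part of the original post), the token can then be used in place of the account key; this assumes the Microsoft.WindowsAzure.Storage.Auth and Microsoft.WindowsAzure.Storage.Table namespaces, and the table name is a placeholder.

// Build credentials from the Account SAS token and use them instead of the account key.
StorageCredentials sasCredentials = new StorageCredentials(sasToken);
CloudStorageAccount sasAccount = new CloudStorageAccount(
    sasCredentials,
    storageAccount.BlobEndpoint,
    storageAccount.QueueEndpoint,
    storageAccount.TableEndpoint,
    storageAccount.FileEndpoint);

// The policy above grants Create at the Container resource type for the Table service,
// so creating a table is permitted; "mytable" is a placeholder name.
CloudTableClient tableClient = sasAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("mytable");
table.CreateIfNotExists();
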
Please read the following resources for more details:

We recommend using SAS tokens to delegate access to storage users rather than sharing storage account keys. As always, please let us know if you have any further questions via comments on this post.

Thanks!

Perry Skountrianos
Azure Storage Team

(Cross-Post) Announcing Azure Storage Data Movement Library 0.2.0


In the previous announcement post for DMLib 0.1.0, we committed that the newest release of the Data Movement Library would support more advanced features. Great news, those are now available and include the following:

  • Download, upload, and copy directories (local file directories, Azure Blob virtual directories, Azure File directories)
  • Transfer directories in recursive mode
  • Transfer directories in flat mode (local file directories)
  • Specify the search pattern when copying files and directories
  • Provide Event to get single file transfer result in a transfer
  • Download Snapshots under directories
  • Changed TransferConfigurations.UserAgentSuffix to TransferConfigurations.UserAgentPrefix

With these new features, you can perform data movement at the Blob container and Blob virtual directory level, or the File share and File directory level.
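
As a rough sketch of a directory transfer (not from the original post): the UploadDirectoryAsync method and the UploadDirectoryOptions property names below are assumptions based on the feature list above, and the container reference is assumed to exist, so verify the exact API against the released package.

// Recursively upload only *.log files from a local directory to a blob virtual directory.
UploadDirectoryOptions options = new UploadDirectoryOptions()
{
    Recursive = true,          // assumption: transfer the directory in recursive mode
    SearchPattern = "*.log"    // assumption: only files matching the pattern are transferred
};

TransferContext context = new TransferContext();
CloudBlobDirectory destDirectory = blobContainer.GetDirectoryReference("logs");

var task = TransferManager.UploadDirectoryAsync(
    @"C:\data\logs", destDirectory, options, context, CancellationToken.None);
task.Wait();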

We are actively adding more code samples to the GitHub repository, and any community contributions to these code samples are highly appreciated.

You can install the Azure Storage Data Movement Library from NuGet or download the source code from GitHub. For more details, please read the Getting Started documentation.

As always, we look forward to your feedback, so please don’t hesitate to utilize the comments section below.

Thanks!

Azure Storage Team


(Cross-Post) Build 2016: Azure Storage announcements


It’s time for Build 2016, and the Azure Storage team has several exciting announcements to make. This blog post provides an overview of new announcements and updates on existing programs. We hope that these new features and updates will enable you to make better use of Azure Storage for your services, applications and other needs.

Preview Program Announcements

Storage Service Encryption Preview

Storage Service Encryption helps you address organizational security and compliance requirements by automatically encrypting data in Blob Storage, including block blobs, page blobs, and append blobs. Azure Storage handles all the encryption, decryption, and key management in a transparent fashion using AES 256-bit encryption, one of the strongest encryption ciphers available. There is no additional charge for enabling this feature.

Access to the preview program can be requested by registering your subscription using Azure Portal or Azure PowerShell. Once your subscription has been approved, you can create a new storage account using the Azure Portal, and enable the feature.

To learn more about this feature, please see Getting started with Storage Service Encryption.

Near Term Roadmap Announcements

GetPageRanges API for copying incremental snapshots

The Azure Storage team will soon be adding a new feature to the GetPageRanges API for page blobs, which will allow you to build faster and more efficient backup solutions for Azure virtual machines. The API will return the list of changes between the base blob and its snapshots, allowing you to identify and copy only the changes unique to each snapshot. This will significantly reduce the amount of data you need to transfer during incremental backups of the virtual machine disks. The API will support page blobs on premium storage as well as standard storage. The feature will be available in April 2016 via the REST API and the .NET client library, with more client libraries support to follow.

Azure Import/Export

Azure Import/Export now supports up to 8 TB hard drives in all regions where the service is offered. In addition, Azure Import/Export will be coming to Japan and Australia in summer 2016. With this launch, customers who have storage accounts in Japan or Australia can ship disks to a domestic address within the region rather than shipping to other regions.

Azure Backup support for Azure Premium Storage

Azure Premium Storage is ideal for running IO intensive applications on Azure VMs. Azure Backup Service delivers a powerful and affordable cloud backup solution, and will be adding support for Azure Premium Storage. You can protect your critical applications running on Premium Storage VMs with the help of Azure Backup service.

Learn more about Azure Backup and Premium Storage.

Client Library and Tooling Updates

Java Client-Side Encryption GA

We are pleased to announce the general availability of the client-side encryption feature in our Azure Storage client Java library. This allows developers to encrypt blob, table, and queue data before sending it to Azure Storage. Additionally, integration with Azure Key Vault is supported so you can store and manage your keys in Azure Key Vault. With this release, data that is encrypted with .Net in Windows can be decrypted with Java in Linux and vice versa.

To learn more, please visit our getting started documentation.

Storage Node.js Preview Update

We are pleased to announce the latest preview (0.10) of the Azure Storage Node.js client library. This includes a rich developer experience, full support for the Account SAS capability, IP ACL and protocol specifications for Service SAS, and fixes that address customer usability feedback. You can start using the Node.js preview of the Azure Storage library in your applications now by leveraging the storage package on npmjs.

To learn more and get access to the source code, please visit our GitHub repo.

Storage Python Preview Update

We are pleased to announce the latest preview (0.30) of the Azure Storage Python client library. This version includes all features of the 2015-04-05 REST version, including support for append blobs, Azure File storage, account SAS, JSON table formatting, and much more.

To learn more, please visit our getting started documentation and review our latest documentation, upgrade guide, usage samples and breaking changes log.

Azure Storage Explorer

We are happy to announce the latest public preview of the Azure Storage Explorer. This release adds support for Table Storage including exporting to a CSV file, Queue Storage, AccountSAS and an updated UI experience.

For more information and to download the explorer for the Windows/Linux/Mac platforms, please visit www.storageexplorer.com.

Documentation and Samples Updates

Storage Security Guide

Azure Storage provides a comprehensive set of security capabilities which enable developers to build secure applications. You can secure the management of your storage account, encrypt the storage objects in transit, encrypt the data stored in the storage account and much more. The Azure Storage Security Guide provides an overview of these security features and pointers to resources providing deeper knowledge.

To learn more, see the Storage Security Guide.

Storage Samples

The Azure Storage team continues to strive towards improving the end-user experience for developers. We have recently developed a standardized set of samples that are easy to discover and enable you to get started in just 5 minutes. The samples are well documented, fully functional, community-friendly, and can be accessed from a centralized landing page that allows you to find the samples you need, for the platform you use. The code is open source and readily usable from GitHub, making it possible for the community to contribute to the samples repository.

To get started with the samples, please visit our storage samples landing page.

 

Finally, if you are new to Azure Storage, please check out the Azure Storage documentation page. It’s the quickest way to learn and start using Azure Storage.

Thanks
Azure Storage Team

(Cross Post) Announcing the preview of Azure Storage Service Encryption for data at rest


We are excited to announce the preview of Azure Storage Service Encryption for data at rest. This capability is one of the features most requested by enterprise customers looking to protect sensitive data as part of their regulatory or compliance needs.

Storage Service Encryption automatically encrypts your Azure Blob storage data prior to persisting to storage, and decrypts prior to retrieval. The encryption, decryption and key management is transparent to users, requires no change to your applications, and frees your engineering team from having to implement complex key management processes.

This capability is supported for all Azure Blob storage blob types (block blobs, append blobs, and page blobs) and is enabled through configuration on each storage account. This capability is available for storage accounts created through the Azure Resource Manager (ARM). All data is encrypted using 256-bit AES encryption, also known as AES-256, one of the strongest block ciphers available. Customers can enable this feature on all available redundancy types of Azure Storage – LRS, ZRS, GRS and RA-GRS. Storage Service Encryption is also supported for both Standard and Premium Storage. There is no additional charge for enabling this feature.

As with most previews, this should not be used for production workloads until the feature becomes generally available.

To learn more please visit Storage Service Encryption.

(Cross-Post) Microsoft Azure Storage Explorer preview: March update


(Originally posted at: https://azure.microsoft.com/en-us/blog/storage-explorer-march-update/)

Today we’re happy to announce the March update of Microsoft Azure Storage Explorer (preview), which includes support for Tables and Queues.

After our first release, we received hundreds of requests asking for Table and Queue support. Based on this feedback, we’re extremely excited to share this new version of Storage Explorer with the following features:

  • Table support
  • Queue support
  • SAS features, including SAS support for Storage Account
  • Performance improvements
  • Updated look and feel
  • Update notifications

Tables

For tables, you’ll be able to view entities inside a container as well as write queries against them. You can also easily insert common query snippets, such as the ability to filter by partition key and row key, or retrieving based on a Timestamp period.

Storage Explorer query

Once you find the entity or entities you’re looking for, you can manually edit the values of their properties or delete them. Lastly, you can export the contents of your table to a CSV file, or import existing CSV files into any table. You could also copy tables from one Storage Account to another if you’d prefer to keep the transfers server-side.

Queues

For queues, we focused on the basic features. You can peek at the most recent 32 messages. From there you can view a specific message, enqueue new messages, dequeue the top message, or clear the entire queue.

SAS features

Both tables and queues support the same SAS functionality as blob containers: you can create SAS URIs for queues and tables, and also connect to a specific queue or table by providing a SAS key.

With this release, you’ll be able to generate Shared Access Signatures for Storage Accounts. Additionally, you’ll have the ability to connect to Storage Accounts by providing a SAS URI for the Storage Account. The SAS generation and connection features are also available for Tables and Queues.

To generate a SAS URI, simply right-click on the Storage Account and select “Get Shared Access Signature…”; to attach the resource, right-click on the parent “Storage Accounts” node and select “Attach Account using SAS.”

Storage Explorer - attach with SAS

Storage Explorer - SAS dialog

Update notifications

Lastly, starting with this version of Storage Explorer you’ll receive notifications for new updates for the application. These will appear as an infobar message linking to the latest version on storageexplorer.com.

Summary

While we’re excited to finally share these features with you, our work is not done yet – we haven’t forgotten about File Shares! We’ll also continue to add features to Blob Containers, Tables, and Queues. If you have any suggestions or requests for features you’d like to see in Storage Explorer, you can send us feedback directly from the application.

Storage Explorer Feedback

Let us know what you think!

-The Storage Explorer Team

(Cross-Post) Introducing Azure Cool Blob Storage


Originally posted in Microsoft Azure Blog.

Data in the cloud is growing at an exponential pace, and we have been working on ways to help you manage the cost of storing this data. An important aspect of managing storage costs is tiering your data based on attributes such as frequency of access and retention period. A common tier of customer data is cool data, which is infrequently accessed but requires latency and performance similar to hot data.

Today, we are excited to announce the general availability of Cool Blob Storage – low cost storage for cool object data. Example use cases for cool storage include backups, media content, scientific data, compliance and archival data. In general, any data which lives for a longer period of time and is accessed less than once a month is a perfect candidate for cool storage.

With the new Blob storage accounts, you will be able to choose between Hot and Cool access tiers to store object data based on its access pattern. Capabilities of Blob storage accounts include:

  • Cost effective: You can now store your less frequently accessed data in the Cool access tier at a low storage cost (as low as $0.01 per GB in some regions), and your more frequently accessed data in the Hot access tier at a lower access cost. For more details on regional pricing, see​ Azure Storage Pricing.
  • Compatibility: We have designed Blob storage accounts to be 100% API compatible with our existing Blob storage offering which allows you to make use of the new storage accounts in existing applications seamlessly.
  • Performance: Data in both access tiers have a similar performance profile in terms of latency and throughput.
  • Availability: The Hot access tier guarantees high availability of 99.9% while the Cool access tier offers a slightly lower availability of 99%. With the RA-GRS redundancy option, we provide a higher read SLA of 99.99% for the Hot access tier and 99.9% for the Cool access tier.
  • Durability: Both access tiers provide the same high durability that you have come to expect from Azure Storage and the same data replication options that you use today.
  • Scalability and Security: Blob storage accounts provide the same scalability and security features as our existing offering.
  • Global reach: Blob storage accounts are available for use starting today in most Azure regions with additional regions coming soon. You can find the updated list of available regions on the Azure Services by Regions page.

For more details on how to start using this feature, please see our getting started documentation.

Many of you use Azure Storage via partner solutions as part of your existing data infrastructure. Here are updates from some of our partners on their support for Cool storage:

  • Commvault: Commvault’s Windows/Azure Centric “Commvault Integrated Solutions Portfolio” software solution enables a single solution for enterprise data management. Commvault’s native support for Azure has been a key benefit for customers considering a move to Azure and Commvault remains committed to continuing our integration and compatibility efforts with Microsoft, befitting a close relationship between the companies that has existed for over 17 years. With this new Cool Storage offering, Microsoft again makes significant enhancements to their Azure offering and we expect that this service will be an important driver of new opportunities for both Commvault and Microsoft.
  • Veritas: Market leader Veritas NetBackup™ protects enterprise data on a global scale in both management and performance – for any workload, on any storage device, located anywhere.  The proven global enterprise capabilities in NetBackup converge on- and off-premises data protection with scalable, cloud-ready solutions to cover any use case.  In concert with the Microsoft announcement of Azure Cool storage, Veritas is announcing beta availability of the integrated Azure Cloud Connector in NetBackup 8.0 Beta, which enables customers to test and experience the ease of use, manageability, and performance of leveraging Azure Storage as a key component of their enterprise hybrid cloud data protection strategy. Click here to go to the NetBackup 8.0 Beta registration and download website.
  • SoftNAS: SoftNAS™® will soon be supporting Azure Cool storage. SoftNAS Cloud® NAS customers will get a virtually bottomless storage pool for applications and workloads that need standard file protocols like NFS, CFS/SMB, and iSCSI. By summer 2016, customers can leverage SoftNAS Cloud NAS with Azure Cool storage as an economical alternative to increasing storage costs. SoftNAS helps customers make the cloud move without changing applications while providing enterprise-class NAS features like de-duplication, compression, directory integration, encryption, snapshotting, and much more. SoftNAS StorageCenter™ console will allow a central means to choose the optimal file storage location ranging from hot (block-backed) to cool (Blob-object backed) and enables content movement to where it makes sense over the data lifecycle.
  • Cohesity: Cohesity delivers the world’s first hyper-converged storage system for enterprise data.  Cohesity consolidates fragmented, inefficient islands of secondary storage into an infinitely expandable and limitless storage platform that can run both on-premises and in the public cloud.  Designed with the latest web-scale distributed systems technology, Cohesity radically simplifies existing backup, file shares, object, and dev/test storage silos by creating a unified, instantly-accessible storage pool.  The Cohesity platform seamlessly interoperates with Azure Cool storage for three primary use cases:  long-term data retention and archival, tiering of infrequently-accessed data into the cloud, and replication to provide disaster recovery. Azure Cool storage can be easily registered and assigned via Cohesity’s policy-based administration portal to any data protection workload running on the Cohesity platform.
  • CloudBerry Lab: CloudBerry Backup for Microsoft Azure is designed to automate data backup to Microsoft Azure cloud storage. It is capable of compressing and encrypting the data with a user-controlled password before the data leaves the computer. It then securely transfers it to the cloud either on schedule or in real time. CloudBerry Backup also comes with file-system and image-based backup, SQL Server and MS Exchange support, as well as flexible retention policies and incremental backup. CloudBerry Backup now supports Azure Blob storage accounts for storing backup data.

The list of partners integrating with cool storage will continue to grow in the coming months.

As always, we look forward to your feedback and suggestions.

Thanks,

The Azure Storage Team.

Azure Storage PowerShell v.1.7 – Hotfix to v1.4 Breaking Changes


Breaking changes were introduced in Azure PowerShell v1.4. These breaking changes are present in Azure PowerShell versions 1.4-1.6 and versions 2.0 and later. The following Azure Storage cmdlets were impacted:

  • Get-AzureRmStorageAccountKey – Accessing Keys.
  • New-AzureRmStorageAccountKey – Accessing Keys.
  • New-AzureRmStorageAccount – Specifying Account Type and Endpoints.
  • Get-AzureRmStorageAccount – Specifying Account Type and Endpoints.
  • Set-AzureRmStorageAccount – Specifying Account Type and Endpoints.

To minimize impact to cmdlets, we are releasing Azure PowerShell v1.7 – a hotfix that addresses all of the breaking changes with the exception of specifying the Endpoint properties for New-AzureRmStorageAccount, Get-AzureRmStorageAccount, and Set-AzureRmStorageAccount. This means no code change will be required by customers where the hotfix is applicable. This hotfix will not be present in Azure PowerShell versions 2.0 and later. Please plan to update the above cmdlets when you update to Azure PowerShell v2.0.

Below, you’ll find examples for how the above cmdlets work for different versions of Azure PowerShell and the action required:

Accessing Keys with Get-AzureRmStorageAccountKey and New-AzureRmStorageAccountKey

V1.3.2 and earlier:

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname).Key1

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname).Key2

$key = (New-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname -KeyName $keyname).StorageAccountKeys.Key1

$key = (New-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname -KeyName $keyname).StorageAccountKeys.Key2

V1.4-V1.6 and V2.0 and later:

The cmdlet now returns a list of keys, rather than an object with properties for each key.

# Replaces Key1
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname)[0].Value

# Replaces Key2
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname)[1].Value

# Replaces Key1
$key = (New-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname -KeyName $keyname).Keys[0].Value

# Replaces Key2
$key = (New-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname -KeyName $keyname).Keys[1].Value

V1.7 (Hotfix):

Both methods work.

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname).Key1

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname)[0].Value

$key = (New-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname -KeyName $keyname).StorageAccountKeys.Key1

$key = (New-AzureRmStorageAccountKey -ResourceGroupName $groupname -Name $accountname -KeyName $keyname).Keys[0].Value

Specifying Account Type in New-AzureRmStorageAccount, Get-AzureRmStorageAccount, and Set-AzureRmStorageAccount

V1.3.2 and earlier:

$AccountType = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).AccountType

$AccountType = (New-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).AccountType

$AccountType = (Set-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).AccountType

V1.4-V1.6 and V2.0 and later:

The AccountType field in the output of these cmdlets has been renamed to Sku.Name.

$AccountType = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).Sku.Name

$AccountType = (New-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).Sku.Name

$AccountType = (Set-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).Sku.Name

V1.7 (Hotfix):

Both methods work.

$AccountType = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).AccountType

$AccountType = (New-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).AccountType

$AccountType = (Set-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).AccountType

$AccountType = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).Sku.Name

$AccountType = (New-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).Sku.Name

$AccountType = (Set-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).Sku.Name

Specifying Endpoints in New-AzureRmStorageAccount, Get-AzureRmStorageAccount, and Set-AzureRmStorageAccount

V1.3.2 and earlier:

$blobEndpoint = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob.AbsolutePath

$blobEndpoint = (New-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob.AbsolutePath

$blobEndpoint = (Set-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob.AbsolutePath

V1.4-V1.6 and V2.0 and later:

The output type of the PrimaryEndpoints/SecondaryEndpoints Blob/Table/Queue/File properties changed from Uri to String.

$blobEndpoint = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob

$blobEndpoint = (New-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob

$blobEndpoint = (Set-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob

Note: The ToString() method for these cmdlets will continue to work. For example:

$blobEndpoint = (Get-AzureRmStorageAccount -ResourceGroupName $groupname -Name $accountname).PrimaryEndpoints.Blob.ToString()

V1.7 (Hotfix):

No hotfix was provided for this breaking change. The return value’s endpoints will have to continue to be string, as changing these back to Uri would introduce an additional break.

Next steps

Thanks,

Microsoft Azure Storage Team
