Erasure coding for geographically distributed data protection

Release 8.0 of HCP introduces erasure coding as an alternative method for geographically distributed data protection. Until now, the only supported method for geo-protection has been the replication of whole objects to one or more other HCP systems in a replication topology.

With erasure-coded protection, the data for each object in a replicated namespace is encoded and broken into multiple chunks. An additional chunk contains parity for the data chunks. The data and parity chunks are distributed across the systems in an erasure coding topology such that, for any given object, each system stores one chunk. An erasure-coded object can be read from any system in the erasure coding topology. If one system becomes unavailable, the distribution of data and parity chunks ensures that the object can still be read from any available system.

You can assign different protection methods to different namespaces. The main tradeoff is between the increased storage efficiency that comes with erasure-coded protection and the ability of whole-object protection to protect against concurrent system failures.

New for the HS3 API in HCP release 8.0, you can use multipart uploads to store large objects, as large as five TB. Multipart upload is the process of creating an object by breaking the object data into parts and uploading the parts to HCP individually. The result of a multipart upload is a single object that behaves the same as an object whose data was stored by means of a single PUT object request.
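The chunk-plus-parity scheme described above can be illustrated with a minimal sketch. HCP's actual erasure coding is internal to the product and not published here; the code below only demonstrates the principle with a single XOR parity chunk, which is enough to rebuild any one missing chunk (the function names `encode` and `reconstruct` are illustrative, not HCP APIs):

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, n_data_chunks: int) -> list:
    """Split data into n_data_chunks equal chunks plus one XOR parity chunk."""
    chunk_len = -(-len(data) // n_data_chunks)          # ceiling division
    padded = data.ljust(chunk_len * n_data_chunks, b"\x00")
    chunks = [padded[i * chunk_len:(i + 1) * chunk_len]
              for i in range(n_data_chunks)]
    chunks.append(reduce(xor_bytes, chunks))            # parity chunk
    return chunks

def reconstruct(chunks: list, missing_index: int) -> bytes:
    """Rebuild one lost chunk (data or parity) by XOR-ing all survivors."""
    survivors = [c for i, c in enumerate(chunks) if i != missing_index]
    return reduce(xor_bytes, survivors)
```

As in the topology described above, each "system" would hold one chunk; losing any single system leaves enough chunks to serve reads, at a storage overhead of one extra chunk rather than a full replica.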
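Because HS3 is HCP's S3-compatible API, a client splitting a large object into parts must respect the standard S3 multipart limits; the 5 MiB minimum part size and 10,000-part maximum used below are the standard S3 values and are assumed, not confirmed by this text, to carry over to HS3. A sketch of how a client might pick a part size:

```python
# Assumed S3-style limits (standard S3 values; verify against HS3 docs).
MIN_PART_SIZE = 5 * 1024 ** 2    # 5 MiB minimum for all parts but the last
MAX_PARTS = 10_000               # maximum number of parts per upload

def plan_parts(object_size: int, preferred_part_size: int = 64 * 1024 ** 2):
    """Return (part_size, part_count) that satisfies the multipart limits.

    Grows the part size beyond the preferred 64 MiB when needed so the
    object fits within MAX_PARTS.
    """
    part_size = max(preferred_part_size, MIN_PART_SIZE,
                    -(-object_size // MAX_PARTS))       # ceiling division
    part_count = -(-object_size // part_size)
    return part_size, part_count
```

For a 5 TB (5 * 10^12 byte) object this yields 10,000 parts of 500 MB each; each part would then be uploaded individually (for example with an S3 SDK's upload-part call) before the multipart upload is completed as a single object.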