Storage Context
Storage Context Overview
A Storage Context represents a connection to a specific storage provider and data set. Unlike the auto-managed approach in the Storage Operations Guide, contexts give you explicit control over these key capabilities:
- Provider Selection: Choose specific providers for your data
- Data Set Management: Create, reuse, and manage data sets explicitly
- Batch Operations: Upload multiple pieces efficiently with progress tracking
- Lifecycle Control: Terminate data sets and delete pieces when needed
- Download Strategies: Choose between SP-agnostic and SP-specific retrieval
This guide assumes you’ve already completed the Storage Operations Guide and understand the basics of uploading and downloading data.
Creating a Storage Context
Creation Options
```typescript
interface StorageServiceOptions {
  providerId?: number; // Specific provider ID to use (optional)
  excludeProviderIds?: number[]; // Do not select any of these providers (optional)
  providerAddress?: string; // Specific provider address to use (optional)
  dataSetId?: number; // Specific data set ID to use (optional)
  withCDN?: boolean; // Enable CDN services (optional)
  forceCreateDataSet?: boolean; // Force creation of a new data set, even if a candidate exists (optional)
  callbacks?: StorageContextCallbacks; // Progress callbacks (optional)
  metadata?: Record<string, string>; // Metadata requirements for data set selection/creation
  uploadBatchSize?: number; // Max uploads per batch (default: 32, min: 1)
}
```

Monitor the creation process with detailed callbacks to track progress:
```typescript
const storageContext = await synapse.storage.createContext({
  providerAddress: "0x...", // Optional: use specific provider address
  withCDN: true, // Optional: enable CDN for faster downloads
  metadata: {
    Application: "Filecoin Storage DApp",
    Version: "1.0.0",
    Category: "AI",
  },
  callbacks: {
    onDataSetResolved: (info) => {
      if (info.isExisting) {
        console.log(
          `Data set with id ${info.dataSetId}`,
          `matches your context criteria and will be reused`
        );
      } else {
        console.log(
          `No matching data set found`,
          `A new data set will be created in the next file upload`,
          `In a single transaction!`
        );
      }
    },
    onProviderSelected: (provider) => {
      console.log(
        `Selected Provider with`,
        ` id: ${provider.id}`,
        ` name: ${provider.name}`,
        ` description: ${provider.description}`,
        ` address: ${provider.serviceProvider}`
      );
    },
  },
});
```

Data Set Selection and Matching
The SDK intelligently manages data sets to minimize on-chain transactions. The selection behavior depends on the parameters you provide:
Selection Scenarios:
- Explicit data set ID: If you specify `dataSetId`, that exact data set is used (must exist and be accessible)
- Specific provider: If you specify `providerId` or `providerAddress`, the SDK searches for matching data sets only within that provider’s existing data sets
- Automatic selection: Without specific parameters, the SDK searches across all your data sets with any approved provider
Exact Metadata Matching: In scenarios 2 and 3, the SDK will reuse an existing data set only if it has exactly the same metadata keys and values as requested. This ensures data sets remain organized according to your specific requirements.
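“Exactly the same” here means identical key sets and identical values, not merely a superset. A minimal sketch of such a predicate (`metadataMatches` is a hypothetical illustration, not an SDK export):

```typescript
// Hypothetical predicate illustrating "exact metadata matching":
// a data set is reused only if its metadata has exactly the same
// keys and values as the metadata requested for the context.
function metadataMatches(
  requested: Record<string, string>,
  existing: Record<string, string>
): boolean {
  const reqKeys = Object.keys(requested);
  const exKeys = Object.keys(existing);
  if (reqKeys.length !== exKeys.length) return false; // extra or missing keys => no match
  return reqKeys.every((key) => existing[key] === requested[key]);
}

// A data set with an extra "Env" key is NOT reused:
console.log(metadataMatches({ App: "dapp" }, { App: "dapp", Env: "prod" })); // false
console.log(metadataMatches({ App: "dapp" }, { App: "dapp" })); // true
```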
Selection Priority: When multiple data sets match your criteria:
- Data sets with existing pieces are preferred over empty ones
- Within each group (with pieces vs. empty), the oldest data set (lowest ID) is selected
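That priority order can be expressed as a comparator (a sketch only; `DataSetCandidate` is a hypothetical shape, not an SDK type):

```typescript
// Hypothetical candidate shape for illustration only.
interface DataSetCandidate {
  id: number;
  pieceCount: number;
}

// Sort so data sets with pieces come first, and within each group
// the oldest (lowest ID) wins; index 0 is the one selected.
function rankCandidates(candidates: DataSetCandidate[]): DataSetCandidate[] {
  return [...candidates].sort((a, b) => {
    const aEmpty = a.pieceCount > 0 ? 0 : 1;
    const bEmpty = b.pieceCount > 0 ? 0 : 1;
    if (aEmpty !== bEmpty) return aEmpty - bEmpty; // non-empty first
    return a.id - b.id; // then lowest ID
  });
}

const picked = rankCandidates([
  { id: 7, pieceCount: 0 },
  { id: 9, pieceCount: 3 },
  { id: 4, pieceCount: 1 },
])[0];
console.log(picked.id); // 4
```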
Provider Selection (when no matching data sets exist):
- If you specify a provider (via `providerId` or `providerAddress`), that provider is used
- Otherwise, the SDK currently selects at random from all approved providers
- Before finalizing selection, the SDK verifies the provider is reachable via a ping test
- If a provider fails the ping test, the SDK tries the next candidate
- Once a provider is selected, the SDK automatically creates a new data set during the next file upload, in a single transaction
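The ping-and-fallback behavior can be sketched as follows (a hypothetical helper with a synchronous reachability check injected for illustration; the real SDK pings providers over the network and its internals may differ):

```typescript
// Hypothetical selection loop: shuffle the approved providers, ping
// each candidate in turn, and return the first one that responds.
function selectProvider<T>(
  approved: T[],
  isReachable: (provider: T) => boolean,
  // Naive shuffle, good enough for a sketch.
  shuffle: (items: T[]) => T[] = (items) => [...items].sort(() => Math.random() - 0.5)
): T {
  for (const candidate of shuffle(approved)) {
    if (isReachable(candidate)) return candidate; // ping test passed
    // otherwise fall through to the next candidate
  }
  throw new Error("No approved provider is reachable");
}

// With an identity "shuffle" and a stubbed ping, the first reachable
// candidate is chosen:
const chosen = selectProvider(
  ["providerA", "providerB", "providerC"],
  (p) => p !== "providerA", // pretend providerA fails the ping test
  (items) => items // deterministic order for the example
);
console.log(chosen); // "providerB"
```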
API Design:
```typescript
interface StorageContextAPI {
  // Properties
  readonly provider: PDPProvider;
  readonly serviceProvider: string;
  readonly withCDN: boolean;
  readonly dataSetId: number | undefined;
  readonly dataSetMetadata: Record<string, string>;

  // Upload & Download
  upload(data: Uint8Array | ArrayBuffer, options?: UploadOptions): Promise<UploadResult>;
  download(pieceCid: string | PieceCID): Promise<Uint8Array>;

  // Piece Queries
  hasPiece(pieceCid: string | PieceCID): Promise<boolean>;
  pieceStatus(pieceCid: string | PieceCID): Promise<PieceStatus>;
  getDataSetPieces(): Promise<PieceCID[]>;

  // Piece Management
  deletePiece(piece: string | PieceCID | number): Promise<string>;

  // Info & Preflight
  getProviderInfo(): Promise<PDPProvider>;
  preflightUpload(size: number): Promise<PreflightInfo>;

  // Lifecycle
  terminate(): Transaction;
}
```

Storage Context Methods
```typescript
const storageContext = await synapse.storage.createContext({
  providerAddress: "0x...", // Optional: use specific provider address
  withCDN: true, // Optional: enable CDN for faster downloads
  metadata: {
    Application: "Filecoin Storage DApp",
    Version: "1.0.0",
    Category: "AI",
  },
});

const llmModel = "sonnet-4.5";
const conversationId = "1234567890";

const data = new TextEncoder().encode("Deep research on decentralization...");

const preflight = await storageContext.preflightUpload(data.length);

console.log("Estimated costs:", preflight.estimatedCost);
console.log("Allowance sufficient:", preflight.allowanceCheck.sufficient);

const { pieceCid, size, pieceId } = await storageContext.upload(data, {
  metadata: { llmModel, conversationId },
  onUploadComplete: (piece) => {
    console.log(
      `Uploaded PieceCID: ${piece.toV1().toString()} to storage provider!`
    );
  },
  onPiecesAdded: (hash, pieces) => {
    console.log(
      `🔄 Waiting for transaction to be confirmed on chain (txHash: ${hash})`
    );
    console.log(
      `Batch includes PieceCIDs: ${
        pieces?.map(({ pieceCid }) => pieceCid.toString()).join(", ") ?? ""
      }`
    );
  },
  onPiecesConfirmed: (dataSetId, pieces) => {
    console.log(`Data set ${dataSetId} confirmed with provider`);
    console.log(
      `Piece ID mapping: ${pieces
        .map(({ pieceId, pieceCid }) => `${pieceId}:${pieceCid}`)
        .join(", ")}`
    );
  },
});

const receivedData = await storageContext.download(pieceCid);

console.log(`Received data: ${new TextDecoder().decode(receivedData)}`);

// Get the list of piece CIDs in the current data set by querying the provider
const pieceCids = await storageContext.getDataSetPieces();
console.log(`Piece CIDs: ${pieceCids.map((cid) => cid.toString()).join(", ")}`);

// Check the status of a piece on the service provider
const status = await storageContext.pieceStatus(pieceCid);
console.log(`Piece exists: ${status.exists}`);
console.log(`Data set last proven: ${status.dataSetLastProven}`);
console.log(`Data set next proof due: ${status.dataSetNextProofDue}`);
```

Efficient Batch Uploads
When uploading multiple files, the SDK automatically batches operations for efficiency. Due to blockchain transaction ordering requirements, uploads are processed sequentially. The SDK batches up to 32 uploads by default (configurable via `uploadBatchSize`); if you have more than 32 files, they are processed in multiple batches automatically. To maximize efficiency, avoid awaiting each upload individually: starting uploads concurrently lets the SDK group them into batches.
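The batching behavior can be sketched as follows (`uploadAll` and `batchCount` are hypothetical helpers for illustration; they assume the context `upload` method shown earlier):

```typescript
// Hypothetical helper: start all uploads without awaiting each one,
// so the SDK can group the resulting on-chain piece additions into
// batches of up to uploadBatchSize pieces.
async function uploadAll(
  context: { upload(data: Uint8Array): Promise<unknown> },
  files: Uint8Array[]
): Promise<unknown[]> {
  return Promise.all(files.map((file) => context.upload(file)));
}

// With the default uploadBatchSize of 32, N concurrent uploads are
// confirmed in ceil(N / 32) on-chain batches.
function batchCount(uploads: number, uploadBatchSize = 32): number {
  return uploads <= 0 ? 0 : Math.ceil(uploads / uploadBatchSize);
}

console.log(batchCount(32)); // 1
console.log(batchCount(33)); // 2
```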
Terminating a Data Set
Irreversible Operation
Data set termination cannot be undone. Once initiated:
- The termination transaction is irreversible
- After the termination period, the provider may delete all data
- Payment rails associated with the data set will be terminated
- You cannot cancel the termination
Only terminate data sets when you’re certain you no longer need the data.
To delete an entire data set and discontinue payments for the service, call context.terminate().
This method submits an on-chain transaction to initiate the termination process. Following a defined termination period, payments will cease, and the service provider will be able to delete the data set.
You can also terminate a data set with `synapse.storage.terminateDataSet(dataSetId)`, for cases where creating a context is not possible, or where the `dataSetId` is already known and a context is unnecessary.
```typescript
const storageContext = await synapse.storage.createContext({
  providerAddress: "0x...", // Optional: use specific provider address
  withCDN: true, // Optional: enable CDN for faster downloads
});

const hash = await storageContext.terminate();
console.log(`Dataset termination transaction: ${hash}`);

await synapse.client.waitForTransactionReceipt({ hash });
console.log("Dataset terminated successfully");
```

Deleting a Piece
To delete an individual piece from the data set, call `context.deletePiece(pieceCid)`.
This method submits an on-chain transaction to initiate the deletion process.
Important: Piece deletion is irreversible and cannot be canceled once initiated.
```typescript
const storageContext = await synapse.storage.createContext({
  providerAddress: "0x...", // Optional: use specific provider address
  withCDN: true, // Optional: enable CDN for faster downloads
});

// Collect all pieces at once
const pieces = [];
for await (const piece of storageContext.getPieces()) {
  pieces.push(piece);
}

// Delete the first piece
await storageContext.deletePiece(pieces[0].pieceId);
console.log(
  `Piece ${pieces[0].pieceCid} (ID: ${pieces[0].pieceId}) deleted successfully`
);
```

Download Options
The SDK provides flexible download options with clear semantics:
SP-Agnostic Download (from anywhere)
Download pieces from any available provider using the StorageManager:
```typescript
// Download from any provider that has the piece
const data = await synapse.storage.download(pieceCid);

// Download with CDN optimization (if available)
const dataWithCDN = await synapse.storage.download(pieceCid, { withCDN: true });

// Prefer a specific provider (falls back to others if unavailable)
const dataFromProvider = await synapse.storage.download(pieceCid, {
  providerAddress: "0x...",
});
```

Context-Specific Download (from this provider)
When using a StorageContext, downloads are automatically restricted to that specific provider:
```typescript
// Downloads from the provider associated with this context
const context = await synapse.storage.createContext({
  providerAddress: "0x...",
});
const data = await context.download(pieceCid);

// The context passes its withCDN setting to the download
const contextWithCDN = await synapse.storage.createContext({ withCDN: true });
const dataWithCDN = await contextWithCDN.download(pieceCid); // Uses CDN if available
```

CDN Option Inheritance
The `withCDN` option (an alias for `metadata: { withCDN: '' }`) follows a clear inheritance hierarchy:
- Synapse level: Default setting for all operations
- StorageContext level: Can override Synapse’s default
- Method level: Can override instance settings
```typescript
// Example of inheritance
const synapse = await Synapse.create({ withCDN: true }); // Global default: CDN enabled
const context = await synapse.storage.createContext({ withCDN: false }); // Context override: CDN disabled

await synapse.storage.download(pieceCid); // Uses Synapse's withCDN: true
await context.download(pieceCid); // Uses context's withCDN: false
await synapse.storage.download(pieceCid, { withCDN: false }); // Method override: CDN disabled
```

Note: When `withCDN: true` is set, it adds `{ withCDN: '' }` to the data set’s metadata, ensuring CDN-enabled and non-CDN data sets remain separate.
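The three-level hierarchy reduces to "nearest explicit setting wins", which can be captured with nullish coalescing (a sketch; `resolveWithCDN` is illustrative, not an SDK export):

```typescript
// Nearest explicit setting wins: the method-level option overrides
// the context setting, which overrides the Synapse-level default.
function resolveWithCDN(
  synapseDefault: boolean,
  contextSetting?: boolean,
  methodOption?: boolean
): boolean {
  return methodOption ?? contextSetting ?? synapseDefault;
}

console.log(resolveWithCDN(true)); // true  (Synapse default)
console.log(resolveWithCDN(true, false)); // false (context override)
console.log(resolveWithCDN(true, false, true)); // true  (method override)
```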
Next Steps
Now that you understand Storage Context and advanced operations:
- Calculate Storage Costs → Plan your budget and fund your storage account. Use the quick calculator to estimate monthly costs.
- Storage Operations Basics → Review fundamental storage concepts and auto-managed operations. Good for a refresher on the simpler approach.
- Component Architecture → Understand how StorageContext fits into the SDK design. Deep dive into the component architecture.
- Payment Management → Manage deposits, approvals, and payment rails. Required before your first upload.