LazyMemmapStorage
- class torchrl.data.replay_buffers.LazyMemmapStorage(max_size: int, *, scratch_dir=None, device: device = 'cpu', ndim: int = 1, existsok: bool = False, compilable: bool = False)[source]
A memory-mapped storage for tensors and tensordicts.
- Parameters:
max_size (int) – size of the storage, i.e. the maximum number of elements stored in the buffer.
- Keyword Arguments:
scratch_dir (str or path) – directory where the memmap-tensors will be written.
device (torch.device, optional) – device where the sampled tensors will be stored and sent. Default is torch.device("cpu"). If None is provided, the device is automatically gathered from the first batch of data passed. This is not enabled by default to avoid OOM issues caused by data unexpectedly placed on GPU.
ndim (int, optional) – the number of dimensions to account for when measuring the storage size. For instance, a storage of shape [3, 4] has capacity 3 if ndim=1 and 12 if ndim=2 (see the sketch after this list). Defaults to 1.
existsok (bool, optional) – whether an error should be raised if any of the tensors already exists on disk. Defaults to True. If False, the tensor will be opened as-is, not overwritten.
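The interaction between max_size and ndim can be illustrated with a minimal sketch (the shapes and the "obs" field below are illustrative, not taken from the original documentation):
>>> import torch
>>> from tensordict import TensorDict
>>> from torchrl.data import LazyMemmapStorage
>>> # With ndim=2, the first two dimensions count towards capacity, so a
>>> # max_size of 12 can hold a [3, 4]-shaped batch (3 * 4 = 12 elements).
>>> storage = LazyMemmapStorage(12, ndim=2)
>>> data = TensorDict({"obs": torch.randn(3, 4, 5)}, batch_size=[3, 4])
>>> storage.set(range(3), data)  # indices address the leading dimension (sketch)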
Note

When checkpointing a LazyMemmapStorage, one can provide a path identical to the location where the storage is already saved, to avoid long copies of data that already lives on disk. This only works when the default TensorStorageCheckpointer checkpointer is used. For example:
>>> from tensordict import TensorDict
>>> from torchrl.data import TensorStorage, LazyMemmapStorage, ReplayBuffer
>>> import tempfile
>>> from pathlib import Path
>>> import time
>>> td = TensorDict(a=0, b=1).expand(1000).clone()
>>> # We pass a path that is <main_ckpt_dir>/storage to LazyMemmapStorage
>>> rb_memmap = ReplayBuffer(storage=LazyMemmapStorage(10_000_000, scratch_dir="dump/storage"))
>>> rb_memmap.extend(td)
>>> # Checkpointing in `dump` is a zero-copy, as the data is already in `dump/storage`
>>> rb_memmap.dumps(Path("./dump"))
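Restoring is the symmetric operation; a minimal sketch continuing the example above (assuming the dump completed):
>>> rb_restore = ReplayBuffer(storage=LazyMemmapStorage(10_000_000))
>>> rb_restore.loads(Path("./dump"))
>>> assert len(rb_restore) == 1000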
Examples
>>> import torch
>>> from tensordict import TensorDict
>>> from torchrl.data import LazyMemmapStorage
>>> data = TensorDict({
...     "some data": torch.randn(10, 11),
...     ("some", "nested", "data"): torch.randn(10, 11, 12),
... }, batch_size=[10, 11])
>>> storage = LazyMemmapStorage(100)
>>> storage.set(range(10), data)
>>> len(storage)  # only the first dimension is considered as indexable
10
>>> storage.get(0)
TensorDict(
    fields={
        some data: MemoryMappedTensor(shape=torch.Size([11]), device=cpu, dtype=torch.float32, is_shared=False),
        some: TensorDict(
            fields={
                nested: TensorDict(
                    fields={
                        data: MemoryMappedTensor(shape=torch.Size([11, 12]), device=cpu, dtype=torch.float32, is_shared=False)},
                    batch_size=torch.Size([11]),
                    device=cpu,
                    is_shared=False)},
            batch_size=torch.Size([11]),
            device=cpu,
            is_shared=False)},
    batch_size=torch.Size([11]),
    device=cpu,
    is_shared=False)
This class also supports tensorclass data.
Examples
>>> import torch
>>> from tensordict import tensorclass
>>> from torchrl.data import LazyMemmapStorage
>>> @tensorclass
... class MyClass:
...     foo: torch.Tensor
...     bar: torch.Tensor
>>> data = MyClass(foo=torch.randn(10, 11), bar=torch.randn(10, 11, 12), batch_size=[10, 11])
>>> storage = LazyMemmapStorage(10)
>>> storage.set(range(10), data)
>>> storage.get(0)
MyClass(
    bar=MemoryMappedTensor(shape=torch.Size([11, 12]), device=cpu, dtype=torch.float32, is_shared=False),
    foo=MemoryMappedTensor(shape=torch.Size([11]), device=cpu, dtype=torch.float32, is_shared=False),
    batch_size=torch.Size([11]),
    device=cpu,
    is_shared=False)
- attach(buffer: Any) → None
This function attaches a sampler to this storage.
Buffers that read from this storage must be registered as attached entities by calling this method. This ensures that components are made aware of changes to the data in the storage, even when the storage is shared with other buffers (e.g., priority samplers). A sketch of the assumed notification interface follows the parameter list.
- Parameters:
buffer – the object that reads from this storage.
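A minimal sketch of the interface an attached entity is assumed to expose (the mark_update hook is an assumption based on how samplers receive change notifications; LoggingEntity is a hypothetical name):
>>> from torchrl.data import LazyMemmapStorage
>>> class LoggingEntity:
...     # Assumed hook: invoked when data at `index` changes in the storage,
...     # so that cached state (e.g. priorities) can be invalidated.
...     def mark_update(self, index, *args, **kwargs):
...         print(f"storage updated at index {index}")
>>> storage = LazyMemmapStorage(100)
>>> storage.attach(LoggingEntity())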
- dump(*args, **kwargs)
Alias for dumps().
- load(*args, **kwargs)
Alias for loads().
- save(*args, **kwargs)
Alias for dumps().
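Since dump and save forward to dumps() and load forwards to loads(), a storage can be checkpointed directly; a minimal sketch reusing the storage built in the examples above (the target directory is illustrative):
>>> import tempfile
>>> with tempfile.TemporaryDirectory() as ckpt:
...     storage.dumps(ckpt)  # same effect as storage.dump(ckpt) / storage.save(ckpt)
...     storage.loads(ckpt)  # same effect as storage.load(ckpt)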