Commit

docs update

FayazRahman committed Dec 26, 2022
1 parent f24554a commit cba1cf8
Showing 5 changed files with 52 additions and 1 deletion.
14 changes: 14 additions & 0 deletions deeplake/api/tests/test_nifti.py
@@ -4,6 +4,7 @@
import numpy as np

import deeplake
import pytest
import os


@@ -76,3 +77,16 @@ def test_nifti_2(memory_ds):
ds.nifti2.append(sample)
np.testing.assert_array_equal(ds.nifti2.numpy()[0], img.get_fdata())
assert ds.nifti2.shape == (1, *sample.shape)


def test_nifti_raw_compress(memory_ds):
with memory_ds as ds:
ds.create_tensor("abc", htype="nifti", sample_compression="nii.gz")

with pytest.raises(NotImplementedError):
ds.abc.append(np.ones((40, 40, 10)))

ds.create_tensor("xyz", htype="nifti", sample_compression=None)
ds.xyz.append(np.ones((40, 40, 10)))

np.testing.assert_array_equal(ds.xyz[0].numpy(), np.ones((40, 40, 10)))
5 changes: 5 additions & 0 deletions deeplake/core/compression.py
@@ -237,6 +237,11 @@ def compress_array(array: np.ndarray, compression: Optional[str]) -> bytes:
raise NotImplementedError(
"In order to store mesh data, you should use `deeplake.read(path_to_file)`. Compressing raw data is not yet supported."
)
elif compr_type == NIFTI_COMPRESSION:
raise NotImplementedError(
"In order to store nifti data, you should use `deeplake.read(path_to_file)` or use a None compression. "
"Compressing raw data is not yet supported."
)
if compression == "apng":
return _compress_apng(array)
try:
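The guard added to `compress_array` above (and exercised by `test_nifti_raw_compress`) can be illustrated with a minimal, stdlib-only sketch. This is a hedged illustration of the dispatch pattern, not Deep Lake's actual implementation: the names `NIFTI_COMPRESSIONS` and `compress_sample` are hypothetical stand-ins.

```python
# Illustrative sketch: raw samples cannot be compressed to nifti formats,
# so only sample_compression=None accepts them. Names are hypothetical,
# mirroring the new NotImplementedError branch in compress_array.
NIFTI_COMPRESSIONS = ("nii", "nii.gz")

def compress_sample(sample, sample_compression):
    if sample_compression in NIFTI_COMPRESSIONS:
        raise NotImplementedError(
            "Compressing raw nifti data is not yet supported; "
            "use deeplake.read(path_to_file) or sample_compression=None."
        )
    # With None compression the raw data is stored as-is.
    return sample

raw = [[1.0] * 4 for _ in range(4)]
try:
    compress_sample(raw, "nii.gz")  # rejected: raw nifti compression
except NotImplementedError:
    pass
assert compress_sample(raw, None) == raw  # accepted: stored uncompressed
```

The design mirrors the existing mesh branch directly above it in `compress_array`: unsupported raw-compression paths fail loudly at append time rather than writing malformed samples.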
31 changes: 31 additions & 0 deletions docs/source/Htypes.rst
@@ -553,6 +553,37 @@ Appending polygons with numpy arrays
>>> sample = [poly1, poly2, poly3]
>>> ds.polygons.append(sample)

.. _nifti-htype:

Nifti Htype
~~~~~~~~~~~

- :bluebold:`Sample dimensions:` ``(# height, # width, # slices)`` or ``(# height, # width, # slices, # time unit)`` in case of time-series data.

:blue:`Creating a nifti tensor`
-------------------------------

A nifti tensor can be created using

>>> ds.create_tensor("patients", htype="nifti", sample_compression="nii.gz")

- Supported compressions:

>>> ["nii.gz", "nii", None]

:blue:`Appending nifti data`
----------------------------

- Nifti samples can be of type ``np.ndarray`` or :class:`~deeplake.core.sample.Sample` which is returned by :meth:`deeplake.read`.
- Deep Lake does not support compression of raw nifti data. Therefore, arrays of raw data can only be appended to tensors with
  ``None`` compression.

:bluebold:`Examples`

>>> ds.patients.append(deeplake.read("data/patient0.nii.gz"))

>>> ds.patients.extend([deeplake.read(f"data/patient{i}.nii.gz") for i in range(10)])

.. _point_cloud-htype:

Point Cloud Htype
2 changes: 1 addition & 1 deletion docs/source/Random-Split.rst
@@ -1,7 +1,7 @@
.. currentmodule:: deeplake.core.dataset

Random Split
=======
============

Splits the dataset into non overlapping new datasets of given lengths.
The resulting datasets are generated in such a way that when creating a dataloader from the view and training on it,
1 change: 1 addition & 0 deletions docs/source/_static/csv/htypes.csv
@@ -16,6 +16,7 @@ text, str, None
json, Any, None
list, List, None
dicom, None, dcm
:ref:`nifti <nifti-htype>`, None, Required arg
:ref:`point_cloud <point_cloud-htype>`, None, las
:ref:`mesh <mesh-htype>`, None, ply
instance_label, uint32, None
