Module pyucrio.data.ucalgary
Data downloading and reading routines for data provided by the University of Calgary.
Sub-modules
pyucrio.data.ucalgary.read
Classes
class Data (data: Any,
timestamp: List[datetime.datetime],
metadata: List[Dict],
problematic_files: List[pyucalgarysrs.data.classes.ProblematicFile],
calibrated_data: Any,
dataset: pyucalgarysrs.data.classes.Dataset | None = None)-
Representation of the data read in from a
pyucrio.data.ucalgary.read
call.Attributes
data
:Any
- The loaded data. This can be one of the following types: ndarray, List[Skymap], List[Calibration].
timestamp
:List[datetime.datetime]
- List of timestamps for the read in data.
metadata
:List[Dict]
- List of dictionaries containing metadata specific to each timestamp/image/record.
problematic_files
:List[ProblematicFiles]
- A list detailing any files that encountered issues during reading.
calibrated_data
:Any
- A calibrated version of the data. Populated and utilized by data analysis libraries. Has a
None
value until calibrated data is inserted manually. dataset
:Dataset
- The
Dataset
object for this data.
Expand source code
@dataclass class Data: """ Representation of the data read in from a `read` call. Attributes: data (Any): The loaded data. This can be one of the following types: ndarray, List[Skymap], List[Calibration]. timestamp (List[datetime.datetime]): List of timestamps for the read in data. metadata (List[Dict]): List of dictionaries containing metadata specific to each timestamp/image/record. problematic_files (List[ProblematicFiles]): A list detailing any files that encountered issues during reading. calibrated_data (Any): A calibrated version of the data. Populated and utilized by data analysis libraries. Has a `None` value until calibrated data is inserted manually. dataset (Dataset): The `Dataset` object for this data. """ data: Any timestamp: List[datetime.datetime] metadata: List[Dict] problematic_files: List[ProblematicFile] calibrated_data: Any dataset: Optional[Dataset] = None def __str__(self) -> str: return self.__repr__() def __repr__(self) -> str: # set data value if (isinstance(self.data, ndarray) is True): data_str = "array(dims=%s, dtype=%s)" % (self.data.shape, self.data.dtype) if (isinstance(self.data, GridData) is True): data_str = self.data.__repr__() elif (isinstance(self.data, list) is True): if (len(self.data) == 0): data_str = "[0 items]" elif (isinstance(self.data[0], Skymap) is True): if (len(self.data) == 1): data_str = "[1 Skymap object]" else: data_str = "[%d Skymap objects]" % (len(self.data)) elif (isinstance(self.data[0], Calibration) is True): if (len(self.data) == 1): data_str = "[1 Calibration object]" else: data_str = "[%d Calibration objects]" % (len(self.data)) elif (isinstance(self.data[0], RiometerData) is True): if (len(self.data) == 1): data_str = "[1 RiometerData object]" else: data_str = "[%d RiometerData objects]" % (len(self.data)) elif (len(self.data) == 1): data_str = "[1 item]" else: data_str = "[%d items]" % (len(self.data)) else: data_str = self.data.__repr__() # set timestamp string if (len(self.timestamp) == 0): timestamp_str = "[]" elif (len(self.timestamp) == 1): timestamp_str = "[1 datetime]" else: timestamp_str = "[%d datetimes]" % (len(self.timestamp)) # set metadata string if (len(self.metadata) == 0): metadata_str = "[]" elif (len(self.metadata) == 1): metadata_str = "[1 dictionary]" else: metadata_str = "[%d dictionaries]" % (len(self.metadata)) # set rest of values problematic_files_str = "[]" if len(self.problematic_files) == 0 else "[%d problematic files]" % (len(self.problematic_files)) calibrated_data_str = "None" if self.calibrated_data is None else "array(dims=%s, dtype=%s)" % (self.calibrated_data.shape, self.calibrated_data.dtype) dataset_str = "None" if self.dataset is None else self.dataset.__repr__()[0:75] + "...)" # return return "Data(data=%s, timestamp=%s, metadata=%s, problematic_files=%s, calibrated_data=%s, dataset=%s)" % ( data_str, timestamp_str, metadata_str, problematic_files_str, calibrated_data_str, dataset_str, ) def pretty_print(self): """ A special print output for this class. """ # set data value if (isinstance(self.data, ndarray) is True): data_str = "array(dims=%s, dtype=%s)" % (self.data.shape, self.data.dtype) elif (isinstance(self.data, list) is True): if (len(self.data) == 0): data_str = "[0 items]" elif (isinstance(self.data[0], Skymap) is True): if (len(self.data) == 1): data_str = "[1 Skymap object]" else: data_str = "[%d Skymap objects]" % (len(self.data)) elif (isinstance(self.data[0], Calibration) is True): if (len(self.data) == 1): data_str = "[1 Calibration object]" else: data_str = "[%d Calibration objects]" % (len(self.data)) elif (isinstance(self.data[0], RiometerData) is True): if (len(self.data) == 1): data_str = "[1 RiometerData object]" else: data_str = "[%d HSRData objects]" % (len(self.data)) elif (isinstance(self.data[0], HSRData) is True): if (len(self.data) == 1): data_str = "[1 HSRData object]" else: data_str = "[%d HSRData objects]" % (len(self.data)) elif (len(self.data) == 1): data_str = "[1 item]" else: data_str = "[%d items]" % (len(self.data)) else: data_str = self.data.__repr__() # set timestamp string if (len(self.timestamp) == 0): timestamp_str = "[]" elif (len(self.timestamp) == 1): timestamp_str = "[1 datetime]" else: timestamp_str = "[%d datetimes]" % (len(self.timestamp)) # set metadata string if (len(self.metadata) == 0): metadata_str = "[]" elif (len(self.metadata) == 1): metadata_str = "[1 dictionary]" else: metadata_str = "[%d dictionaries]" % (len(self.metadata)) # set rest of values problematic_files_str = "[]" if len(self.problematic_files) == 0 else "[%d problematic files]" % (len(self.problematic_files)) calibrated_data_str = "None" if self.calibrated_data is None else "array(dims=%s, dtype=%s)" % (self.calibrated_data.shape, self.calibrated_data.dtype) dataset_str = "None" if self.dataset is None else self.dataset.__repr__()[0:75] + "...)" # print print("Data:") print(" %-22s: %s" % ("data", data_str)) print(" %-22s: %s" % ("timestamp", timestamp_str)) print(" %-22s: %s" % ("metadata", metadata_str)) print(" %-22s: %s" % ("problematic_files", problematic_files_str)) print(" %-22s: %s" % ("calibrated_data", calibrated_data_str)) print(" %-22s: %s" % ("dataset", dataset_str))
Class variables
var calibrated_data : Any
var data : Any
var dataset : pyucalgarysrs.data.classes.Dataset | None
var metadata : List[Dict]
var problematic_files : List[pyucalgarysrs.data.classes.ProblematicFile]
var timestamp : List[datetime.datetime]
Methods
def pretty_print(self)
-
A special print output for this class.
class Dataset (name: str,
short_description: str,
long_description: str,
data_tree_url: str,
file_listing_supported: bool,
file_reading_supported: bool,
level: str,
supported_libraries: List[str],
file_time_resolution: str,
doi: str | None = None,
doi_details: str | None = None,
citation: str | None = None)-
A dataset available from the UCalgary Space Remote Sensing API, with possibly support for downloading and/or reading.
Attributes
name
:str
- Dataset name
short_description
:str
- A short description about the dataset
long_description
:str
- A longer description about the dataset
data_tree_url
:str
- The data tree URL prefix. Used for saving data locally with a similar data tree structure compared to the UCalgary Open Data archive.
file_listing_supported
:bool
- Flag indicating if file listing (downloading) is supported for this dataset.
file_reading_supported
:bool
- Flag indicating if file reading is supported for this dataset.
file_time_resolution
:str
- Time resolution of the files for this dataset, represented as a string. Possible values are: 1min, 1hr, 1day, not_applicable.
level
:str
- Dataset level as per L0/L1/L2/etc standards.
doi
:str
- Dataset DOI unique identifier.
doi_details
:str
- Further details about the DOI.
citation
:str
- String to use when citing usage of the dataset.
provider
:str
- Data provider.
supported_libraries
:List[str]
- Libraries that support usage of this dataset.
Expand source code
class Dataset: """ A dataset available from the UCalgary Space Remote Sensing API, with possibly support for downloading and/or reading. Attributes: name (str): Dataset name short_description (str): A short description about the dataset long_description (str): A longer description about the dataset data_tree_url (str): The data tree URL prefix. Used for saving data locally with a similar data tree structure compared to the UCalgary Open Data archive. file_listing_supported (bool): Flag indicating if file listing (downloading) is supported for this dataset. file_reading_supported (bool): Flag indicating if file reading is supported for this dataset. file_time_resolution (str): Time resolution of the files for this dataset, represented as a string. Possible values are: 1min, 1hr, 1day, not_applicable. level (str): Dataset level as per L0/L1/L2/etc standards. doi (str): Dataset DOI unique identifier. doi_details (str): Further details about the DOI. citation (str): String to use when citing usage of the dataset. provider (str): Data provider. supported_libraries (List[str]): Libraries that support usage of this dataset. """ def __init__(self, name: str, short_description: str, long_description: str, data_tree_url: str, file_listing_supported: bool, file_reading_supported: bool, level: str, supported_libraries: List[str], file_time_resolution: str, doi: Optional[str] = None, doi_details: Optional[str] = None, citation: Optional[str] = None): self.name = name self.short_description = short_description self.long_description = long_description self.data_tree_url = data_tree_url self.file_listing_supported = file_listing_supported self.file_reading_supported = file_reading_supported self.level = level self.doi = doi self.doi_details = doi_details self.citation = citation self.provider = "UCalgary" self.supported_libraries = supported_libraries self.file_time_resolution = file_time_resolution def __str__(self) -> str: return self.__repr__() def __repr__(self) -> str: return "Dataset(name=%s, short_description='%s', provider='%s', level='%s', doi_details='%s', ...)" % ( self.name, self.short_description, self.provider, self.level, self.doi_details, ) def pretty_print(self): """ A special print output for this class. """ print("Dataset:") for var_name in dir(self): # exclude methods if (var_name.startswith("__") or var_name == "pretty_print"): continue # convert var to string format we want var_value = getattr(self, var_name) print(" %-27s: %s" % (var_name, None if var_value is None else var_value))
Methods
def pretty_print(self)
-
A special print output for this class.
class FileDownloadResult (filenames: List[str],
count: int,
total_bytes: int,
output_root_path: str,
dataset: pyucalgarysrs.data.classes.Dataset)-
Representation of the results from a data download call.
Attributes
filenames
:List[str]
- List of downloaded files, as absolute paths of their location on the local machine.
count
:int
- Number of files downloaded
total_bytes
:int
- Cumulative amount of bytes saved on the local machine.
output_root_path
:str
- The root path of where the data was saved to on the local machine.
dataset
:Dataset
- The
Dataset
object for this data.
Expand source code
@dataclass class FileDownloadResult: """ Representation of the results from a data download call. Attributes: filenames (List[str]): List of downloaded files, as absolute paths of their location on the local machine. count (int): Number of files downloaded total_bytes (int): Cumulative amount of bytes saved on the local machine. output_root_path (str): The root path of where the data was saved to on the local machine. dataset (Dataset): The `Dataset` object for this data. """ filenames: List[str] count: int total_bytes: int output_root_path: str dataset: Dataset def pretty_print(self): """ A special print output for this class. """ print("FileListingResponse:") for var_name in dir(self): # exclude methods if (var_name.startswith("__") or var_name == "pretty_print"): continue # convert var to string format we want var_value = getattr(self, var_name) if (var_name == "filenames"): print(" %-18s: [%d filenames]" % (var_name, len(var_value))) else: print(" %-18s: %s" % (var_name, None if var_value is None else var_value))
Class variables
var count : int
var dataset : pyucalgarysrs.data.classes.Dataset
var filenames : List[str]
var output_root_path : str
var total_bytes : int
Methods
def pretty_print(self)
-
A special print output for this class.
class FileListingResponse (urls: List[str],
path_prefix: str,
count: int,
dataset: pyucalgarysrs.data.classes.Dataset,
total_bytes: int | None = None)-
Representation of the file listing response from the UCalgary Space Remote Sensing API.
Attributes
urls
:List[str]
- A list of URLs for available data files.
path_prefix
:str
- The URL prefix, which is sed for saving data locally with a similar data tree structure compared to the UCalgary Open Data archive.
count
:int
- The number of URLs available.
dataset
:Dataset
- The
Dataset
object for this data. total_bytes
:int
- The cumulative amount of bytes for the available URLs.
Expand source code
@dataclass class FileListingResponse: """ Representation of the file listing response from the UCalgary Space Remote Sensing API. Attributes: urls (List[str]): A list of URLs for available data files. path_prefix (str): The URL prefix, which is sed for saving data locally with a similar data tree structure compared to the UCalgary Open Data archive. count (int): The number of URLs available. dataset (Dataset): The `Dataset` object for this data. total_bytes (int): The cumulative amount of bytes for the available URLs. """ urls: List[str] path_prefix: str count: int dataset: Dataset total_bytes: Optional[int] = None def pretty_print(self): """ A special print output for this class. """ print("FileListingResponse:") for var_name in dir(self): # exclude methods if (var_name.startswith("__") or var_name == "pretty_print"): continue # convert var to string format we want var_value = getattr(self, var_name) if (var_name == "urls"): print(" %-13s: [%d URLs]" % (var_name, len(var_value))) else: print(" %-13s: %s" % (var_name, None if var_value is None else var_value))
Class variables
var count : int
var dataset : pyucalgarysrs.data.classes.Dataset
var path_prefix : str
var total_bytes : int | None
var urls : List[str]
Methods
def pretty_print(self)
-
A special print output for this class.
class Observatory (uid: str, full_name: str, geodetic_latitude: float, geodetic_longitude: float)
-
Representation for an observatory.
Attributes
uid
:str
- 4-letter unique identifier (traditionally referred to as the site UID)
full_name
:str
- full location string for the observatory
geodetic_latitude
:float
- geodetic latitude for the observatory, in decimal format (-90 to 90)
geodetic_longitude
:float
- geodetic longitude for the observatory, in decimal format (-180 to 180)
provider
:str
- Data provider.
Expand source code
class Observatory: """ Representation for an observatory. Attributes: uid (str): 4-letter unique identifier (traditionally referred to as the site UID) full_name (str): full location string for the observatory geodetic_latitude (float): geodetic latitude for the observatory, in decimal format (-90 to 90) geodetic_longitude (float): geodetic longitude for the observatory, in decimal format (-180 to 180) provider (str): Data provider. """ def __init__(self, uid: str, full_name: str, geodetic_latitude: float, geodetic_longitude: float): self.uid = uid self.full_name = full_name self.geodetic_latitude = geodetic_latitude self.geodetic_longitude = geodetic_longitude self.provider = "UCalgary" def __str__(self) -> str: return self.__repr__() def __repr__(self) -> str: return "Observatory(uid=%s, full_name='%s', geodetic_latitude=%s, geodetic_longitude=%s, provider='%s')" % ( self.uid, self.full_name, self.geodetic_latitude, self.geodetic_longitude, self.provider, ) def pretty_print(self): """ A special print output for this class. """ print("Observatory:") for var_name in dir(self): # exclude methods if (var_name.startswith("__") or var_name == "pretty_print"): continue # convert var to string format we want var_value = getattr(self, var_name) print(" %-22s: %s" % (var_name, None if var_value is None else var_value))
Methods
def pretty_print(self)
-
A special print output for this class.
class UCalgaryManager (rio_obj)
-
The UCalgaryManager object is initialized within every PyUCRio object. It acts as a way to access the submodules and carry over configuration information in the super class.
Expand source code
class UCalgaryManager: """ The UCalgaryManager object is initialized within every PyUCRio object. It acts as a way to access the submodules and carry over configuration information in the super class. """ __DEFAULT_DOWNLOAD_N_PARALLEL = 5 def __init__(self, rio_obj): self.__rio_obj: PyUCRio = rio_obj # initialize sub-modules self.__readers = ReadManager(self.__rio_obj) @property def readers(self): """ Access to the `read` submodule from within a PyUCRio object. """ return self.__readers def list_datasets(self, name: Optional[str] = None, timeout: Optional[int] = None) -> List[Dataset]: """ List available datasets Args: name (str): Supply a name used for filtering. If that name is found in the available dataset names received from the API, it will be included in the results. This parameter is optional. timeout (int): Represents how many seconds to wait for the API to send data before giving up. The default is 10 seconds, or the `api_timeout` value in the super class' `pyucrio.PyUCRio` object. This parameter is optional. Returns: A list of [`Dataset`](https://docs-pyucalgarysrs.phys.ucalgary.ca/data/classes.html#pyucalgarysrs.data.classes.Dataset) objects. Raises: pyucrio.exceptions.PyUCRioAPIError: An API error was encountered. """ try: return self.__rio_obj.srs_obj.data.list_datasets( name=name, timeout=timeout, supported_library="pyucrio", ) except SRSAPIError as e: raise PyUCRioAPIError(e) from e def get_dataset(self, name: str, timeout: Optional[int] = None) -> Dataset: """ Get a specific dataset Args: name (str): The dataset name to get. Case is insensitive. timeout (int): Represents how many seconds to wait for the API to send data before giving up. The default is 10 seconds, or the `api_timeout` value in the super class' `pyucrio.PyUCRio` object. This parameter is optional. Returns: The found [`Dataset`](https://docs-pyucalgarysrs.phys.ucalgary.ca/data/classes.html#pyucalgarysrs.data.classes.Dataset) object. Raises an exception if not found. Raises: pyucrio.exceptions.PyUCRioAPIError: An API error was encountered. """ try: return self.__rio_obj.srs_obj.data.get_dataset(name, timeout=timeout) except Exception as e: raise PyUCRioAPIError(e) from e def list_observatories(self, instrument_array: Literal["norstar_riometer", "swan_hsr"], uid: Optional[str] = None, timeout: Optional[int] = None) -> List[Observatory]: """ List information about observatories Args: instrument_array (str): The instrument array to list observatories for. Valid values are: norstar_riometer, and swan_hsr. uid (str): Supply a observatory unique identifier used for filtering (usually 4-letter site code). If that UID is found in the available observatories received from the API, it will be included in the results. This parameter is optional. timeout (int): Represents how many seconds to wait for the API to send data before giving up. The default is 10 seconds, or the `api_timeout` value in the super class' `pyucrio.PyUCRio` object. This parameter is optional. Returns: A list of [`Observatory`](https://docs-pyucalgarysrs.phys.ucalgary.ca/data/classes.html#pyucalgarysrs.data.classes.Observatory) objects. Raises: pyucrio.exceptions.PyUCRioAPIError: An API error was encountered. """ try: return self.__rio_obj.srs_obj.data.list_observatories(instrument_array, uid=uid, timeout=timeout) except SRSAPIError as e: raise PyUCRioAPIError(e) from e def list_supported_read_datasets(self) -> List[str]: """ List the datasets which have file reading capabilities supported. Returns: A list of the dataset names with file reading support. """ return self.__rio_obj.srs_obj.data.list_supported_read_datasets() def is_read_supported(self, dataset_name: str) -> bool: """ Check if a given dataset has file reading support. Not all datasets available in the UCalgary Space Remote Sensing Open Data Platform have special readfile routines in this library. This is because some datasets are handled by other libraries (ie. PyAuroraX for ASI data), or are in basic formats such as JPG or PNG, so unique functions aren't necessary. We leave it up to the user to open those basic files in whichever way they prefer. Use the `list_supported_read_datasets()` function to see all datasets that have special file reading functionality in this library. Args: dataset_name (str): The dataset name to check if file reading is supported. This parameter is required. Returns: Boolean indicating if file reading is supported. """ return self.__rio_obj.srs_obj.data.is_read_supported(dataset_name) def download(self, dataset_name: str, start: datetime.datetime, end: datetime.datetime, site_uid: Optional[str] = None, n_parallel: int = __DEFAULT_DOWNLOAD_N_PARALLEL, overwrite: bool = False, progress_bar_disable: bool = False, progress_bar_ncols: Optional[int] = None, progress_bar_ascii: Optional[str] = None, progress_bar_desc: Optional[str] = None, timeout: Optional[int] = None) -> FileDownloadResult: """ Download data from the UCalgary Space Remote Sensing Open Data Platform. The parameters `dataset_name`, `start`, and `end` are required. All other parameters are optional. Args: dataset_name (str): Name of the dataset to download data for. Use the `list_datasets()` function to get the possible values for this parameter. One example is "SWAN_HSR_K0_H5". Note that dataset names are case sensitive. This parameter is required. start (datetime.datetime): Start timestamp to use (inclusive), expected to be in UTC. Any timezone data will be ignored. This parameter is required. end (datetime.datetime): End timestamp to use (inclusive), expected to be in UTC. Any timezone data will be ignored. This parameter is required. site_uid (str): The site UID to filter for. If specified, data will be downloaded for only the site matching the given value. If excluded, data for all available sites will be downloaded. An example value could be 'gill', meaning all data from the Gillam observatory will be downloaded for the given dataset name, start, and end times. This parameter is optional. n_parallel (int): Number of data files to download in parallel. Default value is 5. Adjust as needed for your internet connection. This parameter is optional. overwrite (bool): By default, data will not be re-downloaded if it already exists locally. Use the `overwrite` parameter to force re-downloading. Default is `False`. This parameter is optional. progress_bar_disable (bool): Disable the progress bar. Default is `False`. This parameter is optional. progress_bar_ncols (int): Number of columns for the progress bar (straight passthrough of the `ncols` parameter in a tqdm progress bar). This parameter is optional. See Notes section below for further information. progress_bar_ascii (str): ASCII value to use when constructing the visual aspect of the progress bar (straight passthrough of the `ascii` parameter in a tqdm progress bar). This parameter is optional. See Notes section below for further details. timeout (int): Represents how many seconds to wait for the API to send data before giving up. The default is 10 seconds, or the `api_timeout` value in the super class' `pyucrio.PyUCRio` object. This parameter is optional. Returns: A [`FileDownloadResult`](https://docs-pyucalgarysrs.phys.ucalgary.ca/data/classes.html#pyucalgarysrs.data.classes.FileDownloadResult) object containing details about what data files were downloaded. Raises: pyucrio.exceptions.PyUCRioDownloadError: an error was encountered while downloading a specific file pyucrio.exceptions.PyUCRioAPIError: an API error was encountered Notes: -------- The `progress_bar_*` parameters can be used to enable/disable/adjust the progress bar. Excluding the `progress_bar_disable` parameter, all others are straight pass-throughs to the tqdm progress bar function. The `progress_bar_ncols` parameter allows for adjusting the width. The `progress_bar_ascii` parameter allows for adjusting the appearance of the progress bar. And the `progress_bar_desc` parameter allows for adjusting the description at the beginning of the progress bar. Further details can be found on the [tqdm documentation](https://tqdm.github.io/docs/tqdm/#tqdm-objects). Data downloading will use the `download_data_root_path` variable within the super class' object ([`PyUCRio`](../../index.html#pyucrio.PyUCRio)) to determine where to save data to. If you'd like to change this path to somewhere else you can change that variable before your download() call, like so: ```python import pyucrio rio = pyucrio.PyUCRio() rio.data_download_root_path = "some_new_path" rio.data.download(dataset_name, start, end) ``` """ try: return self.__rio_obj.srs_obj.data.download( dataset_name, start, end, site_uid=site_uid, n_parallel=n_parallel, overwrite=overwrite, progress_bar_disable=progress_bar_disable, progress_bar_ncols=progress_bar_ncols, progress_bar_ascii=progress_bar_ascii, progress_bar_desc=progress_bar_desc, timeout=timeout, ) except SRSDownloadError as e: raise PyUCRioDownloadError(e) from e except SRSAPIError as e: raise PyUCRioAPIError(e) from e def download_using_urls(self, file_listing_response: FileListingResponse, n_parallel: int = __DEFAULT_DOWNLOAD_N_PARALLEL, overwrite: bool = False, progress_bar_disable: bool = False, progress_bar_ncols: Optional[int] = None, progress_bar_ascii: Optional[str] = None, progress_bar_desc: Optional[str] = None, timeout: Optional[int] = None) -> FileDownloadResult: """ Download data from the UCalgary Space Remote Sensing Open Data Platform using a FileListingResponse object. This would be used in cases where more customization is needed than the generic `download()` function. One example of using this function would start by using `get_urls()` to retrieve the list of URLs available for download, then further process this list to fewer files based on some other requirement (ie. time down-sampling such as one file per hour). Lastly using this function to download the new custom set URLs. Args: file_listing_response (FileListingResponse): A [`FileListingResponse`](https://docs-pyucalgarysrs.phys.ucalgary.ca/data/classes.html#pyucalgarysrs.data.classes.FileListingResponse) object returned from a `get_urls()` call, which contains a list of URLs to download for a specific dataset. This parameter is required. n_parallel (int): Number of data files to download in parallel. Default value is 5. Adjust as needed for your internet connection. This parameter is optional. overwrite (bool): By default, data will not be re-downloaded if it already exists locally. Use the `overwrite` parameter to force re-downloading. Default is `False`. This parameter is optional. progress_bar_disable (bool): Disable the progress bar. Default is `False`. This parameter is optional. progress_bar_ncols (int): Number of columns for the progress bar (straight passthrough of the `ncols` parameter in a tqdm progress bar). This parameter is optional. See Notes section below for further information. progress_bar_ascii (str): ASCII value to use when constructing the visual aspect of the progress bar (straight passthrough of the `ascii` parameter in a tqdm progress bar). This parameter is optional. See Notes section below for further details. timeout (int): Represents how many seconds to wait for the API to send data before giving up. The default is 10 seconds, or the `api_timeout` value in the super class' `pyucrio.PyUCRio` object. This parameter is optional. Returns: A [`FileDownloadResult`](https://docs-pyucalgarysrs.phys.ucalgary.ca/data/classes.html#pyucalgarysrs.data.classes.FileDownloadResult) object containing details about what data files were downloaded. Raises: pyucrio.exceptions.PyUCRioDownloadError: an error was encountered while downloading a specific file pyucrio.exceptions.PyUCRioAPIError: an API error was encountered Notes: -------- The `progress_bar_*` parameters can be used to enable/disable/adjust the progress bar. Excluding the `progress_bar_disable` parameter, all others are straight pass-throughs to the tqdm progress bar function. The `progress_bar_ncols` parameter allows for adjusting the width. The `progress_bar_ascii` parameter allows for adjusting the appearance of the progress bar. And the `progress_bar_desc` parameter allows for adjusting the description at the beginning of the progress bar. Further details can be found on the [tqdm documentation](https://tqdm.github.io/docs/tqdm/#tqdm-objects). Data downloading will use the `download_data_root_path` variable within the super class' object ([`PyUCRio`](../../index.html#pyucrio.PyUCRio)) to determine where to save data to. If you'd like to change this path to somewhere else you can change that variable before your download() call, like so: ```python import pyucrio rio = pyucrio.PyUCRio() rio.data_download_root_path = "some_new_path" rio.data.download(dataset_name, start, end) ``` """ try: return self.__rio_obj.srs_obj.data.download_using_urls( file_listing_response, n_parallel=n_parallel, overwrite=overwrite, progress_bar_disable=progress_bar_disable, progress_bar_ncols=progress_bar_ncols, progress_bar_ascii=progress_bar_ascii, progress_bar_desc=progress_bar_desc, timeout=timeout, ) except SRSDownloadError as e: raise PyUCRioDownloadError(e) from e except SRSAPIError as e: raise PyUCRioAPIError(e) from e def get_urls(self, dataset_name: str, start: datetime.datetime, end: datetime.datetime, site_uid: Optional[str] = None, timeout: Optional[int] = None) -> FileListingResponse: """ Get URLs of data files The parameters `dataset_name`, `start`, and `end` are required. All other parameters are optional. Args: dataset_name (str): Name of the dataset to download data for. Use the `list_datasets()` function to get the possible values for this parameter. One example is "SWAN_HSR_K0_H5". Note that dataset names are case sensitive. This parameter is required. start (datetime.datetime): Start timestamp to use (inclusive), expected to be in UTC. Any timezone data will be ignored. This parameter is required. end (datetime.datetime): End timestamp to use (inclusive), expected to be in UTC. Any timezone data will be ignored. This parameter is required. site_uid (str): The site UID to filter for. If specified, data will be downloaded for only the site matching the given value. If excluded, data for all available sites will be downloaded. An example value could be 'gill', meaning all data from the Gillam observatory will be downloaded for the given dataset name, start, and end times. This parameter is optional. timeout (int): Represents how many seconds to wait for the API to send data before giving up. The default is 10 seconds, or the `api_timeout` value in the super class' `pyucrio.PyUCRio` object. This parameter is optional. Returns: A [`FileListingResponse`](https://docs-pyucalgarysrs.phys.ucalgary.ca/data/classes.html#pyucalgarysrs.data.classes.FileListingResponse) object containing a list of the available URLs, among other values. Raises: pyucrio.exceptions.PyUCRioAPIError: an API error was encountered """ try: return self.__rio_obj.srs_obj.data.get_urls( dataset_name, start, end, site_uid=site_uid, timeout=timeout, ) except SRSAPIError as e: raise PyUCRioAPIError(e) from e def read(self, dataset: Dataset, file_list: Union[List[str], List[Path], str, Path], n_parallel: int = 1, no_metadata: bool = False, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, quiet: bool = False) -> Data: """ Read in data files for a given dataset. Note that only one type of dataset's data should be read in using a single call. Args: dataset (Dataset): The dataset object for which the files are associated with. This parameter is required. file_list (List[str], List[Path], str, Path): The files to read in. Absolute paths are recommended, but not technically necessary. This can be a single string for a file, or a list of strings to read in multiple files. This parameter is required. n_parallel (int): Number of data files to read in parallel using multiprocessing. Default value is 1. Adjust according to your computer's available resources. This parameter is optional. no_metadata (bool): Skip reading of metadata. This is a minor optimization if the metadata is not needed. Default is `False`. This parameter is optional. start_time (datetime.datetime): The start timestamp to read data onwards from (inclusive). This can be utilized to read a portion of a data file, and could be paired with the `end_time` parameter. This tends to be utilized for datasets that are hour or day-long files where it is possible to only read a smaller bit of that file. If not supplied, it will assume the start time is the timestamp of the first record in the first file supplied (ie. beginning of the supplied data). This parameter is optional. end_time (datetime.datetime): The end timestamp to read data up to (inclusive). This can be utilized to read a portion of a data file, and could be paired with the `start_time` parameter. This tends to be utilized for datasets that are hour or day-long files where it is possible to only read a smaller bit of that file. If not supplied, it will it will assume the end time is the timestamp of the last record in the last file supplied (ie. end of the supplied data). This parameter is optional. quiet (bool): Do not print out errors while reading data files, if any are encountered. Any files that encounter errors will be, as usual, accessible via the `problematic_files` attribute of the returned `Data` object. This parameter is optional. Returns: A [`Data`](https://docs-pyucalgarysrs.phys.ucalgary.ca/data/classes.html#pyucalgarysrs.data.classes.Data) object containing the data read in, among other values. Raises: pyucrio.exceptions.PyUCRioUnsupportedReadError: an unsupported dataset was used when trying to read files. pyucrio.exceptions.PyUCRioError: a generic read error was encountered """ # NOTE: we do not wrap the exceptions here, instead we pass the call along # to the ReadManager object since the method and exception catching is # implemented there. No need to duplicate the exception handling logic. return self.__readers.read( dataset, file_list, n_parallel=n_parallel, no_metadata=no_metadata, start_time=start_time, end_time=end_time, quiet=quiet, )
Instance variables
prop readers
-
Access to the
pyucrio.data.ucalgary.read
submodule from within a PyUCRio object.Expand source code
@property def readers(self): """ Access to the `read` submodule from within a PyUCRio object. """ return self.__readers
Methods
def download(self,
dataset_name: str,
start: datetime.datetime,
end: datetime.datetime,
site_uid: str | None = None,
n_parallel: int = 5,
overwrite: bool = False,
progress_bar_disable: bool = False,
progress_bar_ncols: int | None = None,
progress_bar_ascii: str | None = None,
progress_bar_desc: str | None = None,
timeout: int | None = None) ‑> pyucalgarysrs.data.classes.FileDownloadResult-
Download data from the UCalgary Space Remote Sensing Open Data Platform.
The parameters
dataset_name
,start
, andend
are required. All other parameters are optional.Args
dataset_name
:str
- Name of the dataset to download data for. Use the
list_datasets()
function to get the possible values for this parameter. One example is "SWAN_HSR_K0_H5". Note that dataset names are case sensitive. This parameter is required. start
:datetime.datetime
- Start timestamp to use (inclusive), expected to be in UTC. Any timezone data will be ignored. This parameter is required.
end
:datetime.datetime
- End timestamp to use (inclusive), expected to be in UTC. Any timezone data will be ignored. This parameter is required.
site_uid
:str
- The site UID to filter for. If specified, data will be downloaded for only the site matching the given value. If excluded, data for all available sites will be downloaded. An example value could be 'gill', meaning all data from the Gillam observatory will be downloaded for the given dataset name, start, and end times. This parameter is optional.
n_parallel
:int
- Number of data files to download in parallel. Default value is 5. Adjust as needed for your internet connection. This parameter is optional.
overwrite
:bool
- By default, data will not be re-downloaded if it already exists locally. Use
the
overwrite
parameter to force re-downloading. Default isFalse
. This parameter is optional. progress_bar_disable
:bool
- Disable the progress bar. Default is
False
. This parameter is optional. progress_bar_ncols
:int
- Number of columns for the progress bar (straight passthrough of the
ncols
parameter in a tqdm progress bar). This parameter is optional. See Notes section below for further information. progress_bar_ascii
:str
- ASCII value to use when constructing the visual aspect of the progress bar (straight
passthrough of the
ascii
parameter in a tqdm progress bar). This parameter is optional. See Notes section below for further details. timeout
:int
- Represents how many seconds to wait for the API to send data before giving up. The
default is 10 seconds, or the
api_timeout
value in the super class'PyUCRio
object. This parameter is optional.
Returns
A
FileDownloadResult
object containing details about what data files were downloaded.Raises
PyUCRioDownloadError
- an error was encountered while downloading a specific file
PyUCRioAPIError
- an API error was encountered
Notes:
The
progress_bar_*
parameters can be used to enable/disable/adjust the progress bar. Excluding theprogress_bar_disable
parameter, all others are straight pass-throughs to the tqdm progress bar function. Theprogress_bar_ncols
parameter allows for adjusting the width. Theprogress_bar_ascii
parameter allows for adjusting the appearance of the progress bar. And theprogress_bar_desc
parameter allows for adjusting the description at the beginning of the progress bar. Further details can be found on the tqdm documentation.Data downloading will use the
download_data_root_path
variable within the super class' object (PyUCRio
) to determine where to save data to. If you'd like to change this path to somewhere else you can change that variable before your download() call, like so:import pyucrio rio = pyucrio.PyUCRio() rio.data_download_root_path = "some_new_path" rio.data.download(dataset_name, start, end)
def download_using_urls(self,
file_listing_response: pyucalgarysrs.data.classes.FileListingResponse,
n_parallel: int = 5,
overwrite: bool = False,
progress_bar_disable: bool = False,
progress_bar_ncols: int | None = None,
progress_bar_ascii: str | None = None,
progress_bar_desc: str | None = None,
timeout: int | None = None) ‑> pyucalgarysrs.data.classes.FileDownloadResult-
Download data from the UCalgary Space Remote Sensing Open Data Platform using a FileListingResponse object. This would be used in cases where more customization is needed than the generic
download()
function.One example of using this function would start by using
get_urls()
to retrieve the list of URLs available for download, then further process this list to fewer files based on some other requirement (ie. time down-sampling such as one file per hour). Lastly using this function to download the new custom set URLs.Args
file_listing_response
:FileListingResponse
- A
FileListingResponse
object returned from aget_urls()
call, which contains a list of URLs to download for a specific dataset. This parameter is required. n_parallel
:int
- Number of data files to download in parallel. Default value is 5. Adjust as needed for your internet connection. This parameter is optional.
overwrite
:bool
- By default, data will not be re-downloaded if it already exists locally. Use
the
overwrite
parameter to force re-downloading. Default isFalse
. This parameter is optional. progress_bar_disable
:bool
- Disable the progress bar. Default is
False
. This parameter is optional. progress_bar_ncols
:int
- Number of columns for the progress bar (straight passthrough of the
ncols
parameter in a tqdm progress bar). This parameter is optional. See Notes section below for further information. progress_bar_ascii
:str
- ASCII value to use when constructing the visual aspect of the progress bar (straight
passthrough of the
ascii
parameter in a tqdm progress bar). This parameter is optional. See Notes section below for further details. timeout
:int
- Represents how many seconds to wait for the API to send data before giving up. The
default is 10 seconds, or the
api_timeout
value in the super class'PyUCRio
object. This parameter is optional.
Returns
A
FileDownloadResult
object containing details about what data files were downloaded.Raises
PyUCRioDownloadError
- an error was encountered while downloading a specific file
PyUCRioAPIError
- an API error was encountered
Notes:
The
progress_bar_*
parameters can be used to enable/disable/adjust the progress bar. Excluding theprogress_bar_disable
parameter, all others are straight pass-throughs to the tqdm progress bar function. Theprogress_bar_ncols
parameter allows for adjusting the width. Theprogress_bar_ascii
parameter allows for adjusting the appearance of the progress bar. And theprogress_bar_desc
parameter allows for adjusting the description at the beginning of the progress bar. Further details can be found on the tqdm documentation.Data downloading will use the
download_data_root_path
variable within the super class' object (PyUCRio
) to determine where to save data to. If you'd like to change this path to somewhere else you can change that variable before your download() call, like so:import pyucrio rio = pyucrio.PyUCRio() rio.data_download_root_path = "some_new_path" rio.data.download(dataset_name, start, end)
def get_dataset(self, name: str, timeout: int | None = None) ‑> pyucalgarysrs.data.classes.Dataset
-
Get a specific dataset
Args
name
:str
- The dataset name to get. Case is insensitive.
timeout
:int
- Represents how many seconds to wait for the API to send data before giving up. The
default is 10 seconds, or the
api_timeout
value in the super class'PyUCRio
object. This parameter is optional.
Returns
The found
Dataset
object. Raises an exception if not found.Raises
PyUCRioAPIError
- An API error was encountered.
def get_urls(self,
dataset_name: str,
start: datetime.datetime,
end: datetime.datetime,
site_uid: str | None = None,
timeout: int | None = None) ‑> pyucalgarysrs.data.classes.FileListingResponse-
Get URLs of data files
The parameters
dataset_name
,start
, andend
are required. All other parameters are optional.Args
dataset_name
:str
- Name of the dataset to download data for. Use the
list_datasets()
function to get the possible values for this parameter. One example is "SWAN_HSR_K0_H5". Note that dataset names are case sensitive. This parameter is required. start
:datetime.datetime
- Start timestamp to use (inclusive), expected to be in UTC. Any timezone data will be ignored. This parameter is required.
end
:datetime.datetime
- End timestamp to use (inclusive), expected to be in UTC. Any timezone data will be ignored. This parameter is required.
site_uid
:str
- The site UID to filter for. If specified, data will be downloaded for only the site matching the given value. If excluded, data for all available sites will be downloaded. An example value could be 'gill', meaning all data from the Gillam observatory will be downloaded for the given dataset name, start, and end times. This parameter is optional.
timeout
:int
- Represents how many seconds to wait for the API to send data before giving up. The
default is 10 seconds, or the
api_timeout
value in the super class'PyUCRio
object. This parameter is optional.
Returns
A
FileListingResponse
object containing a list of the available URLs, among other values.Raises
PyUCRioAPIError
- an API error was encountered
def is_read_supported(self, dataset_name: str) ‑> bool
-
Check if a given dataset has file reading support.
Not all datasets available in the UCalgary Space Remote Sensing Open Data Platform have special readfile routines in this library. This is because some datasets are handled by other libraries (ie. PyAuroraX for ASI data), or are in basic formats such as JPG or PNG, so unique functions aren't necessary. We leave it up to the user to open those basic files in whichever way they prefer. Use the
list_supported_read_datasets()
function to see all datasets that have special file reading functionality in this library.Args
dataset_name
:str
- The dataset name to check if file reading is supported. This parameter is required.
Returns
Boolean indicating if file reading is supported.
def list_datasets(self, name: str | None = None, timeout: int | None = None) ‑> List[pyucalgarysrs.data.classes.Dataset]
-
List available datasets
Args
name
:str
- Supply a name used for filtering. If that name is found in the available dataset names received from the API, it will be included in the results. This parameter is optional.
timeout
:int
- Represents how many seconds to wait for the API to send data before giving up. The
default is 10 seconds, or the
api_timeout
value in the super class'PyUCRio
object. This parameter is optional.
Returns
A list of
Dataset
objects.Raises
PyUCRioAPIError
- An API error was encountered.
def list_observatories(self,
instrument_array: Literal['norstar_riometer', 'swan_hsr'],
uid: str | None = None,
timeout: int | None = None) ‑> List[pyucalgarysrs.data.classes.Observatory]-
List information about observatories
Args
instrument_array
:str
- The instrument array to list observatories for. Valid values are: norstar_riometer, and swan_hsr.
uid
:str
- Supply a observatory unique identifier used for filtering (usually 4-letter site code). If that UID is found in the available observatories received from the API, it will be included in the results. This parameter is optional.
timeout
:int
- Represents how many seconds to wait for the API to send data before giving up. The
default is 10 seconds, or the
api_timeout
value in the super class'PyUCRio
object. This parameter is optional.
Returns
A list of
Observatory
objects.Raises
PyUCRioAPIError
- An API error was encountered.
def list_supported_read_datasets(self) ‑> List[str]
-
List the datasets which have file reading capabilities supported.
Returns
A list of the dataset names with file reading support.
def read(self,
dataset: pyucalgarysrs.data.classes.Dataset,
file_list: List[str] | List[pathlib.Path] | str | pathlib.Path,
n_parallel: int = 1,
no_metadata: bool = False,
start_time: datetime.datetime | None = None,
end_time: datetime.datetime | None = None,
quiet: bool = False) ‑> pyucalgarysrs.data.classes.Data-
Read in data files for a given dataset. Note that only one type of dataset's data should be read in using a single call.
Args
dataset
:Dataset
- The dataset object for which the files are associated with. This parameter is required.
file_list
:List[str], List[Path], str, Path
- The files to read in. Absolute paths are recommended, but not technically necessary. This can be a single string for a file, or a list of strings to read in multiple files. This parameter is required.
n_parallel
:int
- Number of data files to read in parallel using multiprocessing. Default value is 1. Adjust according to your computer's available resources. This parameter is optional.
no_metadata
:bool
- Skip reading of metadata. This is a minor optimization if the metadata is not needed.
Default is
False
. This parameter is optional. start_time
:datetime.datetime
- The start timestamp to read data onwards from (inclusive). This can be utilized to
read a portion of a data file, and could be paired with the
end_time
parameter. This tends to be utilized for datasets that are hour or day-long files where it is possible to only read a smaller bit of that file. If not supplied, it will assume the start time is the timestamp of the first record in the first file supplied (ie. beginning of the supplied data). This parameter is optional. end_time
:datetime.datetime
- The end timestamp to read data up to (inclusive). This can be utilized to read a
portion of a data file, and could be paired with the
start_time
parameter. This tends to be utilized for datasets that are hour or day-long files where it is possible to only read a smaller bit of that file. If not supplied, it will it will assume the end time is the timestamp of the last record in the last file supplied (ie. end of the supplied data). This parameter is optional. quiet
:bool
- Do not print out errors while reading data files, if any are encountered. Any files
that encounter errors will be, as usual, accessible via the
problematic_files
attribute of the returnedData
object. This parameter is optional.
Returns
A
Data
object containing the data read in, among other values.Raises
PyUCRioUnsupportedReadError
- an unsupported dataset was used when trying to read files.
PyUCRioError
- a generic read error was encountered