geospatial_tools
auth #
This module contains authentication-related functions.
get_copernicus_credentials #
get_copernicus_credentials(logger: Logger = LOGGER) -> tuple[str, str] | None
Retrieves Copernicus credentials from environment variables or prompts the user.
This function first checks for COPERNICUS_USERNAME and COPERNICUS_PASSWORD
environment variables. If they are not set, it interactively prompts the user
for their username and password.
Using environment variables is recommended for security and to comply with the 12-factor app methodology, which separates configuration from code. This prevents hardcoding sensitive information and makes the application more portable across different environments (development, testing, production).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `logger` | `Logger` | Logger instance. | `LOGGER` |
Returns:

| Type | Description |
|---|---|
| `tuple[str, str] \| None` | A tuple containing the username and password, or `None` if they could not be obtained. |

Source code in src/geospatial_tools/auth.py
get_copernicus_token #
get_copernicus_token(logger: Logger = LOGGER) -> str | None
Retrieves an access token from the Copernicus Data Space Ecosystem.
This function uses the credentials obtained from get_copernicus_credentials
to request an access token from the authentication endpoint.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `logger` | `Logger` | Logger instance. | `LOGGER` |
Returns:

| Type | Description |
|---|---|
| `str \| None` | The access token as a string, or `None` if authentication fails. |

Source code in src/geospatial_tools/auth.py
copernicus #
Copernicus Data Space Ecosystem (CDSE) tools and constants.
CopernicusS2Band #
Bases: str, Enum
Copernicus Sentinel-2 Bands for Level-2A.
The value of each member corresponds to the asset key for the band. Base band names (e.g., 'B02') default to their native resolution. Explicit resolution members (e.g., 'B02_20m') are also provided.
native_res (property) #
native_res: int
Returns the native resolution of the band in meters.
Defaults to 10m if band base name is not recognized.
at_res #
at_res(resolution: int | CopernicusS2Resolution) -> str
Returns the asset key for this band at the specified resolution.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `resolution` | `int \| CopernicusS2Resolution` | The resolution to get the key for (e.g., `20` or `CopernicusS2Resolution.R20M`). | *required* |

Returns:

| Type | Description |
|---|---|
| `str` | The asset key string (e.g., `'B02_20m'`). |

Source code in src/geospatial_tools/copernicus/sentinel_2.py
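The band-key mechanics described above can be sketched with plain functions; the resolution table below is an illustrative subset based on the public Sentinel-2 Level-2A band specifications, not the enum's actual source:

```python
# Illustrative subset of Sentinel-2 L2A native resolutions (meters).
NATIVE_RES_M = {"B01": 60, "B02": 10, "B03": 10, "B04": 10, "B05": 20, "B11": 20}

def native_res(band: str) -> int:
    # Falls back to 10 m when the base band name is not recognized,
    # mirroring the documented default.
    return NATIVE_RES_M.get(band.split("_")[0], 10)

def at_res(band: str, resolution: int) -> str:
    # The asset-key format implied by the docs, e.g. "B02_20m".
    return f"{band.split('_')[0]}_{resolution}m"
```

For example, `at_res("B02", 20)` yields the explicit-resolution key `"B02_20m"` mentioned above.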
__str__ #
__str__() -> str
Returns the band name as a string.
Source code in src/geospatial_tools/copernicus/sentinel_2.py
__repr__ #
__repr__() -> str
Returns the band name as a string.
Source code in src/geospatial_tools/copernicus/sentinel_2.py
CopernicusS2Collection #
Bases: str, Enum
Copernicus Sentinel-2 Collections.
__str__ #
__str__() -> str
Returns the collection name as a string.
Source code in src/geospatial_tools/copernicus/sentinel_2.py
__repr__ #
__repr__() -> str
Returns the collection name as a string.
Source code in src/geospatial_tools/copernicus/sentinel_2.py
CopernicusS2Resolution #
Bases: int, Enum
Copernicus Sentinel-2 Resolutions in meters.
__str__ #
__str__() -> str
Returns the resolution as a string with 'm' suffix.
Source code in src/geospatial_tools/copernicus/sentinel_2.py
__repr__ #
__repr__() -> str
Returns the resolution as a string.
Source code in src/geospatial_tools/copernicus/sentinel_2.py
sentinel_2 #
This module contains Enums for Sentinel-2 on Copernicus Data Space Ecosystem (CDSE).
CopernicusS2Collection #
Bases: str, Enum
Copernicus Sentinel-2 Collections.
__str__ #
__str__() -> str
Returns the collection name as a string.
Source code in src/geospatial_tools/copernicus/sentinel_2.py
__repr__ #
__repr__() -> str
Returns the collection name as a string.
Source code in src/geospatial_tools/copernicus/sentinel_2.py
CopernicusS2Resolution #
Bases: int, Enum
Copernicus Sentinel-2 Resolutions in meters.
__str__ #
__str__() -> str
Returns the resolution as a string with 'm' suffix.
Source code in src/geospatial_tools/copernicus/sentinel_2.py
__repr__ #
__repr__() -> str
Returns the resolution as a string.
Source code in src/geospatial_tools/copernicus/sentinel_2.py
CopernicusS2Band #
Bases: str, Enum
Copernicus Sentinel-2 Bands for Level-2A.
The value of each member corresponds to the asset key for the band. Base band names (e.g., 'B02') default to their native resolution. Explicit resolution members (e.g., 'B02_20m') are also provided.
native_res (property) #
native_res: int
Returns the native resolution of the band in meters.
Defaults to 10m if band base name is not recognized.
at_res #
at_res(resolution: int | CopernicusS2Resolution) -> str
Returns the asset key for this band at the specified resolution.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `resolution` | `int \| CopernicusS2Resolution` | The resolution to get the key for (e.g., `20` or `CopernicusS2Resolution.R20M`). | *required* |

Returns:

| Type | Description |
|---|---|
| `str` | The asset key string (e.g., `'B02_20m'`). |

Source code in src/geospatial_tools/copernicus/sentinel_2.py
__str__ #
__str__() -> str
Returns the band name as a string.
Source code in src/geospatial_tools/copernicus/sentinel_2.py
__repr__ #
__repr__() -> str
Returns the band name as a string.
Source code in src/geospatial_tools/copernicus/sentinel_2.py
download #
download_usa_polygon #
download_usa_polygon(
output_name: str = USA_POLYGON, output_directory: str | Path = DATA_DIR
) -> list[str | Path]
Download USA polygon file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `output_name` | `str` | Name to give to the downloaded file. | `USA_POLYGON` |
| `output_directory` | `str \| Path` | Where to save the downloaded file. | `DATA_DIR` |

Returns:

| Type | Description |
|---|---|
| `list[str \| Path]` | Path(s) to the downloaded file. |

Source code in src/geospatial_tools/download.py
download_s2_tiling_grid #
download_s2_tiling_grid(
output_name: str = SENTINEL_2_TILLING_GRID, output_directory: str | Path = DATA_DIR
) -> list[str | Path]
" Download Sentinel 2 tiling grid file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `output_name` | `str` | Name to give to the downloaded file. | `SENTINEL_2_TILLING_GRID` |
| `output_directory` | `str \| Path` | Where to save the downloaded file. | `DATA_DIR` |

Returns:

| Type | Description |
|---|---|
| `list[str \| Path]` | Path(s) to the downloaded file. |

Source code in src/geospatial_tools/download.py
geotools_types #
This module contains constants and functions pertaining to data types.
BBoxLike (module-attribute) #
BBoxLike = tuple[float, float, float, float]
BBox like tuple structure used for type checking.
IntersectsLike (module-attribute) #
IntersectsLike = (
Point
| Polygon
| LineString
| MultiPolygon
| MultiPoint
| MultiLineString
| GeometryCollection
)
Intersect-like union of types used for type checking.
DateLike (module-attribute) #
DateLike = (
datetime
| str
| None
| tuple[datetime | str | None, datetime | str | None]
| list[datetime | str | None]
| Iterator[datetime | str | None]
)
Date-like union of types used for type checking.
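As a quick illustration of how these aliases are meant to appear in signatures, here is a small sketch; the simplified `DateLike` below omits the tuple/list/iterator members, and `IntersectsLike` is left out because its members are shapely geometry types:

```python
from datetime import datetime
from typing import Union

# Simplified stand-ins for the documented aliases.
BBoxLike = tuple[float, float, float, float]
DateLike = Union[datetime, str, None]

def bbox_is_valid(bbox: BBoxLike) -> bool:
    # A bbox is (xmin, ymin, xmax, ymax) with each min strictly below its max.
    xmin, ymin, xmax, ymax = bbox
    return xmin < xmax and ymin < ymax
```

Type aliases like these let function signatures stay short while still documenting the accepted shapes.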
planetary_computer #
sentinel_2 #
BestProductsForFeatures #
BestProductsForFeatures(
sentinel2_tiling_grid: GeoDataFrame,
sentinel2_tiling_grid_column: str,
vector_features: GeoDataFrame,
vector_features_column: str,
date_ranges: list[str] | None = None,
max_cloud_cover: int = 5,
max_no_data_value: int = 5,
logger: Logger = LOGGER,
)
Class made to facilitate and automate searching for Sentinel 2 products using the Sentinel 2 tiling grid as a reference.
A current limitation is that each vector feature used must fit, or be completely contained, inside a single Sentinel 2 tiling grid cell.
For larger features, a mosaic of products will be necessary.
This class was conceived first and foremost to be used for numerous smaller vector
features, like polygon grids created from
geospatial_tools.vector.create_vector_grid
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sentinel2_tiling_grid` | `GeoDataFrame` | GeoDataFrame containing the Sentinel 2 tiling grid. | *required* |
| `sentinel2_tiling_grid_column` | `str` | Name of the column in `sentinel2_tiling_grid` containing the tile names. | *required* |
| `vector_features` | `GeoDataFrame` | GeoDataFrame containing the vector features for which the best Sentinel 2 products will be chosen. | *required* |
| `vector_features_column` | `str` | Name of the column in `vector_features` containing the feature IDs. | *required* |
| `date_ranges` | `list[str] \| None` | Date ranges used to search for Sentinel 2 products; should be created using `create_date_ranges`. | `None` |
| `max_cloud_cover` | `int` | Maximum cloud cover used to search for Sentinel 2 products. | `5` |
| `max_no_data_value` | `int` | Maximum percentage of no-data coverage allowed per product. | `5` |
| `logger` | `Logger` | Logger instance. | `LOGGER` |

Source code in src/geospatial_tools/planetary_computer/sentinel_2.py
max_cloud_cover (property, writable) #

max_cloud_cover
Max % of cloud cover used for Sentinel 2 product search.
create_date_ranges #
create_date_ranges(
start_year: int, end_year: int, start_month: int, end_month: int
) -> list[str]
This function creates a list of date ranges.
For example, to create date ranges for 2020 and 2021, but only for the months from March to May, two ranges are expected: [2020-03-01 to 2020-05-31, 2021-03-01 to 2021-05-31].
It handles the automatic definition of the last day of the end month, as well as periods that cross over years.
For example, to create date ranges for 2020 to 2022, but only for the months from November to January, two ranges are expected: [2020-11-01 to 2021-01-31, 2021-11-01 to 2022-01-31].
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `start_year` | `int` | Start year for ranges. | *required* |
| `end_year` | `int` | End year for ranges. | *required* |
| `start_month` | `int` | Starting month for each period. | *required* |
| `end_month` | `int` | End month for each period (inclusive). | *required* |

Returns:

| Type | Description |
|---|---|
| `list[str]` | List of date ranges. |

Source code in src/geospatial_tools/planetary_computer/sentinel_2.py
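The range-building logic described above, including the year-crossing case, can be sketched as follows; the STAC-style `"start/end"` interval string is an assumption for illustration, and the real output format may differ:

```python
import calendar

def create_date_ranges(start_year, end_year, start_month, end_month):
    # Sketch of the documented behavior: one range per starting year,
    # with the last day of the end month computed automatically.
    crosses_year = end_month < start_month          # e.g. November -> January
    last_start_year = end_year - 1 if crosses_year else end_year
    ranges = []
    for year in range(start_year, last_start_year + 1):
        end_y = year + 1 if crosses_year else year
        last_day = calendar.monthrange(end_y, end_month)[1]
        ranges.append(
            f"{year}-{start_month:02d}-01/{end_y}-{end_month:02d}-{last_day}"
        )
    return ranges
```

Both documented examples fall out of this: March–May stays within each year, while November–January rolls the end date into the following year.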
find_best_complete_products #
find_best_complete_products(
max_cloud_cover: int | None = None, max_no_data_value: int = 5
) -> dict
Finds the best complete products for each Sentinel 2 tile. This function filters out all products whose no-data coverage exceeds `max_no_data_value` percent.
Filtered-out tiles are stored in `self.incomplete`, and tiles for which
the search found no results are stored in `self.error_list`.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `max_cloud_cover` | `int \| None` | Max percentage of cloud cover allowed for the search. | `None` |
| `max_no_data_value` | `int` | Max percentage of no-data coverage allowed per individual Sentinel 2 product. | `5` |

Returns:

| Type | Description |
|---|---|
| `dict` | Dictionary of product IDs and their corresponding Sentinel 2 tile names. |

Source code in src/geospatial_tools/planetary_computer/sentinel_2.py
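The selection described above can be sketched with plain dictionaries; the keys (`cloud_cover`, `no_data`) and the tie-breaking rule (keep the least cloudy survivor) are assumptions for illustration, not the class's actual implementation:

```python
def pick_best_product(products, max_cloud_cover=5, max_no_data_value=5):
    # Drop products above either threshold, then keep the least cloudy
    # of the remaining "complete" products.
    complete = [
        p for p in products
        if p["cloud_cover"] <= max_cloud_cover
        and p["no_data"] <= max_no_data_value
    ]
    if not complete:
        return None  # would land in an error/incomplete list
    return min(complete, key=lambda p: p["cloud_cover"])
```

Note how a product with very low cloud cover can still lose if its no-data coverage exceeds the threshold.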
select_best_products_per_feature #
select_best_products_per_feature() -> GeoDataFrame
Return a GeoDataFrame containing the best products for each Sentinel 2 tile.
Source code in src/geospatial_tools/planetary_computer/sentinel_2.py
to_file #
to_file(output_dir: str | Path) -> None

Writes the search results to files in the given output directory.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `output_dir` | `str \| Path` | Output directory used to write to file. | *required* |

Source code in src/geospatial_tools/planetary_computer/sentinel_2.py
sentinel_2_complete_tile_search #
sentinel_2_complete_tile_search(
tile_id: int,
date_ranges: list[str],
max_cloud_cover: int,
max_no_data_value: int = 5,
) -> tuple[int, str, float | None, float | None] | None
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `tile_id` | `int` |  | *required* |
| `date_ranges` | `list[str]` |  | *required* |
| `max_cloud_cover` | `int` |  | *required* |
| `max_no_data_value` | `int` |  | `5` |

Returns:

| Type | Description |
|---|---|
| `tuple[int, str, float \| None, float \| None] \| None` |  |

Source code in src/geospatial_tools/planetary_computer/sentinel_2.py
find_best_product_per_s2_tile #
find_best_product_per_s2_tile(
date_ranges: list[str],
max_cloud_cover: int,
s2_tile_grid_list: list,
max_no_data_value: int = 5,
num_of_workers: int = 4,
)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `date_ranges` | `list[str]` |  | *required* |
| `max_cloud_cover` | `int` |  | *required* |
| `s2_tile_grid_list` | `list` |  | *required* |
| `max_no_data_value` | `int` |  | `5` |
| `num_of_workers` | `int` |  | `4` |

Returns:

Source code in src/geospatial_tools/planetary_computer/sentinel_2.py
write_best_product_ids_to_dataframe #
write_best_product_ids_to_dataframe(
spatial_join_results: GeoDataFrame,
tile_dictionary: dict,
best_product_column: str = "best_s2_product_id",
s2_tiles_column: str = "s2_tiles",
logger: Logger = LOGGER,
) -> None
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `spatial_join_results` | `GeoDataFrame` |  | *required* |
| `tile_dictionary` | `dict` |  | *required* |
| `best_product_column` | `str` |  | `'best_s2_product_id'` |
| `s2_tiles_column` | `str` |  | `'s2_tiles'` |
| `logger` | `Logger` |  | `LOGGER` |

Returns:

Source code in src/geospatial_tools/planetary_computer/sentinel_2.py
write_results_to_file #
write_results_to_file(
cloud_cover: int,
successful_results: dict,
incomplete_results: list | None = None,
error_results: list | None = None,
output_dir: str | Path = DATA_DIR,
logger: Logger = LOGGER,
) -> dict
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cloud_cover` | `int` |  | *required* |
| `successful_results` | `dict` |  | *required* |
| `incomplete_results` | `list \| None` |  | `None` |
| `error_results` | `list \| None` |  | `None` |
| `output_dir` | `str \| Path` |  | `DATA_DIR` |
| `logger` | `Logger` |  | `LOGGER` |

Returns:

Source code in src/geospatial_tools/planetary_computer/sentinel_2.py
download_and_process_sentinel2_asset #
download_and_process_sentinel2_asset(
product_id: str,
product_bands: list[str],
collections: str = "sentinel-2-l2a",
target_projection: int | str | None = None,
base_directory: str | Path = DATA_DIR,
delete_intermediate_files: bool = False,
logger: Logger = LOGGER,
) -> Asset
This function downloads a Sentinel 2 product based on the provided product ID.
It downloads the individual asset bands listed in the `product_bands` argument,
merges them all into a single TIFF, and then reprojects the result to the target CRS.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `product_id` | `str` | ID of the Sentinel 2 product to be downloaded. | *required* |
| `product_bands` | `list[str]` | List of the product bands to be downloaded. | *required* |
| `collections` | `str` | Collections to download from. | `'sentinel-2-l2a'` |
| `target_projection` | `int \| str \| None` | The target CRS for the end product. If `None`, the product is left in its native projection. | `None` |
| `stac_client` | `StacSearch` | StacSearch client to use. A new one will be created if not provided. | *required* |
| `base_directory` | `str \| Path` | The base directory path where the downloaded files will be stored. | `DATA_DIR` |
| `delete_intermediate_files` | `bool` | Flag to determine if intermediate files should be deleted. | `False` |
| `logger` | `Logger` | Logger instance. | `LOGGER` |

Returns:

| Type | Description |
|---|---|
| `Asset` | Asset instance. |

Source code in src/geospatial_tools/planetary_computer/sentinel_2.py
radar #
nimrod #
extract_nimrod_from_archive #
extract_nimrod_from_archive(
archive_file_path: str | Path, output_directory: str | Path | None = None
) -> Path
Extract nimrod data from an archive file. If no output directory is provided, the extracted data will be saved to the archive file's directory.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `archive_file_path` | `str \| Path` | Path to the archive file. | *required* |
| `output_directory` | `str \| Path \| None` | Optional output directory. | `None` |

Returns:

| Type | Description |
|---|---|
| `Path` | Path to the extracted nimrod data file. |

Source code in src/geospatial_tools/radar/nimrod.py
load_nimrod_cubes #
load_nimrod_cubes(filenames: list[str | Path]) -> Generator[Cube | Any, Any]
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `filenames` | `list[str \| Path]` | List of nimrod files. | *required* |

Returns:

| Type | Description |
|---|---|
| `Generator[Cube \| Any, Any]` | Generator of cubes. |

Source code in src/geospatial_tools/radar/nimrod.py
load_nimrod_from_archive #
load_nimrod_from_archive(filename: str | Path) -> Generator[Cube | Any, Any]
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `filename` | `str \| Path` | Path to the archive file. | *required* |

Returns:

| Type | Description |
|---|---|
| `Generator[Cube \| Any, Any]` | Generator of cubes. |

Source code in src/geospatial_tools/radar/nimrod.py
merge_nimrod_cubes #
merge_nimrod_cubes(cubes: list[Cube]) -> Cube
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cubes` | `list[Cube]` | List of cubes to merge. | *required* |

Returns:

| Type | Description |
|---|---|
| `Cube` | Merged cube. |

Source code in src/geospatial_tools/radar/nimrod.py
mean_nimrod_cubes #
mean_nimrod_cubes(merged_cubes: Cube) -> Cube
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `merged_cubes` | `Cube` | Merged cube. | *required* |

Returns:

| Type | Description |
|---|---|
| `Cube` | Mean cube. |

Source code in src/geospatial_tools/radar/nimrod.py
write_cube_to_file #
write_cube_to_file(cube: Cube, output_name: str | Path) -> None
Save a nimrod cube to a NetCDF file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cube` | `Cube` | Cube to save. | *required* |
| `output_name` | `str \| Path` | Output filename. | *required* |

Source code in src/geospatial_tools/radar/nimrod.py
assert_dataset_time_dim_is_valid #
assert_dataset_time_dim_is_valid(
dataset: Dataset, time_dimension_name: str = "time"
) -> None
This function checks that the time dimension of a given dataset:
- is composed of 5-minute time bins, which is the native Nimrod format;
- contains a continuous time series, without any holes, which would otherwise lead to false statistics when resampling.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset` | `Dataset` | Merged nimrod cube. | *required* |
| `time_dimension_name` | `str` | Name of the time dimension. | `'time'` |

Returns:

| Type | Description |
|---|---|
| `None` | The dataset is validated in place; an error is raised if the time bins are not 5 minutes long or if there are gaps in the time series. |

Source code in src/geospatial_tools/radar/nimrod.py
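A minimal version of these two checks, operating on a plain list of timestamps rather than an xarray Dataset, could look like this:

```python
from datetime import datetime, timedelta

def time_dim_is_valid(times) -> bool:
    # Both documented checks collapse into one condition on a sorted
    # series: every consecutive pair must be exactly 5 minutes apart.
    step = timedelta(minutes=5)
    return all(b - a == step for a, b in zip(times, times[1:]))
```

A single missing 5-minute bin breaks the pairwise condition, which is exactly the "hole" that would skew resampled statistics.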
resample_nimrod_timebox_30min_bins #
resample_nimrod_timebox_30min_bins(
filenames: list[str | Path], output_name: str | Path
) -> str | Path
This will resample nimrod data's bins to a 30-minute interval instead of their native 5-minute interval. It uses mean resampling, and creates time bins as follows:
e.g. [[09h00, < 09h05], [09h05, < 09h10], ...] -> [[09h00, < 09h30], [09h30, < 10h00], ...]
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `filenames` | `list[str \| Path]` | List of NetCDF nimrod files. | *required* |
| `output_name` | `str \| Path` | Output filename. | *required* |

Returns:

| Type | Description |
|---|---|
| `str \| Path` | Path to the output file. |

Source code in src/geospatial_tools/radar/nimrod.py
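The bin arithmetic above can be sketched with stdlib tools; the real function works on NetCDF files, so this only illustrates how 5-minute observations collapse into left-closed 30-minute mean bins:

```python
from datetime import datetime, timedelta

def mean_30min_bins(times, values):
    # Assign each 5-minute observation to the half-hour it falls in,
    # then average the values collected in each bin.
    bins = {}
    for t, v in zip(times, values):
        bin_start = t.replace(minute=t.minute - t.minute % 30, second=0, microsecond=0)
        bins.setdefault(bin_start, []).append(v)
    return {k: sum(v) / len(v) for k, v in sorted(bins.items())}
```

Six observations from 09h00 to 09h25 all land in the [09h00, < 09h30) bin and are averaged together.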
raster #
This module contains functions that process or create raster/image data.
reproject_raster #
reproject_raster(
dataset_path: str | Path,
target_crs: str | int,
target_path: str | Path,
logger: Logger = LOGGER,
) -> Path | None
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset_path` | `str \| Path` | Path to the dataset to be reprojected. | *required* |
| `target_crs` | `str \| int` | EPSG code in string or int format. Can be given in the following ways: `5070` \| `"5070"` \| `"EPSG:5070"`. | *required* |
| `target_path` | `str \| Path` | Path and filename for the reprojected dataset. | *required* |
| `logger` | `Logger` | Logger instance. | `LOGGER` |

Returns:

| Type | Description |
|---|---|
| `Path \| None` | Path to the reprojected dataset, or `None` on failure. |

Source code in src/geospatial_tools/raster.py
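The three accepted `target_crs` forms can be normalized with a small helper; `normalize_epsg` is hypothetical, shown only to illustrate the documented input formats, not the module's actual code:

```python
def normalize_epsg(code) -> str:
    # Accepts 5070, "5070", or "EPSG:5070" and returns the canonical
    # "EPSG:<code>" string; raises ValueError on anything else.
    text = str(code).upper()
    if text.startswith("EPSG:"):
        text = text[5:]
    return f"EPSG:{int(text)}"
```

All three documented spellings normalize to the same string, which is why the function can accept `str | int` interchangeably.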
clip_raster_with_polygon #
clip_raster_with_polygon(
raster_image: Path | str,
polygon_layer: Path | str | GeoDataFrame,
base_output_filename: str | None = None,
output_dir: str | Path = DATA_DIR,
num_of_workers: int | None = None,
logger: Logger = LOGGER,
) -> list[Path]
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `raster_image` | `Path \| str` | Path to raster image to be clipped. | *required* |
| `polygon_layer` | `Path \| str \| GeoDataFrame` | Polygon layer whose polygons will be used to clip the raster image. | *required* |
| `base_output_filename` | `str \| None` | Base filename for outputs. | `None` |
| `output_dir` | `str \| Path` | Directory path where output will be written. | `DATA_DIR` |
| `num_of_workers` | `int \| None` | The number of processes to use for parallel execution. If using on a compute cluster, please set a specific amount (ex. 1 per CPU core requested). Defaults to `None`. | `None` |
| `logger` | `Logger` | Logger instance. | `LOGGER` |

Returns:

| Type | Description |
|---|---|
| `list[Path]` | Paths to the clipped rasters. |

Source code in src/geospatial_tools/raster.py
get_total_band_count #
get_total_band_count(
raster_file_list: Sequence[Path | str], logger: Logger = LOGGER
) -> int
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `raster_file_list` | `Sequence[Path \| str]` | List of raster files to be processed. | *required* |
| `logger` | `Logger` | Logger instance. | `LOGGER` |

Returns:

| Type | Description |
|---|---|
| `int` | Total number of bands across all input rasters. |

Source code in src/geospatial_tools/raster.py
create_merged_raster_bands_metadata #
create_merged_raster_bands_metadata(
raster_file_list: Sequence[Path | str], logger: Logger = LOGGER
) -> dict
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `raster_file_list` | `Sequence[Path \| str]` |  | *required* |
| `logger` | `Logger` |  | `LOGGER` |

Returns:

| Type | Description |
|---|---|
| `dict` |  |

Source code in src/geospatial_tools/raster.py
merge_raster_bands #
merge_raster_bands(
raster_file_list: Sequence[Path | str],
merged_filename: Path | str,
merged_band_names: list[str] | None = None,
merged_metadata: dict | None = None,
logger: Logger = LOGGER,
) -> Path | None
This function aims to combine multiple overlapping raster bands into a single raster image.
Example use case: three bands, B0, B1 and B2, each stored as an independent raster file (as is the case with downloaded STAC data).
While it can probably be used to create spatial time series, and not just to combine bands from a single image product, it has not yet been tested for that specific purpose.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `raster_file_list` | `Sequence[Path \| str]` | List of raster files to be processed. | *required* |
| `merged_filename` | `Path \| str` | Name of output raster file. | *required* |
| `merged_metadata` | `dict \| None` | Dictionary of metadata to use if you prefer to create it independently. | `None` |
| `merged_band_names` | `list[str] \| None` | Names of final output raster bands. For example, with three single-band images, `raster_file_list = ["image01_B0.tif", "image01_B1.tif", "image01_B2.tif"]`, individual band IDs can be assigned for the final output raster: `["B0", "B1", "B2"]`. | `None` |
| `logger` | `Logger` | Logger instance. | `LOGGER` |

Returns:

| Type | Description |
|---|---|
| `Path \| None` | Path to the merged raster. |

Source code in src/geospatial_tools/raster.py
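Conceptually, the merge pairs each single-band input with a band ID after checking that all inputs cover the same grid. A rasterio-free sketch of that pairing (arrays reduced to nested lists; the real function reads and writes GeoTIFFs):

```python
def merge_bands(band_arrays, band_names):
    # One band name is required per input array, mirroring how
    # merged_band_names labels the inputs in raster_file_list.
    if len(band_arrays) != len(band_names):
        raise ValueError("One band name is required per input band")
    shape = (len(band_arrays[0]), len(band_arrays[0][0]))
    for arr in band_arrays:
        if (len(arr), len(arr[0])) != shape:
            raise ValueError("All bands must share the same raster grid")
    return dict(zip(band_names, band_arrays))
```

The grid check matters: bands at different resolutions (e.g. 10 m vs 20 m) cannot be stacked without resampling first.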
s3_utils #
Utility module for S3 operations related to Copernicus Data Space Ecosystem.
get_s3_client #
get_s3_client(endpoint_url: str | None = None) -> client
Creates and returns a boto3 S3 client.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `endpoint_url` | `str \| None` | The S3 endpoint URL. If None, it attempts to use the COPERNICUS_S3_ENDPOINT environment variable. | `None` |

Returns:

| Type | Description |
|---|---|
| `client` | A boto3 S3 client. |

Source code in src/geospatial_tools/s3_utils.py
parse_s3_url #
parse_s3_url(url: str) -> tuple[str, str]
Parses an S3 URL or a CDSE STAC href to extract the bucket and key.
Expected formats:

- `s3://bucket/key`
- `https://eodata.dataspace.copernicus.eu/bucket/key`
- `https://zipper.dataspace.copernicus.eu/download/uuid` (this might not be a direct S3 key)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `url` | `str` | The URL to parse. | *required* |

Returns:

| Type | Description |
|---|---|
| `tuple[str, str]` | A tuple of `(bucket, key)`. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the URL cannot be parsed into a bucket and key. |

Source code in src/geospatial_tools/s3_utils.py
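A stdlib-only sketch of the parsing described above; it mirrors the documented formats but is not the module's exact implementation:

```python
from urllib.parse import urlparse

def parse_s3_url(url: str) -> tuple[str, str]:
    parsed = urlparse(url)
    if parsed.scheme == "s3":
        # s3://bucket/key -> netloc is the bucket, path is the key
        return parsed.netloc, parsed.path.lstrip("/")
    if parsed.scheme in ("http", "https"):
        # https://host/bucket/key -> first path segment is the bucket
        bucket, _, key = parsed.path.lstrip("/").partition("/")
        if bucket and key:
            return bucket, key
    raise ValueError(f"Cannot parse bucket and key from: {url}")
```

Both the `s3://` and `https://` forms reduce to the same `(bucket, key)` pair, which is what the S3 client ultimately needs.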
stac #
This module contains functions that are related to STAC API.
AssetSubItem #
AssetSubItem(asset: Item, item_id: str, band: str, filename: str | Path)
Class that represent a STAC asset sub item.
Generally represents a single satellite image band.
Initializes an AssetSubItem.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `asset` | `Item` | The pystac Item this asset belongs to. | *required* |
| `item_id` | `str` | The ID of the item. | *required* |
| `band` | `str` | The band name of this sub-item. | *required* |
| `filename` | `str \| Path` | The local filename of the downloaded asset. | *required* |

Source code in src/geospatial_tools/stac.py
Asset #
Asset(
asset_id: str,
bands: list[str] | None = None,
asset_item_list: list[AssetSubItem] | None = None,
merged_asset_path: str | Path | None = None,
reprojected_asset: str | Path | None = None,
logger: Logger = LOGGER,
)
Represents a STAC asset, potentially composed of multiple bands/sub-items.
Initializes an Asset object.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `asset_id` | `str` | Unique ID for the asset (usually the item ID). | *required* |
| `bands` | `list[str] \| None` | List of bands this asset contains. | `None` |
| `asset_item_list` | `list[AssetSubItem] \| None` | List of AssetSubItem objects belonging to this asset. | `None` |
| `merged_asset_path` | `str \| Path \| None` | Path to the merged multi-band raster file. | `None` |
| `reprojected_asset` | `str \| Path \| None` | Path to the reprojected raster file. | `None` |
| `logger` | `Logger` | Logger instance. | `LOGGER` |

Source code in src/geospatial_tools/stac.py
__iter__ #
__iter__() -> Iterator[AssetSubItem]
Allows direct iteration: for item in asset:
Source code in src/geospatial_tools/stac.py
__len__ #
__len__() -> int
Allows checking size: len(asset)
Source code in src/geospatial_tools/stac.py
__contains__ #
__contains__(band_name: str) -> bool
Allows checking for band existence: "B04" in asset
Source code in src/geospatial_tools/stac.py
__getitem__ #
__getitem__(index: int) -> AssetSubItem
__getitem__(band_name: str) -> AssetSubItem
__getitem__(key: int | str) -> AssetSubItem
Allows indexing by position or band name:
asset[0] or asset["B04"]
Source code in src/geospatial_tools/stac.py
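Together, these four methods give `Asset` list- and dict-like behavior. A stripped-down sketch of that container protocol, with sub-items reduced to `(band, filename)` pairs (the real class carries more state, such as paths and a logger):

```python
class MiniAsset:
    def __init__(self, sub_items):
        self._items = list(sub_items)  # (band, filename) pairs

    def __iter__(self):
        return iter(self._items)       # for item in asset:

    def __len__(self):
        return len(self._items)        # len(asset)

    def __contains__(self, band):
        return any(b == band for b, _ in self._items)  # "B04" in asset

    def __getitem__(self, key):
        if isinstance(key, int):       # asset[0]
            return self._items[key]
        for pair in self._items:       # asset["B04"]
            if pair[0] == key:
                return pair
        raise KeyError(key)
```

Supporting both integer and band-name indexing in `__getitem__` is what the two overload signatures above document.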
add_asset_item #
add_asset_item(asset: AssetSubItem) -> None
Adds an AssetSubItem to the asset.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `asset` | `AssetSubItem` | The AssetSubItem to add. | *required* |

Source code in src/geospatial_tools/stac.py
show_asset_items #
show_asset_items() -> None
Show items that belong to this asset.
Source code in src/geospatial_tools/stac.py
merge_asset #
merge_asset(
base_directory: str | Path | None = None, delete_sub_items: bool = False
) -> Path | None
Merges individual band rasters into a single multi-band raster file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `base_directory` | `str \| Path \| None` | Directory where the merged file will be saved. | `None` |
| `delete_sub_items` | `bool` | If True, delete individual band files after merging. | `False` |

Returns:

| Type | Description |
|---|---|
| `Path \| None` | The Path to the merged file if successful, else None. |

Source code in src/geospatial_tools/stac.py
reproject_merged_asset #
reproject_merged_asset(
target_projection: str | int,
base_directory: str | Path | None = None,
delete_merged_asset: bool = False,
) -> Path | None
Reprojects the merged multi-band raster to a target projection.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `target_projection` | `str \| int` | The target CRS (EPSG code or string). | *required* |
| `base_directory` | `str \| Path \| None` | Directory where the reprojected file will be saved. | `None` |
| `delete_merged_asset` | `bool` | If True, delete the merged file after reprojection. | `False` |

Returns:

| Type | Description |
|---|---|
| `Path \| None` | The Path to the reprojected file if successful, else None. |

Source code in src/geospatial_tools/stac.py
delete_asset_sub_items #
delete_asset_sub_items() -> None
Delete all asset sub items that belong to this asset.
Source code in src/geospatial_tools/stac.py
delete_merged_asset #
delete_merged_asset() -> None
Delete merged asset.
Source code in src/geospatial_tools/stac.py
delete_reprojected_asset #
delete_reprojected_asset() -> None
Delete reprojected asset.
Source code in src/geospatial_tools/stac.py
StacSearch #
StacSearch(catalog_name: str, logger: Logger = LOGGER)
Utility class that facilitates and automates STAC API searches through the use of pystac_client.Client.
Initializes a StacSearch instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| catalog_name | str | Name of the STAC catalog (e.g., 'planetary_computer', 'copernicus'). | required |
| logger | Logger | Logger instance. | LOGGER |

Source code in src/geospatial_tools/stac.py
search #
search(
date_range: DateLike = None,
max_items: int | None = None,
limit: int | None = None,
ids: list[str] | None = None,
collections: str | list[str] | None = None,
bbox: BBoxLike | None = None,
intersects: IntersectsLike | None = None,
query: dict[str, Any] | None = None,
sortby: list[dict[str, str]] | str | list[str] | None = None,
max_retries: int = 3,
delay: int = 5,
) -> list[Item]
STAC API search that uses the given search query and parameters. Essentially a wrapper around pystac_client.Client.
Parameter descriptions are taken from the pystac docs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| date_range | DateLike | Either a single datetime or datetime range used to filter results. | None |
| max_items | int \| None | The maximum number of items to return from the search, even if there are more matching results. | None |
| limit | int \| None | A recommendation to the service as to the number of items to return per page of results. | None |
| ids | list[str] \| None | List of one or more Item ids to filter on. | None |
| collections | str \| list[str] \| None | List of one or more Collection IDs or pystac.Collection instances. Only Items in one of the provided Collections will be searched. | None |
| bbox | BBoxLike \| None | A list, tuple, or iterator representing a bounding box of 2D or 3D coordinates. Results will be filtered to only those intersecting the bounding box. | None |
| intersects | IntersectsLike \| None | A string or dictionary representing a GeoJSON geometry, or an object that implements a `__geo_interface__` property, as supported by several libraries including Shapely, ArcPy, PySAL, and geojson. Results are filtered to only those intersecting the geometry. | None |
| query | dict[str, Any] \| None | List or JSON of query parameters as per the STAC API query extension. | None |
| sortby | list[dict[str, str]] \| str \| list[str] \| None | A single field or list of fields to sort the response by. | None |
| max_retries | int | The maximum number of retries for the search request. | 3 |
| delay | int | The delay between retry attempts in seconds. | 5 |

Returns:

| Type | Description |
|---|---|
| list[Item] | A list of pystac.Item objects matching the search criteria. |

Source code in src/geospatial_tools/stac.py
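The `max_retries`/`delay` behavior can be sketched generically: retry a failing callable a bounded number of times, sleeping between attempts. This is a sketch of the retry pattern, not the library's actual implementation:

```python
import time


def with_retries(func, max_retries=3, delay=5, sleep=time.sleep):
    """Call func(), retrying up to max_retries attempts with a fixed delay.

    Re-raises the last exception if every attempt fails.
    """
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            return func()
        except Exception as error:  # illustrative catch-all
            last_error = error
            if attempt < max_retries:
                sleep(delay)
    raise last_error
```

The `sleep` parameter is injected so tests can skip the real delay; a search call would be wrapped as `with_retries(lambda: client.search(...), max_retries, delay)`.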
search_for_date_ranges #
search_for_date_ranges(
date_ranges: Sequence[DateLike],
max_items: int | None = None,
limit: int | None = None,
collections: str | list[str] | None = None,
bbox: BBoxLike | None = None,
intersects: IntersectsLike | None = None,
query: dict[str, Any] | None = None,
sortby: list[dict[str, str]] | str | list[str] | None = None,
max_retries: int = 3,
delay: int = 5,
) -> list[Item]
STAC API search that uses the given search query and parameters for each date range in the given list of date_ranges.
Date ranges can be generated with the help of the geospatial_tools.utils.create_date_range_for_specific_period
function for more complex ranges.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| date_ranges | Sequence[DateLike] | List containing datetime date ranges. | required |
| max_items | int \| None | The maximum number of items to return from the search, even if there are more matching results. | None |
| limit | int \| None | A recommendation to the service as to the number of items to return per page of results. | None |
| collections | str \| list[str] \| None | List of one or more Collection IDs or pystac.Collection instances. Only Items in one of the provided Collections will be searched. | None |
| bbox | BBoxLike \| None | A list, tuple, or iterator representing a bounding box of 2D or 3D coordinates. Results will be filtered to only those intersecting the bounding box. | None |
| intersects | IntersectsLike \| None | A string or dictionary representing a GeoJSON geometry, or an object that implements a `__geo_interface__` property, as supported by several libraries including Shapely, ArcPy, PySAL, and geojson. Results are filtered to only those intersecting the geometry. | None |
| query | dict[str, Any] \| None | List or JSON of query parameters as per the STAC API query extension. | None |
| sortby | list[dict[str, str]] \| str \| list[str] \| None | A single field or list of fields to sort the response by. | None |
| max_retries | int | The maximum number of retries for the search request. | 3 |
| delay | int | The delay between retry attempts in seconds. | 5 |

Returns:

| Type | Description |
|---|---|
| list[Item] | A list of pystac.Item objects. |

Source code in src/geospatial_tools/stac.py
sort_results_by_cloud_coverage #
sort_results_by_cloud_coverage() -> list[Item] | None
Sorts the search results by cloud coverage (ascending).
Returns:
| Type | Description |
|---|---|
| list[Item] \| None | A list of sorted pystac.Item objects, or None if no results exist. |

Source code in src/geospatial_tools/stac.py
filter_no_data #
filter_no_data(property_name: str, max_no_data_value: int = 5) -> list[Item] | None
Filters out results whose nodata percentage exceeds a given threshold.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| property_name | str | Name of the property containing the nodata percentage. | required |
| max_no_data_value | int | Maximum allowed percentage of nodata. | 5 |

Returns:

| Type | Description |
|---|---|
| list[Item] \| None | Filtered list of pystac.Item objects. |

Source code in src/geospatial_tools/stac.py
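The filtering logic amounts to keeping only items whose nodata property is at or below the threshold. A sketch over plain dictionaries (the real method reads the property from each pystac.Item; the property name used below is illustrative):

```python
def filter_by_max_no_data(items, property_name, max_no_data_value=5):
    """Keep only items whose nodata percentage does not exceed the threshold.

    Items missing the property are treated as fully nodata and dropped.
    """
    return [
        item
        for item in items
        if item["properties"].get(property_name, 100) <= max_no_data_value
    ]
```

For Sentinel-2 collections, a property such as a nodata-pixel-percentage field would typically be passed as `property_name`.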
download_search_results #
download_search_results(bands: list[str], base_directory: str | Path) -> list[Asset]
Downloads assets for all search results.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| bands | list[str] | List of bands to download. | required |
| base_directory | str \| Path | The base directory for downloads. | required |

Returns:

| Type | Description |
|---|---|
| list[Asset] | A list of Asset objects for the downloaded search results. |

Source code in src/geospatial_tools/stac.py
download_sorted_by_cloud_cover_search_results #
download_sorted_by_cloud_cover_search_results(
bands: list[str],
base_directory: str | Path,
first_x_num_of_items: int | None = None,
) -> list[Asset]
Downloads sorted results.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| bands | list[str] | List of bands to download. | required |
| base_directory | str \| Path | The base directory for downloads. | required |
| first_x_num_of_items | int \| None | Optional number of top items to download. | None |

Returns:

| Type | Description |
|---|---|
| list[Asset] | A list of Asset objects for the downloaded items. |

Source code in src/geospatial_tools/stac.py
download_best_cloud_cover_result #
download_best_cloud_cover_result(
bands: list[str], base_directory: str | Path
) -> Asset | None
Downloads the single best result based on cloud cover.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| bands | list[str] | List of bands to download. | required |
| base_directory | str \| Path | The base directory for downloads. | required |

Returns:

| Type | Description |
|---|---|
| Asset \| None | The Asset object for the best result, or None if no results available. |

Source code in src/geospatial_tools/stac.py
create_planetary_computer_catalog #
create_planetary_computer_catalog(
max_retries: int = 3, delay: int = 5, logger: Logger = LOGGER
) -> Client | None
Creates a Planetary Computer Catalog Client.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| max_retries | int | The maximum number of retries for the API connection. | 3 |
| delay | int | The delay between retry attempts in seconds. | 5 |
| logger | Logger | The logger instance to use. | LOGGER |

Returns:

| Type | Description |
|---|---|
| Client \| None | A pystac_client.Client instance if successful, else None. |

Source code in src/geospatial_tools/stac.py
create_copernicus_catalog #
create_copernicus_catalog(
max_retries: int = 3, delay: int = 5, logger: Logger = LOGGER
) -> Client | None
Creates a Copernicus Data Space Ecosystem Catalog Client.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| max_retries | int | The maximum number of retries for the API connection. | 3 |
| delay | int | The delay between retry attempts in seconds. | 5 |
| logger | Logger | The logger instance to use. | LOGGER |

Returns:

| Type | Description |
|---|---|
| Client \| None | A pystac_client.Client instance if successful, else None. |

Source code in src/geospatial_tools/stac.py
catalog_generator #
catalog_generator(catalog_name: str, logger: Logger = LOGGER) -> Client | None
Generates a STAC Client for the specified catalog.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| catalog_name | str | The name of the catalog (e.g., 'planetary_computer', 'copernicus'). | required |
| logger | Logger | The logger instance to use. | LOGGER |

Returns:

| Type | Description |
|---|---|
| Client \| None | A pystac_client.Client instance for the requested catalog if supported, else None. |

Source code in src/geospatial_tools/stac.py
list_available_catalogs #
list_available_catalogs(logger: Logger = LOGGER) -> frozenset[str]
Lists all available STAC catalogs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| logger | Logger | The logger instance to use. | LOGGER |

Returns:

| Type | Description |
|---|---|
| frozenset[str] | A frozenset of available catalog names. |

Source code in src/geospatial_tools/stac.py
download_stac_asset #
download_stac_asset(
asset_url: str,
destination: Path,
method: str = "http",
headers: dict[str, str] | None = None,
s3_client: Any | None = None,
logger: Logger = LOGGER,
) -> Path | None
Generic dispatcher for downloading STAC assets via HTTP or S3.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| asset_url | str | URL/HREF of the asset to download. | required |
| destination | Path | Path where the file will be saved. | required |
| method | str | Download method ('http' or 's3'). | 'http' |
| headers | dict[str, str] \| None | Headers for HTTP request. | None |
| s3_client | Any \| None | Boto3 S3 client (required for 's3' method). | None |
| logger | Logger | Logger instance. | LOGGER |

Returns:

| Type | Description |
|---|---|
| Path \| None | The Path to the downloaded file if successful, else None. |

Source code in src/geospatial_tools/stac.py
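The dispatcher pattern described here can be sketched as a lookup from method name to handler, returning None for unsupported methods. Handler names and bodies below are hypothetical placeholders, not the library's actual downloaders:

```python
from pathlib import Path


def _download_http(asset_url: str, destination: Path) -> Path:
    # Placeholder for an HTTP download (e.g., a streamed GET request).
    return destination


def _download_s3(asset_url: str, destination: Path) -> Path:
    # Placeholder for an S3 download (e.g., a boto3 client download).
    return destination


# Registry mapping method names to their handlers.
_DISPATCH = {"http": _download_http, "s3": _download_s3}


def download_asset(asset_url: str, destination: Path, method: str = "http"):
    """Dispatch to the handler registered for `method`; None if unsupported."""
    handler = _DISPATCH.get(method)
    if handler is None:
        return None
    return handler(asset_url, destination)
```

A dict-based registry keeps the dispatcher open for extension: registering a new transport is one entry, with no change to the dispatch function.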
utils #
This module contains general utility functions.
create_logger #
create_logger(logger_name: str) -> Logger
Creates a logger with the given name that outputs to stdout.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| logger_name | str | Name of the logger. | required |

Returns:
Source code in src/geospatial_tools/utils.py
get_yaml_config #
get_yaml_config(yaml_config_file: str, logger: Logger = LOGGER) -> dict
This function takes the path to a YAML config file, or just the file's name (with or without the extension) if it can be found in the config/ folder, and returns the values of the file as a dictionary.
Ex. For a file named app_config.yml (or app_config.yaml), directly in the config/ folder,
the function could be called like so: params = get_yaml_config('app_config')
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| yaml_config_file | str | Path to yaml config file. If config file is in the config folder, you can use the file's name without the extension. | required |
| logger | Logger | Logger to handle messaging, by default LOGGER. | LOGGER |

Returns:
Source code in src/geospatial_tools/utils.py
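The name-resolution behavior described above can be sketched with pathlib: try the value as a path first, then look in a config/ directory, adding the .yml/.yaml extensions when none is given. This is a sketch under an assumed lookup order, not the library's exact logic:

```python
from pathlib import Path


def resolve_config_path(name, config_dir="config"):
    """Resolve a config reference to an existing file, or None if not found."""
    direct = Path(name)
    if direct.is_file():
        return direct
    config_dir = Path(config_dir)
    candidates = [config_dir / name]
    if not direct.suffix:
        # No extension given: try the common YAML extensions.
        candidates += [config_dir / f"{name}.yml", config_dir / f"{name}.yaml"]
    for candidate in candidates:
        if candidate.is_file():
            return candidate
    return None
```

With a file config/app_config.yml present, both `resolve_config_path("config/app_config.yml")` and `resolve_config_path("app_config")` resolve to the same path.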
get_json_config #
get_json_config(json_config_file: str, logger=LOGGER) -> dict
This function takes the path to a JSON config file, or just the file's name (with or without the extension) if it can be found in the config/ folder, and returns the values of the file as a dictionary.
Ex. For a file named app_config.json, directly in the config/ folder,
the function could be called like so: params = get_json_config('app_config')
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| json_config_file | str | Path to JSON config file. If config file is in the config folder, you can use the file's name without the extension. | required |
| logger | Logger | Logger to handle messaging. | LOGGER |

Returns:
Source code in src/geospatial_tools/utils.py
create_crs #
create_crs(dataset_crs: str | int, logger=LOGGER)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dataset_crs | str \| int | EPSG code in string or int format. Can be given in the following ways: 5070 \| "5070" \| "EPSG:5070" | required |
| logger | Logger | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/utils.py
download_url #
download_url(
url: str,
filename: str | Path,
overwrite: bool = False,
headers: dict | None = None,
logger=LOGGER,
) -> Path | None
This function downloads a file from a given URL.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| url | str | Url to download. | required |
| filename | str \| Path | Filename (or full path) to save the downloaded file. | required |
| overwrite | bool | If True, overwrite existing file. | False |
| headers | dict \| None | Optional headers to include in the request (e.g., Authorization). | None |
| logger | Logger | Logger instance. | LOGGER |

Returns:

| Type | Description |
|---|---|
| Path \| None | Path to downloaded file. |

Source code in src/geospatial_tools/utils.py
unzip_file #
unzip_file(
zip_path: str | Path, extract_to: str | Path, logger: Logger = LOGGER
) -> list[str | Path]
This function unzips an archive to a specific directory.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| zip_path | str \| Path | Path to zip file. | required |
| extract_to | str \| Path | Path of directory to extract the zip file. | required |
| logger | Logger | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/utils.py
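A minimal stdlib equivalent of the unzip step, returning the extracted paths (a sketch; the real function also handles logging):

```python
import zipfile
from pathlib import Path


def unzip(zip_path, extract_to):
    """Extract all members of a zip archive and return their extracted paths."""
    extract_to = Path(extract_to)
    extract_to.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as archive:
        archive.extractall(extract_to)
        # namelist() preserves archive order, so returned paths match it.
        return [extract_to / name for name in archive.namelist()]
```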
create_date_range_for_specific_period #
create_date_range_for_specific_period(
start_year: int, end_year: int, start_month_range: int, end_month_range: int
) -> list[str]
This function creates a list of date ranges.
For example, I want to create date ranges for 2020 and 2021, but only for the months from March to May. I therefore expect to have 2 ranges: [2020-03-01 to 2020-05-31, 2021-03-01 to 2021-05-31].
Handles the automatic definition of the last day for the end month, as well as periods that cross over years
For example, I want to create date ranges for 2020 and 2022, but only for the months from November to January. I therefore expect to have 2 ranges: [2020-11-01 to 2021-01-31, 2021-11-01 to 2022-01-31].
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| start_year | int | Start year for ranges. | required |
| end_year | int | End year for ranges. | required |
| start_month_range | int | Starting month for each period. | required |
| end_month_range | int | End month for each period (inclusively). | required |

Returns:
Source code in src/geospatial_tools/utils.py
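The documented behavior, including the automatic last day of the end month and periods that cross a year boundary, can be sketched with calendar.monthrange. This is a sketch of the behavior described above (using the STAC "start/end" interval format), not the library's exact code:

```python
import calendar


def date_ranges_for_period(start_year, end_year, start_month, end_month):
    """Build one 'YYYY-MM-DD/YYYY-MM-DD' range per period in [start_year, end_year]."""
    ranges = []
    # If the end month precedes the start month (e.g. November..January),
    # each period crosses into the following year.
    crosses_year = end_month < start_month
    last_start_year = end_year - 1 if crosses_year else end_year
    for year in range(start_year, last_start_year + 1):
        range_end_year = year + 1 if crosses_year else year
        # monthrange returns (weekday_of_first_day, number_of_days_in_month).
        last_day = calendar.monthrange(range_end_year, end_month)[1]
        ranges.append(
            f"{year}-{start_month:02d}-01/"
            f"{range_end_year}-{end_month:02d}-{last_day:02d}"
        )
    return ranges
```

For the examples above: March–May of 2020–2021 yields two same-year ranges, while November–January of 2020–2022 yields two cross-year ranges.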
parse_gzip_header #
parse_gzip_header(path: str | Path) -> dict[str, Any]
Parse the gzip header at the beginning of path (first member only).
Raises ValueError if file doesn't look like gzip.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | str \| Path | Path to gzip file. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| dict | dict[str, Any] | A dict with keys: compression_method (int), flags (int), mtime (int, Unix epoch or 0), xflags (int), os (int), original_name (Optional[str], from FNAME), comment (Optional[str], from FCOMMENT), header_end_offset (int, file offset where the compressed data starts). |

Source code in src/geospatial_tools/utils.py
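The fixed 10-byte gzip header (RFC 1952) can be read with struct. A trimmed sketch that parses only the fixed fields plus the optional FNAME field (the real function also handles FEXTRA and FCOMMENT):

```python
import struct


def read_gzip_header(data):
    """Parse the fixed gzip header plus an optional FNAME field (sketch)."""
    if data[:2] != b"\x1f\x8b":
        raise ValueError("not a gzip stream")
    # <: little-endian; B = uint8, I = uint32.
    method, flags, mtime, xflags, os_code = struct.unpack("<BBIBB", data[2:10])
    offset = 10
    name = None
    if flags & 0x08:  # FNAME: zero-terminated original file name
        end = data.index(b"\x00", offset)
        name = data[offset:end].decode("latin-1")
        offset = end + 1
    return {
        "compression_method": method,
        "flags": flags,
        "mtime": mtime,
        "xflags": xflags,
        "os": os_code,
        "original_name": name,
        "header_end_offset": offset,
    }
```

RFC 1952 specifies that FNAME is stored in Latin-1, which is why that codec is used for the decode.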
vector #
This module contains functions that process or create vector data.
create_grid_coordinates #
create_grid_coordinates(
bounding_box: list | tuple | ndarray, grid_size: float, logger: Logger = LOGGER
) -> tuple[ndarray, ndarray]
Create grid coordinates based on input bounding box and grid size.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| bounding_box | list \| tuple \| ndarray | The bounding box of the grid as (min_lon, min_lat, max_lon, max_lat). Unit needs to be based on projection used (meters, degrees, etc.). | required |
| grid_size | float | Cell size for grid. Unit needs to be based on projection used (meters, degrees, etc.). | required |
| logger | Logger | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/vector.py
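The coordinate generation can be sketched with numpy.arange over the bounding box (a sketch of the documented behavior; edge handling of the max bound may differ in the actual implementation):

```python
import numpy as np


def grid_coordinates(bounding_box, grid_size):
    """Return 1-D arrays of cell-origin longitudes and latitudes."""
    min_lon, min_lat, max_lon, max_lat = bounding_box
    # One coordinate per cell origin; the max bound is exclusive.
    lon_coords = np.arange(min_lon, max_lon, grid_size)
    lat_coords = np.arange(min_lat, max_lat, grid_size)
    return lon_coords, lat_coords
```

These 1-D arrays are the input that the flattening step below (np.meshgrid plus ravel) would expand into one (lon, lat) pair per grid cell.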
generate_flattened_grid_coords #
generate_flattened_grid_coords(
lon_coords: ndarray, lat_coords: ndarray, logger: Logger = LOGGER
) -> tuple[ndarray, ndarray]
Takes in previously created grid coordinates and flattens them.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| lon_coords | ndarray | Longitude grid coordinates. | required |
| lat_coords | ndarray | Latitude grid coordinates. | required |
| logger | Logger | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/vector.py
create_vector_grid #
create_vector_grid(
bounding_box: list | tuple | ndarray,
grid_size: float,
crs: str | int | None = None,
logger: Logger = LOGGER,
) -> GeoDataFrame
Create a grid of polygons within the specified bounds and cell size. This function uses NumPy vectorized arrays for optimized performance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| bounding_box | list \| tuple \| ndarray | The bounding box of the grid as (min_lon, min_lat, max_lon, max_lat). | required |
| grid_size | float | The size of each grid cell in degrees. | required |
| crs | str \| int \| None | CRS code for projection. ex. 'EPSG:4326' | None |
| logger | Logger | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/vector.py
create_vector_grid_parallel #
create_vector_grid_parallel(
bounding_box: list | tuple | ndarray,
grid_size: float,
crs: str | int | None = None,
num_of_workers: int | None = None,
logger: Logger = LOGGER,
) -> GeoDataFrame
Create a grid of polygons within the specified bounds and cell size. This function uses NumPy for optimized performance and ProcessPoolExecutor for parallel execution.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| bounding_box | list \| tuple \| ndarray | The bounding box of the grid as (min_lon, min_lat, max_lon, max_lat). | required |
| grid_size | float | The size of each grid cell in degrees. | required |
| crs | str \| int \| None | Coordinate reference system for the resulting GeoDataFrame. | None |
| num_of_workers | int \| None | The number of processes to use for parallel execution. Defaults to the min of the number of CPU cores and the number of cells in the grid. | None |
| logger | Logger | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/vector.py
dask_spatial_join #
dask_spatial_join(
select_features_from: GeoDataFrame,
intersected_with: GeoDataFrame,
join_type: str = "inner",
predicate: str = "intersects",
num_of_workers: int = 4,
) -> GeoDataFrame
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| select_features_from | GeoDataFrame | GeoDataFrame containing the polygons from which to select features. | required |
| intersected_with | GeoDataFrame | GeoDataFrame containing the polygons that will be used to select features via an intersect operation. | required |
| join_type | str | How the join will be executed. Available join_types are: ['left', 'right', 'inner']. | 'inner' |
| predicate | str | The predicate to use for selecting features. Available predicates are: ['intersects', 'contains', 'within', 'touches', 'crosses', 'overlaps']. | 'intersects' |
| num_of_workers | int | The number of processes to use for parallel execution. | 4 |

Returns:
Source code in src/geospatial_tools/vector.py
multiprocessor_spatial_join #
multiprocessor_spatial_join(
select_features_from: GeoDataFrame,
intersected_with: GeoDataFrame,
join_type: str = "inner",
predicate: str = "intersects",
num_of_workers: int = 4,
logger: Logger = LOGGER,
) -> GeoDataFrame
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| select_features_from | GeoDataFrame | GeoDataFrame containing the polygons from which to select features. | required |
| intersected_with | GeoDataFrame | GeoDataFrame containing the polygons that will be used to select features via an intersect operation. | required |
| join_type | str | How the join will be executed. Available join_types are: ['left', 'right', 'inner']. | 'inner' |
| predicate | str | The predicate to use for selecting features. Available predicates are: ['intersects', 'contains', 'within', 'touches', 'crosses', 'overlaps']. | 'intersects' |
| num_of_workers | int | The number of processes to use for parallel execution. | 4 |
| logger | Logger | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/vector.py
select_polygons_by_location #
select_polygons_by_location(
select_features_from: GeoDataFrame,
intersected_with: GeoDataFrame,
num_of_workers: int | None = None,
join_type: str = "inner",
predicate: str = "intersects",
join_function=multiprocessor_spatial_join,
logger: Logger = LOGGER,
) -> GeoDataFrame
This function executes a select by location operation on a GeoDataFrame. It is essentially a wrapper around
gpd.sjoin to allow parallel execution. While it does use sjoin, only the columns from select_features_from are
kept.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| select_features_from | GeoDataFrame | GeoDataFrame containing the polygons from which to select features. | required |
| intersected_with | GeoDataFrame | GeoDataFrame containing the polygons that will be used to select features via an intersect operation. | required |
| num_of_workers | int \| None | Number of parallel processes to use for execution. If using on a compute cluster, please set a specific amount (ex. 1 per CPU core requested). Defaults to the min of the number of CPU cores (cpu_count()) and the number of features. | None |
| join_type | str | How the join will be executed. Available join_types are: ['left', 'right', 'inner']. | 'inner' |
| predicate | str | The predicate to use for selecting features. Available predicates are: ['intersects', 'contains', 'within', 'touches', 'crosses', 'overlaps']. | 'intersects' |
| join_function | | Function that will execute the join operation. Available functions are: multiprocessor_spatial_join, dask_spatial_join, or custom functions. | multiprocessor_spatial_join |
| logger | Logger | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/vector.py
to_geopackage #
to_geopackage(gdf: GeoDataFrame, filename: str | Path, logger=LOGGER) -> str | Path
Save GeoDataFrame to a Geopackage file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| gdf | GeoDataFrame | The GeoDataFrame to save. | required |
| filename | str \| Path | The filename to save to. | required |
| logger | Logger | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/vector.py
to_geopackage_chunked #
to_geopackage_chunked(
gdf: GeoDataFrame, filename: str, chunk_size: int = 1000000, logger: Logger = LOGGER
) -> str
Save GeoDataFrame to a Geopackage file using chunks to help with potential memory consumption. This function can
potentially be slower than to_geopackage, especially if chunk_size is not adequately defined. Therefore, this
function should only be required if to_geopackage fails because of memory issues.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| gdf | GeoDataFrame | The GeoDataFrame to save. | required |
| filename | str | The filename to save to. | required |
| chunk_size | int | The number of rows per chunk. | 1000000 |
| logger | Logger | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/vector.py
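The chunking strategy can be sketched generically: slice the rows into fixed-size chunks and hand each to a writer, telling it whether this is the first chunk so it can create the file first and append afterwards. The writer callback is hypothetical; the real function writes GeoPackage layers from GeoDataFrame slices:

```python
def write_in_chunks(rows, write_chunk, chunk_size=1_000_000):
    """Split rows into chunk_size slices and hand each to write_chunk.

    write_chunk(chunk, first) is told whether this is the first chunk, so it
    can create the output on the first call and append on subsequent calls.
    Returns the total number of rows handed to the writer.
    """
    total = len(rows)
    for start in range(0, total, chunk_size):
        chunk = rows[start:start + chunk_size]
        write_chunk(chunk, first=(start == 0))
    return total
```

Because only one chunk is materialized for writing at a time, peak memory is bounded by chunk_size rather than the full dataset, which is the trade-off the docstring describes against the plain to_geopackage.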
select_all_within_feature #
select_all_within_feature(
polygon_feature: GeoSeries, vector_features: GeoDataFrame
) -> GeoSeries
This function is quite small and simple, but exists mostly as a.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| polygon_feature | GeoSeries | Polygon feature that will be used to find which features of vector_features are contained within it. | required |
| vector_features | GeoDataFrame | The dataframe containing the features that will be grouped by polygon_feature. | required |

Returns:
Source code in src/geospatial_tools/vector.py
add_and_fill_contained_column #
add_and_fill_contained_column(
polygon_feature,
polygon_column_name: str,
vector_features,
vector_column_name: str,
logger=LOGGER,
) -> None
This function makes in-place changes to vector_features.
The purpose of this function is to first do a spatial search operation on which vector_features are within
polygon_feature, and then write the contents found in the polygon_column_name to the selected vector_features
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| polygon_feature | | Polygon feature that will be used to find which features of vector_features are within it. | required |
| polygon_column_name | str | The name of the column in polygon_feature to read values from. | required |
| vector_features | | The dataframe containing the features that will be grouped by polygon_feature. | required |
| vector_column_name | str | The name of the column in vector_features to write values to. | required |
| logger | | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/vector.py
find_and_write_all_contained_features #
find_and_write_all_contained_features(
polygon_features: GeoDataFrame,
polygon_column: str,
vector_features: GeoDataFrame,
vector_column_name: str,
logger=LOGGER,
) -> None
This function makes in-place changes to vector_features.
It iterates on all features of a dataframe containing polygons and executes a spatial search with each
polygon to find all vector features from vector_features that are contained by it.
The name/id of each polygon is added to a set in a new column in
vector_features to identify which features are within which polygon.
To make things simple, this is basically a "group by" operation based on the
"within" spatial operator. Each feature in vector_features will have a list of
all the polygons that contain it (contain as being completely within the polygon).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| polygon_features | GeoDataFrame | Dataframe containing polygons. Will be used to find which features of vector_features they contain. | required |
| polygon_column | str | The name of the column in polygon_features holding each polygon's name/id. | required |
| vector_features | GeoDataFrame | The dataframe containing the features that will be grouped by polygon. | required |
| vector_column_name | str | The name of the column in vector_features where the set of containing polygons will be written. | required |
| logger | | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/vector.py
spatial_join_within #
spatial_join_within(
polygon_features: GeoDataFrame,
polygon_column: str,
vector_features: GeoDataFrame,
vector_column_name: str,
join_type: str = "left",
predicate: str = "within",
logger=LOGGER,
) -> GeoDataFrame
This function does approximately the same thing as find_and_write_all_contained_features, but does not make in
place changes to vector_features and instead returns a new dataframe.
This function is more efficient than find_and_write_all_contained_features but offers less flexibility.
It does a spatial join based on a within operation between features to associate which vector_features
are within which polygon_features, and groups the results by vector feature.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| polygon_features | GeoDataFrame | Dataframe containing polygons. Will be used to find which features of vector_features they contain. | required |
| polygon_column | str | The name of the column in polygon_features holding each polygon's name/id. | required |
| vector_features | GeoDataFrame | The dataframe containing the features that will be grouped by polygon. | required |
| vector_column_name | str | The name of the column in vector_features where the set of containing polygons will be written. | required |
| join_type | str | How the join will be executed. Available join_types are: ['left', 'right', 'inner']. | 'left' |
| predicate | str | The predicate to use for the spatial join operation. | 'within' |
| logger | | Logger instance. | LOGGER |

Returns:
Source code in src/geospatial_tools/vector.py