config

GPU configuration and detection utilities.

This module provides global configuration for GPU acceleration and functions to detect GPU availability at runtime, including a global configuration option to force CPU-only execution even when GPU is available.

Example

from osipy.common.backend import is_gpu_available, set_backend, GPUConfig

Check if GPU is available

if is_gpu_available():
    print("GPU acceleration available")

Force CPU-only execution

set_backend(GPUConfig(force_cpu=True))

Or use environment variable: OSIPY_FORCE_CPU=1

References

.. [1] CuPy Installation Guide: https://docs.cupy.dev/en/stable/install.html

GPUConfig dataclass

GPUConfig(
    force_cpu=False,
    default_batch_size=10000,
    memory_limit_fraction=0.9,
    device_id=0,
    n_workers=0,
    gpu_dtype="float32",
)

Configuration for GPU/CPU backend selection.

This dataclass holds configuration options for controlling GPU acceleration. It can be used with set_backend() to configure global behavior.

PARAMETER DESCRIPTION
force_cpu

If True, force all operations to run on CPU even if GPU is available. Default is False, which allows automatic GPU usage when available.

TYPE: bool DEFAULT: False

default_batch_size

Default batch size for GPU batch processing. Larger values use more GPU memory but may be faster. Default is 10000.

TYPE: int DEFAULT: 10000

memory_limit_fraction

Fraction of GPU memory to use (0.0 to 1.0). Default is 0.9 (90%). This helps prevent out-of-memory errors by leaving headroom.

TYPE: float DEFAULT: 0.9

device_id

CUDA device ID to use. Default is 0 (first GPU).

TYPE: int DEFAULT: 0

n_workers

Number of threads for CPU-parallel chunk processing in fit_image(). 0 (default) = auto (os.cpu_count()); 1 = disable threading. Ignored when running on GPU. Can also be set via the OSIPY_NUM_THREADS environment variable.

TYPE: int DEFAULT: 0
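The resolution of `n_workers` described above can be sketched in pure Python. The helper name `resolve_n_workers` is illustrative, not part of the osipy API, and the precedence of `OSIPY_NUM_THREADS` over the `os.cpu_count()` auto default is an assumption rather than documented behavior:

```python
import os

def resolve_n_workers(n_workers: int = 0) -> int:
    """Illustrative resolution of the n_workers setting.

    0 means auto: use OSIPY_NUM_THREADS if set, else os.cpu_count().
    1 disables threading; any other positive value is used as-is.
    """
    if n_workers == 0:
        env = os.environ.get("OSIPY_NUM_THREADS")
        if env is not None:
            # Clamp to at least one worker thread.
            return max(1, int(env))
        return os.cpu_count() or 1
    return max(1, n_workers)
```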

ATTRIBUTE DESCRIPTION
force_cpu

Whether to force CPU-only execution.

TYPE: bool

default_batch_size

Default batch size for GPU operations.

TYPE: int

memory_limit_fraction

Fraction of GPU memory to use.

TYPE: float

device_id

CUDA device ID.

TYPE: int

n_workers

Number of threads for CPU-parallel chunk processing.

TYPE: int

Example

config = GPUConfig(force_cpu=True)
set_backend(config)

__post_init__

__post_init__()

Validate configuration values.
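The exact checks performed by `__post_init__` are not documented, but a plausible sketch follows from the ranges implied by the parameter descriptions (this is a simplified stand-in, not the real `GPUConfig`):

```python
from dataclasses import dataclass

@dataclass
class GPUConfig:
    """Simplified stand-in for osipy's GPUConfig; the validation
    below is a guess based on the documented parameter ranges."""
    force_cpu: bool = False
    default_batch_size: int = 10000
    memory_limit_fraction: float = 0.9
    device_id: int = 0
    n_workers: int = 0
    gpu_dtype: str = "float32"

    def __post_init__(self):
        # Batch size must be a positive count of items.
        if self.default_batch_size <= 0:
            raise ValueError("default_batch_size must be positive")
        # Memory fraction is documented as 0.0 to 1.0.
        if not 0.0 < self.memory_limit_fraction <= 1.0:
            raise ValueError("memory_limit_fraction must be in (0, 1]")
        # CUDA device IDs start at 0.
        if self.device_id < 0:
            raise ValueError("device_id must be non-negative")
        # 0 = auto, 1 = no threading, >1 = explicit thread count.
        if self.n_workers < 0:
            raise ValueError("n_workers must be non-negative")
```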

is_gpu_available

is_gpu_available()

Check if GPU acceleration is available.

This function checks for CUDA-capable GPU and CuPy installation. The result is cached after the first call for performance.

RETURNS DESCRIPTION
bool

True if GPU acceleration is available, False otherwise.

Notes

GPU is considered available if:

1. CuPy is installed and can be imported
2. At least one CUDA device is detected
3. The OSIPY_FORCE_CPU environment variable is not set to "1"

The availability check is cached after the first call. To force a recheck, use _reset_gpu_cache() (internal use only).
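These three conditions can be sketched as a minimal re-implementation. This is not the actual osipy code: `functools.lru_cache` stands in for the module's internal cache (its `cache_clear()` plays the role of `_reset_gpu_cache()`), and the CuPy calls assume `cupy.cuda.runtime.getDeviceCount()`:

```python
import os
from functools import lru_cache

@lru_cache(maxsize=1)
def gpu_available_sketch() -> bool:
    """Illustrative version of the documented availability check."""
    # Condition 3: the environment override wins unconditionally.
    if os.environ.get("OSIPY_FORCE_CPU") == "1":
        return False
    try:
        # Condition 1: CuPy must import cleanly.
        import cupy
        # Condition 2: at least one CUDA device must be detected.
        return cupy.cuda.runtime.getDeviceCount() > 0
    except Exception:
        # Missing CuPy or a broken CUDA runtime both mean no GPU.
        return False
```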

Example

if is_gpu_available():
    print("GPU acceleration enabled")
else:
    print("Running on CPU only")

get_backend

get_backend()

Get the current backend configuration.

RETURNS DESCRIPTION
GPUConfig

The current global GPU configuration. If not explicitly set, returns a default configuration that respects environment variables.

Example

config = get_backend()
print(f"Force CPU: {config.force_cpu}")

set_backend

set_backend(config)

Set the global backend configuration.

PARAMETER DESCRIPTION
config

The configuration to use globally.

TYPE: GPUConfig

Notes

This affects all subsequent calls to get_array_module(), to_gpu(), and other backend functions. Changes take effect immediately.

Example

Force CPU-only execution

set_backend(GPUConfig(force_cpu=True))

Re-enable GPU with custom batch size

set_backend(GPUConfig(force_cpu=False, default_batch_size=50000))

get_gpu_memory_info

get_gpu_memory_info()

Get information about GPU memory usage.

RETURNS DESCRIPTION
dict

Dictionary containing:

- 'available': bool - whether GPU is available
- 'total_bytes': int - total GPU memory in bytes (0 if unavailable)
- 'used_bytes': int - used GPU memory in bytes (0 if unavailable)
- 'free_bytes': int - free GPU memory in bytes (0 if unavailable)
- 'device_name': str - GPU device name (empty if unavailable)

Example

info = get_gpu_memory_info()
if info['available']:
    print(f"GPU: {info['device_name']}")
    print(f"Free: {info['free_bytes'] / 1e9:.2f} GB")

get_gpu_batch_size

get_gpu_batch_size()

Get optimal batch size based on GPU thread capacity.

Returns the total number of concurrent threads the GPU can handle (multiProcessorCount * maxThreadsPerMultiProcessor), which is a good heuristic for batch sizing.

Returns 0 if GPU is not available.
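The heuristic above can be sketched with CuPy's per-device attributes. This is an illustrative version, not the osipy implementation; the attribute key names `MultiProcessorCount` and `MaxThreadsPerMultiProcessor` follow CuPy's `Device.attributes` mapping and are an assumption here:

```python
def gpu_batch_size_sketch() -> int:
    """Illustrative batch-size heuristic:
    multiProcessorCount * maxThreadsPerMultiProcessor, or 0 if no GPU."""
    try:
        import cupy
        attrs = cupy.cuda.Device(0).attributes
        # Total concurrent thread capacity of the device.
        return attrs["MultiProcessorCount"] * attrs["MaxThreadsPerMultiProcessor"]
    except Exception:
        # No CuPy, no CUDA device, or unexpected attribute names.
        return 0
```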