# Architecture

### Overview

The main class is `CelanturSDK::Processor`, which provides the interface to the anonymisation library.

To create an instance of this class, you need:

1. A `ProcessorParams` object with the `inference_plugin` field set.
2. A `license_path` variable pointing to a valid license file.
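The two requirements above can be sketched as follows. Note that the header path, the exact `ProcessorParams` field types, and the constructor signature are assumptions based on this page, not verified API; consult the SDK headers for the real declarations.

```cpp
#include <CelanturSDK/Processor.h>  // assumed header path

#include <filesystem>

int main() {
    // Assumed field name, taken from the description above.
    CelanturSDK::ProcessorParams params;
    params.inference_plugin = "libONNXInference.so";  // inference plugin to load

    // Hypothetical location of the license file.
    std::filesystem::path license_path = "/opt/celantur/license.lic";

    // Assumed constructor shape: parameters plus a license path.
    CelanturSDK::Processor processor(params, license_path);
}
```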

After creating the processor, load a model and use the following functions to perform anonymisation:

1. `process` to post a new image to processing. The function is non-blocking and returns control immediately after posting the image to the processing queue.
2. `get_result` to get the next anonymised image from the queue.
3. `get_detections` to retrieve the list of detections. You can use them, e.g., to debug the results, display detections, or create metadata JSON similar to [container](https://doc.celantur.com/container "mention").
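A minimal processing loop using these three functions might look like the sketch below. The method signatures are assumptions inferred from the descriptions above, and the image type (`cv::Mat`) is likewise an assumption; check the SDK headers for the actual types.

```cpp
#include <opencv2/opencv.hpp>  // assuming images are exchanged as cv::Mat

// 'processor' is created as described earlier; all names are illustrative.
void anonymise_one(CelanturSDK::Processor& processor, const cv::Mat& image) {
    // Non-blocking: posts the image to the processing queue and
    // returns control immediately.
    processor.process(image);

    // Retrieves the next anonymised image from the queue.
    cv::Mat result = processor.get_result();

    // Detections can be inspected for debugging, displayed, or
    // exported as metadata JSON.
    auto detections = processor.get_detections();
}
```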

### Modules

You need two modules to use the SDK. The first is `CppProcessing::CelanturSDK`, which contains the general interfaces to the SDK and is the entry point for interacting with it. The other is `CppProcessing::common-module`, which contains various definitions, classes and structures.

Other shared objects do not provide include files because they are transitive dependencies of `CelanturSDK`.

### Plugins

Different inference plugins require different dependencies installed on the target machine. For example, `libONNXInference.so` depends on [ONNX](https://onnx.ai/) to run detection on the CPU, while `libTensorRTRuntime.so` depends on NVIDIA's CUDA libraries to run detection on the GPU. To avoid hard dependencies on these libraries, the SDK uses a plugin system: the dependencies are encapsulated in a plugin that is loaded at runtime. You need to load at least one inference engine plugin for the SDK to work, which is covered in <https://github.com/celantur/SDKExample/>.
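In practice, selecting a plugin amounts to pointing `inference_plugin` at the right shared object for the target machine. The two library names below come from this page; the helper function and the GPU check are purely illustrative.

```cpp
#include <string>

// Choose an inference plugin based on the target machine.
// The TensorRT plugin requires NVIDIA CUDA libraries (GPU inference);
// the ONNX plugin runs detection on the CPU.
std::string choose_inference_plugin(bool has_nvidia_gpu) {
    return has_nvidia_gpu ? "libTensorRTRuntime.so" : "libONNXInference.so";
}
```

The chosen name would then be assigned to the `inference_plugin` field of `ProcessorParams` before constructing the processor.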
