# Usage

Check out the [SDK library reference](https://www.celantur.com/sdk/doc/current/) and [examples](https://github.com/celantur/SDKExample/).

## Workflow for model compilation <a href="#workflow-for-model-compilation" id="workflow-for-model-compilation"></a>

The default model format is ONNX. You can compile the model to OpenVINO for better performance on Intel CPUs, or to TensorRT for better performance on NVIDIA GPUs.

For NVIDIA GPUs, you need to compile a separate model for each [GPU architecture](https://www.nvidia.com/en-us/technologies/).

1. Create an instance of `ModelCompiler`.
2. Get the `InferenceEnginePluginCompileSettings` from `ModelCompiler.preload_model(model_path)`.
3. Optional: Adjust the `InferenceEnginePluginCompileSettings`.
4. Execute the model compilation with `ModelCompiler.compile_model()`.

#### Example code for TensorRT

```cpp
#include "CelanturSDKInterface.h"
#include "CommonParameters.h"

// 1. Create instance of ModelCompiler
CelanturSDK::ModelCompilerParams compiler_params;
compiler_params.inference_plugin = "/usr/local/lib/libTensorRTRuntime.so";
CelanturSDK::ModelCompiler compiler("/path/to/license", compiler_params);

// 2. Get settings
celantur::InferenceEnginePluginCompileSettings settings = compiler.preload_model("/path/to/model.onnx.enc");

// 3. Optional: Adjust settings
settings["precision"] = celantur::CompilePrecision::FP32;
settings["optimisation_level"] = celantur::OptimisationLevel::Low;

// 4. Compile the model and write it to the output path
compiler.compile_model(settings, "/path/to/compiled/model");
```
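
#### Example code for OpenVINO

Compilation for OpenVINO follows the same four steps; only the inference plugin differs. Below is a minimal sketch: the plugin filename `libOpenVINORuntime.so` is an assumption, so check the libraries shipped with your SDK installation for the actual name and path.

```cpp
#include "CelanturSDKInterface.h"
#include "CommonParameters.h"

// Hypothetical OpenVINO variant: only the inference plugin changes.
// NOTE: the plugin path below is an assumption; verify it against your installation.
CelanturSDK::ModelCompilerParams compiler_params;
compiler_params.inference_plugin = "/usr/local/lib/libOpenVINORuntime.so";
CelanturSDK::ModelCompiler compiler("/path/to/license", compiler_params);

// Steps 2-4 are identical to the TensorRT example above.
```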

## Workflow for inference and blurring <a href="#workflow-for-inference-and-blurring" id="workflow-for-inference-and-blurring"></a>

### Initialise process engine <a href="#initialise-process-engine" id="initialise-process-engine"></a>

1. Create an instance of `Processor`.
2. Get the `InferenceEnginePluginSettings` from `Processor.get_inference_settings()`.
3. Optional: Adjust the inference settings.
4. Load the compiled model with your adjusted settings.

#### Example code

```cpp
#include "CelanturSDKInterface.h"
#include "CommonParameters.h"

// 1. Create instance of Processor
celantur::ProcessorParams params;
// If you use TensorRT:
params.inference_plugin = "/usr/local/lib/libTensorRTRuntime.so";
// Swap the red and blue channels (e.g. for BGR images as loaded by OpenCV)
params.swapRB = true;
CelanturSDK::Processor processor(params, "/path/to/license");

// 2. Get settings for the compiled model
celantur::InferenceEnginePluginSettings settings = processor.get_inference_settings("/path/to/compiled/model");

// 3. Optional: Adjust inference settings here

// 4. Load the compiled inference model
processor.load_inference_model(settings);
```

### Run inference and blurring <a href="#run-inference-and-blurring" id="run-inference-and-blurring"></a>

5. Run inference with `Processor.process()`.
6. Get the anonymised image with `Processor.get_result()`.
7. Get the detections with `Processor.get_detections()`. This step is required to remove the processed item from the internal queue and free its memory.

#### Example code

```cpp
#include "CelanturSDKInterface.h"
#include "CelanturDetection.h"
#include <opencv2/opencv.hpp>

// Load image
cv::Mat image = cv::imread("/path/to/original/image");

// 5. Run inference
processor.process(image);

// 6. Get anonymised image
cv::Mat out = processor.get_result();

// 7. Get detections. This removes the item from the internal queue and frees its memory.
processor.get_detections();

// Save the image
cv::imwrite("/path/to/anonymised/image", out);
```
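
Since `get_detections()` removes the processed item from the internal queue, each `process()` call should be paired with `get_result()` and `get_detections()`. Below is a minimal sketch of processing several images in sequence with the `processor` initialised earlier, assuming this one-in/one-out pairing (check the [SDK library reference](https://www.celantur.com/sdk/doc/current/) for the exact queueing semantics):

```cpp
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include "CelanturSDKInterface.h"
#include "CelanturDetection.h"

// Hypothetical batch loop: anonymise several images one after another.
std::vector<std::string> input_paths = {"/path/to/a.jpg", "/path/to/b.jpg"};
for (const std::string& path : input_paths) {
    cv::Mat image = cv::imread(path);
    if (image.empty()) continue;  // skip unreadable files

    processor.process(image);                     // run inference on this image
    cv::Mat anonymised = processor.get_result();  // fetch the blurred image
    processor.get_detections();                   // dequeue the item and free its memory

    cv::imwrite(path + ".anonymised.jpg", anonymised);
}
```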

