Architecture
The main class is `CelanturSDK::Processor`. It provides the interface to the anonymisation library. To create an instance of this class you need (see the sketch below):

- a `ProcessorParams` object with the `inference_plugin` field set,
- a `license_path` variable that points to a valid license.
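A minimal construction sketch, assuming the `Processor` constructor takes the parameter struct and the license path, that `ProcessorParams` lives in the `CelanturSDK` namespace, and using an illustrative header path:

```cpp
// Sketch only: the header path, the namespace of ProcessorParams and the
// constructor signature are assumptions, not the exact SDK API.
#include <CelanturSDK/Processor.h>

#include <filesystem>

int main()
{
    CelanturSDK::ProcessorParams params;
    // Shared object providing the inference engine (assumed field usage).
    params.inference_plugin = "libONNXInference.so";

    // Path to a valid Celantur license.
    std::filesystem::path license_path = "celantur.lic";

    // Assumed constructor: parameters plus license path.
    CelanturSDK::Processor processor(params, license_path);
    return 0;
}
```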
After creating the processor, you need to load a model and use the following functions to perform anonymisation (a short usage sketch follows the list):

- `process` to post a new image for processing. The function is non-blocking and returns control immediately after posting the image to the processing queue.
- `get_result` to get the next anonymised image from the queue.
- `get_detections` to get the list of detections. You can use them e.g. to debug the results, display the detections or create metadata JSON similar to the Container.
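The processing flow then looks roughly like this; the argument and return types are assumptions based on the function names above, and the model-loading call is omitted because its exact name is not shown here:

```cpp
// Sketch only: argument and return types are assumptions, not the exact
// SDK signatures. A model must already have been loaded into the processor.
#include <CelanturSDK/Processor.h>

#include <vector>

void anonymise_one(CelanturSDK::Processor &processor,
                   const std::vector<char> &encoded_image)
{
    // Non-blocking: the image is queued and control returns immediately.
    processor.process(encoded_image);

    // Fetch the next anonymised image from the processing queue.
    auto anonymised = processor.get_result();

    // Inspect the detections, e.g. for debugging, visualisation or metadata JSON.
    auto detections = processor.get_detections();
}
```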
You need two modules to use the SDK. The first is `CppProcessing::CelanturSDK`, which contains the general interfaces to the SDK and is the entry point for interacting with it. The second is `CppProcessing::common-module`, which contains shared definitions, classes and structures. Other shared objects do not provide include files because they are transitive dependencies of `CelanturSDK`.
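As an illustration, application code only needs headers from these two public modules; the header paths below are hypothetical and only show the intent:

```cpp
// Hypothetical header paths -- shown only to illustrate that application code
// includes the SDK entry point plus the common definitions, nothing else.
#include <CelanturSDK/Processor.h>  // CppProcessing::CelanturSDK (entry point)
#include <common/definitions.h>     // CppProcessing::common-module (shared types)
```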
Different inference plugins require different dependencies to be installed on the target machine. For example, `libONNXInference.so` depends on ONNX Runtime to run detection on the CPU, while `libTensorRTRuntime.so` depends on NVIDIA's CUDA libraries to run detections on the GPU. To avoid hard dependencies on these libraries, we use a plugin system: the dependencies are encapsulated in a plugin that is loaded at runtime. You need to load at least one inference engine plugin for the SDK to work, which is covered in .
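For instance, an application might pick the plugin according to the available hardware; the field name, constructor signature and plugin file names below repeat the assumptions from the earlier sketches:

```cpp
// Sketch: choosing which inference plugin to load at runtime.
// Field name, constructor signature and plugin file names are assumptions.
#include <CelanturSDK/Processor.h>

#include <filesystem>
#include <memory>

std::unique_ptr<CelanturSDK::Processor>
make_processor(bool use_gpu, const std::filesystem::path &license_path)
{
    CelanturSDK::ProcessorParams params;
    // libTensorRTRuntime.so needs NVIDIA's CUDA libraries (GPU inference),
    // libONNXInference.so runs detection on the CPU.
    params.inference_plugin = use_gpu ? "libTensorRTRuntime.so"
                                      : "libONNXInference.so";
    return std::make_unique<CelanturSDK::Processor>(params, license_path);
}
```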