Level 5 API

Dataframe API

typedef void (*ccv_cnnp_column_data_enum_f)(const int column_idx, const int *const row_idxs, const int row_size, void **const data, void *const context, ccv_nnc_stream_context_t *const stream_context)

A data enumeration function to supply data for given row indexes.

typedef void (*ccv_cnnp_column_data_deinit_f)(void *const data, void *const context)

A destructor for data.

typedef void (*ccv_cnnp_column_data_context_deinit_f)(void *const context)

A destructor for context.

typedef struct ccv_cnnp_dataframe_s ccv_cnnp_dataframe_t

An opaque pointer to the dataframe object.

typedef void (*ccv_cnnp_column_data_map_f)(void ***const column_data, const int column_size, const int batch_size, void **const data, void *const context, ccv_nnc_stream_context_t *const stream_context)

A map function that takes the data from multiple columns and derives new data out of it.

typedef void (*ccv_cnnp_column_data_reduce_f)(void **const input_data, const int batch_size, void **const output_data, void *const context, ccv_nnc_stream_context_t *const stream_context)

A reduce function that takes multiple rows of one column and reduces them to one row.

typedef struct ccv_cnnp_dataframe_iter_s ccv_cnnp_dataframe_iter_t

The opaque pointer to the iterator.

ccv_cnnp_dataframe_new(const ccv_cnnp_column_data_t *const column_data, const int column_size, const int row_count)

Create a dataframe object with given column data.

Parameters
  • column_data: The column data that can be loaded.
  • column_size: The size of column data array.
  • row_count: The number of rows in this dataframe.
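
A minimal sketch of setting up a one-column dataframe backed by a plain C array; the enumeration function int_enum and the array int_array are hypothetical, not part of the API:

  // Hypothetical enumeration function: hands out pointers into a plain int array
  // for the requested row indexes.
  static void int_enum(const int column_idx, const int *const row_idxs, const int row_size,
    void **const data, void *const context, ccv_nnc_stream_context_t *const stream_context)
  {
    int *const int_array = (int *)context;
    int i;
    for (i = 0; i < row_size; i++)
      data[i] = int_array + row_idxs[i];
  }

  static int int_array[8] = {0, 1, 2, 3, 4, 5, 6, 7};
  const ccv_cnnp_column_data_t column_data = {
    .data_enum = int_enum,
    .context = int_array,
  };
  ccv_cnnp_dataframe_t *const dataframe = ccv_cnnp_dataframe_new(&column_data, 1, 8);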

ccv_cnnp_dataframe_add(ccv_cnnp_dataframe_t *const dataframe, ccv_cnnp_column_data_enum_f data_enum, const int stream_type, ccv_cnnp_column_data_deinit_f data_deinit, void *const context, ccv_cnnp_column_data_context_deinit_f context_deinit)

Add a new column to the dataframe.

Return
The new column index.
Parameters
  • dataframe: The dataframe object to add column to.
  • data_enum: The data provider function for the new column.
  • stream_type: The type of stream context for this new column.
  • data_deinit: The deinit function that will be used to destroy the column data.
  • context: The context that can be used to generate the new column.
  • context_deinit: The deinit function that will be used to destroy the context.

ccv_cnnp_dataframe_map(ccv_cnnp_dataframe_t *const dataframe, ccv_cnnp_column_data_map_f map, const int stream_type, ccv_cnnp_column_data_deinit_f data_deinit, const int *const column_idxs, const int column_idx_size, void *const context, ccv_cnnp_column_data_context_deinit_f context_deinit)

Derive a new column out of existing columns in the dataframe.

Return
The new column index.
Parameters
  • dataframe: The dataframe object that contains existing columns.
  • map: The map function used to derive new column from existing columns.
  • stream_type: The type of stream context for this derived column.
  • data_deinit: The deinit function that will be used to destroy the derived data.
  • column_idxs: The columns that will be used to derive the new column.
  • column_idx_size: The size of the existing columns array.
  • context: The context that can be used to generate the new column.
  • context_deinit: The deinit function that will be used to destroy the context.
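
A sketch of deriving a column that doubles the integers of an existing column; the map function double_int, the deinit function free_int, and the source column index int_column_idx are hypothetical:

  // Hypothetical map function: writes 2 * input into the derived row data,
  // allocating the row storage on first use (assuming the framework reuses it afterwards).
  static void double_int(void ***const column_data, const int column_size, const int batch_size,
    void **const data, void *const context, ccv_nnc_stream_context_t *const stream_context)
  {
    int i;
    for (i = 0; i < batch_size; i++)
    {
      if (!data[i])
        data[i] = malloc(sizeof(int));
      *(int *)data[i] = 2 * *(int *)column_data[0][i];
    }
  }

  // Hypothetical deinit matching ccv_cnnp_column_data_deinit_f: frees the malloc'd row data.
  static void free_int(void *const data, void *const context)
  {
    free(data);
  }

  const int derived_idx = ccv_cnnp_dataframe_map(dataframe, double_int, 0, free_int, &int_column_idx, 1, 0, 0);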

void ccv_cnnp_dataframe_shuffle(ccv_cnnp_dataframe_t *const dataframe)

Shuffle an existing dataframe.

Parameters
  • dataframe: The dataframe that is about to be shuffled.

ccv_cnnp_dataframe_row_count(ccv_cnnp_dataframe_t *const dataframe)

Query row count of the dataframe.

Return
The row count of the dataframe.
Parameters
  • dataframe: The dataframe we want to query the row count of.

ccv_cnnp_dataframe_reduce_new(ccv_cnnp_dataframe_t *const dataframe, ccv_cnnp_column_data_reduce_f reduce, ccv_cnnp_column_data_deinit_f data_deinit, const int column_idx, const int batch_size, void *const context, ccv_cnnp_column_data_context_deinit_f context_deinit)

Reduce a dataframe by batch size: every batch_size rows are reduced to 1 row by the reduce function on one specific column. This also reduces the multi-column dataframe down to 1 column, keeping only the column selected for reduction.

Return
The reduced dataframe.
Parameters
  • dataframe: The dataframe that is about to be reduced.
  • reduce: The reduce function used to reduce n rows into 1.
  • data_deinit: The deinit function that will be used to destroy the reduced data.
  • column_idx: The column we selected to reduce.
  • batch_size: How many rows from the original data will be reduced to 1 row.
  • context: The context that can be used in the reduce function.
  • context_deinit: The deinit function that will be used to destroy the context.
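
A sketch that batches an int column by summing every 64 rows into 1; the reduce function sum_int is hypothetical, and the single output row is assumed to be written to output_data[0]:

  // Hypothetical reduce function: sums batch_size int rows into one output row.
  static void sum_int(void **const input_data, const int batch_size, void **const output_data,
    void *const context, ccv_nnc_stream_context_t *const stream_context)
  {
    int i, sum = 0;
    for (i = 0; i < batch_size; i++)
      sum += *(int *)input_data[i];
    if (!output_data[0])
      output_data[0] = malloc(sizeof(int));
    *(int *)output_data[0] = sum;
  }

  // data_deinit is left 0 for brevity; a real column would pass a deinit that frees the malloc'd rows.
  ccv_cnnp_dataframe_t *const batched = ccv_cnnp_dataframe_reduce_new(dataframe, sum_int, 0, 0, 64, 0, 0);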

ccv_cnnp_dataframe_iter_new(ccv_cnnp_dataframe_t *const dataframe, const int *const column_idxs, const int column_idx_size)

Get a new iterator of the dataframe.

Return
The opaque iterator object.
Parameters
  • dataframe: The dataframe object to iterate through.
  • column_idxs: The columns that will be iterated.
  • column_idx_size: The size of columns array.

int ccv_cnnp_dataframe_iter_next(ccv_cnnp_dataframe_iter_t *const iter, void **const data_ref, const int column_idx_size, ccv_nnc_stream_context_t *const stream_context)

Get the next item from the iterator.

Return
0 if the iteration is successful, -1 if it is ended.
Parameters
  • iter: The iterator to go through.
  • data_ref: The output for the data.
  • column_idx_size: The size of the data_ref array.
  • stream_context: The stream context to extract data asynchronously.
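
Putting the iterator calls together, a sketch of one full pass over a single column (column 0 here, no stream context):

  const int column_idx = 0;
  ccv_cnnp_dataframe_iter_t *const iter = ccv_cnnp_dataframe_iter_new(dataframe, &column_idx, 1);
  void *data = 0;
  // ccv_cnnp_dataframe_iter_next returns 0 while rows remain and -1 once the iteration has ended.
  while (ccv_cnnp_dataframe_iter_next(iter, &data, 1, 0) == 0)
  {
    // data points to the row produced by the column's enumeration / map function.
  }
  ccv_cnnp_dataframe_iter_free(iter);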

int ccv_cnnp_dataframe_iter_prefetch(ccv_cnnp_dataframe_iter_t *const iter, const int prefetch_count, ccv_nnc_stream_context_t *const stream_context)

Prefetch the next item on the iterator with the given stream context. You can call this method multiple times to prefetch multiple items ahead of time.

Return
0 if the prefetch is successful, -1 if it is ended.
Parameters
  • iter: The iterator to go through.
  • prefetch_count: How many items ahead we should prefetch.
  • stream_context: The stream context to extract data asynchronously.

int ccv_cnnp_dataframe_iter_set_cursor(ccv_cnnp_dataframe_iter_t *const iter, const int idx)

Set the cursor of the iterator. When set to 0, the iterator effectively restarts.

Return
0 if it is successful, -1 if it is not (the index exceeds the range).
Parameters
  • iter: The iterator to go through.
  • idx: The index of the cursor.

void ccv_cnnp_dataframe_iter_free(ccv_cnnp_dataframe_iter_t *const iter)

Free the dataframe iterator object.

Parameters
  • iter: The dataframe iterator to be freed.

void ccv_cnnp_dataframe_free(ccv_cnnp_dataframe_t *const dataframe)

Free the dataframe object.

Parameters
  • dataframe: The dataframe object to be freed.

struct ccv_cnnp_column_data_t
#include <ccv_nnc.h>

Column data.

Public Members

int stream_type

The type of stream context for this column. Each column is only compatible with one stream type.

ccv_cnnp_column_data_enum_f data_enum

The data enumeration function for this column.

ccv_cnnp_column_data_deinit_f data_deinit

The deinit function that will be used to destroy the data.

void *context

The context that goes along with this column.

ccv_cnnp_column_data_context_deinit_f context_deinit

The deinit function that will be used to destroy the context.

Dataframe Add-ons

ccv_cnnp_dataframe_from_array_new(ccv_array_t *const array)

Turn a ccv_array_t into a dataframe object.

Return
The new dataframe object.
Parameters
  • array: The array we want to turn into a dataframe object.

ccv_cnnp_dataframe_copy_to_gpu(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const int tensor_offset, const int tensor_size, int device_id)

Derive a new column that copies a tensor array from the given column to the derived column on GPU.

Return
The index of the newly derived column.
Parameters
  • dataframe: The dataframe object that gets the derived column.
  • column_idx: The original column that contains the tensor array on CPU.
  • tensor_offset: Only copy as outputs[i] = inputs[i + tensor_offset].
  • tensor_size: How many tensors in the tensor array.
  • device_id: The device we want to copy the tensors to.
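
For instance, assuming column_idx already holds a one-tensor CPU tensor array per row, a copy onto GPU device 0 can be derived with:

  // Derive a column whose rows hold the same tensor, copied to GPU device 0.
  const int gpu_column_idx = ccv_cnnp_dataframe_copy_to_gpu(dataframe, column_idx, 0, 1, 0);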

ccv_cnnp_dataframe_add_aux(ccv_cnnp_dataframe_t *const dataframe, const ccv_nnc_tensor_param_t params)

Add a new column that contains some tensors. Each row of this new column is a tensor with the specified parameters. It comes in handy when you want to have some auxiliary tensors along with each row.

Return
The index of the newly added column.
Parameters
  • dataframe: The dataframe object that gets the new column.
  • params: The parameters for the tensors.

Model API

enum [anonymous]::__anonymous94

Values:

CCV_CNNP_MODEL_CHECKPOINT_READ_WRITE

This is the default flag: if the model is not initialized, it will attempt to read parameters from the disk. Otherwise, it will persist existing parameters to disk.

CCV_CNNP_MODEL_CHECKPOINT_READ_ONLY

Only read parameters off the disk, even if the model is already initialized.

CCV_CNNP_MODEL_CHECKPOINT_WRITE_ONLY

Only write parameters to disk.

enum [anonymous]::__anonymous95

Values:

CCV_CNNP_ACTIVATION_NONE
CCV_CNNP_ACTIVATION_RELU
CCV_CNNP_ACTIVATION_SOFTMAX
enum [anonymous]::__anonymous96

Values:

CCV_CNNP_NO_NORM
CCV_CNNP_BATCH_NORM
typedef struct ccv_cnnp_model_s ccv_cnnp_model_t

The model type is an abstract type; you won't ever interact with a naked model directly.

typedef struct ccv_cnnp_model_io_s *ccv_cnnp_model_io_t

With this type, NNC now has 4 types that represent a “tensor”:

  • ccv_nnc_tensor_t / ccv_nnc_tensor_view_t / ccv_nnc_tensor_multiview_t: a concrete tensor with memory allocated.
  • ccv_nnc_tensor_symbol_t: a symbolic representation of a tensor, with its data layout, device affinity, and type specified.
  • ccv_nnc_tensor_variable_t: in the dynamic graph, a concrete tensor with memory allocated that is also associated with a recorded execution.
  • ccv_cnnp_model_io_t: the most flexible one. No data layout, device affinity, or type is specified; the format has to be c / h / w, with no batch size needed. This is a handle used by the model API to associate model inputs / outputs.

ccv_cnnp_input(void)

Create a naked input.

Return
A ccv_cnnp_model_io_t represents an input.

ccv_cnnp_model_apply(ccv_cnnp_model_t *const model, const ccv_cnnp_model_io_t *const inputs, const int input_size)

This method mimics the Keras callable for a model (thus, akin to overriding the __call__ method in a Python class).

Return
A ccv_cnnp_model_io_t that represents the output of the given model.
Parameters
  • model: A model to which we can apply a set of inputs to get one output.
  • inputs: The set of inputs.
  • input_size: The size of inputs array.

ccv_cnnp_model_new(const ccv_cnnp_model_io_t *const inputs, const int input_size, const ccv_cnnp_model_io_t *const outputs, const int output_size)

This method name is deceiving. It returns a composed model, not a naked model. This composed model takes a set of inputs and runs them through various other models to arrive at the set of outputs.

Return
A composed model that takes inputs, and generate the outputs.
Parameters
  • inputs: The set of inputs.
  • input_size: The size of inputs array.
  • outputs: The set of outputs.
  • output_size: The size of outputs array.
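
A functional-style sketch: one input flows through two dense layers (created with ccv_cnnp_dense, documented below) and the whole thing is wrapped into a composed model; the ccv_cnnp_param_t settings are illustrative assumptions:

  ccv_cnnp_model_t *const dense0 = ccv_cnnp_dense(128, (ccv_cnnp_param_t){
    .activation = CCV_CNNP_ACTIVATION_RELU,
  });
  ccv_cnnp_model_t *const dense1 = ccv_cnnp_dense(10, (ccv_cnnp_param_t){
    .activation = CCV_CNNP_ACTIVATION_SOFTMAX,
  });
  const ccv_cnnp_model_io_t input = ccv_cnnp_input();
  const ccv_cnnp_model_io_t hidden = ccv_cnnp_model_apply(dense0, &input, 1);
  const ccv_cnnp_model_io_t output = ccv_cnnp_model_apply(dense1, &hidden, 1);
  ccv_cnnp_model_t *const model = ccv_cnnp_model_new(&input, 1, &output, 1);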

ccv_cnnp_sequential_new(ccv_cnnp_model_t *const *const models, const int model_size)

This method returns a sequential model, which is composed of a sequence of models.

Return
A composed model that applies these models one by one in sequence.
Parameters
  • models: The list of models; each takes one input and emits one output, which feeds into the subsequent one.
  • model_size: The size of the list.
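
A sketch of a small classifier built as a sequential composition; the layer choices and ccv_cnnp_param_t settings are illustrative:

  ccv_cnnp_model_t *const mlp = ccv_cnnp_sequential_new((ccv_cnnp_model_t *[]){
    ccv_cnnp_flatten(),
    ccv_cnnp_dense(128, (ccv_cnnp_param_t){
      .activation = CCV_CNNP_ACTIVATION_RELU,
    }),
    ccv_cnnp_dense(10, (ccv_cnnp_param_t){
      .activation = CCV_CNNP_ACTIVATION_SOFTMAX,
    }),
  }, 3);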

void ccv_cnnp_model_compile(ccv_cnnp_model_t *const model, const ccv_nnc_tensor_param_t *const inputs, const int input_size, const ccv_nnc_cmd_t minimizer, const ccv_nnc_cmd_t loss)

Prepare the model to be trained; the inputs specify the batch size, etc. The input size technically is not needed; it is here as a safety check.

Parameters
  • model: The model to be compiled.
  • inputs: The tensor parameters for the model’s inputs, that can be used to derive all tensor shapes.
  • input_size: The size of the inputs array.
  • minimizer: The wrapped command that represents a particular optimization strategy.
  • loss: The wrapped command that computes the loss function.
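
A compile sketch, assuming input_params is a ccv_nnc_tensor_param_t describing one batched input and that sgd and crossentropy are ccv_nnc_cmd_t values wrapped elsewhere (an SGD-style minimizer and a cross-entropy-style loss):

  // Compile the composed model for training with one input, a minimizer and a loss.
  ccv_cnnp_model_compile(model, &input_params, 1, sgd, crossentropy);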

void ccv_cnnp_model_dot(const ccv_cnnp_model_t *const model, const int flags, FILE *out)

Generate output that can be parsed by GraphViz (DOT language).

Parameters
  • model: The composed model.
  • flags: Either CCV_NNC_SHORT_DOT_GRAPH or CCV_NNC_LONG_DOT_GRAPH.
  • out: The output file stream.

void ccv_cnnp_model_fit(ccv_cnnp_model_t *const model, ccv_nnc_tensor_t *const *const inputs, const int input_size, ccv_nnc_tensor_t *const *const fits, const int fit_size, ccv_nnc_tensor_t *const *const outputs, const int output_size, ccv_nnc_stream_context_t *const stream_context)

Fit a model to a given input / output.

Parameters
  • model: The composed model.
  • inputs: The input tensors.
  • input_size: The size of the input tensors array.
  • fits: The target tensors.
  • fit_size: The size of the target tensors array.
  • outputs: The actual outputs from the model.
  • output_size: The size of the outputs array.
  • stream_context: The stream where the fit can be executed upon.
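
A minimal fit-loop sketch, assuming the model is already compiled and x, y, y_hat are pre-allocated ccv_nnc_tensor_t pointers with matching shapes:

  int i;
  for (i = 0; i < 100; i++)
    // Train on the given batch; pass 0 when no stream context is used.
    ccv_cnnp_model_fit(model, &x, 1, &y, 1, &y_hat, 1, 0);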

void ccv_cnnp_model_evaluate(ccv_cnnp_model_t *const model, ccv_nnc_tensor_t *const *const inputs, const int input_size, ccv_nnc_tensor_t *const *const outputs, const int output_size, ccv_nnc_stream_context_t *const stream_context)

Evaluate the model to produce outputs for the given inputs.

Parameters
  • model: The composed model.
  • inputs: The input tensors.
  • input_size: The size of the input tensors array.
  • outputs: The actual outputs from the model.
  • output_size: The size of the outputs array.
  • stream_context: The stream where the evaluation can be executed upon.

void ccv_cnnp_model_checkpoint(ccv_cnnp_model_t *const model, const char *const fn, const int flags)

This method checkpoints the given model. If the model is initialized, it will persist all parameters to the given file path. If it is not initialized, this method will try to load tensors off the disk.

Parameters
  • model: The composed model.
  • fn: The file name.
  • flags: Whether we perform read / write on this checkpoint, or read only / write only.
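
For example, to persist (or, on a fresh run, load) the parameters with the default read / write behavior; the file name is arbitrary:

  ccv_cnnp_model_checkpoint(model, "mlp.checkpoint", CCV_CNNP_MODEL_CHECKPOINT_READ_WRITE);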

void ccv_cnnp_model_set_data_parallel(ccv_cnnp_model_t *const model, const int parallel)

Apply data parallelism to the composed model. This method has to be called after the model is compiled and before we call either evaluate or fit.

Parameters
  • model: The composed model.
  • parallel: The number of devices we want to run on. 0 will use all available devices. 1 will skip data parallelism.

void ccv_cnnp_model_set_workspace_size(ccv_cnnp_model_t *const model, size_t workspace_size)

This method sets the max workspace size. If the graph is already compiled, it will re-run autotune with the new workspace size to find the best algorithm.

Parameters
  • model: The composed model.
  • workspace_size: The size in bytes that we can use as workspace (scratch memory).

void ccv_cnnp_model_set_minimizer(ccv_cnnp_model_t *const model, const ccv_nnc_cmd_t minimizer)

Set a new minimizer for the model. This is useful when you need to update the learning rate for stochastic gradient descent, for example. This method can be called at any time during the training process (after compilation).

Parameters
  • model: The composed model.
  • minimizer: The wrapped command that represents a new optimization strategy.

ccv_cnnp_model_default_stream(const ccv_cnnp_model_t *const model)

Get the default stream from a compiled model. If the model is not compiled, the default stream is 0.

Return
The default stream for this model.
Parameters
  • model: The composed model.

ccv_cnnp_model_memory_size(const ccv_cnnp_model_t *const model)

Get the allocated memory size (excluding workspace) from a compiled model. If the model is not compiled, the size is 0.

Return
The number of bytes for memory allocated.
Parameters
  • model: The composed model.

void ccv_cnnp_model_free(ccv_cnnp_model_t *const model)

Free a given model.

Parameters
  • model: The composed model.

ccv_cnnp_add(void)

Add multiple input tensors together.

Return
A model that can be applied with multiple inputs, and generate output that is a sum of the inputs.

ccv_cnnp_concat(void)

Concatenate input tensors together.

Return
A model that can be applied with multiple inputs, and generate output that is a concatenation of the inputs.

ccv_cnnp_identity(const ccv_cnnp_param_t params)

An identity layer that takes the input and passes it through as the output, doing nothing else. Realistically, we use this when we want to apply some normalization / activation function on top of the input.

Return
A model that takes input and pass it as output.
Parameters
  • params: Parameters (such as hint and activation or norm).

ccv_cnnp_convolution(const int groups, const int filters, const int kdim[CCV_NNC_MAX_DIM_ALLOC], const ccv_cnnp_param_t params)

A convolution model.

Return
A convolution model.
Parameters
  • groups: The number of kernel groups in the model.
  • filters: The total number of filters in the model (filters = groups * per group filters).
  • kdim: The dimensions of the kernel.
  • params: Other parameters (such as hint and activation or norm).
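
A sketch of a 3x3, 32-filter convolution in a single group, assuming the DIM_ALLOC and HINT convenience macros from the core NNC headers; the norm, activation and hint settings are illustrative:

  ccv_cnnp_model_t *const conv = ccv_cnnp_convolution(1, 32, DIM_ALLOC(3, 3), (ccv_cnnp_param_t){
    .norm = CCV_CNNP_BATCH_NORM,
    .activation = CCV_CNNP_ACTIVATION_RELU,
    .hint = HINT((1, 1), (1, 1)),
  });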

ccv_cnnp_dense(const int count, const ccv_cnnp_param_t params)

A dense layer model.

Return
A dense layer model.
Parameters
  • count: The output dimension.
  • params: Other parameters (such as hint and activation or norm).

ccv_cnnp_max_pool(const int kdim[CCV_NNC_MAX_DIM_ALLOC], const ccv_cnnp_param_t params)

A max pool model.

Return
A max pool model.
Parameters
  • kdim: The pooling window dimension.
  • params: Other parameters (such as hint and activation or norm).

ccv_cnnp_average_pool(const int kdim[CCV_NNC_MAX_DIM_ALLOC], const ccv_cnnp_param_t params)

An average pool model.

Return
An average pool model.
Parameters
  • kdim: The pooling window dimension.
  • params: Other parameters (such as hint and activation or norm).

ccv_cnnp_reshape(const int dim[CCV_NNC_MAX_DIM_ALLOC])

Reshape an input into a different dimension.

Return
A reshape layer model.
Parameters
  • dim: The new dimension for the input.

ccv_cnnp_flatten(void)

Flatten an input tensor into a one dimensional array.

Return
A flatten layer model.

struct ccv_cnnp_param_t

Public Members

int no_bias

No bias term.

int norm

The normalization that can be applied after activation, such as CCV_CNNP_BATCH_NORM.

int activation

The activation that can be applied to the output, such as CCV_CNNP_ACTIVATION_RELU or CCV_CNNP_ACTIVATION_SOFTMAX.

ccv_nnc_hint_t hint

The hint for a particular operation.