Level 5 API

Dataframe API

typedef void (*ccv_cnnp_column_data_enum_f)(const int column_idx, const int *const row_idxs, const int row_size, void **const data, void *const context, ccv_nnc_stream_context_t *const stream_context)

A data enumeration function to supply data for given row indexes.

typedef void (*ccv_cnnp_column_data_deinit_f)(void *const data, void *const context)

A destructor for data.

typedef void (*ccv_cnnp_column_data_context_deinit_f)(void *const context)

A destructor for context.

typedef struct ccv_cnnp_dataframe_s ccv_cnnp_dataframe_t

An opaque structure pointer to the dataframe object.

typedef void (*ccv_cnnp_column_data_map_f)(void *const *const *const column_data, const int column_size, const int batch_size, void **const data, void *const context, ccv_nnc_stream_context_t *const stream_context)

A map function that takes the data from multiple columns and derives new data out of it.

typedef void (*ccv_cnnp_column_data_reduce_f)(void *const *const input_data, const int batch_size, void **const output_data, void *const context, ccv_nnc_stream_context_t *const stream_context)

A reduce function that takes multiple rows of one column and reduces them to one row.

typedef struct ccv_cnnp_dataframe_iter_s ccv_cnnp_dataframe_iter_t

The opaque pointer to the iterator.

ccv_cnnp_dataframe_new(const ccv_cnnp_column_data_t *const column_data, const int column_size, const int row_count)

Create a dataframe object with given column data.

Parameters
  • column_data: The column data that can be loaded.

  • column_size: The size of column data array.

  • row_count: The number of rows in this dataframe.
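
For illustration, a minimal sketch of backing a one-column dataframe with a plain C array. The int_enum function and my_ints array are hypothetical; nothing is owned by the dataframe, so both deinit callbacks are left as 0:

    #include <ccv_nnc.h>

    // Hypothetical enumeration function: hands out pointers into the int array kept in context.
    static void int_enum(const int column_idx, const int* const row_idxs, const int row_size,
        void** const data, void* const context, ccv_nnc_stream_context_t* const stream_context)
    {
        int* const ints = (int*)context;
        int i;
        for (i = 0; i < row_size; i++)
            data[i] = ints + row_idxs[i]; // no copy, hence no data_deinit needed
    }

    static int my_ints[4] = { 3, 1, 4, 1 };

    ccv_cnnp_dataframe_t* int_dataframe(void)
    {
        ccv_cnnp_column_data_t column = {
            .data_enum = int_enum,
            .context = my_ints,
            .name = "ints",
        };
        return ccv_cnnp_dataframe_new(&column, 1, 4);
    }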

ccv_cnnp_dataframe_add(ccv_cnnp_dataframe_t *const dataframe, ccv_cnnp_column_data_enum_f data_enum, const int stream_type, ccv_cnnp_column_data_deinit_f data_deinit, void *const context, ccv_cnnp_column_data_context_deinit_f context_deinit, const char *name)

Add a new column to the dataframe.

Return

The new column index.

Parameters
  • dataframe: The dataframe object to add column to.

  • data_enum: The data provider function for the new column.

  • stream_type: The type of stream context for this derived column.

  • data_deinit: The deinit function will be used to destroy the derived data.

  • context: The context that can be used to generate new column.

  • context_deinit: The deinit function will be used to destroy the context.

  • name: The name of the newly added column.

ccv_cnnp_dataframe_map(ccv_cnnp_dataframe_t *const dataframe, ccv_cnnp_column_data_map_f map, const int stream_type, ccv_cnnp_column_data_deinit_f data_deinit, const int *const column_idxs, const int column_idx_size, void *const context, ccv_cnnp_column_data_context_deinit_f context_deinit, const char *name)

Derive a new column out of existing columns in the dataframe.

Return

The new column index.

Parameters
  • dataframe: The dataframe object that contains existing columns.

  • map: The map function used to derive new column from existing columns.

  • stream_type: The type of stream context for this derived column.

  • data_deinit: The deinit function will be used to destroy the derived data.

  • column_idxs: The columns that will be used to derive new column.

  • column_idx_size: The size of existing columns array.

  • context: The context that can be used to generate new column.

  • context_deinit: The deinit function will be used to destroy the context.

  • name: The name of the new column.
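
As a sketch, continuing the hypothetical integer column above (assumed to be at index 0), a map that derives a doubled value per row. The stream_type of 0 is assumed to mean a CPU column here:

    #include <stdlib.h>

    // Hypothetical map: reads int pointers from the single input column and writes
    // newly-allocated doubled values; the matching data_deinit frees them.
    static void double_map(void* const* const* const column_data, const int column_size, const int batch_size,
        void** const data, void* const context, ccv_nnc_stream_context_t* const stream_context)
    {
        int i;
        for (i = 0; i < batch_size; i++) {
            int* const v = (int*)malloc(sizeof(int));
            *v = 2 * *(int*)column_data[0][i]; // column_data[column][row]
            data[i] = v;
        }
    }

    static void free_data(void* const data, void* const context)
    {
        free(data);
    }

    int derive_doubled(ccv_cnnp_dataframe_t* const dataframe)
    {
        const int src_column = 0; // assumed index of the source int column
        return ccv_cnnp_dataframe_map(dataframe, double_map, 0, free_data, &src_column, 1, 0, 0, "doubled");
    }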

void ccv_cnnp_dataframe_shuffle(ccv_cnnp_dataframe_t *const dataframe)

Shuffle an existing dataframe.

Parameters
  • dataframe: The dataframe that is about to be shuffled.

ccv_cnnp_dataframe_row_count(ccv_cnnp_dataframe_t *const dataframe)

Query row count of the dataframe.

Return

The row count of the dataframe.

Parameters
  • dataframe: The dataframe we want to query row count.

ccv_cnnp_dataframe_column_name(ccv_cnnp_dataframe_t *const dataframe, const int column_idx)

Query the column name of a given column on the dataframe.

Return

The name of the column.

Parameters
  • dataframe: The dataframe we want to query the column name.

  • column_idx: The index of a column.

ccv_cnnp_dataframe_reduce_new(ccv_cnnp_dataframe_t *const dataframe, ccv_cnnp_column_data_reduce_f reduce, ccv_cnnp_column_data_deinit_f data_deinit, const int column_idx, const int batch_size, void *const context, ccv_cnnp_column_data_context_deinit_f context_deinit)

Reduce a dataframe by batch size. Thus, n rows are reduced to 1 row per reduce function on one specific column. This will also reduce the multi-column dataframe down to 1 column by selecting the one column to reduce.

Return

The reduced dataframe.

Parameters
  • dataframe: The dataframe that is about to be reduced.

  • reduce: The reduce function used to reduce n rows into 1.

  • data_deinit: The deinit function will be used to destroy the derived data.

  • column_idx: The column we selected to reduce.

  • batch_size: How many rows will be reduced to 1 row from the original data.

  • context: The context that can be used in reduce function.

  • context_deinit: The deinit function will be used to destroy the context.

ccv_cnnp_dataframe_extract_value(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const off_t offset, const char *name)

Extract a value out of a struct. Assuming the data points to a struct, this method extracts the value at the given offset of that struct. For example, if you have struct { ccv_nnc_tensor_t* a; ccv_nnc_tensor_t* b; } S; and you want to extract the b tensor into a different column, you can call this function with offsetof(S, b).

Return

The new column that contains the extracted value.

Parameters
  • dataframe: The dataframe object to be extracted.

  • column_idx: The column that we want to extract value of.

  • offset: The offset. For example, offsetof(S, b).

  • name: The name of the new column.
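
A short sketch, assuming a column pair_column whose rows are pointers to the hypothetical pair_t struct below:

    #include <stddef.h> // offsetof

    typedef struct {
        ccv_nnc_tensor_t* a;
        ccv_nnc_tensor_t* b;
    } pair_t; // hypothetical per-row struct

    int extract_b(ccv_cnnp_dataframe_t* const dataframe, const int pair_column)
    {
        // Derive a column that carries only the b tensor of each row.
        return ccv_cnnp_dataframe_extract_value(dataframe, pair_column, offsetof(pair_t, b), "b");
    }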

ccv_cnnp_dataframe_make_tuple(ccv_cnnp_dataframe_t *const dataframe, const int *const column_idxs, const int column_idx_size, const char *name)

Make a tuple out of the columns specified. Thus, the newly derived column will contain a tuple with data from all the columns specified here. A tuple here is represented as void* tuple[], an array of void* pointers.

Return

The derived column with the tuple.

Parameters
  • dataframe: The dataframe that will contain the new column.

  • column_idxs: The columns to be tupled.

  • column_idx_size: The number of columns.

  • name: The name of the new column.

ccv_cnnp_dataframe_tuple_size(const ccv_cnnp_dataframe_t *const dataframe, const int column_idx)

The size of the tuple. It is equal to the number of columns we specified. The behavior of calling this method on a column that is not a tuple is undefined.

Return

The tuple size of the column.

Parameters
  • dataframe: The dataframe that contains the tuple column.

  • column_idx: The tuple column we are going to inspect.

ccv_cnnp_dataframe_extract_tuple(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const int index, const char *name)

Extract a data out of a tuple.

Return

The derived column with the extracted value.

Parameters
  • dataframe: The dataframe that will contain the new column.

  • column_idx: The column that is a tuple.

  • index: The index into the tuple.

  • name: The name of the new column.

ccv_cnnp_dataframe_iter_new(ccv_cnnp_dataframe_t *const dataframe, const int *const column_idxs, const int column_idx_size)

Get a new iterator of the dataframe.

Return

The opaque iterator object.

Parameters
  • dataframe: The dataframe object to iterate through.

  • column_idxs: The columns that will be iterated.

  • column_idx_size: The size of columns array.

int ccv_cnnp_dataframe_iter_next(ccv_cnnp_dataframe_iter_t *const iter, void **const data_ref, const int column_idx_size, ccv_nnc_stream_context_t *const stream_context)

Get the next item from the iterator.

Return

0 if the iteration is successful, -1 if there are no more rows, -2 if it has already ended.

Parameters
  • iter: The iterator to go through.

  • data_ref: The output for the data.

  • column_idx_size: The size of the data_ref array.

  • stream_context: The stream context to extract data asynchronously.
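
A minimal iteration sketch over one column of the hypothetical integer dataframe above, run synchronously (stream context 0):

    #include <stdio.h>

    void print_ints(ccv_cnnp_dataframe_t* const dataframe, const int column_idx)
    {
        ccv_cnnp_dataframe_iter_t* const iter = ccv_cnnp_dataframe_iter_new(dataframe, &column_idx, 1);
        void* data = 0;
        // 0 means a row was fetched; -1 / -2 signal the end of iteration.
        while (ccv_cnnp_dataframe_iter_next(iter, &data, 1, 0) == 0)
            printf("%d\n", *(int*)data);
        ccv_cnnp_dataframe_iter_free(iter);
    }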

void ccv_cnnp_dataframe_iter_peek(ccv_cnnp_dataframe_iter_t *const iter, void **const data_ref, const int offset, const int data_ref_size, ccv_nnc_stream_context_t *const stream_context)

Assuming the iterator stays on the same row, peek into a potentially different column index.

Parameters
  • iter: The iterator to go through.

  • data_ref: The output for the data.

  • offset: The offset for which column in this iterator to peek at.

  • data_ref_size: How many columns in this iterator to peek at.

  • stream_context: The stream context to extract data asynchronously.

int ccv_cnnp_dataframe_iter_prefetch(ccv_cnnp_dataframe_iter_t *const iter, const int prefetch_count, ccv_nnc_stream_context_t *const stream_context)

Prefetch next item on the iterator with the given stream context. You can call this method multiple times to prefetch multiple items ahead of time.

Return

0 if the prefetch is successful, -1 if it is ended.

Parameters
  • iter: The iterator to go through.

  • prefetch_count: How many items ahead we should prefetch.

  • stream_context: The stream context to extract data asynchronously.

int ccv_cnnp_dataframe_iter_set_cursor(ccv_cnnp_dataframe_iter_t *const iter, const int idx)

Set the cursor of the iterator. When set to 0, the iterator effectively restarts.

Return

0 if it is successful, -1 if it is not (exceeds the range).

Parameters
  • iter: The iterator to go through.

  • idx: The index of the cursor.

void ccv_cnnp_dataframe_iter_free(ccv_cnnp_dataframe_iter_t *const iter)

Free the dataframe iterator object.

Parameters
  • iter: The dataframe iterator to be freed.

void ccv_cnnp_dataframe_free(ccv_cnnp_dataframe_t *const dataframe)

Free the dataframe object.

Parameters
  • dataframe: The dataframe object to be freed.

struct ccv_cnnp_column_data_t
#include <ccv_nnc.h>

Column data.

Public Members

int stream_type

The type of stream context for this column. Each column is only compatible with one stream type.

char *name

The name of the column.

ccv_cnnp_column_data_enum_f data_enum

The data enumeration function for this column.

ccv_cnnp_column_data_deinit_f data_deinit

The deinit function that will be used to destroy the data.

void *context

The context that goes along with this column.

ccv_cnnp_column_data_context_deinit_f context_deinit

The deinit function that will be used to destroy the context.

Dataframe Add-ons

ccv_cnnp_dataframe_from_array_new(ccv_array_t *const array)

Turn a ccv_array_t into a dataframe object.

Return

The new dataframe object.

Parameters
  • array: The array we want to turn into a dataframe object.

ccv_cnnp_dataframe_copy_to_gpu(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const int tensor_offset, const int tensor_size, const int device_id, const char *name)

Derive a new column that copies a tensor array from the given column to the derived column on GPU.

Return

The index of the newly derived column.

Parameters
  • dataframe: The dataframe object that get the derived column.

  • column_idx: The original column that contains the tensor array on CPU.

  • tensor_offset: Copy starting at this offset, i.e. outputs[i] = inputs[i + tensor_offset].

  • tensor_size: How many tensors in the tensor array.

  • device_id: The device we want to copy the tensors to.

  • name: The name of the new column.

ccv_cnnp_dataframe_cmd_exec(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const ccv_nnc_cmd_t cmd, const ccv_nnc_hint_t hint, const int flags, const int input_offset, const int input_size, const ccv_nnc_tensor_param_t *const output_params, const int output_size, const int stream_type, const char *name)

Derive a new column by executing a generic command.

Return

The index of the newly derived column.

Parameters
  • dataframe: The dataframe object that get the derived column.

  • column_idx: The original column that contains the tensor array.

  • cmd: The command for this operation.

  • hint: The hint to run the command.

  • flags: The flags with the command.

  • input_offset: Use inputs[i + input_offset] to inputs[i + input_offset + input_size - 1] as the inputs.

  • input_size: How many tensors in the input array.

  • output_params: The parameters for the outputs.

  • output_size: How many tensors in the output array.

  • stream_type: The type of stream context we are going to use.

  • name: The name of the new column.

ccv_cnnp_dataframe_add_aux(ccv_cnnp_dataframe_t *const dataframe, const ccv_nnc_tensor_param_t params, const char *name)

Add a new column that contains some tensors. This will add a new column in which each row is a tensor specified by the given parameters. It comes in handy when you want to have some auxiliary tensors along with each row.

Return

The index of the newly added column.

Parameters
  • dataframe: The dataframe object that get the new column.

  • params: The parameters for the tensors.

  • name: The name of the new column.

ccv_cnnp_dataframe_read_image(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const off_t structof, const char *name)

Read an image off the said column. That column should contain the filename (as a char array). The new column will contain the ccv_dense_matrix_t / ccv_nnc_tensor_t (both are toll-free bridged) of the image.

Return

The index of the newly derived column.

Parameters
  • dataframe: The dataframe object that loads the images.

  • column_idx: The column which contains the filename.

  • structof: The offset to the filename (as a char array) from that column. For example, the column could be a struct and the filename could be one of its fields. In that case, you can pass offsetof(S, filename).

  • name: The name of the new column.

ccv_cnnp_dataframe_image_random_jitter(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const int datatype, const ccv_cnnp_random_jitter_t random_jitter, const char *name)

Apply random jitter on an image to generate a new image.

Return

The index of the newly derived column.

Parameters
  • dataframe: The dataframe object that contains the original image.

  • column_idx: The column which contains the original image.

  • datatype: The final datatype of the image. We only support CCV_32F right now.

  • random_jitter: The random jitter parameters to be applied.

  • name: The name of the new column.

ccv_cnnp_dataframe_one_hot(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const off_t structof, const int range, const float onval, const float offval, const int datatype, const int format, const char *name)

Generate a one-hot tensor off the label from a struct.

Return

The index of the newly derived column.

Parameters
  • dataframe: The dataframe object that contains the label.

  • column_idx: The column which contains the label (as int).

  • structof: The offset to the label (as int) from that column. For example, the column could be a struct and the label could be one of its fields; in that case you can pass offsetof(S, label).

  • range: The range of the label, from [0…range - 1]

  • onval: The value when it hit.

  • offval: The value for the others.

  • datatype: The datatype of the tensor.

  • format: The format of the tensor.

  • name: The name of the new column.

ccv_cnnp_dataframe_copy_scalar(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const off_t structof, const int from_dt, const int to_dt, const int format, const char *name)

Generate a scalar tensor (a tensor with one value) off a value from a struct.

Return

The index of the newly derived column.

Parameters
  • dataframe: The dataframe object that contains the value.

  • column_idx: The column which contains the value (as datatype).

  • structof: The offset to the value from that column. For example, the column could be a struct and the value could be one of its fields, in which case you can pass its offsetof.

  • from_dt: The datatype of the value.

  • to_dt: The datatype of the tensor.

  • format: The format of the tensor.

  • name: The name of the new column.

ccv_cnnp_dataframe_one_squared(ccv_cnnp_dataframe_t *const dataframe, const int *const column_idxs, const int column_idx_size, const int variable_size, const int max_length, const char *name)

Generate a vector with ones up to a given length; the rest will be zeros. When applied to a batched lengths array, this will generate a matrix of these vectors, squared. The derived column will be a tuple of vectors for the given number of columns.

Return

The index of the newly derived column.

Parameters
  • dataframe: The dataframe object that will contain the matrix.

  • column_idxs: The columns which contain the sequence lengths (a 1d tensor).

  • column_idx_size: The number of columns. The derived column will be a tuple of vectors.

  • variable_size: Whether the size of the final vector can vary, depending on the max length of the current batch.

  • max_length: The absolute max length for inputs.

  • name: The name of the new column.

ccv_cnnp_dataframe_truncate(ccv_cnnp_dataframe_t *const dataframe, const int *const vec_idxs, const int vec_idx_size, const int *len_idxs, const int len_idx_size, const char *name)

Truncate a given matrix (as a list of vectors) to the given size provided by another vector. The truncated column will be a tuple of vectors for the given columns.

Return

The index of the newly derived column.

Parameters
  • dataframe: The dataframe object that will contain the matrix.

  • vec_idxs: The columns of the given matrix to be truncated.

  • vec_idx_size: The number of columns for vec_idxs.

  • len_idxs: The columns of the given sizes as a vector.

  • len_idx_size: The number of columns for len_idxs.

  • name: The name of the new column.

ccv_cnnp_dataframe_batching_new(ccv_cnnp_dataframe_t *const dataframe, const int *const column_idxs, const int column_idx_size, const int batch_count, const int group_count, const int format)

Batch multiple tensors in a column into one tensor. This method can take multiple columns, which will result in a tuple of tensors. Each tensor in the tuple is a batched one from a given column.

Return

The newly created dataframe whose 0-th column is the tuple of batched tensors.

Parameters
  • dataframe: The dataframe contains the columns of tensors to be batched.

  • column_idxs: The columns that contain the tensors.

  • column_idx_size: The number of columns that contain the tensors.

  • batch_count: How many tensors in one column to be batched together.

  • group_count: We can generate many groups of batched tensors. For example, suppose you have columns A, B, C, each with different tensors. If group_count is 1, the result tuple will be (A_b, B_b, C_b). If group_count is 2, the result tuple will be (A_b1, B_b1, C_b1, A_b2, B_b2, C_b2). A_b1 etc. will still contain batch_count tensors each.

  • format: The result format of the tensor. We support simple transformations between NCHW and NHWC from the source tensor.
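
A sketch of batching one tensor column into batches of 64. tensor_column is an assumed column index whose rows are ccv_nnc_tensor_t pointers, and CCV_TENSOR_FORMAT_NCHW is the format constant from the wider NNC API:

    ccv_cnnp_dataframe_t* batch_by_64(ccv_cnnp_dataframe_t* const dataframe, const int tensor_column)
    {
        // The returned dataframe's 0-th column is a tuple holding one batched tensor (group_count == 1).
        return ccv_cnnp_dataframe_batching_new(dataframe, &tensor_column, 1, 64, 1, CCV_TENSOR_FORMAT_NCHW);
    }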

struct ccv_cnnp_random_jitter_t
#include <ccv_nnc.h>

The structure to describe how to apply random jitter to the image.

Public Members

float contrast

The random contrast, the final contrast will be [1 / (1 + contrast), 1 + contrast]

float saturation

The saturation, the final saturation will be [1 / (1 + saturation), 1 + saturation]

float brightness

The brightness, the final brightness will be between [1 / (1 + brightness), 1 + brightness]

float lighting

AlexNet style PCA based image jitter

float aspect_ratio

Stretch aspect ratio between [1 / (1 + aspect_ratio), 1 + aspect_ratio]

int symmetric

Apply random flip on the x-axis (around the y-axis).

int seed

The seed for random generator.

int center_crop

Enable crop to the center (otherwise do random crop).

int min

The minimal dimension of resize

int max

The maximal dimension of resize. The final resize can be computed from min + (max - min) * random_unit

int roundup

The height and width dimensions are each rounded up to a multiple of this value.

int rows

The height of the final image.

int cols

The width of the final image.

int x

The extra random offset on x-axis.

int y

The extra random offset on y-axis.

float mean[3]

Normalize the image with mean.

float std[3]

Normalize the image with std. pixel = (pixel - mean) / std

Dataframe CSV Support

enum [anonymous]

Values:

enumerator CCV_CNNP_DATAFRAME_CSV_FILE = 0
enumerator CCV_CNNP_DATAFRAME_CSV_MEMORY = 1
ccv_cnnp_dataframe_from_csv_new(void *const input, const int type, const size_t len, const char delim, const char quote, const int include_header, int *const column_size)

Create a dataframe object that reads a CSV file. This will eagerly load the file into memory and parse each row / column into null-terminated strings; you can later convert these into numerics if needed. Each column will be indexed from 0 to column_size - 1. If there are syntax errors, the parser will make guesses and continue to parse to the best of its knowledge. If it cannot, we will return null for the object. We support CRLF, LF, and LFCR line termination.

Return

A dataframe that can represent the csv file. nullptr if failed.

Parameters
  • input: The FILE handle for an on-disk file, or the pointer to the memory region we are going to use.

  • type: The type of input, either `CCV_CNNP_DATAFRAME_CSV_FILE` or `CCV_CNNP_DATAFRAME_CSV_MEMORY`.

  • len: The length of the memory region, if it is `CCV_CNNP_DATAFRAME_CSV_MEMORY`.

  • delim: The delimiter; it is ',' by default (if you provide '\0').

  • quote: The quote character for escaping strings; it is '"' by default (if you provide '\0').

  • include_header: Whether to parse the header separately. 1 means we treat the first line as the header.

  • column_size: The number of columns in the resulting dataframe.
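
A sketch of parsing an on-disk CSV with the default delimiter and quote character; the file path is hypothetical:

    #include <stdio.h>

    void read_csv(void)
    {
        FILE* const file = fopen("data.csv", "r"); // hypothetical path
        int column_size = 0;
        // '\0' for delim / quote picks the ',' and '"' defaults; 1 treats the first line as the header.
        ccv_cnnp_dataframe_t* const csv = ccv_cnnp_dataframe_from_csv_new(
            file, CCV_CNNP_DATAFRAME_CSV_FILE, 0, '\0', '\0', 1, &column_size);
        fclose(file);
        if (csv) {
            printf("%d columns, %d rows\n", column_size, ccv_cnnp_dataframe_row_count(csv));
            ccv_cnnp_dataframe_free(csv);
        }
    }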

Model API

enum [anonymous]

Values:

enumerator CCV_CNNP_PARAMETER_SELECT_WEIGHT = 0
enumerator CCV_CNNP_PARAMETER_SELECT_BIAS = 1
enum [anonymous]

Values:

enumerator CCV_CNNP_DISABLE_OUTGRAD_NONE = (uint64_t)0

Don’t disable any outgrad.

enumerator CCV_CNNP_DISABLE_OUTGRAD_ALL = (uint64_t)(int64_t)-1

Disable all inputs’ outgrads.

enum [anonymous]

Values:

enumerator CCV_CNNP_MODEL_CHECKPOINT_READ_WRITE

This is the default flag: if the model is not initialized, it will attempt to read from disk; otherwise, it will persist the existing parameters to disk.

enumerator CCV_CNNP_MODEL_CHECKPOINT_READ_ONLY

Only read parameters from disk, even if the model is already initialized.

enumerator CCV_CNNP_MODEL_CHECKPOINT_WRITE_ONLY

Only write parameters to disk.

typedef struct ccv_cnnp_model_io_s *ccv_cnnp_model_io_t

With this type, NNC now has 4 types that represent a “tensor”:

1. ccv_nnc_tensor_t / ccv_nnc_tensor_view_t / ccv_nnc_tensor_multiview_t: a concrete tensor with memory allocated.

2. ccv_nnc_tensor_symbol_t: a symbol representation of a tensor, with its data layout, device affinity, and type specified.

3. ccv_nnc_tensor_variable_t: in dynamic graph, this represents a concrete tensor with memory allocated, but also associated with a recorded execution.

4. ccv_cnnp_model_io_t: this is the most flexible one. No data layout, device affinity or type specified. It can even represent a list of tensors rather than just one. This is a handle used by the model API to associate model inputs / outputs.

typedef void (*ccv_cnnp_model_notify_f)(const ccv_cnnp_model_t *const model, const int tag, void *const payload, void *const context)

A notification function such that a model can be notified. This is useful to broadcast a message to all models that are sub-models of another model.

typedef ccv_cnnp_model_t *(*ccv_cnnp_model_dynamic_f)(const ccv_nnc_tensor_param_t *const inputs, const int input_size, void *const context)

A model generation function to be called for dynamic models.

ccv_cnnp_input(void)

Create a naked input.

Return

A ccv_cnnp_model_io_t represents an input.

ccv_cnnp_model_apply(ccv_cnnp_model_t *const model, const ccv_cnnp_model_io_t *const inputs, const int input_size)

This method mimics the Keras callable for a model (thus, it overrides the __call__ method in the Python class).

Return

A ccv_cnnp_model_io_t that represents the output of the given model.

Parameters
  • model: A model that we can apply a set of inputs to get one output.

  • inputs: The set of inputs.

  • input_size: The size of inputs array.

ccv_cnnp_model_parameters(ccv_cnnp_model_t *const model, const int selector, const int index)

This method exposes a parameter of a model as a potential input for another model. Since it is a ccv_cnnp_model_io_t, it can also be used by other methods.

Parameters
  • model: A model that we can extract parameters from.

  • selector: The selector for a parameter. ALL_PARAMETERS means all parameters, or you can select CCV_CNNP_PARAMETER_SELECT_WEIGHT or CCV_CNNP_PARAMETER_SELECT_BIAS.

  • index: The index into a parameter. ALL_PARAMETERS means all parameters.

void ccv_cnnp_model_notify_hook(ccv_cnnp_model_t *const model, ccv_cnnp_model_notify_f func, void *const context)

Hook into a model such that when there is a notification, the callback will receive it.

Parameters
  • model: A model that can be notified.

  • func: The callback function.

  • context: The context to be passed along to the callback function.

void ccv_cnnp_model_notify(const ccv_cnnp_model_t *const model, const int tag, void *const payload)

Notify a model and its sub-models with a tag and a payload. This will be triggered synchronously.

Parameters
  • model: A model that will be notified.

  • tag: An integer to help identify what kind of notification.

  • payload: A payload pointer that you can carry arbitrary information.

ccv_cnnp_model_new(const ccv_cnnp_model_io_t *const inputs, const int input_size, const ccv_cnnp_model_io_t *const outputs, const int output_size, const char *const name)

This method name is deceiving. It returns a composed model, not a naked model. This composed model takes a set of inputs, and runs through various other models to arrive at the set of outputs.

Return

A composed model that takes the inputs and generates the outputs.

Parameters
  • inputs: The set of inputs.

  • input_size: The size of inputs array.

  • outputs: The set of outputs.

  • output_size: The size of outputs array.

  • name: The unique name of the model.
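
A small functional-style composition (input -> dense -> relu -> dense), for illustration; the layer sizes and names are arbitrary:

    ccv_cnnp_model_t* build_mlp(void)
    {
        const ccv_cnnp_model_io_t input = ccv_cnnp_input();
        // Apply each sub-model to the previous output, Keras-style.
        ccv_cnnp_model_io_t x = ccv_cnnp_model_apply(ccv_cnnp_dense(128, 0, "fc1"), &input, 1);
        x = ccv_cnnp_model_apply(ccv_cnnp_relu("relu1"), &x, 1);
        x = ccv_cnnp_model_apply(ccv_cnnp_dense(10, 0, "fc2"), &x, 1);
        // Compose the whole thing into one model with a single input and a single output.
        return ccv_cnnp_model_new(&input, 1, &x, 1, "mlp");
    }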

ccv_cnnp_sequential_new(ccv_cnnp_model_t *const *const models, const int model_size, const char *const name)

This method returns a sequential model, which is composed from a sequence of models.

Return

A composed model that applies these models one by one in sequence.

Parameters
  • models: The list of models; each takes one input and emits one output, feeding into the subsequent one.

  • model_size: The size of the list.

  • name: The unique name of the model.
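
The same composition expressed as a sequential model, as a sketch:

    ccv_cnnp_model_t* build_mlp_sequential(void)
    {
        ccv_cnnp_model_t* const layers[] = {
            ccv_cnnp_dense(128, 0, "fc1"),
            ccv_cnnp_relu("relu1"),
            ccv_cnnp_dense(10, 0, "fc2"),
        };
        return ccv_cnnp_sequential_new(layers, 3, "mlp");
    }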

ccv_cnnp_dynamic_new(ccv_cnnp_model_dynamic_f func, void *const context, const char *const name)

This method returns a model that will be recreated if it is recompiled. Put it this way: you can call ccv_cnnp_model_compile multiple times with different inputs and input sizes; however, the model will only be recompiled to some extent. For example, if you called ccv_cnnp_reshape, the shape is determined at the moment you create that model, and recompilation won’t change it. There are two ways to work around this: 1. Use models that don’t have an explicit shape specified, for example ccv_cnnp_dense, and avoid models that are not as flexible, such as ccv_cnnp_reshape or ccv_cnnp_cmd_exec. 2. Create the model with ccv_cnnp_dynamic_new, so that it will be recreated whenever it is recompiled.

Return

A model object that is not created until it is built.

Parameters
  • func: The function to be called to create the model.

  • context: The context used along to create the model.

  • name: The unique name of the model.

void ccv_cnnp_model_compile(ccv_cnnp_model_t *const model, const ccv_nnc_tensor_param_t *const inputs, const int input_size, const ccv_nnc_cmd_t minimizer, const ccv_nnc_cmd_t loss)

Prepare the model to be trained; the inputs specify the batch size etc. The input size is technically not needed; it is here as a safety check.

Parameters
  • model: The model to be compiled.

  • inputs: The tensor parameters for the model’s inputs, that can be used to derive all tensor shapes.

  • input_size: The size of the inputs array.

  • minimizer: The wrapped command that represents a particular optimization strategy.

  • loss: The wrapped command that computes the loss function.
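
A compile sketch assuming a flattened 28x28 input with a batch size of 64. The ccv_nnc_tensor_param_t field names, the CCV_* constants, and the CMD_SGD_FORWARD / CMD_SOFTMAX_CROSSENTROPY_FORWARD macros (including their argument lists) come from the wider NNC API and are assumptions here, so check them against your ccv_nnc.h:

    void compile_mlp(ccv_cnnp_model_t* const model)
    {
        // One input: a 64 x 784 float tensor on the CPU.
        const ccv_nnc_tensor_param_t input = {
            .type = CCV_TENSOR_CPU_MEMORY,
            .format = CCV_TENSOR_FORMAT_NHWC,
            .datatype = CCV_32F,
            .dim = { 64, 28 * 28 },
        };
        // Assumed SGD arguments: (nesterov, rate, scale, decay, momentum, dampening).
        ccv_cnnp_model_compile(model, &input, 1,
            CMD_SGD_FORWARD(0, 0.001, 1, 0.99, 0.9, 0.9),
            CMD_SOFTMAX_CROSSENTROPY_FORWARD());
    }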

void ccv_cnnp_model_absorb(ccv_cnnp_model_t *const model, ccv_cnnp_model_t *const init, const ccv_nnc_tensor_param_t *const inputs, const int input_size)

Absorb a new model into the existing model. This requires the new model to have exactly the same parameters, but other dimensionalities can change. The new model must not be compiled yet; its life-cycle management will be taken over by the existing model. You don’t need to free it separately.

Parameters
  • model: The existing model.

  • init: The new model.

  • inputs: The tensor parameters for the model’s inputs, that can be used to derive all tensor shapes.

  • input_size: The size of the inputs array.

ccv_cnnp_model_copy(const ccv_cnnp_model_t *const model)

Create a copy of an existing model.

Return

The new model that is exactly the same copy of the old one.

Parameters
  • model: The existing model.

ccv_cnnp_model_output_size(const ccv_cnnp_model_t *const model)

Get the output size of the model.

Return

The output size of the model.

Parameters
  • model: The existing model.

void ccv_cnnp_model_tensor_auto(ccv_cnnp_model_t *const model, ccv_nnc_tensor_param_t *const outputs, const int output_size)

Compute the shape of the output tensor after the model is applied to the input. This can only be called after the model is compiled with proper input parameters.

Parameters
  • model: The model to compute the output shapes.

  • outputs: The computed tensor parameters in the output.

  • output_size: The size of the output array; it has to match the model’s output size.

void ccv_cnnp_model_dot(const ccv_cnnp_model_t *const model, const int flags, FILE **const outs, const int out_size)

Generate output that can be parsed by GraphViz (DOT language).

Parameters
  • model: The composed model.

  • flags: Either CCV_NNC_SHORT_DOT_GRAPH or CCV_NNC_LONG_DOT_GRAPH

  • outs: The output file streams.

  • out_size: The size of output file stream array.

void ccv_cnnp_model_fit(ccv_cnnp_model_t *const model, ccv_nnc_tensor_t *const *const inputs, const int input_size, ccv_nnc_tensor_t *const *const fits, const int fit_size, ccv_nnc_tensor_t *const *const outputs, const int output_size, ccv_nnc_tensor_tape_t *const tensor_tape, ccv_nnc_stream_context_t *const stream_context)

Fit a model to given inputs / outputs. This is a combination of running ccv_cnnp_model_evaluate / ccv_cnnp_model_backward / ccv_cnnp_model_apply_gradients. The difference is that when calling the individual functions, the graph is compiled piece by piece, which is less efficient than calling ccv_cnnp_model_fit directly. However, having the separate functions makes the implementation much more versatile; for example, you can accumulate gradients over multiple batches, or use custom gradients, etc.

Parameters
  • model: The composed model.

  • inputs: The input tensors.

  • input_size: The size of the input tensors array.

  • fits: The target tensors.

  • fit_size: The size of the target tensors array.

  • outputs: The actual outputs from the model.

  • output_size: The size of the outputs array.

  • tensor_tape: An opaque tensor tape object to “backpropagate through time”.

  • stream_context: The stream where the fit can be executed upon.
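
A single training step with fit, run synchronously. input_tensor, label_tensor and output_tensor are hypothetical pre-allocated ccv_nnc_tensor_t pointers whose shapes match the compiled model:

    void train_step(ccv_cnnp_model_t* const model,
        ccv_nnc_tensor_t* const input_tensor,
        ccv_nnc_tensor_t* const label_tensor,
        ccv_nnc_tensor_t* const output_tensor)
    {
        ccv_nnc_tensor_t* inputs[] = { input_tensor };
        ccv_nnc_tensor_t* fits[] = { label_tensor };
        ccv_nnc_tensor_t* outputs[] = { output_tensor };
        // No tensor tape, no stream context: run the fit synchronously on the default stream.
        ccv_cnnp_model_fit(model, inputs, 1, fits, 1, outputs, 1, 0, 0);
    }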

void ccv_cnnp_model_evaluate(ccv_cnnp_model_t *const model, const ccv_cnnp_evaluate_param_t params, ccv_nnc_tensor_t *const *const inputs, const int input_size, ccv_nnc_tensor_t *const *const outputs, const int output_size, ccv_nnc_tensor_tape_t *const tensor_tape, ccv_nnc_stream_context_t *const stream_context)

Evaluate model with output.

Parameters
  • model: The composed model.

  • params: The parameters for how evaluation should behave.

  • inputs: The input tensors.

  • input_size: The size of the input tensors array.

  • outputs: The actual outputs from the model.

  • output_size: The size of the outputs array.

  • tensor_tape: An opaque tensor tape object to “backpropagate through time”.

  • stream_context: The stream where the evaluation can be executed upon.

void ccv_cnnp_model_backward(ccv_cnnp_model_t *const model, ccv_nnc_tensor_t *const *const ingrads, const int ingrad_size, ccv_nnc_tensor_t *const *const outgrads, const int outgrad_size, ccv_nnc_tensor_tape_t *const tensor_tape, ccv_nnc_stream_context_t *const stream_context)

Based on the input gradients, compute the output gradients (w.r.t. the inputs). This also adds parameter gradients.

Parameters
  • model: The composed model.

  • ingrads: The input gradients.

  • ingrad_size: The size of the input gradients array.

  • outgrads: The output gradients (w.r.t. the inputs).

  • outgrad_size: The size of the output gradients array.

  • tensor_tape: An opaque tensor tape object to “backpropagate through time”.

  • stream_context: The stream where the gradient computation can be executed upon.

void ccv_cnnp_model_apply_gradients(ccv_cnnp_model_t *const model, ccv_nnc_stream_context_t *const stream_context)

Apply the computed gradients to the parameter tensors.

Parameters
  • model: The composed model.

  • stream_context: The stream where the gradient computation can be executed upon.
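
The same step decomposed into evaluate / backward / apply_gradients, accumulating gradients over two micro-batches before one optimizer update. The inputs and outputs arrays are hypothetical, and passing 0 input gradients to ccv_cnnp_model_backward is assumed to mean a default gradient of 1 with respect to the loss:

    void accumulate_and_step(ccv_cnnp_model_t* const model,
        ccv_nnc_tensor_t* const* const inputs,   // hypothetical, one tensor per micro-batch
        ccv_nnc_tensor_t* const* const outputs)  // hypothetical, one tensor per micro-batch
    {
        const ccv_cnnp_evaluate_param_t params = {
            .requires_grad = 1, // keep intermediates for the backward pass
            .is_test = 0,
            .disable_outgrad = CCV_CNNP_DISABLE_OUTGRAD_ALL, // no gradients w.r.t. the inputs needed
        };
        int i;
        for (i = 0; i < 2; i++) {
            ccv_cnnp_model_evaluate(model, params, &inputs[i], 1, &outputs[i], 1, 0, 0);
            ccv_cnnp_model_backward(model, 0, 0, 0, 0, 0, 0); // accumulates parameter gradients
        }
        ccv_cnnp_model_apply_gradients(model, 0); // one optimizer step over the accumulated gradients
    }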

void ccv_cnnp_model_checkpoint(ccv_cnnp_model_t *const model, const char *const fn, const int flags)

This method checkpoints the given model. If the model is initialized, it will persist all parameters to the given file path. If it is not initialized, this method will try to load tensors off disk. Under the hood, it calls ccv_cnnp_model_write / ccv_cnnp_model_read when appropriate.

Parameters
  • model: The composed model.

  • fn: The file name.

  • flags: Whether we perform read / write on this checkpoint, or read only / write only.
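
A checkpoint sketch; the file path is hypothetical, and the default read / write flag is used so the same call restores on the first run and persists afterwards:

    void checkpoint(ccv_cnnp_model_t* const model)
    {
        // Reads parameters from disk if the model is uninitialized, otherwise writes them out.
        ccv_cnnp_model_checkpoint(model, "mlp.checkpoint", CCV_CNNP_MODEL_CHECKPOINT_READ_WRITE);
    }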

int ccv_cnnp_model_write(const ccv_cnnp_model_t *const model, void *const handle, const char *const name)

Write the model’s tensors to a SQLite database with a given name. Note that we specifically say “model’s tensors” because this doesn’t persist the model’s structure. Hence, you shouldn’t expect to pass a name and have a fully functional model restored from it. You still need to construct the model. This method only writes the tensors (weights and other internal ones) to disk.

Return

CCV_IO_FINAL for success, otherwise error.

Parameters
  • model: The model.

  • handle: The SQLite handle.

  • name: The name to find the tensors related to the model in the database.

int ccv_cnnp_model_read(void *const handle, const char *const name, const ccv_cnnp_model_t *const model_out)

Read model’s tensors from a SQLite database with a given name.

Return

CCV_IO_FINAL for success, otherwise error.

Parameters
  • handle: The SQLite handle.

  • name: The name to find the tensors related to the model in the database.

  • model_out: The model into which you want to restore the tensors. It should have the same structure as the one written to the database.

void ccv_cnnp_model_set_data_parallel(ccv_cnnp_model_t *const model, const int parallel)

Apply data parallel to the composed model. This method has to be called before we call either evaluate or fit and after the model is compiled.

Parameters
  • model: The composed model.

  • parallel: The number of devices we want to run on. 0 will use all available devices. 1 will skip data parallelism.

void ccv_cnnp_model_set_memory_compression(ccv_cnnp_model_t *const model, const int memory_compression)

Apply memory compression to the composed model. The memory compression technique can reduce memory usage by up to 75% compared with the raw mixed-precision model during training.

Parameters
  • model: The composed model.

  • memory_compression: Whether to enable the memory compression (1 - enable, 0 - disable (default))

void ccv_cnnp_model_set_compile_params(ccv_cnnp_model_t *const model, const ccv_nnc_symbolic_graph_compile_param_t compile_params)

Set compile parameters on the model so it compiles the graph with the said parameters.

Parameters
  • model: The composed model.

  • compile_params: A ccv_nnc_symbolic_graph_compile_param_t struct defines compilation parameters.

void ccv_cnnp_model_set_workspace_size(ccv_cnnp_model_t *const model, size_t workspace_size)

This method sets the max workspace size. If the graph is already compiled, it will re-run autotune with the new workspace size to find the best algorithm.

Parameters
  • model: The composed model.

  • workspace_size: The size in bytes that we can use as workspace (scratch memory).

void ccv_cnnp_model_set_parameter(ccv_cnnp_model_t *const model, const ccv_cnnp_model_io_t parameter, const ccv_nnc_tensor_t *const tensor)

Set a parameter that is specified by the parameter span. This will override whatever value is in that parameter. The given tensor should match the dimensions of the parameter. It doesn’t matter whether the given tensor is on CPU or GPU; it will be copied over. This method is limited: it can only set the tensor once the model is compiled.

Parameters
  • model: The composed model.

  • parameter: The parameter that is used to specify which parameter to override.

  • tensor: The tensor contains the value we want to copy over.

void ccv_cnnp_model_parameter_copy(ccv_cnnp_model_t *const model, const ccv_cnnp_model_io_t parameter, ccv_nnc_tensor_t *const tensor)

Copy a parameter that is specified by the parameter span out of a model. This will override the value in the tensor you provided. The given tensor should match the dimension of the parameter and should already be allocated. It doesn’t matter whether the given tensor is on CPU or GPU.

Parameters
  • model: The composed model.

  • parameter: The parameter that is used to specify which parameter to copy out.

  • tensor: The tensor that receives value.

void ccv_cnnp_model_set_parameters(ccv_cnnp_model_t *const model, const ccv_cnnp_model_io_t parameters, const ccv_cnnp_model_t *const from_model, const ccv_cnnp_model_io_t from_parameters)

Set parameters from another model. This will override whatever values are in these parameters. The given parameters from the other model should match the dimensions of the parameters. It doesn’t matter whether the given tensors are on CPU or GPU. This method can only be used when both models are compiled.

Parameters
  • model: The composed model to be set on parameters.

  • parameters: The parameters to be overridden.

  • from_model: The model to copy parameters from.

  • from_parameters: The parameters to be copied from.

void ccv_cnnp_model_set_minimizer(ccv_cnnp_model_t *const model, const ccv_nnc_cmd_t minimizer, const int reset, const ccv_cnnp_model_io_t *const parameters, const int parameter_size)

Set a new minimizer for the model. This is useful when you need to update the learning rate for stochastic gradient descent, for example. This method can be called any time during the training process (after compilation).

Parameters
  • model: The composed model.

  • minimizer: The wrapped command that represents a new optimization strategy.

  • reset: Reset all previous states of the minimizers. This only makes sense if both parameters and parameter_size are 0.

  • parameters: The parameters the minimizer applies to. 0 means all.

  • parameter_size: The number of parameter spans.
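
A sketch of dropping the learning rate mid-training for all parameters. As above, the CMD_SGD_FORWARD argument list is an assumption about the NNC command macro, with the second argument as the new rate:

    void lower_learning_rate(ccv_cnnp_model_t* const model)
    {
        // Apply a smaller learning rate to every parameter; 0 / 0 parameter spans means all parameters.
        ccv_cnnp_model_set_minimizer(model, CMD_SGD_FORWARD(0, 0.0001, 1, 0.99, 0.9, 0.9), 0, 0, 0);
    }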

ccv_cnnp_model_minimizer(ccv_cnnp_model_t *const model)

Retrieve the default minimizer for the model. This is set either when you call ccv_cnnp_model_compile or when you call ccv_cnnp_model_set_minimizer with no parameter spans.

Return

The minimizer command.

Parameters
  • model: The composed model.

ccv_cnnp_model_default_stream(const ccv_cnnp_model_t *const model)

Get the default stream from a compiled model. If the model is not compiled, the default stream is 0.

Return

The default stream for this model.

Parameters
  • model: The composed model.

ccv_cnnp_model_memory_size(const ccv_cnnp_model_t *const model)

Get the allocated memory size (excluding workspace) from a compiled model. If the model is not compiled, the size is 0.

Return

The number of bytes for memory allocated.

Parameters
  • model: The composed model.

void ccv_cnnp_model_free(ccv_cnnp_model_t *const model)

Free a given model.

Parameters
  • model: The composed model.

struct ccv_cnnp_evaluate_param_t
#include <ccv_nnc.h>

The parameters for how evaluation should behave.

Public Members

int requires_grad

Whether we need to keep intermediate results for gradient computations.

int is_test

Whether we evaluate it as a test, or just as the forward pass of the training process.

uint64_t disable_outgrad

Whether we can compute outflow gradients when calling ccv_cnnp_model_backward later. This is a bitmask; you can mark for which inputs the outgrad is disabled.

Model Add-ons

enum [anonymous]

Values:

enumerator CCV_CNNP_IO

The parameter is a ccv_cnnp_io_t.

enumerator CCV_CNNP_NO_TENSOR

The parameter is not used.

enumerator CCV_CNNP_TENSOR_NOT_OUTPUT

This parameter indicates this is a tensor parameter, but it is not an output reflected as ccv_cnnp_io_t.

enumerator CCV_CNNP_INIT_SHARED_TENSOR

The parameter is a provided tensor for initialization.

enumerator CCV_CNNP_INIT_SHARED_TENSOR_AS_TRAINABLE

The parameter is a provided tensor that can be updated.

typedef void (*ccv_cnnp_state_initializer_f)(void *const context, const ccv_nnc_cmd_t cmd, const ccv_nnc_hint_t hint, const int flags, ccv_nnc_tensor_t *const input, const ccv_nnc_tensor_symbol_t output_symbol)
typedef void (*ccv_cnnp_cmd_exec_init_state_f)(const ccv_nnc_tensor_symbol_t tensor_symbol, const ccv_cnnp_state_initializer_f initializer, void *const initializer_context, void *const context)
typedef void (*ccv_cnnp_cmd_exec_init_state_deinit_f)(void *const context)
ccv_cnnp_cmd_exec(const ccv_nnc_cmd_t cmd, const ccv_nnc_hint_t hint, const int flags, const ccv_cnnp_cmd_exec_io_t *const inputs, const int input_size, const int *const outputs, const int output_size, const char *const name)

A generic model based on the command. If the tensors are labeled as ccv_cnnp_io_t, they will participate as the inputs / outputs of the model. If a tensor is an init tensor, the model will use that tensor for the corresponding parameter. Moreover, if it is marked as a parameter, that tensor will be differentiated against when you call ccv_cnnp_model_fit. This model, however, doesn’t take over ownership of the tensors. You should manage the life cycle of the given tensors and it is your responsibility to make sure they outlive the model. Also, all inputs and outputs marked as init tensors will be shared if you reuse this model in other places.

Return

A model based on the given command.

Parameters
  • cmd: The command to generate this model.

  • hint: The hint to run the command.

  • flags: The flags with the command.

  • inputs: A list of ccv_cnnp_cmd_exec_io_t that identifies each input as either an init tensor or a ccv_cnnp_io_t.

  • input_size: The size of input list.

  • outputs: A list of types that identifies each output as a ccv_cnnp_io_t or a no-tensor.

  • output_size: The size of the outputs. There is no need to give ccv_cnnp_tensor_param_t for outputs because all of them are CCV_CNNP_IO type.

  • name: The unique name of the model.

ccv_cnnp_cmd_exec_io_copy(const ccv_nnc_tensor_t *const tensor)

Copy a tensor as initialization for the given parameter.

Return

An init_state that can be passed to ccv_cnnp_cmd_exec_io_t

Parameters
  • tensor: The tensor to copy from.

ccv_cnnp_cmd_exec_io_set_by(const ccv_nnc_cmd_t cmd, const ccv_nnc_hint_t hint, const int flags, const ccv_nnc_tensor_param_t params)

Initialize a given parameter with the command.

Return

An init_state that can be passed to ccv_cnnp_cmd_exec_io_t

Parameters
  • cmd: The command to call when need to initialize.

  • hint: The hint to accompany the command.

  • flags: The flags to accompany the command.

  • params: The tensor configuration.

ccv_cnnp_graph(const ccv_nnc_symbolic_graph_t *const graph, const ccv_cnnp_tensor_symbol_param_t *const tensor_symbol_params, const int tensor_symbol_param_size, ccv_nnc_tensor_symbol_t *const inputs, const int input_size, ccv_nnc_tensor_symbol_t *const outputs, const int output_size, const char *const name)

A generic model based on the symbolic graph we provided. A list of tensor symbols is labeled as ccv_cnnp_io_t or not (we identify whether each is an input or output based on whether it is in the graph). If it is not, we init it with a given tensor. If it is marked as a parameter, that tensor will be differentiated against when you call ccv_cnnp_model_fit. The model doesn’t take ownership of the init tensors. You are responsible for making sure the init tensors outlive the model until the initialization has occurred. Also, these tensors will be shared if the model is reused.

Return

A model based on the given symbolic graph.

Parameters
  • graph: The symbolic graph that is our blue print for this model.

  • tensor_symbol_params: The list of tensor symbol parameters that labels a given symbol.

  • tensor_symbol_param_size: The size of the list.

  • inputs: The inputs to this graph. We can figure out which ones are inputs, but this gives us the order.

  • input_size: The size of the input list.

  • outputs: The outputs from this graph. We can figure out which ones are outputs, but this gives us the order.

  • output_size: The size of the output list.

  • name: The unique name of the model.

ccv_cnnp_sum(const char *const name)

Sum multiple input tensors together.

Return

A model that can be applied with multiple inputs, and generates an output that is the sum of the inputs.

Parameters
  • name: The unique name of the model.

ccv_cnnp_concat(const char *const name)

Concatenate input tensors together.

Return

A model that can be applied with multiple inputs, and generates an output that is a concatenation of the inputs.

Parameters
  • name: The unique name of the model.

ccv_cnnp_convolution(const int groups, const int filters, const int kdim[CCV_NNC_MAX_DIM_ALLOC], const int no_bias, ccv_nnc_hint_t hint, const char *const name)

A convolution model.

Return

A convolution model.

Parameters
  • groups: The number of kernel groups in the model.

  • filters: The total number of filters in the model (filters = groups * per group filters).

  • kdim: The dimensions of the kernel.

  • no_bias: Whether it has a bias term or not.

  • hint: The hint for alignment.

  • name: The unique name of the model.

ccv_cnnp_dense(const int count, const int no_bias, const char *const name)

A dense layer model.

Return

A dense layer model.

Parameters
  • count: The output dimension.

  • no_bias: Whether it has a bias term or not.

  • name: The unique name of the model.

ccv_cnnp_batch_norm(const float momentum, const float epsilon, const char *const name)

A batch norm layer model.

Return

A batch norm layer model.

Parameters
  • momentum: The momentum in batch norm parameter.

  • epsilon: The epsilon in batch norm parameter.

  • name: The unique name of the model.

ccv_cnnp_relu(const char *const name)

A RELU activation layer model.

Return

A RELU activation layer model.

ccv_cnnp_sigmoid(const char *const name)

A sigmoid activation layer model.

Return

A sigmoid activation layer model.

ccv_cnnp_swish(const char *const name)

A swish activation layer model.

Return

A swish activation layer model.

ccv_cnnp_softmax(const char *const name)

A softmax activation layer model.

Return

A softmax activation layer model.

ccv_cnnp_max_pool(const int kdim[CCV_NNC_MAX_DIM_ALLOC], const ccv_nnc_hint_t hint, const char *const name)

A max pool model.

Return

A max pool model.

Parameters
  • kdim: The pooling window dimension.

  • hint: The hint for alignment.

  • name: The unique name of the model.

ccv_cnnp_average_pool(const int kdim[CCV_NNC_MAX_DIM_ALLOC], const ccv_nnc_hint_t hint, const char *const name)

An average pool model.

Return

An average pool model.

Parameters
  • kdim: The pooling window dimension.

  • hint: The hint for alignment.

  • name: The unique name of the model.

ccv_cnnp_reshape(const int dim[CCV_NNC_MAX_DIM_ALLOC], const int ofs[CCV_NNC_MAX_DIM_ALLOC], const int inc[CCV_NNC_MAX_DIM_ALLOC], const char *const name)

Reshape an input into a different dimension.

Return

A reshape layer model.

Parameters
  • dim: The new dimension for the input.

  • ofs: The offset on each of the dimension.

  • inc: The line size of each dimension.

  • name: The unique name of the model.

ccv_cnnp_flatten(const char *const name)

Flatten an input tensor into a one dimensional array.

Return

A flatten layer model.

Parameters
  • name: The unique name of the model.

ccv_cnnp_layer_norm(const float epsilon, const int axis[CCV_NNC_MAX_DIM_ALLOC], const int axis_count, const char *const name)

A layer norm model.

Return

A layer norm model.

Parameters
  • epsilon: The epsilon in layer norm parameter.

  • axis: The axis are the feature axis to compute norm.

  • axis_count: How many axis we count as feature.

  • name: The unique name of the model.

ccv_cnnp_add(const float p, const float q, const char *const name)

Add two input tensors together. Different from sum because this supports broadcasting.

Return

A model that can be applied with two inputs, and generates an output that is a weighted sum of the two inputs.

Parameters
  • p: The weight for the first input.

  • q: The weight for the second input.

  • name: The unique name of the model.

ccv_cnnp_mul(const float p, const char *const name)

Multiply two input tensors together.

Return

A model that can be applied with two inputs, and generates an output that is a product of the inputs.

Parameters
  • p: The weight for the output.

  • name: The unique name of the model.

ccv_cnnp_scalar_mul(const float a, const char *const name)

A scalar multiplication model. Y = aX where a is a scalar.

Return

A scalar multiplication model.

Parameters
  • a: The scalar parameter.

  • name: The unique name of the model.

ccv_cnnp_transpose(const int axis_a, const int axis_b, const char *const name)

A matrix transpose model.

Return

A matrix transpose model.

Parameters
  • axis_a: The axis to be exchanged with axis_b

  • axis_b: The axis to be exchanged with axis_a

  • name: The unique name of the model.

ccv_cnnp_matmul(const int transpose_a[2], const int transpose_b[2], const char *const name)

A batched matrix multiplication model.

Return

A batched matrix multiplication model.

Parameters
  • transpose_a: The axis to be transposed in the first matrix.

  • transpose_b: The axis to be transposed in the second matrix.

  • name: The unique name of the model.

ccv_cnnp_dropout(const float p, const int entirety, const char *const name)

A dropout model.

Return

A dropout model.

Parameters
  • p: The probability to drop the current value.

  • entirety: Drop the whole layer with the given probability.

  • name: The unique name of the model.

ccv_cnnp_masked_fill(const float eq, const float fill, const char *const name)

A masked fill model.

Return

A masked fill model.

Parameters
  • eq: If a value in the given mask tensor is equal to this.

  • fill: Fill in this value to the output tensor.

  • name: The unique name of the model.

ccv_cnnp_index_select(const int datatype, const int vocab_size, const int embed_size, const char *const name)

An index select model.

Return

An index select model.

Parameters
  • datatype: The data type of the vocabulary.

  • vocab_size: The size of the vocabulary.

  • embed_size: The size of the embedding.

  • name: The unique name of the model.

ccv_cnnp_model_t *ccv_cnnp_upsample(const float width_scale, const float height_scale, const char *const name)

An upsample model.

Return

An upsample model.

Parameters
  • width_scale: The scale of the width of the input.

  • height_scale: The scale of the height of the input.

  • name: The unique name of the model.

struct ccv_cnnp_cmd_exec_io_init_state_t

Public Members

ccv_nnc_tensor_param_t info

The tensor parameter for this one.

void *context

The context with which we initialize the tensor.

ccv_cnnp_cmd_exec_init_state_f init

The function to init state for a tensor.

ccv_cnnp_cmd_exec_init_state_deinit_f deinit

The function to release the context.

struct ccv_cnnp_cmd_exec_io_t

Public Members

int type

The type of the parameter, could be CCV_CNNP_IO, NO_TENSOR, INIT_SHARED_TENSOR, or INIT_SHARED_TENSOR_TRAINABLE

struct ccv_cnnp_tensor_symbol_param_t

Public Members

ccv_nnc_tensor_symbol_t symbol

The tensor symbol this is reference to.

int type

The type of the parameter, could be CCV_CNNP_IO, INIT_SHARED_TENSOR, or INIT_SHARED_TENSOR_TRAINABLE