Level 5 API¶
Dataframe API¶

typedef void (*
ccv_cnnp_column_data_enum_f
)(const int column_idx, const int *const row_idxs, const int row_size, void **const data, void *const context, ccv_nnc_stream_context_t *const stream_context)¶ A data enumeration function to supply data for given row indexes.

typedef void (*
ccv_cnnp_column_data_deinit_f
)(void *const data, void *const context)¶ A destructor for data.

typedef void (*
ccv_cnnp_column_data_context_deinit_f
)(void *const context)¶ A destructor for context.

typedef struct ccv_cnnp_dataframe_s
ccv_cnnp_dataframe_t
¶ An opaque structure pointing to the dataframe object.

typedef void (*
ccv_cnnp_column_data_map_f
)(void ***const column_data, const int column_size, const int batch_size, void **const data, void *const context, ccv_nnc_stream_context_t *const stream_context)¶ A map function that takes the data from multiple columns and derives new data out of it.

typedef void (*
ccv_cnnp_column_data_reduce_f
)(void **const input_data, const int batch_size, void **const output_data, void *const context, ccv_nnc_stream_context_t *const stream_context)¶ A reduce function that takes multiple rows of one column and reduces them to one row.

typedef struct ccv_cnnp_dataframe_iter_s
ccv_cnnp_dataframe_iter_t
¶ The opaque pointer to the iterator.

ccv_cnnp_dataframe_new
(const ccv_cnnp_column_data_t *const column_data, const int column_size, const int row_count)¶ Create a dataframe object with given column data.
 Parameters
column_data
: The column data that can be loaded.
column_size
: The size of the column data array.
row_count
: The number of rows in this dataframe.

ccv_cnnp_dataframe_add
(ccv_cnnp_dataframe_t *const dataframe, ccv_cnnp_column_data_enum_f data_enum, const int stream_type, ccv_cnnp_column_data_deinit_f data_deinit, void *const context, ccv_cnnp_column_data_context_deinit_f context_deinit)¶ Add a new column to the dataframe.
 Return
 The new column index.
 Parameters
dataframe
: The dataframe object to add the column to.
data_enum
: The data provider function for the new column.
stream_type
: The type of stream context for this column.
data_deinit
: The deinit function that will be used to destroy the data.
context
: The context that can be used to generate the new column.
context_deinit
: The deinit function that will be used to destroy the context.

ccv_cnnp_dataframe_map
(ccv_cnnp_dataframe_t *const dataframe, ccv_cnnp_column_data_map_f map, const int stream_type, ccv_cnnp_column_data_deinit_f data_deinit, const int *const column_idxs, const int column_idx_size, void *const context, ccv_cnnp_column_data_context_deinit_f context_deinit)¶ Derive a new column out of existing columns in the dataframe.
 Return
 The new column index.
 Parameters
dataframe
: The dataframe object that contains the existing columns.
map
: The map function used to derive the new column from existing columns.
stream_type
: The type of stream context for this derived column.
data_deinit
: The deinit function that will be used to destroy the derived data.
column_idxs
: The columns that will be used to derive the new column.
column_idx_size
: The size of the existing columns array.
context
: The context that can be used to generate the new column.
context_deinit
: The deinit function that will be used to destroy the context.

void
ccv_cnnp_dataframe_shuffle
(ccv_cnnp_dataframe_t *const dataframe)¶ Shuffle an existing dataframe.
 Parameters
dataframe
: The dataframe that is about to be shuffled.

ccv_cnnp_dataframe_row_count
(ccv_cnnp_dataframe_t *const dataframe)¶ Query row count of the dataframe.
 Return
 The row count of the dataframe.
 Parameters
dataframe
: The dataframe whose row count we want to query.

ccv_cnnp_dataframe_reduce_new
(ccv_cnnp_dataframe_t *const dataframe, ccv_cnnp_column_data_reduce_f reduce, ccv_cnnp_column_data_deinit_f data_deinit, const int column_idx, const int batch_size, void *const context, ccv_cnnp_column_data_context_deinit_f context_deinit)¶ Reduce a dataframe by batch size. Thus, n rows are reduced to 1 row per the reduce function on one specific column. This also reduces the multi-column dataframe down to 1 column, by selecting the one column to reduce.
 Return
 The reduced dataframe.
 Parameters
dataframe
: The dataframe that is about to be reduced.
reduce
: The reduce function used to reduce n rows into 1.
data_deinit
: The deinit function that will be used to destroy the derived data.
column_idx
: The column we selected to reduce.
batch_size
: How many rows will be reduced to 1 row from the original data.
context
: The context that can be used in the reduce function.
context_deinit
: The deinit function that will be used to destroy the context.

ccv_cnnp_dataframe_extract_value
(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const off_t offset)¶ Extract a value out of a struct. Assuming the data points to a struct, this method extracts the value at the given byte offset of that struct. For example, if you have struct { ccv_nnc_tensor_t* a; ccv_nnc_tensor_t* b; } S; and you want to extract the b tensor into a different column, you can call this function with offsetof(S, b).
 Return
 The new column that contains the extracted value.
 Parameters
dataframe
: The dataframe object to be extracted from.
column_idx
: The column that we want to extract the value of.
offset
: The offset. For example, offsetof(S, b).

ccv_cnnp_dataframe_make_tuple
(ccv_cnnp_dataframe_t *const dataframe, const int *const column_idxs, const int column_idx_size)¶ Make a tuple out of the columns specified. The new derived column will contain a tuple with data from all the columns specified here. A tuple here is represented as void* tuple[], an array of void* pointers.
 Return
 The derived column with the tuple.
 Parameters
dataframe
: The dataframe that will contain the new column.
column_idxs
: The columns to be tupled.
column_idx_size
: The number of columns.

ccv_cnnp_dataframe_tuple_size
(const ccv_cnnp_dataframe_t *const dataframe, const int column_idx)¶ The size of the tuple. It is equal to the number of columns we specified. The behavior of calling this method on a column that is not a tuple is undefined.
 Return
 The tuple size of the column.
 Parameters
dataframe
: The dataframe that contains the tuple column.
column_idx
: The tuple column we are going to inspect.

ccv_cnnp_dataframe_extract_tuple
(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const int index)¶ Extract one data element out of a tuple.
 Return
 The derived column with the extracted value.
 Parameters
dataframe
: The dataframe that will contain the new column.
column_idx
: The column that is a tuple.
index
: The index into the tuple.

ccv_cnnp_dataframe_iter_new
(ccv_cnnp_dataframe_t *const dataframe, const int *const column_idxs, const int column_idx_size)¶ Get a new iterator of the dataframe.
 Return
 The opaque iterator object.
 Parameters
dataframe
: The dataframe object to iterate through.
column_idxs
: The columns that will be iterated.
column_idx_size
: The size of the columns array.

int
ccv_cnnp_dataframe_iter_next
(ccv_cnnp_dataframe_iter_t *const iter, void **const data_ref, const int column_idx_size, ccv_nnc_stream_context_t *const stream_context)¶ Get the next item from the iterator.
 Return
 0 if the iteration is successful, 1 if it has ended.
 Parameters
iter
: The iterator to go through.
data_ref
: The output for the data.
column_idx_size
: The size of the data_ref array.
stream_context
: The stream context to extract data asynchronously.

int
ccv_cnnp_dataframe_iter_prefetch
(ccv_cnnp_dataframe_iter_t *const iter, const int prefetch_count, ccv_nnc_stream_context_t *const stream_context)¶ Prefetch next item on the iterator with the given stream context. You can call this method multiple times to prefetch multiple items ahead of time.
 Return
 0 if the prefetch is successful, 1 if it has ended.
 Parameters
iter
: The iterator to go through.
prefetch_count
: How many items ahead we should prefetch.
stream_context
: The stream context to extract data asynchronously.

int
ccv_cnnp_dataframe_iter_set_cursor
(ccv_cnnp_dataframe_iter_t *const iter, const int idx)¶ Set the cursor of the iterator. When set to 0, the iterator effectively restarts.
 Return
 0 if it is successful, 1 if it is not (exceeds the range).
 Parameters
iter
: The iterator to go through.
idx
: The index of the cursor.

void
ccv_cnnp_dataframe_iter_free
(ccv_cnnp_dataframe_iter_t *const iter)¶ Free the dataframe iterator object.
 Parameters
iter
: The dataframe iterator to be freed.

void
ccv_cnnp_dataframe_free
(ccv_cnnp_dataframe_t *const dataframe)¶ Free the dataframe object.
 Parameters
dataframe
: The dataframe object to be freed.

struct
ccv_cnnp_column_data_t
¶  #include <ccv_nnc.h>
Column data.
Public Members

int
stream_type
¶ The type of stream context for this column. Each column is only compatible with one stream type.

ccv_cnnp_column_data_enum_f
data_enum
¶ The data enumeration function for this column.

ccv_cnnp_column_data_deinit_f
data_deinit
¶ The deinit function that will be used to destroy the data.

void *
context
¶ The context that goes along with this column.

ccv_cnnp_column_data_context_deinit_f
context_deinit
¶ The deinit function that will be used to destroy the context.

Dataframe Addons¶

ccv_cnnp_dataframe_from_array_new
(ccv_array_t *const array)¶ Turn a ccv_array_t into a dataframe object.
 Return
 The new dataframe object.
 Parameters
array
: The array we want to turn into a dataframe object.

ccv_cnnp_dataframe_copy_to_gpu
(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const int tensor_offset, const int tensor_size, const int device_id)¶ Derive a new column that copies a tensor array from given column to the derived column on GPU.
 Return
 The index of the newly derived column.
 Parameters
dataframe
: The dataframe object that gets the derived column.
column_idx
: The original column that contains the tensor array on CPU.
tensor_offset
: Only copy as outputs[i] = inputs[i + tensor_offset].
tensor_size
: How many tensors are in the tensor array.
device_id
: The device we want to copy the tensors to.

ccv_cnnp_dataframe_cmd_exec
(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const ccv_nnc_cmd_t cmd, const ccv_nnc_hint_t hint, const int flags, const int input_offset, const int input_size, const ccv_nnc_tensor_param_t *const output_params, const int output_size, const int stream_type)¶ Derive a new column by executing a generic command.
 Parameters
dataframe
: The dataframe object that gets the derived column.
column_idx
: The original column that contains the tensor array.
cmd
: The command for this operation.
hint
: The hint to run the command.
flags
: The flags with the command.
input_offset
: Use inputs[i + input_offset] to inputs[i + input_offset + input_size - 1] as the inputs.
input_size
: How many tensors are in the input array.
output_params
: The parameters for the outputs.
output_size
: How many tensors are in the output array.
stream_type
: The type of stream context we are going to use.

ccv_cnnp_dataframe_add_aux
(ccv_cnnp_dataframe_t *const dataframe, const ccv_nnc_tensor_param_t params)¶ Add a new column that contains some tensors. This adds a new column in which each row is a tensor specified by the given parameters. It comes in handy when you want to have some auxiliary tensors along with each row.
 Return
 The index of the newly added column.
 Parameters
dataframe
: The dataframe object that gets the new column.
params
: The parameters for the tensors.

ccv_cnnp_dataframe_read_image
(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const off_t structof)¶ Read an image off the said column. That column should contain the filename (as a char array). The new column will contain the ccv_dense_matrix_t / ccv_nnc_tensor_t (the two are toll-free bridged) of the image.
 Return
 The index of the newly derived column.
 Parameters
dataframe
: The dataframe object that loads the images.
column_idx
: The column which contains the filename.
structof
: The offset to the filename (as a char array) from that column. For example, the column could be a struct and filename could be one of its fields; in that case, you can pass offsetof(S, filename).

ccv_cnnp_dataframe_image_random_jitter
(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const int datatype, const ccv_cnnp_random_jitter_t random_jitter)¶ Apply random jitter on an image to generate a new image.
 Return
 The index of the newly derived column.
 Parameters
dataframe
: The dataframe object that contains the original image.
column_idx
: The column which contains the original image.
datatype
: The final datatype of the image. We only support CCV_32F right now.
random_jitter
: The random jitter parameters to be applied.

ccv_cnnp_dataframe_one_hot
(ccv_cnnp_dataframe_t *const dataframe, const int column_idx, const off_t structof, const int range, const float onval, const float offval, const int datatype, const int format)¶ Generate a one-hot tensor from the label in a struct.
 Return
 The index of the newly derived column.
 Parameters
dataframe
: The dataframe object that contains the label.
column_idx
: The column which contains the label (as int).
structof
: The offset to the label (as int) from that column. For example, the column could be a struct and the label could be one of its fields; you can pass offsetof(S, label).
range
: The range of the label, from [0, range - 1].
onval
: The value at the label index.
offval
: The value for the others.
datatype
: The datatype of the tensor.
format
: The format of the tensor.

ccv_cnnp_dataframe_batching_new
(ccv_cnnp_dataframe_t *const dataframe, const int *const column_idxs, const int column_idx_size, const int batch_count, const int group_count, const int format)¶ Batch multiple tensors in a column into one tensor. This method can take multiple columns, which will result in a tuple of tensors. Each tensor in the tuple is a batched one from a given column.
 Return
 The newly created dataframe whose 0th column is the tuple of batched tensors.
 Parameters
dataframe
: The dataframe that contains the columns of tensors to be batched.
column_idxs
: The columns that contain the tensors.
column_idx_size
: The number of columns that contain the tensors.
batch_count
: How many tensors in one column are to be batched together.
group_count
: We can generate many groups of batched tensors. For example, if you have columns A, B, C, each with different tensors: if group_count is 1, the result tuple will be (A_b, B_b, C_b); if group_count is 2, the result tuple will be (A_b1, B_b1, C_b1, A_b2, B_b2, C_b2). A_b1 etc. will still contain batch_count tensors each.
format
: The result format of the tensor. We support simple NCHW <=> NHWC transformations on the source tensor.

struct
ccv_cnnp_random_jitter_t
¶  #include <ccv_nnc.h>
The structure to describe how to apply random jitter to the image.
Public Members

float
contrast
¶ The random contrast, the final contrast will be [1 / (1 + contrast), 1 + contrast]

float
saturation
¶ The saturation, the final saturation will be [1 / (1 + saturation), 1 + saturation]

float
brightness
¶ The brightness, the final brightness will be between [1 / (1 + brightness), 1 + brightness]

float
lighting
¶ AlexNet style PCA based image jitter

float
aspect_ratio
¶ Stretch aspect ratio between [1 / (1 + aspect_ratio), 1 + aspect_ratio]

int
symmetric
¶ Apply random flip on the x-axis (i.e., around the y-axis).

int
seed
¶ The seed for random generator.

int
center_crop
¶ Enable crop to the center (otherwise do random crop).

int
min
¶ The minimal dimension of resize

int
max
¶ The maximal dimension of the resize. The final resize can be computed from min + (max - min) * random_unit

int
rows
¶ The height of the final image.

int
cols
¶ The width of the final image.

int
x
¶ The extra random offset on the x-axis.

int
y
¶ The extra random offset on the y-axis.

float
mean
[3]¶ Normalize the image with mean.

float
std
[3]¶ Normalize the image with std. pixel = (pixel - mean) / std

Model API¶

enum
[anonymous]
::
__anonymous103
¶ Values:

CCV_CNNP_MODEL_CHECKPOINT_READ_WRITE
¶ This is the default flag, if the model is not initialized, will attempt to read from the disk. Otherwise, will persist existing parameters to disk.

CCV_CNNP_MODEL_CHECKPOINT_READ_ONLY
¶ Only read parameters out of disk, even if it is already initialized.

CCV_CNNP_MODEL_CHECKPOINT_WRITE_ONLY
¶ Only write parameters to disk.


enum
[anonymous]
::
__anonymous104
¶ Values:

CCV_CNNP_ACTIVATION_NONE
¶

CCV_CNNP_ACTIVATION_RELU
¶

CCV_CNNP_ACTIVATION_SOFTMAX
¶


enum
[anonymous]
::
__anonymous106
¶ Values:

CCV_CNNP_IO
¶ The parameter is a ccv_cnnp_io_t.

CCV_CNNP_NO_TENSOR
¶ The parameter is not used.

CCV_CNNP_INIT_SHARED_TENSOR
¶ The parameter is a provided tensor for initialization.

CCV_CNNP_INIT_SHARED_TENSOR_AS_TRAINABLE
¶ The parameter is a provided tensor that can be updated.


typedef struct ccv_cnnp_model_s
ccv_cnnp_model_t
¶ The model type is an abstract type; you won't ever interact with a naked model.

typedef struct ccv_cnnp_model_io_s *
ccv_cnnp_model_io_t
¶ With this type, NNC now has 4 types that represent a "tensor": ccv_nnc_tensor_t / ccv_nnc_tensor_view_t / ccv_nnc_tensor_multiview_t: a concrete tensor with memory allocated. ccv_nnc_tensor_symbol_t: a symbolic representation of a tensor, with its data layout, device affinity, and type specified. ccv_nnc_tensor_variable_t: in a dynamic graph, this represents a concrete tensor with memory allocated, but also associated with a recorded execution. ccv_cnnp_model_io_t: this is the most flexible one. No data layout, device affinity or type is specified; the format has to be c / h / w, with no batch size needed. This is a handle used by the model API to associate model inputs / outputs.

typedef ccv_nnc_cmd_t (*
ccv_cnnp_model_minimizer_set_f
)(const ccv_cnnp_model_t *const model, const ccv_cnnp_trainable_index_t *const indexes, const int index_size, const void *const context)¶ The setter function prototype for ccv_cnnp_model_set_minimizer. This is useful because it helps to set different minimizer parameters for different trainables. An example would be disabling weight decay for bias / scale variables. Expanding this idea a bit, we could also support an entirely different minimizer function for different trainables. However, I haven't seen anything that can be trained with different minimizers (most likely because the epoch updates the learning rate, so it is hard to manipulate the proper learning rate if different minimizers are used for different trainables at the same time). If there is a model that does that, I can add it (it needs some thinking though). Because we cannot attach names to the trainables (hmm, in retrospect, we probably should), the way we identify them is through which node they are used by (by the command type), and in which position. Also, it is only interesting if the trainable is the input of some command; therefore, it only shows up if it is an input.

ccv_cnnp_input
(void)¶ Create a naked input.
 Return
 A ccv_cnnp_model_io_t represents an input.

ccv_cnnp_model_apply
(ccv_cnnp_model_t *const model, const ccv_cnnp_model_io_t *const inputs, const int input_size)¶ This method mimics Keras callable for model (thus, override __call__ method in Python class).
 Return
 A ccv_cnnp_model_io_t that represents the output of the given model.
 Parameters
model
: A model that we can apply a set of inputs to get one output.
inputs
: The set of inputs.
input_size
: The size of the inputs array.

ccv_cnnp_model_new
(const ccv_cnnp_model_io_t *const inputs, const int input_size, const ccv_cnnp_model_io_t *const outputs, const int output_size, const char *const name)¶ This method name is deceiving. It returns a composed model, not a naked model. This composed model takes a set of inputs, and runs them through various other models to arrive at the set of outputs.
 Return
 A composed model that takes inputs, and generate the outputs.
 Parameters
inputs
: The set of inputs.
input_size
: The size of the inputs array.
outputs
: The set of outputs.
output_size
: The size of the outputs array.
name
: The unique name of the model.

ccv_cnnp_sequential_new
(ccv_cnnp_model_t *const *const models, const int model_size, const char *const name)¶ This method returns a sequential model, which is composed from a sequence of models.
 Return
 A composed model that applies these models one by one in sequence.
 Parameters
models
: The list of models; each takes one input and emits one output, feeding into the subsequent one.
model_size
: The size of the list.
name
: The unique name of the model.

void
ccv_cnnp_model_compile
(ccv_cnnp_model_t *const model, const ccv_nnc_tensor_param_t *const inputs, const int input_size, const ccv_nnc_cmd_t minimizer, const ccv_nnc_cmd_t loss)¶ Prepare the model to be trained; the inputs specify the batch size etc. Technically the input size is not needed; it is here as a safety check.
 Parameters
model
: The model to be compiled.
inputs
: The tensor parameters for the model's inputs, which can be used to derive all tensor shapes.
input_size
: The size of the inputs array.
minimizer
: The wrapped command that represents a particular optimization strategy.
loss
: The wrapped command that computes the loss function.

void
ccv_cnnp_model_dot
(const ccv_cnnp_model_t *const model, const int flags, FILE **const outs, const int out_size)¶ Generate output that can be parsed by GraphViz (DOT language).
 Parameters
model
: The composed model.
flags
: Either CCV_NNC_SHORT_DOT_GRAPH or CCV_NNC_LONG_DOT_GRAPH.
outs
: The output file streams.
out_size
: The size of the output file stream array.

void
ccv_cnnp_model_fit
(ccv_cnnp_model_t *const model, ccv_nnc_tensor_t *const *const inputs, const int input_size, ccv_nnc_tensor_t *const *const fits, const int fit_size, ccv_nnc_tensor_t *const *const outputs, const int output_size, ccv_nnc_stream_context_t *const stream_context)¶ Fit a model to the given inputs / outputs. This is a combination of running ccv_cnnp_model_evaluate / ccv_cnnp_model_backward / ccv_cnnp_model_apply_gradients. The difference is that when calling the individual functions, the graph is compiled piece by piece, and thus is less efficient than calling ccv_cnnp_model_fit directly. However, having the separate functions makes this implementation much more versatile; for example, you can accumulate gradients over multiple batches, or use custom gradients, etc.
 Parameters
model
: The composed model.
inputs
: The input tensors.
input_size
: The size of the input tensors array.
fits
: The target tensors.
fit_size
: The size of the target tensors array.
outputs
: The actual outputs from the model.
output_size
: The size of the outputs array.
stream_context
: The stream on which the fit can be executed.

void
ccv_cnnp_model_evaluate
(ccv_cnnp_model_t *const model, const ccv_cnnp_evaluate_param_t params, ccv_nnc_tensor_t *const *const inputs, const int input_size, ccv_nnc_tensor_t *const *const outputs, const int output_size, ccv_nnc_stream_context_t *const stream_context)¶ Evaluate model with output.
 Parameters
model
: The composed model.
params
: The parameters for how the evaluation should behave.
inputs
: The input tensors.
input_size
: The size of the input tensors array.
outputs
: The actual outputs from the model.
output_size
: The size of the outputs array.
stream_context
: The stream on which the evaluation can be executed.

void
ccv_cnnp_model_backward
(ccv_cnnp_model_t *const model, ccv_nnc_tensor_t *const *const ingrads, const int ingrad_size, ccv_nnc_tensor_t *const *const outgrads, const int outgrad_size, ccv_nnc_stream_context_t *const stream_context)¶ Based on the input gradients, compute the output gradients (w.r.t. the inputs). This also adds trainable gradients.
 Parameters
model
: The composed model.
ingrads
: The input gradients.
ingrad_size
: The size of the input gradients array.
outgrads
: The output gradients (w.r.t. the inputs).
outgrad_size
: The size of the output gradients array.
stream_context
: The stream on which the gradient computation can be executed.

void
ccv_cnnp_model_apply_gradients
(ccv_cnnp_model_t *const model, ccv_nnc_stream_context_t *const stream_context)¶ Apply the computed gradients to the trainable tensors.
 Parameters
model
: The composed model.
stream_context
: The stream on which the gradient application can be executed.

void
ccv_cnnp_model_checkpoint
(ccv_cnnp_model_t *const model, const char *const fn, const int flags)¶ This method checkpoints the given model. If the model is initialized, it will persist all parameters to the given file path. If it is not initialized, this method will try to load tensors off the disk.
 Parameters
model
: The composed model.
fn
: The file name.
flags
: Whether we perform read / write on this checkpoint, or read only / write only.

void
ccv_cnnp_model_set_data_parallel
(ccv_cnnp_model_t *const model, const int parallel)¶ Apply data parallel to the composed model. This method has to be called before we call either evaluate or fit and after the model is compiled.
 Parameters
model
: The composed model.
parallel
: Number of devices we want to run on. 0 will use all devices available. 1 will skip.

void
ccv_cnnp_model_set_memory_compression
(ccv_cnnp_model_t *const model, const int memory_compression)¶ Apply memory compression to the composed model. The memory compression technique can reduce memory usage by up to 75% compared with the raw mixed-precision model during training.
 Parameters
model
: The composed model.
memory_compression
: Whether to enable memory compression (1 - enable, 0 - disable (default)).

void
ccv_cnnp_model_set_workspace_size
(ccv_cnnp_model_t *const model, size_t workspace_size)¶ This method sets the max workspace size. If the graph is already compiled, it will rerun autotune with the new workspace size to find the best algorithm.
 Parameters
model
: The composed model.
workspace_size
: The size in bytes that we can use as workspace (scratch memory).

void
ccv_cnnp_model_set_minimizer
(ccv_cnnp_model_t *const model, const ccv_nnc_cmd_t minimizer, const ccv_cnnp_model_minimizer_set_f minimizer_setter, const void *const context)¶ Set a new minimizer for the model. This is useful when you need to update the learning rate for stochastic gradient descent, for example. This method can be called at any time during the training process (after compilation).
 Parameters
model
: The composed model.
minimizer
: The wrapped command that represents a new optimization strategy.
minimizer_setter
: The function to be called to return the minimizer for a particular trainable.
context
: The context passed to the minimizer setter function.

ccv_cnnp_model_default_stream
(const ccv_cnnp_model_t *const model)¶ Get the default stream from a compiled model. If the model is not compiled, the default stream is 0.
 Return
 The default stream for this model.
 Parameters
model
: The composed model.

ccv_cnnp_model_memory_size
(const ccv_cnnp_model_t *const model)¶ Get the allocated memory size (excluding workspace) from a compiled model. If the model is not compiled, the size is 0.
 Return
 The number of bytes for memory allocated.
 Parameters
model
: The composed model.

void
ccv_cnnp_model_free
(ccv_cnnp_model_t *const model)¶ Free a given model.
 Parameters
model
: The composed model.

ccv_cnnp_add
(const char *const name)¶ Add multiple input tensors together.
 Return
 A model that can be applied with multiple inputs, and generate output that is a sum of the inputs.
 Parameters
name
: The unique name of the model.

ccv_cnnp_concat
(const char *const name)¶ Concatenate input tensors together.
 Return
 A model that can be applied with multiple inputs, and generate output that is a concatenation of the inputs.
 Parameters
name
: The unique name of the model.

ccv_cnnp_identity
(const ccv_cnnp_param_t params, const char *const name)¶ An identity layer that takes the input and does nothing but pass it through as the output. Realistically, we use this because we want to apply some normalization / activation function on top of the input.
 Return
 A model that takes input and pass it as output.
 Parameters
params
: Parameters (such as hint and activation or norm).
name
: The unique name of the model.

ccv_cnnp_convolution
(const int groups, const int filters, const int kdim[CCV_NNC_MAX_DIM_ALLOC], const ccv_cnnp_param_t params, const char *const name)¶ A convolution model.
 Return
 A convolution model.
 Parameters
groups
: The number of kernel groups in the model.
filters
: The total number of filters in the model (filters = groups * per-group filters).
kdim
: The dimensions of the kernel.
params
: Other parameters (such as hint and activation or norm).
name
: The unique name of the model.

ccv_cnnp_dense
(const int count, const ccv_cnnp_param_t params, const char *const name)¶ A dense layer model.
 Return
 A dense layer model.
 Parameters
count
: The output dimension.
params
: Other parameters (such as hint and activation or norm).
name
: The unique name of the model.

ccv_cnnp_max_pool
(const int kdim[CCV_NNC_MAX_DIM_ALLOC], const ccv_cnnp_param_t params, const char *const name)¶ A max pool model.
 Return
 A max pool model.
 Parameters
kdim
: The pooling window dimension.
params
: Other parameters (such as hint and activation or norm).
name
: The unique name of the model.

ccv_cnnp_average_pool
(const int kdim[CCV_NNC_MAX_DIM_ALLOC], const ccv_cnnp_param_t params, const char *const name)¶ An average pool model.
 Return
 An average pool model.
 Parameters
kdim
: The pooling window dimension.
params
: Other parameters (such as hint and activation or norm).
name
: The unique name of the model.

ccv_cnnp_reshape
(const int dim[CCV_NNC_MAX_DIM_ALLOC], const char *const name)¶ Reshape an input into a different dimension.
 Return
 A reshape layer model.
 Parameters
dim
: The new dimension for the input.
name
: The unique name of the model.

ccv_cnnp_flatten
(const char *const name)¶ Flatten an input tensor into a one-dimensional array.
 Return
 A flatten layer model.
 Parameters
name
: The unique name of the model.

ccv_cnnp_cmd_exec
(const ccv_nnc_cmd_t cmd, const ccv_nnc_hint_t hint, const int flags, const ccv_cnnp_tensor_param_t *const inputs, const int input_size, const int *const outputs, const int output_size, const char *const name)¶ A generic model based on the command. If the tensors are labeled as ccv_cnnp_io_t, they will participate as the input / output of the model. If it is an init tensor, the model will use this tensor for that parameter. Moreover, if it is marked as trainable, that tensor will be differentiated against when you call ccv_cnnp_model_fit. This model, however, doesn't take over ownership of the tensors. You should manage the life cycle of the given tensors, and it is your responsibility to make sure they outlive the model. Also, all inputs and outputs marked as init tensors will be shared if you reuse this model in other places.
 Return
 A model based on the given command.
 Parameters
cmd
: The command to generate this model.
hint
: The hint to run the command.
flags
: The flags with the command.
inputs
: A list of ccv_cnnp_tensor_param_t that identifies each input as either an init tensor or a ccv_cnnp_io_t.
input_size
: The size of the input list.
outputs
: A list of types identifying each output as ccv_cnnp_io_t or a none tensor.
output_size
: The size of the outputs. There is no need to give ccv_cnnp_tensor_param_t for outputs because all of them are of CCV_CNNP_IO type.
name
: The unique name of the model.

ccv_cnnp_graph
(const ccv_nnc_symbolic_graph_t *const graph, const ccv_cnnp_tensor_symbol_param_t *const tensor_symbol_params, const int tensor_symbol_param_size, ccv_nnc_tensor_symbol_t *const inputs, const int input_size, ccv_nnc_tensor_symbol_t *const outputs, const int output_size, const char *const name)¶ A generic model based on the symbolic graph we provide. A list of tensor symbols are labeled as ccv_cnnp_io_t or not (we identify whether a symbol is an input or output based on whether it is in the graph). If it is not, we init it with a given tensor. If it is marked as trainable, that tensor will be differentiated against when you call ccv_cnnp_model_fit. The model doesn't take ownership over the init tensors. You are responsible for making sure the init tensors outlive the model until initialization has occurred. Also, these tensors will be shared if the model is reused.
 Return
 A model based on the given symbolic graph.
 Parameters
graph
: The symbolic graph that is our blueprint for this model.
tensor_symbol_params
: The list of tensor symbol parameters that label a given symbol.
tensor_symbol_param_size
: The size of the list.
inputs
: The inputs to this graph. We can figure out which ones are inputs, but this gives us the order.
input_size
: The size of the input list.
outputs
: The outputs from this graph. We can figure out which ones are outputs, but this gives us the order.
output_size
: The size of the output list.
name
: The unique name of the model.

struct
ccv_cnnp_evaluate_param_t
¶  #include <ccv_nnc.h>
The parameters for how evaluation should behave.
Public Members

int
requires_grad
¶ Whether we need to keep intermediate results for gradient computations.

int
enable_outgrad
¶ Whether we can compute outflow gradients when calling ccv_cnnp_model_backward later.

int
is_test
¶ Whether we evaluate it as test, or just as forward pass of the training process.


struct
ccv_cnnp_trainable_index_t
¶  #include <ccv_nnc.h>
Simple structure for group of command and the index for the variable.

struct
ccv_cnnp_param_t
¶ Public Members

int
no_bias
¶ No bias term.

int
norm
¶ The normalizations can be applied after activation such as CCV_CNNP_BATCH_NORM.

int
activation
¶ The activations can be applied for the output, such as CCV_CNNP_ACTIVATION_RELU or CCV_CNNP_ACTIVATION_SOFTMAX.

ccv_nnc_hint_t
hint
¶ The hint for a particular operation


struct
ccv_cnnp_tensor_param_t
¶

struct
ccv_cnnp_tensor_symbol_param_t
¶ Public Members

ccv_nnc_tensor_symbol_t
symbol
¶ The tensor symbol this is reference to.

int
type
¶ The type of the parameter, could be CCV_CNNP_IO, INIT_SHARED_TENSOR, or INIT_SHARED_TENSOR_TRAINABLE

ccv_nnc_tensor_t *
tensor
¶ The tensor that is going to be used for initialization.
