Level 4 API

Essentials

typedef struct ccv_nnc_dynamic_graph_s ccv_nnc_dynamic_graph_t

Opaque pointer to the dynamic graph structure.

typedef struct ccv_nnc_tensor_variable_s *ccv_nnc_tensor_variable_t

This masquerades as an on-stack variable; there is a heap allocation behind it, but it is managed by the dynamic graph. The fact that ccv_nnc_tensor_variable_t is a pointer is an implementation detail. It should be treated as an opaque type throughout. We may later extend this to carry some on-stack information, or even just a uid.

typedef void (*ccv_nnc_tensor_variable_destructor_f)(ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_tensor_t *const tensor, void *const context)

A destructor function to be called when a tensor variable is freed, in the sense that no backward computation needs it any more. That is why we pass the tensor, rather than the tensor variable, to the destructor.

typedef struct ccv_cnnp_model_s ccv_cnnp_model_t

Read more in Level-5 API section.

The model type is abstract; you will never interact with a naked model directly.

ccv_nnc_dynamic_graph_t *ccv_nnc_dynamic_graph_new(void)

Create a dynamic graph.

Returns:

A newly created dynamic graph.
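
For illustration, a minimal sketch of the dynamic graph life-cycle, assuming the convenience macros (CPU_TENSOR_NHWC, ccv_nnc_tensor_from_variable) from the nnc headers; include paths may differ with your install:

    #include <nnc/ccv_nnc.h>
    #include <nnc/ccv_nnc_easy.h>

    int main(void)
    {
        ccv_nnc_init(); // Register commands and backends.
        ccv_nnc_dynamic_graph_t* const graph = ccv_nnc_dynamic_graph_new();
        // A 1-element 32-bit float tensor variable, filled through its underlying tensor.
        ccv_nnc_tensor_variable_t const a = ccv_nnc_tensor_variable_new(graph, CPU_TENSOR_NHWC(32F, 1));
        ccv_nnc_tensor_from_variable(graph, a)->data.f32[0] = 1.23;
        // ... execute commands against the graph ...
        ccv_nnc_dynamic_graph_free(graph); // Also frees any remaining tensor variables.
        return 0;
    }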

ccv_nnc_tensor_variable_t ccv_nnc_tensor_variable_alias_new(ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_tensor_variable_t tensor_variable, const int ofs[CCV_NNC_MAX_DIM_ALLOC], const int stride[CCV_NNC_MAX_DIM_ALLOC], const ccv_nnc_tensor_param_t info)

Create a new tensor variable that is an alias of a given tensor variable. You can alias any tensor variable that is not itself an alias. You can also alias an alias, with one extra condition: the alias you alias from needs to be contiguous (for example, a vector is contiguous). If both conditions are satisfied, you can alias an alias.

Parameters:
  • graph – The dynamic graph.

  • tensor_variable – The tensor variable we are going to alias from.

  • ofs – The offset on each of the dimensions.

  • stride – The stride of each dimension. If all 0, it matches the dimensions of the tensor_variable.

  • info – The tensor parameters for the new alias.

Returns:

New tensor variable that is an alias.
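
As a sketch, aliasing the second row of a 2x2 matrix as a contiguous 2-element vector (CPU_TENSOR_NHWC is the convenience macro from the nnc headers):

    ccv_nnc_tensor_variable_t const m = ccv_nnc_tensor_variable_new(graph, CPU_TENSOR_NHWC(32F, 2, 2));
    static const int ofs[CCV_NNC_MAX_DIM_ALLOC] = {1, 0}; // Start at row 1.
    static const int stride[CCV_NNC_MAX_DIM_ALLOC] = {0}; // All 0: match m's own layout.
    ccv_nnc_tensor_variable_t const row = ccv_nnc_tensor_variable_alias_new(graph, m, ofs, stride, CPU_TENSOR_NHWC(32F, 2));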

ccv_nnc_tensor_param_t ccv_nnc_tensor_variable_params(ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_tensor_variable_t tensor_variable)

Get the parameters for a tensor variable.

Parameters:
  • graph – The dynamic graph.

  • tensor_variable – The tensor variable reference.

Returns:

The tensor parameters.

int ccv_nnc_tensor_variable_alias_params(const ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_tensor_variable_t tensor_variable, int ofs[CCV_NNC_MAX_DIM_ALLOC], int stride[CCV_NNC_MAX_DIM_ALLOC])

Get the parameters for a tensor variable alias.

Parameters:
  • graph – The dynamic graph.

  • tensor_variable – The tensor variable reference.

  • ofs – The offset on each of the dimensions.

  • stride – The stride of each dimension.

Returns:

Non-zero if the given tensor variable is not a tensor alias.

int ccv_nnc_tensor_variable_is_constant(const ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_tensor_variable_t tensor_variable)

Query whether a given tensor variable is a constant (no gradient).

Parameters:
  • graph – The dynamic graph.

  • tensor_variable – The tensor variable to query whether it is a constant.

void ccv_nnc_tensor_variable_set(ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_tensor_variable_t tensor_variable, ccv_nnc_tensor_t *const tensor)

Set a tensor on the tensor variable. The tensor variable does not take over life-cycle management of the tensor (similar to how tensor binds work).

Parameters:
  • graph – The dynamic graph.

  • tensor_variable – The tensor variable to set.

  • tensor – The tensor that is going to be associated with the tensor variable.
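
A sketch of binding an externally managed tensor to a variable:

    // A tensor we own; the variable references it but won't free it.
    ccv_nnc_tensor_t* const t = ccv_nnc_tensor_new(0, CPU_TENSOR_NHWC(32F, 2), 0);
    t->data.f32[0] = 0.5;
    t->data.f32[1] = 1.5;
    ccv_nnc_tensor_variable_t const v = ccv_nnc_tensor_variable_new(graph);
    ccv_nnc_tensor_variable_set(graph, v, t);
    // ... compute with v ...
    // We remain responsible for freeing t once the graph no longer uses it.
    ccv_nnc_tensor_free(t);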

void ccv_nnc_tensor_variable_detach(ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_tensor_variable_t tensor_variable)

Detach the tensor variable from the current graph. It then acts as if it had been computed between ``ccv_nnc_dynamic_graph_set_no_grad`` calls. There are a few requirements: 1. It cannot be an alias when detached; you have to detach the original, not the alias. 2. Detaching a variable can impact correctness when computing gradients: it cuts off backprop, acting as if the detached variable were a constant (and it will be marked as such). After this call, the tensor variable is marked as constant, which you can query through ``ccv_nnc_tensor_variable_is_constant``. Why this method, rather than making the variable a constant to begin with? First, a constant cannot be an output. Second, you may not have wrapped your computation in no-grad, or not all inputs were constants, so you end up with a tensor variable that is on the graph. This method rescues you from that situation.

Parameters:
  • graph – The dynamic graph.

  • tensor_variable – The tensor variable to be detached.

void ccv_nnc_tensor_variable_destructor_hook(ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_tensor_variable_t tensor_variable, ccv_nnc_tensor_variable_destructor_f func, void *const context)

Hook into a tensor variable such that when it is actually freed (destroyed), the callback will be invoked.

Parameters:
  • graph – The dynamic graph.

  • tensor_variable – The tensor variable to observe when it is destroyed.

  • func – The callback function.

  • context – The context to be passed along to the callback function.
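
A sketch of counting how many observed variables have actually been freed (the on_freed / freed_count names are hypothetical, for illustration):

    static void on_freed(ccv_nnc_dynamic_graph_t* const graph, const ccv_nnc_tensor_t* const tensor, void* const context)
    {
        ++*(int*)context; // Tally each variable that is truly destroyed.
    }

    int freed_count = 0;
    ccv_nnc_tensor_variable_destructor_hook(graph, a, on_freed, &freed_count);
    ccv_nnc_tensor_variable_free(graph, a); // on_freed fires once no backward computation needs a.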

void ccv_nnc_dynamic_graph_has_effect_to_tensor_variables(const ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_tensor_variable_t *const source_variables, const int source_variable_size, const ccv_nnc_tensor_variable_t *const destination_variables, const int destination_variable_size, uint64_t *const bitmask)

Check whether a given set of tensor variables has any effect on another set of tensor variables.

Parameters:
  • graph – The dynamic graph.

  • source_variables – The tensor variables to check for effects on the destination variables.

  • source_variable_size – The size of the source tensor variables array.

  • destination_variables – The tensor variables the source variables may have an effect on.

  • destination_variable_size – The size of the destination tensor variables array.

  • bitmask – Bit-field return value; each bit represents a source tensor variable, and 1 means it can reach some of the destinations.
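
A sketch of interpreting the bitmask, given variables a, b and a result c computed from them (TENSOR_VARIABLE_LIST is the convenience macro from the nnc headers):

    uint64_t bitmask = 0;
    ccv_nnc_dynamic_graph_has_effect_to_tensor_variables(graph, TENSOR_VARIABLE_LIST(a, b), TENSOR_VARIABLE_LIST(c), &bitmask);
    const int a_reaches_c = !!(bitmask & 1); // Bit 0 corresponds to a.
    const int b_reaches_c = !!(bitmask & 2); // Bit 1 corresponds to b.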

int ccv_nnc_dynamic_graph_exec(ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_cmd_t cmd, const ccv_nnc_hint_t hint, const int flags, const ccv_nnc_tensor_variable_t *const inputs, const int input_size, ccv_nnc_tensor_variable_t *const outputs, const int output_size, const int parallel, ccv_nnc_stream_context_t *const stream_context)

Execute a command with the given tensor variables; the results are written to the output tensor variables.

Parameters:
  • graph – The dynamic graph.

  • cmd – The wrapped command.

  • hint – The hint associated with the command.

  • flags – A reserved field for flags.

  • inputs – The input tensor variables array.

  • input_size – The size of the input tensor variables array.

  • outputs – The output tensor variables array.

  • output_size – The size of the output tensor variables array.

  • parallel – The parallel parameter, how many concurrent computations we need to execute.

  • stream_context – Which stream this command will be executed upon.
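
A sketch of an element-wise product, with the output shape derived at execution time (CMD_EWPROD_FORWARD, ccv_nnc_no_hint and TENSOR_VARIABLE_LIST come from the nnc headers):

    ccv_nnc_tensor_variable_t const a = ccv_nnc_tensor_variable_new(graph, CPU_TENSOR_NHWC(32F, 1));
    ccv_nnc_tensor_from_variable(graph, a)->data.f32[0] = 1.12;
    ccv_nnc_tensor_variable_t const b = ccv_nnc_tensor_variable_new(graph, CPU_TENSOR_NHWC(32F, 1));
    ccv_nnc_tensor_from_variable(graph, b)->data.f32[0] = 2.2;
    ccv_nnc_tensor_variable_t const c = ccv_nnc_tensor_variable_new(graph); // Shape derived on exec.
    ccv_nnc_dynamic_graph_exec(graph, CMD_EWPROD_FORWARD(), ccv_nnc_no_hint, 0,
        TENSOR_VARIABLE_LIST(a, b), TENSOR_VARIABLE_LIST(c), 0 /* parallel */, 0 /* stream */);
    // ccv_nnc_tensor_from_variable(graph, c)->data.f32[0] is now 1.12 * 2.2.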

void ccv_nnc_dynamic_graph_backward(ccv_nnc_dynamic_graph_t *const dynamic_graph, const ccv_nnc_tensor_variable_t *const f_variables, const int f_variable_size, const ccv_nnc_tensor_variable_t *const df_optionals, const ccv_nnc_tensor_variable_t *const inputs, const int input_size, ccv_nnc_tensor_variable_t *const outputs, const int output_size, ccv_nnc_stream_context_t *const stream_context)

Compute the gradients with respect to the given input tensors for the losses f. Thus, df / dt.

Parameters:
  • dynamic_graph – The dynamic graph.

  • f_variables – The output losses.

  • f_variable_size – The size of output losses array.

  • df_optionals – The custom gradients for f. If not provided, will default to 1.

  • inputs – The input variables.

  • input_size – The size of the input variables array.

  • outputs – The gradients with respect to the inputs. If a gradient variable already has a value, the new gradient will be accumulated into it.

  • output_size – The size of the outputs array. Should be equal to the input_size.

  • stream_context – Which stream this computation will be executed upon.
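
Continuing the element-wise product example from ccv_nnc_dynamic_graph_exec, a sketch of computing dc/da:

    ccv_nnc_tensor_variable_t const da = ccv_nnc_tensor_variable_new(graph);
    ccv_nnc_dynamic_graph_backward(graph, TENSOR_VARIABLE_LIST(c), 0 /* df defaults to 1 */,
        TENSOR_VARIABLE_LIST(a), TENSOR_VARIABLE_LIST(da), 0 /* stream */);
    // Since c = a * b, dc/da = b, so da now holds 2.2.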

void ccv_nnc_dynamic_graph_apply_gradients(ccv_nnc_dynamic_graph_t *const dynamic_graph, const ccv_nnc_cmd_t minimizer, const ccv_nnc_tensor_variable_t *const gradients, const int gradient_size, ccv_nnc_tensor_variable_t *const parameters, const int parameter_size, ccv_nnc_tensor_variable_t *const saved_aux, const int parallel, ccv_nnc_stream_context_t *const stream_context)

Apply gradients to the set of parameters to update them with appropriate minimizer.

Parameters:
  • dynamic_graph – The dynamic graph.

  • minimizer – The wrapped command that represents a particular optimization strategy.

  • gradients – The computed gradients to be applied.

  • gradient_size – The size of gradients array.

  • parameters – The parameters to update.

  • parameter_size – The size of parameters array, should be the same length as gradients.

  • saved_aux – The aux variables to facilitate the minimizer. See ccv_nnc_minimizer_saved_aux_size.

  • parallel – The parallel parameter, how many concurrent computations we need to execute.

  • stream_context – Which stream this computation will be executed upon.
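
A sketch of one SGD update, given a hypothetical parameter w and its gradient dw from ccv_nnc_dynamic_graph_backward; the CMD_SGD_FORWARD arguments below assume the (nesterov, rate, scale, decay, momentum, dampening) order, so check the command reference for your version:

    const ccv_nnc_cmd_t sgd = CMD_SGD_FORWARD(0, 0.01, 1, 0.01, 0.9, 0.9);
    const int aux_size = ccv_nnc_minimizer_saved_aux_size(sgd);
    ccv_nnc_tensor_variable_t saved_aux[aux_size]; // aux_size entries per parameter.
    int i;
    for (i = 0; i < aux_size; i++)
        saved_aux[i] = ccv_nnc_tensor_variable_new(graph);
    ccv_nnc_dynamic_graph_apply_gradients(graph, sgd, TENSOR_VARIABLE_LIST(dw),
        TENSOR_VARIABLE_LIST(w), saved_aux, 0 /* parallel */, 0 /* stream */);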

void ccv_nnc_dynamic_graph_minimize(ccv_nnc_dynamic_graph_t *const dynamic_graph, const ccv_nnc_cmd_t minimizer, const ccv_nnc_tensor_variable_t *const losses, const int loss_size, const ccv_nnc_tensor_variable_t *const dloss_optionals, ccv_nnc_tensor_variable_t *const parameters, const int parameter_size, ccv_nnc_tensor_variable_t *const saved_aux, const int parallel, ccv_nnc_stream_context_t *const stream_context)

Apply one step of minimization (most likely, a gradient descent) to the parameters with a given loss (or losses).

Parameters:
  • dynamic_graph – The dynamic graph.

  • minimizer – The wrapped command that represents a particular optimization strategy.

  • losses – The losses we are trying to minimize.

  • loss_size – The size of the losses array.

  • dloss_optionals – The custom gradient for losses. If not provided, will default to 1.

  • parameters – The parameters to update.

  • parameter_size – The size of parameters array.

  • saved_aux – The aux variables to facilitate the minimizer. See ccv_nnc_minimizer_saved_aux_size.

  • parallel – The parallel parameter, how many concurrent computations we need to execute.

  • stream_context – Which stream this computation will be executed upon.
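
Minimize fuses the backward pass and the gradient application into one call. A sketch, given a hypothetical parameter w and a scalar loss computed from it (same hedged CMD_SGD_FORWARD assumption as above):

    const ccv_nnc_cmd_t sgd = CMD_SGD_FORWARD(0, 0.01, 1, 0.01, 0.9, 0.9);
    const int aux_size = ccv_nnc_minimizer_saved_aux_size(sgd);
    ccv_nnc_tensor_variable_t saved_aux[aux_size];
    int i;
    for (i = 0; i < aux_size; i++)
        saved_aux[i] = ccv_nnc_tensor_variable_new(graph);
    ccv_nnc_dynamic_graph_minimize(graph, sgd, TENSOR_VARIABLE_LIST(loss), 0 /* dloss defaults to 1 */,
        TENSOR_VARIABLE_LIST(w), saved_aux, 0 /* parallel */, 0 /* stream */);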

void ccv_nnc_dynamic_graph_evaluate(ccv_nnc_dynamic_graph_t *const dynamic_graph, ccv_cnnp_model_t *const model, const int is_test, const ccv_nnc_tensor_variable_t *const inputs, const int input_size, ccv_nnc_tensor_variable_t *const outputs, const int output_size, ccv_nnc_tensor_tape_t *const tensor_tape, ccv_nnc_stream_context_t *const stream_context)

Evaluate a CNNP model on the dynamic graph with a set of inputs / outputs.

Parameters:
  • dynamic_graph – The dynamic graph.

  • model – The CNNP model to be evaluated against. Note that ccv_nnc_dynamic_graph_backward / ccv_nnc_dynamic_graph_apply_gradients / ccv_nnc_dynamic_graph_minimize all work with this model. The dynamic graph takes over the life-cycle of the model, so you no longer need to free it.

  • is_test – Whether we are in test mode or not.

  • inputs – The input variables.

  • input_size – The size of the input variables array.

  • outputs – The output variables.

  • output_size – The size of the outputs array.

  • tensor_tape – An opaque tensor tape object to “backpropagate through time”.

  • stream_context – Which stream this computation will be executed upon.

void ccv_nnc_dynamic_graph_dry_run(ccv_nnc_dynamic_graph_t *const dynamic_graph, ccv_cnnp_model_t *const model, const int is_test, const ccv_nnc_tensor_variable_t *const inputs, const int input_size, ccv_nnc_stream_context_t *const stream_context)

Dry run a CNNP model on the dynamic graph with a set of inputs, up until the point of actual execution.

Parameters:
  • dynamic_graph – The dynamic graph.

  • model – The CNNP model to be evaluated against. Note that ccv_nnc_dynamic_graph_backward / ccv_nnc_dynamic_graph_apply_gradients / ccv_nnc_dynamic_graph_minimize all work with this model. The dynamic graph takes over the life-cycle of the model, so you no longer need to free it.

  • is_test – Whether we are in test mode or not.

  • inputs – The input variables.

  • input_size – The size of the input variables array.

  • stream_context – Which stream this computation will be executed upon.

void ccv_nnc_dynamic_graph_set_max_concurrency(ccv_nnc_dynamic_graph_t *const graph, const int max_stream_count)

Set the maximum operator-level concurrency. This is a soft limit; for example, operations on different devices still run concurrently regardless of this setting.

Parameters:
  • graph – The dynamic graph.

  • max_stream_count – The maximum concurrency if the dynamic graph schedules internal streams. 0 is no limit.

int ccv_nnc_dynamic_graph_set_no_grad(ccv_nnc_dynamic_graph_t *const dynamic_graph, const int no_grad)

Enable or disable gradient computation on a dynamic graph.

Parameters:
  • dynamic_graph – The dynamic graph.

  • no_grad – If it is 1, disable gradient computation on the dynamic graph.

Returns:

0 if the setting was successfully turned; non-zero otherwise (the setting was not turned).
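
A sketch of wrapping inference-style computation in a no-grad region:

    ccv_nnc_dynamic_graph_set_no_grad(graph, 1);
    // Commands executed here keep no bookkeeping for backward computation.
    ccv_nnc_dynamic_graph_exec(graph, CMD_EWPROD_FORWARD(), ccv_nnc_no_hint, 0,
        TENSOR_VARIABLE_LIST(a, b), TENSOR_VARIABLE_LIST(c), 0, 0);
    ccv_nnc_dynamic_graph_set_no_grad(graph, 0); // Re-enable gradient bookkeeping.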

void ccv_nnc_dynamic_graph_gc(ccv_nnc_dynamic_graph_t *const dynamic_graph)

The dynamic graph retains memory it allocated for efficient reuse. Calling this method forces that memory to be collected. This is helpful if you know the existing allocations won't be adequate for future use.

Parameters:

dynamic_graph – The dynamic graph.

void ccv_nnc_tensor_variable_free(ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_tensor_variable_t tensor_variable)

Dispose a tensor variable. You cannot do any computation against this tensor variable afterwards.

Parameters:
  • graph – The dynamic graph.

  • tensor_variable – The tensor variable to be disposed.

void ccv_nnc_dynamic_graph_free(ccv_nnc_dynamic_graph_t *const graph)

Free the dynamic graph.

Parameters:

graph – The dynamic graph.

void ccv_nnc_dynamic_graph_dot(const ccv_nnc_dynamic_graph_t *const graph, const int flags, FILE *out)

Generate output that can be parsed by GraphViz (DOT language).

Parameters:
  • graph – The dynamic graph.

  • flags – Either CCV_NNC_SHORT_DOT_GRAPH or CCV_NNC_LONG_DOT_GRAPH.

  • out – The output file stream.

int ccv_nnc_dynamic_graph_bookkeeping_count(const ccv_nnc_dynamic_graph_t *const graph, const int type)

Count how many ops we kept for gradient computation purposes. This method is useful for asserting, at the end of a training loop, that no gradient computation is left behind.

Parameters:
  • graph – The dynamic graph.

  • type – The type of variables to trace: CCV_NNC_SYMBOL_TENSOR / CCV_NNC_SYMBOL_GRAPH_EXEC.

Returns:

How many gradient computations we kept.
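
For example, at the end of a training step you might assert that nothing is left over:

    #include <assert.h>

    assert(ccv_nnc_dynamic_graph_bookkeeping_count(graph, CCV_NNC_SYMBOL_GRAPH_EXEC) == 0);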

void ccv_nnc_dynamic_graph_format(const ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_symbolic_graph_format_f format_fn, void *const context)

Provide a hook for an upper level to do custom formatting of a given dynamic graph, whatever is inside. You can implement logic to format the graph into protobuf or JSON, for example. However, this is not a method for you to visit the graph and perform mutations on it. Ops that are not needed for gradient computation are likely not kept on the dynamic graph at all, so you may well get an empty graph. What's still available can be checked with ccv_nnc_dynamic_graph_bookkeeping_count.

Parameters:
  • graph – The dynamic graph.

  • format_fn – The format callback to be called on every node.

  • context – The context that will be passed to the callback.

ccv_nnc_tensor_variable_t ccv_nnc_tensor_variable_new(ccv_nnc_dynamic_graph_t *const graph)

Create a new tensor variable without specified dimensions. Most likely this will be used as an output, so its shape can be derived.

Parameters:

graph – The dynamic graph.

Returns:

A newly created tensor variable reference.

ccv_nnc_tensor_variable_t ccv_nnc_tensor_variable_new(ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_tensor_param_t info)

Create a new tensor variable with tensor parameters.

Parameters:
  • graph – The dynamic graph.

  • info – Tensor parameters.

Returns:

A newly created tensor variable reference.

ccv_nnc_tensor_variable_t ccv_nnc_tensor_constant_new(ccv_nnc_dynamic_graph_t *const graph)

Create a new tensor constant without specified dimensions. This may be used with ccv_nnc_tensor_variable_set. A constant cannot be the output of a command that has inputs; it can be the output of a command with no inputs (such as the CCV_NNC_SET_FORWARD command). Constants exist so that we don't need to keep bookkeeping records for them in anticipation of later gradient computation. It is not legal to run ccv_nnc_dynamic_graph_backward against a constant.

Parameters:

graph – The dynamic graph.

Returns:

A newly created tensor constant reference.

ccv_nnc_tensor_variable_t ccv_nnc_tensor_constant_new(ccv_nnc_dynamic_graph_t *const graph, const ccv_nnc_tensor_param_t info)

Create a new tensor constant with tensor parameters.

Parameters:
  • graph – The dynamic graph.

  • info – Tensor parameters.

Returns:

A newly created tensor constant reference.
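
A sketch of a constant participating in forward computation only, given a hypothetical existing tensor variable x:

    ccv_nnc_tensor_variable_t const k = ccv_nnc_tensor_constant_new(graph, CPU_TENSOR_NHWC(32F, 1));
    ccv_nnc_tensor_from_variable(graph, k)->data.f32[0] = 0.5;
    ccv_nnc_tensor_variable_t const y = ccv_nnc_tensor_variable_new(graph);
    ccv_nnc_dynamic_graph_exec(graph, CMD_EWPROD_FORWARD(), ccv_nnc_no_hint, 0,
        TENSOR_VARIABLE_LIST(x, k), TENSOR_VARIABLE_LIST(y), 0, 0);
    // No bookkeeping is kept for k; requesting dy/dk via ccv_nnc_dynamic_graph_backward is not legal.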