Level 2 API

Essentials

enum [anonymous]

Values:

enumerator CCV_NNC_SHORT_DOT_GRAPH

Display a simplified graph.

enumerator CCV_NNC_LONG_DOT_GRAPH

Display a graph that contains all information.

typedef struct ccv_nnc_graph_s ccv_nnc_graph_t

Opaque pointer that holds the concrete graph representation.

typedef struct ccv_nnc_graph_static_schedule_s ccv_nnc_graph_static_schedule_t

Opaque pointer that holds the graph schedule.

typedef struct ccv_nnc_tensor_tape_s ccv_nnc_tensor_tape_t

Opaque pointer to the tape of tensors. The tape is used by while loops.

ccv_nnc_graph_t *ccv_nnc_graph_new(void)

Create an empty graph. Note that none of the graph mutation methods are thread-safe; you should only operate on the graph serially.

Returns:

An opaque ccv_nnc_graph_t pointer.

ccv_nnc_graph_exec_t ccv_nnc_graph_exec_new(ccv_nnc_graph_t *const graph, const ccv_nnc_cmd_t cmd, const ccv_nnc_hint_t hint, ccv_nnc_tensor_t *const *const inputs, const int input_size, ccv_nnc_tensor_t *const *const outputs, const int output_size)

Create a node with a specific command to execute, as well as its inputs & outputs. Under the hood, the graph maintains the backing object for the node; all you get is an on-stack object that indexes into the backing object from the graph.

Parameters:
  • graph – The concrete graph.

  • cmd – The wrapped command.

  • hint – The hint for this command.

  • inputs – The input tensors array.

  • input_size – The size of input tensors array.

  • outputs – The output tensors array.

  • output_size – The size of output tensors array.

Returns:

An on-stack object that references an execution node.

void ccv_nnc_graph_exec_set(ccv_nnc_graph_t *const graph, const ccv_nnc_graph_exec_t exec, const ccv_nnc_cmd_t cmd)

Set the command for an existing execution node.

Parameters:
  • graph – The concrete graph.

  • exec – The execution node reference.

  • cmd – The new wrapped command.

ccv_nnc_cmd_t ccv_nnc_graph_exec_cmd(const ccv_nnc_graph_t *const graph, const ccv_nnc_graph_exec_t exec)

Return the command on an existing execution node.

Parameters:
  • graph – The concrete graph.

  • exec – The execution node reference.

Returns:

The wrapped command.

void ccv_nnc_graph_exec_set_hint(ccv_nnc_graph_t *const graph, const ccv_nnc_graph_exec_t exec, const ccv_nnc_hint_t hint)

Set hint for an existing execution node.

Parameters:
  • graph – The concrete graph.

  • exec – The execution node reference.

  • hint – The new hint.

void ccv_nnc_graph_exec_set_io(ccv_nnc_graph_t *const graph, const ccv_nnc_graph_exec_t exec, ccv_nnc_tensor_t *const *const inputs, const int input_size, ccv_nnc_tensor_t *const *const outputs, const int output_size)

Set input / output tensors for an existing execution node.

Parameters:
  • graph – The concrete graph.

  • exec – The execution node reference.

  • inputs – The input tensors array.

  • input_size – The size of input tensors array.

  • outputs – The output tensors array.

  • output_size – The size of output tensors array.

int ccv_nnc_graph_exec_concat(ccv_nnc_graph_t *const graph, const ccv_nnc_graph_exec_t source, const ccv_nnc_graph_exec_t destination)

Concatenate a source execution node with a destination execution node, creating a dependency edge so the source runs before the destination.

Parameters:
  • graph – The concrete graph.

  • source – The execution node reference to connect.

  • destination – The execution node reference to connect to.

Returns:

Non-zero if the concatenation failed.

int ccv_nnc_graph_exec_disjoin(ccv_nnc_graph_t *const graph, const ccv_nnc_graph_exec_t source, const ccv_nnc_graph_exec_t destination)

Disconnect a source execution node from a destination execution node in this graph.

Parameters:
  • graph – The concrete graph.

  • source – The execution node reference to disconnect.

  • destination – The execution node reference to disconnect from.

Returns:

Non-zero if the disjoin failed.

int ccv_nnc_graph_exec_count(const ccv_nnc_graph_t *const graph)

Count the number of execution nodes in the graph.

Parameters:

graph – The concrete graph.

Returns:

The number of execution nodes in the graph.

void ccv_nnc_graph_dot(const ccv_nnc_graph_t *const graph, const int flags, FILE *out)

Generate output that can be parsed by GraphViz (DOT language).

Parameters:
  • graph – The concrete graph.

  • flags – Either CCV_NNC_SHORT_DOT_GRAPH or CCV_NNC_LONG_DOT_GRAPH.

  • out – The output file stream.

void ccv_nnc_graph_autotune(ccv_nnc_graph_t *const graph, const size_t max_workspace_size, const int flags, const ccv_nnc_graph_exec_t *const sources, const int source_size, const ccv_nnc_graph_exec_t *const destinations, const int destination_size)

Run the autotune function on every execution node, and assign the optimized commands back.

Parameters:
  • graph – The concrete graph.

  • max_workspace_size – The maximum allowed extra memory usage.

  • flags – A reserved field for flags.

  • sources – The source execution nodes to begin with.

  • source_size – The size of the source execution nodes array. 0 uses default sources.

  • destinations – The destination execution nodes at which we end.

  • destination_size – The size of the destination execution nodes array. 0 uses default destinations.

void ccv_nnc_graph_topsort(ccv_nnc_graph_t *const graph, int *const exec_cvt, const int exec_cvt_size)

Topologically sort the graph, so that when it runs, no additional memory needs to be allocated. Otherwise, running the graph requires allocating some heap memory to facilitate traversal.

Parameters:
  • graph – The concrete graph.

  • exec_cvt – The execution node assignments will change, and you can give an array to know the changes.

  • exec_cvt_size – The provided conversion array size.

void ccv_nnc_graph_set_default_static_schedule(ccv_nnc_graph_t *const graph, const int stream_type, const int max_stream_count)

Assuming the graph runs from beginning to end, allocate an internal schedule object that will run the graph efficiently in that case. This essentially calls ccv_nnc_graph_static_schedule_new and saves the result to an internal schedule object on this graph.

Parameters:
  • graph – The concrete graph.

  • stream_type – The type of stream context we are going to use.

  • max_stream_count – The number of stream contexts to be allocated internally.

ccv_nnc_graph_static_schedule_t *ccv_nnc_graph_static_schedule_new(ccv_nnc_graph_t *const graph, const int stream_type, const int max_stream_count, const ccv_nnc_graph_exec_t *const sources, const int source_size, const ccv_nnc_graph_exec_t *const destinations, const int destination_size)

Allocate extra streams so this graph can run in parallel. Note this requires the graph to be topsorted. Once done, you can schedule the graph either on its default stream or on a new stream with the schedule object.

Parameters:
  • graph – The concrete graph.

  • stream_type – The type of stream context we are going to use.

  • max_stream_count – The number of stream contexts to be allocated internally.

  • sources – The source execution nodes to begin with.

  • source_size – The size of the source execution nodes array. 0 uses default sources.

  • destinations – The destination execution nodes at which we end.

  • destination_size – The size of the destination execution nodes array. 0 uses default destinations.

Returns:

An opaque schedule object that lets the graph know how to run itself efficiently.

void ccv_nnc_graph_static_schedule_free(ccv_nnc_graph_static_schedule_t *const schedule)

Free a schedule object for a graph.

Parameters:

schedule – The schedule object returned from ccv_nnc_graph_static_schedule_new.

ccv_nnc_stream_context_t *ccv_nnc_graph_default_stream(const ccv_nnc_graph_t *const graph)

Query the default stream for a given graph.

Parameters:

graph – The concrete graph.

Returns:

The default stream context.

void ccv_nnc_graph_set_sources(ccv_nnc_graph_t *const graph, const ccv_nnc_graph_exec_t *const sources, const int source_size)

Set default sources for a given graph.

Parameters:
  • graph – The concrete graph.

  • sources – The source execution nodes to begin.

  • source_size – The size of source execution nodes.

ccv_nnc_graph_exec_t *ccv_nnc_graph_sources(const ccv_nnc_graph_t *const graph)

Get the default source execution nodes pointer.

Parameters:

graph – The concrete graph.

Returns:

A pointer to an array of default source execution nodes.

int ccv_nnc_graph_source_size(const ccv_nnc_graph_t *const graph)

Get the number of default source execution nodes.

Parameters:

graph – The concrete graph.

Returns:

The number of default source execution nodes.

void ccv_nnc_graph_set_destinations(ccv_nnc_graph_t *const graph, const ccv_nnc_graph_exec_t *const destinations, const int destination_size)

Set default destinations for a given graph.

Parameters:
  • graph – The concrete graph.

  • destinations – The destination execution nodes at which we end.

  • destination_size – The size of destination execution nodes.

ccv_nnc_graph_exec_t *ccv_nnc_graph_destinations(const ccv_nnc_graph_t *const graph)

Get the default destination execution nodes pointer.

Parameters:

graph – The concrete graph.

Returns:

A pointer to an array of default destination execution nodes.

int ccv_nnc_graph_destination_size(const ccv_nnc_graph_t *const graph)

Get the number of default destination execution nodes.

Parameters:

graph – The concrete graph.

Returns:

The number of default destination execution nodes.

void ccv_nnc_graph_free(ccv_nnc_graph_t *const graph)

Deallocate this graph and its relevant auxiliary objects (opaque to the user).

Parameters:

graph – The concrete graph.

int ccv_nnc_graph_run(ccv_nnc_graph_t *const graph, const int flags, const ccv_nnc_graph_exec_t *const sources, const int source_size, const ccv_nnc_graph_exec_t *const destinations, const int destination_size, ccv_nnc_tensor_tape_t *const tensor_tape, ccv_nnc_stream_context_t *const stream_context)

Execute a computation graph with all bells and whistles. You need to supply a tensor tape if the graph contains a backward pass for a while loop or branches. With a tensor tape, tensors are versioned, so you can “backpropagate through time”.

Parameters:
  • graph – The concrete graph.

  • flags – A reserved field for flags.

  • sources – The source execution nodes array.

  • source_size – The size of source execution nodes array. 0 uses default sources.

  • destinations – The destination execution nodes array.

  • destination_size – The size of destination execution nodes array. 0 uses default destinations.

  • tensor_tape – An opaque tensor tape object to “backpropagate through time”.

  • stream_context – Which stream this graph will be executed upon.

Returns:

CCV_NNC_EXEC_SUCCESS if it succeeds.

int ccv_nnc_graph_run_with_schedule(ccv_nnc_graph_t *const graph, const int flags, const ccv_nnc_graph_static_schedule_t *const schedule, ccv_nnc_tensor_tape_t *const tensor_tape, ccv_nnc_stream_context_t *const stream_context)

Execute a computation graph with all bells and whistles. You need to supply a tensor tape if the graph contains a backward pass for a while loop or branches. With a tensor tape, tensors are versioned, so you can “backpropagate through time”. Compared with the ccv_nnc_graph_run method, this method doesn’t take sources / destinations nodes; rather, it takes a schedule object.

Parameters:
  • graph – The concrete graph.

  • flags – A reserved field for flags.

  • schedule – The schedule object that specifies the sources / destinations and how to run this efficiently.

  • tensor_tape – An opaque tensor tape object to “backpropagate through time”.

  • stream_context – Which stream this graph will be executed upon.

Returns:

CCV_NNC_EXEC_SUCCESS if it succeeds.

CCV_NO_GRAPH_EXEC(exec)

Check whether an execution node reference is empty, i.e. does not reference a node in any graph.

struct ccv_nnc_graph_exec_t
#include <ccv_nnc.h>

An opaque on-stack object that holds a reference to an execution node within a graph.

Others

void ccv_nnc_graph_exec_set_io_flags(ccv_nnc_graph_t *const graph, const ccv_nnc_graph_exec_t exec, const int *const input_flags, const int input_flag_size, const int *const output_flags, const int output_flag_size)

Set input / output flags for an existing execution node. This must be called after ccv_nnc_graph_exec_set_io; it sets additional flags for the tensors related to this exec.

Parameters:
  • graph – The concrete graph.

  • exec – The execution node reference.

  • input_flags – The input flags array.

  • input_flag_size – The size of the input flags array; should be the same as the input tensors array (or 0).

  • output_flags – The output flags array.

  • output_flag_size – The size of the output flags array; should be the same as the output tensors array (or 0).

void ccv_nnc_graph_exec_pair_with(ccv_nnc_graph_t *const graph, const ccv_nnc_graph_exec_t exec, const ccv_nnc_graph_exec_t pair_exec)

Set the pair reference for an exec. In the backward pass, an execution node’s pair node is its forward pass node.

Parameters:
  • graph – The concrete graph.

  • exec – The execution node reference.

  • pair_exec – The pair execution node reference.

void ccv_nnc_graph_add_carry_over(ccv_nnc_graph_t *const graph, const ccv_nnc_tensor_t *const from, const ccv_nnc_tensor_t *const to)

Add a tensor pair that can be used to “carry over” (carry over: passing a tensor from the current loop iteration to the next).

Parameters:
  • graph – The concrete graph.

  • from – The tensor we have output in this loop.

  • to – The tensor we will use as input in the next loop.

void ccv_nnc_graph_exec_add_as_affected(ccv_nnc_graph_t *const graph, const ccv_nnc_graph_exec_t exec, ccv_nnc_tensor_t *const update)

Updates are tensors that are not directly involved in the computation, but whose pointers need to be updated along with this exec; registering them here ensures they are updated together with the other exec nodes.

Parameters:
  • graph – The concrete graph.

  • exec – The execution node reference.

  • update – The tensor that needs to be updated along with the execution node.