tf.contrib.cudnn_rnn.CudnnRNNTanh


Class CudnnRNNTanh

Cudnn implementation of the RNN-tanh layer.

__init__


__init__(
    num_layers,
    num_units,
    input_mode=CUDNN_INPUT_LINEAR_MODE,
    direction=CUDNN_RNN_UNIDIRECTION,
    dropout=0.0,
    seed=None,
    dtype=tf.dtypes.float32,
    kernel_initializer=None,
    bias_initializer=None,
    name=None
)

Creates a CudnnRNN model from a model spec.

Args:

  • num_layers: the number of layers for the RNN model.
  • num_units: the number of units within the RNN model.
  • input_mode: indicates whether there is a linear projection between the input and the actual computation before the first layer. It can be 'linear_input', 'skip_input' or 'auto_select'. 'linear_input' (default) always applies a linear projection of the input onto the RNN hidden state (standard RNN behavior); 'skip_input' is only allowed when input_size == num_units; 'auto_select' implies 'skip_input' when input_size == num_units and 'linear_input' otherwise.
  • direction: the direction in which the model operates. Can be either 'unidirectional' or 'bidirectional'.
  • dropout: dropout rate, a number in [0, 1]. Dropout is applied between each layer (no dropout is applied for a model with a single layer). When set to 0, dropout is disabled.
  • seed: the op seed used for initializing dropout. See tf.compat.v1.set_random_seed for behavior.
  • dtype: tf.float16, tf.float32 or tf.float64
  • kernel_initializer: starting value to initialize the weight.
  • bias_initializer: starting value to initialize the bias (default is all zeros).
  • name: VariableScope for the created subgraph; defaults to the class name. This only serves as the default scope if no scope is specified later when invoking call().

Raises:

  • ValueError: if direction is invalid or dtype is not supported.
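
For illustration, a minimal usage sketch (TF 1.x, graph mode) follows; the placeholder shapes, the training flag, and the structure of the returned state are assumptions drawn from the CudnnRNN base-class call convention rather than guarantees of this page:

import tensorflow as tf

time_len, batch_size, input_size, num_units = 10, 32, 16, 64

# Two stacked unidirectional RNN-tanh layers with dropout applied between them.
rnn = tf.contrib.cudnn_rnn.CudnnRNNTanh(
    num_layers=2,
    num_units=num_units,
    direction='unidirectional',
    dropout=0.1)

# Cudnn RNN layers consume time-major input: [time_len, batch_size, input_size].
inputs = tf.placeholder(tf.float32, [time_len, batch_size, input_size])

# outputs is expected to be [time_len, batch_size, num_units]; states is expected
# to be a 1-element tuple holding the final hidden state of shape
# [num_layers * num_dirs, batch_size, num_units] (see state_shape below).
outputs, states = rnn(inputs, training=True)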

Properties

canonical_bias_shapes

Shapes of Cudnn canonical bias tensors.

canonical_weight_shapes

Shapes of Cudnn canonical weight tensors.

direction

Returns unidirectional or bidirectional.

graph

DEPRECATED FUNCTION

input_mode

Input mode of first layer.

Indicates whether there is a linear projection between the input and the actual computation before the first layer. It can be one of:

  • 'linear_input' (default): always applies a linear projection of the input onto the RNN hidden state (standard RNN behavior).
  • 'skip_input': only allowed when input_size == num_units.
  • 'auto_select': implies 'skip_input' when input_size == num_units; otherwise, it implies 'linear_input'.

Returns:

'linear_input', 'skip_input' or 'auto_select'.
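
As a hedged illustration of the 'skip_input' constraint (the sizes here are hypothetical): with 'skip_input' the input is fed directly into the recurrent computation, so its feature dimension must already equal num_units.

# Only meaningful when the input's last dimension equals num_units (64 here);
# with any other input size the layer is expected to reject the input.
rnn = tf.contrib.cudnn_rnn.CudnnRNNTanh(
    num_layers=1,
    num_units=64,
    input_mode='skip_input')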

input_size

num_dirs

num_layers

num_units

rnn_mode

Type of RNN cell used.

Returns:

lstm, gru, rnn_relu or rnn_tanh.
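
For this class the value is 'rnn_tanh'.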

saveable

scope_name

Methods

tf.contrib.cudnn_rnn.CudnnRNNTanh.state_shape


state_shape(batch_size)

Shape of the state of Cudnn RNN cells without input_c.

Shape is a 1-element tuple: [num_layers * num_dirs, batch_size, num_units].

Args:

  • batch_size: an int

Returns:

a tuple of Python arrays.
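
A small sketch of calling state_shape (the numbers are illustrative, not from this page):

rnn = tf.contrib.cudnn_rnn.CudnnRNNTanh(num_layers=2, num_units=64)

# Unidirectional, so num_dirs == 1; per the shape above this should yield
# a 1-element tuple ([2 * 1, 32, 64],), i.e. ([2, 32, 64],).
print(rnn.state_shape(batch_size=32))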