Class IndRNNCell
Inherits From: LayerRNNCell
Independently Recurrent Neural Network (IndRNN) cell (cf. https://arxiv.org/abs/1803.04831).
Args:
num_units: int, The number of units in the RNN cell.
activation: Nonlinearity to use. Default: tanh.
reuse: (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised.
name: String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases.
dtype: Default dtype of the layer (default of None means use the type of the first input). Required when build is called before call.
__init__
__init__(
num_units,
activation=None,
reuse=None,
name=None,
dtype=None
)
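A minimal usage sketch, assuming a TensorFlow 1.x environment where tf.contrib is available; the placeholder shape and num_units=64 below are illustrative, not taken from this page:

import tensorflow as tf

# Illustrative input: batch of 32 sequences, 10 time steps, 8 features.
inputs = tf.placeholder(tf.float32, [32, 10, 8])

# The IndRNN paper typically pairs the cell with relu; tanh is the default here.
cell = tf.contrib.rnn.IndRNNCell(num_units=64, activation=tf.nn.relu)

# Unroll over the time dimension: outputs is [32, 10, 64], final_state is [32, 64].
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)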
Properties
graph
DEPRECATED FUNCTION
output_size
Integer or TensorShape: size of outputs produced by this cell.
scope_name
state_size
Size(s) of state(s) used by this cell.
It can be represented by an Integer, a TensorShape, or a tuple of Integers or TensorShapes.
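For IndRNNCell, both state_size and output_size are expected to equal num_units, since the cell keeps a single recurrent weight per unit (an assumption from the IndRNN formulation, not stated on this page); a quick check:

import tensorflow as tf

cell = tf.contrib.rnn.IndRNNCell(num_units=64)
print(cell.state_size)   # expected: 64
print(cell.output_size)  # expected: 64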
Methods
tf.contrib.rnn.IndRNNCell.get_initial_state
get_initial_state(
inputs=None,
batch_size=None,
dtype=None
)
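No description is given for this method here; the sketch below assumes the base RNNCell behavior, where get_initial_state returns the same zero-filled state as zero_state and infers batch_size and dtype from inputs when they are supplied:

import tensorflow as tf

cell = tf.contrib.rnn.IndRNNCell(num_units=64)
# Illustrative per-step input: batch of 32, 8 features.
step_input = tf.placeholder(tf.float32, [32, 8])

# Either call should yield a [32, 64] zero state (assuming base-class behavior).
state_a = cell.get_initial_state(inputs=step_input)
state_b = cell.get_initial_state(batch_size=32, dtype=tf.float32)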
tf.contrib.rnn.IndRNNCell.zero_state
zero_state(
batch_size,
dtype
)
Return zero-filled state tensor(s).
Args:
batch_size: int, float, or unit Tensor representing the batch size.
dtype: the data type to use for the state.
Returns:
If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros.
If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
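A brief sketch in the same assumed TF 1.x setting; the batch size of 32 and num_units=64 are illustrative:

import tensorflow as tf

cell = tf.contrib.rnn.IndRNNCell(num_units=64)
# state_size is an int (64), so this is a single [32, 64] tensor of zeros.
initial_state = cell.zero_state(batch_size=32, dtype=tf.float32)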