Class LayerNormBasicLSTMCell
LSTM unit with layer normalization and recurrent dropout.
Inherits From: RNNCell
This class adds layer normalization and recurrent dropout to a basic LSTM unit. Layer normalization implementation is based on:
https://arxiv.org/abs/1607.06450.
"Layer Normalization" Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton
and is applied before the internal nonlinearities. Recurrent dropout is based on:
https://arxiv.org/abs/1603.05118
"Recurrent Dropout without Memory Loss" Stanislau Semeniuta, Aliaksei Severyn, Erhardt Barth.
__init__
__init__(
num_units,
forget_bias=1.0,
input_size=None,
activation=tf.math.tanh,
layer_norm=True,
norm_gain=1.0,
norm_shift=0.0,
dropout_keep_prob=1.0,
dropout_prob_seed=None,
reuse=None
)
Initializes the basic LSTM cell.
Args:
num_units: int, The number of units in the LSTM cell.
forget_bias: float, The bias added to forget gates (see above).
input_size: Deprecated and unused.
activation: Activation function of the inner states.
layer_norm: If True, layer normalization will be applied.
norm_gain: float, The layer normalization gain initial value. If layer_norm has been set to False, this argument will be ignored.
norm_shift: float, The layer normalization shift initial value. If layer_norm has been set to False, this argument will be ignored.
dropout_keep_prob: unit Tensor or float between 0 and 1 representing the recurrent dropout probability value. If float and 1.0, no dropout will be applied.
dropout_prob_seed: (optional) integer, the randomness seed.
reuse: (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised.
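As a sketch of how these arguments combine (assuming TensorFlow 1.x; the values chosen here are illustrative only):

import tensorflow as tf

# Layer normalization enabled, with explicit gain/shift initial values.
cell_ln = tf.contrib.rnn.LayerNormBasicLSTMCell(
    num_units=128,
    forget_bias=1.0,
    activation=tf.math.tanh,
    layer_norm=True,
    norm_gain=1.0,
    norm_shift=0.0)

# Layer normalization disabled: norm_gain and norm_shift are ignored, and the
# cell behaves like a basic LSTM with recurrent dropout (keep probability 0.8).
cell_dropout = tf.contrib.rnn.LayerNormBasicLSTMCell(
    num_units=128,
    layer_norm=False,
    dropout_keep_prob=0.8,
    dropout_prob_seed=42)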
Properties
graph
DEPRECATED FUNCTION
output_size
Integer or TensorShape: size of outputs produced by this cell.
scope_name
state_size
size(s) of state(s) used by this cell.
It can be represented by an Integer, a TensorShape or a tuple of Integers or TensorShapes.
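A short sketch of what these properties evaluate to for this cell (num_units=64 is an assumption; the tuple form of state_size reflects the contrib implementation's (c, h) state, noted here as an observation rather than a contract):

import tensorflow as tf

cell = tf.contrib.rnn.LayerNormBasicLSTMCell(64)
print(cell.output_size)  # 64
print(cell.state_size)   # LSTMStateTuple(c=64, h=64)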
Methods
tf.contrib.rnn.LayerNormBasicLSTMCell.get_initial_state
get_initial_state(
inputs=None,
batch_size=None,
dtype=None
)
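A hedged usage sketch (assuming a TensorFlow 1.x version in which RNNCell.get_initial_state is available; when inputs is None, batch_size and dtype are expected to be supplied, and the values below are illustrative):

import tensorflow as tf

cell = tf.contrib.rnn.LayerNormBasicLSTMCell(64)
state = cell.get_initial_state(batch_size=32, dtype=tf.float32)
# state holds zero-filled state tensors, each of shape [32, 64].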
tf.contrib.rnn.LayerNormBasicLSTMCell.zero_state
zero_state(
batch_size,
dtype
)
Return zero-filled state tensor(s).
Args:
batch_size: int, float, or unit Tensor representing the batch size.
dtype: the data type to use for the state.
Returns:
If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros.
If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
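Since state_size for this cell is a pair of sizes (one for the cell state, one for the hidden state), zero_state returns a pair of 2-D zero tensors. A minimal sketch (the batch size and unit count are illustrative):

import tensorflow as tf

cell = tf.contrib.rnn.LayerNormBasicLSTMCell(64)
init_state = cell.zero_state(batch_size=32, dtype=tf.float32)
# init_state.c and init_state.h each have shape [32, 64], filled with zeros.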