tf.contrib.rnn.LSTMBlockFusedCell

Class LSTMBlockFusedCell

FusedRNNCell implementation of LSTM.

Inherits From: LSTMBlockWrapper

This is an extremely efficient LSTM implementation that uses a single TF op for the entire LSTM. It should be both faster and more memory-efficient than the non-fused LSTMBlockCell.

The implementation is based on: http://arxiv.org/abs/1409.2329.

We add forget_bias (default: 1.0) to the biases of the forget gate in order to reduce the scale of forgetting at the beginning of training.

The variable naming is consistent with rnn_cell_impl.LSTMCell.
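
A minimal TF 1.x usage sketch follows. The shapes, the random input, and the session boilerplate are illustrative assumptions, not part of the API:

import numpy as np
import tensorflow as tf

time_len, batch_size, input_size, num_units = 20, 32, 128, 256

# Fused RNN cells consume time-major input: [time_len, batch_size, input_size].
inputs = tf.placeholder(tf.float32, [time_len, batch_size, input_size])

cell = tf.contrib.rnn.LSTMBlockFusedCell(num_units)

# A single call runs the entire sequence as one fused op; `state` is an
# LSTMStateTuple (c, h) holding the final cell and hidden states.
outputs, state = cell(inputs, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(outputs, feed_dict={
        inputs: np.random.randn(time_len, batch_size,
                                input_size).astype(np.float32)})
    print(out.shape)  # (20, 32, 256)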

__init__

__init__(
    num_units,
    forget_bias=1.0,
    cell_clip=None,
    use_peephole=False,
    reuse=None,
    dtype=None,
    name='lstm_fused_cell'
)

Initialize the LSTM cell.

Args:

  • num_units: int, the number of units in the LSTM cell.
  • forget_bias: float, The bias added to forget gates (see above).
  • cell_clip: float, clip the cell state to this value. Default is no cell clipping (see the sketch after this list).
  • use_peephole: Whether to use peephole connections or not.
  • reuse: (optional) boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised.
  • dtype: the dtype of variables of this layer.
  • name: String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases. By default this is "lstm_fused_cell".
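
Continuing the sketch above, a hypothetical configuration exercising these arguments. The specific values, the layer name, and the seq_lens placeholder are illustrative assumptions:

# Clip the cell state and enable peepholes; the values here are arbitrary.
cell = tf.contrib.rnn.LSTMBlockFusedCell(
    num_units=256,
    forget_bias=1.0,     # added to the forget-gate bias (see above)
    cell_clip=3.0,       # cell state is clipped to this value
    use_peephole=True,   # enable peephole connections
    dtype=tf.float32,
    name='clipped_peephole_lstm')  # hypothetical layer name

# Per-example lengths let the fused op ignore padded timesteps.
seq_lens = tf.placeholder(tf.int32, [batch_size])
outputs, state = cell(inputs, dtype=tf.float32, sequence_length=seq_lens)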

Properties

graph

DEPRECATED FUNCTION

num_units

Number of units in this cell (output dimension).

scope_name