LSTM
Long short-term memory.
A model that addresses the long-term dependency problem of RNN. It was designed to better propagate information from distant time steps.
The name comes from treating the hidden state as a short-term memory unit and engineering it to survive for a longer period of time.
Original RNN: h_t = tanh(W_hh · h_{t-1} + W_xh · x_t). A single hidden state h_t has to carry all prior information.

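To make the contrast concrete, here is a minimal NumPy sketch of the vanilla RNN recurrence above (the weight names W_hh, W_xh and the small dimensions are illustrative assumptions):

```python
import numpy as np

h_dim, x_dim = 4, 4
rng = np.random.default_rng(0)
W_hh = rng.standard_normal((h_dim, h_dim)) * 0.1  # hidden-to-hidden weights
W_xh = rng.standard_normal((x_dim, x_dim)) * 0.1  # input-to-hidden weights

def rnn_step(x, h_prev):
    # Vanilla RNN: the new hidden state mixes the previous hidden state
    # and the current input, squashed through tanh.
    return np.tanh(W_hh @ h_prev + W_xh @ x)

h = np.zeros(h_dim)
for t in range(3):
    h = rnn_step(rng.standard_normal(x_dim), h)
```

Because h is overwritten at every step through the same W_hh, information from distant steps degrades, which is the long-term dependency problem LSTM targets.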
LSTM: cell state (c_t): a state carrying all prior information. hidden state (h_t): a state containing information that should be exposed only at the current step.

The result of linearly transforming x_t and h_{t-1} is passed through respective activation functions (sigmoid for i, f, o; tanh for g) to produce the input gate, forget gate, output gate, and gate gate (the candidate cell update).
If h is the dimension of x_t and of the hidden state, then W is (4h, 2h). The column dimension is 2h because x_t and h_{t-1} are concatenated before the linear transform. The row dimension is 4h so the result can be split directly into i, f, o, g.
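The single (4h, 2h) transform can be sketched in NumPy as follows (the ordering [x_t; h_{t-1}] in the concatenation and the i, f, o, g split order are assumptions for illustration):

```python
import numpy as np

h = 3  # both x_t and the hidden state have dimension h
rng = np.random.default_rng(1)
W = rng.standard_normal((4 * h, 2 * h))  # the (4h, 2h) matrix described above

x_t = rng.standard_normal(h)
h_prev = rng.standard_normal(h)

z = W @ np.concatenate([x_t, h_prev])  # one linear transform of [x_t; h_{t-1}]
i, f, o, g = np.split(z, 4)            # the 4h rows split into the four gates

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # gate values in (0, 1)
g = np.tanh(g)                                # candidate values in (-1, 1)
```

One matrix multiply produces all four pre-activations at once, which is also how most framework implementations batch the gate computation.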
The values obtained through sigmoid lie in (0, 1) and are multiplied element-wise with a state vector, effectively acting as per-dimension weights.
Forget gate
f_t = σ(W_f · [x_t; h_{t-1}])
x_t and h_{t-1} are concatenated, linearly transformed with W_f, and then passed through sigmoid. The result is multiplied element-wise with the previous cell state c_{t-1} to determine how much of each cell-state value to preserve. In other words, it decides how much information to forget.
Gate gate
g_t = tanh(W_g · [x_t; h_{t-1}])
g_t is the gate gate. It generates new candidate information in (-1, 1).
i_t = σ(W_i · [x_t; h_{t-1}])
i_t is the input gate. Like the forget gate, its values are passed through sigmoid. It determines how much of g_t to apply to the cell state.
c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t
The cell state gets updated. The first term is the product of the forget gate and the previous cell state we saw earlier. The product of the input gate and gate gate is added to it.
The reason for creating a separate input gate and multiplying it with the gate gate is that a single linear transformation alone isn’t sufficient to produce the desired result. In other words, the input gate and gate gate together make it easier to manipulate the information to be added.
Output gate
o_t = σ(W_o · [x_t; h_{t-1}])
h_t = o_t ⊙ tanh(c_t)
The output gate is computed first to generate h_t. The output gate scales each dimension of the cell state by an appropriate ratio. In LSTM, h_t is the value directly used for the output at the current time step. Think of it as filtered information from c_t that is relevant only to the current time step t.
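Putting the gates together, one full LSTM step can be sketched in NumPy (a minimal sketch: the fused (4h, 2h) weight layout, the [x; h_prev] ordering, and the i, f, o, g split order are assumptions):

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W is (4h, 2h), b is (4h,)."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c_prev + i * g  # forget part of the old cell state, add new info
    h = o * np.tanh(c)      # expose a filtered view of the cell state
    return h, c

h_dim = 3
rng = np.random.default_rng(2)
W = rng.standard_normal((4 * h_dim, 2 * h_dim)) * 0.5
b = np.zeros(4 * h_dim)
h, c = lstm_step(rng.standard_normal(h_dim), np.zeros(h_dim), np.zeros(h_dim), W, b)
```

Only h is passed onward as the step's output; c travels along the additive pathway between steps.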
For example, suppose there’s a model trained on “hello” and we run inference after training. If we feed “h” into the model, the linear transformation of h_t with an output weight matrix produces “e” as the highest-scoring character, which becomes the input for the next step.
Backpropagation
Unlike RNN, LSTM combines information through addition: c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t.
During backpropagation, the gradient flows along this additive cell-state path without being repeatedly multiplied by the same weight matrix, so gradient vanishing/exploding is far less severe even with long sequence data.
GRU (Gated Recurrent Unit)
A network designed with fewer parameters (and thus less memory) than LSTM. It’s widely used because its performance is similar to or sometimes better than LSTM.
z_t = σ(W_z · [x_t; h_{t-1}])
r_t = σ(W_r · [x_t; h_{t-1}])
h̃_t = tanh(W · [x_t; r_t ⊙ h_{t-1}])
h_t = (1 − z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t
In LSTM, the forget gate and input gate control the amount of information deleted and created respectively. In GRU, the update gate z_t is computed once, and (1 − z_t) is used like a forget gate while z_t is used like an input gate.
Additionally, LSTM’s separate cell state and hidden state are merged into a single hidden state in GRU. In other words, GRU’s hidden state carries all prior information while also directly contributing to the output at the current step.
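A single GRU step can be sketched the same way (a minimal sketch: the weight names Wz, Wr, Wh, the [x; h_prev] ordering, and the reset gate applied to h_prev inside the candidate are assumptions matching the standard formulation):

```python
import numpy as np

def gru_step(x, h_prev, Wz, Wr, Wh):
    """One GRU step. Each weight matrix is (h, 2h)."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ np.concatenate([x, h_prev]))  # update gate, computed once
    r = sigmoid(Wr @ np.concatenate([x, h_prev]))  # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h_prev]))  # candidate state
    # (1 - z) plays the forget-gate role, z plays the input-gate role.
    return (1 - z) * h_prev + z * h_tilde

h_dim = 3
rng = np.random.default_rng(3)
Wz = rng.standard_normal((h_dim, 2 * h_dim))
Wr = rng.standard_normal((h_dim, 2 * h_dim))
Wh = rng.standard_normal((h_dim, 2 * h_dim))
h = gru_step(rng.standard_normal(h_dim), np.zeros(h_dim), Wz, Wr, Wh)
```

Note there is no separate cell state: the single h both carries history forward and serves as the step's output.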