Hi,
I wanted to ask about the LSTM implementation, as it looks a bit different from the original implementation. For example, it is actually not necessary to use a tf.nn.dynamic_rnn for variable-sized sequences, since the inputs are already padded to the same length, right?
Yes, the inputs are padded to the same length. I did not pay much attention to the implementation; I just went with the first approach that worked for me, so there may be a more suitable way to do the job.
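One nuance worth noting: padding alone does not make `tf.nn.dynamic_rnn`'s `sequence_length` argument redundant, because that argument freezes the hidden state at each sequence's true end, whereas plain iteration over padded steps keeps updating the state through the recurrent weights even on zero inputs. A minimal NumPy sketch (a toy tanh RNN with arbitrary random weights, not the actual implementation) showing the difference:

```python
import numpy as np

def simple_rnn(inputs, seq_lens=None):
    # inputs: (batch, time, features); toy tanh RNN with fixed random weights.
    batch, time, feat = inputs.shape
    hidden = 4
    rng = np.random.default_rng(0)
    Wx = rng.normal(size=(feat, hidden))
    Wh = rng.normal(size=(hidden, hidden))
    h = np.zeros((batch, hidden))
    for t in range(time):
        h_new = np.tanh(inputs[:, t] @ Wx + h @ Wh)
        if seq_lens is not None:
            # Emulate dynamic_rnn's sequence_length: freeze the state
            # once a sequence's true length has been reached.
            mask = (t < seq_lens)[:, None]
            h = np.where(mask, h_new, h)
        else:
            h = h_new
    return h

# Two sequences padded to length 4; the second really has length 2.
x = np.random.default_rng(1).normal(size=(2, 4, 3))
x[1, 2:] = 0.0  # zero padding
lens = np.array([4, 2])

h_masked = simple_rnn(x, lens)
h_plain = simple_rnn(x)

# For the padded sequence the final states differ: even zero inputs
# keep updating h through the recurrent weights Wh.
print(np.allclose(h_masked[1], h_plain[1]))
```

So padding to equal length lets you batch the data, but whether the extra padded steps corrupt the final state depends on whether the lengths are passed along; that may be why the original implementation used `dynamic_rnn` with `sequence_length`.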