<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
<channel>
<title>Ask Ghassem - Recent questions tagged rnn</title>
<link>https://ask.ghassem.com/tag/rnn</link>
<description>Powered by Question2Answer</description>
<item>
<title>Step-by-Step Hidden State Calculation in a Recurrent Neural Network</title>
<link>https://ask.ghassem.com/1049/step-step-hidden-state-calculation-recurrent-neural-network</link>
<description>&lt;p&gt;Consider a simplified Recurrent Neural Network (RNN) with a single input and a single output. The hidden state is updated using the recurrence:&lt;/p&gt;

&lt;p&gt;$$ h_t = \text{ReLU}(W_{ih} \cdot x_t + W_{hh} \cdot h_{t-1}) $$&lt;/p&gt;

&lt;p&gt;Assume the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;\( x_t = 3 \) for every time step&lt;/li&gt;
&lt;li&gt;\( h_0 = 0 \)&lt;/li&gt;
&lt;li&gt;\( W_{ih} = 0.4 \)&lt;/li&gt;
&lt;li&gt;\( W_{hh} = 0.6 \)&lt;/li&gt;
&lt;li&gt;Activation function: ReLU&lt;/li&gt;
&lt;/ul&gt;
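
&lt;p&gt;For reference, the recurrence can be unrolled with a few lines of Python (a minimal sketch of the update above, using the scalar weights listed):&lt;/p&gt;

&lt;pre class=&quot;prettyprint lang-python&quot; data-pbcklang=&quot;python&quot; data-pbcktabsize=&quot;4&quot;&gt;
W_ih, W_hh = 0.4, 0.6
x = 3.0  # x_t is constant at every step
h = 0.0  # h_0
for t in range(1, 5):
    # h_t = ReLU(W_ih * x_t + W_hh * h_{t-1})
    h = max(0.0, W_ih * x + W_hh * h)
    print(t, h)&lt;/pre&gt;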

&lt;p&gt;&lt;strong&gt;Compute the value of the hidden state \( h_4 \) at time \( t = 4 \).&lt;/strong&gt;&lt;/p&gt;</description>
<category>Machine Learning</category>
<guid isPermaLink="true">https://ask.ghassem.com/1049/step-step-hidden-state-calculation-recurrent-neural-network</guid>
<pubDate>Mon, 01 Dec 2025 18:32:24 +0000</pubDate>
</item>
<item>
<title>Passing variable length sentences to Tensorflow LSTM</title>
<link>https://ask.ghassem.com/561/passing-variable-length-sentences-to-tensorflow-lstm</link>
<description>&lt;p&gt;I have a tensorflow LSTM model for predicting the sentiment. I build the model with the maximum sequence length 150. (Maximum number of words) While making predictions, i have written the code as below:&lt;/p&gt;

&lt;pre class=&quot;prettyprint lang-python&quot; data-pbcklang=&quot;python&quot; data-pbcktabsize=&quot;4&quot;&gt;
import numpy as np

batchSize = 32
maxSeqLength = 150

def getSentenceMatrix(sentence):
    sentenceMatrix = np.zeros([batchSize, maxSeqLength], dtype=&#039;int32&#039;)
    cleanedSentence = cleanSentences(sentence)
    cleanedSentence = &#039; &#039;.join(cleanedSentence.split()[:maxSeqLength])
    split = cleanedSentence.split()
    for indexCounter, word in enumerate(split):
        try:
            sentenceMatrix[0, indexCounter] = wordsList.index(word)
        except ValueError:
            sentenceMatrix[0, indexCounter] = 399999  # index for unknown words
    return sentenceMatrix

input_text = &quot;example data&quot;
inputMatrix = getSentenceMatrix(input_text)&lt;/pre&gt;

&lt;p&gt;In this code I am truncating my input text to 150 words and ignoring the remaining data, and because of this my predictions are wrong.&lt;/p&gt;

&lt;pre class=&quot;prettyprint lang-python&quot; data-pbcklang=&quot;python&quot; data-pbcktabsize=&quot;4&quot;&gt;
cleanedSentence = &#039; &#039;.join(cleanedSentence.split()[:maxSeqLength])&lt;/pre&gt;
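
&lt;p&gt;For context, a windowed variant of the lookup loop might look like this (a sketch, not the original model code; &lt;code&gt;getSentenceWindows&lt;/code&gt; and &lt;code&gt;unknownIndex&lt;/code&gt; are illustrative names):&lt;/p&gt;

&lt;pre class=&quot;prettyprint lang-python&quot; data-pbcklang=&quot;python&quot; data-pbcktabsize=&quot;4&quot;&gt;
import numpy as np

maxSeqLength = 150
unknownIndex = 399999  # placeholder index for unknown words

def getSentenceWindows(words, wordsList):
    # Split the token list into fixed-size windows instead of truncating;
    # each window becomes one row the model can score separately.
    rows = []
    for start in range(0, len(words), maxSeqLength):
        window = words[start:start + maxSeqLength]
        row = np.zeros(maxSeqLength, dtype=np.int32)
        for i, word in enumerate(window):
            try:
                row[i] = wordsList.index(word)
            except ValueError:
                row[i] = unknownIndex
        rows.append(row)
    return np.stack(rows)

# 300 tokens yield two rows of 150 indices each
windows = getSentenceWindows([7] * 300, [5, 7])&lt;/pre&gt;

&lt;p&gt;The per-window predictions could then be averaged (or max-pooled) to produce a single score for the full sentence.&lt;/p&gt;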

&lt;p&gt;I know that if a sentence is shorter than the sequence length we can pad it with zeros. What should we do when it is longer? Can you suggest the best way to handle this? Thanks in advance.&lt;/p&gt;</description>
<category>General</category>
<guid isPermaLink="true">https://ask.ghassem.com/561/passing-variable-length-sentences-to-tensorflow-lstm</guid>
<pubDate>Mon, 11 Feb 2019 05:06:27 +0000</pubDate>
</item>
</channel>
</rss>