How to feed LSTM with different input array sizes?
If I want to build an LSTM network and feed it inputs of different array sizes, how is that possible?

For example, I want to receive voice or text messages in different languages and translate them. So the first input might be "hello" but the second is "how are you doing". How can I design an LSTM that can handle inputs of different sizes?

I am using the Keras implementation of LSTM.

keras lstm
asked Apr 7 at 8:04
user145959
2 Answers
The easiest way is to use padding and masking.

There are three general ways to handle variable-length sequences:

- Padding and masking (which can also be used for (3)),
- Batch size = 1, and
- Batch size > 1, with equal-length samples in each batch.

Padding and masking

In this approach, we pad the shorter sequences with a special value to be masked (skipped) later. For example, suppose each timestep has dimension 2 and -10 is the special value; then
X = [
    [[1, 1.1],
     [0.9, 0.95]],   # sequence 1 (2 timesteps)
    [[2, 2.2],
     [1.9, 1.95],
     [1.8, 1.85]],   # sequence 2 (3 timesteps)
]

will be converted to

X2 = [
    [[1, 1.1],
     [0.9, 0.95],
     [-10, -10]],    # padded sequence 1 (3 timesteps)
    [[2, 2.2],
     [1.9, 1.95],
     [1.8, 1.85]],   # sequence 2 (3 timesteps)
]
This way, all sequences have the same length. Then we use a Masking layer that skips those special timesteps as if they don't exist. A complete example is given at the end.
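As a minimal sketch of that padding step (plain NumPy; the helper name is made up for illustration):

```python
import numpy as np

SPECIAL = -10.0  # the special value to be masked later

def pad_sequences_to_max(seqs, pad_value=SPECIAL):
    """Pad a list of (timesteps, features) arrays to the longest length."""
    max_len = max(s.shape[0] for s in seqs)
    dim = seqs[0].shape[1]
    out = np.full((len(seqs), max_len, dim), pad_value)
    for i, s in enumerate(seqs):
        out[i, :s.shape[0], :] = s  # copy real timesteps, leave pads at the end
    return out

X = [np.array([[1, 1.1], [0.9, 0.95]]),                # sequence 1 (2 timesteps)
     np.array([[2, 2.2], [1.9, 1.95], [1.8, 1.85]])]   # sequence 2 (3 timesteps)
X2 = pad_sequences_to_max(X)
print(X2.shape)  # (2, 3, 2)
```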
For cases (2) and (3), you need to set the timesteps dimension of the LSTM's input_shape to None, e.g.

model.add(LSTM(units, input_shape=(None, dimension)))

This way, the LSTM accepts batches of different lengths, although samples inside each batch must have the same length. Then you need to feed a custom batch generator to model.fit_generator (instead of model.fit).
I have provided a complete example for the simple case (2) (batch size = 1) at the end. Based on that example, you should be able to build a generator for case (3) (batch size > 1). Specifically, either (a) return batch_size sequences of the same length, or (b) select sequences of almost the same length, pad the shorter ones as in case (1), and use a Masking layer before the LSTM layer to ignore the padded timesteps, e.g.

model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
model.add(LSTM(lstm_units))

where the first dimension of input_shape in Masking is again None to allow batches of different lengths.
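One illustrative way to realize option (a), equal-length batches, is to bucket samples by sequence length first. This is a sketch (the helper name is made up), not part of the original answer's code:

```python
import numpy as np
from collections import defaultdict

def batches_by_length(seqs, labels, batch_size):
    """Group samples so every batch contains sequences of a single length."""
    buckets = defaultdict(list)
    for i, s in enumerate(seqs):
        buckets[len(s)].append(i)  # bucket sample indices by sequence length
    batches = []
    for idx in buckets.values():
        for start in range(0, len(idx), batch_size):
            chunk = idx[start:start + batch_size]
            Xb = np.stack([seqs[i] for i in chunk])    # safe: all same length
            yb = np.stack([labels[i] for i in chunk])
            batches.append((Xb, yb))
    return batches

seqs = [np.zeros((2, 2)), np.ones((3, 2)), np.full((2, 2), 5.0)]
labels = [np.array([0.]), np.array([1.]), np.array([0.])]
bs = batches_by_length(seqs, labels, batch_size=2)
```

Each (Xb, yb) pair can then be yielded from a generator's __getitem__.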
Here is the code for cases (1) and (2):
from keras.models import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np

class MyBatchGenerator(Sequence):
    'Generates data for Keras'

    def __init__(self, X, y, batch_size=1, shuffle=True):
        'Initialization'
        self.X = X
        self.y = y
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.y) / self.batch_size))

    def __getitem__(self, index):
        # map the batch index through the (possibly shuffled) indexes
        return self.__data_generation(self.indexes[index])

    def on_epoch_end(self):
        'Shuffles indexes after each epoch'
        self.indexes = np.arange(len(self.y))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __data_generation(self, index):
        Xb = np.empty((self.batch_size, *self.X[index].shape))
        yb = np.empty((self.batch_size, *self.y[index].shape))
        # naively use the same sample over and over again
        for s in range(self.batch_size):
            Xb[s] = self.X[index]
            yb[s] = self.y[index]
        return Xb, yb
# Parameters
N = 1000
halfN = int(N / 2)
dimension = 2
lstm_units = 3

# Data
np.random.seed(123)  # for reproducibility
# create sequence lengths between 1 and 9 (np.random.randint's upper bound is exclusive)
seq_lens = np.random.randint(1, 10, halfN)
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_one = np.ones((halfN, 1))
p = np.random.permutation(N)  # to shuffle the zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]

# Case (2): batch size = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)
# Case (1): padding and masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
    seq_len = x.shape[0]
    Xpad[s, 0:seq_len, :] = x

model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)
Extra notes

- Note that if we pad without masking, the padded value will be treated as an actual value; it becomes noise in the data. For example, a padded temperature sequence [20, 21, 22, -10, -10] is the same as a sensor report with two noisy (wrong) measurements at the end. The model may learn to ignore this noise completely or at least partially, but it is reasonable to clean the data first, i.e. use a mask.
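A quick numerical illustration of the note above (plain NumPy; the numbers follow the temperature example): without a mask, padded values drag summary statistics, while skipping them recovers the true ones.

```python
import numpy as np

pad = -10.0
seq = np.array([20.0, 21.0, 22.0, pad, pad])  # padded temperature sequence

naive_mean = seq.mean()            # pads treated as real readings
mask = seq != pad
masked_mean = seq[mask].mean()     # pads skipped, analogous to a Masking layer

print(naive_mean, masked_mean)  # 8.6 21.0
```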
Thank you very much Esmailian for your complete example. Just one question: what is the difference between padding+masking and padding alone (as the other answer suggests)? Will it have a considerable effect on the final result?
– user145959
Apr 7 at 21:01
@user145959 My pleasure! I added a note at the end.
– Esmailian
Apr 7 at 23:13
Wow, a great answer! It's called bucketing, right?
– Aditya
2 days ago
@Aditya Thanks! I think bucketing is partitioning a large sequence into smaller chunks, but the sequences in each batch are not necessarily chunks of the same (larger) sequence; they can be independent data points.
– Esmailian
2 days ago
You can use LSTM layers with inputs of varying sizes, but you need to preprocess them before they are fed to the LSTM.

Padding the sequences:

You need to pad the sequences of varying length to a fixed length. For this preprocessing, determine the maximum sequence length in your dataset.

The sequences are mostly padded with the value 0. You can do this in Keras with:

y = keras.preprocessing.sequence.pad_sequences(x, maxlen=10)

If a sequence is shorter than the max length, zeros are added until its length equals the max length (by default pad_sequences prepends them; pass padding='post' to append).

If a sequence is longer than the max length, it is trimmed to the max length.
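As a rough pure-Python sketch of the padding='post'/truncating='post' variant of this behavior (Keras pad_sequences itself defaults to padding and truncating at the beginning of the sequence):

```python
def pad_post(seq, maxlen, value=0):
    """Append `value` up to maxlen, or trim the tail if the sequence is too long."""
    if len(seq) >= maxlen:
        return seq[:maxlen]                  # keep the first maxlen elements
    return seq + [value] * (maxlen - len(seq))  # append pad values at the end

print(pad_post([3, 1, 4], 5))            # [3, 1, 4, 0, 0]
print(pad_post([3, 1, 4, 1, 5, 9], 5))   # [3, 1, 4, 1, 5]
```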
Padding everything to a fixed length is a waste of space.
– Aditya
2 days ago
add a comment |
Your Answer
StackExchange.ifUsing("editor", function ()
return StackExchange.using("mathjaxEditing", function ()
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
);
);
, "mathjax-editing");
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "557"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdatascience.stackexchange.com%2fquestions%2f48796%2fhow-to-feed-lstm-with-different-input-array-sizes%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
2 Answers
2
active
oldest
votes
2 Answers
2
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
The easiest way is to use Padding and Masking.
There are three general ways to handle variable-length sequences:
- Padding and masking (which can be used for (3)),
- Batch size = 1, and
- Batch size > 1, with equi-length samples in each batch.
Padding and masking
In this approach, we pad the shorter sequences with a special value to be masked (skipped) later. For example, suppose each timestamp has dimension 2, and -10
is the special value, then
X = [
[[1, 1.1],
[0.9, 0.95]], # sequence 1 (2 timestamps)
[[2, 2.2],
[1.9, 1.95],
[1.8, 1.85]], # sequence 2 (3 timestamps)
]
will be converted to
X2 = [
[[1, 1.1],
[0.9, 0.95],
[-10, -10]], # padded sequence 1 (3 timestamps)
[[2, 2.2],
[1.9, 1.95],
[1.8, 1.85]], # sequence 2 (3 timestamps)
]
This way, all sequences would have the same length. Then, we use a Masking
layer that skips those special timestamps like they don't exist. A complete example is given at the end.
For cases (2) and (3) you need to set the seq_len
of LSTM to None
, e.g.
model.add(LSTM(units, input_shape=(None, dimension)))
this way LSTM accepts batches with different lengths; although samples inside each batch must be the same length. Then, you need to feed a custom batch generator to model.fit_generator
(instead of model.fit
).
I have provided a complete example for simple case (2) (batch size = 1) at the end. Based on this example and the link, you should be able to build a generator for case (3) (batch size > 1). Specifically, we either (a) return batch_size
sequences with the same length, or (b) select sequences with almost the same length, and pad the shorter ones the same as case (1), and use a Masking
layer before LSTM layer to ignore the padded timestamps, e.g.
model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
model.add(LSTM(lstm_units))
where first dimension of input_shape
in Masking
is again None
to allow batches with different lengths.
Here is the code for cases (1) and (2):
from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np
class MyBatchGenerator(Sequence):
'Generates data for Keras'
def __init__(self, X, y, batch_size=1, shuffle=True):
'Initialization'
self.X = X
self.y = y
self.batch_size = batch_size
self.shuffle = shuffle
self.on_epoch_end()
def __len__(self):
'Denotes the number of batches per epoch'
return int(np.floor(len(self.y)/self.batch_size))
def __getitem__(self, index):
return self.__data_generation(index)
def on_epoch_end(self):
'Shuffles indexes after each epoch'
self.indexes = np.arange(len(self.y))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __data_generation(self, index):
Xb = np.empty((self.batch_size, *X[index].shape))
yb = np.empty((self.batch_size, *y[index].shape))
# naively use the same sample over and over again
for s in range(0, self.batch_size):
Xb[s] = X[index]
yb[s] = y[index]
return Xb, yb
# Parameters
N = 1000
halfN = int(N/2)
dimension = 2
lstm_units = 3
# Data
np.random.seed(123) # to generate the same numbers
# create sequence lengths between 1 to 10
seq_lens = np.random.randint(1, 10, halfN)
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_one = np.ones((halfN, 1))
p = np.random.permutation(N) # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]
# Batch = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)
# Padding and Masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
seq_len = x.shape[0]
Xpad[s, 0:seq_len, :] = x
model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)
Extra notes
- Note that if we pad without masking, padded value will be regarded as actual value, thus, it becomes noise in data. For example, a padded temperature sequence
[20, 21, 22, -10, -10]
will be the same as a sensor report with two noisy (wrong) measurements at the end. Model may learn to ignore this noise completely or at least partially, but it is reasonable to clean the data first, i.e. use a mask.
$endgroup$
$begingroup$
Thank you very much Esmailian for your complete example. Just one question: What is the difference between using padding+masking and only using padding(like what the other answer suggested)? Will we see a considerable effect on the final result?
$endgroup$
– user145959
Apr 7 at 21:01
$begingroup$
@user145959 my pleasure! I added a note at the end.
$endgroup$
– Esmailian
Apr 7 at 23:13
$begingroup$
Wow a great answer! It's called bucketing, right?
$endgroup$
– Aditya
2 days ago
1
$begingroup$
@Aditya Thanks Aditya! I think bucketing is the partitioning of a large sequence into smaller chunks, but sequences in each batch are not necessarily chunks of the same (larger) sequence, they can be independent data points.
$endgroup$
– Esmailian
2 days ago
add a comment |
$begingroup$
The easiest way is to use Padding and Masking.
There are three general ways to handle variable-length sequences:
- Padding and masking (which can be used for (3)),
- Batch size = 1, and
- Batch size > 1, with equi-length samples in each batch.
Padding and masking
In this approach, we pad the shorter sequences with a special value to be masked (skipped) later. For example, suppose each timestamp has dimension 2, and -10
is the special value, then
X = [
[[1, 1.1],
[0.9, 0.95]], # sequence 1 (2 timestamps)
[[2, 2.2],
[1.9, 1.95],
[1.8, 1.85]], # sequence 2 (3 timestamps)
]
will be converted to
X2 = [
[[1, 1.1],
[0.9, 0.95],
[-10, -10]], # padded sequence 1 (3 timestamps)
[[2, 2.2],
[1.9, 1.95],
[1.8, 1.85]], # sequence 2 (3 timestamps)
]
This way, all sequences would have the same length. Then, we use a Masking
layer that skips those special timestamps like they don't exist. A complete example is given at the end.
For cases (2) and (3) you need to set the seq_len
of LSTM to None
, e.g.
model.add(LSTM(units, input_shape=(None, dimension)))
this way LSTM accepts batches with different lengths; although samples inside each batch must be the same length. Then, you need to feed a custom batch generator to model.fit_generator
(instead of model.fit
).
I have provided a complete example for simple case (2) (batch size = 1) at the end. Based on this example and the link, you should be able to build a generator for case (3) (batch size > 1). Specifically, we either (a) return batch_size
sequences with the same length, or (b) select sequences with almost the same length, and pad the shorter ones the same as case (1), and use a Masking
layer before LSTM layer to ignore the padded timestamps, e.g.
model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
model.add(LSTM(lstm_units))
where first dimension of input_shape
in Masking
is again None
to allow batches with different lengths.
Here is the code for cases (1) and (2):
from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np
class MyBatchGenerator(Sequence):
'Generates data for Keras'
def __init__(self, X, y, batch_size=1, shuffle=True):
'Initialization'
self.X = X
self.y = y
self.batch_size = batch_size
self.shuffle = shuffle
self.on_epoch_end()
def __len__(self):
'Denotes the number of batches per epoch'
return int(np.floor(len(self.y)/self.batch_size))
def __getitem__(self, index):
return self.__data_generation(index)
def on_epoch_end(self):
'Shuffles indexes after each epoch'
self.indexes = np.arange(len(self.y))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __data_generation(self, index):
Xb = np.empty((self.batch_size, *X[index].shape))
yb = np.empty((self.batch_size, *y[index].shape))
# naively use the same sample over and over again
for s in range(0, self.batch_size):
Xb[s] = X[index]
yb[s] = y[index]
return Xb, yb
# Parameters
N = 1000
halfN = int(N/2)
dimension = 2
lstm_units = 3
# Data
np.random.seed(123) # to generate the same numbers
# create sequence lengths between 1 to 10
seq_lens = np.random.randint(1, 10, halfN)
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_one = np.ones((halfN, 1))
p = np.random.permutation(N) # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]
# Batch = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)
# Padding and Masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
seq_len = x.shape[0]
Xpad[s, 0:seq_len, :] = x
model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)
Extra notes
- Note that if we pad without masking, padded value will be regarded as actual value, thus, it becomes noise in data. For example, a padded temperature sequence
[20, 21, 22, -10, -10]
will be the same as a sensor report with two noisy (wrong) measurements at the end. Model may learn to ignore this noise completely or at least partially, but it is reasonable to clean the data first, i.e. use a mask.
$endgroup$
$begingroup$
Thank you very much Esmailian for your complete example. Just one question: What is the difference between using padding+masking and only using padding(like what the other answer suggested)? Will we see a considerable effect on the final result?
$endgroup$
– user145959
Apr 7 at 21:01
$begingroup$
@user145959 my pleasure! I added a note at the end.
$endgroup$
– Esmailian
Apr 7 at 23:13
$begingroup$
Wow a great answer! It's called bucketing, right?
$endgroup$
– Aditya
2 days ago
1
$begingroup$
@Aditya Thanks Aditya! I think bucketing is the partitioning of a large sequence into smaller chunks, but sequences in each batch are not necessarily chunks of the same (larger) sequence, they can be independent data points.
$endgroup$
– Esmailian
2 days ago
add a comment |
$begingroup$
The easiest way is to use Padding and Masking.
There are three general ways to handle variable-length sequences:
- Padding and masking (which can be used for (3)),
- Batch size = 1, and
- Batch size > 1, with equi-length samples in each batch.
Padding and masking
In this approach, we pad the shorter sequences with a special value to be masked (skipped) later. For example, suppose each timestamp has dimension 2, and -10
is the special value, then
X = [
[[1, 1.1],
[0.9, 0.95]], # sequence 1 (2 timestamps)
[[2, 2.2],
[1.9, 1.95],
[1.8, 1.85]], # sequence 2 (3 timestamps)
]
will be converted to
X2 = [
[[1, 1.1],
[0.9, 0.95],
[-10, -10]], # padded sequence 1 (3 timestamps)
[[2, 2.2],
[1.9, 1.95],
[1.8, 1.85]], # sequence 2 (3 timestamps)
]
This way, all sequences would have the same length. Then, we use a Masking
layer that skips those special timestamps like they don't exist. A complete example is given at the end.
For cases (2) and (3) you need to set the seq_len
of LSTM to None
, e.g.
model.add(LSTM(units, input_shape=(None, dimension)))
this way LSTM accepts batches with different lengths; although samples inside each batch must be the same length. Then, you need to feed a custom batch generator to model.fit_generator
(instead of model.fit
).
I have provided a complete example for simple case (2) (batch size = 1) at the end. Based on this example and the link, you should be able to build a generator for case (3) (batch size > 1). Specifically, we either (a) return batch_size
sequences with the same length, or (b) select sequences with almost the same length, and pad the shorter ones the same as case (1), and use a Masking
layer before LSTM layer to ignore the padded timestamps, e.g.
model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
model.add(LSTM(lstm_units))
where first dimension of input_shape
in Masking
is again None
to allow batches with different lengths.
Here is the code for cases (1) and (2):
from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np
class MyBatchGenerator(Sequence):
'Generates data for Keras'
def __init__(self, X, y, batch_size=1, shuffle=True):
'Initialization'
self.X = X
self.y = y
self.batch_size = batch_size
self.shuffle = shuffle
self.on_epoch_end()
def __len__(self):
'Denotes the number of batches per epoch'
return int(np.floor(len(self.y)/self.batch_size))
def __getitem__(self, index):
return self.__data_generation(index)
def on_epoch_end(self):
'Shuffles indexes after each epoch'
self.indexes = np.arange(len(self.y))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __data_generation(self, index):
Xb = np.empty((self.batch_size, *X[index].shape))
yb = np.empty((self.batch_size, *y[index].shape))
# naively use the same sample over and over again
for s in range(0, self.batch_size):
Xb[s] = X[index]
yb[s] = y[index]
return Xb, yb
# Parameters
N = 1000
halfN = int(N/2)
dimension = 2
lstm_units = 3
# Data
np.random.seed(123) # to generate the same numbers
# create sequence lengths between 1 to 10
seq_lens = np.random.randint(1, 10, halfN)
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_one = np.ones((halfN, 1))
p = np.random.permutation(N) # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]
# Batch = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)
# Padding and Masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
seq_len = x.shape[0]
Xpad[s, 0:seq_len, :] = x
model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)
Extra notes
- Note that if we pad without masking, padded value will be regarded as actual value, thus, it becomes noise in data. For example, a padded temperature sequence
[20, 21, 22, -10, -10]
will be the same as a sensor report with two noisy (wrong) measurements at the end. Model may learn to ignore this noise completely or at least partially, but it is reasonable to clean the data first, i.e. use a mask.
$endgroup$
The easiest way is to use Padding and Masking.
There are three general ways to handle variable-length sequences:
- Padding and masking (which can be used for (3)),
- Batch size = 1, and
- Batch size > 1, with equi-length samples in each batch.
Padding and masking
In this approach, we pad the shorter sequences with a special value to be masked (skipped) later. For example, suppose each timestamp has dimension 2, and -10
is the special value, then
X = [
[[1, 1.1],
[0.9, 0.95]], # sequence 1 (2 timestamps)
[[2, 2.2],
[1.9, 1.95],
[1.8, 1.85]], # sequence 2 (3 timestamps)
]
will be converted to
X2 = [
[[1, 1.1],
[0.9, 0.95],
[-10, -10]], # padded sequence 1 (3 timestamps)
[[2, 2.2],
[1.9, 1.95],
[1.8, 1.85]], # sequence 2 (3 timestamps)
]
This way, all sequences would have the same length. Then, we use a Masking
layer that skips those special timestamps like they don't exist. A complete example is given at the end.
For cases (2) and (3) you need to set the seq_len
of LSTM to None
, e.g.
model.add(LSTM(units, input_shape=(None, dimension)))
this way LSTM accepts batches with different lengths; although samples inside each batch must be the same length. Then, you need to feed a custom batch generator to model.fit_generator
(instead of model.fit
).
I have provided a complete example for simple case (2) (batch size = 1) at the end. Based on this example and the link, you should be able to build a generator for case (3) (batch size > 1). Specifically, we either (a) return batch_size
sequences with the same length, or (b) select sequences with almost the same length, and pad the shorter ones the same as case (1), and use a Masking
layer before LSTM layer to ignore the padded timestamps, e.g.
model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
model.add(LSTM(lstm_units))
where first dimension of input_shape
in Masking
is again None
to allow batches with different lengths.
Here is the code for cases (1) and (2):
from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np
class MyBatchGenerator(Sequence):
'Generates data for Keras'
def __init__(self, X, y, batch_size=1, shuffle=True):
'Initialization'
self.X = X
self.y = y
self.batch_size = batch_size
self.shuffle = shuffle
self.on_epoch_end()
def __len__(self):
'Denotes the number of batches per epoch'
return int(np.floor(len(self.y)/self.batch_size))
def __getitem__(self, index):
return self.__data_generation(index)
def on_epoch_end(self):
'Shuffles indexes after each epoch'
self.indexes = np.arange(len(self.y))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __data_generation(self, index):
Xb = np.empty((self.batch_size, *X[index].shape))
yb = np.empty((self.batch_size, *y[index].shape))
# naively use the same sample over and over again
for s in range(0, self.batch_size):
Xb[s] = X[index]
yb[s] = y[index]
return Xb, yb
# Parameters
N = 1000
halfN = int(N/2)
dimension = 2
lstm_units = 3

# Data
np.random.seed(123)  # to generate the same numbers
# create sequence lengths between 1 and 9
seq_lens = np.random.randint(1, 10, halfN)
# dtype=object is needed because the sequences have different lengths
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens], dtype=object)
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens], dtype=object)
y_one = np.ones((halfN, 1))
p = np.random.permutation(N)  # to shuffle the zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]

# Batch = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)

# Padding and Masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
    seq_len = x.shape[0]
    Xpad[s, 0:seq_len, :] = x

model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)
Extra notes
- Note that if we pad without masking, the padded value is regarded as an actual value and thus becomes noise in the data. For example, a padded temperature sequence
[20, 21, 22, -10, -10]
looks the same as a sensor report with two noisy (wrong) measurements at the end. The model may learn to ignore this noise completely or at least partially, but it is reasonable to clean the data first, i.e. use a mask.
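The distortion this note describes can be seen numerically without training a model, by computing a statistic over a padded sequence with and without a mask. The sketch below is plain NumPy, not Keras, and reuses -10 as the pad value from the example above:

```python
import numpy as np

special_value = -10.0
# padded temperature sequence from the note: real values 20, 21, 22
seq = np.array([20.0, 21.0, 22.0, special_value, special_value])

mask = seq != special_value          # True only for real timesteps

naive_mean = seq.mean()              # padding treated as data
masked_mean = seq[mask].mean()       # padding ignored, as Masking would do

print(naive_mean)    # 8.6  -> badly distorted by the pad values
print(masked_mean)   # 21.0 -> the true mean of the measurements
```

A Masking layer does the analogous thing inside the network: downstream layers skip the timesteps whose values all equal mask_value.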
edited Apr 7 at 23:08 · answered Apr 7 at 11:18 by Esmailian
– user145959 (Apr 7 at 21:01): Thank you very much Esmailian for your complete example. Just one question: what is the difference between using padding+masking and only using padding (like the other answer suggested)? Will we see a considerable effect on the final result?
– Esmailian (Apr 7 at 23:13): @user145959 My pleasure! I added a note at the end.
– Aditya (2 days ago): Wow, a great answer! It's called bucketing, right?
– Esmailian (2 days ago): @Aditya Thanks Aditya! I think bucketing is the partitioning of a large sequence into smaller chunks, but sequences in each batch are not necessarily chunks of the same (larger) sequence; they can be independent data points.
We use LSTM layers with inputs of multiple sizes, but you need to process the sequences before they are fed to the LSTM.
Padding the sequences:
You need to pad the sequences of varying length to a fixed length. For this preprocessing, you need to determine the max length of sequences in your dataset.
The values are mostly padded with 0. You can do this in Keras with:
y = keras.preprocessing.sequence.pad_sequences( x , maxlen=10 )
If the sequence is shorter than the max length, zeros are added until it has a length equal to the max length (by default pad_sequences pads at the beginning; pass padding='post' to pad at the end).
If the sequence is longer than the max length, it is trimmed to the max length.
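A plain-NumPy sketch of what this padding does, using the 'post' (pad/trim at the end) variant for readability — note the real pad_sequences defaults to padding and truncating at the beginning, and the helper name here is hypothetical:

```python
import numpy as np

def pad_post(sequences, maxlen, value=0):
    """Pad (or truncate) each 1-D sequence at the end to length maxlen,
    mimicking pad_sequences(..., padding='post', truncating='post')."""
    out = np.full((len(sequences), maxlen), value)
    for i, seq in enumerate(sequences):
        trunc = seq[:maxlen]              # trim sequences longer than maxlen
        out[i, :len(trunc)] = trunc       # copy; the rest stays as `value`
    return out

x = [[1, 2, 3], [4, 5, 6, 7, 8, 9], [10]]
print(pad_post(x, maxlen=5))
# [[ 1  2  3  0  0]
#  [ 4  5  6  7  8]
#  [10  0  0  0  0]]
```

The short sequences gain trailing zeros and the long one is cut, so every row ends up with exactly maxlen timesteps.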
– Aditya (2 days ago): Padding everything to a fixed length is wastage of space.
answered Apr 7 at 10:57 by Shubham Panchal