Is stochastic gradient descent pseudo-stochastic?
I know that stochastic gradient descent randomly chooses one sample at a time to update the weights, and that an epoch is defined as one pass through all $N$ samples. So with SGD, each epoch updates the weights $N$ times.
My confusion: doesn't this mean you have to go through all $N$ samples before you can see the same sample twice? Doesn't that make the process only pseudo-random/pseudo-stochastic? If the sampling were entirely random, there would be some chance of seeing the same sample more than once before going through all $N$ samples.
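For concreteness, here is a minimal sketch of the two schemes I'm contrasting (assuming NumPy; the index array is just an illustrative stand-in for a dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
indices = np.arange(N)  # stand-ins for the N training samples

# Without replacement: shuffle once per epoch, so every sample
# is visited exactly once before any sample can repeat.
epoch_order = rng.permutation(N)
print("shuffled epoch:   ", indices[epoch_order])

# With replacement: each draw is independent, so a sample may
# recur before all N samples have been seen.
independent_draws = rng.integers(0, N, size=N)
print("independent draws:", indices[independent_draws])
```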
Tags: machine-learning, neural-networks, gradient-descent, sgd
asked by Iamanon; edited by Sycorax
1 Answer
Exhausting all $N$ samples before any sample can repeat means that the process is not independent. However, the process is still stochastic.
Consider a shuffled deck of cards. You look at the top card, see the $\mathsf{A}\spadesuit$ (Ace of Spades), and set it aside. You'll never see another $\mathsf{A}\spadesuit$ in the whole deck. However, you don't know anything about the ordering of the remaining 51 cards, because the deck is shuffled; in this sense, the remainder of the deck still has a random order. The next card could be the $\mathsf{2}\color{red}{\heartsuit}$ or the $\mathsf{J}\clubsuit$. You don't know for sure; all you do know is that the next card isn't the Ace of Spades, because you've put the only $\mathsf{A}\spadesuit$ face-up somewhere else.
In the scenario you outline, you're suggesting looking at the top card and then shuffling it back into the deck. This makes the probability of seeing the $\mathsf{A}\spadesuit$ independent of the previously observed cards. Independence of events is an important attribute in probability theory, but it is not required to define a random process.
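To see the contrast concretely, here is a small simulation (a sketch assuming only NumPy, reusing the 52-card deck from the analogy above): a shuffled pass can never repeat a card, while putting each card back makes a repeat within 52 looks nearly certain.

```python
import numpy as np

rng = np.random.default_rng(42)
N, trials = 52, 10_000  # deck-sized "dataset"

# Shuffling: one pass visits each card exactly once, so a repeat
# within the pass is impossible by construction.
assert len(set(rng.permutation(N))) == N

# Independent draws (card shuffled back in after each look):
# count how often N draws contain at least one repeated card.
repeats = sum(
    len(set(rng.integers(0, N, size=N))) < N for _ in range(trials)
)
print(f"passes with a repeat: {repeats / trials:.4f}")
# Essentially 1.0 for N = 52 -- the birthday-problem effect.
```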
You might wonder why a person would want to construct mini-batches using the non-independent strategy. That question is answered here: Why do neural network researchers care about epochs?
answered by Sycorax