Is Normal(mean, variance) mod x still a normal distribution?
Is the following distribution still a normal one?

The range of values is constrained to the hard limits $[0, 255]$, and it is generated by $\mathcal{N}(\mu, \sigma^2) \bmod 256 + 128^\dagger$. So I'm not simply truncating a normal, but folding it back into itself; consequently the probability density is never asymptotic to 0.

Is it normal, or is there a more appropriate term for it?

$\dagger$ The $+128$ is just to centre the peak, which otherwise looks silly as $\mu = 0$. $\sigma \approx 12$, always $< 20$. I expect that $\mu$ can be ignored for the purposes of classification.

In detail, it's a plot of 128 + (sample1 - sample2) & 0xff. I wouldn't fixate on the centring too much, as there's some weird +/- stuff to do with computer networking and the transmission of unsigned bytes.
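For concreteness, a minimal R sketch of how such folded samples could be generated; treating sample1 and sample2 as roughly normal streams with $\sigma \approx 12$ is an assumption for illustration, not a statement about the real data:

set.seed(1)
n <- 1e5
# Stand-ins for the two observed streams (assumed roughly normal here)
sample1 <- rnorm(n, mean = 0, sd = 12)
sample2 <- rnorm(n, mean = 0, sd = 12)

# 128 + (sample1 - sample2) & 0xff, i.e. add 128 and wrap into [0, 256)
folded <- (round(sample1 - sample2) + 128) %% 256

hist(folded, breaks = 0:256, freq = FALSE,
     xlab = "byte value", main = "Folded differences")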
Tags: distributions
asked Apr 21 at 13:34 by Paul Uszak, edited Apr 21 at 14:21
Can you give the values of $\mu$ and $\sigma^2$? Also, there seems to be something off: the result of a $\bmod\ 256$ operation will be between 0 and 256, so the result after adding 128 should be between 128 and 384, unlike your picture. Did you simply not add 128 before plotting?
– Stephan Kolassa, Apr 21 at 13:54
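A quick numerical check of that range point, using the assumed $\mu = 0$, $\sigma = 12$ from the question; the order of the $+128$ and the modulo matters:

set.seed(1)
x <- rnorm(1e6, mean = 0, sd = 12)
range(x %% 256 + 128)    # wrap first, then add: roughly 128 to 384
range((x + 128) %% 256)  # add first, then wrap: stays within [0, 256)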
I don't think it would hold for arbitrary $\mu$ and $\sigma$, and for some parameters it would be just like subtracting a value.
– Lerner Zhang, Apr 21 at 14:00
Seems related to the wrapped normal distribution, also discussed here.
– jnez71, Apr 21 at 18:54
1 Answer
No, by definition your distribution is not normal. The normal distribution has unbounded support, yours doesn't. QED.
Often enough, we don't need a normal distribution, only one that is "normal enough". Your distribution may well be normal enough for whatever you want to use it for.
However, I don't think a normal distribution is really the best way to parameterize your data. Specifically, even playing around with the standard deviation, there doesn't seem to be a way to get the kurtosis your data exhibits:
Note how we either get all the mass over a much smaller part of the horizontal axis for small SDs, or get much more mass in the center of the distribution than in your picture for large SDs.
mu <- 128
ss <- c(5, 10, 20, 40)   # standard deviations to try
nn <- 1e7                # samples per panel
par(mfrow = c(2, 2), mai = c(.5, .5, .5, .1))
for (ii in 1:4) {
  set.seed(1)
  # wrap the normal samples into [0, 256) and plot the histogram
  hist(rnorm(nn, mu, ss[ii]) %% 256, col = "grey",
       freq = FALSE, xlim = c(0, 256), xlab = "", ylab = "",
       main = paste("SD =", ss[ii]))
}

I also tried a $t$ distribution by varying the degrees of freedom, but that didn't look very good, either.

So, if your distribution is not normal enough for your purposes, you may want to look at something like a Pearson type VII distribution. If you truncate this using the modulo operator, it will again not be a Pearson VII strictly speaking, but it may again be "sufficiently Pearson VII".
answered Apr 21 at 14:19 by Stephan Kolassa
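As an illustration of that last suggestion, here is a minimal R sketch of wrapping a Pearson type VII sample, generated as a location-scale Student $t$ variate; the degrees of freedom and scale below are arbitrary assumptions, not values taken from the answer:

set.seed(1)
nn <- 1e6
mu <- 128
nu <- 3     # degrees of freedom (assumed): heavier tails than a normal
s  <- 10    # scale (assumed)

# Pearson type VII via a location-scale t, then wrapped into [0, 256)
x <- (mu + s * rt(nn, df = nu)) %% 256

hist(x, breaks = 0:256, freq = FALSE, col = "grey",
     xlab = "byte value", main = "Wrapped Pearson VII (scaled t)")

Whether this matches the observed kurtosis would still have to be checked against the actual data.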
I called it Normal as I assumed that '(sample - sample)' = Normal, even if 'sample' itself is not Normal. 'sample' actually looks more like log-Normal. Would that explain the kurtosis problem?
– Paul Uszak, Apr 21 at 17:04

Perhaps it could be normal in the wrapped distribution sense of directional statistics?
– jnez71, Apr 21 at 19:11

That is possible. I'm afraid you have lost me with the sample and its log-normality. Could you give a few more details?
– Stephan Kolassa, Apr 21 at 20:05

My understanding of it isn't fantastic (otherwise I'd post an answer), but I think the idea is that when you have a probability space whose sample set is not $\mathbb{R}$ but rather some cyclic group like $SO(2)$, or perhaps $\mathbb{R} \bmod 256$, the mathematically accepted way to generalize a distribution on $\mathbb{R}$ to these spaces is to do what the OP did and "identify" (via modulo) various samples with each other, summing the probabilities ad infinitum, like the wiki article describes: $p_{\text{wrapped}}(x) = \sum_{k \in \mathbb{Z}} p(x + k r)$, where $r$ is the modulus (so 256 for the OP).
– jnez71, Apr 21 at 21:26
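A minimal R sketch of that wrapping sum for a normal base density, truncating the infinite sum over $k$ to a few terms; the helper name dwrapnorm and the parameter values are made up for this illustration:

# Wrapped normal density on [0, r): sum N(mu, sigma^2) densities over
# shifted copies x + k*r. A few terms suffice when sigma is much smaller than r.
dwrapnorm <- function(x, mu, sigma, r = 256, k_max = 10) {
  ks <- -k_max:k_max
  rowSums(sapply(ks, function(k) dnorm(x + k * r, mean = mu, sd = sigma)))
}

x <- seq(0, 256, length.out = 513)
plot(x, dwrapnorm(x, mu = 128, sigma = 12), type = "l",
     xlab = "byte value", ylab = "density")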
For the normal distribution you get this [the wrapped normal], which I suspect retains most of the properties we'd hope for (though I'm not 100% sure). Like perhaps a central limit theorem for this mod space?
– jnez71, Apr 21 at 21:29