How to test the equality of two Pearson correlation coefficients computed from the same sample?
Is there a reliable way to tell whether two Pearson correlations from the same sample do (or do not) differ significantly? More concretely, I calculated the correlation between the total score on a questionnaire and another variable, and the correlation between a subscore of the same questionnaire and that variable. The correlations are .239 and .234 respectively, so they look quite similar to me. (The other two subscales did not correlate significantly with the variable.) Could I use a Fisher z test to check whether the two correlations indeed do not differ significantly, or is the fact that they are not independent a problem?
hypothesis-testing correlation non-independent
asked Apr 13 at 8:14 by ChaFo (new contributor); edited Apr 13 at 20:44 by amoeba
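For concreteness, a minimal R sketch of the setup described above, using simulated data in place of the questionnaire (all variable names, coefficients and the sample size are hypothetical). It makes the dependence explicit: all three pairwise correlations are computed from the same rows, and the total score literally contains the subscore.

    set.seed(1)
    n <- 200                                 # hypothetical sample size
    other    <- rnorm(n)                     # the "other variable"
    subscore <- 0.25 * other + rnorm(n)      # subscale, weakly related to it
    rest     <- 0.20 * other + rnorm(n)      # remaining questionnaire items
    total    <- subscore + rest              # total score includes the subscore

    c(r_total_other = cor(total, other),     # analogue of r = .239
      r_sub_other   = cor(subscore, other),  # analogue of r = .234
      r_total_sub   = cor(total, subscore))  # needed by tests for dependent correlations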
3 Answers
First, I would point out that these correlations are fairly low.
Second, have you plotted the data to investigate possible non-linear associations?
Third, I would say that common sense should dictate that correlations of 0.239 and 0.234 are essentially the same, and searching for a test to confirm this, unless the sample size is absolutely enormous, is folly.
Fourth, you could calculate confidence intervals for both statistics and, if they do not overlap, conclude that they are statistically significantly different. However, this would be invalid, since the two samples are not independent. Moreover, as per my third point, even if you did have such an enormous sample and a test which validly concluded that a significant difference exists, I would find it hard to believe that the difference was practically significant.
answered Apr 13 at 10:32 by Robert Long
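A minimal R sketch of the confidence intervals mentioned in the fourth point, based on the Fisher z transformation (the sample size is a hypothetical placeholder; as the answer stresses, comparing two such intervals is not a valid test when both correlations come from the same observations).

    # 95% confidence interval for a Pearson correlation via the Fisher z transformation
    cor_ci <- function(r, n, level = 0.95) {
      z  <- atanh(r)                           # Fisher r-to-z
      se <- 1 / sqrt(n - 3)                    # standard error on the z scale
      q  <- qnorm(1 - (1 - level) / 2)
      tanh(z + c(lower = -q, upper = q) * se)  # back-transform to the r scale
    }

    n <- 150          # hypothetical sample size
    cor_ci(0.239, n)  # interval for r(total, other)
    cor_ci(0.234, n)  # interval for r(subscore, other)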
– ChaFo (Apr 14 at 13:07): Thanks for your reply. I am indeed aware that the correlations are small. I have checked for non-linear associations. I also feel that the difference is not meaningful, but I just wanted to make sure I do everything in the best possible way. Thanks!
– Robert Long (Apr 14 at 13:12): @ChaFo that's OK, but you don't always have to run a formal test, especially when it seems obvious that they are essentially the same. How many observations do you have?
Expanding on Robert Long's answer (+1 to Robert), I'd say that testing for a difference between these is folly, regardless of sample size. Look: is 0.239 different from 0.234? Well, maybe it is. There are situations where a very small effect size is very important. If a plane crashes on 1 in 1,000 flights, that's a big, big problem. I can't think, offhand, of a situation where this tiny difference in correlations could be meaningful, but maybe there is one. Whether it is significant or not is not the point.
Also, the dependence will surely be a problem. If you really wanted to examine something like this, I'd compute a third correlation: the correlation between the other variable and the test score after removing the subtest. You can then compare that with the correlation for the subtest.
Finally, it's unclear to me what you are trying to show, but I think you are trying to show that these are not different. In that case, the usual null hypothesis tests are inappropriate. You should be looking at tests of equivalence (if, in fact, you want to look at significance at all).
answered Apr 13 at 12:06 by Peter Flom♦
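A minimal R sketch of the "third correlation" suggested here: correlate the other variable with the total score after removing the subtest, and set that beside the subtest's own correlation (the data are simulated and all names are hypothetical). For the final paragraph's point, an equivalence test would additionally require specifying the smallest difference in correlations that you would consider meaningful.

    set.seed(1)
    n <- 200                                        # hypothetical sample size
    other    <- rnorm(n)
    subscore <- 0.25 * other + rnorm(n)
    total    <- subscore + 0.20 * other + rnorm(n)  # total includes the subscore

    rest <- total - subscore                 # total score with the subtest removed

    cor(rest, other)                         # correlation that no longer shares the subtest items
    cor(subscore, other)                     # correlation for the subtest itself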
– Robert Long (Apr 13 at 12:37): Excellent points, Peter (+1)
– Alexis (Apr 13 at 18:00): Peter Flom, the population perspective in epidemiology says, in effect, that a tiny change in risk, one so small as to be effectively inconsequential clinically, is a big deal if it is multiplied across an entire population. Changing someone's risk of stroke by 1 in 10,000 per year is kinda meh. Changing 10,000,000 people's risk of stroke by 1 in 10,000 is a change of 1,000 strokes per year: a big deal. See Rose, G. (1985). Sick individuals and sick populations. International Journal of Epidemiology, 14(1), 32–38.
– Alexis (Apr 13 at 18:01): Of course, Pearson's correlation coefficient isn't likely to be the measure of choice for contrasts in risk, but I think small associations can matter.
Yes, it is possible to perform a significance test using the Fisher transform. The test also depends on $N$, the number of observations used to compute the Pearson correlations. This blog post describes the method in more detail and provides R code for it.
answered Apr 13 at 15:46 by Bai (new contributor)
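The linked post is not reproduced here, but the standard Fisher r-to-z test for two correlations from independent samples looks roughly like the sketch below (inputs are hypothetical). As the comments that follow point out, it does not apply to the present question, where both correlations are computed from the same observations.

    # Fisher z test for r1 (from n1 observations) vs r2 (from n2 independent observations)
    fisher_z_test <- function(r1, n1, r2, n2) {
      z <- (atanh(r1) - atanh(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
      p <- 2 * pnorm(-abs(z))                # two-sided p-value
      c(z = z, p.value = p)
    }

    fisher_z_test(0.239, 150, 0.234, 150)    # hypothetical sample sizes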
– whuber♦ (Apr 13 at 15:47): Your reference is inappropriate for comparing correlation coefficients that share data, as is the case here. The OP points out that "the fact they are not independent" is the problem.
– Bai (Apr 13 at 16:05): Yes, I see. The OP's situation involves overlap between the two datasets, but is not a case of paired data. Therefore, my answer is inappropriate.
– whuber♦ (Apr 13 at 16:12): Actually, it sounds like the data are triples: that's what makes it possible to compute more than one correlation coefficient.
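Because the two correlations here share both the sample and one of the variables (the "triples" whuber describes), a test for dependent, overlapping correlations is the appropriate tool; one common choice is Williams's t as presented by Steiger (1980). Below is a minimal R sketch: the correlation between total and subscore and the sample size are hypothetical placeholders, and packages such as psych or cocor offer ready-made implementations of this and related tests.

    # Williams's t (Steiger, 1980) for H0: rho(x1, y) = rho(x2, y) in a single sample of size n
    williams_t <- function(r1y, r2y, r12, n) {
      detR  <- 1 - r1y^2 - r2y^2 - r12^2 + 2 * r1y * r2y * r12  # determinant of the 3x3 correlation matrix
      rbar  <- (r1y + r2y) / 2
      tstat <- (r1y - r2y) *
        sqrt((n - 1) * (1 + r12) /
             (2 * detR * (n - 1) / (n - 3) + rbar^2 * (1 - r12)^3))
      c(t = tstat, df = n - 3, p.value = 2 * pt(-abs(tstat), df = n - 3))
    }

    # r(total, other) = .239 and r(subscore, other) = .234 as in the question;
    # r(total, subscore) = .80 and n = 150 are made up for illustration
    williams_t(r1y = 0.239, r2y = 0.234, r12 = 0.80, n = 150)

The shared correlation r12 enters the denominator, which is where the dependence that the question worries about gets accounted for.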