Feature engineering suggestion required



I am having a problem during feature engineering and am looking for suggestions. Problem statement: I have usage data for multiple customers over 3 days. Some have only 1 day of usage, some 2, and some 3. The data covers things like the number of emails sent and contacts added on each day.



I am pivoting this time series data into columns, i.e., the number of emails sent by a customer on day 1 is one feature, the number of emails sent on day 2 is another, and so on. The problem is that usage can be increasing for some customers and decreasing for others.



For example:

  • Customer 'A': emails sent on day 1 = 100, emails sent on day 2 = 0

  • Customer 'B': emails sent on day 1 = 0, emails sent on day 2 = 100

  • Customer 'C': emails sent on day 1 = 0, emails sent on day 2 = 0

  • Customer 'D': emails sent on day 1 = 100, emails sent on day 2 = 100



In the first two cases, my new difference feature takes the values "-100" and "100", which I think separates them well. The problem arises in cases 3 and 4, where the new feature is "0" in both scenarios. Can anyone suggest a way to handle this?



One way to handle this:



I can add "No change" in those scenarios, but I am confused about one thing. If I do that, I will have to make the new feature as categorical, which is not ideal as the other values will be continuous.



Instead, I could keep the absolute difference in the new feature and add a separate trend indicator: "+1" for increasing, "-1" for decreasing, "no change" when the two days are equal and nonzero, and "0" when both values are "0". Would that be a good approach, though?



The end goal is to predict whether a user will continue using the application, so it is basically a two-class model. I also want to capture the scale of usage, i.e., a user sending 100 emails every day should look different from a user sending 10000 emails every day.
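To make the two options concrete, here is a rough pandas sketch of what I have in mind; the column names and the numeric trend encoding (including the sentinel value 2 for "no change at a nonzero level") are only illustrative choices:

    import pandas as pd

    # Toy frame with the four example customers (column names are placeholders).
    df = pd.DataFrame({
        "customer": ["A", "B", "C", "D"],
        "emails_day1": [100, 0, 0, 100],
        "emails_day2": [0, 100, 0, 100],
    })

    # Option 1: plain day-over-day difference.
    # A and B become -100 and 100, but C and D both collapse to 0.
    df["emails_diff"] = df["emails_day2"] - df["emails_day1"]

    # Option 2: keep the difference and add a separate trend indicator,
    # encoded numerically so it can sit next to the continuous features:
    # +1 increasing, -1 decreasing, 0 flat at zero, 2 flat at a nonzero level.
    def trend(d1, d2):
        if d2 > d1:
            return 1
        if d2 < d1:
            return -1
        return 0 if d1 == 0 else 2

    df["emails_trend"] = [trend(d1, d2) for d1, d2 in zip(df["emails_day1"], df["emails_day2"])]
    print(df)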










machine-learning feature-engineering data-science-model

asked Apr 11 at 1:26, edited Apr 11 at 2:37
– SSuram (new contributor)







  • Could you explain a bit better what you are trying to predict? Your question is pretty well explained, but the kind of model you plan to train might give some of us better ideas.
    – Pedro Henrique Monforte
    Apr 11 at 1:40










  • I would want to predict whether a user will continue using the application or not, so it would basically be a two-class model. Does that answer it?
    – SSuram
    Apr 11 at 2:32










  • Yes, just add it to your question and it will be perfect.
    – Pedro Henrique Monforte
    Apr 11 at 2:35















1 Answer

Well, you want to identify change in usage, so you could try something like:



$$ f(day_1, day_2) = \frac{day_2 - day_1 + \delta}{\left| day_2 - day_1 + \delta \right|} \times \left| \frac{day_2 + day_1}{day_2 + day_1 + 1} \, (day_2 - day_1 + 1) \right| $$



where $\delta$ is the machine epsilon (the smallest value that, when added to a float, yields a distinct float).



That will give you
$$f(100,0) \approx -98.02$$
$$f(0,100) = 100$$
$$f(100,100) \approx 0.995$$
$$f(0,0) = 0$$
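For reference, a minimal Python sketch of the formula above (my reading of it, with sys.float_info.epsilon standing in for $\delta$ and a hypothetical helper name) reproduces these values:

    import sys

    EPS = sys.float_info.epsilon  # the delta in the formula: machine epsilon


    def usage_change(day1, day2):
        """sign(day2 - day1 + eps) * |(d1 + d2) / (d1 + d2 + 1) * (d2 - d1 + 1)|"""
        sign = (day2 - day1 + EPS) / abs(day2 - day1 + EPS)
        magnitude = abs((day2 + day1) / (day2 + day1 + 1) * (day2 - day1 + 1))
        return sign * magnitude


    print(usage_change(100, 0))    # ~ -98.02
    print(usage_change(0, 100))    # ~ 100
    print(usage_change(100, 100))  # ~ 0.995 (flat, nonzero usage lands in (0, 1))
    print(usage_change(0, 0))      # 0.0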



You can look at my experiment here



This will map all no-change cases into $[0,1]$, where $f(0,0)$ maps to $0$ and $f(\infty,\infty)$ maps to $1$.



Where is it from? I just tuned the function manually, but I think this might suffice for your application.



Explaining the idea



You want a feature that packs a lot of information:
- Is the usage greater than zero?
- Is it increasing or decreasing?
- If it is flat, how large is the usage?



Well, your usage takes integer values, so you can map the entire non-changing-but-above-zero case onto an interval that is otherwise unused.



The function above maps all non-changing possibilities into $[0,1]$, in an exponential kind of way ($a^{-\frac{1}{usage}}$). You can also extract the actual value from positive changes and an approximate value from negative changes (the approximation being better when the drop is large).



This is not a perfect encoding, but it is the most information I could compress into one variable with little loss.






edited Apr 11 at 3:11
community wiki: 2 revs, Pedro Henrique Monforte












  • I am not sure whether it answers the "And I would want to capture even the scale of usage, i.e., 'A user sending 100 emails every day' should be different from 'B user sending 10000 emails every day'" part of the question. Could you please explain?
    – SSuram
    Apr 11 at 2:38










  • What would you say about adding the following to it: f = (((d2-d1+eps)/abs(d2-d1+eps))*abs((d2+d1)/(d1+d2+1)*(d2-d1+1)))*(d2/1000)*(d1/1000), where "1000" would be max(usage)?
    – SSuram
    Apr 11 at 3:02











  • That will actually return zero for nearly every case.
    – Pedro Henrique Monforte
    Apr 11 at 3:13
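As a quick numerical check of that last comment, here is a small sketch of the proposed scaled variant (it assumes max(usage) = 1000 as in the expression above and re-implements the answer's formula under a hypothetical helper name):

    import sys

    EPS = sys.float_info.epsilon


    def usage_change(day1, day2):
        # The formula from the answer above.
        sign = (day2 - day1 + EPS) / abs(day2 - day1 + EPS)
        return sign * abs((day2 + day1) / (day2 + day1 + 1) * (day2 - day1 + 1))


    def scaled_change(day1, day2, max_usage=1000):
        # Scaled variant from the comment: multiply by (d2 / max) * (d1 / max).
        return usage_change(day1, day2) * (day2 / max_usage) * (day1 / max_usage)


    print(scaled_change(10, 10))    # ~ 9.5e-05: flat low usage collapses toward zero
    print(scaled_change(100, 0))    # -0.0: any zero-usage day wipes out the signal
    print(scaled_change(100, 200))  # ~ 2.01: only heavy users keep a visible value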










