Working through the single responsibility principle (SRP) in Python when calls are expensive



Some base points:




  • Python method calls are "expensive" due to its interpreted nature. In theory, if your code is simple enough, breaking Python code down into more functions has a negative impact on performance; the benefits (readability and reuse) are a big gain for developers, not so much for users.

  • The single responsibility principle (SRP) keeps code readable and makes it easier to test and maintain.

  • The project has a particular background in which we want readable code, tests, and runtime performance.

For instance, code like the following, which invokes four methods, is slower than the version after it, which makes just a single call.



from operator import add

class Vector:
    def __init__(self, list_of_3):
        self.coordinates = list_of_3

    def move(self, movement):
        self.coordinates = list(map(add, self.coordinates, movement))
        return self.coordinates

    def revert(self):
        self.coordinates = self.coordinates[::-1]
        return self.coordinates

    def get_coordinates(self):
        return self.coordinates

## Operation with one vector
vec3 = Vector([1, 2, 3])
vec3.move([1, 1, 1])
vec3.revert()
vec3.get_coordinates()


In comparison to this:



from operator import add

def move_and_revert_and_return(vector, movement):
    return list(map(add, vector, movement))[::-1]

move_and_revert_and_return([1, 2, 3], [1, 1, 1])


If I parallelize something like that, it is fairly objective that I lose performance. Mind you, this is just an example; my project has several small math routines like it. While that style is much easier to work with, our profilers dislike it.
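To make the overhead concrete, here is a minimal timing sketch (illustrative only; it assumes both snippets above are defined in the same module, and the numbers will vary by machine and interpreter):

import timeit

# Assumes Vector and move_and_revert_and_return from the snippets
# above are defined in this module (__main__).
setup = "from __main__ import Vector, move_and_revert_and_return"

method_version = """
v = Vector([1, 2, 3])
v.move([1, 1, 1])
v.revert()
v.get_coordinates()
"""
function_version = "move_and_revert_and_return([1, 2, 3], [1, 1, 1])"

# One million runs of each variant; lower is faster.
print(timeit.timeit(method_version, setup=setup, number=1_000_000))
print(timeit.timeit(function_version, setup=setup, number=1_000_000))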




How and where do we embrace the SRP without compromising performance in Python, given that its standard implementation makes method calls inherently costly?



Are there workarounds, like some sort of pre-processor that inlines things for a release build?



Or is Python simply poor at handling code breakdown altogether?





























  • 4





    Possible duplicate of Is micro-optimisation important when coding?

    – gnat
    Apr 12 at 14:53






  • 18





    For what it's worth, your two code examples do not differ in number of responsibilities. The SRP is not a method counting exercise.

    – Robert Harvey
    Apr 12 at 16:57






  • 2





    @RobertHarvey You're right, sorry for the poor example; I'll edit in a better one when I have the time. In either case, readability and maintainability suffer, and eventually the SRP breaks down within the codebase as we cut down on classes and their methods.

    – lucasgcb
    Apr 12 at 17:18







  • 3





    note that function calls are expensive in any language, though AOT compilers have the luxury of inlining

    – Eevee
    Apr 12 at 20:30






  • 5





    Use a JITted implementation of Python such as PyPy. That should mostly fix this problem.

    – Bakuriu
    Apr 12 at 21:11

















python performance single-responsibility methods






edited Apr 12 at 21:21 by Peter Mortensen

asked Apr 12 at 14:19 by lucasgcb







3 Answers


















17















is Python simply poor at handling code breakdown altogether?




Unfortunately yes, Python is slow and there are many anecdotes about people drastically increasing performance by inlining functions and making their code ugly.



There is a workaround: Cython, a compiled variant of Python, which is much faster.



Edit: I just wanted to address some of the comments and other answers, although the thrust of them is perhaps not Python-specific but about optimisation more generally.




  1. Don't optimise until you have a problem, and then look for bottlenecks



    Generally good advice. But the assumption is that 'normal' code is usually performant. This isn't always the case: individual languages and frameworks each have their own idiosyncrasies. In this case, function calls.




  2. It's only a few milliseconds; other things will be slower



    If you are running your code on a powerful desktop computer you probably don't care as long as your single user code executes in a few seconds.



    But business code tends to run for multiple users and require more than one machine to support the load. If your code runs twice as fast it means you can have twice the number of users or half the number of machines.



    If you own your machines and data centre then you generally have a big chunk of overhead in CPU power. If your code runs a bit slow, you can absorb it, at least until you need to buy a second machine.



    In these days of cloud computing, where you use exactly the compute power you require and no more, there is a direct cost for non-performant code.



    Improving performance can drastically cut the main expense for a cloud-based business, and performance really should be front and centre.
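As a sketch of what that "ugly but fast" inlining looks like in practice (my own illustration, not code from the question):

def scale(x, factor):
    return x * factor

def total_with_calls(values, factor):
    # Pays one Python-level function call per element.
    total = 0.0
    for v in values:
        total += scale(v, factor)
    return total

def total_inlined(values, factor):
    # Same arithmetic with the helper inlined by hand; under
    # CPython this typically measures noticeably faster.
    total = 0.0
    for v in values:
        total += v * factor
    return total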







answered Apr 12 at 15:26 by Ewan (edited Apr 13 at 14:04)




















  • 1





    While Robert's answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question), I feel this answers the situation a bit more directly and in line with the Python context.

    – lucasgcb
    Apr 12 at 16:43







  • 2





    Sorry it's somewhat short; I don't have time to write more. But I do think Robert is wrong on this one. The best advice with Python seems to be to profile as you code. Don't assume it will be performant, and only optimise if you find a problem.

    – Ewan
    Apr 12 at 16:49







  • 2





    @Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

    – Robert Harvey
    Apr 12 at 18:43







  • 1





    You can also try PyPy, which is a JITted Python.

    – Eevee
    Apr 12 at 20:30






  • 2





    @Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for Python. But then I really can't think of many examples there. The vast majority of business code is IO-limited, and the CPU-heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

    – Voo
    Apr 12 at 21:51


















46














Many potential performance concerns are not really a problem in practice. The issue you raise may be one of them. In the vernacular, we call worrying about those problems without proof that they are actual problems premature optimization.



If you are writing a front-end for a web service, your performance is not going to be significantly affected by function calls, because the cost of sending data over a network far exceeds the time it takes to make a method call.



If you are writing a tight loop that refreshes a video screen sixty times a second, then it might matter. But at that point, I claim you have larger problems if you're trying to use Python to do that, a job for which Python is probably not well-suited.



As always, the way you find out is to measure. Run a performance profiler or some timers over your code. See if it's a real problem in practice.
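For instance, a minimal profiling sketch (illustrative; workload is a hypothetical stand-in for your real hot path, and it assumes the question's Vector class is available in the same module):

import cProfile
import pstats

def workload():
    # Hypothetical stand-in for the code you suspect is slow.
    vec3 = Vector([1, 2, 3])
    for _ in range(100_000):
        vec3.move([1, 1, 1])
        vec3.revert()

cProfile.run("workload()", "profile.out")
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(10)  # ten costliest entries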




The Single Responsibility Principle is not a law or mandate; it is a guideline or principle. Software design is always about trade-offs; there are no absolutes. It is not uncommon to trade off readability and/or maintainability for speed, so you may have to sacrifice SRP on the altar of performance. But don't make that tradeoff unless you know you have a performance problem.






answered Apr 12 at 14:57 by Robert Harvey (edited Apr 12 at 16:40)




















  • 3





    I think this was true, until we invented cloud computing. Now one of the two functions effectively costs 4 times as much as the other

    – Ewan
    Apr 12 at 15:14






  • 2





    @Ewan 4 times may not matter until you've measured it to be significant enough to care about. If Foo takes 1 ms and Bar takes 4 ms that's not good. Until you realize that transmitting the data across the network takes 200 ms. At that point, Bar being slower doesn't matter so much. (Just one possible example of where being X times slower doesn't make a noticeable or impactful difference, not meant to be necessarily super realistic.)

    – Becuzz
    Apr 12 at 15:46






  • 8





    @Ewan If the reduction in the bill saves you $15/month but it will take a $125/hour contractor 4 hours to fix and test it, I could easily justify that not being worth a business's time to do (or at least not do right now if time to market is crucial, etc.). There are always tradeoffs. And what makes sense in one circumstance might not in another.

    – Becuzz
    Apr 12 at 16:52






  • 3





    your AWS bills are very low indeed

    – Ewan
    Apr 12 at 16:53






  • 6





    @Ewan AWS rounds to the ceiling by batches anyways (standard is 100ms). Which means this kind of optimization only saves you anything if it consistently avoids pushing you to the next chunk.

    – Delioth
    Apr 12 at 19:02



















2














First, some clarifications: Python is a language. There are several different interpreters which can execute code written in the Python language. The reference implementation (CPython) is usually what is being referenced when someone talks about "Python" as if it is an implementation, but it is important to be precise when talking about performance characteristics, as they can differ wildly between implementations.
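As a quick, illustrative aside (my addition, not part of the original answer): you can check which implementation a script is actually running under:

import platform
import sys

# Prints e.g. 'CPython' or 'PyPy', plus the full version banner.
print(platform.python_implementation())
print(sys.version)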




How and where do we embrace the SRP without compromising performance in Python, given that its standard implementation makes method calls inherently costly?




Case 1.)
If you have pure Python code (<= Python Language version 3.5, 3.6 has "beta level support") which only relies on pure Python modules, you can embrace SRP everywhere and use PyPy to run it. PyPy (https://morepypy.blogspot.com/2019/03/pypy-v71-released-now-uses-utf-8.html) is a Python interpreter which has a Just in Time Compiler (JIT) and can remove function call overhead as long as it has sufficient time to "warm up" by tracing the executed code (a few seconds IIRC). **



If you are restricted to using the CPython interpreter, you can extract the slow functions into extensions written in C, which will be pre-compiled and not suffer from any interpreter overhead. You can still use SRP everywhere, but your code will be split between Python and C. Whether this is better or worse for maintainability than selectively abandoning SRP while sticking to Python-only code depends on your team, but if you have performance-critical sections of code, it will undoubtedly be faster than even the most optimized pure Python code interpreted by CPython. Many of Python's fastest mathematical libraries use this method (numpy and scipy IIRC). Which is a nice segue into Case 2...



Case 2.)
If you have Python code which uses C extensions (or relies on libraries which use C extensions), PyPy may or may not be useful depending on how they're written. See http://doc.pypy.org/en/latest/extending.html for details, but the summary is that CFFI has minimal overhead while CTypes is slower (using it with PyPy may be even slower than CPython)
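As a rough sketch of the C-extension route via CFFI (my own illustration, not code from the answer; the module name _fastvec and the C function are invented for this example):

# build_fastvec.py -- CFFI "out-of-line API" mode
from cffi import FFI

ffibuilder = FFI()
ffibuilder.cdef("void move_and_revert(double *vec, double *mov, double *out, int n);")
ffibuilder.set_source("_fastvec", r"""
    void move_and_revert(double *vec, double *mov, double *out, int n) {
        for (int i = 0; i < n; i++)
            out[n - 1 - i] = vec[i] + mov[i];
    }
""")

if __name__ == "__main__":
    ffibuilder.compile(verbose=True)

# After running this script once, callers can use the compiled module:
#   from _fastvec import ffi, lib
#   vec = ffi.new("double[]", [1, 2, 3])
#   mov = ffi.new("double[]", [1, 1, 1])
#   out = ffi.new("double[]", 3)
#   lib.move_and_revert(vec, mov, out, 3)
#   list(out)  # -> [4.0, 3.0, 2.0]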



Cython (https://cython.org/) is another option which I don't have as much experience with. I mention it for the sake of completeness so my answer can "stand on its own", but I don't claim any expertise. From my limited usage, it felt like I had to work harder to get the same speed improvements I could get "for free" with PyPy, and if I needed something better than PyPy, it was just as easy to write my own C extension (which has the benefit that if I re-use the code elsewhere or extract part of it into a library, all my code can still run under any Python interpreter and is not required to be run by Cython).



I'm scared of being "locked into" Cython, whereas any code written for PyPy can run under CPython as well.



** Some extra notes on PyPy in Production



Be very careful about making any choices that have the practical effect of "locking you in" to PyPy in a large codebase. Because some (very popular and useful) third-party libraries do not play nice for the reasons mentioned earlier, it can force very difficult decisions later if you realize you need one of those libraries. My experience is primarily in using PyPy to speed up some (but not all) microservices which are performance-sensitive, in a company environment where it adds negligible complexity to our production environment (we already have multiple languages deployed, some with different major versions like 2.7 vs 3.5 running anyway).



I have found using both PyPy and CPython regularly forced me to write code which only relies on guarantees made by the language specification itself, and not on implementation details which are subject to change at any time. You may find thinking about such details to be an extra burden, but I found it valuable in my professional development, and I think it is "healthy" for the Python ecosystem as a whole.






answered by Steven Jackson (a new contributor)




















  • Yeah! I've been considering focusing on C extensions for this case instead of abandoning the principle and writing wild code; the other answers gave me the impression it would be slow regardless unless I swapped from the reference interpreter. To clear it up, OOP would still be a sensible approach in your view?

    – lucasgcb
    Apr 13 at 21:59







  • 1





    With case 1 (2nd paragraph), do you not get the same overhead calling the functions, even if the functions themselves are compiled?

    – Ewan
    2 days ago












  • CPython is the only interpreter that is generally taken seriously. PyPy is interesting, but it certainly isn't seeing any kind of widespread adoption. Furthermore, its behavior differs from CPython, and it doesn't work with some important packages, e.g. scipy. Few sane developers would recommend PyPy for production. As such, the distinction between the language and the implementation is inconsequential in practice.

    – jpmc26
    2 days ago












  • I think you hit the nail on the head, though. There's no reason you couldn't have a better interpreter, or a compiler. It's not intrinsic to Python as a language; you are just stuck with practical realities.

    – Ewan
    2 days ago











  • @jpmc26 I've used PyPy in production, and recommend considering doing so to other experienced developers. It is great for microservices using falconframework.org for lightweight REST APIs (as one example). Behavior differing because developers rely on implementation details which are NOT a guarantee of the language is not a reason to avoid PyPy; it's a reason to rewrite your code. The same code may break anyway if CPython makes changes to its implementation (which it is free to do as long as it still conforms to the language spec).

    – Steven Jackson
    yesterday












3 Answers
3






active

oldest

votes








3 Answers
3






active

oldest

votes









active

oldest

votes






active

oldest

votes









17















is Python simply poor at handling code breakdown altogether?




Unfortunately yes, Python is slow and there are many anecdotes about people drastically increasing performance by inlining functions and making their code ugly.



There is a work around, Cython, which is a compiled version of Python and much faster.



--Edit
I just wanted to address some of the comments and other answers. Although the thrust of them isnt perhaps python specific. but more general optimisation.




  1. Don't optimise untill you have a problem and then look for bottlenecks



    Generally good advice. But the assumption is that 'normal' code is usually performant. This isn't always the case. Individual languages and frameworks each have their own idiosyncracies. In this case function calls.




  2. Its only a few milliseconds, other things will be slower



    If you are running your code on a powerful desktop computer you probably don't care as long as your single user code executes in a few seconds.



    But business code tends to run for multiple users and require more than one machine to support the load. If your code runs twice as fast it means you can have twice the number of users or half the number of machines.



    If you own your machines and data centre then you generally have a big chunk of overhead in CPU power. If your code runs a bit slow, you can absorb it, at least until you need to buy a second machine.



    In these days of cloud computing where you only use exactly the compute power you require and no more, there is a direct cost for non performant code.



    Improving performance can drastically cut the main expense for a cloud based business and performance really should be front and centre.







share|improve this answer




















  • 1





    While Robert's Answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question ), I feel this answers the situation a bit more directly and in-line with the Python context.

    – lucasgcb
    Apr 12 at 16:43







  • 2





    sorry its somewhat short. I don't have time to write more. But I do think Robert is wrong on this one. The best advice with python seems to be to profile as you code. Dont assume it will be performant and only optimise if you find a problem

    – Ewan
    Apr 12 at 16:49







  • 2





    @Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

    – Robert Harvey
    Apr 12 at 18:43







  • 1





    you can also try pypy, which is a JITted python

    – Eevee
    Apr 12 at 20:30






  • 2





    @Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for python. But then I really can't think of many examples there. The vast majority of business code is IO limited and the CPU heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

    – Voo
    Apr 12 at 21:51















17















is Python simply poor at handling code breakdown altogether?




Unfortunately yes, Python is slow and there are many anecdotes about people drastically increasing performance by inlining functions and making their code ugly.



There is a work around, Cython, which is a compiled version of Python and much faster.



--Edit
I just wanted to address some of the comments and other answers. Although the thrust of them isnt perhaps python specific. but more general optimisation.




  1. Don't optimise untill you have a problem and then look for bottlenecks



    Generally good advice. But the assumption is that 'normal' code is usually performant. This isn't always the case. Individual languages and frameworks each have their own idiosyncracies. In this case function calls.




  2. Its only a few milliseconds, other things will be slower



    If you are running your code on a powerful desktop computer you probably don't care as long as your single user code executes in a few seconds.



    But business code tends to run for multiple users and require more than one machine to support the load. If your code runs twice as fast it means you can have twice the number of users or half the number of machines.



    If you own your machines and data centre then you generally have a big chunk of overhead in CPU power. If your code runs a bit slow, you can absorb it, at least until you need to buy a second machine.



    In these days of cloud computing where you only use exactly the compute power you require and no more, there is a direct cost for non performant code.



    Improving performance can drastically cut the main expense for a cloud based business and performance really should be front and centre.







share|improve this answer




















  • 1





    While Robert's Answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question ), I feel this answers the situation a bit more directly and in-line with the Python context.

    – lucasgcb
    Apr 12 at 16:43







  • 2





    sorry its somewhat short. I don't have time to write more. But I do think Robert is wrong on this one. The best advice with python seems to be to profile as you code. Dont assume it will be performant and only optimise if you find a problem

    – Ewan
    Apr 12 at 16:49







  • 2





    @Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

    – Robert Harvey
    Apr 12 at 18:43







  • 1





    you can also try pypy, which is a JITted python

    – Eevee
    Apr 12 at 20:30






  • 2





    @Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for python. But then I really can't think of many examples there. The vast majority of business code is IO limited and the CPU heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

    – Voo
    Apr 12 at 21:51













17












17








17








is Python simply poor at handling code breakdown altogether?




Unfortunately yes, Python is slow and there are many anecdotes about people drastically increasing performance by inlining functions and making their code ugly.



There is a work around, Cython, which is a compiled version of Python and much faster.



--Edit
I just wanted to address some of the comments and other answers. Although the thrust of them isnt perhaps python specific. but more general optimisation.




  1. Don't optimise untill you have a problem and then look for bottlenecks



    Generally good advice. But the assumption is that 'normal' code is usually performant. This isn't always the case. Individual languages and frameworks each have their own idiosyncracies. In this case function calls.




  2. Its only a few milliseconds, other things will be slower



    If you are running your code on a powerful desktop computer you probably don't care as long as your single user code executes in a few seconds.



    But business code tends to run for multiple users and require more than one machine to support the load. If your code runs twice as fast it means you can have twice the number of users or half the number of machines.



    If you own your machines and data centre then you generally have a big chunk of overhead in CPU power. If your code runs a bit slow, you can absorb it, at least until you need to buy a second machine.



    In these days of cloud computing where you only use exactly the compute power you require and no more, there is a direct cost for non performant code.



    Improving performance can drastically cut the main expense for a cloud based business and performance really should be front and centre.







share|improve this answer
















is Python simply poor at handling code breakdown altogether?




Unfortunately yes, Python is slow and there are many anecdotes about people drastically increasing performance by inlining functions and making their code ugly.



There is a work around, Cython, which is a compiled version of Python and much faster.



--Edit
I just wanted to address some of the comments and other answers. Although the thrust of them isnt perhaps python specific. but more general optimisation.




  1. Don't optimise untill you have a problem and then look for bottlenecks



    Generally good advice. But the assumption is that 'normal' code is usually performant. This isn't always the case. Individual languages and frameworks each have their own idiosyncracies. In this case function calls.




  2. Its only a few milliseconds, other things will be slower



    If you are running your code on a powerful desktop computer you probably don't care as long as your single user code executes in a few seconds.



    But business code tends to run for multiple users and require more than one machine to support the load. If your code runs twice as fast it means you can have twice the number of users or half the number of machines.



    If you own your machines and data centre then you generally have a big chunk of overhead in CPU power. If your code runs a bit slow, you can absorb it, at least until you need to buy a second machine.



    In these days of cloud computing where you only use exactly the compute power you require and no more, there is a direct cost for non performant code.



    Improving performance can drastically cut the main expense for a cloud based business and performance really should be front and centre.








share|improve this answer














share|improve this answer



share|improve this answer








edited Apr 13 at 14:04

























answered Apr 12 at 15:26









EwanEwan

44.1k33799




44.1k33799







  • 1





    While Robert's Answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question ), I feel this answers the situation a bit more directly and in-line with the Python context.

    – lucasgcb
    Apr 12 at 16:43







  • 2





    sorry its somewhat short. I don't have time to write more. But I do think Robert is wrong on this one. The best advice with python seems to be to profile as you code. Dont assume it will be performant and only optimise if you find a problem

    – Ewan
    Apr 12 at 16:49







  • 2





    @Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

    – Robert Harvey
    Apr 12 at 18:43







  • 1





    you can also try pypy, which is a JITted python

    – Eevee
    Apr 12 at 20:30






  • 2





    @Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for python. But then I really can't think of many examples there. The vast majority of business code is IO limited and the CPU heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

    – Voo
    Apr 12 at 21:51












  • 1





    While Robert's Answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question ), I feel this answers the situation a bit more directly and in-line with the Python context.

    – lucasgcb
    Apr 12 at 16:43







  • 2





    sorry its somewhat short. I don't have time to write more. But I do think Robert is wrong on this one. The best advice with python seems to be to profile as you code. Dont assume it will be performant and only optimise if you find a problem

    – Ewan
    Apr 12 at 16:49







  • 2





    @Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

    – Robert Harvey
    Apr 12 at 18:43







  • 1





    you can also try pypy, which is a JITted python

    – Eevee
    Apr 12 at 20:30






  • 2





    @Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for python. But then I really can't think of many examples there. The vast majority of business code is IO limited and the CPU heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

    – Voo
    Apr 12 at 21:51







1




1





While Robert's Answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question ), I feel this answers the situation a bit more directly and in-line with the Python context.

– lucasgcb
Apr 12 at 16:43






While Robert's Answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question ), I feel this answers the situation a bit more directly and in-line with the Python context.

– lucasgcb
Apr 12 at 16:43





2




2





sorry its somewhat short. I don't have time to write more. But I do think Robert is wrong on this one. The best advice with python seems to be to profile as you code. Dont assume it will be performant and only optimise if you find a problem

– Ewan
Apr 12 at 16:49






sorry its somewhat short. I don't have time to write more. But I do think Robert is wrong on this one. The best advice with python seems to be to profile as you code. Dont assume it will be performant and only optimise if you find a problem

– Ewan
Apr 12 at 16:49





2




2





@Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

– Robert Harvey
Apr 12 at 18:43






@Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling.

– Robert Harvey
Apr 12 at 18:43





1




1





you can also try pypy, which is a JITted python

– Eevee
Apr 12 at 20:30





you can also try pypy, which is a JITted python

– Eevee
Apr 12 at 20:30




2




2





@Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for python. But then I really can't think of many examples there. The vast majority of business code is IO limited and the CPU heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

– Voo
Apr 12 at 21:51





@Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for python. But then I really can't think of many examples there. The vast majority of business code is IO limited and the CPU heavy stuff is usually handled by calling out to native libraries (numpy, tensorflow and so on).

– Voo
Apr 12 at 21:51













46














Many potential performance concerns are not really a problem in practice. The issue you raise may be one of them. In the vernacular, we call worrying about those problems without proof that they are actual problems premature optimization.



If you are writing a front-end for a web service, your performance is not going to be significantly affected by function calls, because the cost of sending data over a network far exceeds the time it takes to make a method call.



If you are writing a tight loop that refreshes a video screen sixty times a second, then it might matter. But at that point, I claim you have larger problems if you're trying to use Python to do that, a job for which Python is probably not well-suited.



As always, the way you find out is to measure. Run a performance profiler or some timers over your code. See if it's a real problem in practice.




The Single Responsibility Principle is not a law or mandate; it is a guideline or principle. Software design is always about trade-offs; there are no absolutes. It is not uncommon to trade off readability and/or maintainability for speed, so you may have to sacrifice SRP on the altar of performance. But don't make that tradeoff unless you know you have a performance problem.






share|improve this answer




















  • 3





    I think this was true, until we invented cloud computing. Now one of the two functions effectively costs 4 times as much as the other

    – Ewan
    Apr 12 at 15:14






  • 2





    @Ewan 4 times may not matter until you've measured it to be significant enough to care about. If Foo takes 1 ms and Bar takes 4 ms that's not good. Until you realize that transmitting the data across the network takes 200 ms. At that point, Bar being slower doesn't matter so much. (Just one possible example of where being X times slower doesn't make a noticeable or impactful difference, not meant to be necessarily super realistic.)

    – Becuzz
    Apr 12 at 15:46






  • 8





    @Ewan If the reduction in the bill saves you $15/month but it will take a $125/hour contractor 4 hours to fix and test it, I could easily justify that not being worth a business's time to do (or at least not do right now if time to market is crucial, etc.). There are always tradeoffs. And what makes sense in one circumstance might not in another.

    – Becuzz
    Apr 12 at 16:52






  • 3





    your AWS bills are very low indeed

    – Ewan
    Apr 12 at 16:53






  • 6





    @Ewan AWS rounds to the ceiling by batches anyways (standard is 100ms). Which means this kind of optimization only saves you anything if it consistently avoids pushing you to the next chunk.

    – Delioth
    Apr 12 at 19:02
















46














Many potential performance concerns are not really a problem in practice. The issue you raise may be one of them. In the vernacular, we call worrying about those problems without proof that they are actual problems premature optimization.



If you are writing a front-end for a web service, your performance is not going to be significantly affected by function calls, because the cost of sending data over a network far exceeds the time it takes to make a method call.



If you are writing a tight loop that refreshes a video screen sixty times a second, then it might matter. But at that point, I claim you have larger problems if you're trying to use Python to do that, a job for which Python is probably not well-suited.



As always, the way you find out is to measure. Run a performance profiler or some timers over your code. See if it's a real problem in practice.




The Single Responsibility Principle is not a law or mandate; it is a guideline or principle. Software design is always about trade-offs; there are no absolutes. It is not uncommon to trade off readability and/or maintainability for speed, so you may have to sacrifice SRP on the altar of performance. But don't make that tradeoff unless you know you have a performance problem.






share|improve this answer




















  • 3





    I think this was true, until we invented cloud computing. Now one of the two functions effectively costs 4 times as much as the other

    – Ewan
    Apr 12 at 15:14






  • 2





    @Ewan 4 times may not matter until you've measured it to be significant enough to care about. If Foo takes 1 ms and Bar takes 4 ms that's not good. Until you realize that transmitting the data across the network takes 200 ms. At that point, Bar being slower doesn't matter so much. (Just one possible example of where being X times slower doesn't make a noticeable or impactful difference, not meant to be necessarily super realistic.)

    – Becuzz
    Apr 12 at 15:46






  • 8





    @Ewan If the reduction in the bill saves you $15/month but it will take a $125/hour contractor 4 hours to fix and test it, I could easily justify that not being worth a business's time to do (or at least not do right now if time to market is crucial, etc.). There are always tradeoffs. And what makes sense in one circumstance might not in another.

    – Becuzz
    Apr 12 at 16:52






  • 3





    your AWS bills are very low indeed

    – Ewan
    Apr 12 at 16:53






  • 6





    @Ewan AWS rounds to the ceiling by batches anyways (standard is 100ms). Which means this kind of optimization only saves you anything if it consistently avoids pushing you to the next chunk.

    – Delioth
    Apr 12 at 19:02














46












46








46







Many potential performance concerns are not really a problem in practice. The issue you raise may be one of them. In the vernacular, we call worrying about those problems without proof that they are actual problems premature optimization.



If you are writing a front-end for a web service, your performance is not going to be significantly affected by function calls, because the cost of sending data over a network far exceeds the time it takes to make a method call.



If you are writing a tight loop that refreshes a video screen sixty times a second, then it might matter. But at that point, I claim you have larger problems if you're trying to use Python to do that, a job for which Python is probably not well-suited.



As always, the way you find out is to measure. Run a performance profiler or some timers over your code. See if it's a real problem in practice.




The Single Responsibility Principle is not a law or mandate; it is a guideline or principle. Software design is always about trade-offs; there are no absolutes. It is not uncommon to trade off readability and/or maintainability for speed, so you may have to sacrifice SRP on the altar of performance. But don't make that tradeoff unless you know you have a performance problem.






share|improve this answer















Many potential performance concerns are not really a problem in practice. The issue you raise may be one of them. In the vernacular, we call worrying about those problems without proof that they are actual problems premature optimization.



If you are writing a front-end for a web service, your performance is not going to be significantly affected by function calls, because the cost of sending data over a network far exceeds the time it takes to make a method call.



If you are writing a tight loop that refreshes a video screen sixty times a second, then it might matter. But at that point, I claim you have larger problems if you're trying to use Python to do that, a job for which Python is probably not well-suited.



As always, the way you find out is to measure. Run a performance profiler or some timers over your code. See if it's a real problem in practice.




The Single Responsibility Principle is not a law or mandate; it is a guideline or principle. Software design is always about trade-offs; there are no absolutes. It is not uncommon to trade off readability and/or maintainability for speed, so you may have to sacrifice SRP on the altar of performance. But don't make that tradeoff unless you know you have a performance problem.







share|improve this answer














share|improve this answer



share|improve this answer








edited Apr 12 at 16:40

























answered Apr 12 at 14:57









Robert HarveyRobert Harvey

167k44387601




167k44387601







  • 3





    I think this was true, until we invented cloud computing. Now one of the two functions effectively costs 4 times as much as the other

    – Ewan
    Apr 12 at 15:14






  • 2





    @Ewan 4 times may not matter until you've measured it to be significant enough to care about. If Foo takes 1 ms and Bar takes 4 ms that's not good. Until you realize that transmitting the data across the network takes 200 ms. At that point, Bar being slower doesn't matter so much. (Just one possible example of where being X times slower doesn't make a noticeable or impactful difference, not meant to be necessarily super realistic.)

    – Becuzz
    Apr 12 at 15:46






  • 8





    @Ewan If the reduction in the bill saves you $15/month but it will take a $125/hour contractor 4 hours to fix and test it, I could easily justify that not being worth a business's time to do (or at least not do right now if time to market is crucial, etc.). There are always tradeoffs. And what makes sense in one circumstance might not in another.

    – Becuzz
    Apr 12 at 16:52






  • 3





    your AWS bills are very low indeed

    – Ewan
    Apr 12 at 16:53






  • 6





    @Ewan AWS rounds to the ceiling by batches anyways (standard is 100ms). Which means this kind of optimization only saves you anything if it consistently avoids pushing you to the next chunk.

    – Delioth
    Apr 12 at 19:02



























First, some clarifications: Python is a language. There are several different interpreters that can execute code written in the Python language. The reference implementation (CPython) is usually what people mean when they talk about "Python" as if it were an implementation, but it is important to be precise when talking about performance characteristics, because they can differ wildly between implementations.




How and where do we embrace the SRP without compromising performance in Python, as its inherent implementation directly impacts it?




Case 1.)
If you have pure Python code (<= Python language version 3.5; 3.6 has "beta level support") which relies only on pure Python modules, you can embrace SRP everywhere and use PyPy to run it. PyPy (https://morepypy.blogspot.com/2019/03/pypy-v71-released-now-uses-utf-8.html) is a Python interpreter with a just-in-time (JIT) compiler that can remove function call overhead, as long as it has sufficient time to "warm up" by tracing the executed code (a few seconds, IIRC). **
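
To see the overhead the JIT is removing, here is a minimal sketch (my own illustration, not the answerer's code): under CPython the factored version pays for every call, while under PyPy the two typically converge once the JIT has warmed up and inlined the small function.

    import timeit

    def square(x):                  # one tiny, single-responsibility step
        return x * x

    def sum_factored(n):            # SRP-friendly: delegates per element
        return sum(square(i) for i in range(n))

    def sum_inlined(n):             # hand-inlined to dodge call overhead
        return sum(i * i for i in range(n))

    # Run under both interpreters: `python3 bench.py` vs `pypy3 bench.py`.
    print("factored:", timeit.timeit(lambda: sum_factored(10_000), number=500))
    print("inlined: ", timeit.timeit(lambda: sum_inlined(10_000), number=500))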



If you are restricted to the CPython interpreter, you can extract the slow functions into extensions written in C, which are pre-compiled and do not suffer from any interpreter overhead. You can still use SRP everywhere, but your code will be split between Python and C. Whether this is better or worse for maintainability than selectively abandoning SRP while sticking to pure Python code depends on your team; but if you have performance-critical sections of code, it will undoubtedly be faster than even the most optimized pure Python interpreted by CPython. Many of Python's fastest mathematical libraries use this method (numpy and scipy, IIRC), which is a nice segue into Case 2...
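
As a sketch of what that split can look like in practice (function names invented for illustration, with numpy standing in for the hand-written C extension), the single-responsibility structure stays in Python while the element-wise loop runs in pre-compiled C code:

    import numpy as np

    def load_measurements(n):
        # Responsibility 1: acquire the data (stubbed with random values here).
        return np.random.rand(n)

    def normalize(values):
        # Responsibility 2: the numeric hot path. The per-element work happens
        # inside numpy's C loops, not in the Python interpreter.
        return (values - values.mean()) / values.std()

    def report(values):
        # Responsibility 3: presentation.
        print(f"min={values.min():.3f} max={values.max():.3f}")

    report(normalize(load_measurements(100_000)))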



Case 2.)
If you have Python code which uses C extensions (or relies on libraries which use C extensions), PyPy may or may not be useful, depending on how they're written. See http://doc.pypy.org/en/latest/extending.html for details, but the summary is that CFFI has minimal overhead, while ctypes is slower (using it with PyPy may be even slower than with CPython).
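
For reference, a minimal CFFI sketch (the libclamp.so library and its clamp function are hypothetical stand-ins for whatever C code you have extracted):

    from cffi import FFI

    ffi = FFI()

    # Declare the C signature exactly as it appears in the (hypothetical) header.
    ffi.cdef("double clamp(double x, double lo, double hi);")

    # Load the pre-compiled shared library (hypothetical name and path).
    lib = ffi.dlopen("./libclamp.so")

    # Calls cross the boundary through CFFI's thin layer, which is the
    # binding style PyPy's JIT handles with minimal overhead.
    print(lib.clamp(5.0, 0.0, 1.0))  # -> 1.0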



Cython (https://cython.org/) is another option, one I don't have as much experience with. I mention it for the sake of completeness so my answer can "stand on its own", but I don't claim any expertise. From my limited usage, it felt like I had to work harder to get the same speed improvements I could get "for free" with PyPy; and if I needed something better than PyPy, it was just as easy to write my own C extension (which has the benefit that if I re-use the code elsewhere or extract part of it into a library, all of my code can still run under any Python interpreter and is not required to be run by Cython).



I'm scared of being "locked into" Cython, whereas any code written for PyPy can run under CPython as well.



** Some extra notes on PyPy in Production



Be very careful about making any choices that have the practical effect of "locking you in" to PyPy in a large codebase. Because some (very popular and useful) third-party libraries do not play nicely with it, for the reasons mentioned earlier, you can face very difficult decisions later if you realize you need one of those libraries. My experience is primarily in using PyPy to speed up some (but not all) microservices which are performance-sensitive, in a company environment where it adds negligible complexity to our production setup (we already have multiple languages deployed, some with different major versions, like 2.7 vs 3.5, running anyway).



I have found that using both PyPy and CPython regularly forced me to write code which relies only on guarantees made by the language specification itself, and not on implementation details which are subject to change at any time. You may find thinking about such details to be an extra burden, but I found it valuable for my professional development, and I think it is "healthy" for the Python ecosystem as a whole.
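
A common concrete instance of that distinction (my example, not the answerer's): CPython's reference counting happens to close a file the moment the last reference disappears, but that is an implementation detail; PyPy's garbage collector may close it much later, so only the explicit form below is guaranteed by the language itself.

    # Works "by accident" on CPython: reference counting closes the file as
    # soon as the temporary object is discarded. PyPy makes no such promise,
    # so the handle may stay open until the GC eventually runs.
    data = open("config.txt").read()        # "config.txt" is a placeholder

    # Guaranteed by the language spec: the context manager closes the file
    # deterministically on every conforming implementation.
    with open("config.txt") as f:
        data = f.read()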






– Steven Jackson (new contributor), answered Apr 13 at 20:51 (edited 14 hours ago)




















  • Yeah! I've been considering focusing on C extensions for this case instead of abandoning the principle and writing wild code; the other answers gave me the impression it would be slow regardless unless I swapped away from the reference interpreter. To clear it up: OOP would still be a sensible approach in your view?

    – lucasgcb, Apr 13 at 21:59

  • With case 1 (2nd paragraph), do you not get the same overhead calling the functions, even if the functions themselves are compiled?

    – Ewan, 2 days ago

  • CPython is the only interpreter that is generally taken seriously. PyPy is interesting, but it certainly isn't seeing any kind of widespread adoption. Furthermore, its behavior differs from CPython, and it doesn't work with some important packages, e.g. scipy. Few sane developers would recommend PyPy for production. As such, the distinction between the language and the implementation is inconsequential in practice.

    – jpmc26, 2 days ago

  • I think you hit the nail on the head, though. There's no reason you couldn't have a better interpreter, or a compiler. It's not intrinsic to Python as a language; you are just stuck with practical realities.

    – Ewan, 2 days ago

  • @jpmc26 I've used PyPy in production, and recommend that other experienced developers consider doing so. It is great for microservices using falconframework.org for lightweight REST APIs (as one example). Behavior differing because developers rely on implementation details which are NOT a guarantee of the language is not a reason not to use PyPy; it's a reason to rewrite your code. The same code may break anyway if CPython makes changes to its implementation (which it is free to do as long as it still conforms to the language spec).

    – Steven Jackson, yesterday















