
Python performance optimization

2012-10-16



Performance optimization – making a program run faster – is closely related to refactoring. Refactoring makes existing source code more readable and less complex on the inside, without changing its behavior on the outside. Refactoring is a costly operation. Typically, an 80/20 rule applies: a 20% effort is needed to implement the bulk of the program, and then another 80% of effort (and budget) is needed to optimize, refactor, extend and/or document the program.

Do not worry about performance during development. You first need the program to produce correct results (with correct input) before you can know what is making it slow.

Not all parts of the source code can be optimized. In Python, many operations, such as dictionary lookups and regular expressions, are already as efficient as they can be. Some operations in your code simply need to happen for the program to work. In that case, micro-optimizations (e.g., making a function 1% faster) are a waste of time.


Profiling

Typically, the top 20% slowest parts of the program may benefit from optimization. To find the slow parts, you need to profile the source code. The profile() function below uses Python's cProfile module to return a string of performance statistics. To use it, you can simply copy it into a test script:
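The original listing is not preserved here; a minimal sketch of such a profile() helper, built on the standard cProfile and pstats modules, could look like this:

```python
import cProfile
import io
import pstats

def profile(function, *args, **kwargs):
    """Run function(*args, **kwargs) under cProfile and return the
    performance statistics as a string, slowest parts first."""
    profiler = cProfile.Profile()
    profiler.enable()
    function(*args, **kwargs)
    profiler.disable()
    stream = io.StringIO()
    stats = pstats.Stats(profiler, stream=stream)
    stats.sort_stats("time").print_stats(25)  # top 25 entries by internal time
    return stream.getvalue()
```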

The following example profiles Pattern's parse() command. The parameters of parse() are passed to profile() instead. Make sure that the program stays busy for a few seconds, so that you get a reliable profile. You could execute it several times in a loop, or pass lots of data to it, for example:
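A hypothetical invocation (the sample text is made up; parse() and its lemmata parameter come from the Pattern library):

```python
from pattern.en import parse

# Repeat the text so that parse() stays busy long enough to profile.
text = "The quick brown fox jumped over the lazy dog. " * 10000
print(profile(parse, text, lemmata=True))
```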

Assessment of the output:

- brill.py has a heavy apply() that needs closer inspection.
- brill.py has a heavy load(); it probably loads a lexicon file. Not much we can do about that.
- There appears to be a generator expression (<genexpr>) in brill.py that is called 100,000 times.
- The split() method of Python's str object is called often, but we probably can't optimize that.

Once we have isolated the slow parts, we can try to make them faster. To do this, we time them one by one. Below is an example setup that executes the apply() in brill.py multiple times and outputs the elapsed time. We can tinker with the source code and verify whether the elapsed time increases or decreases.
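The original setup is not shown; a sketch of such a timing harness (the brill.apply call in the comment is the hypothetical target):

```python
import time

def elapsed(function, *args, n=100, **kwargs):
    """Call function n times and return the total elapsed time in seconds."""
    t0 = time.time()
    for _ in range(n):
        function(*args, **kwargs)
    return time.time() - t0

# Hypothetical usage, before and after tweaking the source code:
# print(elapsed(brill.apply, lexicon, tokens, n=100))
```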


Dictionaries

Membership testing is faster in a dict than in a list. Python dictionaries use hash tables, so a lookup operation (e.g., if x in y) is O(1). A lookup operation in a list means that the entire list may need to be iterated, resulting in O(n) for a list of length n.

Assume we have a list of stop words we want to filter from a given text:
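A sketch of the list-based version (the stop word list and the input are assumptions):

```python
stopwords = ["a", "an", "and", "are", "as", "at", "be", "by", "for", "the", "to"]
words = ["the", "cat", "sat", "on", "the", "mat"] * 1000000  # hypothetical input

filtered = []
for word in words:
    if word not in stopwords:  # scans the entire list for every word: O(n)
        filtered.append(word)
```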

The algorithm takes 11.6 seconds to run, and adding more stop words makes it even slower. However, the list is easily converted to a dictionary. With a dict, lookup performance is constant, regardless of dictionary size:
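A sketch of the dict-based version, reusing stopwords and words from above:

```python
stopdict = dict.fromkeys(stopwords, True)

filtered = []
for word in words:
    if word not in stopdict:  # hash table lookup: O(1), regardless of size
        filtered.append(word)
```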

The dict.fromkeys() method takes a list of keys plus a default value for all keys, and returns a new dictionary. Using this approach, the algorithm takes 4.7 seconds to run, a 2.5x speedup.


Dictionaries + caching

Here is a comparison of four implementations of the cosine distance algorithm, which measures similarity between vectors of features (words) with feature weights (word relevance). The first implementation represents vectors as ordered, equal-length
lists of feature weights. The second represents vectors as sparse dictionaries of feature → weight items, where features with weight = 0 are omitted. The third subclasses a dictionary and uses caching for l2().
The fourth adds a distance cache.

The first implementation takes 1.5 seconds to run (and more for longer vectors):
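A sketch of this version, with each vector as a plain list of weights:

```python
from math import sqrt

def l2(v):
    """L2-norm (Euclidean length) of a list of feature weights."""
    return sqrt(sum(w * w for w in v))

def distance(v1, v2):
    """Cosine similarity of two equal-length lists of feature weights."""
    return sum(w1 * w2 for w1, w2 in zip(v1, v2)) / ((l2(v1) * l2(v2)) or 1)
```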

To make it faster, we can leave out the zeros (which means fewer iterations), using dict.get() with a default value for missing features. The second implementation then takes 1.1 seconds, a 1.4x speedup:
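A sketch of the sparse version, with each vector as a {feature: weight} dict:

```python
from math import sqrt

def l2(v):
    return sqrt(sum(w * w for w in v.values()))

def distance(v1, v2):
    # v2.get(f, 0) supplies a default weight for features missing in v2,
    # so only the nonzero features of v1 are iterated.
    return sum(w * v2.get(f, 0) for f, w in v1.items()) / ((l2(v1) * l2(v2)) or 1)
```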

To make it faster still, we can cache the costly L2-norm math, since it always yields the same result for a given vector. The third implementation then takes 0.6 seconds, a 2.5x speedup:
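A sketch of the third version, subclassing dict so that each vector caches its own L2-norm:

```python
from math import sqrt

class Vector(dict):
    """A sparse {feature: weight} vector that caches its L2-norm."""
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
        self._l2 = None

    @property
    def l2(self):
        if self._l2 is None:  # computed once, then reused
            # (assumes the vector is not modified afterwards)
            self._l2 = sqrt(sum(w * w for w in self.values()))
        return self._l2

def distance(v1, v2):
    return sum(w * v2.get(f, 0) for f, w in v1.items()) / ((v1.l2 * v2.l2) or 1)
```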

Finally, we cache the distances themselves. Python's id() function returns a unique id for each object. When we calculate the distance between v1 and v2, we can store the result in a global CACHE dictionary, under CACHE[(id(v1), id(v2))] – since dictionary keys must be hashable and our vectors are dicts (which are not hashable), we can't use (v1, v2) directly.

Next time, we can check if the result is already in the cache – a dictionary lookup is faster than the math.
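A sketch of the fourth version, reusing distance() from above:

```python
CACHE = {}

def cached_distance(v1, v2):
    # id() values are hashable even though the dict-based vectors are not.
    # Caveat: an entry is only valid as long as both vectors are kept alive.
    k = (id(v1), id(v2))
    if k not in CACHE:
        CACHE[k] = distance(v1, v2)
    return CACHE[k]
```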

This ensures that calculating the distances between n vectors never takes more than O(n*n) work, no matter how often they are compared.


Sets

Set operations (union, intersection, difference) are faster than iterating over lists:

Syntax                    Operation     Description
set(list1) | set(list2)   union         New set with values from both list1 and list2.
set(list1) & set(list2)   intersection  New set with values common to list1 and list2.
set(list1) - set(list2)   difference    New set with values in list1 but not in list2.

You can use them to merge two lists, or to make a list of unique values, for example.
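For example (the sample lists are assumptions):

```python
a = ["cat", "dog", "bird", "dog"]
b = ["dog", "fish"]

merged = list(set(a) | set(b))       # unique values from both lists
common = list(set(a) & set(b))       # values present in both lists
unique_to_a = list(set(a) - set(b))  # values in a but not in b
```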


Inner for-loops

If your code has nested for-loops, all optimizations inside the inner loop count. Consider the following:
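A sketch of such nested loops (sizes chosen to match the counts below):

```python
v1 = list(range(100))
v2 = list(range(100))

for _ in range(100):  # repeat to make the timing measurable
    for i in range(len(v1)):
        for j in range(len(v2)):
            x = v1[i] * v2[j]  # v1[i] is evaluated 100 * 100 * 100 times
```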

The algorithm takes 4.0 seconds to run. It is a hypothetical example, but the point is this: we can make it faster by moving v1[i] outside of the inner loop:
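A sketch of the hoisted version:

```python
for _ in range(100):
    for i in range(len(v1)):
        v1_i = v1[i]  # hoisted out of the inner loop: 100 * 100 lookups
        for j in range(len(v2)):
            x = v1_i * v2[j]
```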

Now it takes 3.2 seconds to run. In the first example, v1[i] is evaluated 100 x 100 x 100 = 1,000,000 times. In the second, we look up item i in v1 once before iterating over v2, so v1[i] is evaluated only 100 x 100 = 10,000 times, making the algorithm 1.3x faster. Move everything you can outside of the inner loop.


Lazy if-evaluation

As in most programming languages, Python's if is lazily evaluated: in if x and y, condition y will not be tested if x is already False. We can exploit this by checking a fast condition first, before checking a slow condition.

For example, say we are looking for abbreviations in text:
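A sketch (the abbreviation list and the input are assumptions):

```python
abbreviations = ["cf.", "e.g.", "etc.", "fig.", "i.e.", "Mr.", "vs."]
words = ["Mr.", "cat", "miss.", "e.g."] * 1000000  # hypothetical input

matches = []
for word in words:
    if word in abbreviations:  # scans the list for every word
        matches.append(word)
```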

The algorithm takes 4.3 seconds to run. However, since most words are not abbreviations, we can optimize it by first checking whether a word ends with a period, which is faster than iterating over the list of known abbreviations:
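A sketch of the reordered condition:

```python
matches = []
for word in words:
    # word[-1:] == "." is a cheap string check (and safe for empty strings);
    # the slower list scan only runs for words that end with a period.
    if word[-1:] == "." and word in abbreviations:
        matches.append(word)
```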

Now it takes 3.1 seconds to run, a 1.4x speedup.


String methods & regular expressions

Regular expressions in Python are fast, because the matching is pushed down to compiled C code. However, in many situations simple string methods are even faster. Below is a list of useful string methods. If you do use regular expressions, remember to add source code comments explaining what they do.
Method                        Description
str[-1] == 'x'                True if the last character is "x" (raises an exception if len(str) == 0).
str.isalpha()                 True if the string contains only a-z | A-Z characters.
str.isdigit()                 True if the string contains only 0-9 characters.
str.startswith(('x', 'yz'))   True if the string starts with "x" or "yz".
str.endswith(('x', 'yz'))     True if the string ends with "x" or "yz".
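For example, a plain string method can often replace a simple regular expression (a sketch):

```python
import re

word = "U.S."

# Both check whether the word ends with a period:
assert bool(re.search(r"\.$", word)) == word.endswith(".")
```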


String concatenation

Format strings are often faster than concatenating values to strings:
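A sketch of both spellings (word and tag are made-up values):

```python
word, tag = "cat", "NN"

s1 = "<word type=" + tag + ">" + word + "</word>"  # concatenation
s2 = "<word type=%s>%s</word>" % (tag, word)       # format string, same result
```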

If you are constructing a large string (for example, XML output), it is faster to append the different parts to a list and collapse it at the end:
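A sketch, assuming a list of hypothetical (word, tag) pairs:

```python
tagged = [("the", "DT"), ("cat", "NN"), ("sat", "VBD")]  # hypothetical data

parts = []
for word, tag in tagged:
    parts.append("<word type=%s>%s</word>" % (tag, word))
xml = "\n".join(parts)  # collapse the list into one string at the end
```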


List comprehension

List comprehensions are faster than building a new list in a for-loop. The first example below uses a loop and takes 6.6 seconds. The second uses a list comprehension and takes 5.3 seconds, a 1.2x speedup.
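A sketch of both versions (the input list is an assumption):

```python
words = ["the", "cat", "sat", "on", "the", "mat"] * 1000000  # hypothetical input

# First example: a for-loop that builds a new list.
filtered = []
for word in words:
    if len(word) > 2:
        filtered.append(word)

# Second example: a list comprehension, same result.
filtered = [word for word in words if len(word) > 2]
```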


If + None

if done is not None is faster than if done != None, which in turn is faster than if not done. It's nitpicking, but it matters inside inner loops.
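You can measure the difference yourself with timeit (a sketch; absolute numbers will vary):

```python
from timeit import timeit

done = 0
for test in ("done is not None", "done != None", "not done"):
    print(test, timeit(test, globals={"done": done}))
```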