JVM Performance Magic Tricks
2015-06-27 00:10
HotSpot, the JVM we all know and love, is the brain in which our Java and Scala juices flow. Over the years, it’s been improved and tweaked by more than a handful of engineers, and with every iteration, the speed and efficiency
of its code execution is nearing that of native compiled code.
At its core lies the JIT (“Just-In-Time”) compiler. The sole purpose of this component is to make your code run
fast, and it is one of the reasons HotSpot is so popular and successful.
What does the JIT compiler actually do?
While your code is being executed, the JVM gathers information about its behavior. Once enough statistics are gathered about a hot method (10K invocations is the default threshold), the compiler kicks in, and converts that method's platform-independent "slow" bytecode into an optimized, lean, mean compiled version of itself.
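You can watch this happen on a HotSpot JVM by logging compilation decisions. Below is a minimal sketch (the class and method names are made up for illustration); `-XX:+PrintCompilation` is a real HotSpot flag that prints a line for each method the JIT compiles:

```java
public class HotLoop {
    // A deliberately hot method: it is invoked far more often than the
    // default ~10K-invocation threshold, so the JIT should compile it.
    static int square(int x) {
        return x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) {
            sum += square(i % 100);
        }
        // Run with:  java -XX:+PrintCompilation HotLoop
        // and look for a log line mentioning HotLoop::square.
        System.out.println(sum); // prints 328350000
    }
}
```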
Some optimizations are obvious: simple method inlining, dead code removal, replacing library calls with native math operations, etc. The JIT compiler doesn’t stop there, mind you. Here are some of the more interesting optimizations
performed by it:
Divide and conquer
How many times have you used the following pattern:

```java
StringBuilder sb = new StringBuilder("Ingredients: ");

for (int i = 0; i < ingredients.length; i++) {
    if (i > 0) {
        sb.append(", ");
    }
    sb.append(ingredients[i]);
}

return sb.toString();
```
Or perhaps this one:
```java
boolean nemoFound = false;

for (int i = 0; i < fish.length; i++) {
    String curFish = fish[i];

    if (!nemoFound) {
        if (curFish.equals("Nemo")) {
            System.out.println("Nemo! There you are!");
            nemoFound = true;
            continue;
        }
    }

    if (nemoFound) {
        System.out.println("We already found Nemo!");
    } else {
        System.out.println("We still haven't found Nemo :(");
    }
}
```
What these loops have in common is that each does one thing for a while, and then another thing from a certain point on. The compiler can spot these patterns, split the loops into separate cases, or "peel" several iterations.
Let's take the first loop for example. The if (i > 0) condition is false for a single iteration, and from that point on it always evaluates to true. Why check it every time, then? The compiler would compile that code as if it were written like so:
```java
StringBuilder sb = new StringBuilder("Ingredients: ");

if (ingredients.length > 0) {
    sb.append(ingredients[0]);

    for (int i = 1; i < ingredients.length; i++) {
        sb.append(", ");
        sb.append(ingredients[i]);
    }
}

return sb.toString();
```
This way, the redundant if (i > 0) check is removed, even if some code gets duplicated in the process, as speed is what it's all about.
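The same peeling idea applies to the Nemo loop above. Here is a hand-peeled sketch of it, with the output collected into a list so the behavior is easy to check (the report helper is ours, not something the compiler generates):

```java
import java.util.ArrayList;
import java.util.List;

public class NemoPeeled {
    // Hand-peeled version of the second loop: a "searching" phase runs
    // until Nemo turns up, then a simpler "already found" phase handles
    // the remaining fish without re-testing nemoFound on every iteration.
    static List<String> report(String[] fish) {
        List<String> out = new ArrayList<>();
        int i = 0;
        for (; i < fish.length; i++) {              // phase 1: still searching
            if (fish[i].equals("Nemo")) {
                out.add("Nemo! There you are!");
                i++;
                break;
            }
            out.add("We still haven't found Nemo :(");
        }
        for (; i < fish.length; i++) {              // phase 2: already found
            out.add("We already found Nemo!");
        }
        return out;
    }
}
```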
Living on the edge
Null checks are bread-and-butter. Sometimes null is a valid value for our references (e.g. indicating a missing value, or an error), sometimes we add null checks just to be on the safe side, and sometimes we don't check at all.

Some of these checks and dereferences may never fail (fail here meaning that a null is actually encountered). One classic example is an assertion like this:
```java
public static String l33tify(String phrase) {
    if (phrase == null) {
        throw new IllegalArgumentException("phrase must not be null");
    }

    return phrase.replace('e', '3');
}
```
If your code behaves well and never passes null as an argument to l33tify, the assertion will never fail.
Another example would be a dereference without an explicit null check. Even though we don't always check for null ourselves -- especially in cases in which we know (or assume) that null isn't a possibility -- the JVM must always perform the check internally, as it must throw a NullPointerException (and not come crashing down) if null is eventually encountered.
After executing the above code many, many times without ever entering the body of the if statement, the JIT compiler might make the optimistic assumption that this check is most likely unnecessary. It would then proceed to compile
the method, dropping the check altogether, as if it were written like so:
```java
public static String l33tify(String phrase) {
    return phrase.replace('e', '3');
}
```
Not testing for null in the above code can result in a significant performance boost, which may be a pure win in most cases.
But what if that happy-path assumption eventually proves to be wrong?
Since the JVM is now executing native compiled code, a null reference would not result in a fuzzy NullPointerException, but rather in a real, harsh memory access violation. The JVM, being the low-level creature that it is, intercepts the resulting segmentation fault, recovers, and follows up with a deoptimization: the compiler can no longer assume that the null check is redundant, so it recompiles the method, this time with the null check in place.
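The important point is that this speculation is invisible to the program: even after thousands of successful calls, a late null still produces exactly the exception the source promises. A small sketch of that guarantee (the warm-up count is an arbitrary number chosen to exceed the default compile threshold):

```java
public class L33tDemo {
    public static String l33tify(String phrase) {
        if (phrase == null) {
            throw new IllegalArgumentException("phrase must not be null");
        }
        return phrase.replace('e', '3');
    }

    public static void main(String[] args) {
        // Warm up: enough calls that the JIT may compile l33tify and
        // speculatively drop the never-taken null check.
        for (int i = 0; i < 20_000; i++) {
            l33tify("performance");
        }
        // A late null triggers deoptimization under the hood, but the
        // observable behavior is unchanged: the exception still fires.
        try {
            l33tify(null);
        } catch (IllegalArgumentException expected) {
            System.out.println("null still rejected");
        }
    }
}
```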
Virtual insanity
One of the main differences between the JVM's JIT compiler and static compilers such as C++'s is that the JIT compiler has dynamic runtime data on which it can rely when making decisions.

Method inlining is a common optimization in which the compiler takes a complete method and inserts its code into another's, in order to avoid a method call. This gets a little tricky when dealing with virtual method invocations (or dynamic dispatch).
Take the following code for example:
```java
public class Main {
    public static void perform(Song s) {
        s.sing();
    }
}

public interface Song {
    void sing();
}

public class GangnamStyle implements Song {
    @Override
    public void sing() {
        System.out.println("Oppan gangnam style!");
    }
}

public class Baby implements Song {
    @Override
    public void sing() {
        System.out.println("And I was like baby, baby, baby, oh");
    }
}

// More implementations here
```
The method perform might be executed millions of times, and each time an invocation of the method sing takes place. Invocations are costly, especially ones such as this, since the actual code to execute must be selected dynamically each time according to the runtime type of s. Inlining seems like a distant dream at this point, doesn't it?
Not necessarily! After executing perform a few thousand times, the compiler might decide, according to the statistics it has gathered, that 95% of the invocations target an instance of GangnamStyle. In these cases, the HotSpot JIT can perform an optimistic optimization with the intent of eliminating the virtual call to sing. In other words, the compiler will generate native code along these lines:
```java
public static void perform(Song s) {
    // "fastnativeinstanceof" is pseudocode for a cheap native type check
    if (s fastnativeinstanceof GangnamStyle) {
        System.out.println("Oppan gangnam style!");
    } else {
        s.sing();
    }
}
```
Since this optimization relies on runtime information, it can eliminate most of the invocations to sing, even though they are polymorphic.
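Rendered as valid Java (the snippet above isn't compilable as written), the guarded devirtualization amounts to a cheap type test protecting the inlined common case, with the ordinary virtual call as a fallback. The sketch below returns strings instead of printing, purely so the behavior is easy to verify:

```java
public class GuardedDispatch {
    interface Song { String sing(); }

    static class GangnamStyle implements Song {
        public String sing() { return "Oppan gangnam style!"; }
    }

    static class Baby implements Song {
        public String sing() { return "And I was like baby, baby, baby, oh"; }
    }

    // What the JIT effectively emits: the common case is guarded by a
    // cheap type check and its body is inlined; rare receivers fall back
    // to the regular (slower) virtual dispatch.
    static String perform(Song s) {
        if (s instanceof GangnamStyle) {
            return "Oppan gangnam style!"; // inlined body, no virtual call
        }
        return s.sing(); // uncommon path: real dynamic dispatch
    }
}
```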
The JIT compiler has a lot more tricks up its sleeve, but these are just a few to give you a taste of what goes on under the hood when our code is executed and optimized by the JVM.
Can I help?
The JIT compiler is a compiler for straightforward people: it is built to optimize straightforward code, and it searches for the patterns that appear in everyday, standard code. The best way to help your compiler is to not try so hard to help it -- just write your code as you otherwise would.
Join the discussion on reddit!
This post is also available on Speaker Deck
From:http://blog.takipi.com/jvm-performance-magic-tricks/