Note: for a more comprehensive set of benchmarks, I recommend Doug Bell's Benchmark applet. I'm no longer actively maintaining these microbenchmarks.

I've created a set of microbenchmarks to test the performance of Java operations on different platforms. These can be used to guide optimization decisions and to compare different Java implementations. This page describes the microbenchmarks and analyses the results for several different Java implementations on a low-end PC. The applet and results submitted by readers for different platforms, together with the industry-standard Linpack benchmark, are on separate pages:
- Java Microbenchmark: applet and results
- Java Linpack: applet and results
Description
The table below shows the time in microseconds to execute various Java operations on a 486 PC (AMD DX4-120 with 24MB RAM and 256kB L2 cache, running Windows 95). The Java implementations tested are listed below; the Sun JDK uses an interpreter, while all the rest use just-in-time compilers. Times are quoted to only two significant figures because the variance between runs is typically on the order of 5-10%.
- Sun JDK 1.0.2.
- Symantec Café 1.51.
- Symantec Visual Café PR 2.
- Microsoft Internet Explorer 3.0 (Visual J++ uses the same VM).
- Netscape Navigator 3.0.
- Asymetrix SuperCede beta.
| Description | JDK | Café | VCafé | IE | NN | SC |
| --- | --- | --- | --- | --- | --- | --- |
| Loop overhead: while (Go) n++ | 1.1 | 0.052 | 0.050 | 0.066 | 0.067 | 0.065 |
| Local variable assignment: i = n | 0.48 | 0.037 | 0.027 | 0.009 | 0.006 | 0.006 |
| Instance var. assign.: this.i = n | 1.0 | 0.043 | 0.041 | 0.035 | 0.034 | 0.034 |
| Array element assign.: a[0] = n | 1.2 | 0.11 | 0.066 | 0.043 | 0.087 | 0.033 |
| Byte increment: b++ | 1.3 | 0.068 | 0.055 | 0.048 | 0.053 | 0.007 |
| Short increment: s++ | 1.3 | 0.067 | 0.054 | 0.048 | 0.053 | 0.014 |
| Int increment: i++ | 0.31 | 0.030 | 0.022 | 0.006 | 0.011 | 0.006 |
| Long increment: l++ | 1.2 | 0.071 | 0.044 | 0.049 | 0.038 | 0.007 |
| Float increment: f++ | 1.3 | 0.25 | 0.18 | 0.17 | 0.18 | 0.18 |
| Double increment: d++ | 1.2 | 0.32 | 0.20 | 0.23 | 0.19 | 0.18 |
| Object creation: new Object() | 13.0 | 9.5 | 8.2 | 13.0 | 26.0 | 5.9 |
| Array creation: new int[10] | 13.0 | 11.0 | 9.2 | 13.0 | 42.0 | 39.0 |
| Method call: null_func() | 2.2 | 0.22 | 0.12 | 0.13 | 0.16 | 0.13 |
| Synchronized call: sync_func() | 19.0 | 13.0 | 3.6 | 4.1 | 16.0 | 5.1 |
| Math function: abs() | 4.9 | 0.68 | 0.13 | 0.55 | 0.59 | 0.68 |
| Inline code: (x < 0) ? -x : x | 0.55 | 0.087 | 0.09 | 0.084 | 0.19 | 0.61 |

Analysis
- Variable accesses:
- The first benchmark times a null loop (this time has been subtracted from all the other results). The next three benchmarks time how long it takes to store an integer in a local variable, in an instance variable, and in an array. As might be expected, local variables are fastest. Instance variables are slower because there's an extra field operation involved (tip from KB Sriram), and array accesses are slower still, due to the bounds checking that Java performs. Note that most JIT compilers store local variables in registers, resulting in an additional speedup.
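The measurement pattern described above (time a null loop, then subtract that overhead from each operation's time) can be sketched as follows. This is an illustrative reconstruction, not the original applet's code; the class name, method names, and iteration count are all assumptions.

```java
// Sketch of the null-loop-subtraction timing pattern (illustrative only).
public class VarAccessBench {
    static int count = 1000000;   // iterations per test (assumed value)
    int instanceVar;              // instance variable target
    int[] array = new int[1];     // array element target

    // Time an empty loop so its overhead can be subtracted from the others.
    static long nullLoop() {
        long start = System.currentTimeMillis();
        int n = 0;
        for (int j = 0; j < count; j++) { n++; }
        return System.currentTimeMillis() - start;
    }

    long localAssign() {          // store into a local variable
        long start = System.currentTimeMillis();
        int i = 0;
        for (int j = 0; j < count; j++) { i = j; }
        return System.currentTimeMillis() - start;
    }

    long instanceAssign() {       // store into an instance variable
        long start = System.currentTimeMillis();
        for (int j = 0; j < count; j++) { instanceVar = j; }
        return System.currentTimeMillis() - start;
    }

    long arrayAssign() {          // store into an array element (bounds-checked)
        long start = System.currentTimeMillis();
        for (int j = 0; j < count; j++) { array[0] = j; }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        VarAccessBench b = new VarAccessBench();
        long overhead = nullLoop();
        System.out.println("local:    " + (b.localAssign() - overhead) + " ms");
        System.out.println("instance: " + (b.instanceAssign() - overhead) + " ms");
        System.out.println("array:    " + (b.arrayAssign() - overhead) + " ms");
    }
}
```

Note that on a modern JVM an optimizing JIT may eliminate these dead stores entirely, so treat this only as a sketch of the methodology.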
- Data types:
- The next six benchmarks time how long it takes to increment the byte, short, int, long, float, and double data types. ints are consistently fastest, especially when using JIT compilers. For floating-point code, double is typically only slightly slower than float.
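The data-type comparison can be reproduced with the same loop pattern; here is a minimal sketch contrasting the cheapest type (int) with double. The class name and iteration count are illustrative assumptions.

```java
// Sketch comparing increment cost of int versus double (illustrative only).
public class IncrementBench {
    public static void main(String[] args) {
        int iters = 1000000;   // assumed iteration count

        long t0 = System.currentTimeMillis();
        int i = 0;
        for (int j = 0; j < iters; j++) { i++; }
        long intTime = System.currentTimeMillis() - t0;

        t0 = System.currentTimeMillis();
        double d = 0.0;
        for (int j = 0; j < iters; j++) { d++; }
        long doubleTime = System.currentTimeMillis() - t0;

        System.out.println("int: " + intTime + " ms, double: " + doubleTime + " ms");
    }
}
```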
- Objects and methods:
- Finally, we time creating objects and integer arrays, calling normal and synchronized methods, and calling a predefined math function versus inlining a simple version of the same function. Creating an object is slow, and current JIT compilers haven't improved it much; it typically costs about the same as creating an integer array of length 10. Synchronized method calls are 10-100 times slower than the normal variety. Replacing a system function such as Math.abs with simplified inlined code ((x < 0.0) ? -x : x) can significantly improve performance for interpreters and older JIT compilers. However, Visual Café represents the newer wave of JIT compilers with specialized native code for some built-in functions -- you're unlikely to be able to do better than these!
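The method-call comparisons above can be sketched in a few lines: an empty method, its synchronized counterpart (which must acquire the object's monitor on every call), and Math.abs against the hand-inlined expression. Class and method names here are illustrative, not the original benchmark's.

```java
// Sketch contrasting plain vs. synchronized calls and library vs. inlined abs.
public class CallBench {
    void nullFunc() {}                  // plain method call
    synchronized void syncFunc() {}     // acquires this object's monitor per call

    public static void main(String[] args) {
        CallBench b = new CallBench();
        int iters = 1000000;            // assumed iteration count

        long t0 = System.currentTimeMillis();
        for (int j = 0; j < iters; j++) { b.nullFunc(); }
        long plain = System.currentTimeMillis() - t0;

        t0 = System.currentTimeMillis();
        for (int j = 0; j < iters; j++) { b.syncFunc(); }
        long synced = System.currentTimeMillis() - t0;

        // Math.abs versus the hand-inlined equivalent from the text.
        double x = -1.5;
        double viaLibrary = Math.abs(x);
        double viaInline = (x < 0.0) ? -x : x;

        System.out.println("plain: " + plain + " ms, synchronized: " + synced + " ms");
        System.out.println("abs agree: " + (viaLibrary == viaInline));
    }
}
```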
- Just-in-time vs interpreted:
- Comparing the results for Sun's JDK interpreter with a JIT compiler (e.g. VJ++), we can see that just-in-time compilers improve the performance of most operations by 5-30 times. However, the time to create an object hasn't improved at all, and the time to call a synchronized method has only improved by a factor of four, so reducing the number of objects created (see the speed page) and minimizing the amount of synchronization in your code have both become even more important.
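One common way to reduce object creation, in the spirit of the advice above, is to allocate a buffer once and reuse it across iterations instead of constructing a fresh object each time. This is a generic illustration, not a technique taken from this page; StringBuffer and setLength have been available since Java 1.0.

```java
// Illustration of reducing allocation: reuse one StringBuffer rather than
// creating a new object on every iteration.
public class ReuseExample {
    public static void main(String[] args) {
        StringBuffer buf = new StringBuffer();   // allocated once, reused below
        for (int i = 0; i < 3; i++) {
            buf.setLength(0);                    // clear contents, keep the object
            buf.append("row ").append(i);
            System.out.println(buf.toString());
        }
    }
}
```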
| http://www.cs.cmu.edu/~jch/java/benchmarks.html | Java Microbenchmarks |
| Last modified: Wed 18 Mar 1998 | Copyright © 1996, 1997 Jonathan Hardwick |