[concurrency-interest] Thread priority

oleksandr otenko oleksandr.otenko at oracle.com
Thu Feb 7 16:38:53 EST 2013


-Xcomp
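
(-Xcomp forces HotSpot to compile every method on its first invocation.
A sketch of an invocation; the class name is just an example:)

    java -Xcomp -XX:+PrintCompilation MyApp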

Alex

On 07/02/2013 16:30, Nathan Reynolds wrote:
> Sorry, I wasn't very specific.  The 10,000-invocation threshold is the 
> default for the server VM.  For the client VM, I think it is 1,500 
> invocations.
>
> I simplified my explanation.  Optimization for the server VM doesn't 
> start until 10,000 method invocations or 10,000 loop iterations.  
> However, the optimized code won't be used until a thread enters the 
> method (or the loop) again after the compilation is done.
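>
> One way to check the actual threshold on a given VM (a sketch; flag 
> names and default values vary by HotSpot version and build):
>
>     java -server -XX:+PrintFlagsFinal -version | grep CompileThreshold
>     java -client -XX:+PrintFlagsFinal -version | grep CompileThreshold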
>
> The JRockit JVM never interpreted bytecode.  When a class was loaded, 
> it immediately generated native code, albeit not fully optimized.  
> This made JRockit's startup times much slower than HotSpot's.  
> However, JRockit was usually able to get to full speed much faster.  
> This prompted HotSpot to adopt a tiered compilation strategy.  1,000 
> invocations was chosen to avoid the slow startup times yet get to 
> native code sooner than waiting until 10,000 invocations.  If the 
> thresholds are configurable, you can probably get HotSpot to behave 
> like JRockit by setting the first compilation to 0 or 1 invocations.
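>
> With tiered compilation the thresholds do appear to be configurable 
> (a sketch; flag names as reported by -XX:+PrintFlagsFinal on a recent 
> HotSpot, and the application class is hypothetical):
>
>     java -XX:+TieredCompilation \
>          -XX:Tier3InvocationThreshold=1 \
>          -XX:Tier4InvocationThreshold=1 \
>          MyApp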
>
> 1,000 and 10,000 seem like arbitrary values with only a little 
> experience or science behind them.  Sure, anyone could tune these 
> values for their specific application to get optimal startup and 
> warm-up performance.  But how can we pick the best value averaged 
> over all workloads?  How do we even collect that data?  We could run 
> several tests and find the best value averaged over those tests, but 
> there will always be a better value for each individual application.  
> It seems this is something best left for the program writer to tune.
>
> Nathan Reynolds 
> <http://psr.us.oracle.com/wiki/index.php/User:Nathan_Reynolds> | 
> Architect | 602.333.9091
> Oracle PSR Engineering <http://psr.us.oracle.com/> | Server Technology
> On 2/7/2013 9:08 AM, Gregg Wonderly wrote:
>> I've always wondered why these kinds of "large" invocation counts are 
>> used.  In most applications, many methods are entered only once, and 
>> the "loops" inside those methods could be optimized much sooner.  In 
>> many of my desktop applications, I set the invocation count (on the 
>> command line) to 100 or even 25, and get faster startups and better 
>> performance for the small amount of time that I use the apps.  For 
>> the client VM, it really seems strange to wait so long (1,000 
>> invocations) to compile with instrumentation.  Then waiting ten times 
>> that many invocations to decide on the final optimizations seems a 
>> bit of a stretch.
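>>
>> (For reference, the flag in question is presumably -XX:CompileThreshold; 
>> a sketch with example values, the application name is hypothetical:)
>>
>>     java -client -XX:CompileThreshold=100 MyDesktopApp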
>>
>> Are there real data from lots of different users indicating that 
>> these "counts" are where people are ready to be more productive?  I 
>> know that there are probably lots of degenerate cases where 
>> optimizations will be missed without enough profiling data.  But it 
>> would seem better to go to native code early and adapt occasionally, 
>> rather than wait until you can be sure to be perfect.
>>
>> Gregg Wonderly
>>
>> On 2/7/2013 9:49 AM, Nathan Reynolds wrote:
>>> With tiered compilation, once a method reaches 1,000 invocations 
>>> (configurable?), it is compiled with instrumentation.  Then when it 
>>> reaches 10,000 invocations (configurable), it is fully optimized 
>>> using the instrumentation profiling data.  For these operations, the 
>>> JIT threads should run at a higher priority.  However, some 
>>> optimizations are too heavy to do at a high priority; those should 
>>> be done at a low priority.  Also, methods which haven't quite 
>>> reached the 1,000 invocations but are being executed could be 
>>> compiled with instrumentation at a low priority.
>>>
>>> The low priority work will only be done if the CPU isn't maxed out.  
>>> If any other thread needs the CPU, then the low priority compiler 
>>> thread will be immediately context switched off the core.  So, the 
>>> low priority compilation will never significantly hurt the 
>>> performance of the high priority threads.  For some workloads, the 
>>> low priority threads may never get a chance to run.  That's okay 
>>> because the work isn't that important.
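>>>
>>> (Today the closest knobs I'm aware of control whether compilation 
>>> runs in background compiler threads and how many of them exist, not 
>>> their OS priority; a sketch, flag names as in current HotSpot:)
>>>
>>>     java -XX:+BackgroundCompilation -XX:CICompilerCount=2 MyApp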
>>>
>>> Nathan Reynolds 
>>> <http://psr.us.oracle.com/wiki/index.php/User:Nathan_Reynolds> |
>>> Architect | 602.333.9091
>>> Oracle PSR Engineering <http://psr.us.oracle.com/> | Server Technology
>>> On 2/7/2013 12:21 AM, Stanimir Simeonoff wrote:
>>>> Thread priorities are usually NOT applied at all.
>>>> For instance:
>>>>     intx DefaultThreadPriority              = -1    {product}
>>>>     intx JavaPriority10_To_OSPriority       = -1    {product}
>>>>     intx JavaPriority1_To_OSPriority        = -1    {product}
>>>>     intx JavaPriority2_To_OSPriority        = -1    {product}
>>>>     intx JavaPriority3_To_OSPriority        = -1    {product}
>>>>     intx JavaPriority4_To_OSPriority        = -1    {product}
>>>>     intx JavaPriority5_To_OSPriority        = -1    {product}
>>>>     intx JavaPriority6_To_OSPriority        = -1    {product}
>>>>     intx JavaPriority7_To_OSPriority        = -1    {product}
>>>>     intx JavaPriority8_To_OSPriority        = -1    {product}
>>>>     intx JavaPriority9_To_OSPriority        = -1    {product}
>>>>
>>>> In other words, unless specified explicitly (e.g. 
>>>> -XX:JavaPriority10_To_OSPriority=<value>), the Java priority won't 
>>>> be mapped to an OS priority.
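>>>>
>>>> (A sketch of what enabling the mapping might look like; the nice 
>>>> value is hypothetical, and on Linux -XX:ThreadPriorityPolicy=1 
>>>> requires root:)
>>>>
>>>>     java -XX:ThreadPriorityPolicy=1 \
>>>>          -XX:JavaPriority10_To_OSPriority=-5 \
>>>>          MyApp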
>>>>
>>>> If priorities are applied, the JVM compiler/GC threads may become 
>>>> starved, which you don't want, so they have to run above normal 
>>>> priority (which requires root privileges).  Alternatively the 
>>>> normal Java threads have to run with a lower priority, which means 
>>>> other processes will have higher priority - also unpleasant.
>>>>
>>>> Stanimir
>>>>
>>>> On Thu, Feb 7, 2013 at 5:20 AM, Mohan Radhakrishnan
>>>> <radhakrishnan.mohan at gmail.com 
>>>> <mailto:radhakrishnan.mohan at gmail.com>> wrote:
>>>>
>>>>     Hi,
>>>>     Can the Thread priority setting in the API still be reliably 
>>>>     used uniformly across processors?  There are other concurrency 
>>>>     patterns in the API, but this setting is still there.
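>>>>
>>>>     (For concreteness, the API in question; whether the priority is 
>>>>     honored depends on the OS and on the JVM's priority mapping, 
>>>>     and "task" is a hypothetical Runnable:)
>>>>
>>>>         Thread t = new Thread(task);
>>>>         t.setPriority(Thread.MIN_PRIORITY); // a hint, not a guarantee
>>>>         t.start();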
>>>>
>>>>
>>>>     Thanks,
>>>>     Mohan
>>>>
