[concurrency-interest] How bad can volatile long++ be?

David Holmes dcholmes at optusnet.com.au
Tue Dec 11 18:09:45 EST 2007


Hi Osvaldo,

Taking a simple example:

   int x;  // field

    public void inc() { x++; }

the bytecode generated by javac is:

public void inc();
  Code:
   0:   aload_0
   1:   dup
   2:   getfield        #2; //Field x:I
   5:   iconst_1
   6:   iadd
   7:   putfield        #2; //Field x:I
   10:  return

}
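
(To see what that read-modify-write gap costs in practice, here is a minimal
sketch - the class and field names are just illustrative, not from the
original example - where two threads each do x++ a million times on a plain
field; the final value usually falls well short of 2000000 because the
increments interleave and overwrite each other:)

   class LostUpdates {
       static int x;   // plain field, no synchronization

       public static void main(String[] args) throws InterruptedException {
           Runnable inc = new Runnable() {
               public void run() {
                   for (int i = 0; i < 1000000; i++) {
                       x++;   // getfield / iadd / putfield - not atomic
                   }
               }
           };
           Thread t1 = new Thread(inc);
           Thread t2 = new Thread(inc);
           t1.start(); t2.start();
           t1.join();  t2.join();
           System.out.println(x);   // expected 2000000, usually prints less
       }
   }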

And you can see that there is nothing atomic in that bytecode. Trying to
recognize that the above could be replaced by a single atomic assembly
instruction is not a worthwhile "optimization":
a) if the field is not accessed concurrently then there is no need for the
atomic update, and atomic instructions have a cost precisely because they
are atomic, so such a change would actually degrade performance;
b) if the field is accessed concurrently then either:
   i) there is synchronization protecting the field - in which case we're in
the same boat as (a): the atomic is unnecessary and expensive; or
   ii) there is no sync, so the code is broken anyway and making this atomic
is unlikely to actually make the overall program correct.

Hence no point even attempting such an "optimization". :)
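
(For completeness: if the program really does need an atomic increment
shared across threads - which is what the "volatile long++" question is
ultimately about - the portable way is java.util.concurrent.atomic rather
than hoping for a particular instruction from the JIT. A minimal sketch,
with the class name Counter being purely illustrative:)

   import java.util.concurrent.atomic.AtomicLong;

   class Counter {
       private final AtomicLong x = new AtomicLong();

       public long inc() {
           return x.incrementAndGet();   // atomic read-modify-write
       }

       public long get() {
           return x.get();
       }
   }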

Cheers,
David

-----Original Message-----
From: concurrency-interest-bounces at cs.oswego.edu
[mailto:concurrency-interest-bounces at cs.oswego.edu]On Behalf Of Osvaldo
Pinali Doederlein
Sent: Tuesday, 11 December 2007 9:29 PM
To: dholmes at ieee.org
Cc: Concurrency-interest at cs.oswego.edu
Subject: Re: [concurrency-interest] How bad can volatile long++ be?


  David Holmes wrote:
David Gallardo writes:
  ++ is not atomic; while it may effectively be so on a single processor
machine, this is not the case on multiprocessor machines.

It isn't the case on single processor machines either. ++ is a
read-modify-write sequence and a thread can be preempted at any point in the
sequence.

++ is just syntactic short-hand. Write it out in full and you'd never expect
it to be atomic.
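
(Written out in full, x++ on a field amounts to roughly this - the temporary
variable name is just illustrative:

   int tmp = x;     // read
   tmp = tmp + 1;   // modify
   x = tmp;         // write

and a thread can be preempted, or another CPU can intervene, between any two
of those steps.)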

  Perhaps the problem is that on CISC platforms like the over-popular x86,
this can be compiled down to a single instruction that does the fetch,
increment and store on a memory address operand. People get used to this:
they often read assembly output from compilers, see a single pretty,
atomic-looking instruction like INC DWORD PTR [EBX], and expect this to be
the rule - "it's atomic in practice". The problem is, it's not a portable
assumption. And even on the platforms that allow this code generation, I'd
expect the best optimizers to often not emit it, for example because they
see that a new read is unnecessary on a previously used field, or because
the write can be delayed (e.g. if the increments are inside a loop this
would provide a huge boost). I wonder, though, whether any optimizers that
could do that avoid it - giving priority to performing an atomic increment -
just to compensate for buggy application code?...

  A+
  Osvaldo

Cheers,
David Holmes

_______________________________________________
Concurrency-interest mailing list
Concurrency-interest at altair.cs.oswego.edu
http://altair.cs.oswego.edu/mailman/listinfo/concurrency-interest




--
-----------------------------------------------------------------------
Osvaldo Pinali Doederlein                   Visionnaire Informática S/A
osvaldo at visionnaire.com.br                http://www.visionnaire.com.br
Technology Architect                             +55 (41) 337-1000 #223