[concurrency-interest] JLS 17.7 Non-atomic treatment of double and long : Android

Gregg Wonderly gergg at cox.net
Tue Apr 30 22:21:02 EDT 2013

The problem is that people can trivially create data races.  It's happened everywhere; there is a vast amount of legacy Java code with data races.  But that's not the large problem.  Data races are one thing.  The large problem is when the JIT reorders and optimizes code so that it is no longer sequentially consistent.  It then lands on the developer's shoulders to understand every possible, undocumented optimization the compiler might perform, just to work out why the code is misbehaving.  Only then can they formulate which changes will actually correct the behavior of their code, without making it "slower" than it needs to be.  To me, that is exactly the wrong way for a developer to test and optimize their code.  Visible data races that are only about "the data" are trivial to reason about.

If I saw Thread.setName/getName's lack of synchronization produce a corrupted thread name out in the wild, I'd be writing a bug report about the JVM causing memory corruption before I would ever consider that the array had been allocated and made visible before it was filled.  That's not how the code is written, and that's not how it should execute unless the developer has decided that such an optimization is safe for their application to see.
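As an editorial sketch (not the actual JDK source; the field and method names here are illustrative), the pattern at issue looks roughly like this: a char[] is filled and then published through a plain field, so under JLS 17 a racing reader may legally observe the array reference before the writes that filled it.

```java
// Hypothetical sketch of the setName/getName pattern under discussion;
// this is illustrative, not the real java.lang.Thread source.
class RacyName {
    private char[] name;                   // plain field: no happens-before edge

    void setName(String s) {
        char[] tmp = new char[s.length()];
        s.getChars(0, s.length(), tmp, 0); // fill the array...
        name = tmp;                        // ...then publish it; a racing reader
                                           // may see the reference before the
                                           // element writes become visible
    }

    String getName() {
        return new String(name);           // unsynchronized read
    }
}
```

In a single thread the code behaves exactly as written; the hazard only appears when another thread reads the name concurrently, which is precisely the environment thread names live in.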

Suggesting that a developer opts out of the visibility of these optimizations by doing nothing at all is where I must respectfully disagree.  This is not a new issue.  The design of many functional languages rests on the simple fact that developers shouldn't be burdened with designating that they want correctly executing code.  They should get correctness by default, and should have to work for "faster" or "more performant" or "less latency" through explicit actions they take in code structure.

Thread.setName/getName is a great example of "no thought given" to concurrency in the original design.  That code, as said before, is only ever executed in a concurrent environment.  Why didn't the original code have a correct concurrency design?  Why hasn't it been visible to enough people that a bug report was written and the problem already fixed?

The answer is that the racy code doesn't "act" wrong under general usage patterns.  But optimizations based on how volatile works, and on JIT developers exploiting the absence of "concurrency selected" coding, could break that code at any point.

This is how all of that legacy code lying around on the internet is going to come down.  It's going to start randomly breaking and causing completely unexplainable bugs.  People will not remember enough details about this code to understand that the JIT is causing their problems through reordering, loop hoisting, or other "nifty" optimizations that provide a 0.1% improvement in speed.  It will be a giant waste of people's time trying to reconcile what is actually going wrong, and in some cases there will potentially be real impact on users' lives, welfare, and/or safety.
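The loop-hoisting case mentioned above can be sketched as follows (an illustrative sketch, not code from this thread): with a plain boolean, the JIT may legally hoist the read of the flag out of the loop and spin forever; declaring the flag volatile forbids that hoist and forces a fresh read on every iteration.

```java
// Sketch of the loop-hoisting hazard. Drop 'volatile' below and the
// spin loop may be compiled into an infinite loop that never sees
// the writer's update; with 'volatile', the hoist is forbidden.
class StopFlag {
    private volatile boolean done;

    void requestStop() { done = true; }

    void spinUntilStopped() {
        while (!done) {          // volatile read: re-checked every iteration
            Thread.onSpinWait(); // Java 9+ spin hint; an empty body also works
        }
    }
}
```

This is the same fix described later in the thread, where people "eventually declare the loop variable volatile after finding these discussions."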

Call my position extreme, but I think it's vital for the Java community, and Oracle in particular, to understand that the path we are going down is absolutely a perilous disaster unless something very specific is made visible to developers, so that they can understand how their code is being executed and see exactly which parts of it need to be fixed.

Gregg Wonderly

On Apr 30, 2013, at 7:48 PM, "Boehm, Hans" <hans.boehm at hp.com> wrote:

> A nice simple example to consider here is a user application that declares an array, and calls a library to sort it.  The sort library uses a parallel sort that relies on a sequential sort to sort small sections of the array.  In a sequential-by-default world, how would you declare the parallel sections?  Would it be any different than what we do now?  The user application that declares the array may never know that there is any parallel code involved.  Nor should it.
> Applications such as this are naturally data-race-free, thus there is no issue with the compiler “breaking” code.  And the compiler can apply nearly all sequentially valid transformations on synchronization-free code, such as the sequential sort operations.  If you accidentally introduce a data race bug, it’s unlikely your code would run correctly even if the compiler guaranteed sequential consistency. Your code may be a bit easier to debug with a hypothetical compiler that ensures sequential consistency.  But so long as you avoided intentional (unannotated) data races, I think a data race detector would also make this fairly easy.
> Hans
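Hans's sort example can be sketched as below (an editorial sketch using the standard Arrays.parallelSort): the caller hands over a plain array and never needs to know whether the library parallelized internally, because the program is data-race-free either way.

```java
import java.util.Arrays;

// Editorial sketch of the point above: the caller is oblivious to the
// parallelism inside the library. Arrays.parallelSort may fork the work
// across threads, falling back to a sequential sort for small ranges,
// yet a data-race-free caller observes only a sorted array.
class SortCaller {
    static int[] sorted(int[] data) {
        Arrays.parallelSort(data); // library decides sequential vs parallel
        return data;
    }
}
```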
> From: concurrency-interest-bounces at cs.oswego.edu [mailto:concurrency-interest-bounces at cs.oswego.edu] On Behalf Of Martin Thompson
> Sent: Tuesday, April 30, 2013 5:57 AM
> To: Kirk Pepperdine
> Cc: Gregg Wonderly; concurrency-interest at cs.oswego.edu
> Subject: Re: [concurrency-interest] JLS 17.7 Non-atomic treatment of double and long : Android
> I agree with Kirk here and would take it further.  By default the vast majority of code should be single threaded and concurrent programming is only utilized in regions of data exchange.
> If all code were sequentially consistent then most hardware and compiler optimizations would be defeated.  A default position that all code is concurrent is sending the industry the wrong way in my view.  It makes more sense to explicitly define the regions of data exchange in our programs, and therefore what ordering semantics are required in those regions.
> Martin...
> ------------------------------
> Message: 3
> Date: Tue, 30 Apr 2013 07:38:01 +0200
> From: Kirk Pepperdine <kirk at kodewerk.com>
> To: Gregg Wonderly <gergg at cox.net>
> Cc: concurrency-interest at cs.oswego.edu
> Subject: Re: [concurrency-interest] JLS 17.7 Non-atomic treatment of
>         double  and long : Android
> Message-ID: <5D85887E-8BFE-4F09-AFA8-53FE1DFD52D3 at kodewerk.com>
> Content-Type: text/plain; charset="windows-1252"
> Sorry, but making things volatile by default would be a horrible thing to do. Code wouldn't be a bit slower; it would be a lot slower, and then you'd end up with the same problem in reverse!
> Regards,
> Kirk
> On 2013-04-30, at 12:10 AM, Gregg Wonderly <gergg at cox.net> wrote:
> > This code exists everywhere on the Java JVM now, because no one expects the loop hoist?   People are living with it, or eventually declaring the loop variable volatile after finding these discussions.
> >
> > Java, by default, should have used nothing but volatile variables, and developers should have needed to add non-volatile declarations via annotations, without the 'volatile' keyword being used at all.
> >
> > That would have made it hard to "break" code without actually looking up what you were doing, because the added verbosity would only be tolerated when it actually accomplished a performance improvement.  Today, code is "faster" without "volatile".  If everything were "volatile" by default, then code would be slower to start with, and proper "concurrency programming" would then make your code faster, as concurrency should.
> >
> > Gregg
> _______________________________________________
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
> http://cs.oswego.edu/mailman/listinfo/concurrency-interest

