[concurrency-interest] Fast control transfer from one thread to another?
ron.pressler at gmail.com
Mon Aug 8 15:30:26 EDT 2011
If each fiber is represented by a thread, then any transfer of control
between fibers would require an OS context switch, which is bound to be
slower than C Ruby's lightweight threads. However, the Erjang
<https://github.com/trifork/erjang/wiki> project has implemented Erlang on
top of the JVM, and Erlang requires lots of "fiber" switches for its actors.
They did it with a modified version of Kilim, which uses bytecode
instrumentation to implement continuations with stack capture - the same
way C Ruby does it, I presume.
Scala uses Java's Fork/Join tasks for its actor scheduling, and it sounds
like you might be able to use that too: you transfer control to another
fiber with fork, and block yourself with join.
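A minimal sketch of that fork-then-join handoff with a plain ForkJoinTask (the class name and the trivial doubling computation are mine, purely for illustration):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinHandoff {
    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool();
        int result = pool.invoke(new RecursiveTask<Integer>() {
            @Override
            protected Integer compute() {
                RecursiveTask<Integer> child = new RecursiveTask<Integer>() {
                    @Override
                    protected Integer compute() {
                        return 21;
                    }
                };
                // "Transfer control" to the other task...
                child.fork();
                // ...and "block yourself" with join. A ForkJoin worker that
                // joins may run other queued tasks instead of idling, which
                // is what makes this cheaper than a bare blocked thread.
                return child.join() * 2;
            }
        });
        System.out.println(result); // prints 42
    }
}
```

Note this still runs the child on a pool thread; it avoids a dedicated thread per fiber, not thread switches entirely.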
But in any case, for best performance you will probably have to abandon
the one-thread-per-fiber model (which is also expensive in memory).
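For comparison, the park/unpark handoff Charles describes in the quoted mail could be sketched like this (class, variable, and output strings are mine; the volatile-flag loops guard against park returning spuriously, as its javadoc allows):

```java
import java.util.concurrent.locks.LockSupport;

public class ParkHandoff {
    // park() may return spuriously, so each handoff is guarded by a flag.
    static volatile boolean inFiber = false;

    public static void main(String[] args) throws Exception {
        final Thread caller = Thread.currentThread();
        Thread fiber = new Thread(() -> {
            System.out.println("fiber: running");
            inFiber = false;
            LockSupport.unpark(caller);           // yield control back
            while (!inFiber) LockSupport.park();  // pause until resumed
            System.out.println("fiber: resumed");
        });
        inFiber = true;
        fiber.start();
        while (inFiber) LockSupport.park();       // caller descheduled while fiber runs
        System.out.println("caller: control returned");
        inFiber = true;
        LockSupport.unpark(fiber);                // explicit cooperative reschedule
        fiber.join();
    }
}
```

Only one of the two threads is ever logically runnable, but the OS still performs a full context switch per transfer - which is exactly the cost the one-thread-per-fiber model cannot avoid.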
On Mon, Aug 8, 2011 at 10:08 PM, Charles Oliver Nutter
<headius at headius.com> wrote:
> Hello all!
> I'm looking to improve the performance of JRuby's implementation of
> Ruby's "Fiber". Fiber is intended as a lightweight thread or
> coroutine. Fibers are bound to a parent thread, and only one Fiber or
> the parent thread can be running at a given time. Control transfer --
> in the logical sense -- pauses the caller and continues the callee
> from where it left off. The callee can then "yield" control back to
> the caller, pausing its execution.
> Because Fibers retain execution state (call stack, etc.), they are
> implemented in JRuby using a native thread per Fiber. Control transfer
> used to be done via SynchronousQueue objects, but I am making
> changes to use LockSupport.park/unpark directly. LockSupport appears
> to transfer control around 2x faster than SynchronousQueue, but it's
> still many times slower than C Ruby's mechanism of saving and restoring
> native C stack frames within the same native thread.
> So, I'm looking for suggestions on a better mechanism to explicitly
> deschedule one thread and start another one, knowing that it's always
> an explicit cooperative rescheduling.
> - Charlie
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu