
Re: [jetty-dev] Reactive Streams


Simone,

so I too have implemented an async, memory-bounded processor in my branch. It just cannot recycle within the standard API.

Note also that I've created an IteratingProcessor, which is based on our IteratingCallback. This prevents the recursion of onNext() calling request(1) calling onNext()...
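
For anyone who hasn't looked at the branch: the trick is essentially a drain loop guarded by a flag, so a re-entrant onNext() just enqueues and returns instead of growing the stack. Below is a minimal sketch of that pattern only (not the actual IteratingProcessor; the class and method names are made up for illustration):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

// Sketch only: shows how an iterate() loop turns the re-entrant
// onNext() -> request(1) -> onNext() call chain into a flat loop.
public abstract class IteratingSubscriberSketch<T> implements Subscriber<T>
{
    private final AtomicBoolean iterating = new AtomicBoolean();
    private final Queue<T> items = new ConcurrentLinkedQueue<>();
    private Subscription subscription;

    @Override
    public void onSubscribe(Subscription s)
    {
        subscription = s;
        s.request(1);
    }

    @Override
    public void onNext(T item)
    {
        items.offer(item);
        iterate();
    }

    private void iterate()
    {
        while (true)
        {
            // If another frame (or thread) is already iterating, just return:
            // the item we offered above will be picked up by that loop.
            if (!iterating.compareAndSet(false, true))
                return;
            try
            {
                T item;
                while ((item = items.poll()) != null)
                {
                    process(item);
                    // On a synchronous publisher this may call onNext()
                    // re-entrantly, but the CAS above turns that call into
                    // a simple enqueue rather than a deeper stack frame.
                    subscription.request(1);
                }
            }
            finally
            {
                iterating.set(false);
            }
            if (items.isEmpty())
                return;
        }
    }

    protected abstract void process(T item);

    @Override
    public void onError(Throwable failure)
    {
    }

    @Override
    public void onComplete()
    {
    }
}

The real IteratingCallback-based code deals with more states (completion, failure, scheduling), but the recursion break is the same idea.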

Finally, have a read of the README in the branch, as one thing I do like about RS is that they are very similar to our Eat What You Kill scheduling strategy. I think they will have very good mechanical sympathy as a result... pity about the garbage!

cheers


On 3 June 2015 at 11:46, Greg Wilkins <gregw@xxxxxxxxxxx> wrote:

Simone,

yep, I'm understanding it better now. If you work with the standard RS APIs, you can be asynchronous and non-queuing, but only if you give up on recyclable items. You can have recyclable items, but then you have to give up on either asynchrony or non-queuing. You can't have all three: async, bounded memory & recycling.

I had just assumed recycling, so that is why I was struggling with async OR bounded memory.

The suggestion I'm getting on the other thread is to achieve recycling via some back-channel hand-back of the buffers from the subscriber to the publisher, but that is not part of the standard API.
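
For illustration, that back channel could be as simple as a pool shared between publisher and subscriber, with the subscriber releasing each buffer once it has consumed it (sketch only; nothing like this exists in the standard RS interfaces, and the names are made up):

import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the "back channel" idea: a pool shared by publisher and
// subscriber, entirely outside Publisher/Subscriber/Subscription.
public class BufferPool
{
    private final Queue<ByteBuffer> idle = new ConcurrentLinkedQueue<>();
    private final int capacity;

    public BufferPool(int buffers, int capacity)
    {
        this.capacity = capacity;
        for (int i = 0; i < buffers; i++)
            idle.offer(ByteBuffer.allocate(capacity));
    }

    // Called by the publisher before filling a buffer; if the subscriber
    // has not handed anything back yet, we fall back to allocating (and
    // accept the garbage).
    public ByteBuffer acquire()
    {
        ByteBuffer buffer = idle.poll();
        return buffer != null ? buffer : ByteBuffer.allocate(capacity);
    }

    // Called by the subscriber once it has fully consumed a buffer.
    public void release(ByteBuffer buffer)
    {
        buffer.clear();
        idle.offer(buffer);
    }
}

The acquire()/release() agreement lives entirely outside the Publisher, Subscriber and Subscription interfaces, which is exactly why it isn't portable across RS implementations.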

Your Base64CompressedStringProcessor works because it creates a new ByteBuffer wrapping a new byte[] for every call to gzip.decode(), which is then passed on in a call to onNext() and forgotten. Typically, in the style of coding we do with IteratingCallback, we would recycle such intermediate buffers, either by putting them back in a pool or reusing them for the next iteration. The RS style makes this hard, so a lot of garbage will be created unless there is an external collection of those buffers somehow.

cheers



On 3 June 2015 at 07:33, Simone Bordet <sbordet@xxxxxxxxxxx> wrote:
Hi,

On Tue, Jun 2, 2015 at 8:00 PM, Simone Bordet <sbordet@xxxxxxxxxxx> wrote:
>On Tue, Jun 2, 2015 at 4:16 PM, Greg Wilkins <gregw@xxxxxxxxxxx> wrote:
>> Still can't see that.   Can you implement my FragmentingProcesser for
>> Strings without blocking or doubling memory commitment?
>
> I'll give it a go.

I have implemented here:
https://github.com/jetty-project/jetty-reactive/blob/master/src/test/java/org/eclipse/jetty/reactive/Base64CompressedStringProcessorTest.java

It's fully async, non-blocking, non-recursive, and uses only an
additional 4 chars of storage to chunk the huge string.

It's not foolproof (some edge case may make it fail), but the point
is that it is possible to implement your use case above.
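
For readers following along without the branch checked out, the demand-driven chunking idea looks roughly like this (a simplified sketch, not the actual Base64CompressedStringProcessor; it ignores re-entrancy and the base64/gzip handling entirely, and the names are made up):

import org.reactivestreams.Publisher;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

// Sketch only: emits a large String in small chunks, one per request(),
// so no copy of the whole payload is ever queued downstream.
// Single-subscriber, no thread-safety or re-entrancy handling.
public class StringChunkPublisher implements Publisher<String>
{
    private final String content;
    private final int chunkSize;

    public StringChunkPublisher(String content, int chunkSize)
    {
        this.content = content;
        this.chunkSize = chunkSize;
    }

    @Override
    public void subscribe(Subscriber<? super String> subscriber)
    {
        subscriber.onSubscribe(new Subscription()
        {
            private int offset;
            private boolean done;

            @Override
            public void request(long n)
            {
                while (n-- > 0 && !done)
                {
                    if (offset >= content.length())
                    {
                        done = true;
                        subscriber.onComplete();
                        return;
                    }
                    int end = Math.min(offset + chunkSize, content.length());
                    // Only this small slice is materialised as extra storage;
                    // the rest of the string is never copied or queued.
                    subscriber.onNext(content.substring(offset, end));
                    offset = end;
                }
            }

            @Override
            public void cancel()
            {
                done = true;
            }
        });
    }
}

Each request(1) materialises only one small substring, so the additional storage stays bounded regardless of how big the input string is.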

--
Simone Bordet
----
http://cometd.org
http://webtide.com
http://intalio.com
Developer advice, training, services and support
from the Jetty & CometD experts.
Intalio, the modern way to build business applications.



--
Greg Wilkins <gregw@xxxxxxxxxxx>  - an Intalio.com subsidiary
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.



--
Greg Wilkins <gregw@xxxxxxxxxxx>  - an Intalio.com subsidiary
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.
