Re: [jetty-users] asynchronous flush?

Hi Joakim,

I understand buffering in general :) We don’t need low latency here, the short SSE updates are pushed in batches every second and we can tolerate short delays (even on the order of seconds). However, without flushing, Jetty would continue buffering for quite a while. Apparently, non-blocking flush was overlooked when designing the async-write servlet APIs. If Jetty also does not expose any calls to do this, I suppose setting a low buffer size on the ServletResponse and then padding the writes would work. But it’s not very elegant :)
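For the record, the low-buffer-plus-padding workaround could look something like the following. This is a hypothetical helper (not a Jetty API): assuming the response buffer was set small via response.setBufferSize(), each SSE frame is padded with an SSE comment line (lines starting with ':' are ignored by EventSource clients) so the write fills the buffer and the container flushes on its own, without calling flush(). It assumes ASCII payloads, where character count equals byte count.

```java
public class SsePadding {
    // Pad an SSE frame with a comment line so it fills the response buffer.
    // Assumes ASCII content (1 char == 1 byte); illustrative only.
    public static String padToBuffer(String sseFrame, int bufferSize) {
        if (sseFrame.length() >= bufferSize) {
            return sseFrame; // already fills the buffer on its own
        }
        StringBuilder sb = new StringBuilder(bufferSize);
        sb.append(sseFrame);
        sb.append(':'); // start of an SSE comment line, ignored by clients
        while (sb.length() < bufferSize - 1) {
            sb.append(' ');
        }
        sb.append('\n'); // terminate the comment line
        return sb.toString();
    }
}
```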

Btw, to show you specifically what we’re talking about, this servlet feeds the traffic visualization shown on the map here: http://www.scarabresearch.com/. For this use case, SSE seems quite adequate.

Regards,
  Viktor


On July 28, 2016 at 16:38:09, Joakim Erdfelt (joakim@xxxxxxxxxxx) wrote:

There's no such thing as a no-buffering send over tcp/ip.

Your app buffers.
The java layer buffers.
The OS network layer buffers.
The network itself buffers.
The various HTTP intermediaries buffer.
The network hardware between you and the remote buffers.
The remote side also has its buffers.

Any one of those can prevent the remote side app from seeing the data you want in the time frame that you want.
This buffering can be further exacerbated by traffic muxing, traffic aggregation, compression, etc...

If the timeliness of the data is important, you'll be better off using UDP, as that will reduce the number of points where buffering can occur (but not eliminate it!).
But then you are on the hook for out-of-order packets, resending dropped packets, etc. (pretty much what the TCP layer is doing for you).
(This is how most network gaming, video conferencing, live streaming, webrtc, etc. works.)
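The UDP trade-off above can be sketched with plain java.net datagrams (the class and method names here are illustrative): each send hands a packet straight to the OS with no connection-level retransmission or ordering, and the receiver must use a timeout because delivery is not guaranteed.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpSketch {
    // Send one datagram over loopback and read it back.
    // No connection state: if the packet is dropped, receive() just times out.
    public static String roundTrip(String msg) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0, InetAddress.getLoopbackAddress());
             DatagramSocket sender = new DatagramSocket()) {
            byte[] out = msg.getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(out, out.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
            byte[] in = new byte[1024];
            DatagramPacket packet = new DatagramPacket(in, in.length);
            receiver.setSoTimeout(2000); // don't block forever on a lost packet
            receiver.receive(packet);
            return new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
        }
    }
}
```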

If you push enough data before your flush() to consume those various buffers, then you can, in a roundabout way, force the data through.
However, if any layer has congestion, then suddenly you are blocking again.

Timeliness and SSE are at odds with each other, mainly because you are dealing with HTTP and all that it brings to the table.

If timeliness is not a requirement, then stick with SSE and all of the buffering that exists.

You should seriously consider CometD, as it will not send if the specific endpoint is congested, queuing up those messages until the congestion abates.
You can even use the CometD features for message timeout (the message expires and is considered stale after x ms) and message ack (confirmation that the remote endpoint got the specific message; useful for unreliable clients on unreliable networks, e.g. wifi/mobile).
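The queue-until-congestion-abates behavior with message timeout can be sketched in plain Java. This is a toy illustration of the idea only, not CometD's actual implementation; all names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class ExpiringQueue {
    private static final class Entry {
        final String payload;
        final long expiresAt;
        Entry(String payload, long expiresAt) {
            this.payload = payload;
            this.expiresAt = expiresAt;
        }
    }

    private final Deque<Entry> queue = new ArrayDeque<>();
    private final long timeoutMs;

    public ExpiringQueue(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    // Called while the connection is congested: buffer instead of writing.
    public void offer(String payload, long nowMs) {
        queue.addLast(new Entry(payload, nowMs + timeoutMs));
    }

    // Called when congestion abates: deliver only messages still fresh,
    // silently dropping anything that expired while queued.
    public List<String> drainFresh(long nowMs) {
        List<String> fresh = new ArrayList<>();
        for (Entry e; (e = queue.pollFirst()) != null; ) {
            if (e.expiresAt > nowMs) {
                fresh.add(e.payload);
            }
        }
        return fresh;
    }
}
```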

Joakim Erdfelt / joakim@xxxxxxxxxxx

On Thu, Jul 28, 2016 at 6:50 AM, Viktor Szathmáry <phraktle@xxxxxxxxx> wrote:
Hi,

Is there a way to disable buffering, so that the output is immediately sent to the client?

Thanks,
  Viktor

_______________________________________________
jetty-users mailing list
jetty-users@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/jetty-users