
Re: [jetty-users] Streaming options in the WebSocket API

Hi,

Thanks for the helpful response!

It sounds like partial message support is tricky to manage and
potentially problematic for text messages, but those concerns don't
apply to binary messages.

I would think that partial message support would still be preferable
in binary streaming cases, e.g. live video streaming, progressive
image loading, audio/video chat, etc. Since each new session will use
a dedicated thread to read/write the entire stream, as the number of
sessions grows, so will the number of threads. Would you still stick
with the stream-based API for such use cases? I imagine this is also
correlated with the amount of time required to finish reading or
writing a stream, i.e. the longer it takes, the more strain on thread
resources.
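To make the concern concrete, here is a standalone JDK sketch (no Jetty classes involved; a pipe just stands in for a slow peer) of the effect I mean: a handler thread reading a stream stays parked while the peer is quiet, so every slow, long-lived session pins a thread.

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class BlockedReaderSketch {
    // Shows why one long-lived stream pins one thread: read() parks the
    // handler thread until the peer sends the next chunk of data.
    static Thread.State stateWhileStarved() throws Exception {
        PipedOutputStream peer = new PipedOutputStream();
        PipedInputStream stream = new PipedInputStream(peer);

        Thread handler = new Thread(() -> {
            try {
                stream.read(); // blocks: the "peer" never writes
            } catch (IOException ignored) {
            }
        });
        handler.start();

        // Wait until the handler thread has actually parked inside read().
        Thread.State s = handler.getState();
        long deadline = System.currentTimeMillis() + 5000;
        while (s != Thread.State.TIMED_WAITING
                && System.currentTimeMillis() < deadline) {
            Thread.sleep(10);
            s = handler.getState();
        }

        peer.close();   // EOF: read() returns -1 and the thread exits
        handler.join();
        return s;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(stateWhileStarved());
    }
}
```

With N concurrent slow streams you get N threads sitting in that parked state, which is the scaling worry above.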

Rossen

On Tue, Sep 17, 2013 at 12:36 PM, Joakim Erdfelt <joakim@xxxxxxxxxxx> wrote:
> Both the Jetty API and the JSR operate in the same way (on Jetty 9.1) with
> regards to streaming.
>
> The first frame of an incoming WebSocket message will trigger the Streaming
> (if you specified as such with your OnMessage declaration).
>
> It is your responsibility to read from that stream; meanwhile, any further
> incoming frames are pushed into the stream for you to read.
> If you don't read fast enough, the WebSocket implementation pauses and lets
> you catch up.  However, don't take too long, otherwise you might fall afoul
> of various timeouts.
>
> Unlike other message handling, when a streaming onMessage occurs, it is
> dispatched to a new thread, rather than run on the same thread that
> read/parsed the raw websocket frame.  This allows Jetty to continue reading
> and parsing for the purposes of streaming, and also allows the onMessage
> thread to read from the stream.
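That hand-off can be pictured with plain JDK pipes (a standalone sketch, not Jetty's internals; the class and method names here are illustrative): the parsing side appends each frame's payload as it arrives, while the dispatched handler thread consumes the same message as one InputStream.

```java
import java.io.ByteArrayOutputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StreamingDispatchSketch {
    // Reassemble a message whose frames arrive one at a time, the way a
    // dispatched streaming onMessage sees them through an InputStream.
    static String receive(String[] frames) throws Exception {
        PipedOutputStream parserSide = new PipedOutputStream();
        PipedInputStream handlerSide = new PipedInputStream(parserSide);
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // Dispatched handler thread: blocks reading until frames arrive.
            Future<String> handler = pool.submit(() -> {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                handlerSide.transferTo(buf);
                return buf.toString(StandardCharsets.UTF_8);
            });

            // "Parser" side (here: the caller) appends each frame's payload.
            for (String frame : frames) {
                parserSide.write(frame.getBytes(StandardCharsets.UTF_8));
            }
            parserSide.close(); // FIN frame: end of message

            return handler.get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(receive(new String[] { "Hel", "lo, ", "WebSocket" }));
    }
}
```

If the handler reads slower than frames arrive, the pipe's bounded buffer makes the writer wait, which mirrors the back-pressure described above.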
>
> Partial message support in the JSR is implemented, though the JSR spec
> gives no specifics on how it should behave when extensions are in the mix.
> In other words, if you are using Chrome or Firefox, don't expect the frame
> boundaries you send to be the frame boundaries you receive.
>
> We have no partial message support in the Jetty WebSocket API.
>
> Why do we expose this in the JSR but not in the Jetty WebSocket API?
> The requirements of RFC 6455 with respect to TEXT messages and UTF-8
> encoding mean that it is very possible for partial message support to have
> odd behavior.
>
> Example: you send a 1024 byte UTF-8 message, it gets fragmented (for any of
> a dozen reasons), and you receive a partial message of 30 bytes, then
> another of 2 bytes, then another of 900 bytes, then finally the last 92
> bytes.  This is because a UTF-8 codepoint can be split by fragmentation,
> meaning that there is a partial codepoint that needs more data from the
> next frame in order to know whether the TEXT message is valid per RFC 6455
> or not.
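The split-codepoint case can be demonstrated with plain JDK code (a standalone sketch, not Jetty's internals): a single stateful CharsetDecoder has to carry the dangling bytes of a partial codepoint from one fragment to the next before it can declare the TEXT message valid or invalid.

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SplitCodepointSketch {
    // Decode a sequence of fragments with one stateful decoder, the way a
    // conforming implementation must validate a fragmented TEXT message.
    static String decodeFragments(byte[][] fragments) throws CharacterCodingException {
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        StringBuilder out = new StringBuilder();
        byte[] leftover = new byte[0]; // partial codepoint from the last frame

        for (int i = 0; i < fragments.length; i++) {
            byte[] chunk = new byte[leftover.length + fragments[i].length];
            System.arraycopy(leftover, 0, chunk, 0, leftover.length);
            System.arraycopy(fragments[i], 0, chunk, leftover.length, fragments[i].length);

            ByteBuffer in = ByteBuffer.wrap(chunk);
            CharBuffer chars = CharBuffer.allocate(chunk.length + 1);
            boolean endOfInput = (i == fragments.length - 1);
            CoderResult r = decoder.decode(in, chars, endOfInput);
            if (r.isError()) {
                r.throwException(); // invalid UTF-8: fail the message/connection
            }
            out.append(chars.flip());

            // Unconsumed trailing bytes are an incomplete codepoint; we cannot
            // judge validity until the next frame arrives.
            leftover = new byte[in.remaining()];
            in.get(leftover);
        }

        CharBuffer tail = CharBuffer.allocate(4);
        CoderResult fr = decoder.flush(tail);
        if (fr.isError()) {
            fr.throwException();
        }
        return out.append(tail.flip()).toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] utf8 = "héllo".getBytes(StandardCharsets.UTF_8); // é is two bytes
        byte[][] split = {
            Arrays.copyOfRange(utf8, 0, 2),           // ends mid-codepoint
            Arrays.copyOfRange(utf8, 2, utf8.length)  // completes it
        };
        System.out.println(decodeFragments(split));
    }
}
```

Decoding each fragment in isolation would report the first fragment as malformed; only the carried state makes the fragmented message decodable.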
>
> With partial message support, it is also possible to start receiving a TEXT
> message, then have the connection be forcibly failed due to some bad UTF-8
> encoded data.  This means you have to monitor the onError + onClose while
> using partial messages in order to know what just happened.
>
> At least with streaming support, in this scenario you'll get IOExceptions
> and closed streams when the UTF-8 encoded data is invalid, which is a much
> cleaner way to handle the non-happy-path scenarios.
>
> Also note, none of these concerns apply for partial message handling of
> BINARY messages.
>
>
> --
> Joakim Erdfelt <joakim@xxxxxxxxxxx>
> webtide.com - intalio.com/jetty
> Expert advice, services and support from the Jetty & CometD experts
> eclipse.org/jetty - cometd.org
>
>
> On Tue, Sep 17, 2013 at 8:37 AM, Rossen Stoyanchev
> <rstoyanchev@xxxxxxxxxxxxx> wrote:
>>
>> Hi, I haven't seen any replies yet. Could anyone comment on support for
>> partial messages in the Jetty WebSocket API?
>>
>> thanks
>>
>> On Fri, Aug 30, 2013 at 2:12 PM, Rossen Stoyanchev
>> <rstoyanchev@xxxxxxxxxxxxx> wrote:
>> > Hi,
>> >
>> > I'm trying to understand the options for streaming in the Jetty
>> > WebSocket API in comparison to the JSR implementation.
>> >
>> > @OnWebSocketMessage lists Reader/InputStream parameters that appear to
>> > be passed in as frames begin to arrive (in the jetty-9.1 branch at
>> > least), i.e. not waiting for the last frame. The JSR implementation,
>> > however, aggregates the whole message. Is that difference because the
>> > JSR expects it that way?
>> >
>> > The JSR also supports partial messages, while the Jetty API does so
>> > only on the sending side. Is this intentional? It could be just a
>> > matter of exposing it, since the underlying support is there.
>> > Reader/InputStream, when passed immediately, do allow streaming
>> > without buffering the full message in memory. However, reads would
>> > eventually block if the server is reading faster than the client is
>> > writing. Partial messages, on the other hand, seem intended to be a
>> > non-blocking option.
>> >
>> > I would also appreciate some comments on using the Jetty API vs the
>> > JSR API now that both are available. Obviously one is an official
>> > standard but beyond that are there any other considerations to keep in
>> > mind?
>> >
>> > Thanks,
>> > Rossen
>> _______________________________________________
>> jetty-users mailing list
>> jetty-users@xxxxxxxxxxx
>> https://dev.eclipse.org/mailman/listinfo/jetty-users
>
>
>