Re: [jetty-users] Limiting number of parallel requests each client can make

Thanks for your replies,

> You should take a look at AsyncServlet processing, this allows to "sleep" requests with low resource needs
Would that mean rewriting every place in the code that sleeps or waits
for a resource while serving a request? I'd like to avoid that, since
we don't control all of that code and changing it at this point could
be complicated.
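
For context, my understanding of the kind of change that would be needed
looks roughly like this (a sketch only; the executor and helper names are
illustrative, and it assumes the javax.servlet async API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/slow", asyncSupported = true)
public class SlowResourceServlet extends HttpServlet {
    // Hypothetical single-threaded executor guarding the scarce resource.
    private final ExecutorService resourceExecutor = Executors.newSingleThreadExecutor();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync(); // detach from the container thread
        ctx.setTimeout(30_000);              // fail rather than wait forever
        resourceExecutor.submit(() -> {
            try {
                useSharedResource(ctx.getResponse()); // hypothetical helper doing the real work
            } finally {
                ctx.complete();              // finish the response
            }
        });
    }
}
```

i.e. every blocking wait would have to move out of the container thread,
which is exactly what I'd rather not rewrite everywhere.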

>Put the QoSFilter in front of the servlet url-pattern that is resource starved.
>And then configure the maxRequests to 1.
>
>All requests that are not actively using that resource will be threadlessly queued with AsyncContext.
>When the active request completes, the queue is used to activate the next request.
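
If I understand the suggestion correctly, the setup would look something
like this for embedded Jetty (a sketch; assumes jetty-servlets is on the
classpath, and the "/scarce/*" url-pattern is illustrative):

```java
import java.util.EnumSet;
import javax.servlet.DispatcherType;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlets.QoSFilter;

ServletContextHandler context = new ServletContextHandler();
FilterHolder qos = new FilterHolder(QoSFilter.class);
qos.setInitParameter("maxRequests", "1");   // one active request at a time
qos.setInitParameter("suspendMs", "30000"); // how long a queued request may wait
// QoSFilter re-dispatches suspended requests, so map the ASYNC dispatch type too.
context.addFilter(qos, "/scarce/*", EnumSet.of(DispatcherType.REQUEST, DispatcherType.ASYNC));
```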

In *DoSFilter* I can override the extractUserId method to separate
clients from each other.
Basically I'd like to have a separate queue per client, keyed on
something I extract from the request headers or the SSL certificate.
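
Something like this sketch (assumes jetty-servlets on the classpath; the
"X-Client-Id" header name is illustrative):

```java
import java.security.cert.X509Certificate;
import javax.servlet.ServletRequest;
import javax.servlet.http.HttpServletRequest;
import org.eclipse.jetty.servlets.DoSFilter;

public class PerClientDoSFilter extends DoSFilter {
    @Override
    protected String extractUserId(ServletRequest request) {
        // Prefer the client TLS certificate, if one was presented.
        X509Certificate[] certs = (X509Certificate[])
                request.getAttribute("javax.servlet.request.X509Certificate");
        if (certs != null && certs.length > 0)
            return certs[0].getSubjectX500Principal().getName();
        // Fall back to a request header (illustrative name).
        return ((HttpServletRequest) request).getHeader("X-Client-Id");
    }
}
```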

If I use *QoSFilter* with maxRequests=1, wouldn't one client block all the other clients?

Thanks


On Thu, May 6, 2021 at 4:24 PM Christoph Läubrich via jetty-users
<jetty-users@xxxxxxxxxxx> wrote:
>
> You should take a look at AsyncServlet processing, this allows to
> "sleep" requests with low resource needs:
>
> https://docs.oracle.com/javaee/7/tutorial/servlets012.htm
>
> Am 06.05.21 um 15:54 schrieb Přemysl Vyhnal:
> > Hi,
> > we're using an embedded jetty http server.
> > Some requests may take some time to complete - to simplify lets say
> > each request needs a resource that is only available to 1 thread at a
> > time - the other threads will wait on `synchronize` before sending the
> > response back from the server.
> >
> > Currently one client is able to call this server many times in
> > parallel, and we end up in a state where all the thread-pool threads
> > are in use and any other client waits in the queue for a long time
> > before being served.
> >
> > Do I understand correctly that this queue where the requests wait
> > before being assigned a thread is the one inside `ManagedSelector`?
> > Can I or should I try to change how it works?
> >
> >
> > Now we would like to limit the number of parallel requests each client
> > can make - or limit the number of requests per client per second. The
> > goal is that if a client makes many parallel long running requests,
> > the other clients will be served normally.
> >
> > We would like to separate clients from each other based on TLS certificates.
> >
> >
> > What I found so far:
> >
> > - LowResourceMonitor -- this can detect when all the threads are used
> > but I don't think I can use this to limit assigning new threads to the
> > requests waiting in the queue
> >
> > - org.eclipse.jetty.servlets.DoSFilter -- I can use this to:
> > -- reject (I would prefer not to)
> > -- throttle
> > -- delay and throttle
> > Does the throttling or delaying in the DoSFilter happen after the
> > request has already been assigned a thread, therefore blocking that
> > thread from being used by other clients? Is it meant to slow down the
> > client's future requests, without stopping the client from sending
> > parallel requests?
> >
> > Would you recommend anything else for my use case?
> >
> > Thanks a lot.
> >
> > Regards
> > Premek
> > _______________________________________________
> > jetty-users mailing list
> > jetty-users@xxxxxxxxxxx
> > To unsubscribe from this list, visit https://www.eclipse.org/mailman/listinfo/jetty-users
> >
