
Re: [jetty-users] Limiting number of parallel requests each client can make

Whether you need to rewrite ALL the code depends on the code ;-)

In most cases it requires rewriting at least some of the control flow, but this is the only way to get a good (web) design here: if you rely on 'synchronized' you are always prone to deadlocks and DoS attacks, as a user can easily exhaust your server's resources with little effort.

Basically you need the following things for async:

- a filter/servlet that can check without blocking whether a resource is free (e.g. using a counting semaphore) or similar
- a data structure (e.g. a priority queue) where you can put suspended requests
- a single thread that checks whether the resource is free again (using an external event/trigger, or simply polling) and wakes up one or more of the suspended requests
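A minimal sketch of these elements using only java.util.concurrent; the suspended requests are represented here as plain Runnable callbacks standing in for AsyncContext.dispatch() calls, and all class/method names are illustrative:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

// Non-blocking admission gate: one permit for the scarce resource,
// a queue of suspended requests, and a hand-off on release.
class ResourceGate {
    private final Semaphore permit = new Semaphore(1);
    private final Queue<Runnable> suspended = new ConcurrentLinkedQueue<>();

    /** Check without blocking: run the request now if the resource is free,
     *  otherwise park it in the queue (in a servlet: startAsync + queue). */
    void tryEnter(Runnable request) {
        if (permit.tryAcquire()) {
            request.run();       // caller must invoke leave() when done
        } else {
            suspended.add(request);
        }
    }

    /** Done with the resource: hand the permit to the next suspended
     *  request, or release it if nobody is waiting. */
    void leave() {
        Runnable next = suspended.poll();
        if (next != null) {
            next.run();          // permit passes directly to this request
        } else {
            // sketch caveat: a request parked between poll() and release()
            // waits until the next leave(); a real filter would re-check here
            permit.release();
        }
    }
}
```

The direct hand-off in leave() avoids the race where a newly arriving request steals the permit from the request that has been waiting longest.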

Which of these elements are already present, and whether you can reuse them, depends on the context (e.g. the nature of your 'resource that is only available to 1 thread'). Let's say it is a device (a 'black box') that you can only open once and you get a refusal/error otherwise; then a very simplified approach would be:

1) try to open the device
2) if it is refused, suspend your request and put it in a queue
3) if successful, perform your task, close the device, and resume each item in the queue

This is not the cleverest approach, but you should be able to use it as a starting point and improve it until it fits your needs.
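The three steps above can be sketched with only the JDK. Here the 'device' is modeled by an AtomicBoolean and the suspended requests by Runnable callbacks (again standing in for AsyncContext.dispatch()); unlike the hand-off variant, closing resumes every queued request and each one retries the open:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Models a device that can be opened by at most one caller at a time.
class Device {
    private final AtomicBoolean open = new AtomicBoolean(false);
    private final Queue<Runnable> waiting = new ConcurrentLinkedQueue<>();

    /** Steps 1+2: try to open; if refused, suspend the request in the queue. */
    boolean tryOpen(Runnable retry) {
        if (open.compareAndSet(false, true)) {
            return true;         // got the device: perform the task
        }
        waiting.add(retry);      // refused: park the request
        return false;
    }

    /** Step 3: close the device and resume every queued request. */
    void close() {
        open.set(false);
        Runnable r;
        while ((r = waiting.poll()) != null) {
            r.run();             // each resumed request calls tryOpen again
        }
    }
}
```

Resuming everyone is wasteful (all but one resumed request gets refused again and re-parks), which is exactly the kind of thing you would improve in a second iteration.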

On 06.05.21 at 16:53, Přemysl Vyhnal wrote:
Thanks for your replies,

You should take a look at AsyncServlet processing, this allows to "sleep" requests with low resource needs
Would that mean rewriting the way we sleep or wait for a resource
everywhere in the code called while serving that request? I'd like to
avoid that as we don't have control over all the code or it could be
complicated to change it at this point.

Put the QoSFilter in front of the servlet url-pattern that is resource starved.
And then configure the maxRequests to 1.

All requests that are not actively using that resource will be threadlessly queued with AsyncContext.
When the active request completes, the queue is used to activate the next request.
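For a web.xml deployment, that suggested setup would look roughly like this (the url-pattern is an assumption; QoSFilter also supports waitMs and suspendMs init-params):

```xml
<filter>
  <filter-name>QoSFilter</filter-name>
  <filter-class>org.eclipse.jetty.servlets.QoSFilter</filter-class>
  <init-param>
    <param-name>maxRequests</param-name>
    <param-value>1</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>QoSFilter</filter-name>
  <url-pattern>/starved/*</url-pattern>
</filter-mapping>
```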

In *DoSFilter* I can override the extractUserId method to separate
clients from each other.
Basically I'd like to have separate queues for clients (based on
something I extract from request headers or ssl cert)
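A subclass along these lines could do that (extractUserId is a real protected hook in org.eclipse.jetty.servlets.DoSFilter; the header name below is just an illustration, and note this only groups requests for rate-tracking, it does not give each client its own queue):

```java
import javax.servlet.ServletRequest;
import javax.servlet.http.HttpServletRequest;
import org.eclipse.jetty.servlets.DoSFilter;

// Hypothetical subclass: identify clients by a request header
// instead of DoSFilter's default session/IP tracking.
public class PerClientDoSFilter extends DoSFilter {
    @Override
    protected String extractUserId(ServletRequest request) {
        if (request instanceof HttpServletRequest) {
            // "X-Client-Id" is an assumed header; a TLS setup could instead
            // read the peer certificate from the standard request attribute
            // "javax.servlet.request.X509Certificate"
            String id = ((HttpServletRequest) request).getHeader("X-Client-Id");
            if (id != null) {
                return id;
            }
        }
        return null; // null falls back to the default tracking
    }
}
```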

If I use *QoSFilter* with maxRequests=1, wouldn't one client block the other clients?

Thanks


On Thu, May 6, 2021 at 4:24 PM Christoph Läubrich via jetty-users
<jetty-users@xxxxxxxxxxx> wrote:

You should take a look at AsyncServlet processing, this allows to
"sleep" requests with low resource needs:

https://docs.oracle.com/javaee/7/tutorial/servlets012.htm

On 06.05.21 at 15:54, Přemysl Vyhnal wrote:
Hi,
we're using an embedded jetty http server.
Some requests may take some time to complete. To simplify, let's say
each request needs a resource that is only available to one thread at a
time; the other threads wait on `synchronized` before sending the
response back from the server.

Currently one client is able to call this server many times in
parallel and we end up in a state when all the thread pool threads are
used and any other client will be waiting in a queue for a long time
before being served.

Do I understand correctly that this queue where the requests wait
before being assigned a thread is the one inside `ManagedSelector`?
Can I or should I try to change how it works?


Now we would like to limit the number of parallel requests each client
can make - or limit the number of requests per client per second. The
goal is that if a client makes many parallel long running requests,
the other clients will be served normally.

We would like to separate clients from each other based on TLS certificates.


What I found so far:

- LowResourceMonitor -- this can detect when all the threads are used
but I don't think I can use this to limit assigning new threads to the
requests waiting in the queue

- org.eclipse.jetty.servlets.DoSFilter -- I can use this to:
-- reject (I would prefer not to)
-- throttle
-- delay and throttle
Is the throttle or delay in the DoSFilter happening when the request
already has its thread and therefore blocking that thread from being
used by other clients? Is it meant to slow down the client in sending
any future requests but doesn't stop the client from sending parallel
requests?

Would you recommend anything else for my use case?

Thanks a lot.

Regards
Premek
_______________________________________________
jetty-users mailing list
jetty-users@xxxxxxxxxxx
To unsubscribe from this list, visit https://www.eclipse.org/mailman/listinfo/jetty-users



