Firstly, 2M requests/second is a pretty high mark to aim for. To optimise Jetty for such extreme loads, you really need to consider the exact hardware, the average request size, the average response size, etc. Tell us more and we may be able to suggest some configuration changes.
Note also that you should make sure the load generation for your benchmark is realistic. There is a big difference in how a server behaves with a few very busy connections vs. many mostly idle connections. Jetty is generally configured for the latter case, so testing it with just 20 connections is not its sweet spot. If your real application will have more connections, then please test with more connections.
Also, when benchmarking, if your client is reporting achieved throughput, then you are not really measuring throughput at all. Rather, you are measuring response latency, because the client will not send the next request until a prior response has been received. Thus the reported requests/sec is limited by the client not sending more requests, not by the server.
It is far better to have a load generator that can send requests at a specific rate and then report the latency/errors achieved, so you can decide whether that is a sufficient quality of service for your application.
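To illustrate the difference, here is a minimal sketch of a fixed-rate (open-loop) load generator using the Java 11+ `HttpClient`. The rate, request count, and the tiny local stand-in server are all placeholders you would replace with your own values and target; this is not a substitute for a real tool like a proper open-loop benchmark client, just a demonstration of the idea that the send schedule is not gated on prior responses.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedRateLoad {
    public static void main(String[] args) throws Exception {
        // Tiny local server so the example is self-contained
        // (a stand-in for your real Jetty application).
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", ex -> {
            byte[] body = "ok".getBytes();
            ex.sendResponseHeaders(200, body.length);
            ex.getResponseBody().write(body);
            ex.close();
        });
        server.start();
        int port = server.getAddress().getPort();

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest
            .newBuilder(URI.create("http://localhost:" + port + "/"))
            .build();

        int ratePerSecond = 50;   // target offered load -- a placeholder value
        int total = 100;          // total requests to send -- also a placeholder
        List<Long> latenciesNanos = Collections.synchronizedList(new ArrayList<>());
        CountDownLatch done = new CountDownLatch(total);
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        for (int i = 0; i < total; i++) {
            // Schedule each send at a fixed offset from the start, so the
            // offered rate stays constant even if response latency grows.
            scheduler.schedule(() -> {
                long start = System.nanoTime();
                client.sendAsync(req, HttpResponse.BodyHandlers.ofString())
                      .whenComplete((resp, err) -> {
                          if (err == null) {
                              latenciesNanos.add(System.nanoTime() - start);
                          }
                          done.countDown();
                      });
            }, i * 1_000_000_000L / ratePerSecond, TimeUnit.NANOSECONDS);
        }

        done.await(30, TimeUnit.SECONDS);
        scheduler.shutdown();
        server.stop(0);

        // Report what actually matters: how many completed, and the latency.
        latenciesNanos.sort(null);
        System.out.println("completed=" + latenciesNanos.size());
        System.out.println("p50_us="
            + latenciesNanos.get(latenciesNanos.size() / 2) / 1_000);
    }
}
```

The key design point is that the scheduler, not the arrival of responses, decides when the next request goes out; a closed-loop client that waits for each response will silently throttle itself and report that throttled rate as "throughput".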
Also, this is not Jetty specific, but the way you are converting the input into a string is not ideal: reading line by line creates a lot of garbage. You would also be better off using the Content-Length of the request to set the initial capacity of the buffer, and prefer StringBuilder over StringBuffer since you don't need its synchronization.
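Something along these lines, as a sketch: a hypothetical helper that reads the whole body in bulk chunks rather than line by line, pre-sizing the StringBuilder from the content length (in a servlet you would pass `request.getContentLength()` and `request.getReader()`). Note the content length is a byte count while the builder capacity is in chars, so for multi-byte encodings it is an upper-bound hint, which is fine.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class ReadBody {
    // Hypothetical helper: drains a Reader into a String using a bulk
    // char buffer, avoiding per-line String allocations. contentLength
    // (bytes) is only a capacity hint; -1 means "unknown".
    static String readBody(Reader reader, int contentLength) throws IOException {
        StringBuilder sb = new StringBuilder(contentLength > 0 ? contentLength : 1024);
        char[] buf = new char[8192];
        int n;
        while ((n = reader.read(buf)) != -1) {
            sb.append(buf, 0, n);  // append the chunk directly, no intermediate Strings
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        String body = "hello\nworld\n";
        String result = readBody(new StringReader(body), body.length());
        System.out.println(result.equals(body));
    }
}
```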