How to use TBufferedTransport with TThreadedSelectorServer in Java


Question


In the Python client:

self.tsocket = TSocket.TSocket(self.host, self.port)
self.transport = TTransport.TBufferedTransport(self.tsocket)
protocol = TBinaryProtocol(self.transport)
client = Handler.Client(protocol)
self.transport.open()

In the Java server:

TNonblockingServerSocket serverTransport = new TNonblockingServerSocket(port);
TProcessor tprocessor = new ExecutionService.Processor<ExecutionService.Iface>(handler);
TThreadedSelectorServer.Args tArgs = new TThreadedSelectorServer.Args(serverTransport);
tArgs.processor(tprocessor);
tArgs.protocolFactory(new TBinaryProtocol.Factory());
this.server = new TThreadedSelectorServer(tArgs);

The Python client uses TBufferedTransport, while the Java server uses TFramedTransport. This causes an exception:

AbstractNonblockingServer$FrameBuffer  Read an invalid frame size of -2147418111. Are you using TFramedTransport on the client side?

For certain reasons the client cannot be modified, so I want to change the Java server to use TBufferedTransport instead.
How can I use TBufferedTransport with TThreadedSelectorServer in Java?
Thanks!

Answer 1

Score: 0


The TThreadedSelectorServer requires TFramedTransport (reference):

>A Half-Sync/Half-Async server with a separate pool of threads to handle non-blocking I/O. Accepts are handled on a single thread, and a configurable number of nonblocking selector threads manage reading and writing of client connections. ... Like TNonblockingServer, it relies on the use of TFramedTransport.

The same applies to the other non-blocking server classes deriving from TNonblockingServer (reference):

>A nonblocking TServer implementation. This allows for fairness amongst all connected clients in terms of invocations. This server is inherently single-threaded. If you want a limited thread pool coupled with invocation-fairness, see THsHaServer. To use this server, you MUST use a TFramedTransport at the outermost transport, otherwise this server will be unable to determine when a whole method call has been read off the wire. Clients must also use TFramedTransport.
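
For illustration, a client that talks to one of these non-blocking servers has to wrap its socket in TFramedTransport. A minimal Java sketch under that assumption (ExecutionService.Client is taken to be the stub generated from the question's IDL; host and port are placeholders):

      import org.apache.thrift.protocol.TBinaryProtocol;
      import org.apache.thrift.protocol.TProtocol;
      import org.apache.thrift.transport.TFramedTransport;
      import org.apache.thrift.transport.TSocket;
      import org.apache.thrift.transport.TTransport;

      // Plain TCP connection to the server
      TSocket socket = new TSocket("localhost", 9090);
      // Framing layer that TThreadedSelectorServer / TNonblockingServer expect
      TTransport transport = new TFramedTransport(socket);
      TProtocol protocol = new TBinaryProtocol(transport);
      // Generated client stub, matching the ExecutionService used in the question
      ExecutionService.Client client = new ExecutionService.Client(protocol);
      transport.open();
      // ... make RPC calls via client ...
      transport.close();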

If you cannot use TFramedTransport on the client side, you therefore have to use a blocking server, i.e. TThreadPoolServer (reference):

> Server which uses Java's built in ThreadPool management to spawn off a worker pool that deals with client connections in blocking way.

Your code would then look like this:

      TServerSocket serverTransport = new TServerSocket(9090);
      TThreadPoolServer.Args tArgs = new TThreadPoolServer.Args(serverTransport);
      tArgs.processor(processor);
      tArgs.protocolFactory(new TBinaryProtocol.Factory());
      TThreadPoolServer server = new TThreadPoolServer(tArgs);

To detail the differences between the blocking and the non-blocking servers (for general reference; apologies if the difference is already clear to you): blocking means that while data is being read from a socket, the reading thread cannot do anything else. When data arrives only partially, the current thread waits until the rest arrives. So if a blocking server has only a single thread, it can handle only one client at a time, and the time spent waiting for further data from one client cannot be used to serve other clients.

To support multiple clients, multiple threads can be added (as done in TThreadPoolServer). Each thread still handles only one client at a time, so the number of clients that can be served simultaneously is limited by the number of threads. You could of course spawn many threads, but this does not scale well: the threads used by the Java thread pool backing TThreadPoolServer are system-level threads, so they carry some resource overhead for creation and for switching between threads. Creating a large number of threads to serve a large number of clients therefore means more time is spent on OS book-keeping of those tasks.
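
As a side note, the size of that worker pool can be bounded through TThreadPoolServer.Args. A rough sketch building on the tArgs object from the snippet above (minWorkerThreads/maxWorkerThreads are assumed to be available in your libthrift version; the numbers are purely illustrative):

      // Bound the blocking server's worker pool
      tArgs.minWorkerThreads(4);   // threads kept ready for new connections
      tArgs.maxWorkerThreads(64);  // hard cap on clients served concurrently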

Non-blocking servers (deriving from TNonblockingServer) are meant to solve this problem by using the time spent waiting for data from one client to read data from other clients. This way a single thread can handle multiple clients, reading from whichever client currently has data available. A non-blocking server can of course also have multiple threads, each handling multiple clients, so the number of threads does not have to grow with the number of clients. Instead, the thread count can be chosen in proportion to the number of CPU cores, and each thread running on a core can then read as much data as the I/O bandwidth and CPU speed allow. For this reason, a non-blocking server scales better to large numbers of clients.
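
For completeness, this is roughly how the thread counts are tuned on the non-blocking side, assuming the selectorThreads/workerThreads setters of TThreadedSelectorServer.Args in your libthrift version (the values are an example, not a recommendation):

      TNonblockingServerSocket serverTransport = new TNonblockingServerSocket(9090);
      TThreadedSelectorServer.Args tArgs = new TThreadedSelectorServer.Args(serverTransport);
      tArgs.processor(processor);
      tArgs.protocolFactory(new TBinaryProtocol.Factory());
      // A few selector threads multiplex the socket I/O of many client connections
      tArgs.selectorThreads(2);
      // Worker threads run the handler methods once a complete frame has been read
      tArgs.workerThreads(Runtime.getRuntime().availableProcessors());
      TThreadedSelectorServer server = new TThreadedSelectorServer(tArgs);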

So if you have to handle a large number of clients simultaneously, using TNonblockingServer would be preferable, and it would be better to find a way to switch the client to TFramedTransport. If your use case involves only a limited number of clients, then using TThreadPoolServer without modifying the client should be fine, even if each client produces a lot of data.
