
RE: [dsdp-tcf-dev] Applicability of TCF agent for remote tracing

> This is an important use case that has been considered.  We have
> identified several cases where there is a need for a transparent high
> performance stream to transfer data to and from a target while using
> TCF for the control aspect of the data.  

Yes, glad we agree on the need/objective :-).

> The easiest and fastest solution would probably be to open a new TCP/IP
> channel for the data part, but this has several drawbacks:

I agree, there is a small overhead in having a streams abstraction over
TCP/IP, but it is negligible overall if the block writes/reads are
handled properly. Yet, that abstraction allows a nice default fallback
when TCP/IP is not available.

> Because of these drawbacks we implemented a general service to
> efficiently stream binary data over the TCF channel, see
> streamservice.[ch] in the TCF prototype (unfortunately we don't have any
> documentation for this service yet).  Basically what this service
> does is to create an arbitrary number of byte streams over a TCF
> channel.  The commands are designed to minimize round trips: multiple
> concurrent read and write commands can be issued asynchronously, so
> there are no delays between commands.

If I understand correctly, this service takes care of the multiple
asynchronous streams functionality. This is separate from zero copy
binary block writes/reads but similarly useful for high performance
tasks.

Is the agent threaded in some way? Will we need to use the streamservice
for tracing even just to ensure we do not block the other services (e.g.
RSE)?

> The stream service uses the standard encoding format, so there is some
> overhead as compared to a pure TCP/IP stream; however, there are plans
> to add support for raw and compressed formats for transferring binary
> data in TCF by extending the JSON format, when both peers support it.
> The message header overhead can also be minimized by reading/writing
> larger chunks of data.

The idea would then be to extend JSON and streams to provide unbuffered
zero copy block writes. This may even mean supporting shared memory if
the client is local. The message header overhead is of little concern to
me, given the comparatively much larger size of the tracing buffers.




