
Re: [paho-dev] Embedded-C dropping PUBACKs

Joe,

I just had another idea, which might work in many cases.  It just did for me.  Change the first part of sendPacket from:

    while (sent < length && !timer.expired())
    {
        rc = ipstack.write(&sendbuf[sent], length - sent, timer.left_ms());
        if (rc < 0)  // there was an error writing the data
            break;
        sent += rc;
    }


to

    do
    {
        rc = ipstack.write(&sendbuf[sent], length - sent, timer.left_ms());
        if (rc < 0)  // there was an error writing the data
            break;
        sent += rc;
    }
    while (sent < length && !timer.expired());

That way, the write always gets attempted at least once.

Ian

On 02/23/2015 09:15 PM, Joe Planisky wrote:
Hi Ian,

Thanks for the additional suggestions.

For #1 (store the MID and send the PUBACK on the next yield()), in addition to requiring more code, it would also require more data storage since multiple messages could be received in the course of one call to yield().  Although I like the current simplicity of completely dealing with messages one at a time, I’m also somewhat attracted to this idea.  It seems like a nicer separation of concerns, sending PUBACKs being distinct from receiving and delivering messages.
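
To make that concrete, here's roughly (and untested) what I have in mind.  MQTTSerialize_puback and sendPacket are the client's; everything else (the names, the size of the array) is made up:

    // Rough sketch of idea #1 (pendingAcks, PENDING_MAX, queueAck and
    // flushAcks are made-up names, not part of the client): remember the
    // ids of QoS 1 PUBLISHes as they are delivered, and acknowledge them
    // at the start of the next yield().
    enum { PENDING_MAX = 5 };                // bounded storage for a small device
    unsigned short pendingAcks[PENDING_MAX];
    int pendingCount = 0;

    // Called where cycle() currently sends the PUBACK immediately.
    void queueAck(unsigned short msgid)
    {
        if (pendingCount < PENDING_MAX)      // if full, drop it and let the broker retry
            pendingAcks[pendingCount++] = msgid;
    }

    // Called at the top of yield(), while the timer is still fresh.
    void flushAcks(Timer& timer)
    {
        for (int i = 0; i < pendingCount; ++i)
        {
            int len = MQTTSerialize_puback(sendbuf, MAX_MQTT_PACKET_SIZE, pendingAcks[i]);
            if (len > 0)
                sendPacket(len, timer);
        }
        pendingCount = 0;
    }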

The problem with #2 (don’t read if there wasn’t much time remaining) is deciding how much time is “not much”.  Since messages are read and delivered to the user’s message handler function in the context of yield(), the time would vary depending on what the application’s message handler does and/or on what message was received.  Plus, since we don’t know the QoS of a message until we read it, this could unnecessarily penalize QoS-0 messages.
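
Just to illustrate, idea #2 boils down to a guard like this at the top of cycle(), and MIN_READ_MS (an invented constant) is exactly the value I don't know how to choose:

    // Sketch of idea #2: skip the read when the yield timer is nearly
    // spent, leaving the packet for the next call to yield().
    // MIN_READ_MS is invented; no one value suits every message handler.
    enum { MIN_READ_MS = 50 };

    int cycle(Timer& timer)
    {
        if (timer.left_ms() < MIN_READ_MS)
            return 0;  // nothing read; try again on the next yield()
        // ... existing read / deliver / ack logic would follow unchanged ...
        return 0;
    }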

—
Joe


On Feb 20, 2015, at 16:15, Ian Craggs <icraggs@xxxxxxxxxxxxxxxxxxxxxxx> wrote:

Hi Joe,

some other potential solutions:

1) store the message id of the publish for response the next time yield() is called.  This would involve more code though, and I'm trying to keep the code size small.

2) Don't attempt to read a packet if there wasn't much time remaining in the yield interval, leave it to the next call to yield.

If we consider the timeout value for yield() to be the minimum amount of time it will wait, and accept that it could take longer when an ack needs to be sent, that does make a lot of sense though.

Ian

On 20/02/15 17:03, Joe Planisky wrote:
Hi Ian,

Thanks for your reply.

(An observation: in the most recent version of the MQTT specification,
3.1.1, the server is not supposed to retry packets...)
Understood, but I don't know how to make my server (mosquitto 1.3.4) behave
that way.  The spec seems to allow retries of unacknowledged packets outside
of reconnects, although such behavior is "not encouraged".  Switching to a
different server is not feasible at this time.

How frequently are you expecting to receive messages?  If you called
yield() less frequently, then when yield was called, could it be more
likely that the message was available at the start of the 200 ms?
I presume just lengthening the yield timeout wouldn't reduce the likelihood
of the message being received towards the end of the timeout period?
It's possible either or both of those could reduce the likelihood of this
happening, but it would impact the rest of my application in unacceptable ways.

Adding a separate timer when sending the PUBACK seems to be the best solution
for my particular case.  It means my calls to yield will sometimes last
slightly longer than the specified timeout period, but that's acceptable in my
application.
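
Specifically, I'm thinking of something along these lines at the point in cycle() where the QoS 1 PUBACK goes out (MQTTSerialize_ack and sendPacket as in the client; the 1000 ms ack budget is an arbitrary number I picked for my application):

    // Give the PUBACK its own timer instead of whatever is left of the
    // yield() timer.  The 1000 ms budget is arbitrary; anything long
    // enough for one small packet to go out will do.
    if (msg.qos == QOS1)
    {
        Timer ackTimer(1000);  // fresh timer just for the PUBACK
        int len = MQTTSerialize_ack(sendbuf, MAX_MQTT_PACKET_SIZE, PUBACK, 0, msg.id);
        if (len > 0)
            rc = sendPacket(len, ackTimer);  // no longer tied to yield()'s timer
    }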

—
Joe


On Feb 20, 2015, at 2:18, Ian Craggs <icraggs@xxxxxxxxxxxxxxxxxxxxxxx> wrote:

Hi Joe,

the embedded-C code has to strike a compromise between catering for all circumstances and its size, because we want it to take up the smallest amount of resources possible.  This sounds like something we should address though.

(An observation: in the most recent version of the MQTT specification, 3.1.1, the server is not supposed to retry packets. This is not a resolution to this problem though.)

How frequently are you expecting to receive messages?  If you called yield() less frequently, then when yield was called, could it be more likely that the message was available at the start of the 200 ms?  I presume just lengthening the yield timeout wouldn't reduce the likelihood of the message being received towards the end of the timeout period?

The asynchronous version of the client API (which I haven't written yet) would start a background thread to receive and deliver messages, and avoid this problem.  But, in the process of doing that, it would use significantly more resources, which, for an environment like the Arduino Uno, could be too much.

As you say, one obvious solution would be to add some time at the end of yield() to allow the sending of the PUBACK.  The amount of time required would vary from environment to environment though...

Ian


On 02/19/2015 09:36 PM, Joe Planisky wrote:
Hi all,

I am using the Paho embedded-C code in an Arduino environment (i.e. single threaded, no RTOS) and am running into an issue with QoS-1 PUBACKs frequently not getting sent to the broker. I've looked at the source code and I think I see the problem, but I want to make sure I'm not missing anything.

My program's main loop calls MQTT::Client::yield() with a timeout of 200 ms.  yield() starts a timer for 200 ms and calls cycle() until either the timer expires or cycle() returns an error.

cycle() in turn reads a packet, deserializes it, and, if it's a PUBLISH message, delivers it to my handler; if it's QoS 1 (or 2, although I'm not using QoS 2), it sends a PUBACK by calling sendPacket().

sendPacket() then tries to write the data to the ipstack until EITHER all the data is sent OR the timer expires.
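
In code form, the shape of it is roughly this (my paraphrase of the structure, not the actual source):

    // My paraphrase of the structure, not the actual source.
    int yield(int timeout_ms)      // 200 ms in my case
    {
        Timer timer(timeout_ms);   // ONE timer shared by everything below
        while (!timer.expired())
            if (cycle(timer) < 0)  // read, deliver and ack all draw on this timer
                return FAILURE;
        return SUCCESS;
    }

    // For an inbound QoS 1 PUBLISH, cycle(timer) roughly does:
    //   readPacket(timer)       - may only complete at, say, t = 195 ms
    //   messageHandler(message) - my handler runs, now t = 199 ms
    //   sendPacket(len, timer)  - timer.expired() is already true, so the
    //                             write loop in sendPacket sends nothing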

What appears to be happening is that a new PUBLISH message comes in late in my 200 ms yield time, and by the time sendPacket() is called, the timer has expired, so the PUBACK never gets sent.  This results in the broker resending the message even though my client got the initial message just fine.  This is happening annoyingly often for me.

Is that an expected scenario?  It doesn't affect the integrity of the published data, but it does increase network traffic and the burden of dealing with duplicate PUBLISH messages.

It seems like when sending a PUBACK, the call to sendPacket should be given a separate timer rather than whatever time happens to be left in the yield() timer to make sure the PUBACK has enough time to actually be sent.  Or is there another way to deal with this scenario that I'm not seeing?

--
Joe

_______________________________________________
paho-dev mailing list
paho-dev@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/paho-dev

-- 
Ian Craggs                          
icraggs@xxxxxxxxxx                 IBM United Kingdom
Paho Project Lead; Committer on Mosquitto

