try:
    conn = Connection.open(url.host, url.port)
    ssn = conn.session()
    rcv = ssn.receiver('amq.topic/org.j5live.demo')
    msg = rcv.fetch(timeout=timeout)
except ReceiveError, e:
    pass  # error handling elided
There are two options I see here: keep the fetch method and require the programmer to use timers to read from the local queue, or, as currently implemented in my own high-level API, attach callbacks to receivers.
The timer/fetch method has a couple of advantages. First, it more closely resembles the API of the other bindings. It also allows tighter management of the local queue. Since each time the timer fires we can decide how many messages to process, we can offer performance guarantees based on how long the operations run in response to those messages take. In this way we dictate when a message is processed and can tune applications to run more smoothly. This comes at the price of added complexity, which in the wrong hands could hurt performance even more.
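As a rough illustration of that batching idea, here is a minimal sketch of a timer-driven fetch loop that processes at most a fixed number of messages per tick. The names (`drain`, `StubReceiver`, the `Empty` exception) are illustrative stand-ins, not the binding's real API; the stub receiver just wraps a local queue so the loop is runnable on its own.

```python
import threading
import queue

class Empty(Exception):
    """Stand-in for the binding's fetch-timeout exception (assumed name)."""

class StubReceiver:
    """Illustrative receiver backed by a local queue, not the real binding."""
    def __init__(self):
        self._queue = queue.Queue()

    def push(self, msg):
        # Test helper: simulate a message arriving in the local queue.
        self._queue.put(msg)

    def fetch(self, timeout=0):
        try:
            return self._queue.get(timeout=timeout)
        except queue.Empty:
            raise Empty()

def drain(rcv, handler, batch_size=10, interval=0.05):
    """Process at most batch_size messages, then reschedule the timer.

    Bounding the batch caps how long each tick spends inside handlers,
    which is the kind of performance guarantee described above.
    """
    for _ in range(batch_size):
        try:
            msg = rcv.fetch(timeout=0)
        except Empty:
            break  # local queue drained; wait for the next tick
        handler(msg)
    timer = threading.Timer(interval, drain,
                            args=(rcv, handler, batch_size, interval))
    timer.daemon = True
    timer.start()
    return timer
```

Tuning `batch_size` and `interval` is exactly the "tweaking" mentioned above: a small batch keeps each tick short and the application responsive, while a large one favors throughput.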
Why not implement both? That is an option, but that doesn't mean it is a good one. Multiple ways of doing the same thing often confuse new users. Understanding the differences between the APIs and the nuances of each use case is often more complex than either API alone. That could be daunting to a new user.
The trick is to make the APIs build on one another. For instance, I could add an onReady handler that lets the user call the fetch API to grab any messages inside the handler. We would then set up an internal timer if the queue was not completely drained. This would familiarize users with the fetch API without requiring them to set up their own timer. If they wanted more control, they could set up their own timers instead of using the handler.
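A layered design along those lines might look like the sketch below: the handler is just a hook that hands the receiver back to the user, who drains it with the same fetch call as before. The class and method names (`CallbackReceiver`, `on_ready`, `_deliver`) are hypothetical, and the internal-timer redelivery is reduced to a direct call to keep the example short.

```python
import queue

class Empty(Exception):
    """Stand-in for the binding's fetch-timeout exception (assumed name)."""

class CallbackReceiver:
    """Sketch of an onReady handler layered on top of the fetch API.

    This is an illustration of the layering idea, not the real binding.
    """
    def __init__(self):
        self._queue = queue.Queue()
        self._on_ready = None

    def on_ready(self, handler):
        # Register a callback that receives this receiver when messages arrive.
        self._on_ready = handler

    def fetch(self, timeout=0):
        # Same fetch API the user would call with their own timers.
        try:
            return self._queue.get(timeout=timeout)
        except queue.Empty:
            raise Empty()

    def _deliver(self, msg):
        # Called by the I/O layer when a message lands in the local queue.
        # A fuller version would fire on an internal timer until the queue
        # drains; here the handler is invoked directly.
        self._queue.put(msg)
        if self._on_ready:
            self._on_ready(self)
```

Inside the handler the user drains the queue with plain fetch calls, so learning the callback style also teaches the fetch style, and graduating to user-managed timers requires no new concepts.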