Seems pretty straightforward to use, although you have to think a bit differently about the problems you are trying to solve. FeedLounge was already written to do as much as possible asynchronously (in the background), so Twisted fits well with the backend design.
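To give a flavor of that "think differently" point: instead of blocking on a result, you register callbacks that fire when the result arrives. Here's a toy stand-in for that style — this is NOT Twisted's actual Deferred implementation, just a simplified sketch of the pattern, with hypothetical names (`MiniDeferred`, `fetch_feed`):

```python
# Toy sketch of the callback-chaining style Twisted encourages.
# Simplified stand-in, not Twisted's real Deferred class.

class MiniDeferred:
    """Holds a chain of callbacks that fire when a result arrives."""

    def __init__(self):
        self.callbacks = []
        self.result = None
        self.called = False

    def addCallback(self, fn):
        if self.called:
            # Result already arrived: run the new callback immediately.
            self.result = fn(self.result)
        else:
            self.callbacks.append(fn)
        return self

    def callback(self, result):
        # Fire the chain, passing each return value to the next callback.
        self.result = result
        self.called = True
        for fn in self.callbacks:
            self.result = fn(self.result)


def fetch_feed(url):
    """Pretend to start an async fetch; returns a deferred result."""
    d = MiniDeferred()
    # In Twisted, the reactor would fire this later; we fire it by hand.
    d.callback("<xml>feed for %s</xml>" % url)
    return d


d = fetch_feed("http://example.com/feed")
d.addCallback(lambda body: body.upper())
d.addCallback(lambda body: len(body))
print(d.result)
```

The calling code never blocks waiting for the fetch; it just describes what should happen to the result whenever it shows up, which is the mental shift the backend design leans on.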
Side note: In playing with some of the bits of Twisted, the already excellent documentation still wasn’t enough. Googling for things would usually find the solution, but it would also find threads where the Twisted team was being “less than helpful”. When you get Ian Bicking frothed up enough to respond, you MUST be doing something wrong. Caveat coder. While this won’t stop me from using Twisted, it definitely doesn’t encourage me to participate in the community to any great extent.
The current code was written assuming there would only ever be one backend worker (KISS, and work your way up the complexity ladder). I was able to extend the worker to use threads so it can grow with us as we scale, but once we launch live, we will absolutely need multiple machines performing these backend tasks.
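The threaded version of a single worker looks roughly like this — a sketch, assuming tasks are independent; the names (`do_task`, `NUM_THREADS`) are illustrative, not the actual FeedLounge code:

```python
# One worker process running a pool of threads that drain a shared queue.
# Each queued item is handled by exactly one thread.
import queue
import threading

NUM_THREADS = 4
tasks = queue.Queue()
results = []
results_lock = threading.Lock()

def do_task(item):
    return item * 2  # stand-in for real feed-processing work

def worker():
    while True:
        item = tasks.get()
        if item is None:      # sentinel: shut this thread down
            break
        out = do_task(item)
        with results_lock:
            results.append(out)
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(NUM_THREADS)]
for t in threads:
    t.start()

for i in range(10):
    tasks.put(i)
tasks.join()              # wait for all queued work to finish

for _ in threads:         # one sentinel per thread
    tasks.put(None)
for t in threads:
    t.join()

print(sorted(results))
```

The limitation is obvious from the sketch: `tasks` lives inside one process, so it can't be shared by workers on other machines — hence the need for a dispatcher that lives outside any worker.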
To be able to do this, I essentially need a task queue structure that lives outside of any worker process. Coming from the Java world, I would use JMS with a durable Queue as the task dispatcher. What to use in the Python world, though? After searching for many solutions, it seems as though people end up building their own one-off message dispatchers for this type of task. I found quite a few options in the multicast arena, but none that deliver a single message to exactly one of N clients.
I have set up ActiveMQ with a STOMP protocol adapter, and that is the task dispatcher for now. The problem with the STOMP protocol is that you subscribe and then messages are delivered to you asynchronously, so you end up queuing the messages on each worker client as well. Since different tasks take different amounts of time to complete, round-robining messages to all connected clients fails: a slow client piles up a backlog while a fast one sits idle. So, I am using the STOMP API to send new tasks, and using the ActiveMQ servlet to take tasks from the queue one at a time, synchronously. This way, load balancing happens automatically, since the task workers only take tasks as they are able to work on them, and I can add more workers as the load increases.
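A sketch of the two halves of that setup. The frame layout follows the STOMP spec (command line, header lines, blank line, body, NUL terminator); the servlet URL in the comment is an assumption about how the ActiveMQ REST servlet is mounted, so the host, port, and path will vary with your installation:

```python
# Producer side: build a raw STOMP SEND frame to push a task onto the queue.

def stomp_frame(command, headers, body=""):
    """Build a raw STOMP frame: command, header lines, blank line, body, NUL."""
    lines = [command] + ["%s:%s" % (k, v) for k, v in headers.items()]
    return "\n".join(lines) + "\n\n" + body + "\x00"

frame = stomp_frame(
    "SEND",
    {"destination": "/queue/tasks"},
    "fetch http://example.com/feed",
)

# Consumer side: each worker pulls one task at a time, synchronously, via
# the ActiveMQ REST servlet (hypothetical URL -- adjust to your broker):
#
#   import urllib.request
#   task = urllib.request.urlopen(
#       "http://broker:8161/demo/message/tasks?type=queue"
#   ).read()
#
# Because a worker only asks for a message when it is free, slow tasks never
# pile up behind a busy client; the broker load-balances for us.

print(repr(frame))
```

The asymmetry is the point: fire-and-forget pushes over STOMP on the producing side, but blocking one-message pulls on the consuming side, so no worker ever holds a task it isn't actively working on.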
In the future, it may be a custom Twisted server, with a Berkeley DB backend for some speed.
Does anyone have any ideas on sending many messages while making sure that each message gets delivered to one and only one client? In the C or Python worlds? I would have expected something like this built on top of Spread or something similar.