I have been recently working on features in FeedLounge where we want to present a pretty URI to our users, so I decided to pull in mod_rewrite into the lighttpd mix.
No matter what I tried, I couldn’t get the rewritten URI passed down to the FastCGI processes. According to the lighttpd documentation, you only need to order the directives correctly and everything ‘just works’. I enabled request handling debugging, and I could see the URI being rewritten, but it was then passed to FastCGI as the untouched URL.
I tried older versions of lighttpd, just to see if it was a bug: same problem. I then jumped on the lighttpd IRC channel, and was told that the rewritten URI is passed to the FastCGI process as the variable REDIRECT_URI. Brilliant. I read the FastCGI spec again, and could not find anything related to this, so I ended up hacking my request class along these lines:
req.uri = env.get('REDIRECT_URI', 'BLANK')
if req.uri == 'BLANK':
    req.uri = env.get('REQUEST_URI', '')
Not the best solution, but it does get the right URI passed into the python processes to get the work done.
I should note a couple of things:
- I am much more comfortable with Apache
- I did get a response from the lighttpd community in the same amount of time I would have gotten one about a similar issue with Apache
TechCrunch has a review of 9 online feed readers, and FeedLounge was happy to be included. A follow-up podcast was also published by TalkCrunch, discussing the state of online feed reading with Newsgator, Attensa, Rojo, and FeedLounge.
I wanted to mention a few things that were either missing or just wrong in either of the entries.
In the features grid, a number of things should be changed for accuracy:
- Number of panes for viewing - FeedLounge was listed as a 3 pane interface, when we actually offer a 2 pane version as well as two 3 pane versions, so FeedLounge should be changed to show 2/3. The real question should be: “How many views is the user offered for viewing their content?”, as one size does not fit all.
- Date sorting - This was just added in the SOG release, so notch that one.
- Public subscriber counts - FeedLounge does tell the feed publisher the subscriber count for a feed, so while I would consider that public, some may not.
- Mark Item Read/Unread - Just as one view does not suit all users, neither should marking an entire feed read. There should really be a row for marking a single item read, as some (cough cough) readers do not allow that.
In the podcast, a few things are worth mentioning:
- I do like it when FeedLounge is called ‘Super Quick’. I am very happy when people notice the work we have done.
- Very interesting that no one is willing to disclose the number of users.
- Feed reading can become a ‘pillar application’ of the internet (Chris of Rojo). Growth is in growing the market, not stealing from each other.
- Michael notes that rendering HTML in the online products is crap: I don’t think that is necessarily true. FeedLounge does this as well as NNW, in my opinion. This is just a simple styling issue, and I am glad to have Alex around to make that happen.
- Michael said he had tried all 9 readers at the start, and then mentions he never tried FeedLounge at about 37 minutes in. I assume the latter is the truth. I guess I have to call ‘bullshit’ on Michael.
- Does FeedLounge have a chance (given the business model)? No, maybe, and yes. I would say we are here now and our hard expenses are already above water, so I would have to go with yes. I believed before I started, so I am only more of a believer at this point.
My opinion on what makes FeedLounge better
Frank Gruber noted in his review:
Aside from the exceptional performance rating, I wonder what else sets FeedLounge apart from its free competitors.
User experience is very hard to quantify, but my personal favorite example is this:
- Login to FeedLounge
- Press the space bar (and again, and so on…)
That is all you need to know for the simplest, most pleasing user experience online right now - in my opinion, of course. If you came from Bloglines and are used to reading that way, you may not see the benefits from the differences in FeedLounge immediately, but I do believe it will sink in after a while. If you come from a client-based reader such as NetNewsWire, FeedLounge is going to be the closest thing to emulating that richer experience. Many people who use a client-based reader have a hard time with an online reader, and FeedLounge is trying to bridge that ‘user experience’ gap.
Why we charge for FeedLounge
Many people continue to speculate about whether we should charge, or whether we will succeed, etc. FeedLounge is a business, and like any and every other business out there, it has to make money to survive.
All of the other players with “free” readers are keeping afloat by other means such as VC funding, other products keeping the reader afloat, showing ads, etc. In the end, there are costs, and someone/something has to pay them. When the VC funding runs out for some of these companies, there will need to be a way to monetize the product.
As a service, FeedLounge charges the user to keep the content downloaded, arranged, and tracked whether they are online or not. We don’t present ads, and we are not funded by other entities.
We are delivering real value to our users today, and will be able to survive as long as the user community believes we are providing value. Nothing hidden.
We just released the SOG release, with some nice features:
- TagThru™ - Using FeedLounge, whenever you tag a feed item, it will now be tagged in del.icio.us as well. Since we do the remote tagging asynchronously, you may not immediately see the item show up, but rest assured it will get there. Decoupling the tagging process also allows for hiccups in service on either side, as well as other issues (like password changes).
- Sorting items oldest first - pretty self-explanatory, and it made it up the feature voting page to warrant implementation.
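The decoupling described in the TagThru item might be sketched like this; `push` stands in for the actual del.icio.us API call, and all names here are invented for illustration, not FeedLounge’s real code:

```python
import queue

# Tagging in the UI only enqueues a job; a background worker pushes it
# to del.icio.us later. A failed push re-queues the job instead of
# losing it, which absorbs hiccups on either side.
tag_queue = queue.Queue()

def enqueue_tag(item_url, title, tags):
    # Called from the UI path: returns immediately, no remote call.
    tag_queue.put({'url': item_url, 'description': title, 'tags': tags})

def drain_once(push):
    """Run one queued job through `push` (the remote tagging call)."""
    job = tag_queue.get()
    try:
        push(job)
    except Exception:
        tag_queue.put(job)   # remote side hiccuped: retry later
    finally:
        tag_queue.task_done()
```

The point of the indirection is that a del.icio.us outage (or a changed password) only delays the tag, it never blocks or breaks the tagging action in the reader itself.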
We also expanded the FeedLounge demo to a daily ‘24 hour tour’. Some people were having issues being forced to try and make a decision in such a small time frame, so we expanded it out a bit, giving people some elbow room in taking FeedLounge for a test drive.
We are also continuing work to keep the speed up. We want it to be as fast or faster than it is now, even as we grow, so we are planning ahead on what it takes to make it happen.
And a personal note: Stephen, thank you so very much for all the time you have shared with us. It really does mean quite a bit to me, and your opinion truly is invaluable. All of your paying customers should be happy, they are getting a total deal.
PS. I still have many posts in editing talking about the development ups and downs of getting FeedLounge to where it is now, as well as other technobabble, but I can’t seem to find the time to polish them into something semi-readable. Hopefully, I can give more attention to those posts, and share the story a bit more.
PPS. Joe, this takes care of most of the issues you had with FeedLounge, maybe it is worth a shot again? You can sort by oldest, it groups the way you want, and you get 24 hours to try it out.
Joe Shaw recently compared several aggregators, and had a few things to say about FeedLounge:
This is a $5/month service with a free 3 hour tour. How am I supposed to evaluate a complete piece of software in three hours? I need to be stranded on an island with Mary Ann for three seasons. Metaphorically.
We just can’t afford for people to come over to the demo server, dump a 1000 feed OPML on us, and then never come back. The load for OPML import is insane compared with the rest of the FeedLounge experience, so we need to do something to mitigate it. Didn’t have enough time in the first tour? Take another one. We’re not preventing you from it. We may open this up more in the future, but for now we want to make sure everyone gets a chance to try the FeedLounge experience.
Fortunately FeedLounge doesn’t have featureitis, so I was pretty much able to get the gist of it within 30 minutes or so. It’s a great feed reader: the layout is clean and it has AJAXy features which make interacting with the site pleasant.
I’m glad you like it, and thanks for noticing the design. Alex and I have worked very hard to create something with a sort of ’simple elegance’ that we would want to use ourselves. Keyboard shortcuts, Ajax where it can help, but not overly used “just because”, etc. It is the only news reader I use now, and with the upcoming features, we hope to convert more people.
It does have a two-paned view, but you can’t sort oldest-first.
Sorting oldest first is a feature up for vote by the FeedLounge community on our features voting page. It is currently the 4th most popular missing feature (3rd since the performance is back from vacation). Feel free to cast your vote for this, or any other feature. You only need a forum login to vote, you needn’t be a user (yet).
I am pretty sure it didn’t do groups the way I want — by clicking on the group and displaying all new items — but we’ll never know because my three hours have been up for a good twenty-something hours.
That is exactly what it does. Create a tag with 4, 5, 20 feeds, and when you click on that tag, only the new items show from those feeds.
So the end result? I would pay $5/mo for this if it had these features, but it doesn’t, so I definitely won’t.
What is missing from FeedLounge besides sorting items from oldest to newest? We want to make FeedLounge better for everyone, and your review of all those readers seemed very balanced and introspective.
We have announced the pricing for FeedLounge. Comments welcome.
The older version of feedvalidator that I was using automatically imported timeoutsocket, whether it was used or not. This was preventing SSL feeds (such as Gmail) from working. I have it fixed locally, and the next update of FeedLounge should have the fixes.
And of course, FeedLounge will stop crying about the Atom 1.0 namespaces, which is a good thing, since our feedparser was already updated to support them.
Oh, and the obligatory error in Python for Google to find:
ssl() argument 1 must be _socket.socket, not _socketobject
Playing with Twisted for the past few days to see how it can help me with FeedLounge work.
Seems pretty straightforward to use, although you have to think a bit differently about the problems that you are trying to solve. FeedLounge was already written in such a way to do as much as possible in an asynchronous (background) fashion, so Twisted fits well with the backend design.
Side note: In playing with some of the bits of Twisted, the already excellent documentation still wasn’t enough. Googling for things would usually find the solution, but it would also find issues where the Twisted team was being “less than helpful”. When you get Ian Bicking frothed up enough to respond, you MUST be doing something wrong. Caveat coder. While this won’t stop me from using Twisted, it definitely doesn’t encourage me to participate in the community to any great extent.
With the current code, it was written to believe that there would only be one backend worker (KISS, and work your way up the complexity ladder). I was able to extend the worker to use threads to grow with us as we scaled, but once we launch live, we will absolutely need many machines performing these backend tasks.
To be able to do this, I essentially need a task queue structure that is outside of any worker process. Coming from the Java world, I would use JMS with a durable Queue as the task dispatcher. What to use in the Python world, though? After searching for many solutions, it seems as though people end up building their own one off message dispatchers for this type of task. I found quite a few options in the multicast arena, but none in the single message to one of N clients.
I have set up ActiveMQ, with a STOMP protocol adapter, and that is the task dispatcher for now. The problem with the STOMP protocol is that you subscribe, and then messages are delivered to you asynchronously, so you end up queuing the messages on each worker client as well. Since different tasks take different amounts of time to complete, you have just failed, because you are round-robining the messages to all connected clients. So, I am using the STOMP API to send new tasks, and using the ActiveMQ servlet to take tasks from the queue, one at a time, synchronously. This way, load balancing happens automatically, as the task workers only take tasks as they can work on them, and I can add more workers as the load increases.
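A sketch of the two halves of that setup; the queue name and servlet path are assumptions for illustration, not the actual FeedLounge configuration (the servlet path in particular varies by ActiveMQ version):

```python
import socket
import urllib.parse
import urllib.request

QUEUE = '/queue/feedlounge.tasks'   # hypothetical queue name

def stomp_frame(command, headers, body=''):
    """Build a raw STOMP frame: command line, headers, blank line, NUL-terminated body."""
    lines = [command] + [f'{k}:{v}' for k, v in headers.items()]
    return ('\n'.join(lines) + '\n\n' + body + '\x00').encode()

def send_task(host, port, body):
    """Producer side: CONNECT, SEND one task, DISCONNECT."""
    with socket.create_connection((host, port)) as s:
        s.sendall(stomp_frame('CONNECT', {}))
        s.recv(4096)  # swallow the CONNECTED frame
        s.sendall(stomp_frame('SEND', {'destination': QUEUE}, body))
        s.sendall(stomp_frame('DISCONNECT', {}))

def take_one_task(base_url):
    """Consumer side: pull exactly one message from ActiveMQ's REST servlet.

    Because each worker only asks for a message when it is idle, slow tasks
    never pile up behind a busy worker: this is the pull-based load balancing
    described above.
    """
    qs = urllib.parse.urlencode({'destination': QUEUE, 'type': 'queue'})
    with urllib.request.urlopen(f'{base_url}/message?{qs}') as resp:
        return resp.read().decode()
```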
In the future, it may be a custom Twisted server, with a Berkeley DB backend for some speed.
Does anyone have any ideas on sending many messages, making sure that only one message ever gets delivered to one and only one client? In the C or Python worlds? I would have expected something like this built on top of Spread or something similar.
UPDATE: Jeremy responds:
I guess the low-down is that if you’re a company that provides a service, you either need to be ready to scale or you need to be ready to limit access to your service. Users shouldn’t suffer. But if they do, at least communicate. Thankfully, that’s something FeedLounge does really, really, really well.
Agreed, agreed, agreed, agreed, thanks.
I guess they’re made of different stuff than most of the companies I deal with. Best of luck to you guys, and sorry my rant put you in the crosshairs
Alex and I are very heavily customer focused. We want to keep it that way.
In his post entitled Web 2.0 Companies NEED To Scale, Jeremy Wright makes a few good points, and a few bad ones. I am glad he chose FeedLounge as an example, as it gives me more than enough reason to respond. The points he attempts to make are mostly valid, but not always applicable to the small, bootstrapped player.
I’m not sure when building a scaleable web app became optional. But Feedster, Technorati, Delicious, Google Analytics (and numerous other Google apps of late), BlogPulse and many of the other “big apps” have “suddenly” been hit by scaleability issues.
First, building anything is optional. Building an app, building a web app, building a scalable web app: all optional. You don’t need to do any of them. Even when you choose to build a web app, you pick a target to scale to. FeedLounge chose 2 users as the initial ‘scale to’ number, to see if we could build enough functionality and a great user experience. We then released it to a few friends to see if they liked what we built. They did. When we were hit ’suddenly’ by scalability issues, we knew it would happen sometime and dealt with it accordingly.
Yeah. Here’s their process:
1. Start with a handful of users. This is too much for the shared box.
2. Move to dedicated server.
3. Add a few more users til they’re at 100. This is too much for one box.
4. Add more hardware. It’s obvious this isn’t enough.
Erm… Hello? Should the recoding have happened after step 1? I mean, if you draw a graph of “okay if we use 10% of a CPU with 10 users, with 100,000 users we’ll need 10K CPU’s” … Something’s wrong.
The FeedLounge development process was more along the lines of:
- Build a webapp, see if the features are compelling to a set of users, keeping a design in mind that is capable of scaling
- Overrun the shared server that you are using, switch to dedicated server, so you can properly measure the effects of the application.
- Add more users, adding requested features from the users, measuring the load in a fixed, known environment, and start work on the “Distributed” part of the ladder. This is where the build portion of the scalability work starts.
- Now that you believe you have something that has value, invest in the hardware and software development necessary to scale. Continue working on priority based tasks towards release of your product.
The design of the application allows scalability/availability to be added as time and money allow. The ‘recoding’ has happened every step along the way. The focus was not and has not been on scalability; it has been on whether we can provide value to our user base. Had we focused from day one on what you deem important in this article, people would be looking at a horrible application, and no one would use it for any significant amount of time. Perhaps you think that FeedLounge has infinite pockets to dip into for hardware infrastructure and development talent? Hint: we don’t.
We are on step 4, and it is going slowly since our team is so small. It was much more important to show what user interaction we could build, and then worry about total performance and scalability afterwards.
The business model also comes into play. If we had chosen to sell a software product instead of a service, the software we have would work fine for hundreds of users, no problems. Since we have finally chosen to go out as a web service, scalability to many thousands of users has only now become a very important requirement.
Nothing is wrong. It is all choices that you make trying to start a company. Alex and I chose to focus on the user experience instead of scalability, and now we know that we have to scale. I knew going into this venture that it would be a huge amount of data to move. I do have a great number of years of experience making fast things faster in the software world. You mention that it “astounds” you what people define as scalable and available. No one has ever used those two words to define FeedLounge. Nor will they, until we have proven that we can. Ask any of our alpha users. They don’t stick around for the availability, they stick around for the features that they have been given. And yes, scalability is a feature, but not one that should be the major focus in incubation.
Maybe I’m just spoiled, having worked in high performance, high availability apps before, but it constantly astounds me what some folk consider “scaleable” and “available” applications.
Scaling of resources and time is also important in the real world. Expecting Alex and Scott to scale to the level of Google and Yahoo! (or even VC funded companies like feedster and technorati) is just silly. Once you look at what we have done with the resources given, I think your tone will be quite different.
At FeedLounge, we are taking a realistic reactive approach to optimization, versus a predictive, all-encompassing approach. We designed a platform that we knew we could scale in a distributed environment, identified the areas that needed to be refactored to scale, validated those with measurements, and then wrote the code to make it a reality. Remember, premature optimization is the root of all evil.
Alex and I have officially announced the beta release date of FeedLounge. Read it here.
It will be a huge flurry of activity in a push to the release. More code, more infrastructure, less sleep
I have noticed in the past year while working on FeedLounge, that I just cannot keep up with my old reading list. This may sound crazy, but I am down to actually reading about 12-15 feeds, and skimming a couple dozen more. The rest are lost to my feature development, and just simple testing of the server and the new features.
Just a random thought to start my morning…
Note: This post was written a couple of months ago, and never posted, because I always wanted to add more to it. All testing was done while the FeedLounge alpha was on a single server, and FeedLounge has moved on from that. Seeing as I might be switching away from MySQL altogether, I thought I should post this for anyone else running into these types of issues…
I was thinking that the MySQL MyISAM engine’s table level locking was killing the FeedLounge application and user experience in the past week, since the feed update daemon does a large amount of writes to the database almost constantly. I wanted to test the InnoDB storage engine, to see if the row-level locking would offset the fact that I use GUIDs as primary keys, which make primary key inserts not in sort order. So, I started on my journey..
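The GUID problem in one line, as an illustration (`uuid4` here standing in for whatever GUID scheme FeedLounge actually uses): random keys arrive in no particular order, so each insert lands at a random spot in InnoDB’s clustered primary-key index instead of appending at the end.

```python
import uuid

# Auto-increment style keys are generated in index order...
ints = list(range(20))
print(ints == sorted(ints))    # True

# ...random GUIDs are not, which scatters the page writes.
guids = [uuid.uuid4().hex for _ in range(20)]
print(guids == sorted(guids))
```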
First step, get the data over to the test server. This step was fairly simple and straightforward, as I do a nightly database dump from the alpha server, and rsync it over to the test server, to have a geographically diverse near-online backup. This was just a matter of doing this again to have the latest. Total time: about 25 minutes.
Step two, configure the test server for InnoDB. This was the easiest step, just setting a few variables in the my.cnf file, then restarting mysql. Total time: 10 minutes.
Step three, load the data into the test database. I dropped the existing test database, and then wrote a quick sed script to change the engine type in the massive SQL backup file. I then started the load. A dump takes about 20 minutes now, and a load on the alpha box took about an hour, so I figured it would be in the one to two hour range. Boy was I wrong! The 1.9 GB SQL file ended up taking about 14 hours to load on the test box, but that wasn’t the worst thing.
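That sed pass amounts to a one-line substitution; a Python equivalent of the same idea (matching both spellings, since mysqldump wrote `TYPE=` before MySQL 4.1 and `ENGINE=` after):

```python
import re

# Matches the storage engine clause mysqldump emits in CREATE TABLE statements.
ENGINE_RE = re.compile(r'\b(ENGINE|TYPE)=MyISAM\b')

def convert_dump(lines):
    """Rewrite CREATE TABLE engine clauses in a mysqldump stream."""
    for line in lines:
        yield ENGINE_RE.sub(r'\1=InnoDB', line)
```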
On the alpha server, using MyISAM, the entire database took about 2.4GB of disk space, including the indices. Once the load was done on the test server, the InnoDB files totaled over 10.6 GB!!! You read that right: the FeedLounge database grew to take up almost 5x the space just by changing storage engines! This obviously throws our disk space calculations out the window. The bin logs on the test box total 1.7GB, so the data seemed to be correct.
So, now I am over an entire day into this testing, but the main question is, can I live with the 5x growth in data size? Doesn’t that seem a little like overkill? Is this stated anywhere? Is this just another one of those ‘known things’ that you are supposed to know as a MySQL DBA?
We are now on InnoDB, and the locking/contention issues are gone. The only issue left is the speed of count() queries on InnoDB, which is a known issue, and even documented in Zawodny’s excellent book. I am working on refactoring the code to remove the counting queries, to free up the database to do real work.
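The refactor is essentially a counter cache: keep the unread count in its own row, maintained alongside each insert, instead of running COUNT() per request (InnoDB, unlike MyISAM, has no cached row count to fall back on). A sketch using sqlite3 for illustration; the table and column names are invented, not FeedLounge’s actual schema:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, feed_id INTEGER, is_read INTEGER);
    CREATE TABLE feed_counts (feed_id INTEGER PRIMARY KEY, unread INTEGER);
""")

def add_item(feed_id):
    # Bump the cached count in the same transaction as the insert.
    conn.execute("INSERT INTO items (feed_id, is_read) VALUES (?, 0)", (feed_id,))
    conn.execute(
        "INSERT INTO feed_counts (feed_id, unread) VALUES (?, 1) "
        "ON CONFLICT(feed_id) DO UPDATE SET unread = unread + 1",
        (feed_id,))

def unread_count(feed_id):
    # O(1) lookup: no COUNT() scan against the items table.
    row = conn.execute(
        "SELECT unread FROM feed_counts WHERE feed_id = ?", (feed_id,)).fetchone()
    return row[0] if row else 0
```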
After looking at the source for Mochikit, Dojo, Prototype and others, I decided that FeedLounge could use a bit more object-orientation in the client side code.
I created a simple object structure, and wrote a few little test scripts that I ran in the browser. Since I subscribe to 1500+ feeds in FeedLounge, the feed loading/processing on the client side is getting quite slow, so I wanted to make sure that this refactoring was going to help and not hinder client-side performance.
While the test results show what I have built will help performance quite a bit (no more looping over DOM nodes, looking for the right one), the results were a bit surprising…
Given 2 objects, feed and tag, where a tag can contain feeds, I wrote a simple loop to create n feeds, and set an unread count on the feed, then add that feed to a single tag (the worst case in FeedLounge). I wanted to make sure that this solution would scale for the power user, so I started the test at n=5000.
At 5000 feeds to a tag, Firefox was running the code in less than 1 second, which seemed acceptable, so I bumped up n=50000. Firefox choked saying “Out of memory”, so I dropped it in half to n=25000.
The loop at 25000 took 3 seconds of full CPU to run, and counting the items afterwards was quite quick. On to testing IE, since we support more than just one browser…
In testing IE, I was shocked to find it taking 3 minutes at 100% CPU to run the loop at n=25000. It did eventually finish, but while I can justify 3 seconds on initial home page load to my users, I don’t think they will stand for 3 minutes! I also noticed that memory consumption in IE was over 3 times as much as Firefox. For n=25000, the delta in Firefox had been 50MB. That seems like a lot, but I also didn’t see many people subscribing to 25000 feeds. In IE, the memory usage was 160MB!
I am not too terribly worried, as the patch for power users of FeedLounge is to switch to Firefox
I have been watching the new FeedLounge install with possibly too much attention today. Is that CPU spike a performance problem, or just a random spike? Where is that disk I/O going? Why is this machine loaded and that one mostly idle? You know the drill. Obsession to the point of losing a bit of the higher level picture.
As the obsession waned today, it seems that FeedLounge is back to an alive and usable state, and for that I am very happy. Now, however, is when the work really begins. In preparation for a larger (much larger, we hope) rollout in beta and beyond, I have to step back and start removing bottlenecks in the system. That is going to take quite a bit of measurement and design, and a lot of elbow grease to accomplish correctly, lest we end up in the same position again, and very soon. I feel the pain of the Technoratis of the world.
I know where the current 80% problem(s) are in the architecture, and I will begin work of adding infrastructure, both in code and hardware, to alleviate the problem, so that I can then find the next 80%, and so on down the line. As FeedLounge continues to scale, I will also be putting into place key indicators to tell me when/where I may have a problem in the near future, rather than learn about yesterday’s scaling problem today.
Took all of the day, but the move was completed physically, now for all the configuration that is necessary to complete the move on the software level. Will post more on the FeedLounge blog.
I am heading home today, and will return home late tonight to begin the second FeedLounge migration. Seems that we just did this last week, but it was actually almost 2 months ago. Wow.
The first migration was such a smashing success that we are going to be leaving the poor old alpha server melting down in a small pool of its own solder, and moving on to our own rack in a colo closer to home. Busy holiday weekend for the FeedLounge crew.
After stopping the server tonight, it goes down for one final backup, rsyncing that over to the new server. We will also be adding the DNS changes, so that the new servers are ready to go once we finish the install.
Then it is off to the colo in the morning, with a small truckload of hardware. Cabling and installation (don’t forget the cable ties) should take about 4-6 hours, and then we can stop and have a snack (dinner?).
After all the connectivity is sorted out, then it will be a sit down root-fest, making sure all the configuration is correct, nagios is all set up, etc.
Then for the test run. Start up the daemon to start working the queue, trying to catch up on the feed backlog (making sure to time it to get a feel for the new hardware).
Need to remember to take as many pictures as necessary to document the adventure, so I can share with everyone the joy that is a colo move.
Previous experience (moving the apache.org colo) tells me estimates are nearly impossible to get right. One thing I did learn, though: a smaller crew, or at least small crews focused on single tasks, gets the job done faster than a big crew (too much consensus decision making).
Spent most of the week out here in Denver with Alex, and got some of our ’strategic planning’ done. Didn’t code as much as I had planned, but sometimes you have to have a little fun. Downtown Denver is clean and nice, I would actually consider moving here, and either living in some sort of industrial loft downtown, or somewhere up in the mountains.
Since most of the servers for the beta are here now, I have to get cracking and get all the software installed and tested, then installed in the colo. More on that later next week.
Welcome to the second in a series of posts on FeedLounge development. If you missed the first one, check it out here: FeedLounge development: the parser.
The feed validator
We have feeds being parsed, but we also wanted to help make the world a better place by allowing the end user to know whether or not the feed is valid. As Geof rightly points out, we MUST follow Postel’s Law when parsing feeds (“be conservative in what you do, be liberal in what you accept from others”). Why not give a quick heads-up to someone that might be able to help fix the problem?
So, in FeedLounge, there is a banner near the item content that tells the user the feed is invalid. The user is given a link to click on to see for themselves what is wrong with the feed, using the service from feedvalidator.org.
Hopefully someone will come along that knows the person publishing the feed, and helps nudge them to fix the problem, to make the world a better place for all feed reading entities. If this banner gets too annoying, it can be hidden down to a small icon, and the user can go on with her feed reading. And of course this setting is persistent, so the user does not continue to be annoyed.
Quick note to Sam
First, thank you from the entire FeedLounge team for the excellent code that is feedvalidator. Second, I hope you will yell at us if we are not using feedvalidator.org according to your Terms of Service. We believe that we are, as it takes the end user clicking on the link to activate feedvalidator.org; all of our backend validation is done on our own server.
Interesting stats around validation
Of all the feeds that FeedLounge is currently parsing, validating, and tracking, 34.5% have some issue, 22.5% are NOT valid, and 17.8% are completely broken (404, 304, no response at all, etc). That’s 1/5 to 1/3 of all feeds that FeedLounge might not allow users to read if we were using a strict feed parser!
As a side note, a large portion of the invalid feeds so far are from some version of WordPress. There are a lot of users out there that haven’t felt that upgrading is important. Please upgrade!!! The security fixes alone are worth it. I hope in a future release that the WordPress team might use one common codebase for feed creation, rather than separate code for each feed format. Disclaimer: dotnot.org runs WordPress, and will not change to something else anytime soon. I like WordPress, just offering constructive criticism, and I just want to give the WP team a friendly nudge from time to time
We have noted in our alpha invitations that we intend for FeedLounge (company, people and application) to be as open as we can possibly be. So along those lines, I will be posting here and on the FeedLounge Blog about architecture, features and development of FeedLounge, so that everyone can see inside the beast, so to speak.
Which feed parser should we use?
When you are building a web-based feed reader like FeedLounge, having data to read is step one. Luckily, there are many feed parsers already out there, so the “build vs. buy” decision was fairly easy. Since we are focused on developing the user experience of the feed reader, the feed parser part of the application is only a ‘necessary evil’ in the scheme of things. After checking out several possibilities, including using my own Java/SAX framework, we decided on feedparser, the canonical namesake of the feed parsing world. Built by Mark Pilgrim, and currently at version 3.3, this is probably the most forgiving feed parser on the planet. Had I gone with my own solution, I would have spent months and months trying to create something as good. And with a liberal open source license, I am allowed to use it in a commercial project like this.
- feed format support - v3.3 has impressive support for 4 feed formats and 15 different versions of those formats. Building this support myself would have taken a good chunk of time.
- encoding detection - Anyone who has done this understands the difficulty without any explanation.
- tidy support - Want clean HTML content as output? No problem, it’s in there
- translated access between specific terms - If you know channel instead of feed, these are the same thing in feedparser. Use the terms that you are comfortable with.
- relative url support - Useful to us since we are ripping the feed apart to store it. Having no relative URLs is a great relief.
- great documentation - Mark produces some of the best, most useful documentation in the open source world, and feedparser is no exception. Terse, but covering what you need to know. Need to do 401 auth? It’s in there. Wondering about ETag support? Covered.
- over 2000 unit tests - I may run into some arcane case not covered here, but the likelihood is not very high.
- date parsing - Support for every date format they came across, so you get a simple date format that is consistent from feed to feed.
- It just works! - The best is saved for last, as this point cannot be made often enough. In the months of development so far, feedparser has never been the source of a single problem. The closest we have come to a problem was our own code not checking for the existence of a field before accessing it. feedparser has been a huge net positive on development, with almost nil overhead. When alpha testers say that feeds which don’t open in nearly anything else show up fine in FeedLounge, that wasn’t us; that was feedparser and its magic voodoo.
Mark, thanks a million. I know you have ‘gone dark’ in the blogging world, but you are still rocking mine.
I should have followed this earlier, but since we launched FeedLounge yesterday, I started checking the search engines for mentions of FeedLounge. Yes, before you ask, we were already watching PubSub, Feedster, Technorati, populicio.us, etc. But Google, Yahoo, and MSN are still pretty important.
So, 24 hours after the launch, the stats for a ‘feedlounge’ search are thus:
- MSN - 636 hits
- Yahoo - 124 hits
- Google - 2 hits
2 hits with Google. WTF! Earlier yesterday, Alex had prepared and submitted a Google SiteMap, hoping that would help the situation.
Fast-forward to 34 hours post launch, and a full 24 hours after the SiteMap submission, the stats are now:
- Google - 674 hits
- MSN - 636 hits
- Yahoo - 261 hits
That changes things a bit. Is there any equivalent to SiteMap for Yahoo? Looks like they could use a bit of help.
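For anyone curious what we actually submitted: a Sitemap is just a small XML file listing your URLs. A hedged sketch of generating a minimal one follows; the URLs are placeholders, and the namespace shown is the current sitemaps.org schema (the Google schema in use at the time had a different namespace):

```python
# Sketch: build a minimal Sitemap XML file. URLs here are placeholders.
urls = ["http://example.com/", "http://example.com/about"]

lines = ['<?xml version="1.0" encoding="UTF-8"?>',
         '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
for u in urls:
    # Each page gets one <url> entry; <loc> is the only required child.
    lines.append("  <url><loc>%s</loc></url>" % u)
lines.append("</urlset>")

sitemap = "\n".join(lines)
print(sitemap)
```

The file then gets uploaded to the site and its URL submitted to the search engine, which crawls it on its own schedule.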
Having been quiet in the back of the room for the last few months, Alex and I are proud to announce our new project to the world.
Welcome to the world of FeedLounge, a web-based news reader designed and built to act like a rich client, but delivered in a web client package.
But Scott, why did you build it?
I’m so pleased that you asked.
I was continually frustrated by the fact that my favorite news reader, NetNewsWire, only allowed me to read items on one machine. While that is not the fault of a rich client application like NNW, my attempt to read news from many machines was a constant frustration. Since I do a lot of roaming work on my laptop, and have a couple of desktops in various locations, I used several machines and news reading applications throughout my work day. The problem with all of this is syncing. NNW does sync with Bloglines, but not in a good way: Bloglines forces itself to be the master in the relationship, so the sync is not a two-way street. Thus, I co-created FeedLounge. While I long for the user experience that a rich client can provide, I believe web applications are the correct delivery mechanism for this type of application. I hope everyone who uses the application finds it to be as useful as a rich client application.
Alex led the website development, user experience and UI design, while I focused on the backend application, database, etc. Building the application has been an awesome experience, and we are all excited to share it with you. The FeedLounge team is larger than just the two of us, and the others will start speaking up as we go along. I will be covering the development of the backend, etc., since I am a geek like that.
Since Alex and I are both believers in being open and up front, we will both be talking about most aspects of FeedLounge on our own blogs, as well as the FeedLounge blog. I can’t wait to ‘brain dump’ all of this info that has been pent up waiting for the Alpha. So pull up a chair, sit down, check out the website, and let us know what you think. Hopefully we can push through the alpha test and into beta to get the application into more hands.