{"id":93,"date":"2003-11-17T20:58:32","date_gmt":"2003-11-17T19:58:32","guid":{"rendered":"http:\/\/wp.devco.net\/?p=93"},"modified":"2009-10-09T17:32:26","modified_gmt":"2009-10-09T16:32:26","slug":"desktop_aggregators","status":"publish","type":"post","link":"https:\/\/www.devco.net\/archives\/2003\/11\/17\/desktop_aggregators.php","title":{"rendered":"Desktop Aggregators"},"content":{"rendered":"
I am sick of Newzcrawler<\/a>. It used to have a genuinely useful liberal parser that coped with most feeds; it now uses MSXML<\/a> as its core parser and has been turned into the world's strictest one. That would be all well and good in a perfect world, but we do not live in one. Earlier today I posted a quote about Technorati<\/a>'s growing pains, which stated that it indexes 1.2 million weblogs and is adding 4,000 to 5,000 new ones every day. For a feed reader to expect all of these blogs to serve valid XML is ludicrous.<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","footnotes":""},"categories":[5],"tags":[43],"_links":{"self":[{"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/posts\/93"}],"collection":[{"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/comments?post=93"}],"version-history":[{"count":1,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/posts\/93\/revisions"}],"predecessor-version":[{"id":897,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/posts\/93\/revisions\/897"}],"wp:attachment":[{"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/media?parent=93"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/categories?post=93"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.devco.net\/wp-json\/wp
\/v2\/tags?post=93"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
\nThe developers of Newzcrawler have stated that they are working on the problem and have released a beta with a new parser. Their previous “stable” release was a mess of crashes and instabilities, and even though these were reported on their forums they still shipped it as a stable version. Now we are back in beta territory and it is even worse.
\nSo my search for a replacement reader led me to Sharpreader<\/a>, a rather nice-looking aggregator for Windows. It is written in .NET, so you will need the 20 MB .NET Framework, but so far it has been well worth the hassle. Sharpreader – while still beta – is very usable and attractive, and it is a lot better at parsing dodgy RSS, though still not perfect: it has issues with sites like Rootprompt<\/a>, but then their feed really does suck. The only Newzcrawler feature I am going to miss so far is NNTP, as I read quite a few mailing lists via Gmane<\/a>. The author<\/a> believes in Parsing At All Costs<\/a>, so that is encouraging.<\/p>\n","protected":false},"excerpt":{"rendered":"
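The strict-versus-liberal point above can be sketched in a few lines. This is a minimal illustration, not how MSXML or any particular aggregator is implemented: it uses Python's strict `xml.etree` parser to show how one well-formedness error (an unescaped ampersand, a very common feed bug) kills an entire feed, and a crude hypothetical `parse_liberal` fallback to show the "parse at all costs" alternative. Real liberal parsers do far more repair work than this.

```python
# Strict vs. liberal feed parsing: a minimal sketch.
# The feed below is typical "dodgy RSS": a bare "&" makes it invalid XML,
# so a strict parser refuses the whole document outright.
import re
import xml.etree.ElementTree as ET

dodgy_rss = """<rss version="2.0">
  <channel>
    <title>News & Views</title>
    <item><title>First post</title></item>
  </channel>
</rss>"""

def parse_strict(text):
    """Strict behaviour: one well-formedness error rejects the whole feed."""
    try:
        ET.fromstring(text)
        return "parsed"
    except ET.ParseError:
        return "rejected"

def parse_liberal(text):
    """Hypothetical liberal fallback: escape bare ampersands, then retry.
    (Real liberal parsers recover from far more than this one fix.)"""
    repaired = re.sub(r"&(?!amp;|lt;|gt;|quot;|apos;|#)", "&amp;", text)
    try:
        root = ET.fromstring(repaired)
        return root.findtext("channel/title")
    except ET.ParseError:
        return None

print(parse_strict(dodgy_rss))   # → rejected
print(parse_liberal(dodgy_rss))  # → News & Views
```

The same feed that a strict parser throws away entirely still yields its channel title under even this one-line repair, which is why liberal parsing matters when thousands of new, frequently invalid feeds appear every day.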