{"id":16,"date":"2003-09-24T19:16:03","date_gmt":"2003-09-24T18:16:03","guid":{"rendered":"http:\/\/wp.devco.net\/?p=16"},"modified":"2009-10-09T17:35:17","modified_gmt":"2009-10-09T16:35:17","slug":"fighting_email_harvesters_and_other_unfriendlies","status":"publish","type":"post","link":"https:\/\/www.devco.net\/archives\/2003\/09\/24\/fighting_email_harvesters_and_other_unfriendlies.php","title":{"rendered":"Fighting email harvesters and other unfriendlies."},"content":{"rendered":"
Since I put up this site I have been paying attention to my log files to see how it gets accessed. One of my main motivations for putting up a personal site is not to publish content or personal ideas but to study the blogging world, how it communicates and how information flows. Be sure to read http:\/\/www.robotstxt.org\/ for more information and, most importantly, take a look at their database of search engines, where you will find entries for common engines like the Googlebot. All reputable search engines are registered there. I also found a good tutorial about robots.txt here. To take this further you can deny IP addresses from places you do not like: use simple “deny from” entries in the .htaccess file for specific IP addresses or, for something more flexible, mod_rewrite is useful again since it supports regular expressions. In this example I will deny some RIAA IP addresses and a spybot.<\/p>\n I hope this is of some help; I will follow up later with details of log analysis tools that can show you stats.<\/p>\n","protected":false},"excerpt":{"rendered":" Since I put up this site I have been paying attention to my log files to see how it gets accessed. One of my main motivations for putting up a personal site is not to publish content or personal ideas but to study the blogging world, how it communicates and how information flows.
\nObviously RSS [1, 2] and other XML technologies are the underlying technologies that enable interesting services such as Technorati, Feedster, Blogosphere, Geoblog, Blogshares and many more, so a study of them is essential. I have been looking for the RSS book for a while and might have to resort to ordering it from Amazon.
\nThere is however a lot more to a website than an XML file. The net is constantly being trawled by unwelcome guests: these range from email address harvesters and services that “monitor” your server to badly behaved search engine crawlers and bad actors like the RIAA.
\nHere I present some strategies for combating these visitors, from simply asking the well-behaved ones to go away using a robots.txt file to forcing the bad ones away using mod_rewrite and other such methods.<\/p>\n
\nFirst off you need to be able to make sense of a web server log by looking at it. You want to be using the combined format or even a custom one that lists more information. The most important fields to log are Remote IP Address\/Host, Connection Status, Request Protocol, Time, First Line of Request, Referer and User Agent.
\nOf the above fields the most important ones are what they are trying to see (First Line of Request, Request Protocol), who they are (Host\/IP), what software they are using (User Agent) – often spoofed – and where they were before (Referer).
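\nFor reference, this is the stock definition of the combined format; a minimal sketch of the matching httpd.conf lines, assuming the usual logs\/access_log path (adjust for your install):<\/p>\n\n
# host, identd, user, time, request line, status, bytes, Referer, User-Agent\nLogFormat \"%h %l %u %t \\\"%r\\\" %>s %b \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\" combined\nCustomLog logs\/access_log combined<\/pre>\n<\/blockquote>\n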
\nOnce you have this going and can understand it you will notice a whole long list of User Agents. This is a good indication of what is accessing your site. In some cases it will be obvious, like with search engines: their referer or user agent fields usually include a URL where you can find out more. Google, for example, uses “Googlebot\/2.1 (+http:\/\/www.googlebot.com\/bot.html)” as its User Agent. Others are web browsers; your standard Internet Explorer says something like “Mozilla\/4.0 (compatible; MSIE 5.01; Windows NT 5.0)”.
\nI found a really useful resource that has a database of User Agents; they list 492 at the moment. They are conveniently categorised into Search Engines, Offline Browsers, Validators and Email Collectors, and it can even create config files for robots.txt or mod_rewrite, but more on this a bit later.
\nNow for combating unwanted traffic. Unwanted traffic can be a search engine that you do not like or one that simply trawls your site too often, it can be people mirroring your site using software that makes many parallel requests, it can be people you do not like (the RIAA comes to mind) and finally it can be people harvesting email addresses for spam lists.
\nThe good spiders or robots will honour the Robots Exclusion Protocol. This protocol allows you to tell a bot to access only certain parts of your site, or none at all. It takes two forms: including a Meta Tag in your HTML, or creating a file called robots.txt in the root of your server. The file is pretty simple and controls bots based on their user agent. The sample file below will block the Inktomi\/HotBot search engine from seeing any pages on your server.<\/p>\n\n
User-agent: Slurp\nDisallow: \/<\/pre>\n<\/blockquote>\n
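\nThe Meta Tag form mentioned above does the same thing on a per-page basis; a minimal sketch of the tag you would put in a page’s head section:<\/p>\n\n
&lt;meta name=\"robots\" content=\"noindex,nofollow\"&gt;<\/pre>\n<\/blockquote>\n
\nnoindex asks compliant robots not to index the page and nofollow asks them not to follow its links; like robots.txt this is purely advisory.<\/p>\n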
\nThe above approach is effective for controlling the good guys but is unfortunately of no use against email harvesters and other such things. For combating these you need to get tough. A few Apache modules are useful here, most importantly mod_rewrite.
\nUsing mod_rewrite is not trivial and I suggest you play somewhere other than a live server before going forward with this. I also suggest starting small, with one site or possibly even a subset of a site. Doing this will also slow things down slightly, so if you have a server that is under high load this may not be for you.
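\nBefore trying any of the examples below, check that mod_rewrite is actually compiled in or loaded; on a typical install the relevant httpd.conf line looks something like this (the module path varies by distribution):<\/p>\n\n
LoadModule rewrite_module modules\/mod_rewrite.so<\/pre>\n<\/blockquote>\n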
\nThe basic concept here is to use RewriteCond to pick up on Browsers (Agents), Remote Hosts, Users, Access Methods or even URIs, set an environment variable that will classify them as such, and then use mod_rewrite to either send them to a nice error page or simply return an error such as a 403. You can put this in the main webserver configuration file or in the .htaccess file for certain directories.
\nI will use .htaccess files in my examples, so I can also use the .htaccess “deny from” and “allow from” lines to block hosts; this should be a bit faster than using mod_rewrite for denying specific hosts.
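\nAs an aside, the environment-variable classification described above can also be done without mod_rewrite at all, using mod_setenvif; a minimal sketch (the two harvester User Agents are just well-known examples):<\/p>\n\n
# tag known email harvesters with the bad_bot variable, case-insensitively\nSetEnvIfNoCase User-Agent \"EmailCollector\" bad_bot\nSetEnvIfNoCase User-Agent \"EmailSiphon\" bad_bot\n# then refuse any request that carries the tag\nOrder allow,deny\nallow from all\ndeny from env=bad_bot<\/pre>\n<\/blockquote>\n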
\nA simple .htaccess file that will block the same bot as above looks like this; it will return a 403 error:<\/p>\n\n
Order allow,deny\nallow from all\nRewriteEngine on\nRewriteBase \/\nRewriteCond %{HTTP_USER_AGENT} ^Slurp [NC]\nRewriteRule .* - [F,L]<\/pre>\n<\/blockquote>\n
\nTo take this further you can deny IP addresses from places you do not like: simple “deny from” entries handle specific hosts, while mod_rewrite is useful again for anything that needs a regular expression. In this example I will deny some RIAA IP addresses and the NameProtect spybot:<\/p>\n\n
Order allow,deny\ndeny from 80.88.129.28\ndeny from 211.157.36.7\ndeny from riaa.com\ndeny from mpaa.com\nallow from all\nRewriteEngine on\nRewriteBase \/\n# the NameProtect spybot crawls from 12.148.209.192-255\nRewriteCond %{REMOTE_ADDR} ^12\\.148\\.209\\.(19[2-9]|2[0-4][0-9]|25[0-5])$ [OR]\nRewriteCond %{HTTP_USER_AGENT} ^Slurp [NC]\nRewriteRule .* - [F,L]<\/pre>\n<\/blockquote>\n
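\nAnd if, as mentioned earlier, you would rather send offenders to a friendly explanation page than a bare 403, a RewriteRule can redirect them instead; a sketch assuming a hypothetical page called \/gohome.html:<\/p>\n\n
RewriteEngine on\nRewriteBase \/\nRewriteCond %{HTTP_USER_AGENT} ^Slurp [NC]\n# exclude the explanation page itself or the redirect would loop forever\nRewriteCond %{REQUEST_URI} !^\/gohome\\.html$\nRewriteRule .* \/gohome.html [R,L]<\/pre>\n<\/blockquote>\n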
\nI hope this is of some help; I will follow up later with details of log analysis tools that can show you stats.<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","footnotes":""},"categories":[7],"tags":[63],"_links":{"self":[{"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/posts\/16"}],"collection":[{"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/comments?post=16"}],"version-history":[{"count":1,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/posts\/16\/revisions"}],"predecessor-version":[{"id":923,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/posts\/16\/revisions\/923"}],"wp:attachment":[{"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/media?parent=16"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/categories?post=16"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/tags?post=16"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}