I have been using the bot traps described by others, only to find that the bots avoid them by scanning the robots.txt file.
The solution is to unlist your bot trap. "Oh yeah," you say, "good bots will just fall into it." That's OK: we ignore the major search engines and only scan for user agents starting with "Mozilla". Once we exclude the major search engines and the other bots known to send Mozilla-style user agents, everything that is left is a bot faking a web browser, and we can ban it.
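The filtering described above could be sketched roughly like this. The whitelist and the function name are my own illustrative assumptions, not a complete or authoritative list of crawlers that use Mozilla-style user agents:

```python
# Hypothetical sketch of the user-agent filter for hits on the unlisted
# trap URL.  KNOWN_GOOD is illustrative only; a real deployment would
# need a fuller list of crawlers that send "Mozilla/..." user agents.

KNOWN_GOOD = (
    "googlebot", "bingbot", "slurp", "baiduspider", "duckduckbot",
)

def should_ban(user_agent: str) -> bool:
    """Return True if a hit on the trap page deserves a ban.

    Only user agents claiming to be a browser ("Mozilla/...") are
    candidates; known search-engine bots are given a pass.
    """
    ua = user_agent.lower()
    if not ua.startswith("mozilla"):
        return False  # honestly self-identified bots are ignored here
    if any(bot in ua for bot in KNOWN_GOOD):
        return False  # major search engines are excluded
    return True  # claims to be a browser yet hit a hidden link: ban it
```

A plain Firefox user agent hitting the trap would be banned (a human would never see the hidden link), while Googlebot, whose user agent starts with "Mozilla/5.0 (compatible; Googlebot/..." but contains "googlebot", is passed over.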
I have a beta test of the unlisted bot trap up and running, and I will soon see what it catches.
If this still doesn't work, then the bots must be avoiding the traps in some other way.
Update on this: no one has fallen into the new bot trap. This tells me the bots are not really spidering from image links; they must be following inbound links from Google.
I'm going to switch to using text links to the trap and see what I catch.
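A text-link trap might look something like the sketch below: a link that is present in the HTML for any crawler that parses anchors, but that a human visitor is very unlikely to notice or click. The URL and the styling are hypothetical assumptions, not the actual trap:

```python
# Hypothetical sketch: render a text link to the unlisted trap page that
# humans are unlikely to click (a single tiny period) but that any
# link-following bot will still fetch.  TRAP_URL is a made-up example.

TRAP_URL = "/hidden-trap-page.html"  # hypothetical unlisted URL

def trap_link_html() -> str:
    # A lone period as the link text is nearly invisible in running
    # prose, yet the href is plainly there for crawlers to follow.
    return ('<a href="%s" style="font-size:1px;">.</a>' % TRAP_URL)

print(trap_link_html())
```

One design note: because the trap URL is deliberately absent from robots.txt, any ban logic still needs the user-agent whitelist described earlier, or well-behaved search crawlers would be banned too.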