robots.txt quandary

Discuss webmaster issues, website development and resources. Ask questions about website building, coding, SEO, etc. here. NO ADS!

Moderators: magnetize, Oosha, ftello, shezz


Postby Scooter » Thu Aug 20, 2009 8:12 pm

When my house is on fire I call the fire department...

When an intruder is entering my home I call the police -- assuming the intruder hasn't already run off, terrified by the .50 cal. barrel staring them in the face...

And when I have a website/blog problem I respectfully find the solution here at THW!

For months now I've been creating blogs as a link resource for my primary sites. Links are links -- especially when they have relevant content to support them, right? Anyhow... I also generally submit these Blogger URLs to MSN (Bing) for inclusion in their search engine. However, just today I stumbled upon the realization that they are NOT being crawled by the MSN (Bing) bot, due to what I believe to be a robots.txt "Disallow" rule.
Here's the message I get when I check "crawl errors":

"Bing encountered a "404 File Not Found" HTTP status code when last attempting to download these URLs."

My Google Webmaster Tools displays the following when I analyze the "test robots.txt" file for any of my given blogs:

User-agent: Mediapartners-Google

User-agent: *
Disallow: /search

Sitemap: h**p://
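For anyone who wants to see what those two lines actually block, here's a quick sketch using Python's built-in robots.txt parser (the blogspot address below is just a made-up placeholder, not my actual blog):

```python
# Sketch: feed the same rules shown above into the stdlib parser
# and check which paths a non-Google bot may fetch.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Mediapartners-Google

User-agent: *
Disallow: /search
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Bing's bot falls under the "User-agent: *" group, so only paths
# starting with /search are off-limits; ordinary posts are allowed.
print(rp.can_fetch("bingbot", "http://example.blogspot.com/search?q=foo"))       # False
print(rp.can_fetch("bingbot", "http://example.blogspot.com/2009/08/post.html"))  # True
```

If that's right, the Disallow only covers the /search pages, not the posts themselves -- which would make the "404 File Not Found" message from Bing a separate issue.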

So WTF! How do I get rid of this "disallow" tag so I can get ALL search engines to crawl my blogs?

Thanks for ANY advice and solutions to this problem...

Posts: 65
Joined: Sat Apr 04, 2009 7:32 pm
Location: Wisconsin, USA

