Web Access Patterns of Actual Human Visitors and Web Robots: A Correlated Examination

DOI: 10.4018/978-1-5225-3870-7.ch012

Abstract

Web robots are autonomous software agents that crawl websites in a mechanized way for malicious and non-malicious reasons. With the popularity of Web 2.0 services, web robots are proliferating and growing in sophistication, and web servers are flooded with their access requests. Web access requests are recorded in the form of web server logs, which contain significant knowledge about the access patterns of visitors. The presence of web robot requests in log repositories distorts the actual access patterns of human visitors. These actual human access patterns are potentially useful for enhancing services to improve visitor satisfaction or for optimizing server resources. In this chapter, the correlative access patterns of human visitors and web robots are discussed using the web server access logs of a portal.

Introduction

Web robots are automated programs mainly designed to traverse websites over the internet. They are used for a variety of functions, including searching, indexing, hacking, scraping, spamming, and spying (Gaffan, 2012). With the advent of Web 2.0 services, web robots play a key role in nearly everything we do online and shape our web experience. It is believed that the first web robots were introduced in 1993, and since their origin they have been multiplying at an unprecedented rate. Web robots are simple to create and are very effective at automating the collection of information (Tan & Kumar, 2002). Depending on their core functionality, web robots are classified as follows (Doran & Gokhale, 2011):

  1. Indexers (Search Engine Crawlers): Harvest as much web content as possible on a regular basis to build and maintain large search indexes.

  2. Analyzers (Shopping Bots): Crawl the web to compare prices and products sold by different e-commerce sites.

  3. Experimenters (Focused Crawlers): Seek and acquire web pages belonging to pre-specified thematic areas.

  4. Harvesters (Email Harvesters): Collect email addresses on behalf of email marketing companies or spammers.

  5. Verifiers (Site-Specific Crawlers): Perform various website maintenance chores, such as mirroring websites or discovering broken links.

  6. RSS Crawlers: Retrieve information from RSS feeds on a website or blog.

  7. Scrapers: Automatically create copies of websites for malicious purposes.

The common perception is that the bulk of a web portal's server resources goes toward handling traffic generated by human visitors. Recent reports challenge this perception (Gaffan, 2012), stating that the major portion of web traffic is generated by automated software agents. Most website owners simply rely on web analytics tools (“Google Analytics,” 2013) to track who is visiting their site. However, such tools do not show roughly 51% of a site's traffic, which includes seriously shady non-human visitors such as web robots.
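
To make the separation of robot and human traffic in raw server logs concrete, the following minimal sketch parses Apache combined-format log lines and applies a simple user-agent keyword heuristic. The log pattern, keyword list, and file name are assumptions for illustration, not the detection method used in this chapter; real robot detection requires much richer evidence than the user-agent string.

```python
import re

# Assumed Apache "combined" log format; adjust the pattern to your server's configuration.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" (?P<status>\d{3}) \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

# Illustrative keyword list only; many robots disguise their user-agent strings.
ROBOT_KEYWORDS = ("bot", "crawler", "spider", "slurp", "scraper")

def is_robot(user_agent):
    """Flag a request as robot-generated if its user-agent contains a known keyword."""
    ua = user_agent.lower()
    return any(keyword in ua for keyword in ROBOT_KEYWORDS)

def split_log(path):
    """Split a log file into (human, robot) request lists, skipping malformed lines."""
    humans, robots = [], []
    with open(path) as log_file:
        for line in log_file:
            match = LOG_PATTERN.match(line)
            if not match:
                continue
            entry = match.groupdict()
            (robots if is_robot(entry["agent"]) else humans).append(entry)
    return humans, robots

if __name__ == "__main__":
    humans, robots = split_log("access.log")  # hypothetical log file name
    print(f"human requests: {len(humans)}, robot requests: {len(robots)}")
```

A heuristic of this kind only catches robots that announce themselves; it serves here to show why robot requests must be filtered out before human access patterns can be studied.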

Key Terms in this Chapter

Popular Web Resources: Web resources frequently accessed by users through any version of the HTTP protocol (for example, HTTP/1.1 or HTTP-NG).

Access Patterns: Repeated web user access behavior over a period of time.

Visitors: A person or software agent that requests services from a web server.

Access Paths: The sequence of URLs traversed to retrieve particular resources.

Web Robots: Mechanized software programs designed to traverse and retrieve web resources for malicious and non-malicious reasons.

Sessions: A set of web resources requested within a particular time window during a single website visit (see the sketch after this list).

Response Codes: Status codes generated by web servers in response to HTTP requests, indicating the outcome of each request.
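
As a concrete illustration of the Sessions and Access Paths terms above, the sketch below groups parsed log entries by client IP and starts a new session whenever the gap between consecutive requests exceeds a timeout. The 30-minute threshold and the field names are assumptions for illustration, not values prescribed by the chapter.

```python
from collections import defaultdict
from datetime import timedelta

SESSION_TIMEOUT = timedelta(minutes=30)  # assumed inactivity threshold

def build_sessions(entries):
    """entries: dicts with hypothetical 'ip', 'time' (datetime), and 'url' fields."""
    # Group requests by client IP in chronological order.
    by_ip = defaultdict(list)
    for entry in sorted(entries, key=lambda e: e["time"]):
        by_ip[entry["ip"]].append(entry)

    sessions = []
    for ip, requests in by_ip.items():
        current = [requests[0]]
        for prev, curr in zip(requests, requests[1:]):
            if curr["time"] - prev["time"] > SESSION_TIMEOUT:
                sessions.append(current)  # close the session on a long gap
                current = []
            current.append(curr)
        sessions.append(current)
    return sessions

def access_path(session):
    # The access path is the ordered sequence of URLs requested within one session.
    return [request["url"] for request in session]
```

Sessionizing the log in this way is what makes it possible to compare the access paths of human visitors and web robots rather than individual requests.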
