Context Oriented Search Engine with Web Crawler
Joined: Mar 2010
22-04-2010, 12:32 AM
Search Engine with Web Crawler
A web crawler (also known as a Web spider or Web robot) is a program or automated script which browses the World Wide Web in a methodical, automated manner.
This process is called Web crawling or spidering. Search engines use spidering as a means of providing up-to-date data. Web crawlers download and index web pages to enable fast searches.
A Web crawler starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
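The seed/frontier loop above can be sketched in Java. This is a minimal illustration, not a real crawler: the URLs are made up, and an in-memory map of page HTML stands in for actual HTTP downloads so the link-extraction and frontier logic can be seen in isolation.

```java
import java.util.*;
import java.util.regex.*;

public class FrontierDemo {
    // Tiny in-memory "web": URL -> page HTML (a stand-in for real HTTP fetches).
    static Map<String, String> web = Map.of(
        "http://a.example", "<a href=\"http://b.example\">b</a> <a href=\"http://c.example\">c</a>",
        "http://b.example", "<a href=\"http://c.example\">c</a>",
        "http://c.example", "no links here");

    // Visits every reachable page starting from the seeds, breadth-first.
    static List<String> crawl(List<String> seeds) {
        Deque<String> frontier = new ArrayDeque<>(seeds);      // the crawl frontier
        Set<String> visited = new LinkedHashSet<>();
        Pattern link = Pattern.compile("href=\"(http://[^\"]+)\"");
        while (!frontier.isEmpty()) {
            String url = frontier.poll();
            if (!visited.add(url)) continue;                   // skip already-visited URLs
            Matcher m = link.matcher(web.getOrDefault(url, ""));
            while (m.find()) frontier.add(m.group(1));         // add discovered links
        }
        return new ArrayList<>(visited);
    }

    public static void main(String[] args) {
        System.out.println(crawl(List.of("http://a.example")));
    }
}
```

A production crawler would fetch pages over HTTP and parse HTML properly, but the seed-to-frontier flow is the same.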
There are two important characteristics of the Web that make Web crawling very difficult: its large volume and its rate of change, as a huge number of pages are added, changed and removed every day. In addition, network bandwidth has improved less than processing speed and storage capacity.
The large volume implies that the crawler can only download a fraction of the Web pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that by the time the crawler is downloading the last pages from a site, it is very likely that new pages have been added to the site, or that pages have already been updated or even deleted.
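One common way to prioritize downloads is to keep the frontier in a priority queue, so the crawler always fetches the most important known URL next. The sketch below assumes each candidate URL already carries an importance score (the URLs and scores are invented for illustration); how that score is computed is the selection policy's job.

```java
import java.util.*;

public class PriorityFrontier {
    record Candidate(String url, double score) {}  // higher score = download sooner

    // Drains the frontier in priority order, highest-scoring URL first.
    static List<String> downloadOrder(List<Candidate> candidates) {
        PriorityQueue<Candidate> frontier = new PriorityQueue<>(
            Comparator.comparingDouble(Candidate::score).reversed());
        frontier.addAll(candidates);
        List<String> order = new ArrayList<>();
        while (!frontier.isEmpty()) order.add(frontier.poll().url());
        return order;
    }

    public static void main(String[] args) {
        System.out.println(downloadOrder(List.of(
            new Candidate("http://example.com/archive/2008", 0.2),
            new Candidate("http://example.com/", 0.9),
            new Candidate("http://example.com/news", 0.7))));
    }
}
```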
The behavior of a Web crawler is the outcome of a combination of policies:
a selection policy that states which pages to download,
a re-visit policy that states when to check for changes to the pages,
a politeness policy that states how to avoid overloading websites, and
a parallelization policy that states how to coordinate distributed web crawlers.
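Of these, the politeness policy is the easiest to demonstrate in isolation: track the last fetch time per host and refuse to hit the same host again before a minimum delay has passed. The sketch below uses a made-up 2-second delay and hypothetical URLs; real crawlers also honor each site's robots.txt.

```java
import java.net.URI;
import java.util.*;

public class PolitenessPolicy {
    static final long DELAY_MS = 2000; // hypothetical minimum gap between hits to one host
    static Map<String, Long> lastFetch = new HashMap<>();

    // Returns true if the host may be fetched at time nowMs, recording the fetch.
    static boolean mayFetch(String url, long nowMs) {
        String host = URI.create(url).getHost();
        Long last = lastFetch.get(host);
        if (last != null && nowMs - last < DELAY_MS) return false; // too soon, stay polite
        lastFetch.put(host, nowMs);
        return true;
    }

    public static void main(String[] args) {
        System.out.println(mayFetch("http://example.com/a", 0));    // first hit is allowed
        System.out.println(mayFetch("http://example.com/b", 1000)); // same host, too soon
        System.out.println(mayFetch("http://other.org/x", 1000));   // different host is fine
        System.out.println(mayFetch("http://example.com/b", 2500)); // delay has elapsed
    }
}
```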
In this search engine project, the web crawler will start with a set of seeds and will select pages using filters and policies. For example, if we are building a blog search engine, the crawler will be programmed to download only blog-related pages.
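For the blog example, such a filter could be as simple as a URL predicate applied before a page is added to the frontier. The pattern below is a rough, hypothetical heuristic (common blog-hosting names and "/blog" paths), not a complete selection policy:

```java
import java.util.List;
import java.util.regex.Pattern;

public class BlogFilter {
    // Hypothetical selection filter: keep only URLs that look blog-related.
    static final Pattern BLOG = Pattern.compile("(^|[./])blog[./]|blogspot|wordpress");

    static boolean accept(String url) {
        return BLOG.matcher(url.toLowerCase()).find();
    }

    public static void main(String[] args) {
        for (String u : List.of(
                "http://blog.example.com/post/1",  // accepted: "blog." subdomain
                "http://example.com/blog/hello",   // accepted: "/blog/" path segment
                "http://example.com/shop/cart"))   // rejected
            System.out.println(u + " -> " + accept(u));
    }
}
```

A more robust filter would also inspect the downloaded page content, not just the URL.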
The crawler can be developed as a simple Java program. The program will download pages and index them in a database for faster searching.
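The usual structure for such an index is an inverted index: a mapping from each term to the set of pages that contain it. The sketch below keeps the index in memory as a stand-in for the database table the project would actually use; the URLs and text are invented for illustration.

```java
import java.util.*;

public class InvertedIndex {
    // term -> set of URLs containing it (in-memory stand-in for a database table)
    static Map<String, Set<String>> index = new HashMap<>();

    // Tokenizes the page text and records each term against the page's URL.
    static void indexPage(String url, String text) {
        for (String term : text.toLowerCase().split("\\W+"))
            if (!term.isEmpty())
                index.computeIfAbsent(term, k -> new TreeSet<>()).add(url);
    }

    public static void main(String[] args) {
        indexPage("http://a.example", "Java web crawler tutorial");
        indexPage("http://b.example", "Web search engines explained");
        System.out.println(index.get("web")); // every page containing "web"
    }
}
```

With this structure a keyword lookup is a single map access instead of a scan over all stored pages.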
The search front end can be developed using JSP/Servlets and Ajax. The search engine will accept the search keywords and will search the database for them using ranking algorithms. The most relevant results will be shown as a paged list, ordered by merit.
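The keyword search, ranking, and paging steps could be sketched as below. This is a toy relevance measure (score = how many query keywords a page contains) over an in-memory page list; the real project would query its database, and the `Page` record and URLs here are hypothetical.

```java
import java.util.*;

public class KeywordSearch {
    record Page(String url, String text) {}

    // Returns one page of results: pages matching the query, best score first.
    static List<String> search(List<Page> pages, String query, int pageNo, int pageSize) {
        String[] terms = query.toLowerCase().split("\\s+");
        return pages.stream()
            .map(p -> Map.entry(p.url(),                        // score each page
                 Arrays.stream(terms).filter(t -> p.text().toLowerCase().contains(t)).count()))
            .filter(e -> e.getValue() > 0)                      // drop non-matches
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .map(Map.Entry::getKey)
            .skip((long) pageNo * pageSize)                     // paging
            .limit(pageSize)
            .toList();
    }

    public static void main(String[] args) {
        List<Page> pages = List.of(
            new Page("http://a.example", "java web crawler"),
            new Page("http://b.example", "web search engine in java"),
            new Page("http://c.example", "cooking recipes"));
        System.out.println(search(pages, "java web", 0, 10));
    }
}
```

The servlet would pass the request's keywords and page number to a method like this and render the returned URLs, with Ajax fetching subsequent pages.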