Peer to Peer (P2P) Search Engine
project topics

Posts: 2,492
Joined: Mar 2010
#1
21-04-2010, 11:59 PM


Peer to Peer (P2P) Search Engine

The World Wide Web (WWW) is emerging as a source of online information at a very fast rate. Its content is considerably more diverse, and certainly much larger, than is commonly understood, and the information content of the WWW is growing at roughly 200% annually. The sheer volume of information available makes searching for specific information a daunting task. Search engines are efficient tools for finding relevant information in the rapidly growing and highly dynamic web, and quite a number of them are available today. Every search engine consists of three major components: a crawler, an indexed repository and search software. The web crawler fetches web pages (documents) recursively according to a predefined importance metric for web pages; example metrics are a page's back-link count, forward-link count, location and PageRank. The indexer parses these pages to build an inverted index, which the search software then uses to return relevant documents for a user query.
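The indexer/search-software split described above can be sketched in a few lines. This is a minimal illustration only, assuming a tiny in-memory corpus; the document names and texts are invented, and a real engine would add tokenization, stemming and ranking.

```python
# Minimal sketch of an inverted index with AND-semantics search.
from collections import defaultdict

def build_inverted_index(documents):
    """Map each term to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

# Hypothetical corpus for illustration.
corpus = {
    "d1": "peer to peer search engine",
    "d2": "search engine crawler index",
    "d3": "deep web databases",
}
idx = build_inverted_index(corpus)
print(sorted(search(idx, "search engine")))  # ['d1', 'd2']
```

The inverted index is built once by the indexer; each query then touches only the postings for its terms rather than scanning every document.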

Even though the search engine is a very useful tool, the technology is young and has quite a number of problems, which grow worse as the web expands. Searching the web today is like dragging a net across the surface of an ocean: the information in the deep is missed. The reason is simple: basic search methodology and technology have not evolved significantly since the inception of the Internet. The WWW consists of the surface web (the visible part of the web, made up of static documents) and the deep web (documents hidden in searchable databases and generated dynamically on demand). The deep web is currently estimated to be 400 to 550 times larger than the surface web, and of much higher quality.

Traditional search engines create their index by crawling the surface web. Crawlers can fetch documents that are static and linked from other documents. Dynamic pages cannot be fetched by crawlers and hence cannot be indexed by traditional search engines. Dynamic pages are often generated by scripts that need information such as cookie data, a session id or a query string before they produce the content. The crawler has no way to figure out what information to supply to the different databases behind these pages, which makes it impossible to fetch them. If a spider tries to wander deep into such a site, it can enter a never-ending loop in which its request for a page is met with a request for information from the server. This leads to poor spider performance and a potential crash of the web server under the spider's repeated requests.
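The crawler's reachability limit can be made concrete with a small sketch. The link graph below is a stand-in for real HTTP fetching, with invented page names: the crawler does a breadth-first walk over static links only, so pages generated on demand (e.g. database search results) are never discovered.

```python
# Sketch: a crawler can reach only pages linked statically from the seed.
from collections import deque

# Hypothetical static link graph. Dynamically generated pages (such as
# "results?q=...") exist on the server but appear in no static link, so
# the crawler never finds them.
STATIC_LINKS = {
    "home": ["about", "articles"],
    "about": [],
    "articles": ["article1"],
    "article1": [],
}

def crawl(seed):
    """Breadth-first traversal of the static link graph from `seed`."""
    seen, frontier = {seed}, deque([seed])
    while frontier:
        page = frontier.popleft()
        for link in STATIC_LINKS.get(page, []):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return seen

print(sorted(crawl("home")))  # ['about', 'article1', 'articles', 'home']
```

Whatever is not in the static link graph is, by construction, invisible to this kind of crawler, which is exactly the deep-web gap described above.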

The only way to search the deep web is by sending direct queries to its searchable databases, but querying different deep web sites one at a time is a time-consuming and laborious process. In our peer-to-peer search engine, PtoP, we have automated the process of sending queries to deep web sites. A client-side tool lets the user enter a query, and peer-to-peer technology propagates that query automatically to a large number of peer sites; the results obtained from the different sites are integrated and presented to the user. The advantages of PtoP are that it obtains fresh, up-to-date information directly from the sites; it eliminates the risk of a single point of failure, since the network keeps working even if a few peer servers are down; and it can search the file system for a file given the filename as the keyword, which is not possible with traditional search engines such as htdig. Effort has been made to keep the communication load between peer servers as low as possible. Also, PtoP's peer server interface is capable of interacting with any local or external search engine.
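The fan-out-and-merge behaviour described above can be sketched as follows. This is a hedged illustration, not PtoP's actual protocol: the peer names and document contents are invented, each peer's local search is a plain substring match standing in for "any local or external search engine", and direct method calls stand in for network RPC between peer servers.

```python
# Sketch of P2P query fan-out: forward the query to every known peer,
# let each peer search locally, then merge the answers at the client.
class Peer:
    def __init__(self, name, documents):
        self.name = name
        self.documents = documents  # {doc_name: text}

    def local_search(self, query):
        """Return local documents whose text contains the query term."""
        q = query.lower()
        return [doc for doc, text in self.documents.items()
                if q in text.lower()]

def p2p_search(peers, query):
    """Send the query to all peers and merge the per-peer result lists."""
    merged = {}
    for peer in peers:
        try:
            hits = peer.local_search(query)
        except Exception:
            continue  # an unreachable peer is skipped; no single point of failure
        if hits:
            merged[peer.name] = hits
    return merged

# Hypothetical peers for illustration.
peers = [
    Peer("peer-a", {"notes.txt": "deep web search"}),
    Peer("peer-b", {"readme.md": "p2p file sharing"}),
    Peer("peer-c", {"log.txt": "unrelated content"}),
]
print(p2p_search(peers, "search"))  # {'peer-a': ['notes.txt']}
```

Because every peer answers from its own live data, the merged results are as fresh as the sites themselves, and a failed peer only shrinks the result set rather than breaking the search.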