A web search engine is a tool designed to search for information on the World Wide Web. The search results are usually presented in a list and are commonly called hits. The information may consist of web pages, images, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained by human editors, search engines operate algorithmically or are a mixture of algorithmic and human input.
History
Before there were web search engines, there was a complete list of all web servers. The list was edited by Tim Berners-Lee and hosted on the CERN web server. One historical snapshot from 1992 remains. As more and more web servers went online, the central list could not keep up. On the NCSA site, new servers were announced under the title "What's New!", but no complete listing existed anymore.
The very first tool used for searching on the (pre-web) Internet was Archie; the name stands for "archive" without the "v". It was created in 1990 by Alan Emtage, a student at McGill University in Montreal. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of file names; however, Archie did not index the contents of these sites.
The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to two new search programs, Veronica and Jughead. Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name of the search engine "Archie" was not a reference to the Archie comic book series, "Veronica" and "Jughead" are characters in the series, thus referencing their predecessor.
In June 1993, Matthew Gray, then at MIT, produced what was probably the first web robot, the Perl-based World Wide Web Wanderer, and used it to generate an index called "Wandex". The purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late 1995. The web's first search engine, Aliweb, appeared in November 1993. Aliweb did not use a web robot, but instead depended on being notified by website administrators of the existence at each site of an index file in a particular format.
JumpStation (released in December 1993) used a web robot to find web pages and to build its index, and used a web form as the interface to its query program. It was thus the first WWW resource-discovery tool to combine the three essential features of a web search engine (crawling, indexing, and searching) as described below. Because of the limited resources available on the platform on which it ran, its indexing, and hence its searching, were limited to the titles and headings found in the web pages the crawler encountered.
One of the first "full text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it let users search for any word in any web page, which has since become the standard for all major search engines. It was also the first search engine to be widely known by the public. Also in 1994, Lycos (which started at Carnegie Mellon University) was launched and became a major commercial endeavor.
Soon after, many search engines appeared and vied for popularity. These included Magellan, Excite, Infoseek, Inktomi, Northern Light, and AltaVista. Yahoo! was among the most popular ways for people to find web pages of interest, but its search function operated on its web directory rather than on full-text copies of web pages. Information seekers could also browse the directory instead of doing a keyword-based search.
In 1996, Netscape was looking to give a single search engine an exclusive deal to be its featured search engine. There was so much interest that instead Netscape struck deals with five of the major search engines: for $5 million per year, each would appear in rotation on the Netscape search engine page. These five engines were Yahoo!, Magellan, Lycos, Infoseek, and Excite.
Search engines were also among the brightest stars of the Internet investing frenzy of the late 1990s. Several companies entered the market spectacularly, posting record gains during their initial public offerings. Some, such as Northern Light, have since taken down their public search engines and now market enterprise-only editions. Many search engine companies were caught up in the dot-com bubble, a speculation-driven market boom that peaked in 1999 and ended in 2001.
Around 2000, the Google search engine rose to prominence. The company achieved better results for many searches with an innovation called PageRank. This iterative algorithm ranks web pages based on the number and PageRank of the other web sites and pages that link to them, on the premise that good or desirable pages are linked to more than others. Google also maintained a minimalist interface to its search engine. In contrast, many of its competitors embedded their search engines in web portals.
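
To make the iteration concrete, here is a minimal Python sketch of the PageRank idea described above. The three-page link graph, the 0.85 damping factor, and the fixed iteration count are illustrative assumptions, not details from this article; real implementations also treat pages with no outgoing links more carefully and iterate until the scores converge.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to.
    Assumes every linked-to page also appears as a key."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        # Every page gets a small baseline share, per the damping model.
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # simplification: dangling pages leak their rank
            # A page passes its rank, split evenly, to the pages it links to.
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy graph: "c" is linked to twice, so it ends up ranked highest.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```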
By 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in 2003. Yahoo! then switched to Google's search engine, using it until 2004, when it launched its own search engine based on the combined technologies of its acquisitions.
Microsoft first launched MSN Search in the fall of 1998, using search results from Inktomi. In early 1999 the site began to display listings from Looksmart blended with results from Inktomi, except for a short time in 1999 when results from AltaVista were used instead. In 2004, Microsoft began a transition to its own search technology, powered by its own web crawler (called msnbot). Microsoft's rebranded search engine, Bing, was launched on June 1, 2009. On July 29, 2009, Yahoo! and Microsoft finalized a deal in which Yahoo! Search would be powered by Microsoft Bing technology.
According to Hitbox, Google's worldwide popularity peaked at 82.7% in December 2008. July 2009 rankings showed Google (78.4%) losing traffic to Baidu (8.87%) and Bing (3.17%). The market shares of Yahoo! Search (7.16%) and AOL (0.6%) were also declining. In the United States, Google held a 63.2% market share in May 2009, according to Nielsen NetRatings. In the People's Republic of China, Baidu held a 61.6% market share for web search in July 2009.
How web search engines work
A search engine operates in the following order:
- Web crawling
- Indexing
- Searching
Web search engines work by storing information about many web pages, which they retrieve from the HTML itself. These pages are retrieved by a web crawler (sometimes also known as a spider), an automated Web browser which follows every link on the site. Exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. A query can be as simple as a single word. The purpose of an index is to allow information to be found as quickly as possible.
Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. The cached page always holds the text that was actually indexed, so it can be very useful when the content of the current page has been updated and the search terms are no longer in it. This problem might be considered a mild form of linkrot, and Google's handling of it increases usability by satisfying the principle of least astonishment: the user normally expects the search terms to be on the returned pages. Increased search relevance makes these cached pages very useful, even beyond the fact that they may contain data that may no longer be available elsewhere.
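
As a rough illustration of the crawl step just described, the Python sketch below starts from a seed URL, honors robots.txt exclusions, fetches each page, and follows the links it finds. The seed URL, the page limit, and the same-host restriction are assumptions made for brevity; production crawlers add politeness delays, retries, content deduplication, and far more robust HTML handling.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen
from urllib.robotparser import RobotFileParser

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10):
    # Respect the site's robots.txt exclusions, as the text describes.
    robots = RobotFileParser()
    robots.set_url(urljoin(seed, "/robots.txt"))
    robots.read()
    queue, seen, pages = [seed], {seed}, {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if not robots.can_fetch("*", url):
            continue
        html = urlopen(url).read().decode("utf-8", errors="replace")
        pages[url] = html  # kept for the indexing step
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            link = urljoin(url, href)
            # Stay on the seed's host for this sketch.
            if urlparse(link).netloc == urlparse(seed).netloc and link not in seen:
                seen.add(link)
                queue.append(link)
    return pages
```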
When a user enters a query into a search engine (typically by using keywords), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. The index is built from the information stored with the data and the method by which the information is indexed. Most search engines support the use of the Boolean operators AND, OR, and NOT to further specify the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search; the engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching, where the research involves using statistical analysis on pages containing the words or phrases you search for. Natural language queries, as offered by a site like Ask.com, allow the user to type a question in the same form one would ask it of a human.
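
A minimal sketch of how those Boolean operators can be evaluated against an index: if each term maps to the set of documents that contain it (a simple inverted index), then AND, OR, and NOT become set intersection, union, and difference. The three toy documents below are invented for illustration.

```python
import re

docs = {
    1: "web search engines crawl and index the web",
    2: "a directory is maintained by human editors",
    3: "search results are ranked by relevance",
}

# Build the inverted index: term -> set of document ids containing it.
index = {}
for doc_id, text in docs.items():
    for term in re.findall(r"[a-z]+", text.lower()):
        index.setdefault(term, set()).add(doc_id)

def lookup(term):
    return index.get(term, set())

print(lookup("search") & lookup("engines"))     # AND -> {1}
print(lookup("directory") | lookup("relevance"))  # OR  -> {2, 3}
print(lookup("search") - lookup("web"))         # NOT -> {3}
```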
The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results so as to provide the "best" results first. How a search engine decides which pages are the best matches, and in what order the results should be shown, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve. Two main types of search engine have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively; the other is a system that generates an "inverted index" by analyzing the texts it locates, as in the sketch below. This second form relies much more heavily on the computer itself to do the bulk of the work.
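
As a concrete example of ranking over the inverted-index approach, the sketch below scores documents with tf-idf, a classic weighting scheme the article does not name but which captures the idea that documents matching rarer query terms more heavily should rank first. The toy corpus is invented for illustration.

```python
import math
import re

docs = {
    1: "web search engines crawl and index the web",
    2: "a directory is maintained by human editors",
    3: "search results are ranked by relevance",
}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def rank(query, docs):
    n = len(docs)
    tokenized = {doc_id: tokenize(text) for doc_id, text in docs.items()}
    scores = {}
    for doc_id, terms in tokenized.items():
        score = 0.0
        for term in tokenize(query):
            tf = terms.count(term) / len(terms)          # term frequency
            df = sum(1 for t in tokenized.values() if term in t)
            if df:
                score += tf * math.log(n / df)           # idf down-weights common terms
        if score > 0:
            scores[doc_id] = score
    # Best-matching documents first, as a search results page would show them.
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank("web search", docs))  # doc 1 outranks doc 3; doc 2 does not match
```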
Most web search engines are commercial ventures supported by advertising revenue and, as a result, some employ the practice of allowing advertisers to pay to have their listings ranked higher in search results. Search engines that do not accept money for their search results instead earn revenue by running search-related ads alongside the regular results; the search engine earns money every time someone clicks on one of these ads.