
List of Yahoo services

Monday, November 29, 2010

Flickr
Flickr is a popular photo sharing service which Yahoo! purchased on 29 March 2005.

Yahoo! Advertising
A collection of advertising services owned by Yahoo!.

Yahoo! Answers
Yahoo! Answers is a service that allows users to ask and answer questions other users post. It competes with Ask MetaFilter. Yahoo! Answers uses a points system whereby points are awarded for asking and answering questions, and deducted for deleting a question or answer, or getting reported.

Yahoo! Avatars
Yahoo! Avatars allows users to create personalized character images, also known as avatars, which are displayed on Yahoo! Messenger, Yahoo! Answers and the user's Yahoo! 360° profile.

Yahoo! Babel Fish
Yahoo! Babel Fish is a translation service.

Yahoo! Bookmarks
Yahoo! Bookmarks is a private bookmarking service. All users of Yahoo! MyWeb were transferred to this service.

Yahoo! Buzz
Yahoo! Buzz is a community-based publishing service, much like Digg, where users can "buzz" about certain stories and have them featured on the main page of the site.

Yahoo! Developer Network
Yahoo! Developer Network offers resources for software developers who use Yahoo! technologies and Web services.

Yahoo! Digu

Yahoo! Directory
Yahoo! was first formed as a web directory of web sites, organized into a hierarchy of categories and subcategories, which became the Yahoo! Directory. Once a human-compiled directory, Yahoo! Directory now offers two methods of inclusion: Standard, which is free and only available for non-commercial categories, and Express, which charges over US$300 for a quick inclusion in the directory.

Yahoo! Finance
Yahoo! Finance offers financial information, including stock quotes and exchange rates.

Yahoo! Games
Yahoo! Games allows users to play games, such as chess, billiards, checkers and backgammon, against each other. Users can join one of various rooms and find players in these rooms to play with. Most of the games are Java applets, although some require the user to download the game, and some games are single-player. Yahoo! acquired a one-person effort called ClassicGames.com in 1997, which became Yahoo! Games.

Yahoo! Groups
Yahoo! Groups is a free groups and mailing list service which competes with Google Groups. It was formed when Yahoo! acquired eGroups in August 2000. Groups are sorted in categories similar to the Yahoo! Directory. Yahoo! Groups also offers other features such as a photographic album, file storage and a calendar.

Yahoo! Kids
Yahoo! Kids is a children's version of the Yahoo! portal. It also offers some online safety tips.

Yahoo! Local
Yahoo! Local lets users find local businesses and services, view the results on a map, refine and sort results by distance, topic, or other factors, and read ratings and reviews. It uses hCalendar and hCard microformats, so that event and contact details can be downloaded directly into calendar and address-book applications.

Yahoo! Mail
Yahoo! acquired Four11 on 8 October 1997, and its webmail service RocketMail became Yahoo! Mail. Since Google released Gmail on 1 April 2004, Yahoo! Mail has made several improvements to keep ahead of the competition, which also includes MSN Hotmail and AOL Mail. Yahoo! Mail is the only web-based email service that offers unlimited storage for all users. On 9 July 2004, Yahoo! acquired an e-mail provider named Oddpost and used its technology to create Yahoo! Mail Beta, which uses Ajax to mimic the look and feel of an e-mail client. On 19 June 2008, Yahoo! Mail introduced two new email domains: ymail.com and rocketmail.com ("@ymail.com" and "@rocketmail.com" at http://mail.yahoo.com).[1]

Yahoo! Maps
Yahoo! Maps offers driving directions and traffic information.

Yahoo! Meme
Yahoo! Meme is a beta social service, similar to the popular social networking site Twitter.

Yahoo! Messenger
Yahoo! Messenger is an instant messaging service first released on 21 July 1999, which competes with AOL Instant Messenger, MSN Messenger, Google Talk, ICQ and QQ. It offers several unique features, such as IMvironments, custom status messages, and custom avatars. On 13 October 2005, Yahoo! announced that Yahoo! Messenger and MSN Messenger would become interoperable.

Yahoo! Mobile
Yahoo! Mobile is a mobile website used predominantly in the UK. It offers mobile downloads such as ringtones.

Yahoo! Movies
Yahoo! Movies offers showtimes, movie trailers, movie information, gossip, and more.

Yahoo! Music
Yahoo! Music offers music videos and internet radio (LAUNCHcast), a for-fee service known as Yahoo! Music Unlimited, and the Yahoo! Music Engine, which was sold to Rhapsody on 31 October 2008.

Yahoo! News
Yahoo! News offers news updates and top stories, including world, national, business, entertainment, sports, weather, technology, and weird news.

Yahoo! OMG
OMG is Yahoo!'s entertainment tabloid, with most content provided by Access Hollywood and X17.

Yahoo! Parental Controls
Yahoo! Parental Controls let parents restrict what their children can see and do online; they are closely associated with Yahoo! Kids.

Yahoo! Personals
Yahoo! Personals is an online dating service with both free and paid versions. However, the free service is limited, as only paying users can contact users they meet through Yahoo! Personals and exchange contact information.

Yahoo! Pipes
Yahoo! Pipes is a free visual editor and hosting service for RSS mashups.

Yahoo! Publisher Network
Yahoo! Publisher Network is an advertising program, which is currently in beta and only accepts US publishers.

Yahoo! Real Estate
Yahoo! Real Estate offers real estate-related information and allows users to find rentals, new houses, real estate agents, mortgages and more.

Yahoo! Search
Yahoo! Search is a search engine which competes with MSN Search and market leader Google. Yahoo! relied on Google results from 26 June 2000 to 18 February 2004, but returned to using its own technology after acquiring Inktomi and Overture (which owned AlltheWeb and AltaVista). Yahoo! Search uses a crawler named Yahoo! Slurp.

Yahoo! Search Marketing
Yahoo! Search Marketing provides pay per click inclusion of links in search engine result lists, and also delivers targeted ads. The service was previously branded as Overture Services after Yahoo! acquired Overture in 2003.

Yahoo! Shopping
Yahoo! Shopping is a price comparison service that allows users to search for products and compare prices of various online stores.

Yahoo! Small Business
Yahoo! Small Business offers web hosting, domain names and e-commerce services for small businesses.

Yahoo! Smush.it
Yahoo! Smush.it optimizes digital images by removing unnecessary bytes and reducing their file size.

Yahoo! Sports
Yahoo! Sports offers sports news, including scores, statistics, and fixtures. It includes a "fantasy team" game.

Yahoo! Travel
Yahoo! Travel offers travel guides, booking and reservation services.

Yahoo! TV
Yahoo! TV offers TV listings and lets users schedule recordings on a TiVo box remotely.

Yahoo! Video
Yahoo! Video is a video sharing site.

Yahoo! Voice
Yahoo! Voice, formerly known as Dialpad, is a Voice over IP service offering PC-to-PC, PC-to-phone and phone-to-PC calls.

Yahoo! Web Analytics
IndexTools was acquired by Yahoo! and re-branded as 'Yahoo! Web Analytics'.

Yahoo! Widgets
Yahoo! Widgets is a cross-platform desktop widget runtime environment. The software was previously distributed as a commercial product called 'Konfabulator' for Mac OS X and Windows until it was acquired by Yahoo!, rebranded 'Yahoo! Widgets' and made available for free.

Yahoo! 360° Plus Vietnam
A social networking service popular in Vietnam.

General Search Engine Information....

Saturday, November 27, 2010
History of search engines

In the early days of Internet development, its users were a privileged minority and the amount of available information was relatively small. Access was mainly restricted to employees of various universities and laboratories who used it to access scientific information. In those days, the problem of finding information on the Internet was not nearly as critical as it is now.

Site directories were one of the first methods used to facilitate access to information resources on the network. Links to these resources were grouped by topic. Yahoo was the first project of this kind, opened in April 1994. As the number of sites in the Yahoo directory inexorably increased, the developers of Yahoo made the directory searchable. Of course, it was not a search engine in its true form because searching was limited to those resources whose listings were put into the directory. It did not actively seek out resources, and the concept of SEO was yet to arrive. Such link directories have been used extensively in the past, but nowadays they have lost much of their popularity. The reason is simple – even modern directories with lots of resources only provide information on a tiny fraction of the Internet.

For example, the largest directory on the network is currently DMOZ (or Open Directory Project). It contains information on about five million resources. Compare this with the Google search engine database, containing more than eight billion documents. The WebCrawler project started in 1994 and was the first full-featured search engine. The Lycos and AltaVista search engines appeared in 1995, and for many years AltaVista was the major player in this field. In 1997, Sergey Brin and Larry Page created Google as a research project at Stanford University. Google is now the most popular search engine in the world.

Currently, there are three leading international search engines – Google, Yahoo and MSN Search. They each have their own databases and search algorithms. Many other search engines use results originating from these three major search engines, and the same SEO expertise can be applied to all of them. For example, the AOL search engine (search.aol.com) uses the Google database, while AltaVista, Lycos and AllTheWeb all use the Yahoo database.

Common search engine principles

To understand SEO you need to be aware of the architecture of search engines. They all contain the following main components:

Spider – a browser-like program that downloads web pages.

Crawler – a program that automatically follows all of the links on each web page.

Indexer – a program that analyzes web pages downloaded by the spider and the crawler.

Database – storage for downloaded and processed pages.

Results engine – extracts search results from the database.

Web server – a server that is responsible for interaction between the user and the other search engine components.

Specific implementations of search mechanisms may differ. For example, the Spider+Crawler+Indexer component group might be implemented as a single program that downloads web pages, analyzes them and then uses their links to find new resources. However, the components listed are inherent to all search engines and the SEO principles are the same.

Spider.

This program downloads web pages just like a web browser. The difference is that a browser displays the information presented on each page (text, graphics, etc.) while a spider does not have any visual components and works directly with the underlying HTML code of the page. You may already know that there is an option in standard web browsers to view the source HTML code.

Crawler.

This program finds all links on each page. Its task is to determine where the spider should go either by evaluating the links or according to a predefined list of addresses. The crawler follows these links and tries to find documents not already known to the search engine.

Indexer.

This component parses each page and analyzes the various elements, such as text, headers, structural or stylistic features, special HTML tags, etc.

Database.

This is the storage area for the data that the search engine downloads and analyzes. Sometimes it is called the index of the search engine.

Results engine.

The results engine ranks pages. It determines which pages best match a user's query and in what order the pages should be listed. This is done according to the ranking algorithms of the search engine. It follows that page rank is a valuable and interesting property, and any SEO specialist is most interested in it when trying to improve his site's search results. In this article, we will discuss the SEO factors that influence page rank in some detail.
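To make the ranking step concrete, here is a toy results engine in Python. It scores pages purely by how often the query terms occur in them, whereas real ranking algorithms weigh many more factors; the page texts and URLs below are invented for illustration:

    # Toy results engine: score each page by query-term frequency and
    # return page URLs in descending score order. Real engines combine
    # many more signals; this sketch only illustrates the ranking step.
    def rank(pages, query):
        terms = query.lower().split()
        def score(text):
            words = text.lower().split()
            return sum(words.count(term) for term in terms)
        return sorted(pages, key=lambda url: score(pages[url]), reverse=True)

    # Example with made-up pages: the page mentioning the query terms
    # more often is listed first.
    pages = {
        "site-a.example/bells": "discount church bells and more church bells",
        "site-b.example/tools": "garden tools and garden supplies",
    }
    print(rank(pages, "church bells"))
    # -> ['site-a.example/bells', 'site-b.example/tools']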

Web server.

The search engine web server usually contains an HTML page with an input field where the user can specify the search query he or she is interested in. The web server is also responsible for displaying search results to the user in the form of an HTML page.

List of Google services

Friday, November 19, 2010

1. Google Search
2. Google AdWords
3. Google AdSense
4. Google Apps
5. Google Analytics
6. Google Maps
7. Google Webmaster Tools
8. Google Sites
9. Google FeedBurner
10. Google Picasa
11. Google Orkut
12. Google Gmail
13. Google Labs
14. Google Earth
15. Google Local Business Center
16. Google Books Library
17. Google Talk (GTalk)
18. Google Blogger
19. Google Docs
20. Google Trends
21. Google Global
22. Google Checkout
23. Google Pack
24. Google Calendar
25. Google Desktop

Do you know about Clickthrough rate (CTR)..?

Friday, November 19, 2010

Clickthrough rate (CTR) is the number of clicks your ad receives divided by the number of times your ad is shown (impressions). Your ad and keyword each have their own CTRs, unique to your own campaign performance.
A keyword's CTR is a strong indicator of its relevance to the user and the overall success of the keyword. For example, a well targeted keyword that shows a similarly targeted ad is more likely to have a higher CTR than a general keyword with non-specific ad text. The more your keywords and ads relate to each other and to your business, the more likely a user is to click on your ad after searching on your keyword phrase.
A low CTR may point to poor keyword performance, indicating a need for ad or keyword optimization. Therefore, you can use CTR to gauge which ads and keywords aren't performing as well for you and then optimize them.
CTR is also used to determine your keyword's Quality Score. Higher CTR and Quality Score can lead to lower costs and higher ad position.
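To make the arithmetic concrete, here is a small Python sketch of the CTR calculation; the keyword names and counts are invented for illustration:

    # CTR = clicks / impressions, expressed here as a percentage.
    # The keywords and counts below are made up for illustration.
    def clickthrough_rate(clicks, impressions):
        if impressions == 0:
            return 0.0
        return 100.0 * clicks / impressions

    keywords = {
        "handmade oak bookcase": (48, 1200),   # well-targeted keyword
        "furniture": (15, 9000),               # broad, non-specific keyword
    }
    for keyword, (clicks, impressions) in keywords.items():
        print(f"{keyword}: {clickthrough_rate(clicks, impressions):.2f}% CTR")
    # -> handmade oak bookcase: 4.00% CTR
    # -> furniture: 0.17% CTR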

Dynamic content used to be a red flag for search engine friendly design

Thursday, November 11, 2010

Dynamic content used to be a red flag for search engine friendly design, but times have changed. Search engines now include dynamically-generated pages in their indexes, but some particulars of dynamic pages can still be obstacles to getting indexed. Whether it's keeping in sync with inventory or updating a blog, more than likely if you're a website owner you have some level of dynamic or CMS-managed content on your site (and if not, you should really be looking into it for your next redesign). Follow the guidelines here to avoid major pitfalls and ensure that your dynamic body of work is search engine friendly from head to toe.

Rule #1: Be sure that search engines can follow regular HTML links to all pages on your site.

Any website needs individually linkable URLs for all unique pages on the site. This way every page can be bookmarked and deep linked by users, and indexed by search engines. But dynamic websites have an additional concern: making sure the search engine robots can reach all of these pages.

For example, suppose you have a form on your website: you ask people to select their location from a pull-down, and then when people submit the form your website generates a page with content that is specifically written for that geographical area. Search engine robots don't fill out forms or select from pull-down menus, so there will be no way for them to get to that page.

This problem can be easily remedied by providing standard HTML links that point to all of your dynamic pages. The easiest way to do this is to add these links to your site map, as shown below.
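For example, a plain site map page containing ordinary anchor links gives robots a crawlable path to every dynamically generated page. A minimal sketch; the paths below are hypothetical placeholders:

    <!-- Site map snippet: one plain HTML link per dynamic page. -->
    <ul>
      <li><a href="/locations.php?region=boston">Boston area services</a></li>
      <li><a href="/locations.php?region=chicago">Chicago area services</a></li>
      <li><a href="/locations.php?region=seattle">Seattle area services</a></li>
    </ul>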

Rule #2: Set up an XML site map if you can’t create regular HTML links to all of your pages, or if it appears that search engines are having trouble indexing your pages.

If you have a large (10K pages or more) dynamic site, or you don’t think that providing static HTML links is an option, you can use an XML site map to tell search engines the locations of all your pages.

Most website owners tell Google and Yahoo! about their site maps through the search engines' respective webmaster tools. But if you're an early adopter, you should look into the new system whereby a site map can be easily designated in the robots.txt file using sitemap autodiscovery. Ask.com, Google and Yahoo! currently support this feature. Cool!
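An XML site map is just a short file listing your URLs, and autodiscovery adds a single line to robots.txt. A minimal sketch, using example.com as a placeholder domain:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/locations/boston</loc>
      </url>
      <url>
        <loc>http://www.example.com/locations/chicago</loc>
      </url>
    </urlset>

And in robots.txt, the autodiscovery line:

    Sitemap: http://www.example.com/sitemap.xml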

Rule #3: If you must use dynamic URLs, keep them short and tidy

Another potential problem - and this is one that is subject to some debate - is with dynamic pages that have too many parameters in the URL. Google itself in its webmaster guidelines states the following: "If you decide to use dynamic pages (i.e., the URL contains a "?" character), be aware that not every search engine spider crawls dynamic pages as well as static pages. It helps to keep the parameters short and the number of them few."

Here are a few guidelines you should follow for your website parameters:

Limit the number of parameters in the URL to a maximum of 2

Use the parameter "?id=" only when in reference to a session id

Be sure that the URL functions if all dynamic items are removed

Be sure your internal links are consistent - always link with parameters in the same order and format

Rule #4: Avoid dynamic-looking URLs if possible

Besides being second-class citizens of search, dynamic-looking URLs are also less attractive to your human visitors. Most people prefer to see URLs that clearly communicate the content on the page. Since reading the URL is one of the ways that people decide whether to click on a listing in search engines, you are much better off having a URL that looks like this:

    http://www.example.com/discount-church-bells.html

rather than this:

    http://www.example.com/product.php?cat=7&item=1078

We also think that static-looking, “human-readable” URLs are more likely to receive inbound links, because some people will be less inclined to link to pages with very long or complicated URLs.

Furthermore, keywords in a URL are a factor, admittedly not a huge one, in search engine ranking algorithms. Notice how, in the above example, the static URL contains the keywords “discount” and “church bells” while the dynamic URL does not.

There are many tools available that will re-create a dynamic site in static form. There are also tools that will re-write your URLs, if you have too many parameters, to "look" like regular non-dynamic URLs. We think these are both good options for dynamic sites. Intrapromote has a helpful post on dynamic URL rewriting.
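One common way to do this on Apache servers is the mod_rewrite module. The sketch below maps a static-looking URL onto a dynamic script; the script name and parameter are hypothetical placeholders:

    # .htaccess sketch: serve /church-bells/123 from product.php?item=123,
    # so visitors and search engines only ever see the static-looking URL.
    RewriteEngine On
    RewriteRule ^church-bells/([0-9]+)$ product.php?item=$1 [L]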

Rule #5: De-index stubs and search results

Have you heard of “website stubs?” These are pages that are generated by dynamic sites but really have no independent content on them. For example, if your website is a shopping cart for toys, there may be a page generated for the category “Age 7-12 Toys” but you may not actually have any products in this category. Stub pages are very annoying to searchers, and search engines, by extension, would like to prevent them from displaying in their results. So do us all a favor and either figure out a way to get rid of these pages, or exclude them from indexing using the robots.txt file or robots meta tag.

Search results from within your website is another type of page for which Google has stated a dislike: “Typically, web search results don’t add value to users, and since our core goal is to provide the best search results possible, we generally exclude search results from our web search index.” Here’s our advice: either make sure your search results pages add value for the searcher (perhaps by containing some unique content related to the searched term), or exclude them from indexing using the robots.txt file or robots meta tag.
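Either exclusion mechanism takes only a line or two. A sketch, assuming your internal search results live under a /search path (a hypothetical location):

    # robots.txt: keep all crawlers out of internal search result pages
    User-agent: *
    Disallow: /search

Or, on an individual stub page, a robots meta tag in the page head:

    <meta name="robots" content="noindex, follow">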

Bonus Points: Handling duplicate content

While it's not a problem that's specific to dynamic sites, this rule is one that dynamic sites are more likely to break than static ones. If multiple pages on your site display materials that are identical or nearly identical, duplicates should be excluded from indexing using the robots.txt file or a robots meta tag. Think of it this way: you don't want all your duplicate pages competing with each other on the search engines. Choose a favorite, and exclude the rest. [Editor's note: we no longer (2009) recommend de-indexing duplicate content. A better approach is to either redirect your duplicate pages to the primary page using a server-side 301 redirect, or to set up a canonical link tag for any page that has been duplicated. A good explanation of best practices for handling duplicate content in 2009 can be found at Matt Cutts' blog.]
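Both fixes mentioned in the editor's note are short. A sketch with placeholder URLs: the first line is an Apache server-side 301 redirect, and the link element goes in the head of a duplicate page to point search engines at the primary version:

    # .htaccess: permanently redirect a duplicate page to the primary one
    Redirect 301 /old-duplicate-page.html http://www.example.com/primary-page.html

    <!-- or, in the <head> of the duplicate page: -->
    <link rel="canonical" href="http://www.example.com/primary-page.html" />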

Dynamic content is usually timely and useful, which is why users love it, and the search engines want to list it. And now you know how to help your dynamic website reach its full search engine potential.



Web Crawling.............!!!

Monday, November 1, 2010

When most people talk about Internet search engines, they really mean World Wide Web search engines. Before the Web became the most visible part of the Internet, there were already search engines in place to help people find information on the Net. Programs with names like "gopher" and "Archie" kept indexes of files stored on servers connected to the Internet, and dramatically reduced the amount of time required to find programs and documents. In the early 1990s, getting serious value from the Internet meant knowing how to use gopher, Archie, Veronica and the rest.

Today, most Internet users limit their searches to the Web, so we'll limit this article to search engines that focus on the contents of Web pages.

Before a search engine can tell you where a file or document is, it must be found. To find information on the hundreds of millions of Web pages that exist, a search engine employs special software robots, called spiders, to build lists of the words found on Web sites. When a spider is building its lists, the process is called Web crawling. (There are some disadvantages to calling part of the Internet the World Wide Web -- a large set of arachnid-centric names for tools is one of them.) In order to build and maintain a useful list of words, a search engine's spiders have to look at a lot of pages.

How does any spider start its travels over the Web? The usual starting points are lists of heavily used servers and very popular pages. The spider will begin with a popular site, indexing the words on its pages and following every link found within the site. In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web.
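As a rough sketch of that loop in Python: start from a few seed URLs, record the words on each page, and queue every link found for a later visit. A production spider adds politeness delays, robots.txt checks, and large-scale deduplication, none of which are shown here; the seed URL is a placeholder:

    # Toy breadth-first spider: index the words on each page, then
    # follow its links to discover new pages, as described above.
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkAndTextParser(HTMLParser):
        # Collects href links and visible text from one HTML page.
        def __init__(self):
            super().__init__()
            self.links = []
            self.words = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

        def handle_data(self, data):
            self.words.extend(data.split())

    def crawl(seed_urls, max_pages=50):
        index = {}                       # url -> list of words on the page
        queue = deque(seed_urls)
        seen = set(seed_urls)
        while queue and len(index) < max_pages:
            url = queue.popleft()
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
            except Exception:
                continue                 # skip pages that cannot be fetched
            parser = LinkAndTextParser()
            parser.feed(html)
            index[url] = parser.words
            for link in parser.links:
                absolute = urljoin(url, link)
                if absolute.startswith("http") and absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
        return index

    # index = crawl(["http://www.example.com/"])   # placeholder seed URL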


Google began as an academic search engine. In the paper that describes how the system was built, Sergey Brin and Lawrence Page give an example of how quickly their spiders can work. They built their initial system to use multiple spiders, usually three at one time. Each spider could keep about 300 connections to Web pages open at a time. At its peak performance, using four spiders, their system could crawl over 100 pages per second, generating around 600 kilobytes of data each second.

Keeping everything running quickly meant building a system to feed necessary information to the spiders. The early Google system had a server dedicated to providing URLs to the spiders. Rather than depending on an Internet service provider for the domain name server (DNS) that translates a server's name into an address, Google had its own DNS, in order to keep delays to a minimum.

When the Google spider looked at an HTML page, it took note of two things:

  • The words within the page
  • Where the words were found

Words occurring in the title, subtitles, meta tags and other positions of relative importance were noted for special consideration during a subsequent user search. The Google spider was built to index every significant word on a page, leaving out the articles "a," "an" and "the." Other spiders take different approaches.
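A toy version of that indexing step might look like the Python sketch below: it records each remaining word together with the positions where it occurs, skipping the three articles mentioned above (the stop-word list is deliberately minimal):

    # Toy indexer: map each significant word to its positions in the text,
    # leaving out the articles "a", "an" and "the" as described above.
    STOP_WORDS = {"a", "an", "the"}

    def index_words(text):
        positions = {}
        for pos, raw in enumerate(text.lower().split()):
            word = raw.strip('.,!?;:"\'()')
            if word and word not in STOP_WORDS:
                positions.setdefault(word, []).append(pos)
        return positions

    print(index_words("The spider indexes the words on a page"))
    # -> {'spider': [1], 'indexes': [2], 'words': [4], 'on': [5], 'page': [7]}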

These different approaches usually attempt to make the spider operate faster, allow users to search more efficiently, or both. For example, some spiders will keep track of the words in the title, sub-headings and links, along with the 100 most frequently used words on the page and each word in the first 20 lines of text. Lycos is said to use this approach to spidering the Web.

Other systems, such as AltaVista, go in the other direction, indexing every single word on a page, including "a," "an," "the" and other "insignificant" words. The push to completeness in this approach is matched by other systems in the attention given to the unseen portion of the Web page, the meta tags.
