The world of search engine optimization is complex and ever-changing, but the basics are easy to grasp, and even a small amount of SEO awareness can make a big difference.
It's about knowing what online users are searching for, the answers they're looking for, the words they're using, and the kind of content they want to consume.

SEO stands for "search engine optimization." It is the practice of growing both the quality and quantity of website traffic, as well as your brand's visibility, through non-paid (also referred to as "organic") search engine results.


Search engines do all of this by finding and cataloging all of the content available on the Web (web pages, PDFs, images, videos, etc.) through a process known as "crawling and indexing," and then ordering it by how well it matches a query, in a process referred to as "ranking."


Organic search results are those produced by successful SEO, not paid for (i.e. not ads). It is important to note that search engines make money from advertising. Their aim is to solve search queries (within SERPs), keep searchers coming back, and keep them on the SERPs (search engine results pages) longer. Some SERP features on Google are organic and can be influenced by SEO. These include featured snippets (a promoted organic result that shows an answer inside a box) and related questions.



Why is SEO important?


SEO is also one of the few online marketing channels that can continue to pay dividends over time when set up correctly. If you have a solid piece of content that deserves to rank for the right keywords, it will keep bringing in traffic, whereas advertising requires continuous funding to send traffic to your site.


White hat SEO:

"White hat SEO" refers to the SEO tactics, best practices, and methods that follow search engine rules, with the primary goal of giving users more value.

Black hat SEO:

"Black hat SEO" refers to the techniques and strategies that attempt to spam or fool search engines. Although black hat SEO can work, it poses tremendous risk to websites.


Know the aims of your website / client!

Every website is different, so take the time to understand a particular site's business goals. This will not only help you decide which areas of SEO to work on, where to track conversions, and how to set targets, but will also help you build talking points for negotiating SEO projects with clients.

If your business has a local component, you'll also want to define KPIs (Key Performance Indicators) for your Google My Business listings. Those may include:

  • Clicks-to-call
  • Clicks-to-website
  • Clicks-for-driving-directions

How do search engines work?

Search engines have three key functions:

Crawl: Scour the web for content, looking over the code/content for every URL they find.

Index: Store and organize the content found during the crawling process. Once a page is in the index, it's in the running to be displayed as a result to relevant queries.

Rank: Provide the pieces of content that best answer a searcher's query, meaning results are ordered from most relevant to least relevant.


Search engine crawling:

Crawling is the process of exploration in which search engines send a team of robots (called crawlers or spiders) to find new and modified content. Content can vary — it may be a website, an image, a video, a PDF, etc. — but content is discovered through links irrespective of format.
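To make the idea concrete, here is a toy Python sketch of link-based discovery, the core mechanic behind crawling. It is nowhere near a real crawler (no queue, no politeness delays, no robots.txt check), and the start URL is just a placeholder:

    # Toy link discovery using only the Python standard library.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collects the href of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def discover(url):
        """Fetch one page and return the absolute URLs it links to."""
        with urlopen(url) as response:
            html = response.read().decode("utf-8", errors="replace")
        parser = LinkExtractor()
        parser.feed(html)
        return [urljoin(url, link) for link in parser.links]

    # A real crawler would queue these URLs, fetch them in turn,
    # and revisit pages to catch new and modified content.
    print(discover("https://www.example.com/"))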

Search engine indexing:

Search engines process and store the information they find in an index, a vast archive of all the content they have discovered and deemed good enough to serve to searchers.

Search engine ranking:

When someone performs a search, search engines scan their index for highly relevant content and then order that content in the hopes of answering the query. This ordering of search results by relevance is called ranking. Generally speaking, you can assume that the higher a website ranks, the more relevant the search engine believes the site is to the query.


What is a robots.txt file?


Robots.txt is a text file webmasters create to instruct web robots (typically search engine robots) how to crawl pages on their website. The robots.txt file is part of the robots exclusion protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content up to users. The REP also includes directives such as meta robots, as well as instructions on how search engines should treat links (such as "follow" or "nofollow") at the page, subdirectory, or site level.
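As a rough sketch, a simple robots.txt file might look like this (the paths and sitemap URL here are hypothetical; lines starting with # are comments):

    # Rules for all crawlers
    User-agent: *
    # Ask crawlers to skip internal search result pages
    Disallow: /search/
    # Point crawlers to the sitemap
    Sitemap: https://www.example.com/sitemap.xml

The file lives at the root of the domain, and a User-agent line can also target one specific crawler (Googlebot, for example) with its own set of rules.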

Not all robots on the web obey robots.txt. People with bad intentions (e.g. email address scrapers) build bots that don't follow this protocol. Indeed, some bad actors use robots.txt files to find where private content is stored. Although blocking crawlers from private pages such as login and administration pages may seem logical so that they do not appear in the index, placing the location of those URLs in a publicly accessible robots.txt file also means that people with malicious intent can find them more easily. Instead of listing these pages in your robots.txt file, it is better to NoIndex them and gate them behind a login form.
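For reference, the NoIndex directive mentioned above is a single meta robots tag placed in a page's HTML head, along these lines:

    <!-- Tells compliant crawlers not to include this page in their index -->
    <meta name="robots" content="noindex">

Unlike a robots.txt entry, this keeps the page out of the index without advertising its URL in a public file.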


Can crawlers find all your important content?

Crawling can often allow a search engine to locate parts of your site, but other pages or sections may be hidden for one reason or another. It is important to ensure that search engines are able to discover all the content you want indexed, not just your homepage.
If you require users to log in, fill out forms, or answer surveys before accessing certain content, search engines will not see those protected pages. A crawler is definitely not going to log in.

Robots cannot use search forms. Some people assume that if they place a search box on their site, search engines will be able to find whatever their visitors search for.


Are you using sitemaps?

A sitemap is just what it sounds like: a list of URLs on your site that crawlers can use to discover and index your content. One of the easiest ways to ensure Google finds your highest-priority pages is to create a file that meets Google's standards and submit it through Google Search Console. Although submitting a sitemap does not replace the need for good site navigation, it can certainly help crawlers follow a path to all of your important pages.
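As a sketch, a minimal sitemap is an XML file along these lines (the URLs and date are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <!-- The full URL of a page you want crawled -->
        <loc>https://www.example.com/</loc>
        <!-- Optional: when the page last changed -->
        <lastmod>2020-01-01</lastmod>
      </url>
      <url>
        <loc>https://www.example.com/about/</loc>
      </url>
    </urlset>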


How do search engines interpret and store your pages?

After you have ensured that your site is crawled, the next order of business is to ensure that it can be indexed. That's right: just because a search engine can discover and crawl your site doesn't automatically mean it will be stored in its index. In the previous section on crawling, we covered how search engines discover your web pages. The index is where those discovered pages are stored. After a crawler finds a page, the search engine renders it much like a browser would, analyzing the contents of that page; all of that information is stored in its index.

How do search engines rank URLs?

To determine relevance, search engines use algorithms: processes or formulas by which stored information is retrieved and ordered in meaningful ways. These algorithms have undergone many changes over the years to improve the quality of search results. Google, for example, makes algorithm adjustments every day; some of these updates are minor quality tweaks, while others are core/broad algorithm updates deployed to tackle a specific issue, such as Penguin targeting link spam.




