The first SEO pillar we will discuss is technical SEO. This pillar plays a significant role in how search engines will initially discover a website. Some people might be a little intimidated by the idea of having to look at a website’s back end. But I assure you, it’s not as terrifying as you may expect.
What you will know after this lesson:
- The definition of technical SEO
- Which essential on-page technical SEO components to check for
- Which technical SEO items to review on a new or existing site
Technical SEO: What is it?
The simple definition is optimizing the code on your website so that both Google and users can understand your content. A competent SEO specialist can develop clear communication between a website and search engine bots (or crawlers).
Do you want to ensure that Google can crawl the content on your product or service pages?
Do you want to hide a login page from Google in order to protect the privacy and security of your users?
If yes, then you need to implement technical SEO.
If you are new to SEO, you should consider the following:
Technical SEO audits can range in complexity from a straightforward one-pager to a lengthy 40-page user story document. Learning the industry jargon and looking at code for the first time can be intimidating, but if you stick with it, you will eventually have a lot of power at your fingertips!
Important technical SEO items:
- H1 – <h1>
- Title tag – <title>
- Meta description – <meta name="description" content="insert content here."/>
- Index / NoIndex
- Canonical – rel="canonical"
Let’s examine each of these in more detail:
H1 tags are an important signal to search engines indicating the page’s topic. The likelihood that a page will appear in search results will ultimately depend on how relevant the page is to the searcher’s query.
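As a minimal sketch (the page topic here is a hypothetical example), an H1 simply wraps the page's main heading:

```html
<!-- The H1 tells search engines the page's main topic -->
<h1>Ergonomic Office Chairs</h1>
```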
Title Tags are a key factor in helping both users and search engines understand what a page is about. In addition to being the first impression many people have of a page, a well-written Title Tag may determine whether a visitor clicks through or not, even if the website ranks highly and is a well-known brand.
The best way to understand a Title Tag is to imagine it as the title of a book. A book or website’s title needs to be intriguing, insightful, and relevant to the content you are about to read.
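Continuing the book analogy, the "title" lives in the page's head section. This is a hypothetical example; the wording and brand name are placeholders:

```html
<head>
  <!-- Shown as the clickable headline in search results and in the browser tab -->
  <title>Ergonomic Office Chairs | Example Store</title>
</head>
```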
A meta description is a property that you create to describe the content of a page. Essentially, it's a short summary of what your page is about. Although search engines will sometimes replace your meta description with a snippet of text from your landing page, it's still important to write your own, as a compelling meta description can improve your clickthrough rate for keywords that are valuable to your business.
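The meta description also goes in the head section. A sketch with placeholder copy:

```html
<head>
  <!-- Short summary that may appear under the title in search results -->
  <meta name="description" content="Shop ergonomic office chairs with free shipping and a 10-year warranty."/>
</head>
```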
The terms Index and NoIndex tell Google and other search engines whether or not you want a page displayed in the search results: whether to index the page and make it available to everyone, or to hide it and prevent it from being found. Although it is a very simple tag to include in your code, it has significant implications.
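For example, on a page you want kept out of search results, such as the login page mentioned earlier, you would add a robots meta tag to the head section:

```html
<!-- Ask search engines not to show this page in search results -->
<meta name="robots" content="noindex"/>
```

Pages without this tag are indexable by default, so you only need to add it to pages you want hidden.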
Canonical tags are used to indicate the 'canon' or 'master page' among duplicated pages. This tag helps the search engine prioritize the page you have chosen to be the master page, and it primarily aids in resolving problems like duplicate content. One example is the URL "/office-chairs/" as opposed to "/office-chairs/?color=black". eCommerce and news publishing websites frequently experience duplicate content problems because they contain many largely identical content or product pages. However, whether you have a large site or not, it is always a good idea to ensure that your pages are canonicalized.
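Using the office-chairs example above (with a placeholder domain), the filtered URL would point back to the master page like this:

```html
<!-- Placed in the head of /office-chairs/?color=black to identify the master page -->
<link rel="canonical" href="https://www.example.com/office-chairs/"/>
```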
A file called robots.txt instructs search engine spiders (or crawlers) not to crawl particular pages or parts of a website. Major search engines, including Google, Bing, and Yahoo, recognize and abide by robots.txt requests.
Because Google can typically find and index all of your site's important pages, and automatically avoids indexing pages that aren't important or are duplicates of other pages, the majority of websites don't need a robots.txt file. If a bot visits your website and you don't have a robots.txt file, it will simply crawl and index your pages as it normally would.
Here are a few reasons why you may want a robots.txt file:
- You want to prevent certain content from appearing in search results.
- You are creating a live website but do not want the search engines to index any new pages just yet.
- You want to optimize how trusted bots and crawlers can access your website.
- You are utilizing paid links or advertisements that require bots to follow specific instructions.
- In certain circumstances, it helps you adhere to specific Google policies.
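A minimal robots.txt sketch covering the first two reasons above (the paths and domain are hypothetical), placed at the root of the site:

```text
# Applies to all crawlers
User-agent: *
# Keep crawlers out of a private section
Disallow: /admin/

# Optionally point crawlers at your sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Note that robots.txt controls crawling; to reliably keep an already-discovered page out of search results, use a NoIndex tag on the page itself.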