    The Moz Q&A Forum


    Moz Q&A is closed.

    After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies. More details here.

    Best practices for types of pages not to index

    Technical SEO
    RichHamilton_qcs (Subscriber):

      Trying to better understand best practices for when and when not to use a robots meta tag with content="noindex". Are there certain types of pages that we shouldn't want Google to index? Contact form pages, privacy policy pages, internal search pages, archive pages (using WordPress)? Any thoughts would be appreciated.

    clippingnext (Banned):

        Certainly! When it comes to SEO (Search Engine Optimization), there are certain types of pages that you may want to prevent search engines from indexing. This can help ensure that only your most relevant and valuable content is displayed in search engine results. Here are some best practices for types of pages not to index:

        Duplicate Content Pages:

        Avoid indexing pages with duplicate content, as search engines prefer unique content.
        Use canonical tags to indicate the preferred version of a page.
        Thin or Low-Quality Content Pages:

        Pages with little to no valuable content may harm your site's overall SEO.
        Consider adding substantial content to these pages or use meta tags to prevent indexing.
        Internal Search Results Pages:

        Exclude internal search results pages from indexing, as they may lead to a poor user experience in search results.
        Use a noindex meta tag, or disallow crawling in robots.txt, but not both: a crawler blocked by robots.txt can never see the noindex tag.
        Thank You and Confirmation Pages:

        Pages that users see after completing a form submission or transaction may not provide significant value to search engine users.
        Use the noindex meta tag to prevent indexing of thank you and confirmation pages.
        Login and Account Pages:

        Secure pages containing login forms or user account information to prevent unauthorized access.
        Use the robots.txt file to disallow crawling of these pages.
        Tag and Category Pages:

        Depending on your content management system, tag and category pages may be automatically generated. These can sometimes result in duplicate content issues.
        Use the noindex meta tag or canonical tags as appropriate.
        Paginated Pages:

        For large sets of paginated content, consider indexing only the main page. The rel="next" and rel="prev" tags were historically used to indicate paginated structure, though Google announced in 2019 that it no longer uses them as an indexing signal.
        Privacy Policy and Terms of Service Pages:

        While it's important to have these pages, they might not need to be indexed.
        Use the noindex meta tag if you don't want search engines to index these legal pages.
        Media Files and Non-HTML Content:

        Files like PDFs, images, and other non-HTML content may not need to be indexed.
        Use appropriate meta tags or header directives to prevent indexing.
        Test and Development Pages:

        Pages used for testing or development purposes should not be indexed.
        Use authentication or the robots.txt file to block search engine bots from accessing these pages.
        Always keep in mind that SEO best practices evolve, and it's essential to stay updated with the latest recommendations from search engines. Regularly check your website's performance in search engine results and adjust your indexing strategy accordingly.
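The noindex directive recommended throughout the list above is a standard robots meta tag. A minimal sketch for a thank-you page (page contents are illustrative) might look like:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <!-- "noindex" keeps the page out of the index; "follow" still lets
       crawlers pass link signals through any links on the page -->
  <meta name="robots" content="noindex, follow">
  <title>Thanks for your submission</title>
</head>
<body>
  <p>Your form was received.</p>
</body>
</html>
```

Note the tag only works if crawlers can actually fetch the page, so the URL must not also be disallowed in robots.txt.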

    epicsportsx:

          Best practices for determining which types of pages not to index involve strategic decisions to enhance the overall performance and relevance of your website on search engines. Here are some key considerations:

          Thin or Low-Quality Content:

          Recommendation: Identify and exclude pages with thin or low-quality content that doesn't provide substantial value to users. Focus on creating high-quality, informative content that aligns with user intent.

          Duplicate Content:

          Recommendation: Avoid indexing pages with duplicate content, as it can lead to confusion for search engines and may result in lower rankings. Use canonical tags to specify the preferred version of the content.

          Internal Search Result Pages:

          Recommendation: Exclude internal search result pages from indexing, as they often lead to duplicate content issues. Ensure that search engines focus on the primary content pages of your site.

          Archive or Staging Pages:

          Recommendation: Prevent search engines from indexing archive or staging pages. Use robots.txt or meta tags to disallow indexing of such pages to maintain the integrity of your live content.

          Thank You and Confirmation Pages:

          Recommendation: Non-essential pages like thank you or confirmation pages for form submissions may not need indexing. Exclude these pages to avoid unnecessary clutter in search engine results.

          Login or Session-Specific Pages:

          Recommendation: Exclude pages that require user authentication or are session-specific. This prevents search engines from indexing content that's not meant for public access.

          Paginated Pages:

          Recommendation: For paginated content, consider using rel="next" and rel="prev" tags to signal the relationship between pages. This helps search engines understand the structure without indexing each individual page.

          Category or Tag Pages:

          Recommendation: Depending on your website structure, category or tag pages may not need indexing. Ensure that these pages don't dilute the overall relevance of your site and use noindex tags if necessary.

          Privacy Policy, Terms of Service, and Legal Pages:

          Recommendation: While important for compliance, legal and policy pages may not require indexing in search results. Use noindex tags for these pages, allowing them to serve their purpose without being prominent in search listings.

          Dynamic URLs with Parameters:

          Recommendation: Exclude dynamically generated pages with URL parameters that don't represent unique content. Utilize canonical tags to manage these pages (the URL Parameters tool in Google Search Console was retired in 2022).

          Unnecessary Media or File Attachment Pages:

          Recommendation: Media or file attachment pages may not need indexing. Use noindex tags to prevent these pages from appearing in search results while still providing access to the media itself.

          Regularly audit and monitor your site's performance in search engine results to ensure that the selected pages for non-indexing align with your SEO strategy and user experience goals. Always consider the specific needs and structure of your website when implementing these best practices.
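For the parameterized-URL case described above, the usual fix is a canonical tag on each variant pointing at the clean URL. A sketch (URLs here are placeholders):

```html
<!-- Placed in the <head> of https://example.com/shoes?sort=price&page=2 -->
<!-- Tells search engines the unparameterized URL is the preferred version -->
<link rel="canonical" href="https://example.com/shoes">
```

Each parameter variant then consolidates its signals to the single canonical URL rather than competing with it.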


    successdunia:

            Here are some best practices for types of pages not to index:

            Duplicate content pages. If you have multiple pages with the same or similar content, it's generally a good idea to avoid indexing all of them. This could include printer-friendly versions, alternate language versions, or slight variations of the same content.

            Thin or low-quality content pages. Pages with little or no content, or pages with content that is poorly written or irrelevant to your target audience, should not be indexed.

            Internal search results pages. These pages are typically not meant for users to land on directly, and they can clutter up search engine results pages (SERPs).

            Privacy and policy pages. These pages are typically not relevant to search users, and they can contain sensitive information that should not be indexed.

            Thank-you pages. These pages are typically displayed after a user submits a form or makes a purchase, and they are not meant for indexing.

            Login and checkout pages. These pages are typically not relevant to search users, and they can contain sensitive information that should not be indexed.

            Staging or test pages. These pages are not meant to be seen by the public, and they can clutter up SERPs.

            Paginated pages. If your paginated pages contain the same content as your main product or category pages, you may want to consider noindexing them. This will help to avoid duplicate content issues.

            You can use a number of methods to prevent pages from being indexed, including:

            Robots.txt file. You can use your robots.txt file to block search engines from crawling certain pages on your website. Note that this blocks crawling, not indexing: a blocked URL can still appear in results if other sites link to it.

            Noindex meta tag. You can add a noindex meta tag to the header of a page to prevent it from being indexed.

            Canonical tags. You can use canonical tags to specify which version of a page is the preferred version for search engines. This can be helpful for preventing duplicate content issues.

            It's important to note that noindexing pages is not always necessary. For example, if you have a blog with a lot of high-quality content, you may want to index all of your pages, even if they have similar content. However, if you have a lot of low-quality or irrelevant pages on your website, it's a good idea to noindex them to avoid harming your SEO.
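The robots.txt method listed above can be sketched concretely. An illustrative file blocking internal search, a staging area, and session-ID variants (the paths are assumptions, not real Moz paths):

```text
# robots.txt -- blocks crawling, but does not guarantee de-indexing;
# use a noindex meta tag for pages that must stay out of results
User-agent: *
Disallow: /search/        # internal search result pages
Disallow: /staging/       # test/staging area
Disallow: /*?sessionid=   # session-ID URL variants
```

The file lives at the site root (e.g. https://example.com/robots.txt) and applies to all compliant crawlers via the `User-agent: *` group.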

    ShoaibRaza134:

              Thanks to all of you for sending me valuable information.

    ray660:

                To prevent specific types of web pages from being indexed by search engines, follow these best practices:

                • Use the robots.txt file to disallow crawling of entire sections or directories of your website.
                • On individual pages, use the meta robots tag to specify "noindex" or "nofollow" directives.
                • Employ the X-Robots-Tag HTTP header to communicate indexing preferences, either at the server level or on specific pages.
                • Password-protect pages that should be accessible only to authorized users.
                • Implement canonical tags to indicate the preferred version of a page.
                • Include only desired pages in your XML sitemap.
                • Maintain a clean URL structure, and use "noindex" directives for dynamic or user-generated content.
                • For pages you want completely removed from search results, return 404 or 410 HTTP status codes.
                • Regularly monitor indexed pages using tools like Google Search Console to confirm they match your indexing preferences, while considering the potential impact on SEO and user experience.
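The X-Robots-Tag header mentioned here is especially useful for non-HTML files such as PDFs, which cannot carry a robots meta tag. A sketch using Apache's mod_headers (assuming that module is enabled):

```apache
# Apache .htaccess sketch: send a noindex header for every PDF response
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```

A crawler fetching any matching file then sees `X-Robots-Tag: noindex, nofollow` in the HTTP response, with the same effect as a meta tag in an HTML page.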

    Moodesburn1991:

                  When it comes to search engine optimization (SEO), there are certain types of pages that you may consider excluding from being indexed by search engines. Here are some common examples:

                  Duplicate content pages: If you have multiple pages with similar or identical content, it's generally a good idea to avoid indexing all of them. This could include printer-friendly versions, alternate language versions, or slight variations of the same content.

                  Temporary or seasonal pages: Pages that are only relevant for a limited time, such as seasonal promotions or special event pages, may not need to be indexed. Once the event or promotion has passed, you can remove them from being indexed to prevent clutter in search engine results.

                  Private or internal pages: If you have pages that are intended for internal use only, such as employee login pages, private user profiles, or administrative sections, it's typically best to exclude them from indexing. This ensures that sensitive or irrelevant content doesn't appear in search results.

                  Thin or low-quality pages: Pages with minimal or insufficient content, such as placeholder pages, thin affiliate pages, or low-quality auto-generated content, might not provide much value to search engine users. It's generally better to improve or remove such pages rather than indexing them.

                  Pagination and sorting pages: Pages that only differ in sorting, filtering, or pagination functionality, such as category listings or search result pages, may not need individual indexing. In these cases, it's often recommended to use canonical tags or URL parameters to consolidate them into a single indexed page.

                  Remember, these are general recommendations, and the specific needs of your website may vary. It's always a good idea to consult with an SEO professional to understand what pages should or shouldn't be indexed based on your unique circumstances.


    hazelalia:

                    To prevent certain types of pages from appearing in search engine results, use methods like robots.txt, meta robots tags, or canonicalization. Common pages to exclude include duplicates, low-quality content, search result pages, login/profile pages, thank you pages, and outdated content. Be cautious when choosing which pages to exclude to avoid affecting your site's SEO and user experience.

    zarajan:

                        Best practices for preventing indexing of certain types of pages on your website include:

                        1. Avoid indexing duplicate content pages.
                        2. Exclude pages with thin or low-quality content.
                        3. Do not index internal search results pages.
                        4. Privacy and policy pages are typically not meant for indexing.
                        5. Consider "noindexing" tag or category archive pages.
                        6. Author pages may be "noindexed" if they lack substantial content.
                        7. "Thank-you" pages after form submissions or purchases can often be excluded.
                        8. Dynamic parameters or session IDs should not be indexed.
                        9. Pagination pages can be "noindexed" if they duplicate content.
                        10. Login or registration pages often don't need indexing.

                        Implement these practices using "noindex" meta tags or "robots.txt" directives while being cautious not to inadvertently block essential pages. Regularly monitor indexing status through tools like Google Search Console.
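The monitoring step can be partly automated. As an illustrative sketch (not a Moz or Google tool), a short script using only the Python standard library can check whether a page's HTML carries a noindex robots meta tag; fetching the page and querying Search Console are left out:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content of every <meta name="robots"> tag in a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        # Attribute *values* are not lowercased by HTMLParser, so normalize here
        if attrs.get("name", "").lower() == "robots":
            self.directives.append((attrs.get("content") or "").lower())


def is_noindexed(html: str) -> bool:
    """Return True if any robots meta tag on the page contains 'noindex'."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)


page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(is_noindexed(page))  # True
```

Running this against a crawl of your own URLs gives a quick audit of which pages are (or are not) carrying the directive you intended.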

    FanDirect:

      I try not to index terms-and-conditions and privacy policy pages. Then there are the "thank you" pages that might have conversion tracking pixels on them. I do this for a few sites.

    ShoaibRaza134:

      I have created a new website and I am new to blogging, so I need help with indexing and noindexing. Some experts say we need to index all our pages, while a few say just the opposite.
      I have a few pages like Affiliate Disclosures, Contact, About, and Privacy Policy.
      Should I index them or not?

    educationedge012:

                              Duplicate Content: Avoid indexing pages with duplicate or thin content to prevent SEO issues.

                              Private or Confidential Pages: Secure login, checkout, or admin pages should not be indexed.

                              Thank You Pages: Exclude pages users see after form submissions or purchases.

                              Low-Value Pages: Hide pages with low-quality or outdated content from indexing.

                              Temporary Pages: Prevent indexing of staging, test, or under-construction pages.

      Pagination: Use noindex on duplicate paginated pages to consolidate value (rel="nofollow" does not consolidate anything).

                              Canonicalization: Set canonical tags to specify preferred URLs for similar content.

                              Sitemaps: Exclude non-essential pages from your sitemap to control indexing.

                              Non-Content Files: Don't index PDFs, images, or other non-HTML files.

      Disallowed in Robots.txt: Use robots.txt to block search engines from crawling unwanted pages (it controls crawling, not indexing).

    windzbazel:

                                Certainly! Best practices for types of pages not to index are essential for optimizing your website's SEO performance. By carefully selecting which pages to exclude from search engine indexing, you can improve crawl budget allocation, enhance user experience, and maintain the quality of your site. These practices typically include:

                                Duplicate Content: Identifying and addressing duplicate content issues by using canonical tags to consolidate signals to search engines.

      Thin Content: Evaluating and improving pages with thin or low-quality content, either by updating them with relevant information or by redirecting them to more pertinent pages.

                                Private or Internal Pages: Ensuring that private or internal pages, such as login pages or admin sections, are not indexed to prevent them from appearing in search results.

                                Search Result Pages: Excluding search result pages from indexing to prevent user-generated queries from appearing in SERPs.

                                Media Files: Preventing indexing of media files like images, videos, and PDFs, as they may not provide valuable information in search results.

    msalmanz7:

                                  There are a few types of pages that you may not want to index, for a variety of reasons. Here are some of the best practices for types of pages not to index:

                                  Pages that are not relevant to your website's visitors: If a page is not relevant to the content of your website, or if it is not likely to be of interest to your visitors, then there is no reason to index it. This could include pages such as login pages, error pages, and internal pages that are only used by administrators.

                                  Pages that are duplicate content: If a page is duplicate content of another page on your website, then there is no need to index both pages. This could include pages that are generated dynamically, such as search results pages or product pages.

                                  Pages that are not secure: If a page is not secure, such as a page that uses HTTP instead of HTTPS, then you may not want to index it. This is because search engines may flag these pages as insecure, which could deter visitors from visiting your website.

      Pages that are frequently updated: Frequent updates alone are not a reason to noindex a page; search engines will simply crawl such pages more often. Only consider noindexing them if the content itself is low-value.

                                  Pages that are not mobile-friendly: If a page is not mobile-friendly, then you may not want to index it. This is because more and more people are using mobile devices to access the internet, and search engines are starting to favor mobile-friendly websites.

                                  By following these best practices, you can ensure that your website is indexed by search engines only with the pages that are most relevant and useful to your visitors.

                                  Here are some additional tips for deciding which pages not to index:

                                  Consider your audience: Think about the type of content that your visitors are looking for. If a page is not relevant to their interests, then there is no need to index it.

                                  Use your analytics: Look at your website analytics to see which pages are the most popular. These are the pages that you should focus on indexing.

                                  Get feedback from your visitors: Ask your visitors what type of content they are looking for. This feedback can help you decide which pages to index and which pages to exclude.

                                  By following these tips, you can make sure that your website is indexed by search engines in a way that is beneficial to your visitors.

    epicsportsx:

                                    Indexing decisions for web pages play a crucial role in search engine optimization (SEO) and overall website management. There are certain types of pages that you may want to prevent search engines from indexing to maintain the quality of your website's search engine results and to avoid potential SEO issues. Here are some best practices for types of pages not to index:

                                    Thin Content Pages: Avoid indexing pages with minimal or low-quality content. Such pages can include placeholder pages, duplicate content, or pages with very little text. Thin content can harm your website's SEO.

                                    Internal Search Result Pages: Search engines can sometimes index internal search result pages, which can lead to duplicate content issues. Use the "noindex" meta tag to prevent indexing of these pages.

                                    Tag and Category Pages: If you have a blog or a content-heavy website, tag and category pages may contain duplicate or low-value content. Consider using the "noindex" tag for these pages.

                                    Thank You and Confirmation Pages: Pages that users see after completing a form or making a purchase are often not useful for search engine results. Prevent these pages from being indexed to avoid cluttering search results.

                                    Private or Confidential Pages: Pages with sensitive information or private data should never be indexed. Make sure to use proper authentication and access controls to protect these pages.

                                    Duplicate Content Pages: If you have multiple versions of the same content (e.g., print-friendly versions, mobile versions), use canonical tags to indicate the preferred version and prevent duplicate content issues.

                                    Session ID or URL Parameters: Pages with session IDs or excessive URL parameters can create many duplicate URLs. Use URL canonicalization techniques or robots.txt to prevent indexing of unnecessary variations.

                                    Login Pages and Admin Sections: Prevent search engines from indexing login pages and admin sections of your website to maintain security and keep sensitive information hidden.

                                    Temporary or Under-Construction Pages: If you're working on a page that's not ready for public viewing, use the "noindex" tag to prevent it from appearing in search results.

                                    404 Error Pages: While 404 error pages should not be indexed, it's essential to provide a helpful 404 page that guides users to relevant content or the homepage.

                                    Pagination Pages: For paginated content like articles split across multiple pages, it's often best to let search engines index the main content and use rel="prev" and rel="next" tags to indicate the paginated structure without indexing each page individually.
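For reference, the rel markup described above looks like the snippet below (URLs are placeholders). Be aware that Google announced in 2019 that it no longer uses rel="next"/rel="prev" as an indexing signal; the tags remain harmless, and some other crawlers and accessibility tools may still read them:

```html
<!-- Placed in the <head> of page 2 of a paginated article -->
<link rel="prev" href="https://example.com/article?page=1">
<link rel="next" href="https://example.com/article?page=3">
```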


    marketingratis:

                                      I want to extend my gratitude to the author for this comprehensive guide on best practices for managing indexing in SEO. As someone deeply involved in digital marketing and SEO, I find this topic to be of utmost importance. In today's fast-paced online landscape, it's crucial to make informed decisions about which pages to index and which to keep out of search engine results pages (SERPs).

                                      One of the key takeaways from this article is the emphasis on optimizing crawl budget. Google's crawl budget is a finite resource, and ensuring that search engines allocate it wisely can significantly impact a website's overall performance. The author rightly points out that preventing the indexing of pages that don't add substantial value to users can help in this regard.

                                      One of the practices highlighted in the article is the use of the "noindex" meta tag. This is a simple yet effective way to communicate to search engines that specific pages should not be included in their index. I appreciate the step-by-step instructions provided on how to implement this tag properly. This can be particularly helpful for those new to SEO.

                                      Additionally, the article's discussion on using robots.txt to disallow crawling of certain pages is a valuable strategy. However, as mentioned, it's important to exercise caution when using this method to prevent accidentally blocking important pages. The emphasis on regularly monitoring the robots.txt file and conducting thorough testing is a crucial piece of advice. This shows that the author is not just focused on prevention but also on maintaining site health and visibility.

                                      Another aspect that I found intriguing is the section on "thin content" pages. Identifying and addressing these pages is essential for maintaining the quality of a website. It's great to see practical recommendations on how to handle such pages, including updating them with relevant content or redirecting them to more relevant pages. This demonstrates a commitment to providing the best possible user experience, which is at the core of SEO success.

                                      Furthermore, the article delves into the nuances of handling duplicate content, an issue that many SEO practitioners encounter. The explanation of canonical tags and their role in consolidating duplicate content signals to search engines is spot on. The emphasis on regular audits to identify and resolve duplicate content issues is a proactive approach that can prevent potential ranking and indexing problems down the road.

                                      I would like to add that, in my experience, it's also important to stay updated with Google's guidelines and algorithm changes. Google's algorithms are constantly evolving, and what works today may not be as effective tomorrow. Therefore, staying informed through resources like Google Webmaster Guidelines and reputable SEO news sources is essential.

                                      In conclusion, this article provides a wealth of practical information and strategies for managing indexing in SEO. It's evident that the author has a deep understanding of the subject matter and is committed to helping SEO professionals make informed decisions. I look forward to reading more insightful articles from this source in the future. Thank you for sharing these valuable insights.

                                      Team marketingratis.com

                                      • Moodesburn1991
                                        Moodesburn1991 last edited by

                                        When it comes to optimizing your website for search engines, there are certain types of pages that you may want to consider not indexing. Here are a few examples of such pages:

                                        Duplicate content: Pages that contain identical or substantially similar content to other pages on your site or elsewhere on the web. Search engines prefer unique and original content, so it's best to avoid indexing duplicate pages.

                                        Thin content: Pages that lack substantial content or are of low quality. These can include pages that have little to no text, primarily consisting of images, videos, or advertisements. Search engines tend to prioritize pages with valuable and informative content.

                                        Temporary or staging pages: Pages created during the development or testing phase of your website, which may not be relevant or useful to search engine users. It's a good idea to prevent these pages from being indexed to avoid confusion or negative impacts on search engine rankings.

                                        Private or sensitive information: Pages that include personal information, login pages, or any content that should be restricted to authorized users only. Preventing indexing of such pages can help maintain privacy and security.

                                        To prevent search engines from indexing specific pages, you can use "robots.txt" directives or the HTML "noindex" meta tag. These methods allow you to control which pages search engines can crawl or index, though note that they work differently: robots.txt blocks crawling, while the noindex tag blocks indexing.
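To make the two mechanisms concrete, here is a minimal sketch (the paths are hypothetical). Keep in mind the distinction: robots.txt blocks crawling, while the noindex meta tag blocks indexing. A page disallowed in robots.txt cannot be crawled, so a noindex tag on that page would never be seen by the search engine.

```html
<!-- In the <head> of a page you want crawled but not indexed
     (e.g. an internal search results page): -->
<meta name="robots" content="noindex">
```

```
# robots.txt — blocks crawling (not indexing) of hypothetical paths
User-agent: *
Disallow: /search/
Disallow: /staging/
```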

                                        Remember, it's essential to regularly review and update your website's indexing strategies to ensure optimal visibility and user experience.

                                        • talentwide
                                          talentwide last edited by

                                          The article does a great job of highlighting various scenarios where you should consider using the "noindex" meta tag or other techniques to prevent certain pages from appearing in search engine results. Whether it's duplicate content, thin or low-quality pages, internal search result pages, or sensitive information, this post provides valuable insights and actionable tips to help improve your website's SEO and user experience.

                                          • LouieFlynn
                                            LouieFlynn last edited by

                                            Pages with duplicate content, like printer-friendly versions, should be set as "noindex" to prevent confusion. Thin or low-quality content pages, such as placeholders or login screens, should also be excluded. Internal search results, tag/category pages, and user-generated content areas might be better off without indexing due to potential duplicate-content or spam concerns. Thank-you/confirmation pages, sensitive content, and paginated or sorted versions of content can also benefit from not being indexed.

                                            • maribailey10
                                              maribailey10 @donsilvernail last edited by

                                              @donsilvernail What should I do with pages that I've de-indexed intentionally, like the contact us and privacy policy pages? I had to do this on my personal blog, Footyware. Can I still interlink them with my other pages, such as the homepage? Please guide me.

                                              • seoman10
                                                seoman10 last edited by

                                                We need to be clear on the purpose of "noindex": search engines will still crawl the page, but in theory it will not be published in the index. Some search engines may still choose to index a page despite the noindex tag. Also, that page will still be publicly accessible on your website.

                                                As already noted a couple of times, I would be very slow to noindex any page.

                                                I can't think of many applications where it would be used. The way I view it, something is either public or private: if it's public, you probably want search engines to find it; if it's private, it should be locked away behind a username and password.
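A side note on mechanics: since noindex travels with the page response itself, it can also be delivered as an HTTP header, which is the only option for non-HTML files such as PDFs. A minimal sketch using Apache with mod_headers enabled (the file pattern is hypothetical; the X-Robots-Tag header itself is documented by the major engines):

```
# Apache config (requires mod_headers): send noindex for all PDFs
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```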

                                                • donsilvernail
                                                  donsilvernail last edited by

                                                  Hi Richard,

                                                  Some archive pages in WordPress can produce significant traffic, especially if the articles under the archive are informative and the tag or category you use is a good keyword that provides value. So I only noindex archives that have no real value.

                                                  Contact forms are up to you. Does the form sit on a landing page you want visitors to find, or is it just an internal link for data collection? The determination of what should be indexed or noindexed comes down to which pages bring value to potential visitors. Many internal search pages bring no value to a user searching for your content on Google, so these could be noindexed. User archives could also be noindexed, especially if the user is not an author of content on your site.

                                                  Thanks,

                                                  Don Silvernail

                                                  • Joe_Stoffel
                                                    Joe_Stoffel last edited by

                                                    Hi there,

                                                    Really, any pages that you would not want returned to a user in the SERPs. Does the site contain sensitive personal information in some sort of customer profile? If so, you would want to noindex those pages.

                                                    I would not noindex contact form pages (valuable for users to be able to find) but internal search pages would be a good candidate as well as 'thank you' pages. If you have an ecommerce website, noindexing the shopping cart would be another smart idea.

                                                    As for archive pages, I tend to handle these with a canonical tag.
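The canonical approach mentioned here is a single link element on the archive or duplicate page pointing at the preferred URL (the URL below is hypothetical). Unlike noindex, a canonical tag is a hint that consolidates ranking signals onto the preferred version rather than a hard directive:

```html
<!-- On an archive or duplicate page, pointing to the preferred version -->
<link rel="canonical" href="https://example.com/blog/original-article/">
```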

                                                    Hope this helps!
