# STRINGERSEO

> 20 years of experience delivering SEO and content marketing strategies. Bespoke SEO and digital marketing solutions! Get in touch to find out more.

---

## Pages

- [content strategy.](https://stringerseo.co.uk/expertise/content-strategy/): Bespoke content strategy & production for small businesses, SMEs and large brands. Contact us for a complimentary SEO growth blueprint!
- [technical seo.](https://stringerseo.co.uk/expertise/technical-seo/): Bespoke technical SEO & GEO services for small businesses, SMEs and large brands. Contact us for a complimentary SEO growth blueprint!
- [seo services in west sussex, united kingdom.](https://stringerseo.co.uk/expertise/seo-west-sussex-united-kingdom/): SEO training, consultancy and content services in West Sussex, United Kingdom. Get in touch today and find out more!
- [seo consultant in brighton, united kingdom.](https://stringerseo.co.uk/expertise/seo-brighton-united-kingdom/): Customer-focused SEO and content strategy consultancy services in Brighton, East Sussex. Get in touch today!
- [seo services in worthing, west sussex.](https://stringerseo.co.uk/expertise/seo-worthing-west-sussex/): SEO training, consultancy and content services in Worthing, West Sussex. Get in touch today and find out more!
- [seo services in shoreham-by-sea, west sussex.](https://stringerseo.co.uk/expertise/seo-shoreham-by-sea/): SEO training, consultancy, and content services in Shoreham-by-Sea, West Sussex. Digital marketing consultancy. Get in touch to learn more.
- [services.](https://stringerseo.co.uk/expertise/): Discover professional & bespoke SEO services, consultancy and website development for SMEs and small businesses. Get in touch today!
- [sitemap.](https://stringerseo.co.uk/sitemap/): Discover all the pages and blog posts on https://stringerseo.co.uk.
- [about.](https://stringerseo.co.uk/jonathan-stringer/): Jonathan Stringer is an SEO professional tailoring bespoke digital marketing campaigns for SBs, SMEs, and LEs in the UK. Get in touch today!
- [contact.](https://stringerseo.co.uk/contact/): Get in touch if you would like to explore tailored SEO, PPC & web build solutions. Fill in the form below or email jonathan@stringerseo.co.uk.
- [seo guides.](https://stringerseo.co.uk/guides-resources/): Explore a wealth of in-depth articles, comprehensive guides, and real-world case studies to help you keep up with SEO.
- [case studies.](https://stringerseo.co.uk/case-studies/): SEO case studies about Jonathan Stringer collaborating with businesses to grow organic traffic. Get in touch for SEO consultancy!
- [cookie policy.](https://stringerseo.co.uk/cookie-policy-uk/): This Cookie Policy was last updated on 19 March 2024 and applies to citizens and legal permanent residents of the United Kingdom.

---

## Posts

- [How to use SerpApi for keyword research (step-by-step with Python examples)](https://stringerseo.co.uk/content/how-to-use-serpapi-in-keyword-research/): See an example of a Python script snippet that calls SerpApi, and learn how you can use it in your SEO keyword research workflow.
- [How to Automate Keyword Research with Keywords Everywhere and Google Sheets](https://stringerseo.co.uk/technical/automate-keyword-research/): If you’re working on keyword research in Google Sheets, you know how tedious it can be to copy and paste keywords and...
- [10 Steps to Evaluate an SEO Agency](https://stringerseo.co.uk/seo-ai/marketing/10-steps-to-evaluate-your-seo-agency/): Discover 10 practical steps to audit SEO agencies' work, set clear KPIs, demand meaningful reports and decide whether to find a new agency.
- [SEO Website & Blog Migrations: Checklist & Best Practice Guide](https://stringerseo.co.uk/technical/seo-blog-migrations/): Find out how to do an SEO blog or website migration. A simple process with a detailed checklist of actions to help nurture organic traffic.
- [Semantic SEO: what it is and how to optimise for it](https://stringerseo.co.uk/content/semantic-seo/): Find out what Semantic SEO is, and how you can optimise for it to achieve greater visibility in search results and AI answer engines.
- [How to Find Keyword Opportunities Using Google Search Console](https://stringerseo.co.uk/content/how-to-find-keyword-opportunities-on-google-search-console/): Find out how to use Google Search Console for keyword research to unlock opportunities for content optimisation or to create new pages.
- [How SEO is a long-term, user-focused strategy](https://stringerseo.co.uk/seo-ai/how-is-seo-a-long-term-user-focused-strategy/): Learn how SEO is a user-focused strategy, what techniques to avoid, and what questions help you understand if your content is helpful.
- [How to Use Structured Data for Local SEO](https://stringerseo.co.uk/content/how-to-use-structured-data-for-local-seo/): Learn how to apply structured data to help grow your local visibility for the different search terms that people use in Google!
- [How to Extract Canonical URLs in Google Sheets Using Python](https://stringerseo.co.uk/technical/how-to-extract-canonical-urls-in-google-sheets-using-python/): Learn how to extract canonical URLs in Google Sheets using Python. Useful for when IMPORTXML is being clunky for large URL lists!
- [How to Optimise a Product Data Feed for Google Merchant Center](https://stringerseo.co.uk/content/how-to-optimise-a-product-data-feed-for-google-merchant-center/): Learn how to optimise and create a product data feed following Google's best practice guidelines from Merchant Center.
- [How to Add an Author Bio on Author Archives in WordPress](https://stringerseo.co.uk/technical/how-to-add-an-author-bio-on-author-archives-in-wordpress/): Find out how to inject an author's biography underneath the H1 of an author archive page template in WordPress!
- [How to Add Image Links in a Custom WordPress XML Sitemap](https://stringerseo.co.uk/technical/how-to-add-image-links-in-a-wordpress-xml-sitemap/): If you have built a custom WordPress XML sitemap generator, this article shows you how to link to images from the post XML sitemap.
- [How to Structure a Blog Post](https://stringerseo.co.uk/content/how-to-structure-a-blog-post/): SEO tips for small businesses. Find out how to structure and optimise a blog post to generate organic website traffic.
- [How To Use Keywords Everywhere for Keyword Research](https://stringerseo.co.uk/content/keyword-research-using-keywords-everywhere/): Learn how to do keyword research using Keywords Everywhere, a tool to help you understand what keywords are being searched for in Google.
- [How to Build a Custom XML Sitemap in WordPress](https://stringerseo.co.uk/technical/how-to-build-a-custom-xml-sitemap-in-wordpress/): XML sitemaps are important for SEO: they provide a list of pages from your website for search engines to crawl...
- [How to Start a Basic SEO Strategy in 5 Steps](https://stringerseo.co.uk/content/seo-tips-for-small-business-owners/): Learn how to start an SEO strategy for your small business. Find out what SEO factors are important and help generate more website traffic.
- [How to Create a WordPress Plugin to Warn Content Editors About Long URLs](https://stringerseo.co.uk/technical/wordpress-plugin-warning-about-long-urls/): URLs are a very important part of user experience and SEO. Here's a WP plugin to warn admin users about long URLs.
- [How & Why Brands Are Winning With Reddit Marketing](https://stringerseo.co.uk/link-earning/why-brands-are-winning-with-reddit-marketing/): Reddit is the fastest-growing social media platform in the UK, with a user base that has grown 47% year-on-year as of 2024.
- [How to Automate & Extract Screaming Frog Issues into Google Sheets](https://stringerseo.co.uk/technical/how-to-automate-extract-screaming-frog-issues-into-google-sheets/): Learn how to automate the process of organising issue reports in Google Sheets by running a simple Python script!
- [How to Modify All WordPress Links for a Reverse Proxy Setup](https://stringerseo.co.uk/technical/how-to-modify-all-wordpress-links-for-a-reverse-proxy-setup/): If you are hosting your WordPress blog or website behind a reverse proxy under a different domain, you might experience...
- [HubSpot Blog Post Exports: How to Clean Them Up for WordPress Using Google Sheets](https://stringerseo.co.uk/technical/hubspot-blog-posts/): HubSpot blog post exports: find out how to clean them up before importing into WordPress. This technique is useful for blog migrations.
- [URL Redirect Mapping with Python for Website Migrations](https://stringerseo.co.uk/technical/seo-migration-automate-url-mapping-with-python/): Streamline your SEO migration with this Python script. Automate URL mapping using page titles, body content and fuzzy matching.

---

# Detailed Content

## Pages

> Bespoke content strategy & production for small businesses, SMEs and large brands. Contact us for a complimentary SEO growth blueprint!

- Published: 2025-06-25
- Modified: 2026-02-26
- URL: https://stringerseo.co.uk/expertise/content-strategy/

why it matters. The best-written content can sit unread if it doesn’t resonate with a target audience or if it fails to guide them toward taking action. Our human-first content strategy targets search intent and helps foster decisions, with the aim of turning visitors into customers.

what we've achieved. Leading end-to-end SEO & content strategies for Purpl Discounts, we grew organic traffic by 51.9% and conversions by 38.8% in the first half of 2025. find out more →

20 years experience. user & data driven. measurable results.

what you'll gain.

content roadmap.
A customised calendar of topics, formats and channels mapped to each stage of your customer journey and to seasonal factors.

user-focused content. Jargon-free copy that engages readers and encourages them to connect with your brand or services.

competitive edge. In-depth gap analysis to fill the topics and niches your competitors missed, claiming untapped search visibility.

measurement & optimisation. Monthly updates and tests to keep your content fresh, relevant and aligned with evolving user needs & algorithms.

agile content process.

what we offer.

research & discovery. This phase digs into the keywords people are actually searching for: not just the obvious ones, but the ones that signal real intent. We use our own AI tools to size up the competition and figure out where the gaps are.

strategic planning. We’ll build out a clear content calendar with priorities, deadlines, and formats that make sense for your goals. Each brief includes target keywords, suggested headlines, and calls to action that aren’t just ticking boxes; they’re there to drive action. We also look closely at who we’re talking to. That means proper audience and demographic research, so the tone, topics and angles actually land. We’ll group themes into clusters and pillars to build momentum over time.

creation, integration, optimisation & distribution. We publish a mix of formats, from practical guides and listicles to trending news and narrative-led content, all designed to support your broader goals. Every asset is optimised for search, covering essentials like title tags, meta descriptions, internal links and schema markup to maximise visibility. We also integrate rich media that drives deeper engagement, like custom imagery, infographics or video content that adds value and strengthens SEO performance. Content isn’t just created and left to sit. We distribute it across paid, owned and earned channels to help boost awareness, attract links, and drive meaningful interaction.
And if there’s existing content that performed well in the past? We’ll look at ways to refresh or repurpose it so it continues to support your growth targets and KPIs.

measurement & reporting. You get a monthly dashboard showing what’s working (traffic, visibility, engagement) and where to tweak. From there, we make small but meaningful changes that keep performance moving in the right direction.

frequently asked questions. FAQs.

What makes an SEO content strategist different from a writer? An SEO content strategist does deep keyword and search intent research for content planning and implements optimisation best practices to help make sure content appears in organic search results, resonates with audiences and converts.

Is this service right for my business? We tailor content strategies to your goals and budget, focusing on the topics and keywords that drive the most impact, whether you’re a startup, small business, small-to-medium enterprise, or a global brand.

What are the 4 steps of content strategy? A straightforward four-step framework is: 1. Define your goals and audience. 2. Map topics to users and channels. 3. Launch content and distribute across paid, owned, and earned channels. 4. Measure and revise based on performance data.

---

> Bespoke technical SEO & GEO services for small businesses, SMEs and large brands. Contact us for a complimentary SEO growth blueprint!

- Published: 2025-06-24
- Modified: 2026-02-26
- URL: https://stringerseo.co.uk/expertise/technical-seo/

why it matters. Even the most compelling content can underperform if search engine crawlers hit roadblocks or if your site loads slowly for users. Technical SEO lays a rock-solid foundation so Google (and your users) can reach every page, fast.

what we've achieved. After our technical SEO migration and strategy, My Bespoke Room experienced 21% average MoM growth in organic traffic in 2025. find out more →

20 years experience. user & data driven. boost search visibility.

what you'll gain.
index more pages. Prevent lost opportunities: reduce indexing errors so every key piece of content, and every page, appears in organic search.

faster page loads. Load in under 2 seconds on desktop and mobile to help keep bounce rates down and engagement up on both devices.

SEO & GEO visibility. Gain visibility in Google's AI Overviews and generative AI platforms by implementing a mix of technical and content marketing strategies.

improved experiences. Hit Core Web Vitals thresholds on all the important KPIs for smooth performance across devices.

eliminate crawl errors.

what we offer.

comprehensive website audit. We’ll dig deep into your site with industry-leading tools to identify crawl errors, broken links and any blockers hiding in your website's code. Then we’ll hand you a clear, prioritised list of fixes alongside strategic tips for bigger-picture improvements, so you can start winning quick SEO gains today and plan for long-term health tomorrow.

user-first architecture. We'll work on your content and URL hierarchy, implement a pillar-and-cluster content strategy to strengthen topical relevance, and fine-tune your internal links so users and crawlers alike can navigate your site. We aim to achieve deeper crawl reach, smoother navigation and a happier audience that sticks around longer.

speed & performance tuning. From smart image compression and lazy loading to caching and server-side tweaks, we’ll work towards reducing your load times, aiming for under 2 seconds whether users are on a desktop or a mobile device. Faster pages mean fewer bounces and more engagement.

core web vitals optimisation. We provide suggestions to fine-tune your CSS, JavaScript and resource loading patterns to meet Google’s thresholds for Largest Contentful Paint, First Input Delay and Cumulative Layout Shift. By delivering a smooth experience on smartphones and tablets, you’ll keep users engaged regardless of device.

structured data.
We’ll craft and test the right schema markup (Product, Review, FAQ and more) so that Google and generative AI platforms really “understand” your content and serve it for relevant queries.

website & blog migrations. Move domains or platforms without losing hard-won SEO equity. We plan and execute every 301 redirect, update your crawl map, validate canonical tags and run full staging tests so you retain rankings and traffic through launch and beyond.

ongoing monitoring & reporting. With continuous scans, real-time alerts and a live dashboard, we catch crawl spikes, Core Web Vitals regressions and broken links before they hurt you. Every month, you’ll get a concise report that maps technical health directly to your business goals.

frequently asked questions. FAQs.

What exactly is Technical SEO? It’s the practice of optimising your site’s infrastructure (crawling, indexing, load speed and user experience) to help improve how much of your content is indexed and ranking in search engine results.

How much will this cost? Bespoke: it really depends on the size of your website and how long the audit will take to complete. We work on a bespoke model and will see what is possible with the budget that has been allocated. We use AI to save time and costs where possible, while still aiming to deliver the best value to our customers.

How soon will I see results? Most clients notice a drop in crawl errors and speed improvements within 30 days, with search visibility gains becoming evident by month two or three. However, this really depends on how long it takes to implement the technical audit.

---

> Discover all the pages and blog posts on https://stringerseo.co.uk.

- Published: 2025-02-05
- Modified: 2026-02-26
- URL: https://stringerseo.co.uk/sitemap/

Browse https://stringerseo.co.uk

Pages:

- about.
- case studies.
- contact.
- Content Strategy Services.
- SEO Consultant in Brighton, United Kingdom.
- seo guides.
- SEO Services in Shoreham-by-Sea, West Sussex.
- SEO Services in West Sussex, United Kingdom.
- SEO Services in Worthing, West Sussex.
- services.
- sitemap.
- STRINGERSEO Ltd Cookie Policy.
- Technical SEO Services.

Blog Posts:

- 10 Steps to Evaluate an SEO Agency
- Canonical Tags: What They Are and Why They Matter
- How & Why Brands Are Winning With Reddit Marketing
- How SEO is a long-term, user-focused strategy
- How to Add an Author Bio on Author Archives in WordPress
- How to Add Image Links in a Custom WordPress XML Sitemap
- How to Automate & Extract Screaming Frog Issues into Google Sheets
- How to Automate Keyword Research with Keywords Everywhere and Google Sheets
- How to Build a Custom XML Sitemap in WordPress
- How to Create a WordPress Plugin to Warn Content Editors About Long URLs
- How to Extract Canonical URLs in Google Sheets Using Python
- How to Find Keyword Opportunities Using Google Search Console
- How to Modify All WordPress Links for a Reverse Proxy Setup
- How to Optimise a Product Data Feed for Google Merchant Center
- How to Start a Basic SEO Strategy in 5 Steps
- How to Structure a Blog Post
- How To Use Keywords Everywhere for Keyword Research
- How to use SerpApi for keyword research (step-by-step with Python examples)
- How to Use Structured Data for Local SEO
- HubSpot Blog Post Exports: How to Clean Them Up for WordPress Using Google Sheets
- Semantic SEO: what it is and how to optimise for it
- SEO Website & Blog Migrations: Checklist & Best Practice Guide
- URL Redirect Mapping with Python for Website Migrations

---

> Get in touch if you would like to explore tailored SEO, PPC & web build solutions. Fill in the form below or email jonathan@stringerseo.co.uk.

- Published: 2025-02-05
- Modified: 2026-02-26
- URL: https://stringerseo.co.uk/contact/

Get in touch by filling out the form below or email jonathan@stringerseo.co.uk.

* Contact form secured by Jetpack.

---

> Explore a wealth of in-depth articles, comprehensive guides, and real-world case studies to help you keep up with SEO.
- Published: 2025-02-05
- Modified: 2026-02-26
- URL: https://stringerseo.co.uk/guides-resources/

Explore articles and guides designed to empower you with the knowledge and confidence to tackle SEO, content, and digital marketing projects. Whether you're a beginner looking for step-by-step instructions or an experienced enthusiast seeking expert insights, these resources will help you master new skills, troubleshoot challenges, and bring your ideas to life.

---

> SEO case studies about Jonathan Stringer collaborating with businesses to grow organic traffic. Get in touch for SEO consultancy!

- Published: 2024-03-15
- Modified: 2026-02-26
- URL: https://stringerseo.co.uk/case-studies/

We work with a diverse range of businesses, from ambitious start-ups, small local businesses (SBs) and dynamic small-to-medium enterprises (SMEs) to well-established large enterprises (LEs). STRINGERSEO's experience spans industry-leading brands such as Zoopla, Hamptons International, Pepsi Max, Land Rover, Mazda, Honda, Valusys, REKKI, Dexerto, and many more.

2025, My Bespoke Room. Developed a new blog for My Bespoke Room and led the SEO migration from a subdomain to the main website. The development and migration process took two months, resulting in an initial 6% uplift and then a 21% MoM uplift in organic traffic post-migration. MyBespokeRoom.com

2024 to 2025, Purpl Discounts. Led end-to-end SEO strategy, content, technical and editorial, to enhance accessibility and conversions for disabled consumers. Grew organic traffic by 51.9% and conversions by 38.8% in the first half of 2025. Purpldiscounts.com

2025, KaizenJDM. Web development in progress. Kaizenjdm.com

2024, STL Training. Worked alongside the digital marketing team at STL Training to ensure the website adhered to 2024's new data privacy laws for third-party cookies. STL-Training.co.uk

2023 to 2024, Eventbrite. Worked as part of the Foundation Marketing team and put together a strategy to recover performance after the core algorithm and helpful content updates in 2023. Eventbrite.com

2022 to 2023, eBay. We launched the eBay sneaker platform in the USA as a POC (proof of concept) for future content development. This drove a 5x increase in organic visits from the end of May 2023. ebay.com/sneakers

2022 to 2023, PHIN. SEO consultancy focused on optimising content for audiences searching for private healthcare information. This grew clicks at an average rate of 10% MoM (month-on-month) in H2 2022. In 2023, we worked on improving user experience metrics. PHIN.org.uk

2022, Dexerto. We set an E-E-A-T and technical SEO strategy that improved performance by 22%, comparing the periods before and after Google released its core, helpful content and product reviews algorithm updates in September 2022. This contributed to a record number of visits ahead of Black Friday 2022. Dexerto.com

2021, Boomin. While the business was trading, we got the website technically SEO-ready and wrote evergreen content before its launch. This grew traffic at an average rate of 6% DoD (day-on-day) at launch. Boomin.com

2020 to 2021, REKKI. We connected with new audiences by pragmatically publishing new content at a UK & USA location level. Performance improved by +85% YoY in just three months. Rekki.com/food-wholesalers

2019 to 2020, MyVoucherCodes. We restored and published new content to recover organic search visibility after a two-year decline. Search rankings improved by 60% in seven months and traffic was up 12% MoM on average throughout this period. Myvouchercodes.co.uk

2016 to 2019, Zoopla Property Group. Traffic grew by 6% for Zoopla.co.uk, 41% for Primelocation.com, and 62% for Smartnewhomes.com by integrating product, engineering, and marketing into a cross-functional team.
Working closely with social, content, and PR teams, we learnt what mattered to our audience by understanding their pain points in the property journey and how they searched for information. This allowed us to create location-specific content that genuinely helped users, leading to stronger engagement. Zoopla.co.uk

---

## Posts

> See an example of a Python script snippet that calls SerpApi, and learn how you can use it in your SEO keyword research workflow.

- Published: 2025-08-28
- Modified: 2026-02-07
- URL: https://stringerseo.co.uk/content/how-to-use-serpapi-in-keyword-research/
- Categories: Content

Summary: SerpApi is a third-party service that captures and parses search engine results pages (SERPs) and returns structured data (typically JSON). It can support keyword research by revealing SERP intent, query refinements, “People also ask” questions, related searches, and competitor page sets. It does not provide keyword search volume on its own.

Table of contents

- What SerpApi is (and what it isn’t)
- When SerpApi is useful in keyword research
- Quick start (Python)
- Core parameters for keyword research (hl, gl, location, device)
- Extracting keyword signals from the SERP
- Pagination: expanding competitor and intent coverage
- Keyword clustering workflow (practical approach)
- Scaling responsibly: rate limits, caching, and storage
- Common pitfalls
- FAQs

What SerpApi is (and what it isn’t)

SerpApi provides programmatic access to SERP data by collecting results and returning a structured representation. In practice, that means an application can request a “SERP snapshot” for a query and receive parsed modules such as organic results, “People also ask”, related searches, and other features (where present).

Important clarification: SerpApi is not an official Google product or an official Google Search API. It is a third-party SERP extraction service. Also important: SerpApi is not a keyword volume tool.
It will not provide Google Ads keyword volumes unless another data source is used for that purpose.

Compliance note (non-legal guidance): Automated querying of search engines can raise terms-of-service, policy, and legal considerations. Implementations should be designed to operate responsibly (e.g. rate limiting, caching, minimal collection, and appropriate usage review by internal stakeholders).

When SerpApi is useful in keyword research

SerpApi is most valuable when the goal is to learn what the SERP reveals, rather than to estimate volume:

- Intent validation: identifying whether the SERP is informational, commercial, navigational, or local.
- Query expansion: collecting related searches and “People also ask” questions as modifiers and subtopics.
- Competitor discovery: building a list of ranking URLs and domains for a query, then analysing patterns.
- SERP feature mapping: spotting features that change content requirements (e.g. local packs, shopping, videos).
- Localisation research: comparing results across countries/languages/locations and devices.

Quick start (Python)

This example uses SerpApi’s Python client to retrieve a Google SERP response and print key modules.

```python
# pip install google-search-results
from serpapi import GoogleSearch

params = {
    "engine": "google",
    "q": "ambient sound design tools",
    "api_key": "YOUR_API_KEY",
    "hl": "en",  # interface language
    "gl": "uk",  # country code
    # "location": "London, England, United Kingdom",  # optional
    # "device": "desktop",  # optional (depends on engine support)
}

search = GoogleSearch(params)
results = search.get_dict()

organic = results.get("organic_results", [])
paa = results.get("related_questions", [])  # often used for "People also ask"
related = results.get("related_searches", [])

print("Organic results:", len(organic))
print("PAA questions:", len(paa))
print("Related searches:", len(related))
```

Tip: For keyword research, it is usually more helpful to persist the raw JSON response (for auditing and reproducibility) and extract a smaller “signals” dataset for analysis.

Core parameters for keyword research (hl, gl, location, device)

Keyword research frequently fails when SERPs are pulled without controlling for geography and language. At minimum, keep these parameters explicit:

- hl: interface language (e.g. en, pt, es).
- gl: country code to influence results (e.g. uk, br, co).
- location: city/region-level localisation (useful when local packs appear or for regional intent).
- device: device context (desktop vs mobile) can change SERP layouts and feature prevalence.

Practical workflow: Use a consistent default (e.g. hl=en, gl=uk) and only vary one dimension at a time during comparisons (location OR language OR device). This makes differences explainable.

Extracting keyword signals from the SERP

A SerpApi response can be mined for multiple research signals. The exact keys can vary by engine and by which SERP features appear, so extraction should be defensive (i.e. treat fields as optional).

1) Related searches (query expansion)

Related searches are a strong source of modifiers and adjacent topics. They can also be used to form cluster candidates.

```python
def extract_related_searches(results: dict) -> list:
    related = results.get("related_searches", []) or []
    out = []
    for item in related:
        q = item.get("query") or item.get("title")
        if q:
            out.append(q.strip())
    return out
```

2) “People also ask” (question-based intent)

Questions often map cleanly to FAQ sections, supporting information architecture and internal linking planning.

```python
def extract_paa_questions(results: dict) -> list:
    paa = results.get("related_questions", []) or []
    out = []
    for item in paa:
        q = item.get("question")
        if q:
            out.append(q.strip())
    return out
```

3) Organic results (competitor set + page patterns)

Collect URLs, titles, and snippets to identify page types, content formats, and repeated subtopics.

```python
from urllib.parse import urlparse

def extract_organic(results: dict) -> list:
    organic = results.get("organic_results", []) or []
    out = []
    for r in organic:
        link = r.get("link")
        title = r.get("title")
        snippet = r.get("snippet")
        if link:
            domain = urlparse(link).netloc.replace("www.", "")
            out.append({
                "title": title,
                "link": link,
                "domain": domain,
                "snippet": snippet,
                "position": r.get("position"),
            })
    return out
```

Pagination: expanding competitor and intent coverage

Many keyword research tasks benefit from collecting more than the first page of results, especially when building competitor sets or when SERP intent is mixed. Pagination commonly uses a start parameter (e.g. 0, 10, 20...).

```python
from serpapi import GoogleSearch

def fetch_pages(query: str, pages: int = 3, page_size: int = 10):
    all_results = []
    for i in range(pages):
        params = {
            "engine": "google",
            "q": query,
            "api_key": "YOUR_API_KEY",
            "hl": "en",
            "gl": "uk",
            "start": i * page_size,
        }
        results = GoogleSearch(params).get_dict()
        all_results.append(results)
    return all_results
```

Practical note: Pagination increases cost and request volume. It should be used selectively (e.g. for priority topics, not for every long-tail variation).

Keyword clustering workflow (practical approach)

SerpApi can support clustering by comparing SERP similarity across queries. A practical approach is to compare the overlap of ranking domains or URLs and then group queries that share a high proportion of results.

Rule of thumb: A threshold such as “60%+ overlap” can work as a starting heuristic, but it is not universal. Some topics naturally produce more diverse SERPs, especially localised or news-driven queries.

Simple overlap example (domain overlap)

```python
from urllib.parse import urlparse

def domain_set(results: dict) -> set:
    items = results.get("organic_results", []) or []
    out = set()
    for r in items:
        link = r.get("link")
        if link:
            out.add(urlparse(link).netloc.replace("www.", ""))
    return out

def overlap(a: set, b: set) -> float:
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))
```

Suggested workflow:

1. Choose a small set of seed queries per topic.
2. Pull one SERP per query using consistent location/language parameters.
3. Compare domain overlap (or URL overlap for stricter matching).
4. Cluster queries above the chosen threshold.
5. Assign a primary keyword per cluster based on relevance and SERP intent (not solely on wording similarity).

Scaling responsibly: rate limits, caching, and storage

At scale, keyword research runs can become expensive and fragile without guardrails.

- Rate limiting: implement throttling and retries with backoff. Avoid spikes in request volume.
- Caching: store responses keyed by query + parameters (q, hl, gl, location, device, start) and reuse within a reasonable TTL.
- Storage: persist raw JSON for auditability, then derive slim datasets (CSV/Parquet) for analysis.
- Change tracking: SERPs change; record timestamps and parameter sets to interpret differences correctly.

Minimal caching example:

```python
import json
import hashlib
from pathlib import Path
from serpapi import GoogleSearch

CACHE_DIR = Path(".serp_cache")
CACHE_DIR.mkdir(exist_ok=True)

def cache_key(params: dict) -> str:
    canonical = json.dumps(params, sort_keys=True, ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def cached_search(params: dict) -> dict:
    key = cache_key(params)
    path = CACHE_DIR / f"{key}.json"
    if path.exists():
        return json.loads(path.read_text(encoding="utf-8"))
    results = GoogleSearch(params).get_dict()
    path.write_text(json.dumps(results, ensure_ascii=False), encoding="utf-8")
    return results
```

Common pitfalls

- Uncontrolled localisation: mixing results from different regions/languages can invalidate comparisons.
- Assuming fields always exist: SERP features appear conditionally; extraction should treat fields as optional.
- Over-collecting: requesting multiple pages for every query inflates cost without necessarily improving decisions.
- Confusing SERP signals with volume: SERP patterns indicate intent and competition, not demand size.
- No audit trail: without timestamps and parameters, SERP differences become hard to explain later.

FAQs

Is SerpApi an official Google API?
No. SerpApi is a third-party service that collects and parses SERP data and returns a structured representation.

Does SerpApi provide keyword search volume?
No. SerpApi provides SERP data rather than keyword volume metrics. Search volumes require a separate data source.

Which SerpApi parameters matter most for keyword research?
Localisation parameters typically have the biggest impact: hl (language), gl (country), and location (regional targeting). Device context can also affect SERP composition.

What is the best way to use "People also ask" data?
Question extraction can support topic coverage, content outlines, and FAQ planning. Questions can also reveal sub-intents that deserve dedicated sections or separate cluster pages.

How can SERP overlap help with keyword clustering?
Queries that produce highly similar SERPs often share the same underlying intent. Comparing domain or URL overlap provides a pragmatic clustering signal, particularly when combined with manual intent checks.

What should be considered before scaling SerpApi usage?
Request volume, cost, stability, caching strategy, and compliance considerations should all be addressed. Implementations typically benefit from rate limiting, retries with backoff, and response caching.
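To make the clustering idea concrete, the overlap heuristic can drive a simple greedy grouping pass. This is a sketch under assumptions: the input maps each query to its set of ranking domains (as a domain-extraction helper would produce), the 60% threshold is the starting heuristic mentioned earlier, and the overlap helper is repeated so the snippet is self-contained.

```python
def overlap(a: set, b: set) -> float:
    # Share of the smaller set that also appears in the other set.
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def cluster_queries(serps: dict, threshold: float = 0.6) -> list:
    """Greedy clustering: each query joins the first existing cluster whose
    seed SERP overlaps above the threshold, otherwise it starts a new cluster.

    serps: {query: set_of_ranking_domains}
    Returns a list of query groups.
    """
    clusters = []  # list of (seed_domains, [queries])
    for query, domains in serps.items():
        for seed_domains, queries in clusters:
            if overlap(seed_domains, domains) >= threshold:
                queries.append(query)
                break
        else:
            clusters.append((domains, [query]))
    return [queries for _, queries in clusters]
```

A greedy pass keyed on the first query's SERP is deliberately simple; pairwise comparison or graph-based grouping gives finer results at higher cost, and any output still benefits from a manual intent check before assigning primary keywords.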
References

- SerpApi (official site)
- SerpApi Search API documentation
- SerpApi pricing / plan limits

---
- Published: 2025-07-01
- Modified: 2025-07-01
- URL: https://stringerseo.co.uk/technical/automate-keyword-research/
- Categories: Technical

If you’re working on keyword research in Google Sheets, you know how tedious it can be to copy-and-paste keywords and search volume data manually, especially if you need to switch between Chrome tabs and third-party tools. This guide shows you how to link Google Sheets to the Keywords Everywhere API using Python, fetch search volumes, and automatically paste the output into your Google Sheet. There’s a no-code solution using Make that achieves the same or a similar output with various other keyword research tools, but we’ll save explaining that for a rainy day.

What You’ll Learn

- How to read keywords from a Sheet with gspread
- How to call the Keywords Everywhere API using form-encoded requests
- How to write the results back into your spreadsheet
- How to share your Sheet with a service account, manually and programmatically

What You Need

- Python 3.7 or higher
- A Keywords Everywhere account and valid API key
- A Google Cloud service account JSON file (with the Sheets API enabled)
- A Google Sheet containing your list of keywords

Grant Your Service Account Access

1. Open your credentials.json and copy the client_email value (e.g. my-sa@project.iam.gserviceaccount.com).
2. Open your Google Sheet in the browser and click Share (top right).
3. Paste that service account email, set its role to Editor, and click Send.

Automated Sharing (Advanced)

If you prefer code to clicks, enable the Drive API and add this at the top of your script:

```python
from googleapiclient.discovery import build
from google.oauth2 import service_account

# Load Drive API credentials (standard Drive scope)
SCOPES = ['https://www.googleapis.com/auth/drive']
creds_drive = service_account.Credentials.from_service_account_file(
    'credentials.json', scopes=SCOPES)
drive_service = build('drive', 'v3', credentials=creds_drive)

SPREADSHEET_ID = 'YOUR_SHEET_ID'
SA_EMAIL = 'my-sa@project.iam.gserviceaccount.com'

permission = {
    'type': 'user',
    'role': 'writer',
    'emailAddress': SA_EMAIL
}
drive_service.permissions().create(
    fileId=SPREADSHEET_ID,
    body=permission,
    fields='id',
    sendNotificationEmail=False
).execute()
```

The Python Script

Save this as keywords_to_gsheets.py and update the configuration at the top.

```python
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import requests, urllib.parse, time

# === CONFIGURATION ===
SPREADSHEET_NAME = "Your Google Sheet Name"
WORKSHEET_NAME = "Automation"
INPUT_RANGE = "A2:A100"
OUTPUT_START_CELL = "B2"
API_KEY = "YOUR_REAL_API_KEY"
DELAY_SECONDS = 1

# === Authenticate with Sheets ===
scope = ["https://spreadsheets.google.com/feeds",
         "https://www.googleapis.com/auth/drive"]
creds = ServiceAccountCredentials.from_json_keyfile_name(
    "credentials.json", scope)
client = gspread.authorize(creds)
sheet = client.open(SPREADSHEET_NAME).worksheet(WORKSHEET_NAME)

# === Read keywords (skip empty cells) ===
cells = sheet.range(INPUT_RANGE)
keywords = [c.value.strip() for c in cells if c.value.strip()]

# === Helper: cell label -> row/col ===
def cell_to_coords(label):
    col_str = ''.join(filter(str.isalpha, label)).upper()
    row = int(''.join(filter(str.isdigit, label)))
    col = 0
    for ch in col_str:
        col = col * 26 + (ord(ch) - ord('A') + 1)
    return row, col

start_row, start_col = cell_to_coords(OUTPUT_START_CELL)

# === Fetch volume from Keywords Everywhere ===
def get_search_volume(keyword):
    url = "https://api.keywordseverywhere.com/v1/get_keyword_data"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Accept": "application/json",
        "Content-Type": "application/x-www-form-urlencoded"
    }
    params = {
        "kw": keyword.encode("utf-8"),
        "country": "gb",
        "currency": "gbp",
        "dataSource": "cli"
    }
    data = urllib.parse.urlencode(params)
    resp = requests.post(url, headers=headers, data=data)
    resp.raise_for_status()
    result = resp.json()
    if result.get("data") and isinstance(result["data"], list):
        return result["data"][0].get("vol", 0)
    return 0

# === Main loop ===
for idx, kw in enumerate(keywords):
    vol = get_search_volume(kw)
    print(f"{kw}: {vol}")
    sheet.update_cell(start_row + idx, start_col, vol)
    time.sleep(DELAY_SECONDS)

print("All done: volumes are in Google Sheets.")
```

Sample Google Sheet Layout

| Keyword (Column A) | Search Volume (Column B) |
| --- | --- |
| seo keyword research tools | 2,400 |
| automate keyword research | 140 |
| keyword research | 3,600 |

Troubleshooting Tips

- 401 Unauthorized: make sure you use Authorization: Bearer YOUR_REAL_API_KEY.
- All volumes zero: confirm you’re reading the first item of the data list in the response, not doing a flat dict lookup.
- Encoding errors: always encode keywords as UTF-8 in form data.

Future Enhancements

- Write Month-on-Month (MoM), CPC and competition into adjacent columns
- Cache results locally to save API credits
- Use aiohttp for parallel requests
- Get related/suggested keywords and "people also search for" data
- Get domain and URL keywords for competitors

How this Script Helps You

This setup automates keyword volume lookups, feeds data into keyword research reports, and frees up time for higher-value analysis. Hope you enjoy it!

---
> Discover 10 practical steps to audit SEO agencies' work, set clear KPIs, demand meaningful reports and decide whether to find a new agency.
- Published: 2025-06-30
- Modified: 2026-02-08
- URL: https://stringerseo.co.uk/seo-ai/marketing/10-steps-to-evaluate-your-seo-agency/
- Categories: Marketing

Handing a website’s SEO to an agency means trusting a third party with growth targets, budgets, and brand reputation. If progress feels unclear, deliverables feel vague, or "SEO speak" is doing more work than results, a structured review helps separate signal from noise. This guide sets out practical checks to understand what’s happening "under the bonnet", spot red flags early, and decide whether to reset expectations, correct course, or end the engagement cleanly.

Table of contents: 1. Nail down exactly what was expected 2. Turn vague hopes into solid KPIs 3. Insist on meaningful reporting 4.
Own the data and access 5. Run a quick technical health-check 6. Review content quality and intent 7. Scrutinise link-building methods 8. Benchmark against competitors 9. Fix communication and ways of working 10. Decide and move forward Red flags to watch Handy links for verification FAQs

1. Nail down exactly what was expected

Before considering a change, get specific about what the agency agreed to deliver. Most SEO scopes include some combination of:

- Technical audits and fixes: crawlability, indexation, speed, internal linking, structured data.
- On-page optimisation: titles, headings, meta descriptions, and information architecture aligned to search intent.
- Content planning and production: topic research, briefs, editorial calendar, content updates.
- Off-page activity: digital PR, outreach, partnerships, citations (where relevant), and broader distribution.
- Reporting and analysis: performance, actions taken, what changed, and what comes next.

Pull up the original proposal or SOW (scope of work) and tick off deliverables. Any mismatch is the starting point for a clear, evidence-based conversation.

2. Turn vague hopes into solid KPIs

"More traffic" is a desire, not a KPI. SEO performance should be assessed against measurable goals that connect to outcomes. Useful questions to ask an agency:

- What is being targeted? Example: "Increase organic sessions by 20% in six months for priority pages".
- Which metrics matter most? Organic clicks, qualified leads, revenue contribution, sign-ups, calls, or other agreed outcomes.
- How will progress be reviewed? A regular cadence (monthly is typical) with a consistent reporting format.

Tools such as Google Search Console (performance and queries) and Google Analytics 4 (journeys and conversions) should sit at the centre of KPI tracking.

3. Insist on meaningful reporting

A deck full of "rankings up two places" is rarely actionable. Reports should explain:

- What changed: work completed, releases made, content shipped, fixes applied.
- What happened: performance trends in organic clicks/sessions, conversions, and visibility.
- Why it happened: key drivers (seasonality, site changes, competitor movement, technical issues, content performance).
- What happens next: priorities, experiments, and expected impact.

At minimum, include traffic trends, query/page performance, technical health, content performance, backlink monitoring (where relevant), and conversion outcomes. Agree a fixed delivery window (for example, the first week of the month) to keep everyone aligned.

4. Own the data and access

Data ownership prevents dependency and makes accountability simpler. Access should be held internally and shared with the agency as needed.

- Google Search Console: ensure internal ownership; provide the agency with Full user access where appropriate (or restricted access for read-only needs).
- Google Analytics 4: ensure internal ownership; provide the agency with appropriate access (Editor/Analyst/Viewer depending on responsibilities).
- CMS, hosting, and domain registrar: credentials should be controlled internally (shared vaults help).
- SEO tooling: if third-party tools are used (Ahrefs, Semrush, Moz, Screaming Frog, etc.), confirm access is available for internal stakeholders.

When access is clear, work can be verified and continuity is protected if suppliers change.

5. Run a quick technical health-check

A developer-level audit isn’t required to spot obvious issues. A lightweight health-check can reveal problems such as:

- Broken links and redirect chains
- Indexation and crawlability issues
- Slow-loading templates or heavy scripts
- Missing or incorrect meta tags
- Structured data errors or missed opportunities

If the agency cannot clearly explain what has been fixed, why it mattered, and how impact was measured, that’s a concern. Google’s SEO Starter Guide is a solid baseline for what "good fundamentals" should look like.

6. Review content quality and intent

Content quality is non-negotiable.
If output looks like generic "me-too" articles, the content process needs scrutiny.

- Keyword and topic selection: is research driven by intent, not just volume?
- Briefs and outlines: are outlines reviewed before drafting to avoid wasted effort?
- Approval workflow: is editorial control defined and documented?
- Updates and consolidation: is older content refreshed, merged, or retired to reduce duplication and cannibalisation?

Google’s guidance consistently points towards understanding how people search and creating genuinely helpful content that satisfies needs. The goal is not keyword repetition; it’s clarity, usefulness, and relevance.

7. Scrutinise link-building methods

Links can help when earned legitimately through credible coverage and partnerships. Questions worth asking:

- Which sites are targeted? Industry publications, reputable directories (where appropriate), local media, and relevant communities.
- How is outreach done? Digital PR angles, data-led stories, partnerships, and genuinely useful assets.
- How is quality assessed? Relevance, editorial standards, real audience reach (not just a third-party metric).
- How are sponsored relationships handled? Sponsorships and paid placements should be labelled appropriately, with correct link attributes where required.

Any hint of bulk buying, private blog networks, or "guaranteed DA links" is a serious risk. Google classifies buying or selling links to manipulate ranking as link spam.

8. Benchmark against competitors

A strong agency should understand the competitive landscape. A competitor review should include:

- Keyword gaps: queries competitors rank for that your site is missing.
- Content gaps: topics, formats, and depth competitors cover better.
- Authority signals: brand mentions, PR coverage, and relevant backlinks (quality over quantity).
- Technical benchmarks: speed, mobile performance, indexation health, and internal linking.

No benchmark usually means no clear strategy.

9.
Fix communication and ways of working

SEO projects stall when communication is unclear. Agree:

- Cadence: weekly updates, fortnightly calls, and a monthly performance review (as needed).
- Roles: who owns technical changes, content approvals, dev tickets, and stakeholder sign-off.
- Escalation: what happens when priorities slip or blockers appear.
- Documentation: shared notes, roadmaps, and change logs to avoid ambiguity.

If simple updates regularly take days, prioritisation is likely off.

10. Decide and move forward

Once evidence is gathered, there are three sensible paths:

- Reset expectations: align deliverables, KPIs, and timelines, then continue.
- Set a corrective window: define a short period (for example, 30–60 days) with specific targets and agreed actions.
- Plan an exit: confirm notice periods, data ownership, asset handover, and any fees.

If performance remains weak and transparency is lacking, the relationship can be ended cleanly and professionally, without burning bridges.

Red flags to watch

- Guaranteed rankings (especially "#1 in Google" promises).
- Vague reporting that avoids outcomes and focuses on vanity metrics only.
- "Secret sauce" tactics that cannot be explained or verified.
- Link schemes such as paid links for ranking, private blog networks, or bulk placements.
- Restricted access where internal ownership of Search Console / Analytics / domains is discouraged.
- Shadow assets (doorway domains, pages, or accounts controlled by the agency rather than the business).

Handy links for verification

- Google SEO Starter Guide
- Google guidance on hiring and working with an SEO
- Search Console: Performance report
- Google Search spam policies (including link spam)

FAQs

How long should SEO take to show results?
Timelines vary by market, site condition, and starting visibility. Technical fixes can improve crawl/indexation quickly, while content-led growth and authority building often take months.
A sensible approach sets expectations by workstream and measures incremental movement against agreed KPIs.

What should a good SEO report include?
A good report documents actions completed, changes shipped, and performance impact (clicks/sessions, conversions, and page/query movement). It should also explain what drove change and what will happen next, rather than listing rankings in isolation.

Which KPIs matter most when evaluating an agency?
The most useful KPIs connect to outcomes: qualified leads, sales, sign-ups, calls, or other business goals. Supporting metrics include organic clicks, conversion rate from organic, and the performance of priority pages and query groups in Search Console.

What access should an SEO agency have?
Access should be least-privilege and role-based, with internal ownership retained. Search Console access is typically set as Full user for delivery teams, while Analytics access depends on responsibilities. Domain registrar and hosting ownership should remain internal.

Are backlinks still important?
Links can help, but quality and legitimacy matter more than volume. Credible editorial coverage and genuine partnerships tend to be safer and more effective than manufactured links. Any approach designed primarily to manipulate rankings carries risk.

What questions should be asked before ending an engagement?
Confirm what was delivered versus what was contracted, what was measured, and what is owned internally (accounts, access, content, assets, documentation). If a corrective window is offered, define actions, timelines, and success criteria in writing.

---
> Find out how to do an SEO blog or website migration. Simple process with a detailed checklist of actions to help nurture organic traffic.
- Published: 2025-06-24
- Modified: 2025-11-28
- URL: https://stringerseo.co.uk/technical/seo-blog-migrations/
- Categories: Technical

A blog or website migration either runs smoothly or turns into a headache.
When you move to a new content management system, consolidate domains, or overhaul your site architecture, you take on a project that demands careful planning. Handle the change well and you improve user experience, preserve rankings, and set the stage for future growth. Rush it and you risk lost traffic, broken links, and a drop in search visibility. This guide supports digital marketing managers, developers, and website owners. It outlines a clear, actionable SEO migration framework based on industry best practices. Every step follows Google’s official guidance to help you manage a smooth transition and keep search performance on track.

Key points

- Plan the migration around a clear keyword strategy and content map before changing any URLs or templates.
- Give every important page on the old site a clear destination on the new site and use 301 or 308 redirects for permanent moves.
- Allow genuinely retired pages to return a 404 or 410 status instead of forcing irrelevant redirects.
- Clean up legacy redirect chains so every redirect jumps straight to the final URL.
- Update XML sitemaps, canonicals, and internal links so they reflect the new URL structure from launch day.
- Migrate images, metadata, and structured data carefully to protect visibility in Google Search and Google Images.
- Review and, where possible, update high-value backlinks so they point directly at the new URLs.
- Monitor Google Search Console and analytics closely for several weeks after launch and fix issues as soon as they appear.
Table of contents

- Revisit your keyword research
- Identify long-tail opportunities
- Migrate blog posts from HubSpot to WordPress
- Build a full URL mapping strategy
- Audit existing redirects
- Decide on your AMP strategy
- Fix redirect chains before launch
- Update your XML sitemap
- Migrate and optimise image URLs
- Restructure URLs with SEO in mind
- Find and fix 404 errors
- Update your most valuable backlinks
- Post launch: monitor and optimise
- Final checklist

Revisit your keyword research

Before you change anything, revisit your keyword strategy. A migration gives you the perfect moment to check whether current targeting still aligns with business goals. Run an audit that covers:

- Top-performing pages and terms: which keywords currently drive traffic and conversions?
- Keyword gaps: which topics and questions sit in the backlog with no content?
- Business alignment: does your targeting still match the right intent and audience segments?

Use tools such as Google Search Console, Ahrefs, and SEMrush to gather data. Map each primary keyword to a page on the new site so you preserve visibility when URLs change.

Identify long-tail opportunities

Long-tail keywords usually bring lower competition and higher intent. They often deliver quick wins that support traffic growth while the migration settles. Look for:

- Under-optimised pages or blog posts with steady impressions but weak click-through rates.
- Pages that lack clear metadata, headings, or internal links.
- FAQs and support content that you can structure more clearly or expand into guides.

When you target these opportunities early, you build momentum after migration and support your wider SEO roadmap.

Migrate blog posts to WordPress

If you plan a platform switch, such as moving from HubSpot to WordPress, treat the blog as a priority. Blog content often generates a large share of organic sessions and helps you connect with your target audience and drive conversions.
Use this process to handle the move:

1. Export all content, including metadata, authorship, categories, tags, and publish dates.
2. Rebuild formatting inside the new theme so readers still enjoy a clear, consistent layout.
3. Update internal links so they match the new permalink structure.
4. Refresh meta titles and descriptions for each post where performance looks weak.
5. Set up 301 or 308 redirects from each old HubSpot URL to the correct WordPress equivalent.
6. Check internal linking carefully and keep anchor text relevant.

Strong internal links protect user journeys and help search engines rediscover key posts quickly.

Build a full URL mapping strategy

Plan the redirect map before you touch anything in production. Give every important page on the old site a clear destination on the new site so you protect rankings and send users to the right content. Shape your redirect map so it:

- Covers all live URLs that still offer value, not just the top performers.
- Uses 301 or 308 (permanent) redirects when you move content for the long term.
- Reflects the new site structure and folder hierarchy.

When a page no longer has a useful equivalent, let that URL return a 404 or 410 status rather than forcing a redirect to something unrelated. This approach lines up with Google’s guidance on 404 pages and soft errors. If your blog content now lives on a new domain, point redirects straight to the new location rather than through intermediary URLs or catch-all pages.

Audit existing redirects

Many sites already carry several layers of redirects from previous restructures. If you ignore them, they slow down responses and dilute link equity. Use a crawler such as Screaming Frog to:

- Identify all existing redirects and the chains they create.
- Update internal links so they point directly at the final live URL.
- Remove or consolidate redundant redirects where they no longer serve a purpose.

This work gives the new site a clean redirect structure and helps search engines crawl more quickly.
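The redirect audit described above can also be partially scripted. A minimal sketch, assuming the hops for each URL have already been recorded (for example from a crawler export as a list of (url, status code) pairs ending at the final destination); the helper name and the cleanup rules are illustrative, not a standard tool.

```python
def audit_redirect_chain(hops):
    """Classify one recorded redirect chain.

    hops: list of (url, status_code) pairs, in order, ending at the
    final destination, e.g. [("http://a", 301), ("https://b", 200)].
    """
    redirects = [(u, s) for u, s in hops if 300 <= s < 400]
    final_url, final_status = hops[-1]
    return {
        "final_url": final_url,
        "final_status": final_status,
        "redirect_count": len(redirects),
        # e.g. a 302 hop followed by a 301 hop in the same chain
        "mixed_types": len({s for _, s in redirects}) > 1,
        # Flag chains longer than one hop, or temporary (302) redirects
        # used where a permanent move is intended.
        "needs_cleanup": len(redirects) > 1 or any(s == 302 for _, s in redirects),
    }
```

For example, a chain of http → https → new domain shows a redirect_count of 2 and is flagged for cleanup, while a single 301 hop straight to the final URL passes. In practice the hop data would come from a crawl export or from fetching each legacy URL with redirects enabled.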
Decide on your AMP strategy

If the current setup includes Accelerated Mobile Pages (AMP), decide how you want to handle them on the new platform before you build templates. You can:

- Migrate AMP content to WordPress and keep AMP templates that still follow AMP HTML requirements.
- Retire AMP and redirect AMP URLs to the canonical responsive version instead.

In both cases, work through these steps:

1. Audit all AMP URLs and note which ones still matter for organic performance.
2. Implement 301 or 308 redirects from each AMP URL to its final destination.
3. Use the Google AMP validator to confirm that any remaining AMP pages pass validation.
4. Keep canonical tags consistent and clear so Google understands which version of a page you want to show in search.

Fix redirect chains before launch

Redirect chains drain crawl budget and slow down users. Before launch, run tests and make sure every redirect jumps straight to the final URL. Look for:

- Redirects that hop through several URLs before they settle.
- Mixed redirect types, such as a 302 that points to a 301 target.
- Internal links that still reference old URLs instead of the final ones.

Clean chains wherever you spot them. When every redirect moves in a single hop, you improve crawl efficiency, response times, and user experience.

Update your XML sitemap

As soon as the new site goes live, adjust XML sitemaps so they match the new URL structure. Treat the sitemap as a live reflection of your canonical URLs, not as a historical record. Work through this checklist:

- Remove old, redirected, or 404 URLs from all XML sitemaps.
- Keep only URLs that return a 200 status and sit in an indexable state.
- Check canonical tags so each sitemap URL declares the correct canonical version.
- Submit the new sitemap in Google Search Console.

Use the Page Indexing report in Search Console to track indexation trends in the weeks after launch. This report highlights coverage issues, soft 404s, and other crawl problems that need attention.
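The sitemap checklist above can be partially automated. A small sketch using only the standard library: it extracts every loc entry from a sitemap file so each URL can then be checked for a 200 status in a separate fetch step. The namespace follows the sitemaps.org schema; the function name is illustrative.

```python
import xml.etree.ElementTree as ET

# Standard sitemaps.org namespace used by <urlset> documents.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    """Return all <loc> values from an XML sitemap (str or bytes)."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")]
```

Feeding the resulting list into a status-code check (and diffing it against the old sitemap) is a quick way to confirm that no redirected or retired URLs linger in the file after launch.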
Migrate and optimise image URLs

Images influence search visibility, user engagement, and perceived quality. Treat them as part of the SEO migration, not as an afterthought. During the migration:

- Map old image file paths to new file paths so you avoid broken images.
- Update all HTML, CSS, and JavaScript references so they point at the new URLs.
- Reapply descriptive alt text and image-related structured data (for example, ImageObject inside Article or Product schema).
- Create and submit an image sitemap if images play a major role in discovery.

Google Image Search often delivers incremental traffic and assists conversions, so this work supports more than accessibility and design.

Restructure URLs with SEO in mind

If you plan to refactor URL patterns, use the migration window to introduce cleaner, more descriptive slugs. Clear URLs help users and search engines understand content quickly. Follow these principles:

- Use lowercase, hyphenated words in every slug.
- Strip out unnecessary numbers, parameters, tracking codes, or session IDs.
- Reflect the main topic of the page inside the slug.

For example, prefer example.com/inspiration/bedroom-design-ideas/ over example.com/blog-post-38294-xyz/.

Once you finalise the new structure, update redirects, sitemaps, internal links, and structured data so they all reference the new patterns.

Find and fix 404 errors

After launch, expect to uncover a few broken URLs. Treat 404s as signals, not as automatic failures. Use a crawler to:

- Identify 404 responses that affect pages you still want to serve.
- Create new redirects for any missed pages that have obvious equivalents.
- Update internal links that still point to deleted or moved URLs.
- Allow 404s for content that you retired intentionally and that no longer deserves a replacement.

Google treats normal 404s as part of the web. Use Google Search Console’s Page Indexing and crawl reports to spot new 404 issues that appear after launch and decide which ones need action.
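The slug principles from the URL restructuring section above (lowercase, hyphenated, no ID noise) can be sketched as a small helper. This is an illustrative sketch, not a standard library: the rule that strips runs of four or more digits is an assumption about what counts as "ID-like" and would need tuning per site.

```python
import re

def clean_slug(text):
    """Build a lowercase, hyphen-separated slug from a title or old slug."""
    text = text.lower()
    # Collapse spaces, punctuation, and separators into single hyphens.
    text = re.sub(r"[^a-z0-9]+", "-", text)
    # Drop long ID-like number runs (assumption: 4+ digits standing alone).
    text = re.sub(r"-?\b\d{4,}\b-?", "-", text)
    return text.strip("-")
```

For example, clean_slug("Blog Post 38294 XYZ") yields "blog-post-xyz", moving towards the descriptive pattern recommended above; a generated slug should still be reviewed by hand before it lands in a redirect map.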
Update your most valuable backlinks

External links still act as one of the strongest signals in organic search. A migration gives you a reason to tidy them up. Start by:

- Exporting backlink data from tools such as Ahrefs or Google Search Console.
- Identifying links that still hit old URLs, especially those that now redirect.
- Prioritising high-authority referrers and pages that drive engaged traffic or conversions.

Reach out to those partners and request updates so their links point directly to the new URLs. Redirects will still catch legacy traffic, but direct links send a stronger signal and often speed up reindexing of key pages.

Post launch: monitor and optimise

A successful migration continues beyond launch day. Over the following weeks, track performance and fix any issues as soon as they appear. Monitor:

- Crawling and indexing behaviour in Google Search Console.
- Organic traffic, engagement, and conversions in GA4.
- Keyword performance using Search Console and rank-tracking tools.
- Errors or warnings related to response codes, structured data, and Core Web Vitals.

You usually see some volatility in the first two to four weeks after launch. Treat this timeframe as a rule of thumb rather than a promise. Focus on resolving issues quickly and continue to optimise the new setup as fresh data arrives.

Example: on a recent 10,000-URL blog migration, a clear redirect map, early 404 fixes, and weekly Search Console checks helped maintain around 95% of organic traffic within six weeks, with several long-tail pages improving on previous rankings.

Useful Google resources

- SEO Starter Guide
- Best practices for site moves
- Guidance on 404 pages
- Page Indexing report in Search Console
- AMP validator tool

Final checklist

- Keyword audit completed and mapped to the new site.
- Long-tail opportunities identified and prioritised.
- Blog content migrated, reformatted, and re-optimised.
- Redirects planned and implemented for all important URLs.
- Legacy redirect chains reviewed and cleaned.
- AMP strategy agreed and implemented.
- XML sitemaps updated, submitted, and monitored.
- Image URLs migrated with alt text and structured data reapplied.
- URL structures refactored with SEO-friendly slugs.
- High-priority 404s fixed and irrelevant legacy URLs allowed to retire.
- Key backlinks reviewed and updated where possible.
- Post-launch metrics tracked and optimisation work scheduled.

Website migrations always introduce risk, but they also create a chance to strengthen your SEO foundation. With deliberate preparation, a clear redirect strategy, and disciplined post-launch monitoring, you can move to a better platform and keep organic performance moving in the right direction. If you want support with the strategy or implementation, get in touch and we can walk through a migration plan that matches your stack, timelines, and targets.

---
> Find out what Semantic SEO is, and how you can optimize for it to achieve greater visibility in search results and AI Answer Engines.
- Published: 2025-05-29
- Modified: 2026-02-08
- URL: https://stringerseo.co.uk/content/semantic-seo/
- Categories: Content, SEO & AI

Semantic SEO focuses on context: the meaning behind a query, the intent driving it, and the related concepts that help a page fully satisfy what people are looking for. Rather than optimising for one keyword in isolation, it aims to cover the surrounding topics, questions, and entities that naturally sit alongside the main subject. That approach matters even more as Google continues to improve its language understanding (for example via models such as BERT and MUM) and as AI features (such as AI Overviews and AI Mode) influence how information is discovered.

Table of contents: What is semantic SEO? Why semantic SEO matters What to consider in a semantic SEO strategy 1. Understanding user intent 2. Entity-based content 3. Natural language and NLP 4. Topic clusters 5. Structured data How to develop a semantic SEO strategy 1. Keyword and topic research 2. Content mapping 3.
Regular updates and demand generation Why semantic SEO is important FAQs

What is semantic SEO?

Semantic SEO goes beyond traditional keyword-first optimisation. It’s about understanding the intent behind a query and publishing content that answers that intent clearly, accurately, and thoroughly. For example, a page targeting "cat food" can perform better when it also covers the natural follow-up questions and related concepts people commonly explore, such as:

- "What is the healthiest cat food in the UK?"
- "How much wet food should a kitten eat?"
- "Can cats live on dry food only?"

Those answers can live on the same page (as structured sections), or as supporting articles that link back to a main "pillar" page. This supporting material is often referred to as supplemental or cluster content.

Why semantic SEO matters

Search engines increasingly reward pages that demonstrate strong topical coverage and that meet user needs across different intents (informational, navigational, commercial, and transactional). Semantic SEO helps improve relevance because it encourages content that reflects how people actually explore a topic, rather than treating each keyword as a separate, disconnected target. It can also support performance in AI-influenced experiences by making content clearer, more complete, and easier to interpret. (Google’s guidance on AI features is worth reviewing as part of any content strategy planning.)

What to consider in a semantic SEO strategy

1. Understanding user intent

Different searches reflect different intents: informational, navigational, commercial, and transactional. Planning content around intent helps align page type, layout, and depth of information with what searchers expect. A practical way to start is to review the SERP landscape for a target query: the content types ranking well, the "People also ask" questions, and whether the results lean towards guides, publishers, communities (such as Reddit), or product/service pages.

2.
Entity-based content Entities are the things and concepts words refer to. Clarifying entities helps machines interpret meaning when a word has multiple interpretations. For example, “Amazon” could refer to the rainforest, the ecommerce company, or a mythological group — context determines which entity is meant. Tools such as the Google Cloud Natural Language API can help highlight which entities are detected in a piece of text, which can be useful when refining clarity and topical focus. Structured data (Schema.org) can also help provide explicit signals about the entities on a page (for example: Person, Organisation, Product, Place), when it is implemented correctly and matches visible page content. 3. Natural language and NLP Natural language processing (NLP) is about how machines interpret human language. From a content perspective, semantic optimisation benefits from phrasing that reflects how people search, including synonyms and closely related terms where they fit naturally. That typically means incorporating the wording and subtopics surfaced through SERP research (autocomplete suggestions, related searches, and question boxes) and then writing in clear, plain English rather than forcing keyword repetition. 4. Topic clusters Topic clusters organise content into a clear structure: a pillar page that covers a broad topic and supporting pages that go deeper into specific subtopics. This can improve navigation for users and help reinforce the relationships between pages. In the “cat owner” example, clusters might sit around feeding, health, behaviour, and product buying guides. Each cluster can contain content aimed at different intents, with internal links that guide readers through the subject. 5. Structured data Structured data is a standardised way to describe page content so search engines can interpret key details more reliably. When implemented correctly (and for supported features), it can help a page become eligible for rich results in Google.
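As a quick illustration of making entity signals explicit, the sketch below builds an Organization JSON-LD block programmatically so that the same entity details stay consistent wherever they appear. This is a minimal, hypothetical example — the function name, business name, and URLs are all illustrative:

```python
import json

def organization_jsonld(name, url, same_as):
    """Return a JSON-LD string describing an Organization entity.

    `same_as` is a list of profile URLs that help disambiguate
    which real-world entity the name refers to.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "sameAs": same_as,
    }
    return json.dumps(data, indent=2)

# Hypothetical business details for illustration only
print(organization_jsonld(
    "Example Agency",
    "https://www.example.com",
    ["https://www.linkedin.com/company/example-agency"],
))
```

The resulting JSON would typically be embedded in a `<script type="application/ld+json">` tag, and should always match the details visible on the page itself.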
Schema markup (see Schema.org) can be used to describe elements such as articles, FAQs, products, organisations, people, and more. The key is to use markup that matches the content on the page and aligns with Google’s structured data guidance. For reference, Google maintains a gallery of supported rich result types and requirements here: Structured data (Search Gallery). How to develop a semantic SEO strategy A simple, effective approach usually involves three phases: Deep keyword and topic research to surface related concepts and questions Intent-driven content mapping across pillars, clusters, and formats Ongoing optimisation to keep pace with changing behaviour and demand 1. Keyword and topic research Research should identify not only the primary query, but also the questions, synonyms, and related subtopics that shape the “topic landscape”. Useful methods include: Google Autocomplete: note suggested query variations while typing Related searches: review queries at the bottom of the SERP People Also Ask: collect recurring questions and intent patterns Topic modelling: identify semantically related terms that genuinely fit the subject Google Trends: assess seasonality and emerging interest Tools such as Keywords Everywhere, Google Search Console, and Google Trends can support this process. 2. Content mapping Content mapping connects intents and subtopics to the right page types. A typical map includes: Pillar pages that cover broad topics and provide strong orientation Cluster pages that answer specific questions or subtopics in depth Clear internal linking between related pages (pillar ↔ cluster and cluster ↔ cluster) Where relevant, headings can be framed around the exact questions uncovered during SERP research, as long as the answers are genuinely helpful and accurate. 3. Regular updates and demand generation Semantic relevance is not a “set and forget” task.
Ongoing maintenance helps keep content aligned with search behaviour and expectations: Refresh headings and sections where intent shifts or new questions emerge Add new supporting content where gaps appear in the cluster Review performance in Search Console and prioritise updates for pages that decline Promote strong content via email and social distribution to build awareness and engagement Why semantic SEO is important Semantic SEO is valuable because it aligns content planning with how search engines interpret meaning and how audiences explore a subject. Strong topical coverage, clear intent matching, and consistent internal structure can support more stable organic performance over time. It can also strengthen content credibility by encouraging clearer definitions, better sourcing, and more complete answers — which is increasingly important in an environment where AI features and language understanding continue to evolve. FAQs Is semantic SEO different from keyword SEO? Keyword targeting still matters, but semantic SEO broadens the focus to include intent, related questions, and connected concepts. In practice, it usually results in more complete pages and better supporting content around a topic. What are entities in SEO terms? Entities are specific “things” or concepts (people, organisations, places, products, topics) that language refers to. Clarifying which entity is being discussed can reduce ambiguity and improve topical clarity. Does structured data improve rankings? Structured data is primarily used to help search engines understand content and to enable eligibility for certain rich results where supported. It is best treated as a clarity and presentation mechanism rather than a guaranteed ranking lever. Do topic clusters replace a traditional blog strategy? Topic clusters are a way to organise and connect content so it’s easier to navigate and maintain. 
A blog can still exist, but cluster planning helps ensure each post supports a wider topic rather than becoming an isolated article. How does semantic SEO relate to AI Overviews and AI Mode? Semantic SEO aims to make content clearer and more complete for a given topic and intent. That can support discoverability across evolving search experiences, but outcomes are not guaranteed. Google’s own documentation on AI features is the best reference point for how these surfaces work and what site owners should consider. Author note: This article references Google’s published guidance and public announcements where possible (BERT, MUM, and AI features), and focuses on practical content workflows that support topical coverage and intent matching. --- > Find out how to use Google Search Console for keyword research to unlock opportunities for content optimisation or to create new pages. - Published: 2025-05-06 - Modified: 2025-05-07 - URL: https://stringerseo.co.uk/content/how-to-find-keyword-opportunities-on-google-search-console/ - Categories: Content, Marketing A Simple GSC Method to Uncover High-Demand Keyword Opportunities. When it comes to finding new keyword opportunities, most marketers jump straight into third-party tools. But if you've already got content on your site in the form of blog posts, guides, or product pages, it may be worth starting with your own data. Google Search Console (GSC) gives you everything you need to surface keywords with proven demand. These are terms that are already triggering your content in the SERPs... They are just not quite ranking well enough to earn a high volume of clicks yet. The goal here is to find those mid-ranking terms and give them the final push. Here’s a step-by-step approach I’ve used successfully across different types of websites to find high-impact opportunities you can optimise for. This is done by either optimising existing content or creating a new page. 
Select Three Months of Search Analytics Data Go to the Search Analytics report in GSC and pull data for the last three months or the past month. The time period depends on how many queries you want to find: the shorter the period, the fewer queries you will surface, but the data will be more recent and more relevant to current trends. The metrics to review are listed below: Query Page Country Impressions Clicks Average Position Segment Non-brand Data Exclude any branded keywords by applying a segment at query level. Define the Target Market Select the market by clicking on the country filter and choosing the country your target audience is in. Filter by Impressions (500+) We’re targeting queries with decent demand, not scraping the long tail. Apply a filter to only show keywords with 500+ impressions over the period. These are terms that users are actively searching and that Google is already associating with your content, even if you’re not seeing traffic yet. Filter by Position (11+) Next, restrict the dataset to average positions of 11 or higher: your page 2 and beyond rankings. These are the keywords your website is visible for, but not converting many clicks. At this point, you can safely ignore click data. The fact these keywords are getting a low volume of clicks is exactly the point. After following these steps, you should see an interface similar to the example image below: Validate Search Volumes & SERP Competitiveness If you have the Keywords Everywhere extension installed in your browser, it will report the search volume, competition, trend, and CPC of all the queries reported in Google Search Console, as shown in the example image below. The competition and trend data can help you prioritise. Not every keyword will be worth chasing. Focus on those that are trending and where you can realistically compete.
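The impression and position filters described above can also be applied in a small script once query data has been exported from GSC. This is a minimal, hypothetical sketch — the field names and the brand-term list are illustrative, not part of GSC itself:

```python
# Hypothetical brand tokens to exclude when segmenting non-brand data
BRAND_TERMS = {"stringer"}

def keyword_opportunities(rows, min_impressions=500, min_position=11.0):
    """Keep non-brand queries with strong demand but page-2+ rankings."""
    opportunities = []
    for row in rows:
        query = row["query"].lower()
        if any(term in query for term in BRAND_TERMS):
            continue  # segment out branded queries
        if row["impressions"] >= min_impressions and row["position"] >= min_position:
            opportunities.append(row)
    # Highest-demand queries first
    return sorted(opportunities, key=lambda r: r["impressions"], reverse=True)

# Illustrative export rows: only the first passes all three filters
rows = [
    {"query": "seo audit checklist", "impressions": 1200, "position": 14.2},
    {"query": "stringer seo reviews", "impressions": 900, "position": 3.1},
    {"query": "what is a canonical tag", "impressions": 150, "position": 18.0},
]
print(keyword_opportunities(rows))
```

The same logic scales to thousands of rows, which is useful once you start pulling larger datasets out of GSC.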
This data can also be used to influence which keywords need their intent analysed, to identify where competitors are ranking with weak or misaligned content. Use the ‘Pages’ Tab for Mapping Intent and Checking for Cannibalisation Before rushing into optimisation or creating a new page, make sure you’re not competing with yourself. Run a check to see if multiple URLs are ranking for the same or very similar queries by selecting a query and then viewing the 'Pages' tab. If more than one page is reported, you might be better off consolidating pages or refining their intent to avoid cannibalisation. Here are some good questions to ask yourself in this step: Does the content genuinely match the user’s intent? Is it the right page to be ranking? Could another, more suitable page rank better with minor improvements? If the current page is a mismatch, consider redirecting or building a new asset entirely. Find Related Queries and Questions Once you’ve shortlisted target keywords, you can go deeper by using tools to identify related subtopics or questions that can be incorporated into your new content. The tools listed below can help you do just that: AlsoAsked Keywords Everywhere Semrush Keyword Magic Tool Identifying related queries and questions will help you increase topical authority for a given subject. Optimise or Create New Content Based on your findings, take action: Optimise existing pages by refining headers, updating intro copy, improving internal linking, and ensuring the keyword and its related terms are properly covered. Create a new page if the intent requires a standalone guide, product, or landing page. Avoid forcing keywords into pages that don’t suit them; match structure to intent. Monitor Post-Optimisation Performance Once changes are live, track performance in GSC over the next few weeks.
Focus on: Movement in average position CTR improvements Any increase in impressions and clicks Set a reminder to re-run the process quarterly, or even monthly; new queries will emerge, and existing ones may drop or climb based on SERP volatility. Consider Pulling Data from the Google Search Console API It's important to note that connecting to Google Search Console's API and storing the data in a warehouse would enlarge the query and page-level datasets. This is suggested for larger websites. Stay tuned for a guide that will show you how to do that, but in the meantime, if you are exploring this option, have a read of https://developers.google.com/webmaster-tools. How This Process Helps This approach is about using your own Search Console data to build on what’s already working (or nearly working). GSC gives you clear signals about what Google already associates with your site, helping you spot the keywords that just need a bit more effort to move up the rankings. It is quick, scalable, and highly actionable, and best of all, it helps you optimise around real user demand. --- > Learn how SEO is a user-focused strategy, what techniques to avoid, and what questions help you understand if your content is helpful. - Published: 2025-04-17 - Modified: 2026-02-08 - URL: https://stringerseo.co.uk/seo-ai/how-is-seo-a-long-term-user-focused-strategy/ - Categories: Marketing, SEO & AI SEO works best as an ongoing, user-led discipline rather than a one-off task. Rankings change as search demand shifts, competitors improve, and search engines update how results are evaluated. A sustainable approach focuses on relevance, usefulness, and technical accessibility — and then measures outcomes over time. This guide explains why long-term SEO tends to outperform short bursts of activity, and what a practical, test-and-learn approach looks like in real workflows.
Table of contents Why SEO is a long-term strategy User intent and search behaviour A test-and-learn approach Technical foundations and accessibility Content quality, trust and E-E-A-T signals Avoiding shortcuts that create risk Future-proofing: building resilience through value FAQs Why SEO is a long-term strategy SEO performance usually compounds. Improvements to site structure, content relevance, internal linking, and technical health can take time to be discovered, crawled, indexed, and reflected in results. Even when changes are implemented quickly, outcomes are typically influenced by broader factors such as seasonality, competitor movement, and broad search updates. Google describes core updates as broad changes designed to improve search results overall, rather than fixes targeted at individual pages. That naturally favours a consistent, quality-led approach over tactics aimed at short-term spikes. Practical takeaway: long-term SEO is less about chasing individual ranking fluctuations and more about building a site that consistently meets searchers’ needs. Related official guidance: Google: Core updates User intent and search behaviour Search is driven by intent: people want answers, comparisons, reassurance, directions, or a product/service decision. A user-focused strategy starts by identifying the intent behind queries and aligning pages to satisfy it clearly and quickly. Informational intent: learning, researching, understanding. Commercial intent: evaluating options, comparing brands, checking reviews. Transactional intent: buying, booking, signing up. Navigational intent: finding a specific brand, tool, or website. When content matches intent well, it is more likely to earn engagement signals such as longer dwell time, deeper navigation, and more returning visits — outcomes that often align with stronger organic performance over time. 
Related official guidance: Google: Creating helpful, reliable, people-first content A test-and-learn approach Long-term SEO is typically most effective when treated as a continuous improvement cycle: Diagnose: find the highest-impact opportunities (technical, content, internal linking, SERP intent mismatch). Hypothesise: define what should change and why it should improve outcomes. Implement: ship changes in controlled batches (templates, clusters, or page types). Measure: review impressions, clicks, rankings, conversions, and engagement against a baseline. Iterate: keep what works, refine what is inconclusive, and stop what fails. Worked example (anonymised) Scenario: a set of category pages attracted impressions but low clicks. Observation: titles and headings were generic, and the SERP intent skewed towards comparisons and “best of” lists. Change: the pages were restructured to answer key comparison questions, improve internal links to subcategories, and clarify propositions in titles and introductions. Measurement: click-through rate improved first, then non-brand clicks increased over the following weeks as more pages were crawled and re-evaluated. This type of structured iteration is difficult to achieve with “one-and-done” SEO. Technical foundations and accessibility User-focused SEO requires pages to be accessible to both people and search engines. Technical foundations do not replace content quality, but they enable content to be discovered, understood, and rendered reliably across devices. Crawlability and indexation: ensure important pages are discoverable and not blocked unintentionally. Site architecture: clear category structure and internal linking to support discovery. Performance and UX: improve load performance and reduce friction, particularly on mobile. Structured data where appropriate: help search engines interpret entities and page types. 
Practical takeaway: technical health is an enabler of long-term growth, not a substitute for relevance and usefulness. Content quality, trust and E-E-A-T signals Search engines aim to surface helpful, reliable information. A user-focused strategy builds trust by demonstrating experience, subject knowledge, and transparency. Ways to improve perceived quality and credibility include: Clear authorship: name the author, include a bio, and link to credentials where relevant. Evidence and specificity: use examples, data, and measurable outcomes (even anonymised) rather than broad claims. Maintenance: keep content accurate and updated; remove or consolidate thin or redundant pages. Original insight: add perspective, frameworks, or methodologies that go beyond generic summaries. Editorial standard: avoid absolute statements that cannot be evidenced. Where outcomes vary by site or niche, use language such as “can”, “often”, or “tends to”. Avoiding shortcuts that create risk Shortcuts can produce temporary gains but create volatility and long-term risk. Google’s spam policies describe practices that can lead to ranking demotions or removal from results. Examples include (but are not limited to): cloaking; doorway pages; automatically generated spam content at scale without value; and link spam (including buying or exchanging links to manipulate rankings). Related official guidance: Google: Search spam policies Future-proofing: building resilience through value Search environments change — from interface updates to ranking system improvements to new ways people discover information. A resilient strategy is rooted in user value: clear positioning, content that satisfies intent, and a technically sound website.
The most durable work tends to be: intent-led (built around what people genuinely want to achieve) maintained (reviewed and improved over time, not left to decay) credible (transparent sources, expertise, and editorial quality) accessible (easy to crawl, fast enough, and usable on mobile) That combination is difficult to replicate quickly, which is precisely why SEO works best as a long-term, user-focused strategy. FAQs How long does SEO take to work? It depends on the website’s baseline (technical health, authority, content gaps), the competitiveness of the niche, and how much is being changed. Many sites see early signals first (indexation, impressions, CTR changes), with more meaningful outcomes following as improvements compound. What should be measured to judge progress? Useful measures include organic impressions and clicks, click-through rate, rankings for priority query sets, conversions (where trackable), and engagement indicators such as returning users and depth of visit. Segmenting brand vs non-brand performance is often essential. Are Google core updates always bad for a website? No. Core updates are broad changes designed to improve results overall. Some sites may increase, some may decrease, and many may see little change. A quality-led approach tends to reduce reliance on fragile tactics. What is “people-first” content in practice? It is content created primarily to help searchers, demonstrating clear purpose, substance, and reliability. It avoids pages made mainly for search engines, and it prioritises clarity, accuracy, and usefulness. --- > Learn how to apply structured data to help grow your local visibility for different search terms that people use in Google! - Published: 2025-04-10 - Modified: 2025-04-15 - URL: https://stringerseo.co.uk/content/how-to-use-structured-data-for-local-seo/ - Categories: Content Estate agents that advertise property to rent or for sale can appear for search terms such as “property to rent in Hove” in Google.
Other small businesses can also do the same for relevant search terms such as "electricians in Brighton". One way to try to improve local visibility in Google Search is by implementing structured data on your website, even though it is not a direct ranking factor. This article provides an overview of structured data and how it can be used to help local search visibility. What Is Structured Data? Structured data is a type of code that helps search engines better understand the content on a webpage. When implemented correctly, it helps Google to display rich results that can include additional information such as business hours, location, reviews, and services. For local businesses, using structured data means your website has a better chance of appearing in the local pack, Google Maps results, and relevant organic listings. Structured Data for Local SEO Google uses structured data to deliver more relevant results for location-based searches. If someone types “property to rent in Hove”, Google displays a list of properties and agents or businesses operating in that area. Marking up your business location, contact details, and service areas with structured data on your website helps signal to Google exactly where you're based and what you offer. Using the LocalBusiness Schema Google recommends using the LocalBusiness schema, which is part of the broader Schema.org vocabulary. This schema allows you to define key details about your business such as: Business name Address (with full postal address) Opening hours Phone number Website URL Geo-coordinates Area served Logo Social profiles This information is added to websites using JSON-LD, the format suggested by Google. Here's an example for a letting agency based in Hove:

```json
{
  "@context": "https://schema.org",
  "@type": "RealEstateAgent",
  "name": "Hove Property Rentals",
  "image": "https://example.com/logo.png",
  "@id": "https://example.com",
  "url": "https://example.com",
  "telephone": "+44 1273 123456",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "10 Western Road",
    "addressLocality": "Hove",
    "addressRegion": "East Sussex",
    "postalCode": "BN3 1AE",
    "addressCountry": "GB"
  },
  "geo": {
    "@type": "GeoCoordinates",
    "latitude": 50.8278,
    "longitude": -0.1707
  },
  "openingHoursSpecification": [
    {
      "@type": "OpeningHoursSpecification",
      "opens": "09:00",
      "closes": "17:30"
    }
  ],
  "sameAs": []
}
```

How to Use LocalBusiness Schema Place the JSON-LD in the <head> of the HTML or right before the closing </body> tag. Keep the information consistent with what's shown on Google Business Profile. Use the Rich Results Test to validate your structured data: https://search.google.com/test/rich-results Submit the page in Search Console after implementation. Using structured data on your website can help you appear for relevant local keywords that people search in Google. It also helps generate rich results in Google, which can help grow click-through rates and thus traffic to your website from search terms used by targeted audiences. --- > Learn how to extract canonical URLs in Google Sheets using Python. Useful for when IMPORTXML is being clunky for large URL lists! - Published: 2025-03-23 - Modified: 2025-03-23 - URL: https://stringerseo.co.uk/technical/how-to-extract-canonical-urls-in-google-sheets-using-python/ - Categories: Technical If you're managing a large list of URLs and want to quickly check their canonical URLs, but you don't have access to a website crawler such as Screaming Frog and the IMPORTXML function is taking ages or failing, then you may want to consider doing this with Python... This guide will walk through how to connect Python to a Google Sheet, scan through a column of URLs, extract each page’s canonical URL, and write the results back to the sheet in one click of a button.
What it Does Connect to your Google Sheet using the Google Sheets API Scan an entire column for valid URLs Fetch and parse each URL’s HTML to extract the <link rel="canonical"> tag Write results into a new column next to your URLs Automatically handle errors, headers, and batching to avoid API limits What You Need A Google Cloud project with Sheets API enabled A service account key (JSON file) Python 3 installed with the following libraries: pip install gspread google-auth requests beautifulsoup4 Setting Up the Google Sheets API Go to the Google Cloud Console. Create a new project or select an existing one. Navigate to APIs & Services > Library, and enable Google Sheets API. Go to Credentials > Create credentials > Service account. Generate a key as a JSON file. Share your Google Sheet with the service account’s email (e.g. my-bot@project-id.iam.gserviceaccount.com) with Editor access. Python Script to Fetch Canonical URLs Here’s the full working script:

```python
import gspread
from google.oauth2.service_account import Credentials
import requests
from bs4 import BeautifulSoup
from datetime import datetime

# Google Sheets API Setup
SERVICE_ACCOUNT_FILE = "path/to/your-service-account.json"
```

--- > Learn how to optimise and create a product data feed following Google's best practice guidelines from Merchant Center. - Published: 2025-03-10 - Modified: 2025-03-10 - URL: https://stringerseo.co.uk/content/how-to-optimise-a-product-data-feed-for-google-merchant-center/ - Categories: Content, Technical Google Merchant Center is a tool for eCommerce websites, and it can help you increase your visibility on Google Shopping, Search, and Ads. Your product data feed must be correctly formatted and optimised according to Google’s requirements. This article outlines the essential fields, common mistakes, and key enhancements needed to ensure a product feed meets Google's standards. Why it Matters A well-structured product feed can help provide: Better visibility in Google Shopping and Search results.
Potential conversion through accurate product listings. Compliance with Google’s policies, avoiding feed disapprovals. Improved ad performance by providing structured data for campaign optimisation. Product Feed Fields Required by Google Google Merchant Center mandates the inclusion of several key fields to ensure product eligibility.

| Field Name | Requirement | Purpose |
|---|---|---|
| id | Required | Unique identifier for each product. |
| title | Required | Concise yet descriptive product name with key attributes (brand, color, size). |
| description | Required | Detailed explanation of the product’s features, specifications, and benefits. |
| link | Required | Direct URL to the product page on your website. |
| image_link | Required | URL of the main product image. |
| availability | Required | Indicates stock status (in stock, out of stock, preorder). |
| price | Required | Product price with currency (e.g. GBP, USD, EUR). |
| brand | Required | Manufacturer or brand name. |
| gtin | Required if available | Global Trade Item Number (UPC, EAN, ISBN). |
| mpn | Required if no GTIN | Manufacturer Part Number. |
| condition | Required | Specifies whether the product is new, refurbished, or used. |

Recommended & Optional Fields To enhance product visibility and improve targeting, consider including the following additional fields:

| Field Name | Recommendation | Purpose |
|---|---|---|
| sale_price | Highly Recommended | Specifies a discounted price when applicable. |
| google_product_category | Recommended | Classifies the product according to Google’s taxonomy. |
| product_type | Recommended | Custom category based on internal store taxonomy. |
| additional_image_link | Recommended | Adds extra product images for variations and angles. |
| size | Recommended for apparel | Defines product dimensions. |
| color | Recommended for apparel | Specifies the primary product color. |
| material | Recommended | Indicates the material composition. |
| pattern | Optional | Describes the pattern or texture of the product. |
| shipping | Recommended | Defines shipping costs and delivery estimates. |
| shipping_weight | Recommended | Helps determine shipping rates for bulky items. |
| product_highlight | Optional | Bullet points summarizing key features. |
| product_detail | Optional | Provides detailed specifications. |

Common Mistakes 1. Using Incorrect URLs Ensure that the link field contains the correct public-facing URL. image_link should be a direct image URL without tracking parameters. 2. Missing GTIN or MPN Google prefers GTINs for accurate product matching. If unavailable, use mpn instead. 3. Inconsistent Pricing Data Ensure that price and sale_price values are correct and match the website. Google performs price checks and may disapprove mismatched listings. 4. Poorly Structured Product Titles Avoid generic titles like "Sofa" or "Table". Instead, format them as "Brand + Product Type + Key Attribute" (e.g. "Velvet Chesterfield Sofa - Blue - Handmade"). 5. Not Using Google’s Product Categories Assigning the correct google_product_category improves targeting and performance in Shopping Ads. How to Format a Product Feed Your product data feed should be structured in either CSV or XML format. Below is an example CSV template:

```
title,brand,price,sale_price,availability,google_product_category,link,image_link
"Velvet Chesterfield Sofa - Blue","Luxury Sofas Ltd",1499.99,1299.99,"in stock","Furniture > Living Room > Sofas","https://www.example.com/sofa","https://www.example.com/images/sofa.jpg"
```

Final Optimisation Tips Regularly update your feed to reflect stock and price changes. Use structured data markup (Schema.org). Ensure high-quality images with a clean, white background. Enable Google Shopping promotions to attract more clicks. Optimising your product data feed can help maximise visibility, improve ad performance, and drive potential sales. This is done by making sure that your product feed complies with Google’s requirements and by implementing best practices. --- > Find out how to inject an author's biography underneath the H1 of author archive page templates in WordPress!
- Published: 2025-03-07 - Modified: 2025-03-07 - URL: https://stringerseo.co.uk/technical/how-to-add-an-author-bio-on-author-archives-in-wordpress/ - Categories: Technical Many WordPress themes do not automatically display an author biography on author archive pages. Here is an example of one that this website displays: https://stringerseo.co.uk/author/jonathan. For UX and E-E-A-T purposes, you might want to consider displaying the author biography. However, many themes just offer one template for all types of archive pages, such as categories or tags, and it usually does not include an author biography snippet... This guide shows you how to add the author biography to an author archive page by creating the logic in the functions.php file, which saves time compared with creating a specific template for author archive pages. Step 1: Adding the Author Bio We can use output buffering to modify the page content dynamically, ensuring the author's bio appears immediately after the first <h1>. Add This to Your functions.php File:

```php
function inject_author_bio_after_h1($content) {
    if (is_author()) {
        $author_id = get_queried_object_id();
        $author_bio = get_the_author_meta('description', $author_id);
        if ($author_bio) {
            // Create the author bio HTML (the wrapper markup is illustrative)
            $bio_html = '<p class="author-bio">' . esc_html($author_bio) . '</p>';
            // Inject the bio immediately after the first H1
            $content = preg_replace('/(<h1[^>]*>.*?<\/h1>)/i', '$1' . $bio_html, $content, 1);
        }
    }
    return $content;
}

function start_buffer() {
    if (is_author()) {
        ob_start('inject_author_bio_after_h1');
    }
}

function end_buffer() {
    if (is_author()) {
        ob_end_flush();
    }
}

add_action('template_redirect', 'start_buffer');
add_action('shutdown', 'end_buffer');
```

How This Works: The start_buffer function starts output buffering before the page is rendered. The inject_author_bio_after_h1 function finds the first <h1> and appends the author bio immediately after it. The end_buffer function ensures that the modified content is output correctly.
Step 2: Remove "Author: " From the Archive Title

By default, WordPress often prepends "Author: " to the author archive title. You can modify this using the get_the_archive_title filter. Add this to your functions.php file:

```php
function remove_author_prefix_from_archive_title($title) {
    if (is_author()) {
        // Remove 'Author: ' from the title
        $title = preg_replace('/^Author:\s*/', '', $title);
    }
    return $title;
}
add_filter('get_the_archive_title', 'remove_author_prefix_from_archive_title');
```

Alternative: Remove "Author: " from Hardcoded H1 Elements

If your theme does not use get_the_archive_title but hardcodes the author title inside the page template, you can modify the output using output buffering:

```php
function modify_author_h1($content) {
    if (is_author()) {
        // Find the first H1 and remove 'Author: '
        $content = preg_replace('/(<h1[^>]*>)Author:\s*/i', '$1', $content, 1);
    }
    return $content;
}

function start_author_buffer() {
    if (is_author()) {
        ob_start('modify_author_h1');
    }
}

function end_author_buffer() {
    if (is_author()) {
        ob_end_flush();
    }
}

add_action('template_redirect', 'start_author_buffer');
add_action('shutdown', 'end_author_buffer');
```

Together, these snippets inject the author bio directly beneath the first H1 on the author archive page and remove "Author: " from the archive title dynamically. Because they work without modifying theme template files directly, they are update-safe. These tweaks improve both UX and E-E-A-T, ensuring a clean author archive page layout.

---

> If you have built a custom WordPress XML sitemap generator, this article shows you how to link to images from the post XML sitemap.

- Published: 2025-03-06 - Modified: 2025-03-06 - URL: https://stringerseo.co.uk/technical/how-to-add-image-links-in-a-wordpress-xml-sitemap/ - Categories: Technical

XML sitemaps play an important role in SEO. They help search engines discover and index different pages from your website.
This is especially useful for blog posts, and including links to images from XML sitemaps can help improve their visibility in Google Image Search. This article shows you how to modify a custom WordPress XML sitemap plugin to include links to images from blog posts.

Why Include Images in Your Sitemap?

Adding links to your images from your XML sitemap provides several benefits, two of which are:

- Search engines can find and understand your images more effectively.
- Image search results can drive additional traffic to your website.

How to Modify the Custom XML Sitemap Plugin

In our example, we have a WordPress plugin that generates a custom XML sitemap for posts. We want to extend it so that it includes images found within post content.

Step 1: Extract Image URLs from Post Content

To retrieve images from post content, we use a function that scans the post HTML and extracts the src attributes of img tags.

```php
function extract_images_from_content($post_id) {
    $content = get_post_field('post_content', $post_id);
    preg_match_all('/<img[^>]+src=["\']([^"\']+)["\']/i', $content, $matches);
    return array_unique($matches[1]);
}
```

This function scans the post content and returns an array of unique image URLs.

Step 2: Modify the Sitemap Generation Function

We then modify the function that generates custom-post-sitemap.xml to include image:image tags.

```php
function generate_custom_post_sitemap() {
    global $wpdb;
    header('Content-Type: application/xml; charset=utf-8');
    echo '<?xml version="1.0" encoding="UTF-8"?>';
    echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">';
    $posts = $wpdb->get_results("SELECT ID, post_modified_gmt FROM {$wpdb->posts} WHERE post_status = 'publish' AND post_type = 'post'");
    foreach ($posts as $post) {
        $url     = get_permalink($post->ID);
        $lastmod = gmdate('Y-m-d\TH:i:s+00:00', strtotime($post->post_modified_gmt));
        echo "<url><loc>$url</loc><lastmod>$lastmod</lastmod>";
        $images = extract_images_from_content($post->ID);
        foreach ($images as $image) {
            echo "<image:image><image:loc>$image</image:loc></image:image>";
        }
        echo "</url>";
    }
    echo '</urlset>';
    exit;
}
```

This function loops through all published posts, extracts their images, and appends them to the sitemap using the image:image tag.
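The extraction step can be prototyped outside WordPress before wiring it into the plugin. This Python sketch mirrors the same idea, pulling unique src values out of img tags with a regular expression; the sample HTML is made up for illustration:

```python
import re

# Illustrative post HTML with a duplicate image
post_html = (
    '<p>Intro</p>'
    '<img src="https://example.com/a.jpg" alt="A">'
    '<img src="https://example.com/b.jpg" alt="B">'
    '<img src="https://example.com/a.jpg" alt="A again">'
)

# Capture the src attribute of each <img> tag
srcs = re.findall(r'<img[^>]+src=["\']([^"\']+)["\']', post_html, flags=re.I)

# De-duplicate while preserving document order
unique_srcs = list(dict.fromkeys(srcs))
print(unique_srcs)
```

The de-duplication matters because the same image can appear several times in one post but should only be listed once per url entry in the sitemap.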
Step 3: Hook into WordPress

Finally, we modify our WordPress plugin's init action to ensure the sitemap includes images when requested.

```php
add_action('init', function () {
    if (isset($_SERVER['REQUEST_URI']) && strpos($_SERVER['REQUEST_URI'], '/custom-post-sitemap.xml') === 0) {
        generate_custom_post_sitemap();
        exit;
    }
});
```

This ensures that when /custom-post-sitemap.xml is accessed, the modified sitemap with images is generated. By adding image URLs to your custom XML sitemap, you can help get your images indexed in Google Image Search. Implementing this in WordPress with a custom plugin ensures that every image used in blog posts can be discovered and indexed by search engines.

---

> SEO tips for small businesses. Find out how to structure and optimise a blog post to generate organic website traffic.

- Published: 2025-03-04 - Modified: 2026-02-08 - URL: https://stringerseo.co.uk/content/how-to-structure-a-blog-post/ - Categories: Content

Blogging can help a small business generate demand and improve brand awareness. A well-structured blog post can also support organic visibility and engagement when it answers a clear search intent, is easy to scan, and is technically accessible.

Table of Contents

1. Titles, snippets, and what appears in Google
   - How to write a title tag
   - How to write a meta description
   - Why Google may rewrite titles and snippets
2. Structure that supports readability and intent
   - Introduction
   - Headings and hierarchy (H1, H2, H3)
   - Body copy and formatting
   - Mini template: a simple post structure
3. Internal and external links (best practice)
   - Internal links
   - External links
4. Images and accessibility
   - Alt text
   - A few practical image SEO tips
5. A strong conclusion (and CTA)
6. Other SEO considerations
   - Calls to action
   - Social sharing
   - Author bio and trust signals (E-E-A-T)
   - Maintenance: keep posts accurate
FAQs

1. Titles, snippets, and what appears in Google

Titles and descriptions influence how a page appears in search results and browser tabs.
They also help readers (and search engines) understand what the page is about. Official references: Google: Title links · Google: Snippets (meta descriptions)

How to write a title tag

- Make it descriptive and specific: match the page topic and the likely intent.
- Write for humans first: clear language tends to outperform keyword-stuffed titles.
- Keep it concise: long titles can be truncated depending on device width.
- Include a differentiator: a year, a benefit, a "how-to", a checklist, or a strong angle.

Useful title formulas (examples):

- How to structure a blog post: a practical SEO-friendly template
- Blog post structure checklist: headings, links, and formatting
- How to write a blog post that ranks: structure + examples

How to write a meta description

- Summarise the value: explain what the reader will learn or achieve.
- Stay natural: include key terms only where they fit the sentence.
- Support click intent: reinforce credibility (e.g., "step-by-step", "examples", "checklist").
- Expect truncation: Google may shorten or replace the snippet.

Meta description examples:

- Learn a simple blog post structure that improves readability and supports SEO, with heading hierarchy, internal linking tips, and a copy template.
- A practical guide to structuring blog posts for search intent: titles, headings, formatting, links, images, and FAQs.

Why Google may rewrite titles and snippets

Google can display a different title link or snippet than the one set in HTML. This is common when on-page signals (H1, headings, or repeated boilerplate) suggest a clearer alternative, or when a different excerpt better matches the query. Titles and descriptions still matter, but alignment across the page matters more.

- Keep the H1 aligned with the title tag (same topic, similar wording).
- Avoid generic titles such as "Home", "Blog", or repeated templates.
- Ensure the opening paragraph matches the promise made in the title.

2.
Structure that supports readability and intent

A strong structure helps readers scan quickly, and it helps search engines understand the topic and subtopics. The goal is to satisfy search intent with minimal friction.

Introduction

- Open with a clear intent match: a one-sentence summary of what the post delivers.
- Confirm the problem: the scenario the post addresses (e.g., "structuring posts for organic traffic").
- Preview the sections: set expectations and reduce bounce.

Intro example: This guide explains a simple, repeatable blog post structure that supports SEO and readability, including titles, heading hierarchy, internal linking, image accessibility, and FAQs.

Headings and hierarchy (H1, H2, H3)

- Use one H1: the page title.
- Use H2s for major sections: the main "chapters" of the post.
- Use H3s for detail: steps, examples, and subtopics that support the H2.
- Write headings for scanning: clear labels beat clever wording.

Body copy and formatting

- Keep paragraphs short: especially on mobile.
- Use lists: bullet points and numbered steps reduce cognitive load.
- Add examples: templates, sample intros, or "good vs poor" snippets increase usefulness.
- Maintain topical flow: each section should naturally lead into the next.

Mini template: a simple post structure

- H1: topic + angle (e.g., "How to structure a blog post (with template)")
- Intro: promise + what's included
- H2: definitions / context (optional)
- H2: step-by-step method
- H2: examples / templates
- H2: common mistakes
- H2: FAQs
- Conclusion: recap + CTA

3. Internal and external links (best practice)

Links support discovery, help readers navigate, and provide context. Link usage should always be reader-led, not decorative. Official reference: Google: Crawlable links

Internal links

- Link to relevant pages: supporting articles, services, or glossary entries.
- Use descriptive anchor text: "keyword research process" beats "click here".
- Place links where they help: near definitions, next steps, or referenced concepts.
- Build clusters: a core "pillar" page plus supporting articles often performs better than isolated posts.

External links

- Cite authoritative sources: official documentation, standards, reputable publications.
- Link to evidence: data, definitions, and guidance that supports key claims.
- Avoid unnecessary new tabs: opening new tabs is not an SEO requirement. If new tabs are used, include rel="noopener" for security.

4. Images and accessibility

Images can improve comprehension and keep pages engaging, particularly where visuals clarify steps, tools, or examples. Official reference: Google: Image SEO

Alt text

- Describe the image: what it shows, in plain language.
- Keep it accurate: avoid stuffing keywords that do not describe the image.
- Skip alt text for decorative images: if an image adds no informational value, consider an empty alt attribute (implementation depends on theme/CMS).

A few practical image SEO tips

- Use descriptive filenames: blog-post-structure-checklist.png is better than image-01.png.
- Place images near relevant text: context supports understanding.
- Compress images: performance affects experience and can impact search visibility via user satisfaction.

5. A strong conclusion (and CTA)

- Summarise the key points: a short recap reinforces the structure.
- State the takeaway: what matters most for performance.
- Add a CTA: invite the next logical step (read a related guide, request help, subscribe).

Conclusion example: A blog post structure that matches intent, uses clear headings, and supports navigation with strong internal links is easier to read and easier to understand. Add evidence-based external references, accessible images, and FAQs to strengthen usefulness and trust.

6. Other SEO considerations

Calls to action

- Use one primary action: avoid competing CTAs.
- Match the reader stage: informational posts often convert better with softer CTAs (newsletter, guide, consultation).
- Place CTAs logically: end of post, and optionally after a key section.
Social sharing

- Make sharing easy: buttons at the top and bottom can help.
- Use a strong featured image: improves share appearance on social platforms.

Author bio and trust signals (E-E-A-T)

E-E-A-T is best treated as a "trust lens": demonstrate experience and credibility with practical evidence, not slogans. Official reference: Google: Creating helpful, reliable, people-first content

- Add an author box: relevant experience, role, and areas of specialism.
- Include a "last updated" date: particularly for SEO posts that can age quickly.
- Use citations: link to official sources for claims about Google behaviour.
- Show evidence: real examples, screenshots, or mini case studies where possible.

Maintenance: keep posts accurate

- Review periodically: refresh outdated guidance, screenshots, and tool steps.
- Consolidate overlapping posts: reduce cannibalisation risk and improve topical clarity.

FAQs

What is the ideal length for a blog post? There is no universal "ideal" length. A post should be as long as necessary to satisfy the intent fully, with clear structure and minimal filler. Competing pages in the same SERP can be a useful benchmark for coverage depth.

Is there a recommended character limit for title tags? There is no fixed character limit. Titles can be truncated depending on device width, and Google may rewrite title links. The best approach is a concise, descriptive title aligned with the on-page topic.

Should meta descriptions be a specific length? There is no fixed length requirement. Meta descriptions can be truncated, and Google may show a different snippet. A clear summary that matches the page content is the priority.

Should a post have only one H1? In most cases, a single H1 keeps structure clear and avoids confusion in templates and themes. Headings should be used in a logical hierarchy to support scanning and comprehension.

How many internal links should be included? Include as many as are genuinely helpful.
Internal links should guide readers to relevant next steps and support crawl discovery, using descriptive anchor text.

Do external links help SEO? External links can improve usefulness and credibility when they cite authoritative sources or evidence. External linking is not a ranking "hack", but it can support trust and reader satisfaction.

Do images help rankings? Images can improve engagement and clarity, and they can also surface in image search when properly described and contextually relevant. Performance and accessibility matter as well.

How can E-E-A-T be improved on a blog post? Use named authors, relevant credentials, accurate citations, and evidence of real experience (examples, screenshots, outcomes). Regular updates and clear accountability also help.

---

> Learn how to do keyword research using Keywords Everywhere, a tool that helps you understand what keywords are being searched for in Google.

- Published: 2025-02-15 - Modified: 2025-04-01 - URL: https://stringerseo.co.uk/content/keyword-research-using-keywords-everywhere/ - Categories: Content

One of the first things to understand before you start writing content is how your readers will connect with the information you are writing about. In SEO, this is a pivotal step in generating traffic to your website, and the process is called keyword research. Keywords help you understand what your audience is searching for, allowing you to shape your content around relevant topics. There are plenty of tools available to help gather this information. Keywords Everywhere is a cost-efficient and easy-to-use tool that helps you compile a list of keywords and shape the answers you incorporate into your content. These insights can be used to address the questions readers are asking and searching for on Google.

Why Keyword Research Matters

Search engines like Google display pages based on relevance for different keywords.
If your content doesn't align with what people are searching for, it won't show up in search results. That's why targeting the right keywords is crucial. According to a study by Ahrefs, 90.63% of content gets no organic traffic from Google, often because it doesn't target relevant keywords that people search for. With Keywords Everywhere, you can:

- Find search volume (how many people search for a term monthly)
- See competition levels (how hard it is to rank)
- Identify related keywords to expand your content strategy

Step 1) Install Keywords Everywhere

Go to Keywords Everywhere and install the extension compatible with your browser (Chrome, Firefox, or Edge).

Step 2) Get and Activate Your API Key

Sign up: after installation, click on the Keywords Everywhere icon in your toolbar. Click on "Get API Key" in the dropdown and enter your email address. You'll receive an email containing your personal API key. Activate: click the Keywords Everywhere icon again, choose "Settings", and paste your API key into the relevant box. Click "Validate" to enable it.

Step 3) Purchase Credits

Click "Purchase Credits" in the Keywords Everywhere dropdown and choose a plan that works for you. Credits are used to get data and metrics from Google search results.

Step 4) Configure Settings

In the "Settings" menu, customise your experience by selecting preferred metrics, enabling or disabling features like "Related Keywords" or "People Also Search For", and choosing your target country for location-specific data.

Step 5) Do the Keyword Research

Perform a search on Google: Keywords Everywhere will display metrics directly below the search bar (marked red in the screenshot below). These metrics inform you of the search volume, CPC (cost-per-click), and competition. The metric that tells you how many people are searching for your keyword is the one under 'Volume' in the image above.
The CPC metric is useful if you are contemplating a Google Ads campaign, and the competition metric gives you an indication of whether another website is marketing to that audience in Google. The volume metric should act as your decision maker for the theme of the content you are planning to write. You will notice a box towards the right of the search results page. This box reports on the following metrics: SEO Difficulty, Brand Query, Off-Page Difficulty, and On-Page Difficulty. These metrics are useful to analyse later down the line, after you have decided on your keywords, and are not really necessary for building out your initial keyword list.

Scroll down the page: analyse the data in the box on the right under the following titles: 'Trend Data For (your keyword) (GB)', 'People Also Ask', 'SERP Keywords', 'Related Keywords', and 'Long-Tail Keywords'. The trend data can be used to influence when you should publish content, as it tells you when searches were made in Google for your keyword. This is useful for planning content in the long term. The data under the People Also Ask, SERP Keywords, Related Keywords, and Long-Tail Keywords boxes is useful for building out the list of keywords that will influence the theme of the content you are planning to write.

Step 6) Build Your Keyword List

Click the star to select your keywords and build out your list. Keep doing this as you analyse different search result pages and select all the keywords you want to write about. Click on the Keywords Everywhere icon in your toolbar and select 'My Favourite Keywords'. This will bring up a page listing all of your keywords and the respective data mentioned earlier. Notice how a content theme has been built out from a keyword list relevant to 'keyword research', 'keyword research tools' and 'keyword research in seo' in the screenshot below. This is the information that can be written about in a piece of content targeting the audiences searching for these keywords.
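Once the favourites list has been exported as CSV (the export step is covered below), a short script can shortlist the highest-volume terms for you. This Python sketch is illustrative only: the column names and figures are assumptions, so check them against the header of your actual export file.

```python
import csv
import io

# Hypothetical export data; real Keywords Everywhere exports may use
# different column names and will contain your own keywords
sample_csv = """Keyword,Volume,CPC,Competition
keyword research,4400,2.10,0.45
keyword research tools,1900,3.40,0.62
keyword research in seo,320,1.20,0.30
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))

# Sort by monthly search volume, highest first
rows.sort(key=lambda r: int(r["Volume"]), reverse=True)

# Shortlist keywords above an arbitrary volume threshold
shortlist = [r["Keyword"] for r in rows if int(r["Volume"]) >= 1000]
print(shortlist)
```

In practice you would read the exported file with `csv.DictReader(open(path))` instead of the inline sample, and tune the threshold to your niche.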
Step 7) Export Data

After putting your keyword list together, click the "Export" button to download the data in CSV format for further analysis.

Step 8) Explore Additional Features

Page analysis: use the "Analyse Page" function to evaluate the keyword density and on-page SEO of any webpage. This can give you an idea of how to structure your own page or content. Trend data: look at historical search trends to understand keyword seasonality and popularity over time. This is useful for figuring out the right time to publish a new page or piece of content, and even to start a tailored campaign.

This is how you can use the Keywords Everywhere keyword research tool to identify the keywords being searched by your target audience. This information can then be used to structure a blog post or to optimise certain elements of a page.

---

- Published: 2025-02-14 - Modified: 2025-03-06 - URL: https://stringerseo.co.uk/technical/how-to-build-a-custom-xml-sitemap-in-wordpress/ - Categories: Technical

XML sitemaps are important for SEO: they provide a list of pages from your website for search engines to crawl. They can also help inform search engines about the structure of your content. Plugins such as Yoast SEO can generate them automatically in WordPress, but there are cases where you need to build a custom sitemap. For example, you might have hosting restrictions, specific URL requirements, or a reverse proxy in place. This guide explains how to create a custom XML sitemap in WordPress for a reverse proxy use case, but it can be adapted for other cases too.

Step 1: Intercept Sitemap Requests

Intercept requests for custom sitemap URLs (e.g., /custom-sitemap_index.xml) and generate the required XML directly. Use the init hook to catch the requests early.
```php
add_action('init', function () {
    if (isset($_SERVER['REQUEST_URI'])) {
        // Map each custom sitemap path to its type
        // (array contents reconstructed from the sitemap slugs used later in this guide)
        $custom_sitemaps = [
            '/custom-sitemap_index.xml' => 'index',
            '/custom-post-sitemap.xml'  => 'post',
            '/custom-page-sitemap.xml'  => 'page',
        ];
        foreach ($custom_sitemaps as $path => $type) {
            if (strpos($_SERVER['REQUEST_URI'], $path) === 0) {
                if ($type === 'index') {
                    generate_custom_sitemap_index();
                } else {
                    generate_custom_sitemap($type);
                }
                exit;
            }
        }
    }
});
```

This ensures WordPress processes the correct XML before default templates or redirects interfere.

Step 2: Generate the Sitemap Index

The sitemap index acts as a directory for individual content sitemaps (e.g., posts, pages, categories).

```php
function generate_custom_sitemap_index() {
    // Map each sitemap slug to a last-modified timestamp
    // (array contents reconstructed; populate with your real lastmod values)
    $sitemaps = [
        'custom-post-sitemap.xml' => gmdate('Y-m-d\TH:i:s+00:00'),
        'custom-page-sitemap.xml' => gmdate('Y-m-d\TH:i:s+00:00'),
    ];
    header('Content-Type: application/xml; charset=utf-8');
    echo '<?xml version="1.0" encoding="UTF-8"?>';
    echo '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">';
    foreach ($sitemaps as $slug => $lastmod) {
        $url = site_url($slug);
        echo "<sitemap>";
        echo "<loc>$url</loc>";
        echo "<lastmod>$lastmod</lastmod>";
        echo "</sitemap>";
    }
    echo '</sitemapindex>';
    exit;
}
```

Step 3: Create Content-Specific Sitemaps

For each content type (e.g., posts, pages), generate a corresponding XML file. Replace the domain in URLs when needed, such as for reverse proxies.

```php
function generate_custom_sitemap($type) {
    global $wpdb;
    header('Content-Type: application/xml; charset=utf-8');
    echo '<?xml version="1.0" encoding="UTF-8"?>';
    echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">';
    $site_url = get_site_url();
    switch ($type) {
        case 'post':
            $posts = $wpdb->get_results("SELECT ID, post_modified_gmt FROM {$wpdb->posts} WHERE post_status = 'publish' AND post_type = 'post'");
            foreach ($posts as $post) {
                $url     = str_replace($site_url, 'https://proxy-domain.com', get_permalink($post->ID));
                $lastmod = gmdate('Y-m-d\TH:i:s+00:00', strtotime($post->post_modified_gmt));
                echo "<url><loc>$url</loc><lastmod>$lastmod</lastmod></url>";
            }
            break;
        case 'page':
            $pages = $wpdb->get_results("SELECT ID, post_modified_gmt FROM {$wpdb->posts} WHERE post_status = 'publish' AND post_type = 'page'");
            foreach ($pages as $page) {
                $url     = str_replace($site_url, 'https://proxy-domain.com', get_permalink($page->ID));
                $lastmod = gmdate('Y-m-d\TH:i:s+00:00', strtotime($page->post_modified_gmt));
                echo "<url><loc>$url</loc><lastmod>$lastmod</lastmod></url>";
            }
            break;
        // Additional cases for categories and authors...
```
```php
        // ...continuing the function above: close the switch and finish the sitemap
    }
    echo '</urlset>';
    exit;
}
```

Step 4: Prevent Trailing Slash Issues

To avoid WordPress redirecting sitemap URLs to versions with trailing slashes, disable canonical redirects for sitemap paths.

```php
add_filter('redirect_canonical', function ($redirect_url, $requested_url) {
    // (array contents reconstructed from the sitemap slugs used in this guide)
    $sitemap_paths = [
        'custom-sitemap_index.xml',
        'custom-post-sitemap.xml',
        'custom-page-sitemap.xml',
    ];
    foreach ($sitemap_paths as $path) {
        if (strpos($requested_url, $path) !== false) {
            return false; // Disable redirect
        }
    }
    return $redirect_url;
}, 10, 2);
```

Step 5: Add Rewrite Rules

Register custom rewrite rules to make WordPress recognise the sitemap URLs.

```php
function add_custom_sitemap_rewrite_rules() {
    add_rewrite_rule('^custom-sitemap_index\.xml$', 'index.php', 'top');
    add_rewrite_rule('^custom-post-sitemap\.xml$', 'index.php', 'top');
    add_rewrite_rule('^custom-page-sitemap\.xml$', 'index.php', 'top');
    // Additional rules for categories and authors...
}
add_action('init', 'add_custom_sitemap_rewrite_rules');
```

Flush the rewrite rules after activation by visiting Settings > Permalinks and clicking Save Changes.

Testing

Once the plugin is activated, verify the following URLs:

- /custom-sitemap_index.xml
- /custom-post-sitemap.xml
- /custom-page-sitemap.xml

Check the loc tags in the XML files to ensure the URLs are correct, especially if a reverse proxy is in use.

---

> Learn how to start an SEO strategy for your small business. Find out what SEO factors are important and help generate more website traffic.

- Published: 2025-02-13 - Modified: 2025-04-20 - URL: https://stringerseo.co.uk/content/seo-tips-for-small-business-owners/ - Categories: Content, Distribution & Reputation, Technical

Search engine optimisation (SEO) is a way small businesses can generate traffic to their website. A well-structured SEO campaign can become part of your day-to-day business management. This guide will take you through the basics of SEO and give you some tips to help you get started if you already have a website.

What Is SEO?

SEO is a process to improve a website's visibility in search engine results pages (SERPs).
This is first done by understanding what content needs to be published on your website through keyword research. It can then consist of other activities, such as optimising elements on a page and growing the popularity of your website through links. Small business owners can get the pages on their website appearing for product- or service-related keywords that their target audiences search for in Google. The main aim is to make it easier for potential customers to find your products or services.

What Are SEO Ranking Factors?

SEO ranking factors make up Google's algorithm, which determines how relevant, credible, useful and popular a page is before displaying it in a search result for a particular keyword. Google uses crawlers, also known as spiders, that navigate the entire online ecosystem through links. They gather information from pages on different websites, which are then indexed in search results. These crawlers tick off specific checklist items before indexing and displaying the pages they have found in search results, such as:

- Does this page from this website have a title? If so, what is it and how does it relate to a keyword?
- Does it have clear headings and what do they say?
- Does it have a well-structured list of web pages to crawl?

No one really knows how many search engine ranking factors there are, but Google's SEO Starter Guide can help us make educated guesses. A test-and-learn approach to identifying what works and what doesn't in an SEO strategy is often the one that will reveal which factors can drive more traffic. Google regularly updates its algorithm. Staying informed about these changes is essential to minimise the risk of traffic declines after each update. Some key ranking factors for 2025 include:

- Helpful and engaging content that answers user queries.
- Fast-loading pages that enhance user experience.
- Easy navigation and logical site structure.
How to Start a Basic SEO Strategy

SEO doesn't need a large budget; it can be something done in your own time to understand how it works first, provided you already have a website and it is capturing a certain amount of traffic from search or other digital marketing channels. Adopting patience and a test-and-learn mentality, as mentioned earlier, forms the path to long-term success in any SEO strategy. This can save time and costs.

Step 1: Set Up Analytics

Determine what you want to achieve. Is it more website traffic, increased sales, or better local visibility? Make sure Google Analytics and Google Search Console are set up so that you can effectively measure these goals and guide your strategy.

Step 2: Research Keywords

Identify keywords your audience is searching for and use this information to structure your website's content for services, products, and blog posts. There are other articles on this site to help you get started on how to do keyword research and how to structure a blog post.

Step 3: Optimise On-Page Elements

Most modern website content management systems, such as WordPress, Shopify, ShopWired, or Wix, allow you to optimise different parts of your pages. If you use one of these, the parts of a web page where you want to write descriptive information that relates to your keywords are:

- Titles: include keywords in your page titles, CTAs (calls to action), and your brand name. Try to keep them between 50 and 60 characters.
- Meta descriptions: write concise summaries of your web pages that invite the user into your website. Try to keep these to 150 characters at most.
- Headings: make sure they accurately describe the content that follows. Think relevance, think users, think unique, and ask yourself: does this encourage users to continue reading the content that follows? Make sure you have one H1 on a page and that headings follow a logical order, e.g. H1, H2, H3, H4, H5, and H6.
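The title and meta description guidance above can be turned into a quick automated check. This is a hedged Python sketch: the thresholds are the rough guidelines mentioned above, not fixed rules, and the sample title and description are invented for illustration.

```python
def check_lengths(title, meta_description):
    """Flag titles and meta descriptions that exceed the rough guidelines."""
    warnings = []
    if len(title) > 60:
        warnings.append(f"Title is {len(title)} chars; aim for 50-60.")
    if len(meta_description) > 150:
        warnings.append(f"Meta description is {len(meta_description)} chars; aim for 150 max.")
    return warnings

# Invented example values for illustration
print(check_lengths(
    "SEO Tips for Small Business Owners | Stringer SEO",
    "Learn how to start an SEO strategy for your small business.",
))  # both within the guidelines, so this prints []
```

A check like this could be run over a spreadsheet of page titles and descriptions before publishing, remembering that Google may still truncate or rewrite either one.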
All these items help generate the information that is indexed in search results and that people read before clicking through to your website, as shown in the example below.

Step 4: Improve Technical SEO

This involves fine-tuning your website to make it easier for users and search engines like Google to find and read your content. The main aim is to help your visitors find the content that interests them the most. Some key factors include:

- Clear architecture and structure: does your website structure content in a logical sense, and is it easy for your target audience to find the content that interests them most?
- Duplicate content: does your website have the same content on different pages? If so, make sure every page is unique and targets a specific topic relevant to your target audiences.
- Mobile usability: can users access your website from multiple devices (mobile, desktop, and tablet)? Making sure your content is accessible helps your business reach consumers through multiple touch points.
- Fix broken links: often overlooked, but if users click on a broken link it interrupts their journey and their potential to buy your service or product. Look for and fix broken internal and external links.
- Page speed: how long does it take for your web pages to load? Do you use heavy images? Does your website adhere to modern performance best practices? Implementing these best practices can help improve page speed metrics like loading speed (LCP), responsiveness (INP, which replaced FID), and layout stability (CLS).
- Use HTTPS: is your website secure when users buy a product or send a lead? If not, protect your site with an SSL certificate to help keep user data safe.

Step 5: Local SEO

If your business serves a specific area, claim and optimise your Google Business Profile. This platform can complement an SEO strategy. It can also drive brand awareness by helping you appear in Google Maps results, which take up a lot of real estate for location-based keywords.
Structured data can also help your website appear better for local search terms used by targeted audiences.

How to Create a Google Business Profile Page

1. Go to Google Business Profile and click the "Start Now" button.
2. Log in to Google and add your business: enter your business name. If it doesn't appear in the dropdown, click "Add your business to Google". Select the appropriate category for your business.
3. Choose a business location: if you have a physical location (like a shop or office), select "Yes" to add the address. If you serve customers in a specific area (e.g. plumbing services), choose "No" and set up service areas instead.
4. Add contact details: input your phone number and website URL (if available).
5. Verify your business: Google will ask you to verify your business. Common methods are postcard (Google sends a postcard to your business address with a code), phone (some businesses can verify via text or call), or email (if available for your business type).
6. Optimise your profile: add your business hours, photos of your store, products, or services, and a detailed description of your business. Enable messaging (optional).
7. Publish and manage: once verified, your profile will go live. You can manage and update it through the Google Business Profile Manager.

You might want to consider reviewing and doing the following activities:

- NAP consistency: ensure your Name, Address, and Phone number are accurate across all online listings.
- Photos: include high-quality images of your location, products, or services.
- Encourage reviews from existing customers: politely ask customers to leave reviews on Google and other platforms.
- Publish posts that link to your website: create content that interests your target audience and direct them towards your website with a link. Keep doing this around once or twice a month; it complements your SEO strategy too.
SEO for Small Businesses SEO is a long-term channel, and it offers benefits that make it a valuable investment for small businesses. To keep costs down, start small, then scale. The data speaks for itself, so measuring sales or lead performance will help you understand results and make informed decisions about continuing or increasing your investment, be it time or money. By appearing and ranking higher in search results, businesses can: Increase brand awareness. Drive consistent, organic traffic. Build trust with potential customers. Think with Google reports that 59% of shoppers say they research a product before they plan to visit a shop and buy it in-person, or go to buy it online. This demonstrates how critical it is for businesses to appear in relevant search results, particularly when consumers are close to making a purchase decision. --- > URLs are a very important part of user experience and SEO. Here's a WP plugin to warn admin users about long URLs. - Published: 2025-02-10 - Modified: 2025-02-10 - URL: https://stringerseo.co.uk/technical/wordpress-plugin-warning-about-long-urls/ - Categories: Content, Technical URLs are a very important part of user experience and SEO. A long URL may confuse users, it will definitely make it harder for them to remember, and search engines may take it into consideration when indexing websites. If you work alongside editors and copywriters who use a WordPress CMS, then you can create a plugin that warns them if the URL of a post is too long. This can help mitigate the risk of publishing posts with URLs that have over 115 characters. This article will take you through the steps to do just that! Why Monitor URL Length? Managing URL length is important for several reasons: SEO Benefits: Short and clean URLs are easier for search engines to understand and rank. User Experience: Concise URLs are more user-friendly and memorable. 
Avoid Truncation: Long URLs may get truncated in search results or when shared on social media, reducing their effectiveness. By implementing a solution to monitor URL lengths, you can ensure your content adheres to best practices. Steps to Create the Plugin Follow these steps to create a plugin that warns content editors if the URL exceeds 115 characters. Step 1: Set Up the Plugin File Open your WordPress site’s /wp-content/plugins/ directory. Create a new folder named url-length-warning. Inside this folder, create a file named url-length-warning.php. Paste the following code into the file (the JavaScript is wrapped in a minimal plugin header and an admin_footer hook so that WordPress loads it in the editor):

```php
<?php
/**
 * Plugin Name: URL Length Warning
 * Description: Warns editors in the block editor when a post URL exceeds 115 characters.
 */

// Output the warning script on admin screens so it runs inside the block editor.
add_action('admin_footer', function () {
    ?>
    <script>
    (function () {
        document.addEventListener('DOMContentLoaded', function () {
            function checkLinkLength() {
                // Locate the button that contains the permalink
                const linkButton = document.querySelector('.editor-post-url__panel-toggle');
                if (linkButton) {
                    const linkSpan = linkButton.querySelector('span');
                    if (linkSpan) {
                        const permalink = linkSpan.textContent.trim();
                        const warningId = 'url-length-warning';
                        const warningMessage = 'Warning: URL exceeds 115 characters.';

                        // Remove existing warning
                        let existingWarning = document.getElementById(warningId);
                        if (existingWarning) {
                            existingWarning.remove();
                        }

                        // Add a warning if the URL exceeds 115 characters
                        if (permalink.length > 115) {
                            const warning = document.createElement('div');
                            warning.id = warningId;
                            warning.style.color = 'red';
                            warning.style.marginTop = '5px';
                            warning.textContent = warningMessage;
                            linkButton.parentNode.appendChild(warning);
                        }
                    }
                }
            }

            // Optimise the MutationObserver to target only the sidebar area
            const sidebar = document.querySelector('.edit-post-sidebar');
            if (sidebar) {
                const observer = new MutationObserver(() => {
                    checkLinkLength();
                });
                observer.observe(sidebar, { childList: true, subtree: true });
            }

            // Check when the button is clicked to open the dropdown
            document.body.addEventListener('click', function (event) {
                if (event.target.closest('.editor-post-url__panel-toggle')) {
                    checkLinkLength();
                }
            });
        });
    })();
    </script>
    <?php
});
```

--- > Reddit is the fastest-growing social media platform in the UK, with a user base that has grown 47% year-on-year as of 2024. - Published: 2025-02-09 - Modified: 2025-05-08 - URL: https://stringerseo.co.uk/link-earning/why-brands-are-winning-with-reddit-marketing/ - Categories: Content, Distribution & Reputation Reddit usage in the UK has grown remarkably and has just surged past X, formerly known as Twitter, as the UK's fifth-most-used social platform (The Guardian). For marketing, this will have significant effects on how marketers reach highly engaged B2B and B2C audiences. Why Marketers Should Have Reddit on their Radar Reddit’s Growth & Market Share It is the fastest-growing social media platform in the UK, with a user base that has grown 47% year-on-year as of 2024 (The Guardian). It holds a 2.37% share of the UK social media market (StatCounter). 7.33% of Reddit's users come from the UK, making the country the second-biggest home for the platform after the United States (Exploding Topics). Demographics: Who Uses Reddit in the UK? Understanding Reddit's user demographics is important for targeting the right audience. Age: 64% of all Reddit users are between 18 and 29 years old, making it the platform best suited to Millennials and Gen Z (SocialChamp). 22% of users fall within the 30-49 age bracket, which matters for B2B and professional services (SocialChamp). Gender: 63.6% of users are male, while 35.1% are female (Exploding Topics). Education & Income: 46% of Reddit users have a college degree, making it an attractive platform for B2B marketers (SocialChamp). Users are very tech-savvy, research-driven, and have deep discussions before making decisions (Exploding Topics). Why Reddit Is Effective for B2B and B2C Marketing 1. 
High Engagement & Trust The average user spends 10 minutes per visit on Reddit, and engagement goes deep in topic-based communities (Red Website Design). Reddit’s content ranks well on Google, providing long-term organic visibility. 2. Community-Driven Marketing Brands can reach out to niche communities via subreddits catering to their industry. For example, fintech brands can engage with subreddits such as r/UKPersonalFinance, gaming brands with r/Gaming and r/PS5, and B2B SaaS businesses can find conversations in r/EntrepreneurUK and r/Technology. Successful strategies have included AMAs, content-driven engagement, and targeted advertising on Reddit. 3. Advertising Potential Reddit’s ad platform allows for interest-based targeting, making it a viable alternative to Meta and Google Ads. Cost-effective ad options mean brands can reach engaged audiences without the high CPC of Google. Image Credit: Media Shower Case Study: How The Economist Uses Reddit Marketing A great example of a brand succeeding on Reddit is The Economist. The publication actively engages with Reddit users through AMA (Ask Me Anything) sessions, where journalists answer questions from the community (Media Shower). Key Takeaways for Marketers: Humanise your brand – Engage authentically and interact with users directly. Encourage dialogue – Use AMAs and discussion threads to build trust. Leverage long-form content – Redditors love great insights and well-researched replies. How to Build a Winning Reddit Marketing Strategy Identify Relevant Subreddits – Research where your audience engages. Engage Helpfully – Avoid selling; instead, provide value and contribute to discussions. Use Reddit Ads Strategically – Target based on interests, locations, and subreddit activity. Leverage AMAs and Thought Leadership – Establish credibility by directly answering user questions. Monitor Trends and Feedback – Use Reddit as a real-time focus group to gain industry insights. 
Use Reddit for Content Ideation & Distribution Monitor discussions in relevant subreddits to identify trending topics and pain points. Repurpose high-performing Reddit conversations into blog posts, social media content, and newsletters. Engage with niche communities by sharing valuable insights, rather than overtly promoting products. Final Thoughts Reddit's continuous growth and active user base make it a goldmine that remains largely untapped by marketers in the UK. The platform offers organic and paid marketing opportunities with a highly engaged, research-driven audience. Be it a B2C eCommerce brand or a B2B software company, the niche communities and open discussions on Reddit make it a powerhouse for brand awareness, lead generation, and customer engagement. Next Steps: Browse relevant subreddits. Create an engagement plan. Test the ad platform for targeted marketing on Reddit. Brands can reach highly targeted audiences, build brand credibility, and stay ahead of the competition by integrating Reddit into their digital marketing efforts. --- > Learn how to automate the process of organising issue reports in Google Sheets by running a simple Python script! - Published: 2025-02-09 - Modified: 2025-02-09 - URL: https://stringerseo.co.uk/technical/how-to-automate-extract-screaming-frog-issues-into-google-sheets/ - Categories: Technical Screaming Frog is a well-known and powerful tool for SEOs: it lets them crawl and analyse websites deeply. But working through its somewhat old-school user interface and managing the extensive data it generates can be overwhelming. By automating the process of organising issue reports in Google Sheets and creating hyperlinks for easy navigation, you can streamline your workflow, save significant time, and get on with the fun stuff: analysing the data in line with the Google Search Console indexing report and putting best-practice suggestions together for you, your client, or your engineers. 
This guide walks you through a Python-based solution to: Organise Screaming Frog issues data in Google Sheets. Link each issue in the Summary tab to its respective issue tab. Make data exploration fast, efficient, and user-friendly. Screaming Frog has published a relevant article teaching you 'How To Automate Crawl Reports In Looker Studio'; I suggest checking that out if you want to automate the data into a reporting dashboard. Step 1: Organise Screaming Frog Issue Data in Google Sheets The first step is to process the Issues Overview Report exported from Screaming Frog. This report contains a high-level summary of all issues found on the website, including issue names, descriptions, priority levels, and affected URLs. Our Python script will: Create a Summary tab in Google Sheets containing all rows and columns from the Issues Overview Report. Create individual tabs for each unique issue, with only the rows relevant to that specific issue. Step 2: Populate Tabs with Issue-Specific Data Screaming Frog allows you to export detailed CSV files for each issue. For instance: H1 Duplicate has a CSV file listing all URLs with duplicate H1 tags. Images Missing Alt Text has a CSV file with URLs for images missing alt attributes. The script processes these CSV files, matches them to their respective tabs in the Google Sheet, and pastes the data starting from row 4 to leave space for each issue's overview information. Step 3: Add Hyperlinks in the Summary Tab To make navigation easier, we add hyperlinks in the "Issue Name" column of the Summary tab. Each hyperlink points to the corresponding tab for that issue, enabling quick access with a single click. For example: Clicking on "H1 Duplicate" in the Summary tab will take you to the H1 Duplicate tab, where all URLs for this issue are listed. How It Works 1. Process the Issues Overview Report The script first reads the issues_overview_report.csv file and processes it to: Create a Summary tab containing all columns and rows. 
Create tabs for each unique issue name, filtered to include only rows related to that issue. 2. Match Issue-Specific CSV Files Next, the script reads all issue-specific CSV files from a folder, matches them to the respective tabs using fuzzy matching, and pastes the data into the matching issue tab. 3. Add Hyperlinks to the Summary Tab Finally, the script scans the "Issue Name" column in the Summary tab and adds hyperlinks that point to the corresponding issue tabs. This makes it easy to navigate between the high-level overview and the detailed data for each issue. The Python Scripts Here are the scripts that automate these tasks. Script 1: Process Screaming Frog Data

```python
import os
import time

import gspread
import pandas as pd
from fuzzywuzzy import process
from oauth2client.service_account import ServiceAccountCredentials


def connect_to_google_sheets(sheet_name):
    scope = [
        "https://spreadsheets.google.com/feeds",
        "https://www.googleapis.com/auth/drive",
    ]
    creds = ServiceAccountCredentials.from_json_keyfile_name("{REPLACE}-credentials.json", scope)
    client = gspread.authorize(creds)
    # Connect to an existing Google Sheet by name or create it if it doesn't exist
    try:
        sheet = client.open(sheet_name)
        print(f"Connected to existing Google Sheet: {sheet_name}")
    except gspread.exceptions.SpreadsheetNotFound:
        sheet = client.create(sheet_name)
        print(f"Created new Google Sheet: {sheet_name}")
        # Share the Google Sheet with an email address
        sheet.share('You@YourEmailAddress.com', perm_type='user', role='writer')
        sheet.share('YourColleague@TheirEmailAddress.com', perm_type='user', role='writer')
    return sheet


def clean_name(name):
    """Cleans and normalises a name for matching."""
    return (name.strip().lower()
            .replace("_", " ").replace("-", " ")
            .replace(":", "").replace(",", "")
            .replace(".csv", ""))


def write_to_sheet(sheet, tab_name, df, retries=3, start_row=1):
    df = df.fillna("")  # Replace NaN values with empty strings
    try:
        # Check if the worksheet already exists
        worksheet = sheet.worksheet(tab_name)
        print(f"Worksheet '{tab_name}' already exists. Updating its contents.")
    except gspread.exceptions.WorksheetNotFound:
        # Create the worksheet if it doesn't exist
        for attempt in range(retries):
            try:
                worksheet = sheet.add_worksheet(title=tab_name, rows="1000", cols="26")
                print(f"Created new worksheet '{tab_name}'.")
                break
            except gspread.exceptions.APIError as e:
                if attempt < retries - 1:
                    print(f"API error on attempt {attempt + 1}: {e}. Retrying...")
                    time.sleep(5)  # Wait before retrying
                else:
                    print(f"Failed to create worksheet '{tab_name}' after {retries} attempts.")
                    return

    # Write data in a single operation (clear + update in one go)
    try:
        worksheet.resize(rows=len(df) + start_row, cols=len(df.columns))
        values = [df.columns.tolist()] + df.values.tolist()
        worksheet.update(f"A{start_row}", values)
    except gspread.exceptions.APIError as e:
        print(f"Failed to update worksheet '{tab_name}': {e}")
    time.sleep(2)  # Add delay to avoid hitting API rate limits


def process_overview_report(issues_overview_path, sheet_name):
    # Load the Issues Overview Report
    issues_overview_df = pd.read_csv(issues_overview_path)
    # Connect to the existing Google Sheet
    sheet = connect_to_google_sheets(sheet_name)

    # Step 1: Write the Summary tab
    write_to_sheet(sheet, "Summary", issues_overview_df, start_row=1)
    print("Summary tab updated.")

    # Step 2: Create a tab for each unique issue
    for issue_name in issues_overview_df["Issue Name"].unique():
        # Filter rows for the specific issue
        issue_df = issues_overview_df[issues_overview_df["Issue Name"] == issue_name]
        # Write the filtered rows to a tab
        write_to_sheet(sheet, issue_name, issue_df, start_row=1)
        print(f"Created tab for issue: {issue_name}")


def process_issue_csvs(folder_path, sheet_name):
    # Connect to the existing Google Sheet
    sheet = connect_to_google_sheets(sheet_name)
    # Cache tab names to avoid repeated API calls
    tab_cache = [worksheet.title for worksheet in sheet.worksheets()]

    # Iterate through each CSV file in the folder
    for file_name in os.listdir(folder_path):
        if file_name.endswith(".csv"):
            # Clean the file name for matching
            issue_name = file_name.replace(".csv", "").replace("_", " ").capitalize()
            # Find the closest match in the tab names
            matched_tab_name = process.extractOne(issue_name, tab_cache, score_cutoff=70)
            if not matched_tab_name:
                print(f"Warning: No matching tab found for issue '{file_name}'. Skipping...")
                continue
            matched_tab_name = matched_tab_name[0]  # Extract the matched tab name

            # Load the CSV file
            file_path = os.path.join(folder_path, file_name)
            issue_df = pd.read_csv(file_path)

            # Append data from the CSV file to the matched tab starting at row 4
            write_to_sheet(sheet, matched_tab_name, issue_df, start_row=4)
            print(f"Processed and updated tab for issue: {matched_tab_name}")

    print(f"Google Sheet '{sheet_name}' updated with all issues.")


if __name__ == "__main__":
    # File path to the Screaming Frog Issues Overview Report (step 1 file)
    issues_overview_path = "issues_overview_report.csv"
    # Folder path to the issue-specific CSV files (step 2 files)
    issues_folder_path = "/{full-folder-path}/issues_reports"
    # Name of the Google Sheet (same for both steps)
    google_sheet_name = "Name Your Sheet"

    # Step 1: Process the Issues Overview Report
    process_overview_report(issues_overview_path, google_sheet_name)
    # Step 2: Process the issue-specific CSV files
    process_issue_csvs(issues_folder_path, google_sheet_name)
```

This script connects to Google Sheets through the Google Sheets API (you'll need to create your own credentials.json file and store it in the same folder the script is run from), and then processes the Issues Overview Report and the issue-specific CSV files from Screaming Frog, matching them to their respective tabs. Head over to https://medium.com/@a.marenkov/how-to-get-credentials-for-google-sheets-456b7e88c430#:~:text=Press%20'CREATE%20CREDENTIALS'%20and%20select,to%20the%20list%20of%20credentials. to find out how to create your own credentials.json file. Script 2: Add Hyperlinks to the Summary Tab

```python
import time

import gspread
from oauth2client.service_account import ServiceAccountCredentials


def connect_to_google_sheets(sheet_name):
    scope = [
        "https://spreadsheets.google.com/feeds",
        "https://www.googleapis.com/auth/drive",
    ]
    creds = ServiceAccountCredentials.from_json_keyfile_name("credentials.json", scope)
    client = gspread.authorize(creds)
    # Connect to an existing Google Sheet by name
    try:
        sheet = client.open(sheet_name)
        print(f"Connected to existing Google Sheet: {sheet_name}")
        return sheet
    except gspread.exceptions.SpreadsheetNotFound:
        print(f"Google Sheet '{sheet_name}' not found.")
        return None


def fetch_tab_gid(sheet):
    """Fetches the mapping of tab names to their gids."""
    tabs = {}
    metadata = sheet.fetch_sheet_metadata()
    for sheet_data in metadata["sheets"]:
        tab_name = sheet_data["properties"]["title"]
        gid = sheet_data["properties"]["sheetId"]
        tabs[tab_name] = gid
    return tabs


def add_hyperlinks_to_summary(sheet, summary_tab_name="Summary"):
    try:
        # Get the Summary tab
        worksheet = sheet.worksheet(summary_tab_name)
        print(f"Found worksheet '{summary_tab_name}'.")
    except gspread.exceptions.WorksheetNotFound:
        print(f"Worksheet '{summary_tab_name}' not found. Exiting...")
        return

    # Get all data from the Summary tab
    data = worksheet.get_all_values()
    headers = data[0]
    rows = data[1:]  # Skip headers

    # Find the index of the "Issue Name" column
    try:
        issue_name_index = headers.index("Issue Name")
    except ValueError:
        print("'Issue Name' column not found in Summary tab.")
        return

    # Fetch the gid for each tab in the sheet
    tabs_with_gid = fetch_tab_gid(sheet)
    print(f"Tabs and their gids: {tabs_with_gid}")

    # Add hyperlinks to each Issue Name
    for i, row in enumerate(rows, start=2):  # Start from row 2 (to skip headers)
        issue_name = row[issue_name_index]
        if issue_name.strip() and issue_name in tabs_with_gid:  # Ensure it's not empty and the tab exists
            # Generate the hyperlink pointing to the gid of the tab
            gid = tabs_with_gid[issue_name]
            formula = f'=HYPERLINK("#gid={gid}", "{issue_name}")'
            worksheet.update_cell(i, issue_name_index + 1, formula)
            print(f"Added hyperlink for '{issue_name}' pointing to gid '{gid}'.")
        else:
            print(f"Skipping '{issue_name}' - Tab not found.")
        # Add delay to avoid hitting API rate limits
        time.sleep(1)

    print("Hyperlinks added to the Summary tab.")


if __name__ == "__main__":
    # Name of the Google Sheet
    google_sheet_name = "NAME OF YOUR SHEET"
    # Connect to the Google Sheet
    sheet = connect_to_google_sheets(google_sheet_name)
    # Add hyperlinks to the Summary tab
    if sheet:
        add_hyperlinks_to_summary(sheet)
```

This script adds clickable links in the Summary tab, directing users to the corresponding tabs for each issue. Benefits of This Automation Saves Time: Automates tedious tasks like creating tabs and linking them, processing hundreds of rows in seconds. Improves Accuracy: Reduces human error when manually copying and organising data. Streamlines Navigation: Hyperlinks in the Summary tab help navigation. Customisable: Adapt the scripts to your specific needs, such as custom headers or formatting. How to Use the Scripts Create a credentials.json file for Google Sheets. Set Up the Environment: Install Python and the required libraries (pandas, gspread, fuzzywuzzy, etc.). Download your Screaming Frog Issues Overview Report and issue-specific CSV files. Run the Scripts: Use Script 1 to process the data and organise it into tabs. Use Script 2 to add hyperlinks in the Summary tab. Verify the Output: Open your Google Sheet and confirm that: The Summary tab is complete. Each issue has its own tab with relevant data. Hyperlinks in the Summary tab point to the correct tabs. This script not only simplifies the management of Screaming Frog data, it also enhances the efficiency of your SEO workflows. By leveraging Python and Google Sheets, the old-school UI in Screaming Frog is no longer a constraint, and its issue reports can be exported into a well-structured, easy-to-navigate resource that saves time and lets you get on with the analysis and audit. 
--- - Published: 2025-02-03 - Modified: 2025-02-09 - URL: https://stringerseo.co.uk/technical/how-to-modify-all-wordpress-links-for-a-reverse-proxy-setup/ - Categories: Technical If you are hosting your WordPress blog or website behind a reverse proxy under a different domain, you might experience a range of issues where internal links, canonical URLs, Open Graph metadata, JavaScript, CSS files, and other assets keep referring to the original WordPress subdomain. This can create problems for SEO, impact page speed performance, and undermine consistency in the user experience. In this guide, we’ll walk through the process of rewriting all WordPress-generated links dynamically, ensuring that your site properly reflects the reverse proxy domain. Why Rewriting URLs is Necessary When setting up a reverse proxy, WordPress will still generate URLs pointing to its original domain unless explicitly instructed otherwise. This leads to inconsistencies, such as: Canonical URLs and meta tags still pointing to the old domain. Open Graph and structured data referencing incorrect URLs. RSS feeds pointing to the wrong domain. Hardcoded JavaScript and CSS file links breaking due to CORS issues. Navigation links, widgets, and internal post links not updating. To resolve these, we need a global URL-rewriting solution that modifies all links dynamically. How to Build a WordPress Plugin to Rewrite URLs in Links Instead of manually adjusting each template or modifying WordPress settings, we’ll use a custom WordPress plugin that automatically rewrites all URLs site-wide. Steps to Create the Plugin Create a new PHP file: Name it reverse-proxy-link-rewriter.php. Add the following PHP code to rewrite all WordPress-generated links dynamically. Place the file in /wp-content/plugins/. The PHP Plugin Code --- > HubSpot Blog Post Export: Find out how to clean them up before importing into WordPress. This technique is useful for blog migrations. 
- Published: 2025-01-31 - Modified: 2025-11-26 - URL: https://stringerseo.co.uk/technical/hubspot-blog-posts/ - Categories: Technical HubSpot makes it fairly straightforward to export blog content, but the HTML that comes out is often full of inline styles, classes and IDs. When that HTML is pasted into WordPress, it can clash with the theme’s styles, create inconsistent typography and generally make templates harder to maintain. In this guide, a practical workflow is outlined for using Python, pandas, gspread and BeautifulSoup to clean HubSpot blog exports in bulk via Google Sheets. The script removes unwanted attributes from key text elements while leaving links and images intact, so content is ready to paste into WordPress with minimal manual editing. Contents Why clean HubSpot exports before moving to WordPress? What you’ll need Step 1 – Export blog posts from HubSpot Step 2 – Set up Google Sheets API access Step 3 – Add the Python script Step 4 – Run the script and review the output How the clean_html function works Performance and real-world usage tips Versions, assumptions and limitations Troubleshooting common errors Taking this further Why clean HubSpot exports before moving to WordPress? HubSpot’s blog editor applies its own classes, IDs and inline styles to headings, paragraphs and other elements. That works within HubSpot’s templates, but the same styling can: Override or clash with WordPress theme styles. Make typography and spacing inconsistent between migrated and native posts. Add unnecessary code bloat to the HTML. Cleaning exported HTML before migration helps: Keep styling under the control of the WordPress theme and block styles. Improve consistency across all posts. Reduce layout bugs caused by legacy HubSpot styling. Doing this by hand inside each post is time-consuming. A small Python script plus Google Sheets can automate most of the work. What you’ll need A HubSpot account with permission to export blog posts. 
A WordPress site where the posts will ultimately live. A Google account with access to Google Sheets. Python 3 (3.10+ recommended) installed locally or on a server. The following Python packages: gspread – for Google Sheets access (gspread docs) pandas – for working with tabular data (pandas docs) beautifulsoup4 – for HTML parsing (BeautifulSoup docs) google-auth – for Google API authentication Install the packages with: pip install gspread pandas beautifulsoup4 google-auth Step 1 – Export blog posts from HubSpot The exact menu labels in HubSpot can change over time, but the export flow is broadly: In HubSpot, go to Content > Blog (or Marketing > Website > Blog, depending on the account layout). Choose the relevant blog if there are multiple. Use the actions menu to select Export blog posts. Select the fields needed (for example “Title”, “Content”, “Meta Description”). Export as a CSV file and download it. HubSpot’s official documentation on exporting blog content provides more detail and screenshots: HubSpot – Export your blog posts. Once the CSV has been downloaded, import it into a Google Sheet. A fresh Sheet with a tab named something like HubSpot Export keeps things organised. Step 2 – Set up Google Sheets API access The script uses a Google service account to read from and write to a Sheet. Google’s official quickstart shows the overall process in detail: Google Sheets API – Python quickstart. In summary: Create a Google Cloud project. Enable the Google Sheets API and (optionally) the Google Drive API. Create a service account for the project and generate a JSON key file, then download it as credentials.json into the project folder. In Google Sheets, share the Sheet with the service account’s email address (usually something like your-project-name@your-project-id.iam.gserviceaccount.com). This gives the Python script permission to read from the “HubSpot Export” tab and write cleaned content into a separate tab in the same Spreadsheet. 
Step 3 – Add the Python script The script below connects to Google Sheets, reads the exported HubSpot data, cleans the HTML in each cell and writes the results to a new worksheet. It implements: A safer header-handling pattern (the header row is kept as column names, not treated as data). Element-wise cleaning using DataFrame.applymap for broad pandas compatibility. An optional Drive scope to avoid issues when listing worksheets.

```python
import html
import unicodedata

import gspread
import pandas as pd
from bs4 import BeautifulSoup
from google.oauth2.service_account import Credentials

# === Configuration ===
SERVICE_ACCOUNT_FILE = "credentials.json"
SCOPES = [
    "https://www.googleapis.com/auth/spreadsheets",
    "https://www.googleapis.com/auth/drive",
]
SPREADSHEET_ID = "YOUR_SPREADSHEET_ID"  # Replace with your Sheet ID
INPUT_SHEET_NAME = "HubSpot Export"
OUTPUT_SHEET_NAME = "Cleaned Export"


def clean_html(html_content):
    """
    Clean a single cell of HTML.

    - Decodes HTML entities.
    - Normalises Unicode.
    - Strips non-breaking spaces.
    - Removes style, class and id attributes from common text elements.
    - Leaves links, images and structural elements untouched.
    """
    # Skip non-strings and very large cells
    if not isinstance(html_content, str) or len(html_content) > 5000:
        return html_content

    # Decode HTML entities
    html_content = html.unescape(html_content)
    # Normalise Unicode
    html_content = unicodedata.normalize("NFKC", html_content)
    # Remove stray invalid bytes
    html_content = html_content.encode("utf-8", "ignore").decode("utf-8")
    # Replace non-breaking spaces and trim
    html_content = html_content.replace("\xa0", " ").strip()

    # If it does not look like HTML, return as-is
    if "<" not in html_content:
        return html_content

    # Parse the HTML
    soup = BeautifulSoup(html_content, "html.parser")

    # Only clean common text elements; leave links/images/layout elements alone
    tags_to_clean = ["h1", "h2", "h3", "h4", "h5", "h6", "p"]
    tags_to_clean += ["em", "strong", "i", "b"]
    for tag in soup.find_all(tags_to_clean):
        for attr in ("style", "class", "id"):
            tag.attrs.pop(attr, None)

    return str(soup)


def main():
    # Authenticate with the service account
    creds = Credentials.from_service_account_file(
        SERVICE_ACCOUNT_FILE,
        scopes=SCOPES,
    )
    client = gspread.authorize(creds)

    # Open the spreadsheet
    spreadsheet = client.open_by_key(SPREADSHEET_ID)

    # Get the input worksheet
    input_worksheet = spreadsheet.worksheet(INPUT_SHEET_NAME)

    # Get or create the output worksheet
    worksheet_titles = [ws.title for ws in spreadsheet.worksheets()]
    if OUTPUT_SHEET_NAME in worksheet_titles:
        output_worksheet = spreadsheet.worksheet(OUTPUT_SHEET_NAME)
    else:
        output_worksheet = spreadsheet.add_worksheet(
            title=OUTPUT_SHEET_NAME,
            rows=1000,
            cols=50,
        )

    # Read all data from the input sheet
    data = input_worksheet.get_all_values()
    if not data:
        print("No data found in input sheet.")
        return

    header, *rows = data
    if not rows:
        print("No data rows found in input sheet.")
        return

    # Build a DataFrame with the header row as column names
    df = pd.DataFrame(rows, columns=header)

    # Apply cleaning to every cell
    # applymap is supported in more pandas versions than DataFrame.map
    df = df.applymap(clean_html)

    # Write the cleaned data back to the output sheet
    output_worksheet.clear()
    output_worksheet.update([header] + df.values.tolist())

    print(f"Cleaning complete. Data written to worksheet: {OUTPUT_SHEET_NAME!r}")


if __name__ == "__main__":
    main()
```

Step 4 – Run the script and review the output With the configuration updated (Spreadsheet ID and sheet names), run the script from the command line: python clean_hubspot_export.py When it completes, the Google Sheet will contain a new tab called something like Cleaned Export. This will have the same column structure as the original export, but with cleaned HTML in each cell. The next steps are: Open a post in WordPress. Switch to the HTML/Code editor (or use a Custom HTML block). Copy the cleaned HTML from the relevant cell in the Sheet. Paste it into WordPress and preview it in the front-end theme. Headings, paragraphs and emphasis should now respect the theme’s styling, without legacy HubSpot inline styles overriding anything. 
How the clean_html function works The clean_html function is deliberately conservative so that content is made safer for WordPress without breaking layout or media. It decodes HTML entities using Python’s html.unescape, so entities such as &nbsp; and &rsquo; become plain Unicode text. It normalises Unicode with the standard library’s unicodedata.normalize to reduce odd character variants that sometimes appear after exports. It removes non-breaking spaces (\xa0) and trims whitespace to tidy paragraph text. It only cleans specific tags: headings (h1–h6), paragraphs (p), and emphasis and strong tags (em, strong, i, b). For these tags it strips style, class and id attributes. It leaves links (a), images (img) and layout elements such as div, span, ul and table untouched. This helps preserve structure and media. It skips obviously non-HTML content and very large cells (over 5,000 characters) to avoid wasting time on plain-text columns and to guard against pathological cases. This approach follows typical patterns described in the BeautifulSoup documentation for cleaning attributes from tags while preserving the underlying HTML: see the Modifying the tree section in the official docs for more examples. Performance and real-world usage tips The example above applies clean_html to every cell in the Sheet. For small to medium exports, that is perfectly acceptable. For larger datasets or very wide sheets, performance can be improved by: Restricting cleaning to the column that contains post body HTML, for example:

```python
content_column = "Content"  # change to match the export column name
df[content_column] = df[content_column].apply(clean_html)
```

Splitting extremely large exports across multiple worksheets, then running the script per worksheet. Running the script on a machine with a stable connection to Google’s APIs and avoiding very aggressive reruns (for example, not running the entire migration every few minutes). 
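The decoding and normalisation steps described above can be exercised on their own with just the standard library. The sample string below is invented for illustration; it mimics the entity-encoded, non-breaking-space-riddled text that CMS exports often contain.

```python
import html
import unicodedata

# An invented fragment of the kind of text a blog export can contain
raw = "Caf&eacute;\xa0&amp; restaurant\xa0SEO"

text = html.unescape(raw)                   # decode &eacute; and &amp;
text = unicodedata.normalize("NFKC", text)  # normalise Unicode variants
text = text.replace("\xa0", " ").strip()    # swap non-breaking spaces for plain ones

print(text)  # Café & restaurant SEO
```

Only after these text-level repairs does it make sense to hand the string to BeautifulSoup for attribute stripping, which is why clean_html performs them first.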
Versions, assumptions and limitations

To keep the example focused, a few assumptions are made:

- Python version: any reasonably up-to-date Python 3 version should work; Python 3.10+ is recommended.
- pandas version: the script uses DataFrame.applymap, which is available across mainstream pandas releases (pandas 2.1+ deprecates it in favour of DataFrame.map, but it still works). If working with a very old pandas version, it is worth checking the official pandas applymap documentation for any behavioural differences.
- Google access: the service account must have access to the Sheet, and the Sheets API must be enabled in the Google Cloud project.
- HTML scope: only a specific set of text-related tags is cleaned. If HubSpot adds important styling to other elements (for example, custom cards or layout blocks), extra rules may be needed.
- Length cap: the 5,000-character length check is a pragmatic safeguard. For extremely long posts stored in a single cell, this value can be increased.

For a detailed view of the authentication and authorisation flow used by this pattern, the official Google Sheets API documentation is the best reference point.

Troubleshooting common errors

When working with Google Sheets and external libraries, a few common issues tend to appear. Below are some quick checks that reflect real-world experience with this kind of script:

- SpreadsheetNotFound or similar errors: check that the correct SPREADSHEET_ID has been copied from the Sheet URL, confirm that the Sheet has been shared with the service account's email address, and verify that the service account JSON file path in SERVICE_ACCOUNT_FILE is correct.
- WorksheetNotFound for the input worksheet: make sure that the tab name in Google Sheets matches INPUT_SHEET_NAME exactly, including spaces and capital letters. If the export tab has a different name, either rename it in Sheets or adjust the configuration in the script.
- AttributeError related to applymap: if using a very old version of pandas, double-check that applymap is available on DataFrame.
If not, upgrading pandas to a current version is usually the simplest fix.
- Slow performance or suspected rate limits: for very large sheets, consider cleaning only the content column instead of every cell. Batching work across multiple runs or worksheets can help avoid hitting Google API quotas in a single burst.

For detailed exceptions and usage examples, the gspread documentation is a useful companion to this script.

Taking this further

The basic pattern here is flexible. It can be extended to handle:

- Extra attribute cleaning for other tags, such as `div` and `span`, if layout classes are not required in WordPress.
- Custom find-and-replace operations for HubSpot-specific markup patterns.
- Automated checks for heading levels, empty paragraphs or legacy shortcodes.

Because the heavy lifting is in Python, changes can be tested on a subset of posts first, then rolled out to an entire export with confidence. Combined with a clear internal process for redirects and URL mapping, this kind of cleaning step helps make HubSpot-to-WordPress migrations both cleaner and more predictable from a technical SEO perspective.

---

> Streamline your SEO migration with this Python script. Automate URL mapping using page titles, the body of content and fuzzy matching.

- Published: 2025-01-20
- Modified: 2025-11-26
- URL: https://stringerseo.co.uk/technical/seo-migration-automate-url-mapping-with-python/
- Categories: Technical

In any SEO and website migration project, accurately mapping old URLs to new ones is critical for maintaining search engine rankings and minimising disruption. Google's own guidance on site moves with URL changes stresses the importance of carefully planned redirects when URLs change. The following Python scripts show how to streamline this process by comparing page titles or on-page content for similarity, and generating a URL mapping file that can feed into a 301 redirect plan.
They are designed as practical helpers, not full replacements for human review, so that SEO specialists and developers can focus their time on edge cases and strategy rather than manual copying and pasting.

Table of contents

- Script overview: page title matching for URL mapping
- Key features
- Code implementation
- Steps to use the title-matching script
- Extending the script for content matching
- Limitations, performance and best practice
- Frequently asked questions
- External resources and further reading

Script overview: page title matching for URL mapping

This Python script is designed to match old URLs to new URLs based on page titles, using fuzzy matching techniques. By automating this part of a migration, it saves time and reduces the risk of manual error when you are mapping hundreds of similar-looking URLs.

Key features

- Fetch page titles: uses requests and BeautifulSoup to extract page titles from provided URLs. For more detail, see the Requests quickstart and the Beautiful Soup documentation.
- Fuzzy matching: leverages the RapidFuzz library's fuzz.ratio to calculate similarity scores between old and new page titles. See the RapidFuzz fuzz.ratio docs for details.
- CSV input/output: reads old and new URLs from a CSV file and generates a mapped output file with match scores, using pandas.read_csv and DataFrame.to_csv. You can find more on this in the pandas read_csv documentation.
- Custom threshold: allows users to set a similarity threshold (0–100) for better control over matches, so only strong title matches are used.

Code implementation

Below is the script for page title-based URL mapping:

```python
import requests
from bs4 import BeautifulSoup
import pandas as pd
from rapidfuzz import fuzz


def fetch_page_title(url):
    """
    Fetches the page title for a given URL.

    Uses the Requests library to retrieve the HTML and BeautifulSoup
    to parse the <title> element.

    For production use you may want to:
    - Add a custom User-Agent header
    - Handle retries / backoff for transient errors
    - Respect robots.txt and crawl-delay settings
    """
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        title = soup.title.string.strip() if soup.title else None
        return title
    except Exception as e:
        print(f"Error fetching title for {url}: {e}")
        return None


def map_urls_by_titles(old_urls, new_urls, threshold=80):
    """
    Maps old URLs to new URLs based on their page titles using fuzzy matching.

    Parameters:
        old_urls (list): List of old URLs to map from.
        new_urls (list): List of new URLs to map to.
        threshold (int): Minimum similarity score to consider a match (0-100).

    Returns:
        DataFrame: A mapping of old URLs to new URLs based on page titles
        and match scores.

    Notes:
        - This script works well for small to medium sets of URLs
          (hundreds to a few thousand).
        - For very large migrations, consider batching, caching, or more
          advanced RapidFuzz APIs to improve performance.
    """
    old_titles = {url: fetch_page_title(url) for url in old_urls}
    new_titles = {url: fetch_page_title(url) for url in new_urls}

    mappings = []

    # Create mapping by fuzzy matching titles
    for old_url, old_title in old_titles.items():
        best_match = None
        highest_score = 0
        for new_url, new_title in new_titles.items():
            if old_title and new_title:
                # Calculate similarity score (0-100)
                score = fuzz.ratio(old_title, new_title)
                if score > highest_score:
                    highest_score = score
                    best_match = new_url

        # Add match details to the mapping
        mappings.append(
            {
                "Old URL": old_url,
                "Old Title": old_title,
                "New URL": best_match if highest_score >= threshold else None,
                "New Title": new_titles.get(best_match, None) if best_match else None,
                "Match Score": highest_score,
            }
        )

    return pd.DataFrame(mappings)


def read_urls_from_csv(file_path):
    """
    Reads old and new URLs from a CSV file.

    The CSV should have two columns: 'Old URL' and 'New URL'.

    Example format:
        Old URL,New URL
        https://oldsite.com/page1,https://newsite.com/page-a
        https://oldsite.com/page2,https://newsite.com/page-b
    """
    try:
        data = pd.read_csv(file_path)
        old_urls = data["Old URL"].dropna().tolist()
        new_urls = data["New URL"].dropna().tolist()
        return old_urls, new_urls
    except Exception as e:
        print(f"Error reading URLs from CSV: {e}")
        return [], []


if __name__ == "__main__":
    # Input and output file paths
    input_csv = "urls.csv"  # Replace with your CSV file path
    output_csv = "url_mapping.csv"

    # Read URLs from the CSV file
    old_urls, new_urls = read_urls_from_csv(input_csv)

    if not old_urls or not new_urls:
        print("No URLs found in the input file. Please check the CSV format.")
    else:
        # Generate the URL mapping
        url_mapping = map_urls_by_titles(old_urls, new_urls, threshold=80)

        # Save the mapping to a CSV file
        url_mapping.to_csv(output_csv, index=False)
        print(f"URL mapping saved to {output_csv}")
```

Steps to use the title-matching script

1. Install required libraries. Install the Python libraries if they are not already available in your environment:

```
pip install requests beautifulsoup4 pandas rapidfuzz
```

2. Prepare the input CSV. Create a CSV file (for example urls.csv) with two columns, Old URL and New URL. Each row should represent a potential mapping candidate between an old URL and the new URL that may replace it:

```
Old URL,New URL
https://oldsite.com/page1,https://newsite.com/page-a
https://oldsite.com/page2,https://newsite.com/page-b
https://oldsite.com/page3,https://newsite.com/page-c
```

The script will use the titles of these URLs to suggest the best match for each old URL.

3. Run the script. Save the script to a file, for example seo_migration_titles.py, in the same directory as your CSV file, and run:

```
python seo_migration_titles.py
```

4. Review the output. The script generates a file such as url_mapping.csv containing:

- Old URL
- Old Title
- New URL (if a match passes the threshold)
- New Title
- Match Score (0–100)

Use this as a starting point for your 301 redirect rules.
Make sure a human reviews the mappings before deploying them to production.

Why might some URLs be missing from the output?

- Threshold too high: if the similarity score does not meet the threshold (default is 80), the New URL will be None. Lower the threshold slightly (for example, to 70–75) if you want to see more candidate matches, then manually review.
- Titles not found: if the script fails to fetch page titles for the URLs (due to timeouts, blocked requests, incorrect URLs or empty titles), matching cannot proceed. Check those URLs manually or adjust your timeout and error handling.

Extending the script for content matching

To go beyond page titles, the script can also compare other on-page elements such as meta descriptions, H1s, body text and image alt attributes. This can be useful when titles are generic or have been rewritten during a redesign. The following script demonstrates a more in-depth content comparison.

Note that this version calculates a composite similarity score by summing the similarity of several components; it is not restricted to a 0–100 range. That means the threshold is a minimum total score rather than a simple percentage.

```python
import requests
from bs4 import BeautifulSoup
import pandas as pd
from rapidfuzz import fuzz


def fetch_page_content(url):
    """
    Fetches on-page content for a given URL, including title, meta
    description, H1, body text, and image alt attributes.
    """
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")

        # Extract relevant content
        title = soup.title.string.strip() if soup.title else None

        meta_description = soup.find("meta", attrs={"name": "description"})
        meta_description = (
            meta_description["content"].strip() if meta_description else None
        )

        h1 = soup.find("h1")
        h1 = h1.text.strip() if h1 else None

        # Join visible paragraph text into a single body string
        body_text = " ".join(p.get_text(strip=True) for p in soup.find_all("p"))

        images = [img["alt"] for img in soup.find_all("img", alt=True)]

        return {
            "title": title,
            "meta_description": meta_description,
            "h1": h1,
            "body_text": body_text,
            "images": images,
        }
    except Exception as e:
        print(f"Error fetching content for {url}: {e}")
        return None


def calculate_similarity(old_content, new_content):
    """
    Calculates a composite similarity score between two sets of page content.

    The score is a sum of:
    - Title similarity (0-100, if present)
    - Meta description similarity (0-100, if present)
    - H1 similarity (0-100, if present)
    - Body text similarity (0-100, if present)
    - Image alt text similarity (sum of best matches per image)

    As a result, the total score can exceed 100. In practice, strong matches
    often end up in the low hundreds, depending on how many components are
    present and how similar they are.
    """
    total_score = 0
    components = ["title", "meta_description", "h1", "body_text"]

    # Compare textual components
    for component in components:
        old = old_content.get(component, "")
        new = new_content.get(component, "")
        if old and new:
            total_score += fuzz.ratio(old, new)

    # Compare images (using alt text similarity)
    old_images = old_content.get("images", [])
    new_images = new_content.get("images", [])
    image_score = 0
    if old_images and new_images:
        for old_img in old_images:
            best_img_score = max(
                fuzz.ratio(old_img, new_img) for new_img in new_images
            )
            image_score += best_img_score
        total_score += image_score

    return total_score


def map_urls_by_content(old_urls, new_urls, threshold=200):
    """
    Maps old URLs to new URLs based on the closest match of on-page content.

    Parameters:
        old_urls (list): List of old URLs to map from.
        new_urls (list): List of new URLs to map to.
        threshold (int): Minimum composite similarity score to consider a match.

    Returns:
        DataFrame: A mapping of old URLs to new URLs based on content
        similarity scores.

    Notes:
        - Because the similarity score is a sum of multiple components,
          typical "good" matches may land somewhere between 200-400,
          depending on how much content is on the page.
        - Start with a threshold around 200-250, inspect the results,
          then adjust up or down based on your site.
    """
    old_contents = {url: fetch_page_content(url) for url in old_urls}
    new_contents = {url: fetch_page_content(url) for url in new_urls}

    mappings = []

    for old_url, old_content in old_contents.items():
        best_match = None
        highest_score = 0
        for new_url, new_content in new_contents.items():
            if old_content and new_content:
                score = calculate_similarity(old_content, new_content)
                if score > highest_score:
                    highest_score = score
                    best_match = new_url

        # Add match details to the mapping
        mappings.append(
            {
                "Old URL": old_url,
                "New URL": best_match if highest_score >= threshold else None,
                "Similarity Score": highest_score,
            }
        )

    return pd.DataFrame(mappings)


def read_urls_from_csv(file_path):
    """
    Reads old and new URLs from a CSV file.

    The CSV should have two columns: 'Old URL' and 'New URL'.
    """
    try:
        data = pd.read_csv(file_path)
        old_urls = data["Old URL"].dropna().tolist()
        new_urls = data["New URL"].dropna().tolist()
        return old_urls, new_urls
    except Exception as e:
        print(f"Error reading URLs from CSV: {e}")
        return [], []


if __name__ == "__main__":
    # Input and output file paths
    input_csv = "urls.csv"  # Replace with your CSV file path
    output_csv = "url_mapping_content.csv"

    # Read URLs from the CSV file
    old_urls, new_urls = read_urls_from_csv(input_csv)

    if not old_urls or not new_urls:
        print("No URLs found in the input file. Please check the CSV format.")
    else:
        # Generate the URL mapping
        url_mapping = map_urls_by_content(old_urls, new_urls, threshold=200)

        # Save the mapping to a CSV file
        url_mapping.to_csv(output_csv, index=False)
        print(f"URL mapping saved to {output_csv}")
```

By analysing these additional components, SEO specialists can ensure even closer matches between old and new URLs during migrations, particularly on content-heavy sites where titles alone are not enough.
Limitations, performance and best practice

These scripts are intended as practical aids to accelerate URL mapping, not as fully autonomous redirect engines. To keep migrations safe and aligned with best practice, keep the following in mind:

- Always perform human QA: treat the output CSV as a set of suggestions. Review high-value and borderline mappings manually before implementing redirects, and spot-check samples from lower-traffic areas.
- Scale and performance: both approaches do pairwise comparisons between old and new URLs, which means they are roughly O(n²). They are usually fine for hundreds or a few thousand URLs, but for very large sites you may need:
  - More efficient matching strategies (for example, RapidFuzz's process helpers, or blocking by directory or section).
  - Caching of fetched HTML to avoid repeated downloads.
  - Batching the migration by directory or site section.
- Respect crawling etiquette: when fetching pages at scale, make sure you:
  - Respect the site's robots.txt and any crawl-delay guidance.
  - Use a reasonable timeout and rate limiting.
  - Send an appropriate User-Agent string and avoid overwhelming servers.
- Redirect implementation: once mappings have been reviewed, implement 301 redirects in line with Google's site move and redirect guidance:
  - Avoid redirect chains and loops where possible.
  - Keep old URLs redirecting for a suitable period after the migration.
  - Monitor Search Console for crawl errors and coverage changes.

Frequently asked questions

Does this script replace a manual redirect audit?

No. The scripts are designed to reduce repetitive work and highlight likely matches, but a human still needs to review critical mappings, handle edge cases, and decide how to treat URLs that have no clear destination.

What similarity threshold should I use?

For the title-based script, a threshold of around 80 is a good starting point. If you find too few matches, experiment with 70–75 and review additional candidates manually.
For the content-based script, the score is a sum across multiple components, so typical strong matches may land in the 200–400 range. Start around 200–250, inspect the results, then adjust the threshold up or down based on how conservative you want to be.

Can I use these scripts for very large websites?

In principle, yes, but out of the box they are best suited to small-to-medium migrations. For large sites (tens of thousands of URLs or more), consider processing one section at a time, caching responses, and using more advanced RapidFuzz utilities to avoid doing every possible pairwise comparison.

Where should I implement the redirects?

That depends on your stack. Common options include web server configuration (for example, Apache or Nginx), a reverse proxy, a CMS-level redirect manager, or edge-worker logic on a CDN. Whatever the mechanism, make sure redirects are tested in a staging environment before going live.

External resources and further reading

- Requests quickstart – official docs
- Beautiful Soup 4 documentation
- pandas.read_csv – official docs
- RapidFuzz fuzz.ratio usage
- Google Search Central – Site moves with URL changes
- Google Search Central – Redirects and Google Search

---