# STRINGERSEO

> 20 years of experience delivering SEO and content marketing strategies. Bespoke SEO and digital marketing solutions! Get in touch to find out more.

---

## Pages

- [Content Strategy Services.](https://stringerseo.co.uk/expertise/content-strategy/): Bespoke content strategy & production for small businesses, SMEs and large brands. Contact us for a complimentary SEO growth blueprint!
- [Technical SEO Services.](https://stringerseo.co.uk/expertise/technical-seo/): Bespoke technical SEO & GEO services for small businesses, SMEs and large brands. Contact us for a complimentary SEO growth blueprint!
- [SEO Services in West Sussex, United Kingdom.](https://stringerseo.co.uk/expertise/seo-west-sussex-united-kingdom/): SEO training, consultancy and content services in West Sussex, United Kingdom. Get in touch today and find out more!
- [SEO Consultant in Brighton, United Kingdom.](https://stringerseo.co.uk/expertise/seo-brighton-united-kingdom/): Customer-focused SEO and content strategy consultancy services in Brighton, East Sussex. Get in touch today!
- [SEO Services in Worthing, West Sussex.](https://stringerseo.co.uk/expertise/seo-worthing-west-sussex/): SEO training, consultancy and content services in Worthing, West Sussex. Get in touch today and find out more!
- [SEO Services in Shoreham-by-Sea, West Sussex.](https://stringerseo.co.uk/expertise/seo-shoreham-by-sea/): SEO training, consultancy, and content services in Shoreham-by-Sea, West Sussex. Digital marketing consultancy. Get in touch to learn more.
- [Services.](https://stringerseo.co.uk/expertise/): SEO consultancy STRINGERSEO is now offering PPC campaign management and web development services for SBs and SMEs! Get in touch today!
- [Sitemap.](https://stringerseo.co.uk/sitemap/): Discover all the pages and blog posts on https://stringerseo.co.uk.
- [About.](https://stringerseo.co.uk/jonathan-stringer/): Jonathan Stringer is an SEO professional tailoring bespoke digital marketing campaigns for SBs, SMEs, and LEs in the UK. Get in touch today!
- [Contact.](https://stringerseo.co.uk/contact/): Get in touch if you would like to explore tailored SEO, PPC & web build solutions. Fill in the form below or email jonathan@stringerseo.co.uk.
- [SEO Guides & Resources.](https://stringerseo.co.uk/guides-resources/): Explore a wealth of in-depth articles, comprehensive guides, and real-world case studies to help you keep up with SEO.
- [SEO Case Studies & Results.](https://stringerseo.co.uk/case-studies/): SEO case studies about Jonathan Stringer collaborating with businesses to grow organic traffic. Get in touch for SEO consultancy!
- [STRINGERSEO Ltd Cookie Policy.](https://stringerseo.co.uk/cookie-policy-uk/): This Cookie Policy was last updated on 19 March 2024 and applies to citizens and legal permanent residents of the United Kingdom.

---

## Posts

- [How to Use SERP API for Keyword Research (Step-by-Step Guide with Examples)](https://stringerseo.co.uk/content/how-to-use-serpapi-in-keyword-research/): See an example of a Python script snippet that calls SERP API, and learn how you can use it in your SEO keyword research workflow.
- [How to Automate Keyword Research with Keywords Everywhere and Google Sheets](https://stringerseo.co.uk/technical/automate-keyword-research/): If you’re working on keyword research in Google Sheets, you know how tedious it can be to copy and paste keywords and...
- [10 Steps to Evaluate Your SEO Agency](https://stringerseo.co.uk/seo-ai/marketing/10-steps-to-evaluate-your-seo-agency/): Discover 10 practical steps to audit SEO agencies' work, set clear KPIs, demand meaningful reports and decide whether to find a new agency.
- [SEO Website & Blog Migrations: Checklist & Best Practice Guide](https://stringerseo.co.uk/technical/seo-blog-migrations/): Find out how to do an SEO blog or website migration. Simple process with a detailed checklist of actions to help nurture organic traffic.
- [Semantic SEO: What is it and How to Optimise for it?](https://stringerseo.co.uk/content/semantic-seo/): Find out what Semantic SEO is, and how you can optimise for it to achieve greater visibility in search results and AI Answer Engines.
- [How to Find Keyword Opportunities Using Google Search Console](https://stringerseo.co.uk/content/how-to-find-keyword-opportunities-on-google-search-console/): Find out how to use Google Search Console for keyword research to unlock opportunities for content optimisation or to create new pages.
- [How is SEO a Long-Term User-Focused Strategy?](https://stringerseo.co.uk/seo-ai/how-is-seo-a-long-term-user-focused-strategy/): Learn how SEO is a user-focused strategy, what techniques to avoid, and what questions help you understand if your content is helpful.
- [How to Use Structured Data for Local SEO](https://stringerseo.co.uk/content/how-to-use-structured-data-for-local-seo/): Learn how to apply structured data to help grow your local visibility for the different search terms that people use in Google!
- [How to Extract Canonical URLs in Google Sheets Using Python](https://stringerseo.co.uk/technical/how-to-extract-canonical-urls-in-google-sheets-using-python/): Learn how to extract canonical URLs in Google Sheets using Python. Useful for when IMPORTXML is being clunky for large URL lists!
- [How to Optimise a Product Data Feed for Google Merchant Center](https://stringerseo.co.uk/content/how-to-optimise-a-product-data-feed-for-google-merchant-center/): Learn how to optimise and create a product data feed following Google's best practice guidelines from Merchant Center.
- [How to Add an Author Bio on Author Archives in WordPress](https://stringerseo.co.uk/technical/how-to-add-an-author-bio-on-author-archives-in-wordpress/): Find out how to inject an author's biography underneath the H1 of author archive page templates in WordPress!
- [How to Add Image Links in a Custom WordPress XML Sitemap](https://stringerseo.co.uk/technical/how-to-add-image-links-in-a-wordpress-xml-sitemap/): If you have built a custom WordPress XML sitemap generator then this article shows you how to link to images from the post XML sitemap.
- [How to Structure a Blog Post](https://stringerseo.co.uk/content/how-to-structure-a-blog-post/): SEO tips for small businesses. Find out how to structure and optimise a blog post to generate organic website traffic.
- [How To Use Keywords Everywhere for Keyword Research](https://stringerseo.co.uk/content/keyword-research-using-keywords-everywhere/): Learn how to do keyword research using Keywords Everywhere. A tool to help you understand what keywords are being searched for in Google.
- [How to Build a Custom XML Sitemap in WordPress](https://stringerseo.co.uk/technical/how-to-build-a-custom-xml-sitemap-in-wordpress/): XML sitemaps are important for SEO; they provide a list of pages from your website for search engines to crawl...
- [How to Start a Basic SEO Strategy in 5 Steps](https://stringerseo.co.uk/content/seo-tips-for-small-business-owners/): Learn how to start an SEO strategy for your small business. Find out what SEO factors are important and help generate more website traffic.
- [How to Create a WordPress Plugin to Warn Content Editors About Long URLs](https://stringerseo.co.uk/technical/wordpress-plugin-warning-about-long-urls/): URLs are a very important part of user experience and SEO. Here's a WP plugin to warn admin users about long URLs.
- [How & Why Brands Are Winning With Reddit Marketing](https://stringerseo.co.uk/link-earning/why-brands-are-winning-with-reddit-marketing/): Reddit is the fastest-growing social media platform in the UK, with a user base that has grown 47% year-on-year as of 2024.
- [How to Automate & Extract Screaming Frog Issues into Google Sheets](https://stringerseo.co.uk/technical/how-to-automate-extract-screaming-frog-issues-into-google-sheets/): Learn how to automate the process of organising issue reports in Google Sheets by running a simple Python script!
- [How to Modify All WordPress Links for a Reverse Proxy Setup](https://stringerseo.co.uk/technical/how-to-modify-all-wordpress-links-for-a-reverse-proxy-setup/): If you are hosting your WordPress blog or website behind a reverse proxy under a different domain, you might experience...
- [HubSpot Blog Post Exports: How to Clean Them Up for WordPress Using Google Sheets](https://stringerseo.co.uk/technical/hubspot-blog-posts/): HubSpot blog post exports: find out how to clean them up before importing into WordPress. This technique is useful for blog migrations.
- [URL Redirect Mapping with Python for Website Migrations](https://stringerseo.co.uk/technical/seo-migration-automate-url-mapping-with-python/): Streamline your SEO migration with this Python script. Automate URL mapping using page titles, body content and fuzzy matching.

---

# Detailed Content

## Pages

> Bespoke content strategy & production for small businesses, SMEs and large brands. Contact us for a complimentary SEO growth blueprint!

- Published: 2025-06-25
- Modified: 2025-06-25
- URL: https://stringerseo.co.uk/expertise/content-strategy/

why it matters.

The best-written content can sit unread if it doesn’t resonate with a target audience or if it fails to guide them toward taking action. Our human-first content strategy targets search intent and helps foster decisions, with an aim to turn visitors into customers.

what we've achieved.

Leading end-to-end SEO & content strategies for Purpl Discounts, we grew organic traffic by 51.9% and conversions by 38.8% in the first half of 2025. find out more →

20 years' experience. user & data driven. measurable results.

what you'll gain.

content roadmap. A customised calendar of topics, formats and channels mapped to each stage of your customer journey and seasonal factors.

user-focused content. Jargon-free copy that holds readers' attention and drives them to engage with your brand or services.

competitive edge. In-depth gap analysis to fill topics and niches your competitors missed in order to claim untapped search visibility.

measurement & optimisation. Monthly updates and tests to keep your content fresh, relevant and aligned with evolving user needs & algorithms.

agile content process. what we offer.

research & discovery. This phase digs into the keywords people are actually searching for, not just the obvious ones, but the ones that signal real intent.
We use our own AI tools to size up the competition and figure out where the gaps are.

strategic planning. We’ll build out a clear content calendar with priorities, deadlines, and formats that make sense for your goals. Each brief includes target keywords, suggested headlines, and calls to action that aren’t just ticking boxes; they’re there to drive action. We also look closely at who we’re talking to. That means proper audience and demographic research, so the tone, topics and angles actually land. We’ll group themes into clusters and pillars to build momentum over time.

creation, integration, optimisation & distribution. We publish a mix of formats, from practical guides and listicles to trending news and narrative-led content, all designed to support your broader goals. Every asset is optimised for search, covering the essentials like title tags, meta descriptions, internal links and schema markup to maximise visibility. We also integrate rich media, like custom imagery, infographics or video content, to drive deeper engagement, add value and strengthen SEO performance. Content isn’t just created and left to sit. We distribute it across paid, owned and earned channels to help boost awareness, attract links, and drive meaningful interaction. And if there’s existing content that performed well in the past? We’ll look at ways to refresh or repurpose it so it continues to support your growth targets and KPIs.

measurement & reporting. You get a monthly dashboard showing what’s working across traffic, visibility and engagement, and where to tweak. From there, we make small but meaningful changes that keep performance moving in the right direction.

frequently asked questions. FAQs.

What makes an SEO content strategist different from a writer? An SEO content strategist does deep keyword and search intent research for content planning and implements optimisation best practices to help make sure content appears in organic search results, resonates with audiences and converts.

Is this service right for my business? We tailor content strategies to your goals and budget, focusing on the topics and keywords that drive the most impact, whether you’re a startup, small business, small-to-medium enterprise, or a global brand.

What are the 4 steps of content strategy? A straightforward four-step framework is: 1. Define your goals and audience. 2. Map topics to users and channels. 3. Launch content and distribute across paid, owned, and earned channels. 4. Measure and revise based on performance data.

---

> Bespoke technical SEO & GEO services for small businesses, SMEs and large brands. Contact us for a complimentary SEO growth blueprint!

- Published: 2025-06-24
- Modified: 2025-06-26
- URL: https://stringerseo.co.uk/expertise/technical-seo/

why it matters.

Even the most compelling content can underperform if search engine crawlers hit roadblocks or if your site loads slowly for users. Technical SEO lays a rock-solid foundation so Google (and your users) can reach every page, fast.

what we've achieved.

After our technical SEO migration and strategy, My Bespoke Room experienced a 21% average MoM growth in organic traffic in 2025. find out more →

20 years' experience. user & data driven. boost search visibility.

what you'll gain.

index more pages. Prevent lost opportunities: reduce indexing errors so every key piece of content and every key page appears in organic search.

faster page loads. Load in under 2 seconds on desktop and mobile to help keep bounce rates down and engagement up on both devices.

SEO & GEO visibility.
Gain visibility in Google's AI Overviews and generative AI platforms by implementing a mix of technical and content marketing strategies.

improved experiences. Hit Core Web Vitals thresholds on all the important KPIs for smooth performance across devices.

eliminate crawl errors. what we offer.

comprehensive website audit. We’ll run a deep audit of your site with industry-leading tools to identify crawl errors, broken links and any blockers hiding in your website's code. Then we’ll hand you a clear, prioritised list of fixes alongside strategic tips for bigger-picture improvements so you can start winning quick SEO gains today and plan for long-term health tomorrow.

user-first architecture. We'll work on your content and URL hierarchy, implement a pillar and cluster content strategy to drive sentiment, and fine-tune your internal links so users and crawlers alike can navigate through your site. We aim to achieve deeper crawl reach, smoother navigation and a happier audience that sticks around longer.

speed & performance tuning. From smart image compression and lazy-loading to caching and server-side tweaks, we’ll work towards reducing your load times, aiming for under 2 seconds for users on either a desktop or mobile device. Faster pages mean fewer bounces and more engagement.

core web vitals optimisation. We provide suggestions to fine-tune your CSS, JavaScript and resource loading patterns to meet Google’s thresholds for Largest Contentful Paint, First Input Delay and Cumulative Layout Shift. By delivering a smooth experience on smartphones and tablets, you’ll keep users engaged regardless of device.

structured data. We’ll craft and test the right schema markup, Product, Review, FAQ and more, so Google and generative AI platforms really “understand” your content and serve it for relevant queries.

website & blog migrations. Move domains or platforms without losing hard-won SEO equity. We plan and execute every 301 redirect, update your crawl map, validate canonical tags and run full staging tests so you retain rankings and traffic through launch and beyond.

ongoing monitoring & reporting. With continuous scans, real-time alerts and a live dashboard, we catch crawl spikes, Core Web Vitals regressions and broken links before they hurt you. Every month, you’ll get a concise report that maps technical health directly to your business goals.

frequently asked questions. FAQs.

What exactly is Technical SEO? It’s the practice of optimising your site’s infrastructure, crawling, indexing, load speed and user experience, to help improve how much of your content is indexed and ranking in search engine results.

How much will this cost? Bespoke. This really depends on the size of your website and how long it will take to complete the audit. We work on a bespoke model and will see what is possible with the budget that has been allocated. We use AI to save time and costs where possible, and aim to still deliver the best value to our customers.

How soon will I see results? Most clients notice a drop in crawl errors and speed improvements within 30 days, with search visibility gains becoming evident by month two or three. However, this really depends on how long it takes to implement the technical audit.

---

> Discover all the pages and blog posts on https://stringerseo.co.uk.

- Published: 2025-02-05
- Modified: 2025-02-08
- URL: https://stringerseo.co.uk/sitemap/

Browse https://stringerseo.co.uk

Pages

- About.
- Contact.
- SEO Case Studies & Results.
- SEO Consultant in Brighton, United Kingdom.
- SEO Guides & Resources.
- SEO Services in Shoreham-by-Sea, West Sussex.
- SEO Services in West Sussex, United Kingdom.
- SEO Services in Worthing, West Sussex.
- Services.
- Sitemap.
- STRINGERSEO Ltd Cookie Policy.

Blog Posts

- How & Why Brands Are Winning With Reddit Marketing
- How is SEO a Long-Term User-Focused Strategy?
- How to Add an Author Bio on Author Archives in WordPress
- How to Add Image Links in a Custom WordPress XML Sitemap
- How to Automate & Extract Screaming Frog Issues into Google Sheets
- How to Build a Custom XML Sitemap in WordPress
- How to Clean Up HubSpot Blog Post Exports for WordPress Using Google Sheets
- How to Create a WordPress Plugin to Warn Content Editors About Long URLs
- How to Extract Canonical URLs in Google Sheets Using Python
- How to Find Keyword Opportunities Using Google Search Console
- How to Modify All WordPress Links for a Reverse Proxy Setup
- How to Optimise a Product Data Feed for Google Merchant Center
- How to Start a Basic SEO Strategy in 5 Steps
- How to Structure a Blog Post
- How To Use Keywords Everywhere for Keyword Research
- How to Use Structured Data for Local SEO
- Semantic SEO: What is it and How to Optimise for it?
- URL Redirect Mapping with Python for Website Migrations
- What are Canonical Tags and Why Use them?

---

> Get in touch if you would like to explore tailored SEO, PPC & web build solutions. Fill in the form below or email jonathan@stringerseo.co.uk.

- Published: 2025-02-05
- Modified: 2025-02-08
- URL: https://stringerseo.co.uk/contact/

Get in touch by filling out the form below or email jonathan@stringerseo.co.uk.

* Contact form secured by Jetpack.

---

> Explore a wealth of in-depth articles, comprehensive guides, and real-world case studies to help you keep up with SEO.

- Published: 2025-02-05
- Modified: 2025-03-06
- URL: https://stringerseo.co.uk/guides-resources/

Explore articles and guides designed to empower you with the knowledge and confidence to tackle SEO, content, and digital marketing projects. Whether you're a beginner looking for step-by-step instructions or an experienced enthusiast seeking expert insights, these resources will help you master new skills, troubleshoot challenges, and bring your ideas to life.

---

> SEO case studies about Jonathan Stringer collaborating with businesses to grow organic traffic. Get in touch for SEO consultancy!

- Published: 2024-03-15
- Modified: 2025-06-25
- URL: https://stringerseo.co.uk/case-studies/

We work with a diverse range of businesses, from ambitious start-ups, small local businesses (SBs) and dynamic small-to-medium enterprises (SMEs) to well-established large enterprises (LEs). STRINGERSEO's experience spans industry-leading brands such as Zoopla, Hamptons International, Pepsi Max, Land Rover, Mazda, Honda, Valusys, REKKI, Dexerto, and many more.

2025, My Bespoke Room: Developed a new blog for My Bespoke Room and led the SEO migration from a subdomain to the main website. The development and migration process took two months, resulting in an initial 6% uplift and then a 21% MoM uplift in organic traffic post-migration. MyBespokeRoom.com

2024 to 2025, Purpl Discounts: Leading end-to-end SEO strategy, content, technical and editorial, to enhance accessibility and conversions for disabled consumers. Grew organic traffic by 51.9% and conversions by 38.8% in the first half of 2025. Purpldiscounts.com

2025, KaizenJDM: Web development in progress. Kaizenjdm.com
2024, STL Training: Worked alongside the digital marketing team at STL Training to ensure the website adhered to 2024's new data privacy laws for third-party cookies. STL-Training.co.uk

2023 to 2024, Eventbrite: Worked as part of the Foundation Marketing team and put together a strategy to recover performance after the core algorithmic and helpful content updates in 2023. Eventbrite.com

2022 to 2023, eBay: We launched the eBay sneaker platform in the USA as a POC (proof of concept) for future content development. This drove a 5x increase in organic visits from the end of May 2023. ebay.com/sneakers

2022 to 2023, PHIN: SEO consultancy focused on optimising content for audiences searching for private healthcare information. This grew clicks at an average rate of 10% MoM (month-on-month) in H2 2022. In 2023, worked on improving user experience metrics. PHIN.org.uk

2022, Dexerto: We set an EEAT and technical SEO strategy that improved performance by 22%, comparing the periods before and after Google released its core, helpful content and product reviews algorithm updates in September 2022. This contributed to a record number of visits ahead of Black Friday 2022. Dexerto.com

2021, Boomin: While the business was trading, we made the website technically SEO-ready and wrote evergreen content before its launch. Traffic grew at an average rate of 6% DoD (day-on-day) at launch. Boomin.com

2020 to 2021, REKKI: We were able to connect with new audiences by pragmatically publishing new content on a UK & USA location level. Performance improved by +85% YoY in just 3 months. Rekki.com/food-wholesalers

2019 to 2020, MyVoucherCodes: We restored and published new content to recover organic search visibility after a two-year decline. Search rankings improved by 60% in 7 months and traffic was up 12% MoM on average throughout this period. Myvouchercodes.co.uk

2016 to 2019, Zoopla Property Group: Traffic grew by 6% for Zoopla.co.uk, 41% for Primelocation.com, and 62% for Smartnewhomes.com by integrating product, engineering, and marketing into a cross-functional team. Working closely with social, content, and PR teams, we learnt what mattered to our audience by understanding their pain points in the property journey, and how they searched for information. This allowed us to create location-specific content that genuinely helped users, leading to stronger engagement. Zoopla.co.uk

---

## Posts

> See an example of a Python script snippet that calls SERP API, and learn how you can use it in your SEO keyword research workflow.

- Published: 2025-08-28
- Modified: 2025-08-29
- URL: https://stringerseo.co.uk/content/how-to-use-serpapi-in-keyword-research/
- Categories: Content

Keyword research is the backbone of any strong SEO strategy, but doing it manually can be time-consuming and limited in scope. This is where SERP API comes in. SERP API allows users to query Google’s search results programmatically, providing structured data such as organic listings, People Also Ask questions, and related searches. For keyword research, this means automation, scalability, and real-time accuracy without the headaches of scraping or dealing with CAPTCHAs. In this guide, you’ll learn how to set up SERP API, make your first requests, and apply the results to practical keyword research workflows.

1. What is SERP API and Why Use It for Keyword Research?

In simple terms, SERP API is a tool that lets you fetch Google Search results in JSON format.
Instead of manually running searches and copying data into a spreadsheet, SERP API does the heavy lifting and delivers everything directly to your code or database.

Key benefits:

- Automation – run hundreds or thousands of queries at scale.
- Accuracy – real-time results straight from Google.
- Efficiency – avoid manual scraping issues and CAPTCHAs.

Compared with traditional SEO tools such as Ahrefs or SEMrush, SERP API doesn’t provide search volumes, but it gives unmatched flexibility for extracting SERP features and building custom keyword workflows.

2. Setting Up SERP API

Step 1: Sign up and get an API key. Visit serpapi.com and create an account. You’ll be issued an API key, which is required for all requests.

Step 2: Install the libraries.

For Python: `pip install google-search-results`

For Node.js: `npm install google-search-results-nodejs`

Step 3: Test your first query. Here’s a simple Python example:

```python
from serpapi import GoogleSearch

params = {
    "engine": "google",
    "q": "best SEO tools",
    "api_key": "your_api_key"
}
search = GoogleSearch(params)
results = search.get_dict()
print(results)
```

This will return a JSON object containing organic results, related searches, and more.

3. Practical Keyword Research Workflows with SERP API

a) Extract “People Also Ask” Questions

SERP API can return the “People Also Ask” box, which is a goldmine for long-tail keyword ideas. Example JSON snippet:

```json
{ "people_also_ask": [ ... ] }
```

These can be repurposed as blog topics or FAQs targeting informational queries.

b) Collect “Related Searches”

Google often displays related search suggestions at the bottom of the SERP. Python snippet:

```python
related = results.get("related_searches", [])
for r in related:
    print(r)
```

Use this to identify semantic variations and cluster them into topical maps.

c) Analyse Competitor Rankings

SERP API makes it easy to extract the top organic results for a keyword.

```python
organic_results = results.get("organic_results", [])
for res in organic_results:
    print(res.get("position"), res.get("link"))
```

This allows you to track which competitors consistently appear and reverse-engineer their content strategy.

d) Automating Keyword Clusters

A powerful workflow is clustering keywords based on SERP overlap. Pseudocode:

```text
For each keyword:
    Pull top 10 results via SERP API
    Compare overlap with other keywords
    Group into clusters if 60%+ of results match
```

This helps build content silos around keyword groups rather than isolated terms.

4. Visualising the Results

SERP API data is best presented in structured formats. For example:

- People Also Ask extracted questions (table or list).
- Related searches grouped by intent (spreadsheet).
- Keyword clusters displayed as a chart.

5. Tips & Best Practices

- Respect rate limits – avoid hitting the API too frequently without batching.
- Store results – save outputs in a database or Google Sheets for later use.
- Combine with other tools – enrich SERP API data with search volume or CPC data from Google Ads or Keywords Everywhere.

6. Limitations of SERP API for Keyword Research

- Cost – usage is based on credits, so large-scale projects can become expensive.
- No search volumes – combine with third-party APIs for complete keyword metrics.
- Technical setup – requires coding knowledge to get started effectively.

Wrapping Up

SERP API is a powerful tool for keyword research, offering scalable, automated, and highly flexible access to real-time Google data. By following the steps in this guide, you can:

- Extract “People Also Ask” questions.
- Gather related search terms.
- Track competitor rankings.
- Build automated keyword clusters.
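To make that last point concrete, here is a minimal Python sketch of the SERP-overlap clustering idea from section d). It reuses the google-search-results client from the setup steps; the seed keywords, the placeholder API key and the 60% threshold are illustrative assumptions rather than fixed recommendations.

```python
from serpapi import GoogleSearch

API_KEY = "your_api_key"  # placeholder - swap in your own SerpApi key


def top_urls(keyword, limit=10):
    """Fetch the top organic result URLs for a keyword via SERP API."""
    params = {"engine": "google", "q": keyword, "api_key": API_KEY}
    results = GoogleSearch(params).get_dict()
    return {r["link"] for r in results.get("organic_results", [])[:limit] if "link" in r}


def cluster_keywords(keywords, threshold=0.6):
    """Group keywords whose top results overlap by at least the threshold."""
    serps = {kw: top_urls(kw) for kw in keywords}
    clusters = []
    for kw, urls in serps.items():
        placed = False
        for cluster in clusters:
            seed_urls = serps[cluster[0]]
            overlap = len(urls & seed_urls) / max(len(seed_urls), 1)
            if overlap >= threshold:
                cluster.append(kw)
                placed = True
                break
        if not placed:
            clusters.append([kw])
    return clusters


if __name__ == "__main__":
    seeds = ["best seo tools", "seo software", "keyword research tools"]  # example seeds
    for group in cluster_keywords(seeds):
        print(group)
```

Each keyword is only compared against the first member of an existing cluster, which keeps the sketch short; a production version might compare against every member or build a full overlap matrix first.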
Try out the code examples above with your own seed keywords and start building smarter keyword strategies today.

References

- Official SERP API Documentation
- SERP API GitHub Repository

---

- Published: 2025-07-01
- Modified: 2025-07-01
- URL: https://stringerseo.co.uk/technical/automate-keyword-research/
- Categories: Technical

If you’re working on keyword research in Google Sheets, you know how tedious it can be to copy and paste keywords and search volume data manually, especially if you need to switch between Chrome tabs and third-party tools. This guide shows you how to link Google Sheets to the Keywords Everywhere API using Python, fetch search volumes, and automatically paste the output onto your Google Sheet. There’s a no-code solution using Make to achieve the same or a similar output with various other keyword research tools, but we’ll save explaining that for a rainy day.

What You’ll Learn

- How to read keywords from a Sheet with gspread
- How to call the Keywords Everywhere API using form-encoded requests
- How to write the results back into your spreadsheet
- How to share your Sheet with a service account, manually and programmatically

What You Need

- Python 3.7 or higher
- A Keywords Everywhere account and valid API key
- A Google Cloud service account JSON file (with Sheets API enabled)
- A Google Sheet containing your list of keywords

Grant Your Service Account Access

1. Open your credentials.json and copy the client_email value (e.g. my-sa@project.iam.gserviceaccount.com).
2. Open your Google Sheet in the browser and click Share (top right).
3. Paste that service account email, set its role to Editor, and click Send.

Automated Sharing (Advanced)

If you prefer code to clicks, enable the Drive API and add this at the top of your script:

```python
from googleapiclient.discovery import build
from google.oauth2 import service_account

# Load Drive API credentials
SCOPES = ['https://www.googleapis.com/auth/drive']
creds_drive = service_account.Credentials.from_service_account_file(
    'credentials.json', scopes=SCOPES)
drive_service = build('drive', 'v3', credentials=creds_drive)

SPREADSHEET_ID = 'YOUR_SHEET_ID'
SA_EMAIL = 'my-sa@project.iam.gserviceaccount.com'
permission = {
    'type': 'user',
    'role': 'writer',
    'emailAddress': SA_EMAIL
}
drive_service.permissions().create(
    fileId=SPREADSHEET_ID,
    body=permission,
    fields='id',
    sendNotificationEmail=False
).execute()
```

The Python Script

Save this as keywords_to_gsheets.py and update the configuration at the top.

```python
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import requests, urllib.parse, time

# === CONFIGURATION ===
SPREADSHEET_NAME = "Your Google Sheet Name"
WORKSHEET_NAME = "Automation"
INPUT_RANGE = "A2:A100"
OUTPUT_START_CELL = "B2"
API_KEY = "YOUR_REAL_API_KEY"
DELAY_SECONDS = 1

# === Authenticate with Sheets ===
scope = ["https://spreadsheets.google.com/feeds",
         "https://www.googleapis.com/auth/drive"]
creds = ServiceAccountCredentials.from_json_keyfile_name(
    "credentials.json", scope)
client = gspread.authorize(creds)
sheet = client.open(SPREADSHEET_NAME).worksheet(WORKSHEET_NAME)

# === Read keywords ===
cells = sheet.range(INPUT_RANGE)
keywords = [cell.value for cell in cells if cell.value]

# === Helper: cell label → row/col ===
def cell_to_coords(label):
    col_str = ''.join(filter(str.isalpha, label)).upper()
    row = int(''.join(filter(str.isdigit, label)))
    col = 0
    for ch in col_str:
        col = col * 26 + (ord(ch) - ord('A') + 1)
    return row, col

start_row, start_col = cell_to_coords(OUTPUT_START_CELL)

# === Fetch volume from Keywords Everywhere ===
def get_search_volume(keyword):
    url = "https://api.keywordseverywhere.com/v1/get_keyword_data"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Accept": "application/json",
        "Content-Type": "application/x-www-form-urlencoded"
    }
    params = {
        "kw": keyword.encode("utf-8"),
        "country": "gb",
        "currency": "gbp",
        "dataSource": "cli"
    }
    data = urllib.parse.urlencode(params)
    resp = requests.post(url, headers=headers, data=data)
    resp.raise_for_status()
    result = resp.json()
    keyword_data = result.get("data")
    if keyword_data and isinstance(keyword_data, list):
        return keyword_data[0].get("vol", 0)
    return 0

# === Main loop ===
for idx, kw in enumerate(keywords):
    vol = get_search_volume(kw)
    print(f"{kw}: {vol}")
    sheet.update_cell(start_row + idx, start_col, vol)
    time.sleep(DELAY_SECONDS)

print("All done: volumes are in Google Sheets.")
```

Sample Google Sheet Layout

| Keyword (Column A) | Search Volume (Column B) |
| --- | --- |
| seo keyword research tools | 2,400 |
| automate keyword research | 140 |
| keyword research | 3,600 |

Troubleshooting Tips

- 401 Unauthorized: Make sure you use Authorization: Bearer YOUR_REAL_API_KEY.
- All volumes zero: Confirm you’re parsing the first item in the data list, not doing a flat dict lookup.
- Encoding errors: Always encode keywords as UTF-8 in form data.

Future Enhancements

- Write Month-on-Month (MoM), CPC and competition into adjacent columns
- Cache results locally to save API credits
- Use aiohttp for parallel requests
- Get related/suggested keywords and people also search for data
- Get domain and URL keywords for competitors

How this Script Helps You

This setup will automate keyword volume lookups, feed data into keyword research reports, and free up time for higher-value analysis. Hope you enjoy it!

---

> Discover 10 practical steps to audit SEO agencies' work, set clear KPIs, demand meaningful reports and decide whether to find a new agency.

- Published: 2025-06-30
- Modified: 2025-06-30
- URL: https://stringerseo.co.uk/seo-ai/marketing/10-steps-to-evaluate-your-seo-agency/
- Categories: Marketing

When you hand over your website’s SEO to an agency, you’re not only investing your marketing budget into the agency, but placing your hopes for growth in their hands. It’s only natural to feel uneasy if you’re not seeing obvious progress or if the work sounds like a load of jargon. A recent Reddit post from a marketing manager summed up this worry: “Should I fire my agency? I’m not satisfied with the deliverables but also don’t know enough about SEO to know if I’m in the right. What questions should I be asking them?” This article breaks down how you can find out what’s really happening under the bonnet, and decide whether it’s time to keep going or politely wave the marketing agency goodbye.

1. Nail down exactly what you expected

Before you decide to part ways, get crystal clear on what was promised. Most SEO contracts cover these core areas:

- Technical audits: Does your website load fast, can search engines crawl it, and do pages link to one another?
- On-page optimisations: Are titles, headings and meta descriptions relevant for target audiences who use search engines?
- Content plans: A roadmap of blog posts, landing-page copy or different content formats.
- Off-page work: Link-building, outreach, multi-channel content distribution and partnerships.
- Reporting & analysis: Monthly snapshots of traffic, rankings, and conversions.

Review your original proposal or SOW (scope of work) and tick off each deliverable. If anything is missing, that’s where your conversation should start.

2. Turn vague hopes into solid KPIs

“I want more traffic” isn’t a KPI; it’s a desire.
Good SEO feels part art, part science, but its performance must be based on measurable goals and metrics. Ask your agency:

- What exactly are we targeting? – “Grow organic sessions by 20% in six months,” for example.
- Which metrics matter? – Organic clicks, click-through rate (CTR), goal completions in Google Analytics.
- How often will we review progress? – Monthly calls and reports are the norm.

Tools like Google Search Console (GSC) and Google Analytics 4 (GA4) can really help you here. GSC shows you how many times your site appears in Google, how often people click, and where you rank for key terms. GA4 tracks user journeys and helps tie organic visits back to business goals, like leads or sales.

3. Insist on meaningful reports

A slide deck full of “Rankings: Up 2 spots” is not an actionable insight. Your reports should cover:

- Traffic trends: Users, sessions and pageviews over time.
- Keyword movement: Wins and losses on your priority terms.
- Technical health: PageSpeed scores, usability errors, crawl issues.
- Content stats: Which pages got the most eyeballs and engagement.
- Backlink data: New links, lost links, and the authority of linking domains.
- Conversions: Leads, sales or any action you care about.

If you don’t see these, ask for a report template that does. Better yet, agree on a fixed delivery date, like the first week of each month, so everyone’s aligned.

4. Own your own data

Nothing feels more helpless than watching your agency click through your Google Analytics dashboard while you hover in the background. Get direct access:

- Google Search Console: Grant your team at least full user access.
- GA4: Make yourself an “Editor” or “Administrator”.
- CMS, hosting and domain registrar: Keep those credentials in a shared vault you control.
- SEO tools: If they’re using Ahrefs, SEMrush or Moz, ask for multi-user access.

When the data’s in your hands, you’re less dependent, and you’ll spot any gaps in their work faster.

5. Do a quick technical health-check

You don’t need to be a dev expert to spot glaring problems. Find an agency that is offering a complimentary audit to identify issues such as:

- Broken links/redirect chains
- Missing meta tags
- Uncrawlable pages
- Slow-loading pages
- Absent or incorrect schema markup

Google’s own SEO Starter Guide covers these basics, and your agency should have walked you through each fix they implemented or identified. If they can’t, that’s a red flag.

6. Peek under the bonnet of your content

Good writing is non-negotiable. If you’re seeing page-after-page of me-too blog posts, ask:

- How are keywords chosen? – And what search intent do they serve?
- Can I review outlines before drafting starts? – A quick glance at H2s and meta descriptions can save you from rewrites.
- What’s the approval workflow? – Editorial control is your safety net.

Remember: stuffing a page with keywords without satisfying user needs won’t stick. As Google puts it, “use words that people would use to look for your content, and place those words in prominent locations on the page”.

7. Scrutinise link-building methods

Backlinks help move the SEO needle, but only if they’re high-quality and earned legitimately. Probe your agency:

- Which sites are they targeting? – Industry blogs, local media, reputable directories?
- What’s their outreach approach? – Personalised emails, content partnerships, sponsorships?
- How do they vet link quality? – Domain Authority, relevance, traffic metrics?

If they hint at bulk buying or private-blog networks, walk away.
Google’s guidelines are crystal clear: “Buying or selling links for ranking purposes ... is considered link spam”.

8. Benchmark against competitors

A good agency doesn’t just work in a vacuum; they know how you stack up. Ask for a competitor analysis that shows:

- Keyword gaps: Terms they rank for that you don’t.
- Content gaps: Topics they cover, and you’re missing.
- Backlink comparison: How many and how authoritative their links are.
- Technical edge: Who loads faster, who’s more mobile-friendly.

No benchmark? No strategy.

9. Clear up communication

Frequent, two-way chat keeps projects on track. Agree on:

- Update cadence: Weekly emails? Bi-weekly calls? Monthly deep-dives?
- Primary contacts: Who does what on their side and who on yours.
- Escalation steps: If something goes wrong, who’s the backup?

If you’re waiting days for a simple status update, they’re not making you a priority.

10. Decide and move forward

Armed with these questions and checks, you’re ready:

- Raise your concerns: A confident agency will welcome feedback.
- Set a corrective window: “Show improvement in 30–60 days, or we part ways.”
- Plan your exit: Know your notice period, data ownership and any fees.

If they still fall short, and you’ve documented everything, you can end the relationship cleanly, without burning bridges.

Handy links for DIY or verification

- Google SEO Starter Guide (official basics)
- Search Console Help (performance reports)
- Google’s Link Schemes Policy (avoid penalties)

SEO doesn’t have to be a black box. By asking the right questions, insisting on transparency and keeping a close eye on data, you’ll transform that uneasy gut feeling into a clear path forward, whether that leads to renewed collaboration or a fresh start.

---

> Find out how to do an SEO blog or website migration. Simple process with a detailed checklist of actions to help nurture organic traffic.

- Published: 2025-06-24
- Modified: 2025-11-28
- URL: https://stringerseo.co.uk/technical/seo-blog-migrations/
- Categories: Technical

A blog or website migration either runs smoothly or turns into a headache. When you move to a new content management system, consolidate domains, or overhaul your site architecture, you take on a project that demands careful planning. Handle the change well and you improve user experience, preserve rankings, and set the stage for future growth. Rush it and you risk lost traffic, broken links, and a drop in search visibility. This guide supports digital marketing managers, developers, and website owners. It outlines a clear, actionable SEO migration framework based on industry best practices. Every step follows Google’s official guidance to help you manage a smooth transition and keep search performance on track.

Key points

- Plan the migration around a clear keyword strategy and content map before changing any URLs or templates.
- Give every important page on the old site a clear destination on the new site and use 301 or 308 redirects for permanent moves.
- Allow genuinely retired pages to return a 404 or 410 status instead of forcing irrelevant redirects.
- Clean up legacy redirect chains so every redirect jumps straight to the final URL.
- Update XML sitemaps, canonicals, and internal links so they reflect the new URL structure from launch day.
- Migrate images, metadata, and structured data carefully to protect visibility in Google Search and Google Images.
- Review and, where possible, update high-value backlinks so they point directly at the new URLs.
- Monitor Google Search Console and analytics closely for several weeks after launch and fix issues as soon as they appear.

Table of contents

- Revisit your keyword research
- Identify long-tail opportunities
- Migrate blog posts from HubSpot to WordPress
- Build a full URL mapping strategy
- Audit existing redirects
- Decide on your AMP strategy
- Fix redirect chains before launch
- Update your XML sitemap
- Migrate and optimise image URLs
- Restructure URLs with SEO in mind
- Find and fix 404 errors
- Update your most valuable backlinks
- Post launch: monitor and optimise
- Final checklist

Revisit your keyword research

Before you change anything, revisit your keyword strategy. A migration gives you the perfect moment to check whether current targeting still aligns with business goals. Run an audit that covers:

- Top-performing pages and terms: Which keywords currently drive traffic and conversions?
- Keyword gaps: Which topics and questions sit in the backlog with no content?
- Business alignment: Does your targeting still match the right intent and audience segments?

Use tools such as Google Search Console, Ahrefs, and SEMrush to gather data. Map each primary keyword to a page on the new site so you preserve visibility when URLs change.

Identify long-tail opportunities

Long-tail keywords usually bring lower competition and higher intent. They often deliver quick wins that support traffic growth while the migration settles. Look for:

- Under-optimised pages or blog posts with steady impressions but weak click-through rates.
- Pages that lack clear metadata, headings, or internal links.
- FAQs and support content that you can structure more clearly or expand into guides.

When you target these opportunities early, you build momentum after migration and support your wider SEO roadmap.

Migrate blog posts to WordPress

If you plan a platform switch, such as moving from HubSpot to WordPress, treat the blog as a priority. Blog content can often generate a large share of organic sessions and helps you connect with your target audience to drive conversions. Use this process to handle the move:

1. Export all content, including metadata, authorship, categories, tags, and publish dates.
2. Rebuild formatting inside the new theme so readers still enjoy a clear, consistent layout.
3. Update internal links so they match the new permalink structure.
4. Refresh meta titles and descriptions for each post where performance looks weak.
5. Set up 301 or 308 redirects from each old HubSpot URL to the correct WordPress equivalent.
6. Check internal linking carefully and keep anchor text relevant.

Strong internal links protect user journeys and help search engines rediscover key posts quickly.

Build a full URL mapping strategy

Plan the redirect map before you touch anything in production. Give every important page on the old site a clear destination on the new site so you protect rankings and send users to the right content. Shape your redirect map so it:

- Covers all live URLs that still offer value, not just the top performers.
- Uses 301 or 308 redirects (permanent) when you move content for the long term.
- Reflects the new site structure and folder hierarchy.

When a page no longer has a useful equivalent, let that URL return a 404 or 410 status rather than forcing a redirect to something unrelated. This approach lines up with Google’s guidance on 404 pages and soft errors. If your blog content now lives on a new domain, point redirects straight to the new location rather than through intermediary URLs or catch-all pages.
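Once that map exists, it is worth checking it mechanically before and after launch. The sketch below is a minimal, hedged Python example: it assumes a hypothetical redirects.csv with old_url and new_url columns, and simply confirms that each legacy URL reaches its planned destination in a single permanent hop.

```python
import csv
import requests

# Assumed illustrative input: redirects.csv with columns old_url,new_url
with open("redirects.csv", newline="") as f:
    for row in csv.DictReader(f):
        old_url, expected = row["old_url"], row["new_url"]
        # Follow the redirect chain and record every hop
        resp = requests.get(old_url, allow_redirects=True, timeout=10)
        hops = resp.history
        first_status = hops[0].status_code if hops else resp.status_code
        issues = []
        if resp.url.rstrip("/") != expected.rstrip("/"):
            issues.append(f"lands on {resp.url}, expected {expected}")
        if len(hops) > 1:
            issues.append(f"{len(hops)} hops instead of 1")
        if hops and first_status not in (301, 308):
            issues.append(f"first hop is {first_status}, not 301/308")
        print(old_url, "OK" if not issues else "; ".join(issues))
```

Anything this flags, a wrong destination, a chained redirect, or a temporary status code, feeds straight into the redirect audit described next.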
Audit existing redirects Many sites already carry several layers of redirects from previous restructures. If you ignore them, they slow down responses and dilute link equity. Use a crawler such as Screaming Frog to: Identify all existing redirects and the chains they create. Update internal links so they point directly at the final live URL. Remove or consolidate redundant redirects where they no longer serve a purpose. This work gives the new site a clean redirect structure and helps search engines crawl more quickly. Decide on your AMP strategy If the current setup includes Accelerated Mobile Pages (AMP), decide how you want to handle them on the new platform before you build templates. You can: Migrate AMP content to WordPress and keep AMP templates that still follow AMP HTML requirements. Retire AMP and redirect AMP URLs to the canonical responsive version instead. In both cases, work through these steps: Audit all AMP URLs and note which ones still matter for organic performance. Implement 301 or 308 redirects from each AMP URL to its final destination. Use the Google AMP validator to confirm that any remaining AMP pages pass validation. Keep canonical tags consistent and clear so Google understands which version of a page you want to show in search. Fix redirect chains before launch Redirect chains drain crawl budget and slow down users. Before launch, run tests and make sure every redirect jumps straight to the final URL. Look for: Redirects that hop through several URLs before they settle. Mixed redirect types, such as a 302 that points to a 301 target. Internal links that still reference old URLs instead of the final ones. Clean chains wherever you spot them. When every redirect moves in a single hop, you improve crawl efficiency, response times, and user experience. Update your XML sitemap As soon as the new site goes live, adjust XML sitemaps so they match the new URL structure. Treat the sitemap as a live reflection of your canonical URLs, not as a historical record. Work through this checklist: Remove old, redirected, or 404 URLs from all XML sitemaps. Keep only URLs that return a 200 status and sit in an indexable state. Check canonical tags so each sitemap URL declares the correct canonical version. Submit the new sitemap in Google Search Console. Use the Page Indexing report in Search Console to track indexation trends in the weeks after launch. This report highlights coverage issues, soft 404s, and other crawl problems that need attention. Migrate and optimise image URLs Images influence search visibility, user engagement, and perceived quality. Treat them as part of the SEO migration, not as an afterthought. During the migration, you: Map old image file paths to new file paths so you avoid broken images. Update all HTML, CSS, and JavaScript references so they point at the new URLs. Reapply descriptive alt text and image-related structured data (for example ImageObject inside Article or Product schema). Create and submit an image sitemap if images play a major role in discovery. Google Image Search often delivers incremental traffic and assists conversions, so this work supports more than accessibility and design. Restructure URLs with SEO in mind If you plan to refactor URL patterns, use the migration window to introduce cleaner, more descriptive slugs. Clear URLs help users and search engines understand content quickly. Follow these principles: Use lowercase, hyphenated words in every slug. Strip out unnecessary numbers, parameters, tracking codes, or session IDs. 
Reflect the main topic of the page inside the slug. For example, replace example.com/blog-post-38294-xyz/ with example.com/inspiration/bedroom-design-ideas/. Once you finalise the new structure, update redirects, sitemaps, internal links, and structured data so they all reference the new patterns.

Find and fix 404 errors

After launch, expect to uncover a few broken URLs. Treat 404s as signals, not as automatic failures. Use a crawler to:

- Identify 404 responses that affect pages you still want to serve.
- Create new redirects for any missed pages that have obvious equivalents.
- Update internal links that still point to deleted or moved URLs.
- Allow 404s for content that you retired intentionally and that no longer deserves a replacement.

Google treats normal 404s as part of the web. Use Google Search Console’s Page Indexing and crawl reports to spot new 404 issues that appear after launch and decide which ones need action.

Update your most valuable backlinks

External links still act as one of the strongest signals in organic search. A migration gives you a reason to tidy them up. Start by:

- Exporting backlink data from tools such as Ahrefs or Google Search Console.
- Identifying links that still hit old URLs, especially those that now redirect.
- Prioritising high-authority referrers and pages that drive engaged traffic or conversions.

Reach out to those partners and request updates so their links point directly to the new URLs. Redirects will still catch legacy traffic, but direct links send a stronger signal and often speed up reindexing of key pages.

Post launch: monitor and optimise

A successful migration continues beyond launch day. Over the following weeks, track performance and fix any issues as soon as they appear. Monitor:

- Crawling and indexing behaviour in Google Search Console.
- Organic traffic, engagement, and conversions in GA4.
- Keyword performance using Search Console and rank-tracking tools.
- Errors or warnings related to response codes, structured data, and Core Web Vitals.

You usually see some volatility in the first two to four weeks after launch. Treat this timeframe as a rule of thumb rather than a promise. Focus on resolving issues quickly and continue to optimise the new setup as fresh data arrives.

Example: On a recent 10,000-URL blog migration, a clear redirect map, early 404 fixes, and weekly Search Console checks helped maintain around 95% of organic traffic within six weeks, with several long-tail pages improving on previous rankings.

Useful Google resources

- SEO Starter Guide
- Best practices for site moves
- Guidance on 404 pages
- Page Indexing report in Search Console
- AMP validator tool

Final checklist

- Keyword audit completed and mapped to the new site.
- Long-tail opportunities identified and prioritised.
- Blog content migrated, reformatted, and re-optimised.
- Redirects planned and implemented for all important URLs.
- Legacy redirect chains reviewed and cleaned.
- AMP strategy agreed and implemented.
- XML sitemaps updated, submitted, and monitored.
- Image URLs migrated with alt text and structured data reapplied.
- URL structures refactored with SEO-friendly slugs.
- High-priority 404s fixed and irrelevant legacy URLs allowed to retire.
- Key backlinks reviewed and updated where possible.
- Post-launch metrics tracked and optimisation work scheduled.

Website migrations always introduce risk, but they also create a chance to strengthen your SEO foundation.
With deliberate preparation, a clear redirect strategy, and disciplined post-launch monitoring, you can move to a better platform and keep organic performance moving in the right direction. If you want support with the strategy or implementation, get in touch and we can walk through a migration plan that matches your stack, timelines, and targets.

---

> Find out what Semantic SEO is, and how you can optimise for it to achieve greater visibility in search results and AI Answer Engines.

- Published: 2025-05-29
- Modified: 2025-05-29
- URL: https://stringerseo.co.uk/content/semantic-seo/
- Categories: Content, SEO & AI

Semantic SEO is about the contextual relevance of different terms relating to a given topic. It has become vital for businesses and marketers wanting to improve their visibility in organic search. It all starts with making sure your content targets the context around a specific keyword or topic. For example, say your target keyword is “cat food”. Rather than simply creating a page that lists cat food, you’d expand your content to cover the surrounding context. You can either add answers to questions such as “What is the healthiest cat food in the UK?”, “How much wet food should I feed my kitten?” and “Can cats live on dry food alone?” on that page, or create a separate blog post: what I like to call supplemental content. With Google relying on large language models such as BERT and MUM to parse meaning and intent in AI-driven search results, it’s more important than ever to maximise your supplemental content around target topics. This article shows you how to leverage and execute a semantic SEO strategy to optimise your content more effectively and ensure it’s relevant for the people reading it.

What Is Semantic SEO?

Semantic SEO goes beyond traditional keyword-based strategies. It’s about understanding the intent behind keywords and delivering content that satisfies that intent in the most helpful manner. Instead of merely targeting specific keywords, it focuses on the broader context and meaning of the terms users are searching for. Expanding on the example above, an audience searching for “cat food” might be interested in answers to questions such as “What is the healthiest cat food?” or “Can cats live on dry food only?”. If your website incorporates relevant subjects and questions around a certain topic in a semantic way, the likelihood of achieving strong visibility for keywords that your target audiences may search for at different stages of their path to purchase is much higher.

Why Does Semantic SEO Matter?

With search engines like Google prioritising experience, websites must produce content that connects with audiences on a deeper level. Semantic SEO helps improve relevance and enhances the likelihood of achieving greater visibility in search results. When you craft content that resonates with user intent, you're not just satisfying algorithms; you're genuinely meeting the needs of audiences. With the appearance of AI-driven features such as AI Overviews and AI Mode, it is important that your content resonates with target audiences at all stages of the funnel and incorporates content that serves different intents, such as navigational and informational, rather than purely transactional or commercially oriented queries.

What to Consider in a Semantic SEO Strategy

1. Understanding User Intent

Different types of searches reflect different intents; for example, they can be informational, navigational, transactional, or commercial.
Tailoring your content to meet these intents ensures that you address the needs of your audience at different stages of the funnel. A good place to start here is to understand what type of content, webpages, and related queries (People Also Ask) Google is displaying for a keyword. For example, you may find that AI Overviews, wikis, social conversational hubs like Reddit, publishers, or blogs have greater visibility for informational keywords, while ecommerce websites have greater visibility for transactional keywords. Understanding the user intent can heavily inform how to shape the user experience (UX) layout of the page, and what type of content to write when you are planning to target a keyword.

2. Entity-Based Content

Entities are the things or concepts that words refer to, and recognising them allows search engines to gain more understanding of your content. Publishing “entity-based content” helps search engines such as Google match user queries to the correct concept. Structured data available from Schema.org can help search engines and LLMs understand your content from an entity perspective. Keywords are the exact words or phrases people type in search, whereas entities embody broader concepts and link to other related ideas. For example, "Amazon" as a keyword could mean the rainforest, the ecommerce brand, or even a member of the legendary race of female warriors from ancient Greek mythology. However, as an entity, the intended meaning becomes clear from its context. The Google Cloud Natural Language API helps you understand what entities Google analyses in your content. Just copy and paste your content, then click 'Analyze' in the 'Demo' section.

3. Natural Language Processing (NLP)

Leveraging NLP techniques can boost your content’s relevance. It’s about using language that mirrors how people actually search, using synonyms and related phrases to enrich the context. That’s why it’s vital to adopt the very wording users enter into Google. By incorporating the language used in search throughout your copy, you’ll not only help Google serve your page for the right queries but also help create a deeper connection with your audience.

4. Topic Clusters

Organising content into clusters not only aids in navigation but also reinforces your site's pillar and cluster-specific topics. This approach strengthens your content's semantic relevance, and architects your website in a way that is most helpful for audiences. For example, a website targeting an audience that owns cats might architect its topic clusters so that the sections within each cluster target different search intents, entities, and the language used in search.

5. Structured Data

Think of structured data as a clear handshake between your content and search engines. Schema markup (for example, schema.org) lets you tag the entities in your copy with explicit labels. You can choose from schemas like Person, Place, Organization (and many more) to define exactly who or what you’re talking about. It helps search engines map out the relationships between your entities, and it can boost your chances of turning up in rich snippets, Knowledge Graph panels and other standout results. By sprinkling the right schema tags throughout your content, you’re not just talking to Google, you’re guiding it directly to what matters.

How to Develop a Semantic SEO Strategy

Start with comprehensive keyword research that considers user intent.
Use tools such as Keywords Everywhere, Google Search Console, and Google Trends to identify related terms and concepts to enrich your content. I recommend a three-phase approach: Deep keyword research that surfaces related concepts Intent-driven content mapping Ongoing optimisation to keep pace with user intent 1. Keyword and Topic Research Google Autocomplete: Start typing your primary term and note the suggested queries. Related Searches: Scroll to the bottom of the search results page for related keywords and long-tail variations. People Also Ask: Mine the questions that appear and weave their answers into your copy. Topic-modelling: Pinpoint semantically related terms to layer into your content. Google Trends: Spot emerging topics and seasonal shifts to inform your content calendar and prioritise blog posts for keywords that are spiking in demand. 2. Content Mapping Create or expand pages that not only target your main keyword but also resolve the related queries you’ve uncovered. Use clear subheads or sections framed as those exact questions and phrases to reinforce contextual relevance. 3. Regular Updates & Generate Demand Revisit existing posts and pages to tweak headings, insert fresh Q&As or swap in newly trending terms. Regularly publish supplemental content such as blog posts or guides that are relevant to the latest search trends and interests. Keep an eye on performance in Search Console and refresh any under-performing content with up-to-date insights or popular new keywords. Distribute content on social platforms, and via email by leveraging your CRM to generate demand. This is helpful if you are creating content for untapped keywords. These three phases help Google and AI Answer Engines such as ChatGPT recognise that your site truly speaks your audience’s language, keeping you visible in today’s AI-driven search landscape. Why is Semantic SEO Important? Google is continuously updating its algorithms to prioritise user intent and contextual relevance especially around AI-driven search results. ChatGPT and other AI models are also doing the same. Implementing a semantic SEO strategy can really benefit your visibility not only in organic search results but also on AI answer engines. It also enables you to stay one step ahead in an evolving industry. Focusing on the meaning behind the searches and delivering user-focused content will not only improve your online visibility but also create a more engaging experience for your audience, and deliver incremental brand awareness. --- > Find out how to use Google Search Console for keyword research to unlock opportunities for content optimisation or to create new pages. - Published: 2025-05-06 - Modified: 2025-05-07 - URL: https://stringerseo.co.uk/content/how-to-find-keyword-opportunities-on-google-search-console/ - Categories: Content, Marketing A Simple GSC Method to Uncover High-Demand Keyword Opportunities. When it comes to finding new keyword opportunities, most marketers jump straight into third-party tools. But if you've already got content on your site in the form of blog posts, guides, or product pages, it may be worth starting with your own data. Google Search Console (GSC) gives you everything you need to surface keywords with proven demand. These are terms that are already triggering your content in the SERPs... They are just not quite ranking well enough to earn a high volume of clicks yet. The goal here is to find those mid-ranking terms and give them the final push. 
Here’s a step-by-step approach I’ve used successfully across different types of websites to find high-impact opportunities you can optimise for. This is done by either optimising existing content or creating a new page. Select Three Months of Search Analytics Data Go to the Search Analytics report in GSC and pull data for the last three months or the past month. The time period really depends on how many queries you want to find. The smaller the time period, the fewer queries you will find, but the data will be more recent and more relevant to current trends. The metrics to look at are listed below: Query Page Country Impressions Clicks Average Position Segment Non-brand Data Exclude any branded keywords by applying a segment at query level. Define the Target Market Select the market by clicking on the country filter and choosing the country where your target audience is from. Filter by Impressions (500+) We’re targeting queries with decent demand, not scraping the long tail. Apply a filter to only show keywords with 500+ impressions over the period. These are terms that users are actively searching and that Google is already associating with your content, even if you’re not seeing traffic yet. Filter by Position (11+) Next, restrict the dataset to average positions of 11 or higher, i.e. your page-two-and-beyond rankings. These are the keywords your website is visible for, but not converting many clicks. At this point, you can safely ignore click data. The fact these keywords are getting a low volume of clicks is exactly the point. You should see a similar interface to the example image shown below after following these steps: Validate Search Volumes & SERP Competitiveness If you have Keywords Everywhere installed in your browser, it will report on the search volume, competition, trend, and CPC of all the queries reported in Google Search Console, as shown in the example image below. The competition and trend data can help you prioritise. Not every keyword will be worth chasing. Focus on those that are trending and where you can realistically compete. This data can also be used to decide which keywords need their intent analysed to identify where competitors are ranking with weak or misaligned content. Use the ‘Pages’ Tab for Mapping Intent and Checking for Cannibalisation Before rushing into optimisation or creating a new page, make sure you’re not competing with yourself. Run a check to see if multiple URLs are ranking for the same or very similar queries by selecting a query and then viewing the 'Pages' tab. If there is more than one page reported, you might be better off consolidating pages or refining their intent to avoid cannibalisation. Here are some good questions to ask yourself in this step: Does the content genuinely match the user’s intent? Is it the right page to be ranking? Could another, more suitable page rank better with minor improvements? If the current page is a mismatch, consider redirecting or building a new asset entirely. Find Related Queries and Questions Once you’ve shortlisted target keywords, you can consider going deeper by using tools to identify related subtopics or questions that can be incorporated into your new content. The tools listed below can help you do just that: AlsoAsked Keywords Everywhere Semrush Keyword Magic Tool Identifying related queries and questions will help you increase topical authority for a given subject.
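If you prefer working with an exported CSV rather than clicking through filters in the interface, the impression and position thresholds described above can be reproduced with a few lines of pandas. This is only a sketch; the column names and the example brand term are assumptions based on a typical performance export, so adjust them to match your own data.

# Minimal sketch: filter a GSC performance export down to mid-ranking, high-demand queries.
import pandas as pd

def find_opportunities(csv_path: str, min_impressions: int = 500, min_position: float = 11.0) -> pd.DataFrame:
    df = pd.read_csv(csv_path)
    # Optionally exclude branded queries ("yourbrand" is a placeholder).
    df = df[~df["Query"].str.contains("yourbrand", case=False, na=False)]
    # Keep queries with proven demand that sit on page two or beyond.
    opportunities = df[(df["Impressions"] >= min_impressions) & (df["Position"] >= min_position)]
    # Review the highest-demand queries first.
    return opportunities.sort_values("Impressions", ascending=False)

if __name__ == "__main__":
    print(find_opportunities("gsc_queries_last_3_months.csv").head(20))
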
Optimise or Create New Content Based on your findings, take action: Optimise existing pages by refining headers, updating intro copy, improving internal linking, and ensuring the keyword and its related terms are properly covered. Create a new page if the intent requires a standalone guide, product, or landing page. Avoid forcing keywords into pages that don’t suit them — match structure to intent. Monitor Post-Optimisation Performance Once changes are live, track performance in GSC over the next few weeks. Focus on: Movement in average position CTR improvements Any increase in impressions and clicks Set a reminder to re-run the process quarterly, or even monthly; new queries will emerge, and existing ones may drop or climb based on SERP volatility. Consider Pulling Data from the Google Search Console API It's important to note that connecting to Google Search Console's API and storing the data in a warehouse would give you much larger query- and page-level datasets. This is suggested for larger websites. Stay tuned for a guide that will show you how to do that, but in the meantime, if you are exploring this option, have a read of https://developers.google.com/webmaster-tools. How This Process Helps This process is about using your own search data to build on what’s already working (or nearly working). GSC gives you clear signals about what Google already associates with your site. It helps you spot the keywords that just need a bit more effort to move up the rankings. It is quick, scalable, and highly actionable, and best of all, it helps you optimise around real user demand. --- > Learn how SEO is a user-focused strategy, what techniques to avoid, and what questions help you understand if your content is helpful. - Published: 2025-04-17 - Modified: 2025-05-06 - URL: https://stringerseo.co.uk/seo-ai/how-is-seo-a-long-term-user-focused-strategy/ - Categories: Marketing, SEO & AI Search Engine Optimisation (SEO) is not a quick fix but a strategic commitment that must keep up with changing user behaviours and search engine algorithms. The website owners and content editors who understand this are the ones who tend to achieve sustainable results in organic search visibility. User Behaviour & Experience Drives SEO User behaviour and experience have become the foundation of modern SEO, especially as AI strategies explode. Consider these facts: Search engines like Google and Bing want to deliver the most relevant results to users. What "relevant" means has changed as algorithms have become smarter at understanding user intent. Historical data shows that search engines reward websites that provide genuine value to users by producing content that helps and resonates with individuals. Websites that implement manipulative tactics suffer when algorithms update, while those delivering exceptional user experiences continue to thrive. Google considers the following tactics manipulative: doorway pages, cloaking, spammy content, and buying links from spammy networks with the intent to improve rankings. Google has produced a guide that explains this further; you can read it on Google Search Central. The factors that matter most closely align with user satisfaction and publishing helpful content, not content produced only for ranking purposes. Google encourages website owners and content editors to ask themselves the following questions to assess the quality of their content: Does the content provide original information, reporting, research, or analysis?
Does the content provide a substantial, complete, or comprehensive description of the topic? Does the content provide insightful analysis or interesting information that is beyond the obvious? If the content draws on other sources, does it avoid simply copying or rewriting those sources, and instead provide substantial additional value and originality? Does the main heading or page title provide a descriptive, helpful summary of the content? Does the main heading or page title avoid exaggerating or being shocking in nature? Is this the sort of page you'd want to bookmark, share with a friend, or recommend? Would you expect to see this content in or referenced by a printed magazine, encyclopedia, or book? Does the content provide substantial value when compared to other pages in search results? Does the content have any spelling or stylistic issues? Is the content produced well, or does it appear sloppy or hastily produced? Is the content mass-produced by or outsourced to a large number of creators, or spread across a large network of sites, so that individual pages or sites don't get as much attention or care? For more information on Google's guidance, check out https://developers.google.com/search/docs/fundamentals/creating-helpful-content#content-and-quality-questions. The Test-and-Learn Approach Google's algorithm updates are not random events but deliberate enhancements to better serve user needs for different search terms. Each major update provides valuable insights into Google's understanding of quality content. My suggestion for successful SEO is that it must adopt a systematic approach by: Establishing baseline metrics across key performance indicators Implementing targeted changes based on current best practices and user feedback Measuring impact with statistical significance Refining strategy based on results Documenting learnings to build institutional knowledge This method usually helps to prevent overreaction to algorithm fluctuations and builds a foundation of proven techniques specific to different industries. The Future-Proof SEO Mindset The most resilient SEO strategies share common characteristics: User-centricity: Prioritise solving user problems over manipulating algorithms. Content that genuinely addresses user needs, as briefly mentioned above, has consistently performed well through every major algorithm update. Technical excellence: Maintain impeccable technical SEO fundamentals. Site speed, accessibility, and structured data implementation remain critical, as they directly impact user experience. Content authority: Demonstrate genuine expertise in your field. Search engines increasingly differentiate between surface-level content and truly authoritative resources. Adaptability: View SEO as an ongoing process of refinement rather than a one-time implementation. The ability to quickly test, learn, and adapt provides a significant competitive advantage. SEO success tends to be measured over months, not days, hours, or even minutes. The brands and businesses that embrace SEO as a long-term commitment to serving users, while systematically adapting to algorithm changes, will continue to capture valuable organic traffic regardless of how search evolves around AI. Remember that behind every algorithm update is the same objective: to better understand and serve user needs. When your SEO strategy aligns with this principle, you're positioning yourself for sustainable growth no matter how search engines update and change.
--- > Learn how to apply structured data to help grow your local visibility for different search terms that people use in Google! - Published: 2025-04-10 - Modified: 2025-04-15 - URL: https://stringerseo.co.uk/content/how-to-use-structured-data-for-local-seo/ - Categories: Content Estate agents that advertise property to rent or for sale can appear for search terms such as “property to rent in Hove” in Google. Other small businesses can also do the same for different search terms that are relevant, such as "electricians in Brighton". One of the ways of trying to improve local visibility in Google Search is by implementing structured data on your website, despite it not being a direct ranking factor. This article provides an overview of structured data and how it can be used to help local search visibility. What Is Structured Data? Structured data is a type of code that helps search engines better understand the content on a webpage. When implemented correctly, it helps Google to display rich results that can include additional information such as business hours, location, reviews, and services. For local businesses, using structured data means your website has a better chance of appearing in the local pack, Google Maps results, and relevant organic listings. Structured Data for Local SEO Google uses structured data to deliver more relevant results for location-based searches. If someone types “property to rent in Hove”, Google displays a list of properties and agents or businesses operating in that area. Marking up your business location, contact details, and service areas with structured data on your website helps signal to Google exactly where you're based and what you offer. Using the LocalBusiness Schema Google recommends using the LocalBusiness schema, which is part of the broader Schema.org vocabulary. This schema allows you to define key details about your business such as: Business name Address (with full postal address) Opening hours Phone number Website URL Geo-coordinates Area served Logo Social profiles This information is added to websites using JSON-LD, which is the format suggested by Google. Here's an example for a letting agency based in Hove (the opening hours and social profile values are illustrative):

{
  "@context": "https://schema.org",
  "@type": "RealEstateAgent",
  "name": "Hove Property Rentals",
  "image": "https://example.com/logo.png",
  "@id": "https://example.com",
  "url": "https://example.com",
  "telephone": "+44 1273 123456",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "10 Western Road",
    "addressLocality": "Hove",
    "addressRegion": "East Sussex",
    "postalCode": "BN3 1AE",
    "addressCountry": "GB"
  },
  "geo": {
    "@type": "GeoCoordinates",
    "latitude": 50.8278,
    "longitude": -0.1707
  },
  "openingHoursSpecification": [
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
      "opens": "09:00",
      "closes": "17:30"
    }
  ],
  "sameAs": [
    "https://www.facebook.com/hovepropertyrentals",
    "https://www.linkedin.com/company/hovepropertyrentals"
  ]
}

How to Use LocalBusiness Schema Place the JSON-LD in the <head> of the HTML, or right before the closing </body> tag, inside a script element of type "application/ld+json". Keep the information consistent with what's shown on your Google Business Profile. Use the Rich Results Test to validate your structured data: https://search.google.com/test/rich-results Submit the page in Search Console after implementation. Using structured data on your website can help you appear for relevant local keywords that people search in Google. It also helps generate rich results in Google, which is intended to help grow click-through rates and therefore traffic to your website from search terms that are used by targeted audiences. --- > Learn how to extract canonical URL in Google Sheets using python.
Useful for when IMPORTXML is being clunky for large URL lists! - Published: 2025-03-23 - Modified: 2025-03-23 - URL: https://stringerseo.co.uk/technical/how-to-extract-canonical-urls-in-google-sheets-using-python/ - Categories: Technical If you're managing a large list of URLs and want to quickly check their canonical URLs, but you do not have access to a website crawler such as Screaming Frog and the IMPORTXML function is taking ages or failing, then you may want to consider doing this with Python... This guide will walk through how to connect Python to a Google Sheet, scan through a column of URLs, extract each page’s canonical URL, and write the results back to the sheet in one click of a button. What it Does Connect to your Google Sheet using the Google Sheets API Scan an entire column for valid URLs Fetch and parse each URL’s HTML to extract the canonical link tag Write results into a new column next to your URLs Automatically handle errors, headers, and batching to avoid API limits What You Need A Google Cloud project with the Sheets API enabled A service account key (JSON file) Python 3 installed with the following libraries: pip install gspread google-auth requests beautifulsoup4 Setting Up the Google Sheets API Go to the Google Cloud Console. Create a new project or select an existing one. Navigate to APIs & Services > Library, and enable the Google Sheets API. Go to Credentials > Create credentials > Service account. Generate a key as a JSON file. Share your Google Sheet with the service account’s email (e.g. my-bot@project-id.iam.gserviceaccount.com) with Editor access. Python Script to Fetch Canonical URLs Here’s the start of the script:

import gspread
from google.oauth2.service_account import Credentials
import requests
from bs4 import BeautifulSoup
from datetime import datetime

# Google Sheets API Setup
SERVICE_ACCOUNT_FILE = "path/to/your-service-account.json"
#

--- > Learn how to optimise and create a product data feed following Google's best practice guidelines from merchant center. - Published: 2025-03-10 - Modified: 2025-03-10 - URL: https://stringerseo.co.uk/content/how-to-optimise-a-product-data-feed-for-google-merchant-center/ - Categories: Content, Technical Google Merchant Center is a tool for eCommerce websites, and it can help you increase your visibility on Google Shopping, Search, and Ads. Your product data feed must be correctly formatted and optimised according to Google’s requirements. This article outlines the essential fields, common mistakes, and key enhancements needed to ensure a product feed meets Google's standards. Why it Matters A well-structured product feed can help deliver: Better visibility in Google Shopping and Search results. Potential conversions through accurate product listings. Compliance with Google’s policies, avoiding feed disapprovals. Improved ad performance by providing structured data for campaign optimisation. Product Feed Fields Required by Google Google Merchant Center mandates the inclusion of several key fields to ensure product eligibility:
id (Required): Unique identifier for each product.
title (Required): Concise yet descriptive product name with key attributes (brand, color, size).
description (Required): Detailed explanation of the product’s features, specifications, and benefits.
link (Required): Direct URL to the product page on your website.
image_link (Required): URL of the main product image.
availability (Required): Indicates stock status (in stock, out of stock, preorder).
price (Required): Product price with currency (e.g. GBP, USD, EUR).
brand (Required): Manufacturer or brand name.
gtin (Required if available): Global Trade Item Number (UPC, EAN, ISBN).
mpn (Required if no GTIN): Manufacturer Part Number.
condition (Required): Specifies whether the product is new, refurbished, or used.
Recommended & Optional Fields To enhance product visibility and improve targeting, consider including the following additional fields:
sale_price (Highly Recommended): Specifies a discounted price when applicable.
google_product_category (Recommended): Classifies the product according to Google’s taxonomy.
product_type (Recommended): Custom category based on internal store taxonomy.
additional_image_link (Recommended): Adds extra product images for variations and angles.
size (Recommended for apparel): Defines product dimensions.
color (Recommended for apparel): Specifies the primary product color.
material (Recommended): Indicates the material composition.
pattern (Optional): Describes the pattern or texture of the product.
shipping (Recommended): Defines shipping costs and delivery estimates.
shipping_weight (Recommended): Helps determine shipping rates for bulky items.
product_highlight (Optional): Bullet points summarizing key features.
product_detail (Optional): Provides detailed specifications.
Common Mistakes 1. Using Incorrect URLs Ensure that the link field contains the correct public-facing URL. image_link should be a direct image URL without tracking parameters. 2. Missing GTIN or MPN Google prefers GTINs for accurate product matching. If unavailable, use mpn instead. 3. Inconsistent Pricing Data Ensure that price and sale_price values are correct and match the website. Google performs price checks and may disapprove mismatched listings. 4. Poorly Structured Product Titles Avoid generic titles like "Sofa" or "Table". Instead, format them as "Brand + Product Type + Key Attribute" (e.g. "Velvet Chesterfield Sofa - Blue - Handmade"). 5. Not Using Google’s Product Categories Assigning the correct google_product_category improves targeting and performance in Shopping Ads. How to Format a Product Feed Your product data feed should be structured in either CSV or XML format. Below is an example CSV template:

title,brand,price,sale_price,availability,google_product_category,link,image_link
"Velvet Chesterfield Sofa - Blue","Luxury Sofas Ltd",1499.99,1299.99,"in stock","Furniture > Living Room > Sofas","https://www.example.com/sofa","https://www.example.com/images/sofa.jpg"

Final Optimisation Tips Regularly update your feed to reflect stock and price changes. Use structured data markup (Schema.org). Ensure high-quality images with a clean, white background. Enable Google Shopping promotions to attract more clicks. Optimising your product data feed can help maximise visibility, improve ad performance, and drive potential sales. This is done by making sure that your product feed complies with Google’s requirements and by implementing best practices. --- > Find out how to inject an author's biography underneath the H1 of author archive page templates in WordPress! - Published: 2025-03-07 - Modified: 2025-03-07 - URL: https://stringerseo.co.uk/technical/how-to-add-an-author-bio-on-author-archives-in-wordpress/ - Categories: Technical Many WordPress themes do not automatically display an author biography on author archive pages. Here is an example of one on this website: https://stringerseo.co.uk/author/jonathan. For UX and EEAT purposes, you might want to consider displaying the author biography.
However, many themes just offer one template for all types of archive pages, such as categories or tags, and it usually does not have an author biography snippet... This guide shows you a way to add the author biography on an author archive page by creating the logic in the functions.php file, which will save time creating a specific template for author archive pages. Step 1: Adding the Author Bio We can use output buffering to modify the page content dynamically, ensuring the author's bio appears immediately after the first <h1>. Add This to the functions.php File:

function inject_author_bio_after_h1($content) {
    if (is_author()) {
        $author_id = get_queried_object_id();
        $author_bio = get_the_author_meta('description', $author_id);
        if ($author_bio) {
            // Create the author bio HTML (adjust the wrapper markup/class to suit your theme)
            $bio_html = '<div class="author-bio">' . esc_html($author_bio) . '</div>';
            // Inject the bio immediately after the first H1
            $content = preg_replace('/(<h1[^>]*>.*?<\/h1>)/is', '$1' . $bio_html, $content, 1);
        }
    }
    return $content;
}

function start_buffer() {
    if (is_author()) {
        ob_start('inject_author_bio_after_h1');
    }
}

function end_buffer() {
    if (is_author()) {
        ob_end_flush();
    }
}

add_action('template_redirect', 'start_buffer');
add_action('shutdown', 'end_buffer');

How This Works: The start_buffer function starts output buffering before the page is rendered. The inject_author_bio_after_h1 function finds the first <h1> and appends the author bio immediately after it. The end_buffer function ensures that the modified content is output correctly. Step 2: Remove "Author: " From the Archive Title By default, WordPress often prepends "Author: " to the author archive title. You can modify this using the get_the_archive_title filter. Add This to Your functions.php File:

function remove_author_prefix_from_archive_title($title) {
    if (is_author()) {
        // Remove 'Author: ' from the title
        $title = preg_replace('/^Author:\s*/', '', $title);
    }
    return $title;
}

add_filter('get_the_archive_title', 'remove_author_prefix_from_archive_title');

Alternative: Remove "Author: " from Hardcoded H1 Elements If your theme does not use get_the_archive_title but hardcodes the author title inside the page template, you can modify the output using output buffering:

function modify_author_h1($content) {
    if (is_author()) {
        // Find the first H1 and remove 'Author: '
        $content = preg_replace('/(<h1[^>]*>)Author:\s*/i', '$1', $content, 1);
    }
    return $content;
}

function start_author_buffer() {
    if (is_author()) {
        ob_start('modify_author_h1');
    }
}

function end_author_buffer() {
    if (is_author()) {
        ob_end_flush();
    }
}

add_action('template_redirect', 'start_author_buffer');
add_action('shutdown', 'end_author_buffer');

This injects the author bio directly beneath the first <h1> on the author archive page. It removes "Author: " from the archive page title dynamically, and works without modifying theme template files directly, making it update-safe. These tweaks improve both UX and EEAT, ensuring a clean author archive page layout. --- > If you have built a custom WordPress XML sitemap generator then this article shows you how to link to images from the post XML sitemap. - Published: 2025-03-06 - Modified: 2025-03-06 - URL: https://stringerseo.co.uk/technical/how-to-add-image-links-in-a-wordpress-xml-sitemap/ - Categories: Technical XML sitemaps play an important role in SEO. They help search engines discover and index different pages from your website. This is especially useful for blog posts, and including links to images from XML sitemaps can help improve their visibility in Google Image Search.
This article shows you a way to modify a custom WordPress XML sitemap plugin to include links to images from blog posts. Why Include Images in Your Sitemap? Adding links to your images from your XML sitemap provides several benefits, two of them being: Search engines can find and understand your images more effectively. Image search results can drive additional traffic to your website. How to Modify the Custom XML Sitemap Plugin In our example, we have a WordPress plugin that generates a custom XML sitemap for posts. We want to extend it so that it includes images found within post content. Step 1: Extract Image URLs from Post Content To retrieve images from post content, we use a function that scans the post HTML and extracts <img> tag src attributes.

function extract_images_from_content($post_id) {
    $content = get_post_field('post_content', $post_id);
    // Capture the src attribute of every <img> tag in the post content
    preg_match_all('/<img[^>]+src=["\']([^"\']+)["\']/i', $content, $matches);
    return array_unique($matches[1]);
}

This function scans the post content and returns an array of unique image URLs. Step 2: Modify the Sitemap Generation Function We then modify the function that generates custom-post-sitemap.xml to include <image:image> tags.

function generate_custom_post_sitemap() {
    global $wpdb;
    header('Content-Type: application/xml; charset=utf-8');
    echo '<?xml version="1.0" encoding="UTF-8"?>';
    echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">';
    $site_url = get_site_url();
    $posts = $wpdb->get_results("SELECT ID, post_modified_gmt FROM {$wpdb->posts} WHERE post_status = 'publish' AND post_type = 'post'");
    foreach ($posts as $post) {
        $url = get_permalink($post->ID);
        $lastmod = gmdate('Y-m-d\TH:i:s+00:00', strtotime($post->post_modified_gmt));
        echo "<url><loc>$url</loc><lastmod>$lastmod</lastmod>";
        $images = extract_images_from_content($post->ID);
        foreach ($images as $image) {
            echo "<image:image><image:loc>$image</image:loc></image:image>";
        }
        echo "</url>";
    }
    echo '</urlset>';
    exit;
}

This function loops through all published posts, extracts images, and appends them to the sitemap using the <image:image> tag. Step 3: Hook into WordPress Finally, we modify our WordPress plugin’s init action to ensure the sitemap includes images when requested.

add_action('init', function () {
    if (isset($_SERVER['REQUEST_URI']) && strpos($_SERVER['REQUEST_URI'], '/custom-post-sitemap.xml') === 0) {
        generate_custom_post_sitemap();
        exit;
    }
});

This ensures that when /custom-post-sitemap.xml is accessed, the modified sitemap with images is generated. By adding links to image URLs in your custom XML sitemap, you can help get your images indexed in Google Image Search. Implementing this in WordPress using a custom plugin ensures that every image used in blog posts can be discovered and indexed by search engines. --- > SEO Tips for small businesses. Find out how to structure and optimise a blog post to generate organic website traffic. - Published: 2025-03-04 - Modified: 2025-04-29 - URL: https://stringerseo.co.uk/content/how-to-structure-a-blog-post/ - Categories: Content Blogging can be an effective way for a small business to generate demand and drive their brand awareness. A blog post can help drive organic traffic, and boost engagement from targeted audiences if it's useful and relates to their search intent. But how do you ensure your content is both engaging and relevant for keywords that are searched by your target audience? Read the guide below to learn about the key elements of structuring a blog post. 1. Titles & Descriptions Titles and Descriptions are pieces of information about your blog post that people see in search engine result pages, and in whatever browser they are using. Search engines read this information when crawling the HTML of your blog post.
Here is an example of a search result page showing the titles and descriptions of pages that appear for 'url mapping with python' keyword: How to Write a Title Tag Be relevant, concise and natural. Keep it between 50-60 characters. Include the main keyword and branding. Make it clear and engaging, possibly with a call to action (CTA). How to Write a Meta Description Summarise the post in 150-160 characters. Include the target keyword naturally. Make it compelling enough to encourage clicks. 2. Structuring Your Blog Post A well-structured post keeps readers engaged and it helps search engines understand the contents on the page. Introduction copy Start with a question, fact, or bold statement to grab attention. Quickly introduce the topic and why it matters. Give a brief overview of what readers can expect. Headings & Subheadings (H1, H2, H3) Use H1 for the title, H2 for main sections, and H3 for subsections. Keep headings concise, clear, and keyword-relevant. Content Body Break it into short paragraphs for readability. Use bullet points and numbered lists to improve clarity. Include images, infographics, and other visuals to enhance engagement. Ensure smooth transitions topically between sections. 3. Using Internal & External Links Effectively Internal Links Link to relevant blog posts or website pages to keep readers engaged. External Links Use authoritative sources to back up claims and build credibility. Ensure links open in a new tab so readers don’t leave your site. 4. Writing a Strong Conclusion Summarise key points in a natural way. Reinforce the main takeaway. End with a clear CTA. 5. Other SEO Considerations for Blog Posts Call to Actions (CTAs) Encourage readers to take action (e. g. , sign up, comment, share). Keep them clear and natural, placed at the end or within the post. Social Sharing Buttons Place at the top and bottom of the article for easy access. Author Bio & EEAT Compliance Readers connect better when they know who wrote the article. Include a brief bio, expertise, and a headshot. Google values expertise, experience, authority, and trustworthiness (EEAT). Image ALT Tags Describe images naturally for accessibility. Avoid keyword stuffing—keep descriptions clear and accurate. When structuring a blog post to generate traffic and connect with your target audience, it’s important to plan and optimise it for both readers and search engines. By following a clear structure and incorporating SEO best practices, your post has a strong chance of being indexed and ranking for relevant keywords. However, its performance will also depend on how well your website meets other SEO ranking factors. --- > Learn how to do keyword research using Keywords Everywhere. A tool to help you understand what keywords are being searched for in Google. - Published: 2025-02-15 - Modified: 2025-04-01 - URL: https://stringerseo.co.uk/content/keyword-research-using-keywords-everywhere/ - Categories: Content One of the first things to understand before you start writing content is how your readers will connect with the information you are writing about. In SEO, this is a pivotal step in generating traffic to your website, and the process is called keyword research. Keywords help you understand what your audience is searching for, allowing you to shape your content around relevant topics. There are plenty of tools available to help gather this information. 
Keywords Everywhere is a cost-efficient and easy-to-use tool that helps you compile a list of keywords and shape the answers you write and incorporate into your content. These insights can be used to address the questions readers are asking and searching for on Google. Why Keyword Research Matters Search engines like Google display pages based on relevance for different keywords. If your content doesn’t align with what people are searching for, it won’t show up in search results. That’s why targeting the right keywords is crucial. According to a study by Ahrefs, 90.63% of content gets no organic traffic from Google... this is often because it doesn’t target relevant keywords that people search for. With Keywords Everywhere, you can: Find search volume (how many people search for a term monthly) See competition levels (how hard it is to rank) Identify related keywords to expand your content strategy Step 1) Install Keywords Everywhere Go to Keywords Everywhere and install the extension compatible with your browser (Chrome, Firefox, or Edge). Step 2) Get and Activate Your API Key Sign Up: After installation, click on the Keywords Everywhere icon in your toolbar. Click on "Get API Key" in the dropdown and enter your email address. You'll receive an email containing your personal API key. Activate: Click the Keywords Everywhere icon again, choose "Settings," and paste your API key into the relevant box. Click "Validate" to enable it. Step 3) Purchase Credits Access Purchase Options: Click "Purchase Credits" in the Keywords Everywhere dropdown. Choose a Plan: Choose a plan that works for you. Credits are used to get data and metrics from Google search results. Step 4) Configuring Settings Adjust Preferences: In the "Settings" menu, customise your experience by selecting preferred metrics, enabling or disabling features like "Related Keywords" or "People Also Search For," and choosing your target country for location-specific data. Step 5) How to do Keyword Research Perform a search on Google: Keywords Everywhere will display metrics directly below the search bar. They are marked red in the screenshot below. These metrics inform you of the search volume, CPC (cost-per-click), and competition. The metric that tells you how many people are searching for your keyword is the one under 'Volume' in the image above. The CPC metric is useful if you are contemplating a Google Ads campaign, and the competition metric gives you an indication of whether other websites are marketing to that audience in Google. The volume metric should act as your decision maker for the theme of the content you are planning to write. You will notice a box towards the right of the search result page. This box reports on the following metrics: SEO Difficulty, Brand Query, Off-Page Difficulty, and On-Page Difficulty. These metrics are useful to analyse later down the line, after you have decided on the keywords, and are not really necessary for building out your initial keyword list. Scroll down the page: Analyse the data in the box on the right under the following titles: 'Trend Data For (your keyword) (GB)', 'People Also Ask', 'SERP Keywords', 'Related Keywords', and 'Long-Tail Keywords'. The trend data can be used to influence when you should publish content, as it tells you when searches were made in Google for your keyword. This is useful for planning content in the long-term.
The data under the People Also Ask, SERP Keywords, Related Keywords, and Long-Tail Keywords boxes is useful for building out the list of keywords that will influence the theme of the content you are planning to write about. Step 6) Click the star to select your keywords and build out your list: Keep doing this as you analyse different search result pages and select all the keywords you want to write about. Click on the Keywords Everywhere icon in your toolbar and select 'My Favourite Keywords'. This will bring up a page listing all of your keywords and the respective data mentioned earlier. Notice how a content theme has been built out from a keyword list relevant to 'keyword research', 'keyword research tools' and 'keyword research in seo' in the screenshot below. This is the information that can be written about in a piece of content that is going to target the audiences searching for these keywords. Step 7) Export Data Download Insights: After putting your keyword list together, click the "Export" button to download the data in CSV format for further analysis. Step 8) Explore Additional Features Page Analysis: Use the "Analyse Page" function to evaluate the keyword density and on-page SEO of any webpage. This can help give you an idea of how to structure your own page or content. Trend Data: Have a look at historical search trends to understand keyword seasonality and popularity over time. This is useful to figure out when is the right time to publish the new page or content and even start a tailored campaign. This is how you can use the Keywords Everywhere keyword research tool to identify the keywords that are being searched by your target audience. This information can then be used to structure a blog post or to optimise certain elements of a page. --- - Published: 2025-02-14 - Modified: 2025-03-06 - URL: https://stringerseo.co.uk/technical/how-to-build-a-custom-xml-sitemap-in-wordpress/ - Categories: Technical XML sitemaps are important for SEO; they provide a list of pages for search engines to crawl from your website. They can also be used to help inform about the structure of your content. There are plugins out there such as Yoast SEO that can generate them automatically in WordPress, but there might be certain cases where you need to build a custom sitemap. For example, you might have hosting restrictions, specific URL requirements, or be using a reverse proxy. This guide explains how to create a custom XML sitemap in WordPress for a reverse proxy use case but can be adapted for other cases too... Step 1: Intercept Sitemap Requests Intercept requests for custom sitemap URLs (e.g. /custom-sitemap_index.xml) and generate the required XML directly. Use the init hook to catch the requests early.

add_action('init', function () {
    if (isset($_SERVER['REQUEST_URI'])) {
        // Map sitemap paths to their type (values based on the sitemaps used later in this guide)
        $custom_sitemaps = [
            '/custom-sitemap_index.xml' => 'index',
            '/custom-post-sitemap.xml'  => 'post',
            '/custom-page-sitemap.xml'  => 'page',
        ];
        foreach ($custom_sitemaps as $path => $type) {
            if (strpos($_SERVER['REQUEST_URI'], $path) === 0) {
                if ($type === 'index') {
                    generate_custom_sitemap_index();
                } else {
                    generate_custom_sitemap($type);
                }
                exit;
            }
        }
    }
});

This ensures WordPress processes the correct XML before default templates or redirects interfere. Step 2: Generate the Sitemap Index The sitemap index acts as a directory for individual content sitemaps (e.g. posts, pages, categories).
function generate_custom_sitemap_index() {
    // List the child sitemaps and their last-modified dates (example values; adjust to your own sitemaps)
    $sitemaps = [
        'custom-post-sitemap.xml' => gmdate('Y-m-d\TH:i:s+00:00'),
        'custom-page-sitemap.xml' => gmdate('Y-m-d\TH:i:s+00:00'),
    ];
    header('Content-Type: application/xml; charset=utf-8');
    echo '<?xml version="1.0" encoding="UTF-8"?>';
    echo '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">';
    foreach ($sitemaps as $slug => $lastmod) {
        $url = site_url($slug);
        echo "<sitemap>";
        echo "<loc>$url</loc>";
        echo "<lastmod>$lastmod</lastmod>";
        echo "</sitemap>";
    }
    echo '</sitemapindex>';
    exit;
}

Step 3: Create Content-Specific Sitemaps For each content type (e.g. posts, pages), generate a corresponding XML file. Replace the domain in URLs when needed, such as for reverse proxies.

function generate_custom_sitemap($type) {
    global $wpdb;
    header('Content-Type: application/xml; charset=utf-8');
    echo '<?xml version="1.0" encoding="UTF-8"?>';
    echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">';
    $site_url = get_site_url();
    switch ($type) {
        case 'post':
            $posts = $wpdb->get_results("SELECT ID, post_modified_gmt FROM {$wpdb->posts} WHERE post_status = 'publish' AND post_type = 'post'");
            foreach ($posts as $post) {
                $url = str_replace($site_url, 'https://proxy-domain.com', get_permalink($post->ID));
                $lastmod = gmdate('Y-m-d\TH:i:s+00:00', strtotime($post->post_modified_gmt));
                echo "<url><loc>$url</loc><lastmod>$lastmod</lastmod></url>";
            }
            break;
        case 'page':
            $pages = $wpdb->get_results("SELECT ID, post_modified_gmt FROM {$wpdb->posts} WHERE post_status = 'publish' AND post_type = 'page'");
            foreach ($pages as $page) {
                $url = str_replace($site_url, 'https://proxy-domain.com', get_permalink($page->ID));
                $lastmod = gmdate('Y-m-d\TH:i:s+00:00', strtotime($page->post_modified_gmt));
                echo "<url><loc>$url</loc><lastmod>$lastmod</lastmod></url>";
            }
            break;
        // Additional cases for categories and authors...
    }
    echo '</urlset>';
    exit;
}

Step 4: Prevent Trailing Slash Issues To avoid WordPress redirecting sitemap URLs to versions with trailing slashes, disable canonical redirects for sitemap paths.

add_filter('redirect_canonical', function ($redirect_url, $requested_url) {
    // Paths that should never be redirected (match the sitemap URLs registered below)
    $sitemap_paths = [
        '/custom-sitemap_index.xml',
        '/custom-post-sitemap.xml',
        '/custom-page-sitemap.xml',
    ];
    foreach ($sitemap_paths as $path) {
        if (strpos($requested_url, $path) !== false) {
            return false; // Disable redirect
        }
    }
    return $redirect_url;
}, 10, 2);

Step 5: Add Rewrite Rules Register custom rewrite rules to make WordPress recognise sitemap URLs.

function add_custom_sitemap_rewrite_rules() {
    add_rewrite_rule('^custom-sitemap_index\.xml$', 'index.php', 'top');
    add_rewrite_rule('^custom-post-sitemap\.xml$', 'index.php', 'top');
    add_rewrite_rule('^custom-page-sitemap\.xml$', 'index.php', 'top');
    // Additional cases for categories and authors...
}
add_action('init', 'add_custom_sitemap_rewrite_rules');

Flush the rewrite rules after activation by visiting Settings > Permalinks and clicking Save Changes. Testing Once the plugin is activated, verify the following URLs: /custom-sitemap_index.xml /custom-post-sitemap.xml /custom-page-sitemap.xml Check the <loc> tags in the XML files to ensure the URLs are correct, especially if a reverse proxy is in use. --- > Learn how to start an SEO strategy for your small business. Find out what SEO factors are important and help generate more website traffic. - Published: 2025-02-13 - Modified: 2025-04-20 - URL: https://stringerseo.co.uk/content/seo-tips-for-small-business-owners/ - Categories: Content, Distribution & Reputation, Technical Search engine optimisation (SEO) is a way small businesses can generate traffic to their website. A well-structured SEO campaign can become part of your day-to-day business management. This guide will take you through the basics of SEO and give you some tips to help you get started if you already have a website. What Is SEO? SEO is a process to improve a website's visibility in search engine results pages (SERPs). This is first done by understanding what content needs to be published on your website through keyword research.
It can then consist of other activities such as optimising elements on a page, and growing the popularity of your website through links. Small business owners can get the pages on their website appearing for product or service related keywords that are searched by target audiences in Google. The main aim is to make it easier for potential customers to find your products or services. What Are SEO Ranking Factors? SEO ranking factors make up Google's algorithm to determine how relevant, credible, useful and popular a page is to display it in a search result for a particular keyword. Google uses crawlers, also known as spiders, that navigate the entire online ecosystem through links. They gather information from pages on different websites that are then indexed in search results. These crawlers tick-off specific checklist items before indexing and displaying the pages they have found in search results, such as: Does this page from this website have a title? If so, what is it and how does it relate to a keyword? Does it have clear headings and what do they say? Does it have a well-structured list of web pages to crawl? No one really knows how many search engine ranking factors there are but Google's SEO Starter Guide can help us make educated guesses. Taking on a test-and-learn approach to identify what works and what doesn't in an SEO strategy is often the one that will inform which factors can drive more traffic. Google regularly updates their algorithm. Staying informed about these changes is essential to minimise the risk of traffic declines after each update. Some key ranking factors for 2025 include: Helpful, and engaging content that answers user queries. Fast-loading pages that enhance user experience. Easy navigation and logical site structure. How to Start a Basic SEO Strategy SEO doesn’t need a large budget, it can be something done within your own time to help understand how it works first provided you already have a website and it's capturing a certain amount of traffic from search or other digital marketing channels. Adopting patience, and a test-and-learn mentality, as mentioned earlier, forms the path for long-term success in any SEO strategy. This can save time and costs. Step 1: Set-Up Analytics Determine what you want to achieve. Is it more website traffic, increased sales, or better local visibility? And make sure Google Analytics and Google Search console are set-up so that you can effectively measure these goals to guide your strategy. Step 2: Research keywords Identify keywords your audience is searching for and use this information to structure your website's content for services, products, and blog posts. Here are some other articles that will help you get started on how to do keyword research, and structuring a blog post. Step 3: Optimise on-page elements Most modern website content management systems such as WordPress, Shopify, ShopWired, or Wix, allow you to optimise different parts of your pages. If you use one of these then the parts of a web page that you want to write descriptive information that relate to your keywords are: Titles: Include keywords in your page titles, CTAs (call-to-actions), and your brand name. Try keep them between 50 and 60 characters. Meta descriptions: Write concise summaries about your web pages that invites the user into your website. Try keep this 150 characters max. 
Headings: Make sure they accurately describe the content that follows. Think relevance, think users, think unique, and ask yourself: does this encourage users to continue reading the content that follows? Make sure you have one H1 on a page and that headings follow a logical order, e.g. H1, H2, H3, H4, H5, and H6. All these items help generate the information that is indexed in search results and that people read before clicking through to your website, as shown in the example below. Step 4: Improve technical SEO This involves fine-tuning your website to make it easier for users and search engines like Google to find and read your content. The main aim is to help your visitors find the content that interests them the most. Some key factors highlighted include: Clear architecture and structure: Does your website structure content in a logical sense, and is it easy for your target audience to find the content that interests them the most? Duplicate content: Does your website have the same content on different pages? If so, make sure every page is unique and targets a specific topic relevant to your target audiences. Mobile usability: Can users access your website from multiple devices? Mobile, desktops, and tablets. Making sure your content is accessible helps your business reach consumers through multiple touch points. Fix broken links: Often overlooked, but if users click on a broken link it interrupts their journey and their potential to buy your service or product... look for and fix broken internal or external links. Page speed: How long does it take for your webpages to load? Do you use heavy images? Is your website modern and adhering to performance best practices? Implementing these best practices can help improve page speed metrics like loading speed (LCP), responsiveness (FID), and layout stability (CLS). Use HTTPS: Is your website secure when users buy a product or send a lead? If not, then protect your site with an SSL certificate to help keep user data safe. Step 5: Local SEO If your business serves a specific area, claim and optimise your Google Business Profile. This platform can complement an SEO strategy. It can also drive brand awareness by helping you appear in Google Maps results, which take up a lot of real estate for location-based keywords. Structured data can also help your website appear for local search terms used by targeted audiences. How to Create a Google Business Profile Page Go to Google Business Profile and click on the “Start Now” button. Login to Google & Add Your Business Enter your business name. If it doesn’t appear in the dropdown, click “Add your business to Google.” Select the appropriate category for your business. Choose a Business Location If you have a physical location (like a shop or office), select “Yes” to add the address. If you serve customers in a specific area (e.g. plumbing services), choose “No” and set up service areas instead. Add Contact Details Input your phone number and website URL (if available). Verify Your Business Google will ask you to verify your business. Common methods: Postcard: Google sends a postcard to your business address with a code. Phone: Some businesses can verify via text or call. Email: If available for your business type. Optimise Your Profile Add the following information: Business hours Photos of your store, products, or services A detailed description of your business Enable messaging (optional). Publish and Manage Once verified, your profile will go live.
You can manage and update it through the Google Business Profile Manager. You might want to consider reviewing and doing the following activities: NAP consistency: Ensure your Name, Address, and Phone number are accurate across all online listings. Photos: Include high-quality images of your location, products, or services. Encourage reviews from existing customers: Politely ask customers to leave reviews on Google and other platforms. Publish posts that link to your website: Create content that interests your target audience and direct them towards your website with a link. Keep doing this around once or twice a month... it complements your SEO strategy too, so why not! SEO for Small Businesses SEO is long-term, and it offers benefits that make it a valuable investment for small businesses. To keep costs down, start small then scale. Data speaks for itself, so measuring sales or lead performance will help you understand and make informed decisions on continuing or increasing your investment, be it time or money. By appearing and ranking higher in search results, businesses can: Increase brand awareness. Drive consistent, organic traffic. Build trust with potential customers. Think with Google reports that 59% of shoppers say they research a product before they plan to visit a shop and buy it in person, or go to buy it online. This demonstrates how critical it is for businesses to appear in relevant search results, particularly when consumers are close to making a purchase decision. --- > URLs are a very important part of user experience and SEO. Here's a WP plugin to warn admin users about long URLs. - Published: 2025-02-10 - Modified: 2025-02-10 - URL: https://stringerseo.co.uk/technical/wordpress-plugin-warning-about-long-urls/ - Categories: Content, Technical URLs are a very important part of user experience and SEO. A long URL may confuse users, it will definitely make it harder for them to remember, and search engines may take that into consideration when indexing websites. If you work alongside editors and copywriters who use the WordPress CMS, then you can create a plugin that warns them if the URL of a post is too long. This can help mitigate the risk of publishing posts with URLs that have over 115 characters. This article will take you through some steps to do just that! Why Monitor URL Length? Managing URL length is important for several reasons: SEO Benefits: Short and clean URLs are easier for search engines to understand and rank. User Experience: Concise URLs are more user-friendly and memorable. Avoid Truncation: Long URLs may get truncated in search results or when shared on social media, reducing their effectiveness. By implementing a solution to monitor URL lengths, you can ensure your content adheres to best practices. Steps to Create the Plugin Follow these steps to create a plugin that warns content editors if the URL exceeds 115 characters. Step 1: Set Up the Plugin File Open your WordPress site’s /wp-content/plugins/ directory. Create a new folder named url-length-warning. Inside this folder, create a file named url-length-warning.php. Paste the following code into the file (this is the block editor JavaScript; print or enqueue it from the plugin's PHP file):

(function () {
    document.addEventListener('DOMContentLoaded', function () {
        function checkLinkLength() {
            // Locate the button that contains the permalink
            const linkButton = document.querySelector('.editor-post-url__panel-toggle');
            if (linkButton) {
                const linkSpan = linkButton.querySelector('span');
                if (linkSpan) {
                    const permalink = linkSpan.textContent.trim();
                    const warningId = 'url-length-warning';
                    const warningMessage = 'Warning: URL exceeds 115 characters.';
                    // Remove existing warning
                    let existingWarning = document.getElementById(warningId);
                    if (existingWarning) {
                        existingWarning.remove();
                    }
                    // Add a warning if the URL exceeds 115 characters
                    if (permalink.length > 115) {
                        const warning = document.createElement('div');
                        warning.id = warningId;
                        warning.style.color = 'red';
                        warning.style.marginTop = '5px';
                        warning.textContent = warningMessage;
                        linkButton.parentNode.appendChild(warning);
                    }
                }
            }
        }
        // Optimise the MutationObserver to target only the sidebar area
        const sidebar = document.querySelector('.edit-post-sidebar');
        if (sidebar) {
            const observer = new MutationObserver(() => {
                checkLinkLength();
            });
            observer.observe(sidebar, { childList: true, subtree: true });
        }
        // Check when the button is clicked to open the dropdown
        document.body.addEventListener('click', function (event) {
            if (event.target.closest('.editor-post-url__panel-toggle')) {
                checkLinkLength();
            }
        });
    });
})();

--- > Reddit is the fastest-growing social media platform in the UK, with a user base that has grown 47% year-on-year as of 2024. - Published: 2025-02-09 - Modified: 2025-05-08 - URL: https://stringerseo.co.uk/link-earning/why-brands-are-winning-with-reddit-marketing/ - Categories: Content, Distribution & Reputation Reddit usage in the UK has grown remarkably and has just surged past X, formerly known as Twitter, as the UK's fifth-most-used social platform (The Guardian). This has significant implications for how marketers reach well-engaged B2B and B2C audiences. Why Marketers Should Have Reddit on their Radar Reddit’s Growth & Market Share It is the fastest-growing social media platform in the UK, with a user base that has grown 47% year-on-year as of 2024 (The Guardian). It holds a 2.37% share of the UK social media market (StatCounter). 7.33% of Reddit's users come from the UK, making the country the second-biggest home for the platform after the United States (Exploding Topics). Demographics: Who Uses Reddit in the UK? Understanding the user demographics of Reddit is important in targeting the right audience. Age: 64% of all Reddit users are between 18-29 years old, making it a strong platform for reaching Millennials and Gen Z (SocialChamp). 22% of users fall within the 30-49 years age bracket, which is important for B2B and professional services (SocialChamp). Gender: 63.6% of users are male, while 35.1% are female (Exploding Topics). Education & Income: 46% of Reddit users have a college degree, making it an attractive platform for B2B marketers (SocialChamp). Users are very tech-savvy, research-driven, and have deep discussions before making decisions (Exploding Topics). Why Reddit Is Effective for B2B and B2C Marketing 1. High Engagement & Trust The average user spends 10 minutes per visit on Reddit; engagement goes deep in topic-based communities (Red Website Design). Reddit’s content ranks well on Google, providing long-term organic visibility. 2. Community-Driven Marketing Brands can reach out to niche communities via subreddits catering to their industry. For example, Fintech brands can engage with sub-threads such as r/UKPersonalFinance, Gaming brands with r/Gaming and r/PS5, and B2B SaaS businesses can find conversation in r/EntrepreneurUK and r/Technology. Successful strategies have included AMAs, content-driven engagements, and targeted advertising on Reddit. 3.
3. Advertising Potential Reddit's ad platform allows for interest-based targeting, making it a viable alternative to Meta and Google Ads. Cost-effective ad options mean brands can reach engaged audiences without the high CPC of Google. Image Credit: Media Shower Case Study: How The Economist Uses Reddit Marketing A great example of a brand succeeding on Reddit is The Economist. The publication actively engages with Reddit users through AMA (Ask Me Anything) sessions, where journalists answer questions from the community (Media Shower). Key Takeaways for Marketers: Humanise your brand – Engage authentically and interact with users directly. Encourage dialogue – Use AMAs and discussion threads to build trust. Leverage long-form content – Redditors love great insights and well-researched replies. How to Build a Winning Reddit Marketing Strategy Identify Relevant Subreddits – Research where your audience engages. Engage Helpfully – Avoid selling; instead, provide value and contribute to discussions. Use Reddit Ads Strategically – Target based on interests, locations, and subreddit activity. Leverage AMAs and Thought Leadership – Establish credibility by directly answering user questions. Monitor Trends and Feedback – Use Reddit as a real-time focus group to gain industry insights. Use Reddit for Content Ideation & Distribution Monitor discussions in relevant subreddits to identify trending topics and pain points. Repurpose high-performing Reddit conversations into blog posts, social media content, and newsletters. Engage with niche communities by sharing valuable insights, rather than overtly promoting products. Final Thoughts Reddit's continuous growth and active user base make it a goldmine that remains largely untapped by marketers in the UK. The platform offers organic and paid marketing opportunities with its highly engaged, research-driven audience. Be it a B2C eCommerce brand or a B2B software company, the niche communities and open discussions on Reddit make it a powerhouse for brand awareness, lead generation, and customer engagement. Next Steps: Browse relevant subreddits. Create an engagement plan. Test the ad platform for targeted marketing on Reddit. Brands can reach highly targeted audiences, build brand credibility, and stay ahead of the competition by integrating Reddit into their digital marketing efforts. --- > Learn how to automate the process of organising issue reports in Google Sheets by running a simple Python script! - Published: 2025-02-09 - Modified: 2025-02-09 - URL: https://stringerseo.co.uk/technical/how-to-automate-extract-screaming-frog-issues-into-google-sheets/ - Categories: Technical Screaming Frog is a well-known and powerful tool for SEOs: it lets them crawl and analyse websites deeply, but working in its somewhat old-school user interface and managing the extensive data it generates can be cumbersome. By automating the process of organising issue reports in Google Sheets and creating hyperlinks for easy navigation, you can streamline your workflow, save significant time, and spend it on the fun stuff... analysing the data in line with the Google Search Console indexing report and putting best-practice suggestions together for you, your client, or your engineers. This guide walks you through a Python-based solution to: Organise Screaming Frog issues data in Google Sheets. Link each issue in the Summary tab to its respective issue tab. Make data exploration fast, efficient, and user-friendly.
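To make the workflow concrete before the full scripts, here is a minimal sketch of the first thing the solution does: reading the exported overview report with pandas and listing the issues that will each get their own tab. The file name and the "Issue Name" column are assumptions based on a typical Screaming Frog Issues Overview export, so adjust them to match your own files.

```python
# Minimal preview of Step 1, assuming the export is saved as issues_overview_report.csv
# and contains an "Issue Name" column (adjust to match your export).
import pandas as pd

issues_overview_df = pd.read_csv("issues_overview_report.csv")

# Each unique issue name will become its own tab in the Google Sheet.
for issue_name in issues_overview_df["Issue Name"].unique():
    issue_rows = issues_overview_df[issues_overview_df["Issue Name"] == issue_name]
    print(f"{issue_name}: {len(issue_rows)} row(s)")
```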
Screaming Frog has published a relevant article teaching you 'How To Automate Crawl Reports In Looker Studio'; I suggest checking that out if you want to automate the data into a reporting dashboard. Step 1: Organise Screaming Frog Issue Data in Google Sheets The first step is to process the Issues Overview Report exported from Screaming Frog. This report contains a high-level summary of all issues found on the website, including issue names, descriptions, priority levels, and affected URLs. Our Python script will: Create a Summary tab in Google Sheets containing all rows and columns from the Issues Overview Report. Create individual tabs for each unique issue, with only the rows relevant to that specific issue. Step 2: Populate Tabs with Issue-Specific Data Screaming Frog allows you to export detailed CSV files for each issue. For instance: H1 Duplicate has a CSV file listing all URLs with duplicate H1 tags. Images Missing Alt Text has a CSV file with URLs for images missing alt attributes. The script processes these CSV files, matches them to their respective tabs in the Google Sheet, and pastes the data starting from row 4 to leave space for the issue's overview information. Step 3: Add Hyperlinks in the Summary Tab To make navigation easier, we add hyperlinks in the "Issue Name" column of the Summary tab. Each hyperlink points to the corresponding tab for that issue, enabling quick access with a single click. For example: Clicking on "H1 Duplicate" in the Summary tab will take you to the H1 Duplicate tab, where all URLs for this issue are listed. How It Works 1. Process the Issues Overview Report The script first reads the issues_overview_report.csv file and processes it to: Create a Summary tab containing all columns and rows. Create tabs for each unique issue name, filtered to include only rows related to that issue. 2. Match Issue-Specific CSV Files Next, the script reads all issue-specific CSV files from a folder, matches them to the respective tabs using fuzzy matching, and pastes the data into the matching issue tab. 3. Add Hyperlinks to the Summary Tab Finally, the script scans the "Issue Name" column in the Summary tab and adds hyperlinks that point to the corresponding issue tabs. This makes it easy to navigate between the high-level overview and the detailed data for each issue. The Python Scripts Here are the scripts that automate these tasks. Script 1: Process Screaming Frog Data

import os
import time
import pandas as pd
import gspread
from fuzzywuzzy import process
from oauth2client.service_account import ServiceAccountCredentials

def connect_to_google_sheets(sheet_name):
    # Scopes for the Sheets and Drive APIs
    scope = ["https://spreadsheets.google.com/feeds", "https://www.googleapis.com/auth/drive"]
    creds = ServiceAccountCredentials.from_json_keyfile_name("{REPLACE}-credentials.json", scope)
    client = gspread.authorize(creds)
    # Connect to an existing Google Sheet by name or create it if it doesn't exist
    try:
        sheet = client.open(sheet_name)
        print(f"Connected to existing Google Sheet: {sheet_name}")
    except gspread.exceptions.SpreadsheetNotFound:
        sheet = client.create(sheet_name)
        print(f"Created new Google Sheet: {sheet_name}")
        # Share the Google Sheet with an email address
        sheet.share('You@YourEmailAddress.com', perm_type='user', role='writer')
        sheet.share('YourColleague@TheirEmailAddress.com', perm_type='user', role='writer')
    return sheet

def clean_name(name):
    """Cleans and normalizes a name for matching."""
    return name.strip().lower().replace("_", " ").replace("-", " ").replace(":", "").replace(",", "").replace(".csv", "")
csv", "")def write_to_sheet(sheet, tab_name, df, retries=3, start_row=1): df = df. fillna("") # Replace NaN values with empty strings try: # Check if the worksheet already exists worksheet = sheet. worksheet(tab_name) print(f"Worksheet '{tab_name}' already exists. Updating its contents. ") except gspread. exceptions. WorksheetNotFound: # Create the worksheet if it doesn't exist for attempt in range(retries): try: worksheet = sheet. add_worksheet(title=tab_name, rows="1000", cols="26") print(f"Created new worksheet '{tab_name}'. ") break except gspread. exceptions. APIError as e: if attempt < retries - 1: print(f"API error on attempt {attempt + 1}: {e}. Retrying... ") time. sleep(5) # Wait before retrying else: print(f"Failed to create worksheet '{tab_name}' after {retries} attempts. ") return # Write data in a single operation (clear + update in one go) try: worksheet. resize(rows=len(df) + start_row - 1, cols=len(df. columns)) values = + df. values. tolist worksheet. update(f"A{start_row}", values) except gspread. exceptions. APIError as e: print(f"Failed to update worksheet '{tab_name}': {e}") time. sleep(2) # Add delay to avoid hitting API rate limitsdef process_overview_report(issues_overview_path, sheet_name): # Load the Issues Overview Report issues_overview_df = pd. read_csv(issues_overview_path) # Connect to the existing Google Sheet sheet = connect_to_google_sheets(sheet_name) # Step 1: Write the Summary tab write_to_sheet(sheet, "Summary", issues_overview_df, start_row=1) print("Summary tab updated. ") # Step 2: Create a tab for each unique issue for issue_name in issues_overview_df. unique: # Filter rows for the specific issue issue_df = issues_overview_df == issue_name] # Write the filtered rows to a tab write_to_sheet(sheet, issue_name, issue_df, start_row=1) print(f"Created tab for issue: {issue_name}")def process_issue_csvs(folder_path, sheet_name): # Connect to the existing Google Sheet sheet = connect_to_google_sheets(sheet_name) # Cache tab names to avoid repeated API calls tab_cache = # Iterate through each CSV file in the folder for file_name in os. listdir(folder_path): if file_name. endswith(". csv"): # Clean the file name for matching issue_name = file_name. replace(". csv", ""). replace("_", " "). capitalize # Find an exact match in the tab names matched_tab_name = process. extractOne(issue_name, tab_cache, score_cutoff=70) if not matched_tab_name: print(f"Warning: No matching tab found for issue '{file_name}'. Skipping... ") continue matched_tab_name = matched_tab_name # Extract the matched tab name # Load the CSV file file_path = os. path. join(folder_path, file_name) issue_df = pd. read_csv(file_path) # Append data from the CSV file to the matched tab starting at row 4 write_to_sheet(sheet, matched_tab_name, issue_df, start_row=4) print(f"Processed and updated tab for issue: {matched_tab_name}") print(f"Google Sheet '{sheet_name}' updated with all issues. ")if __name__ == "__main__": # File path to the Screaming Frog Issues Overview Report (step 1 file) issues_overview_path = "issues_overview_report. 
csv" # Folder path to the issue-specific CSV files (step 2 files) issues_folder_path = "/{full-folder-path}/issues_reports" # Name of the Google Sheet (same for both steps) google_sheet_name = "Name Your Sheet" # Step 1: Process the Issues Overview Report process_overview_report(issues_overview_path, google_sheet_name) # Step 2: Process the issue-specific CSV files process_issue_csvs(issues_folder_path, google_sheet_name) This script connects you to GoogleSheets through the Google Sheets API, you'll need to create your own credentials. json file and store this in the same folder that the script is run from, and then it processes the Issues Overview Report and the issue-specific CSV files from Screaming Frog, matching them to their respective tabs. Head on over to https://medium. com/@a. marenkov/how-to-get-credentials-for-google-sheets-456b7e88c430#:~:text=Press%20'CREATE%20CREDENTIALS'%20and%20select,to%20the%20list%20of%20credentials. to find out how to create your own credentials. json file. Script 2: Add Hyperlinks to the Summary Tab import gspread from oauth2client. service_account import ServiceAccountCredentials import time def connect_to_google_sheets(sheet_name): scope = creds = ServiceAccountCredentials. from_json_keyfile_name("credentials. json", scope) client = gspread. authorize(creds) # Connect to an existing Google Sheet by name try: sheet = client. open(sheet_name) print(f"Connected to existing Google Sheet: {sheet_name}") return sheet except gspread. exceptions. SpreadsheetNotFound: print(f"Google Sheet '{sheet_name}' not found. ") return None def fetch_tab_gid(sheet): """Fetches the mapping of tab names to their gids. """ tabs = {} metadata = sheet. fetch_sheet_metadata for sheet_data in metadata: tab_name = sheet_data gid = sheet_data tabs = gid return tabs def add_hyperlinks_to_summary(sheet, summary_tab_name="Summary"): try: # Get the Summary tab worksheet = sheet. worksheet(summary_tab_name) print(f"Found worksheet '{summary_tab_name}'. ") except gspread. exceptions. WorksheetNotFound: print(f"Worksheet '{summary_tab_name}' not found. Exiting... ") return # Get all data from the Summary tab data = worksheet. get_all_values headers = data rows = data # Skip headers # Find the index of the "Issue Name" column try: issue_name_index = headers. index("Issue Name") except ValueError: print("'Issue Name' column not found in Summary tab. ") return # Fetch the gid for each tab in the sheet tabs_with_gid = fetch_tab_gid(sheet) print(f"Tabs and their gids: {tabs_with_gid}") # Add hyperlinks to each Issue Name for i, row in enumerate(rows, start=2): # Start from row 2 (to skip headers) issue_name = row if issue_name. strip and issue_name in tabs_with_gid: # Ensure it's not empty and tab exists # Generate the hyperlink pointing to the gid of the tab gid = tabs_with_gid formula = f'=HYPERLINK("#gid={gid}", "{issue_name}")' worksheet. update_cell(i, issue_name_index + 1, formula) print(f"Added hyperlink for '{issue_name}' pointing to gid '{gid}'. ") else: print(f"Skipping '{issue_name}' - Tab not found. ") # Add delay to avoid hitting API rate limits time. sleep(1) print("Hyperlinks added to the Summary tab. ") if __name__ == "__main__": # Name of the Google Sheet google_sheet_name = "NAME OF YOUR SHEET" # Connect to the Google Sheet sheet = connect_to_google_sheets(google_sheet_name) # Add hyperlinks to the Summary tab if sheet: add_hyperlinks_to_summary(sheet) This script adds clickable links in the Summary tab, directing users to the corresponding tabs for each issue. 
Benefits of This Automation Saves Time: Automates tedious tasks like creating tabs and linking them. Processes hundreds of rows in seconds. Improves Accuracy: Reduces human errors when manually copying and organising data. Streamlines Navigation: Hyperlinks in the Summary tab make it quick to jump to each issue's data. Customisable: Adapt the scripts to your specific needs, such as custom headers or formatting. How to Use the Scripts Create a credentials.json file for Google Sheets. Set Up the Environment: Install Python and the required libraries (for example: pip install pandas gspread fuzzywuzzy oauth2client). Download your Screaming Frog Issues Overview Report and issue-specific CSV files. Run the Scripts: Use Script 1 to process the data and organise it into tabs. Use Script 2 to add hyperlinks in the Summary tab. Verify the Output: Open your Google Sheet and confirm that: The Summary tab is complete. Each issue has its own tab with relevant data. Hyperlinks in the Summary tab point to the correct tabs. These scripts don't just simplify the management of Screaming Frog data; they also make your SEO workflows more efficient. By leveraging Python and Google Sheets, you no longer have to work inside Screaming Frog's old-school UI, and its issue reports become a well-structured, easy-to-navigate resource that saves time and lets you get on with the analysis and audit. --- - Published: 2025-02-03 - Modified: 2025-02-09 - URL: https://stringerseo.co.uk/technical/how-to-modify-all-wordpress-links-for-a-reverse-proxy-setup/ - Categories: Technical If you are hosting your WordPress blog or website behind a reverse proxy under a different domain, you might experience a range of issues where internal links, canonical URLs, Open Graph metadata, JavaScript, CSS files, and other assets keep referring to the original WordPress subdomain. This can create problems for SEO, impact page-speed performance, and hurt consistency in the user experience. In this guide, we'll walk through the process of rewriting all WordPress-generated links dynamically, ensuring that your site properly reflects the reverse proxy domain. Why Rewriting URLs is Necessary When setting up a reverse proxy, WordPress will still generate URLs pointing to its original domain unless explicitly instructed otherwise. This leads to inconsistencies, such as: Canonical URLs and meta tags still pointing to the old domain. Open Graph and structured data referencing incorrect URLs. RSS feeds pointing to the wrong domain. Hardcoded JavaScript and CSS file links breaking due to CORS issues. Navigation links, widgets, and internal post links not updating. To resolve these, we need a global URL rewriting solution that modifies all links dynamically. How to Build a WordPress Plugin to Rewrite URLs in Links Instead of manually adjusting each template or modifying WordPress settings, we'll use a custom WordPress plugin that automatically rewrites all URLs site-wide. Steps to Create the Plugin Create a new PHP file: Name it reverse-proxy-link-rewriter.php. Add the following PHP code to rewrite all WordPress-generated links dynamically. Place the file in /wp-content/plugins/. The PHP Plugin Code --- > HubSpot Blog Post Export: Find out how to clean them up before importing into WordPress. This technique is useful for blog migrations.
- Published: 2025-01-31 - Modified: 2025-11-26 - URL: https://stringerseo.co.uk/technical/hubspot-blog-posts/ - Categories: Technical HubSpot makes it fairly straightforward to export blog content, but the HTML that comes out is often full of inline styles, classes and IDs. When that HTML is pasted into WordPress, it can clash with the theme’s styles, create inconsistent typography and generally make templates harder to maintain. In this guide, a practical workflow is outlined for using Python, pandas, gspread and BeautifulSoup to clean HubSpot blog exports in bulk via Google Sheets. The script removes unwanted attributes from key text elements while leaving links and images intact, so content is ready to paste into WordPress with minimal manual editing. Contents Why clean HubSpot exports before moving to WordPress? What you’ll need Step 1 – Export blog posts from HubSpot Step 2 – Set up Google Sheets API access Step 3 – Add the Python script Step 4 – Run the script and review the output How the clean_html function works Performance and real-world usage tips Versions, assumptions and limitations Troubleshooting common errors Taking this further Why clean HubSpot exports before moving to WordPress? HubSpot’s blog editor applies its own classes, IDs and inline styles to headings, paragraphs and other elements. That works within HubSpot’s templates, but the same styling can: Override or clash with WordPress theme styles. Make typography and spacing inconsistent between migrated and native posts. Add unnecessary code bloat to the HTML. Cleaning exported HTML before migration helps: Keep styling under the control of the WordPress theme and block styles. Improve consistency across all posts. Reduce layout bugs caused by legacy HubSpot styling. Doing this by hand inside each post is time-consuming. A small Python script plus Google Sheets can automate most of the work. What you’ll need A HubSpot account with permission to export blog posts. A WordPress site where the posts will ultimately live. A Google account with access to Google Sheets. Python 3 (3. 10+ recommended) installed locally or on a server. The following Python packages: gspread – for Google Sheets access (gspread docs) pandas – for working with tabular data (pandas docs) beautifulsoup4 – for HTML parsing (BeautifulSoup docs) google-auth – for Google API authentication Install the packages with: pip install gspread pandas beautifulsoup4 google-auth Step 1 – Export blog posts from HubSpot The exact menu labels in HubSpot can change over time, but the export flow is broadly: In HubSpot, go to Content > Blog (or Marketing > Website > Blog, depending on the account layout). Choose the relevant blog if there are multiple. Use the actions menu to select Export blog posts. Select the fields needed (for example “Title”, “Content”, “Meta Description”). Export as a CSV file and download it. HubSpot’s official documentation on exporting blog content provides more detail and screenshots: HubSpot – Export your blog posts. Once the CSV has been downloaded, import it into a Google Sheet. A fresh Sheet with a tab named something like HubSpot Export keeps things organised. Step 2 – Set up Google Sheets API access The script uses a Google service account to read from and write to a Sheet. Google’s official quickstart shows the overall process in detail: Google Sheets API – Python quickstart. In summary: Create a Google Cloud project. Enable the Google Sheets API and (optionally) the Google Drive API. 
Create a service account for the project and generate a JSON key file, then download it as credentials. json into the project folder. In Google Sheets, share the Sheet with the service account’s email address (usually something like your-project-name@your-project-id. iam. gserviceaccount. com). This gives the Python script permission to read from the “HubSpot Export” tab and write cleaned content into a separate tab in the same Spreadsheet. Step 3 – Add the Python script The script below connects to Google Sheets, reads the exported HubSpot data, cleans the HTML in each cell and writes the results to a new worksheet. It implements: A safer header-handling pattern (the header row is kept as column names, not treated as data). Element-wise cleaning using DataFrame. applymap for broad pandas compatibility. Optional Drive scope to avoid issues when listing worksheets. import html import unicodedata import gspread import pandas as pd from bs4 import BeautifulSoup from google. oauth2. service_account import Credentials # === Configuration === SERVICE_ACCOUNT_FILE = "credentials. json" SCOPES = SPREADSHEET_ID = "YOUR_SPREADSHEET_ID" # Replace with your Sheet ID INPUT_SHEET_NAME = "HubSpot Export" OUTPUT_SHEET_NAME = "Cleaned Export" def clean_html(html_content): """ Clean a single cell of HTML. - Decodes HTML entities. - Normalises Unicode. - Strips non-breaking spaces. - Removes style, class and id attributes from common text elements. - Leaves links, images and structural elements untouched. """ # Skip non-strings and very large cells if not isinstance(html_content, str) or len(html_content) > 5000: return html_content # Decode HTML entities html_content = html. unescape(html_content) # Normalise Unicode html_content = unicodedata. normalize("NFKC", html_content) # Remove stray invalid bytes html_content = html_content. encode("utf-8", "ignore"). decode("utf-8") # Replace non-breaking spaces and trim html_content = html_content. replace("\xa0", " "). strip # If it does not look like HTML, return as-is if "" not in html_content: return html_content # Parse the HTML soup = BeautifulSoup(html_content, "html. parser") # Only clean common text elements; leave links/images/layout elements alone tags_to_clean = tags_to_clean += for tag in soup. find_all(tags_to_clean): for attr in : tag. attrs. pop(attr, None) return str(soup) def main: # Authenticate with the service account creds = Credentials. from_service_account_file( SERVICE_ACCOUNT_FILE, scopes=SCOPES, ) client = gspread. authorize(creds) # Open the spreadsheet spreadsheet = client. open_by_key(SPREADSHEET_ID) # Get the input worksheet input_worksheet = spreadsheet. worksheet(INPUT_SHEET_NAME) # Get or create the output worksheet worksheet_titles = if OUTPUT_SHEET_NAME in worksheet_titles: output_worksheet = spreadsheet. worksheet(OUTPUT_SHEET_NAME) else: output_worksheet = spreadsheet. add_worksheet( title=OUTPUT_SHEET_NAME, rows=1000, cols=50, ) # Read all data from the input sheet data = input_worksheet. get_all_values if not data: print("No data found in input sheet. ") return header, *rows = data if not rows: print("No data rows found in input sheet. ") return # Build a DataFrame with the header row as column names df = pd. DataFrame(rows, columns=header) # Apply cleaning to every cell # applymap is supported in more pandas versions than DataFrame. map df = df. applymap(clean_html) # Write the cleaned data back to the output sheet output_worksheet. clear output_worksheet. update( + df. values. tolist) print(f"Cleaning complete. 
Data written to worksheet: {OUTPUT_SHEET_NAME! r}") if __name__ == "__main__": main Step 4 – Run the script and review the output With the configuration updated (Spreadsheet ID and sheet names), run the script from the command line: python clean_hubspot_export. py When it completes, the Google Sheet will contain a new tab called something like Cleaned Export. This will have the same column structure as the original export, but with cleaned HTML in each cell. The next steps are: Open a post in WordPress. Switch to the HTML/Code editor (or use a Custom HTML block). Copy the cleaned HTML from the relevant cell in the Sheet. Paste it into WordPress and preview it in the front-end theme. Headings, paragraphs and emphasis should now respect the theme’s styling, without legacy HubSpot inline styles overriding anything. How the clean_html function works The clean_html function is deliberately conservative so that content is made safer for WordPress without breaking layout or media. It decodes HTML entities using Python’s html. unescape, so characters such as   and ’ become plain Unicode text. It normalises Unicode with the standard library’s unicodedata. normalize to reduce odd character variants that sometimes appear after exports. It removes non-breaking spaces (\xa0) and trims whitespace to tidy paragraph text. It only cleans specific tags:Headings –Paragraphs Emphasis and strong tags: , , , , For these tags it strips style, class and id attributes. It leaves links, images and layout elements such as , , , , and untouched. This helps preserve structure and media. It skips obviously non-HTML content and very large cells (over 5,000 characters) to avoid wasting time on plain text columns and to guard against pathological cases. This approach follows typical patterns described in the BeautifulSoup documentation for cleaning attributes from tags while preserving the underlying HTML: see the Modifying the tree section in the official docs for more examples. Performance and real-world usage tips The example above applies clean_html to every cell in the Sheet. For small to medium exports, that is perfectly acceptable. For larger datasets or very wide sheets, performance can be improved by: Restricting cleaning to the column that contains post body HTML, for example: content_column = "Content" # change to match the export column name df = df. apply(clean_html) Splitting extremely large exports across multiple worksheets, then running the script per worksheet. Running the script on a machine with a stable connection to Google’s APIs and avoiding very aggressive reruns (for example, not running the entire migration every few minutes). Versions, assumptions and limitations To keep the example focused, a few assumptions are made: Python version: any reasonably up-to-date Python 3 version should work. Python 3. 10+ is recommended. pandas version: the script uses DataFrame. applymap, which is available in mainstream pandas releases. If working with a very old pandas version, it is worth checking the official pandas applymap documentation for any behavioural differences. Google access: the service account must have access to the Sheet, and the Sheets API must be enabled in the Google Cloud project. HTML scope: only a specific set of text-related tags is cleaned. If HubSpot adds important styling to other elements (for example custom cards or layout blocks), extra rules may be needed. Length cap: the 5,000-character length check is a pragmatic safeguard. 
For extremely long posts stored in a single cell, this value can be increased. For a detailed view of the authentication and authorisation flow used by this pattern, the official Google Sheets API documentation is the best reference point. Troubleshooting common errors When working with Google Sheets and external libraries, a few common issues tend to appear. Below are some quick checks that reflect real-world experience with this kind of script: SpreadsheetNotFound or similar errors Check that the correct SPREADSHEET_ID has been copied from the Sheet URL. Confirm that the Sheet has been shared with the service account’s email address. Verify that the service account JSON file path in SERVICE_ACCOUNT_FILE is correct. WorksheetNotFound for the input worksheet Make sure that the tab name in Google Sheets matches INPUT_SHEET_NAME exactly, including spaces and capital letters. If the export tab has a different name, either rename it in Sheets or adjust the configuration in the script. AttributeError related to applymap If using a very old version of pandas, double check that applymap is available on DataFrame. If not, upgrading pandas to a current version is usually the simplest fix. Slow performance or suspected rate limits For very large sheets, consider cleaning only the content column instead of every cell. Batching work across multiple runs or worksheets can help avoid hitting Google API quotas in a single burst. For detailed exceptions and usage examples, the gspread documentation is a useful companion to this script. Taking this further The basic pattern here is flexible. It can be extended to handle: Extra attribute cleaning for other tags such as and , if layout classes are not required in WordPress. Custom find-and-replace operations for HubSpot-specific markup patterns. Automated checks for heading levels, empty paragraphs or legacy shortcodes. Because the heavy lifting is in Python, changes can be tested on a subset of posts first, then rolled out to an entire export with confidence. Combined with a clear internal process for redirects and URL mapping, this kind of cleaning step helps make HubSpot-to-WordPress migrations both cleaner and more predictable from a technical SEO perspective. --- > Streamline your SEO migration with this Python script. Automate URL mapping using page titles, the body of content and fuzzy matching. - Published: 2025-01-20 - Modified: 2025-11-26 - URL: https://stringerseo.co.uk/technical/seo-migration-automate-url-mapping-with-python/ - Categories: Technical In any SEO and website migration project, accurately mapping old URLs to new ones is critical for maintaining search engine rankings and minimising disruptions. Google’s own guidance on site moves with URL changes stresses the importance of carefully planned redirects when URLs change. The following Python scripts show how to streamline this process by comparing page titles or on-page content for similarity and generating a URL mapping file that can feed into a 301 redirect plan. They are designed as practical helpers – not full replacements for human review – so that SEO specialists and developers can focus their time on edge cases and strategy rather than manual copying and pasting. 
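As a preview of where this process leads, the sketch below shows how a reviewed mapping file could be turned into web-server redirect rules. It assumes the url_mapping.csv produced by the title-matching script later in this article, with its "Old URL" and "New URL" columns, and prints Apache-style Redirect directives (adapt for Nginx, a CMS redirect manager or a CDN as needed):

```python
# Turn a reviewed url_mapping.csv into simple "Redirect 301" lines.
# Assumes the "Old URL" and "New URL" columns produced by the script below.
from urllib.parse import urlparse
import pandas as pd

mapping = pd.read_csv("url_mapping.csv")

for _, row in mapping.iterrows():
    old_url, new_url = row["Old URL"], row["New URL"]
    if pd.isna(new_url):  # no match above the threshold; handle these manually
        continue
    old_path = urlparse(old_url).path or "/"
    print(f"Redirect 301 {old_path} {new_url}")
```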
Table of contents Script overview: page title matching for URL mapping Key features Code implementation Steps to use the title-matching script Extending the script for content matching Limitations, performance and best practice Frequently asked questions External resources and further reading Script overview: page title matching for URL mapping This Python script is designed to match old URLs to new URLs based on page titles using fuzzy matching techniques. By automating this part of a migration, it saves time and reduces the risk of manual errors when you are mapping hundreds of similar-looking URLs. Key features Fetch page titles: Uses requests and BeautifulSoup to extract page titles from provided URLs. For more detail, see the Requests quickstart and the Beautiful Soup documentation. Fuzzy matching: Leverages the RapidFuzz library’s fuzz. ratio to calculate similarity scores between old and new page titles. See the RapidFuzz fuzz. ratio docs for details. CSV input/output: Reads old and new URLs from a CSV file and generates a mapped output file with match scores using pandas. read_csv and DataFrame. to_csv. You can find more on this in the pandas read_csv documentation. Custom threshold: Allows users to set a similarity threshold (0–100) for better control over matches, so only strong title matches are used. Code implementation Below is the script for page title-based URL mapping: import requests from bs4 import BeautifulSoup import pandas as pd from rapidfuzz import fuzz def fetch_page_title(url): """ Fetches the page title for a given URL. Uses the Requests library to retrieve the HTML and BeautifulSoup to parse the element. For production use you may want to: - Add a custom User-Agent header - Handle retries / backoff for transient errors - Respect robots. txt and crawl-delay settings """ try: response = requests. get(url, timeout=10) response. raise_for_status soup = BeautifulSoup(response. text, "html. parser") title = soup. title. string. strip if soup. title else None return title except Exception as e: print(f"Error fetching title for {url}: {e}") return None def map_urls_by_titles(old_urls, new_urls, threshold=80): """ Maps old URLs to new URLs based on their page titles using fuzzy matching. Parameters: old_urls (list): List of old URLs to map from. new_urls (list): List of new URLs to map to. threshold (int): Minimum similarity score to consider a match (0-100). Returns: DataFrame: A mapping of old URLs to new URLs based on page titles and match scores. Notes: - This script works well for small to medium sets of URLs (hundreds to a few thousand). - For very large migrations, consider batching, caching, or more advanced RapidFuzz APIs to improve performance. """ old_titles = {url: fetch_page_title(url) for url in old_urls} new_titles = {url: fetch_page_title(url) for url in new_urls} mappings = # Create mapping by fuzzy matching titles for old_url, old_title in old_titles. items: best_match = None highest_score = 0 for new_url, new_title in new_titles. items: if old_title and new_title: # Calculate similarity score (0-100) score = fuzz. ratio(old_title, new_title) if score > highest_score: highest_score = score best_match = new_url # Add match details to the mapping mappings. append( { "Old URL": old_url, "Old Title": old_title, "New URL": best_match if highest_score >= threshold else None, "New Title": new_titles. get(best_match, None) if best_match else None, "Match Score": highest_score, } ) return pd. 
DataFrame(mappings) def read_urls_from_csv(file_path): """ Reads old and new URLs from a CSV file. The CSV should have two columns: 'Old URL' and 'New URL'. Example format: Old URL,New URL https://oldsite. com/page1,https://newsite. com/page-a https://oldsite. com/page2,https://newsite. com/page-b """ try: data = pd. read_csv(file_path) old_urls = data. dropna. tolist new_urls = data. dropna. tolist return old_urls, new_urls except Exception as e: print(f"Error reading URLs from CSV: {e}") return , if __name__ == "__main__": # Input and output file paths input_csv = "urls. csv" # Replace with your CSV file path output_csv = "url_mapping. csv" # Read URLs from the CSV file old_urls, new_urls = read_urls_from_csv(input_csv) if not old_urls or not new_urls: print("No URLs found in the input file. Please check the CSV format. ") else: # Generate the URL mapping url_mapping = map_urls_by_titles(old_urls, new_urls, threshold=80) # Save the mapping to a CSV file url_mapping. to_csv(output_csv, index=False) print(f"URL mapping saved to {output_csv}") Steps to use the title-matching script Install required librariesInstall the Python libraries if they are not already available in your environment: pip install requests beautifulsoup4 pandas rapidfuzz Prepare the input CSVCreate a CSV file (for example urls. csv) with two columns: Old URL and New URL. Each row should represent a potential mapping candidate between an old URL and the new URL that may replace it: Old URL,New URL https://oldsite. com/page1,https://newsite. com/page-a https://oldsite. com/page2,https://newsite. com/page-b https://oldsite. com/page3,https://newsite. com/page-c The script will use the titles of these URLs to suggest the best match for each old URL. Run the scriptSave the script to a file, for example seo_migration_titles. py, in the same directory as your CSV file, and run: python seo_migration_titles. py Review the outputThe script generates a file such as url_mapping. csv containing:Old URLOld TitleNew URL (if a match passes the threshold)New TitleMatch Score (0–100)Use this as a starting point for your 301 redirect rules. Make sure a human reviews the mappings before deploying them to production. Why might some URLs be missing from the output? Threshold too high: If the similarity score does not meet the threshold (default is 80), the New URL will be None. Lower the threshold slightly (for example to 70–75) if you want to see more candidate matches, then manually review. Titles not found: If the script fails to fetch page titles for the URLs (due to timeouts, blocked requests, incorrect URLs or empty titles), matching cannot proceed. Check those URLs manually or adjust your timeout / error handling. Extending the script for content matching To go beyond page titles, the script can also compare other on-page elements such as meta descriptions, H1s, body text, and image alt attributes. This can be useful when titles are generic or have been rewritten during a redesign. The following script demonstrates a more in-depth content comparison. Note that this version calculates a composite similarity score by summing the similarity of several components; it is not restricted to a 0–100 range. That means the threshold is a minimum total score rather than a simple percentage. import requests from bs4 import BeautifulSoup import pandas as pd from rapidfuzz import fuzz def fetch_page_content(url): """ Fetches on-page content for a given URL, including title, meta description, H1, body text, and image alt attributes. 
""" try: response = requests. get(url, timeout=10) response. raise_for_status soup = BeautifulSoup(response. text, "html. parser") # Extract relevant content title = soup. title. string. strip if soup. title else None meta_description = soup. find("meta", attrs={"name": "description"}) meta_description = ( meta_description. strip if meta_description else None ) h1 = soup. find("h1") h1 = h1. text. strip if h1 else None body_text = " ". join images = for img in soup. find_all("img", alt=True)] return { "title": title, "meta_description": meta_description, "h1": h1, "body_text": body_text, "images": images, } except Exception as e: print(f"Error fetching content for {url}: {e}") return None def calculate_similarity(old_content, new_content): """ Calculates a composite similarity score between two sets of page content. The score is a sum of: - Title similarity (0-100, if present) - Meta description similarity (0-100, if present) - H1 similarity (0-100, if present) - Body text similarity (0-100, if present) - Image alt text similarity (sum of best matches per image) As a result, the total score can exceed 100. In practice, strong matches often end up in the low hundreds, depending on how many components are present and how similar they are. """ total_score = 0 components = # Compare textual components for component in components: old = old_content. get(component, "") new = new_content. get(component, "") if old and new: total_score += fuzz. ratio(old, new) # Compare images (using alt text similarity) old_images = old_content. get("images", ) new_images = new_content. get("images", ) image_score = 0 if old_images and new_images: for old_img in old_images: best_img_score = max(fuzz. ratio(old_img, new_img) for new_img in new_images) image_score += best_img_score total_score += image_score return total_score def map_urls_by_content(old_urls, new_urls, threshold=200): """ Maps old URLs to new URLs based on the closest match of on-page content. Parameters: old_urls (list): List of old URLs to map from. new_urls (list): List of new URLs to map to. threshold (int): Minimum composite similarity score to consider a match. Returns: DataFrame: A mapping of old URLs to new URLs based on content similarity scores. Notes: - Because the similarity score is a sum of multiple components, typical "good" matches may land somewhere between 200-400, depending on how much content is on the page. - Start with a threshold around 200-250, inspect the results, then adjust up or down based on your site. """ old_contents = {url: fetch_page_content(url) for url in old_urls} new_contents = {url: fetch_page_content(url) for url in new_urls} mappings = for old_url, old_content in old_contents. items: best_match = None highest_score = 0 for new_url, new_content in new_contents. items: if old_content and new_content: score = calculate_similarity(old_content, new_content) if score > highest_score: highest_score = score best_match = new_url # Add match details to the mapping mappings. append( { "Old URL": old_url, "New URL": best_match if highest_score >= threshold else None, "Similarity Score": highest_score, } ) return pd. DataFrame(mappings) def read_urls_from_csv(file_path): """ Reads old and new URLs from a CSV file. The CSV should have two columns: 'Old URL' and 'New URL'. """ try: data = pd. read_csv(file_path) old_urls = data. dropna. tolist new_urls = data. dropna. 
tolist return old_urls, new_urls except Exception as e: print(f"Error reading URLs from CSV: {e}") return , if __name__ == "__main__": # Input and output file paths input_csv = "urls. csv" # Replace with your CSV file path output_csv = "url_mapping_content. csv" # Read URLs from the CSV file old_urls, new_urls = read_urls_from_csv(input_csv) if not old_urls or not new_urls: print("No URLs found in the input file. Please check the CSV format. ") else: # Generate the URL mapping url_mapping = map_urls_by_content(old_urls, new_urls, threshold=200) # Save the mapping to a CSV file url_mapping. to_csv(output_csv, index=False) print(f"URL mapping saved to {output_csv}") By analysing these additional components, SEO specialists can ensure even closer matches between old and new URLs during migrations, particularly on content-heavy sites where titles alone are not enough. Limitations, performance and best practice These scripts are intended as practical aids to accelerate URL mapping, not as fully autonomous redirect engines. To keep migrations safe and aligned with best practice, keep the following in mind: Always perform human QA: Treat the output CSV as a set of suggestions. Review high-value and borderline mappings manually before implementing redirects, and spot-check samples from lower-traffic areas. Scale and performance: Both approaches do pairwise comparisons between old and new URLs, which means they are roughly O(n²). They are usually fine for hundreds or a few thousand URLs, but for very large sites you may need: More efficient matching strategies (for example, RapidFuzz’s process helpers or blocking by directory / section). Caching of fetched HTML to avoid repeated downloads. Batching the migration by directory or site section. Respect crawling etiquette: When fetching pages at scale, make sure you: Respect the site’s robots. txt and any crawl-delay guidance. Use a reasonable timeout and rate limiting. Send an appropriate User-Agent string and avoid overwhelming servers. Redirect implementation: Once mappings have been reviewed, implement 301 redirects in line with Google’s site move and redirect guidance: Avoid redirect chains and loops where possible. Keep old URLs redirecting for a suitable period after the migration. Monitor Search Console for crawl errors and coverage changes. Frequently asked questions Does this script replace a manual redirect audit? No. The scripts are designed to reduce repetitive work and highlight likely matches, but a human still needs to review critical mappings, handle edge cases, and decide how to treat URLs that have no clear destination. What similarity threshold should I use? For the title-based script, a threshold of around 80 is a good starting point. If you find too few matches, experiment with 70–75 and review additional candidates manually. For the content-based script, the score is a sum across multiple components, so typical strong matches may land in the 200–400 range. Start around 200–250, inspect the results, then adjust the threshold up or down based on how conservative you want to be. Can I use these scripts for very large websites? In principle, yes – but out of the box they are best suited to small-to-medium migrations. For large sites (tens of thousands of URLs or more), consider processing one section at a time, caching responses, and using more advanced RapidFuzz utilities to avoid doing every possible pairwise comparison. Where should I implement the redirects? That depends on your stack. 
Common options include web server configuration (for example, Apache or Nginx), a reverse proxy, a CMS-level redirect manager, or edge-worker logic on a CDN. Whatever the mechanism, make sure redirects are tested in a staging environment before going live. External resources and further reading Requests quickstart – official docs Beautiful Soup 4 documentation pandas.read_csv – official docs RapidFuzz fuzz.ratio usage Google Search Central – Site moves with URL changes Google Search Central – Redirects and Google Search --- ---