Keyword Cannibalization. Duplicate Content. Crawl Priority. All of these are inherently SEO jargon, which can obscure the significance of such problems to boards, exec teams, or senior management. These problems sometimes stem from information architecture issues or limitations of the CMS. However, keyword cannibalization can also be the symptom of a much larger, strategic problem for a company, one that can significantly reduce realized revenue. Let’s step out of SEO for a moment and look at some of the economics of such a problem.
I’ve been working on a number of technical projects recently and wanted to share a small fix for an issue that doesn’t always have a great solution.
Graph theory is fundamental to most of the work done in SEO and social media. Everything from PageRank and EdgeRank to spam analysis, clustering, and implicit / explicit social graphs is built on graph theory. Although successful SEO can be executed without ever knowing the difference between a node and an edge, a basic understanding of graph theory can help an SEO make the intellectual leap to better understand how search engines can view and analyze data.
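To make nodes and edges concrete, here’s a minimal sketch of PageRank as power iteration over a toy link graph. This is illustrative only, not any engine’s actual implementation; the graph, function names, and iteration count are my own assumptions.

```python
# Minimal power-iteration PageRank over a toy link graph.
# Illustrative sketch only -- real engines operate at vastly larger
# scale and fold in many additional signals.

DAMPING = 0.85  # standard damping factor from the original PageRank paper

def pagerank(graph, iterations=50):
    """graph: dict mapping each node to the list of nodes it links to."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}  # start with uniform rank
    for _ in range(iterations):
        new_rank = {node: (1.0 - DAMPING) / n for node in nodes}
        for node, outlinks in graph.items():
            if outlinks:
                # each outlink receives an equal share of this node's rank
                share = DAMPING * rank[node] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # dangling node: spread its rank evenly across the graph
                for target in nodes:
                    new_rank[target] += DAMPING * rank[node] / n
        rank = new_rank
    return rank

# Hypothetical three-page site: pages are nodes, links are edges.
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
ranks = pagerank(links)
```

Here “home” ends up with the highest rank simply because both other pages link to it, which is the core intuition: rank flows along edges of the graph.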
Entities. They’re nothing new in SEO, but over the last year, I’ve been ruminating on how moving into entities should fundamentally change the way most of us are still thinking about SEO. To start, let’s first step back and look at the history of the algorithms. Moving from Pages to Entities: In the early …
Link building is full of time-consuming, repetitive tasks. I’ve written once before about humanistic scaling, which works great as a strategic approach to link building, but ultimately, there will always be a degree of heavy lifting that requires someone to sit down and work through a bunch of repetitive tasks. This includes tasks like advanced searches, prospecting, contact detail discovery, following up with content writers, sending emails, reminding yourself to follow up, and copying information from tool to tool or source to tool with no streamlined workflow.
I’d like to share some useful tools that can help you scale these tasks (some of them I use all the time, others I’ve poked around with at times, and others are just cool and useful when needed).
To set the stage, yesterday Google announced a new “Freshness” update, which affected 35% of all queries. Barry wrote a great post about it on Search Engine Land, and Rand and Mike followed up with a great Whiteboard Friday documenting some of the early observations from SEOs. But beyond the news of the update itself, I’d like to look at some of the methodologies search engines may have used to implement it.
SEO is an interesting problem to solve from a product development and engineering perspective. More so than other online marketing channels, SEO is heavily dependent on having access to the engineering queue. Without getting SEO tasks into that queue, much of SEO is dead in the water. The inability to get into this queue is one reason SEO fails at many organizations.
Google sometimes does phrase matching even when the search query isn’t in quotes. They’ll also include words that are traditionally treated as search operators and stop words if they’re part of a popular phrase. It seems Google is getting increasingly better at understanding language.
Understanding how Google indexes language using phrase-based analysis, and how it analyzes those terms semantically, can help you better understand on-site targeting and link building.
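To illustrate the difference between bag-of-keywords matching and phrase matching, here’s a hypothetical sketch: extract a document’s n-grams and check whether the full query survives as a contiguous phrase, stop words included. The function names and example text are my own; this is not Google’s actual mechanism.

```python
# Hypothetical sketch of phrase matching: a query counts as a phrase
# match only if it occurs contiguously in the document (stop words
# kept), not merely as a bag of individual keywords.

def ngrams(text, n):
    """Return the set of contiguous n-word sequences in the text."""
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def phrase_match(query, document):
    """True only if the full query occurs as one contiguous phrase."""
    tokens = query.lower().split()
    return " ".join(tokens) in ngrams(document, len(tokens))

doc = "to be or not to be is the famous question"
hit = phrase_match("to be or not to be", doc)    # contiguous, stop words kept
miss = phrase_match("famous be question to", doc)  # same words, not a phrase
```

The first query is almost entirely stop words, yet it matches because it exists as an intact, popular phrase; the second contains the same vocabulary but fails, which is the behavior described above.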