I have always believed in standardization, which begins when the big competitors reach agreements that ultimately benefit end users. The semantic web points in that direction, and increasingly we see companies like Google, Microsoft (Bing) and Yahoo backing these initiatives.
Let’s start with how they define the semantic web: adding semantic metadata that is closer to human language. That was the idea behind HTML5.
The basic skeleton of a website will always be a header, a list of navigation links, the main content, and a footer. So, if we need to lay out a page and apply CSS to it, we would traditionally use a scheme similar to this:
```html
<!DOCTYPE html>
<html lang="en">
<head>
    <title>Page title</title>
</head>
<body>
    <div class="header">
        <!-- The site header -->
    </div>
    <div class="navigation">
        <!-- The navigation links -->
    </div>
    <div class="content">
        <!-- The page content -->
    </div>
    <div class="footer">
        <!-- The footer with the author credits -->
    </div>
</body>
</html>
```
Although “class” is a natural attribute of any “div”, what the semantic web aims for is that each group of elements in the markup has its own tag name, instead of generic containers distinguished only by class names. Under the same scheme as above, but with HTML5, we have:
```html
<!DOCTYPE html>
<html lang="en">
<head>
    <title>Page title</title>
</head>
<body>
    <header>
        <!-- The site header -->
    </header>
    <nav>
        <!-- The navigation links -->
    </nav>
    <article>
        <!-- The page content -->
    </article>
    <footer>
        <!-- The footer with the author credits -->
    </footer>
</body>
</html>
```
As we can see, the “class” attribute becomes superfluous: we can target each element directly, without applying ids or classes, which gives us markup that is more comfortable to lay out.
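To illustrate that point, here is a minimal sketch of a stylesheet for the HTML5 version (the selectors and style values are my own, chosen only as an example): the elements are addressed by their tag names alone.

```html
<style>
    /* Style the semantic elements directly; no classes or ids needed */
    header  { background: #eee; padding: 1em; }
    nav     { display: flex; gap: 1em; }
    article { max-width: 40em; margin: 0 auto; }
    footer  { font-size: 0.8em; color: #666; }
</style>
```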
Another advantage of the semantic web is that search engines can index the content directly, without having to resort to an algorithm that guesses what is or is not valuable in that content.
But what counts as valuable content? Well, nobody knows for sure how a search engine’s indexing works. Google, to take one example, offers recommendations on how to optimize our website for successful indexing, but those recommendations do not actually guarantee the first positions in the search results; and if we overdo them, we can end up excluded from the results without any kind of notice from the search engine. In other words, it’s Russian roulette.
Now, HTML5 helps a lot to get closer to a semantic web, but it seems that it is not entirely enough for search engines. So an alternative backed by Google, Bing and Yahoo emerged that aims to become the new foundation of SEO for the whole web: Schema.org.
The Schema.org model has its advantages: by being more specific about what each object in the markup is, it expands the range of data types beyond what microformats offer. It also makes it much easier to work with the DOM without applying ids.
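As a sketch of that last point (this example is my own, not taken from Schema.org’s documentation), script code can reach the semantic elements of the earlier layout directly by tag name:

```html
<script>
    // Grab the semantic elements by tag name; no ids or classes required
    var article = document.querySelector('article');
    var nav = document.querySelector('nav');

    // For example, count the links inside the navigation block
    var linkCount = nav ? nav.querySelectorAll('a').length : 0;
    console.log('Navigation links: ' + linkCount);
</script>
```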
Schema.org is made up of a wide hierarchy of data types that cover numbers, text, URLs, postal addresses, birth dates, books, events, movies, sculptures, paintings, and so on. Three basic attributes (itemscope, itemtype, itemprop) are used to declare the data types in the markup. Under this modality we have:
```html
<div itemscope itemtype="http://schema.org/Article">
    <h2 itemprop="headline">Article title</h2>
    Written by <span itemprop="author">JuniHH</span>
    This is the content of the article.
    <meta itemprop="interactionCount" content="UserComments:78"/>
</div>
```
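To give a sense of how the same three attributes apply to other types in the hierarchy, here is a sketch of an Event marked up the same way (the event name, date, and place are invented values for illustration):

```html
<div itemscope itemtype="http://schema.org/Event">
    <h2 itemprop="name">Web Standards Meetup</h2>
    <time itemprop="startDate" datetime="2012-05-08T19:00">May 8, 7:00 pm</time>
    <span itemprop="location">Santo Domingo</span>
</div>
```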
It may seem confusing at first glance, but after digging in a little we see the richness of the vocabulary, which is necessary for it to be considered a standard. Personally, I am betting on this alternative, which clearly aims to be the new basis of SEO. It is currently in draft status and it will be a few years before developers adopt it, but it is a very well-grounded proposal and the direction the new markup is headed. The same thing already happened with sitemaps.
With Schema.org, the success of indexing falls back into the hands of the site’s developer and is not left to the discretion of a search engine, which, as I said before, is Russian roulette.