The domain authority of a website describes its relevance for a specific subject area or industry. This relevance has a direct impact on the website's ranking by search engines, which try to assess domain authority through automated analytic algorithms. The influence of domain authority on how websites are listed in the search engine results pages (SERPs) gave rise to a whole industry of Black Hat SEO providers who try to feign an increased level of domain authority. The rankings computed by major search engines, e.g., Google's PageRank, are agnostic of specific industries or subject areas and assess a website in the context of the totality of websites on the Internet. The results on the SERP set the PageRank in the context of a specific keyword. In a less competitive subject area, even websites with a low PageRank can achieve high visibility, as the highest-ranked sites matching the specific search words occupy the top positions in the SERPs.
The weight of these factors varies depending on who performs the ranking. When individuals judge domain authority, decisive factors can include the prestige of a website, the prestige of the contributing authors in a specific domain, the quality and relevance of the information on the website, the novelty of the content, the competitive situation around the discussed subject area, and the quality of the outgoing links. Several search engines have developed automated analyses and ranking algorithms for domain authority. Lacking the "human reasoning" that would allow them to judge quality directly, they rely on complementary parameters such as the prestige of the information or website and its centrality from a graph-theoretical perspective, manifested in the quantity and quality of inbound links. The Software as a Service company Moz has developed an algorithm and weighted metric, branded as "Domain Authority", which predicts a website's performance in search engine rankings on a scale from 0 to 100.
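As an illustration only, the following Python sketch shows the general mechanics of such a weighted metric: several factors, each assumed to be pre-normalized to the range 0 to 1, are combined with weights and scaled to a 0 to 100 score. Moz's actual model is proprietary; the factor names and weights here are invented for the example.

    # Illustrative only: Moz's actual "Domain Authority" model is proprietary.
    # This sketch shows the general idea of a weighted metric that combines
    # several normalized factors into a single 0-100 score. Factor names and
    # weights are invented for the example.

    def weighted_score(factors: dict[str, float], weights: dict[str, float]) -> float:
        """Each factor is assumed pre-normalized to the range 0..1."""
        total_weight = sum(weights.values())
        raw = sum(weights[name] * factors.get(name, 0.0) for name in weights)
        return 100 * raw / total_weight

    factors = {"link_quality": 0.8, "link_quantity": 0.5, "content_freshness": 0.6}
    weights = {"link_quality": 0.5, "link_quantity": 0.3, "content_freshness": 0.2}
    print(round(weighted_score(factors, weights), 1))  # 67.0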
Prestige of websites and authors
Prestige identifies the prominent actors in a qualitative and quantitative manner on the basis of graph theory. A website is considered a node. Its prestige is defined by the quantity of nodes that have directed edges pointing to the website, and by the quality of those nodes. The nodes' quality is in turn defined by their own prestige. This definition ensures that a prestigious website is not only pointed at by many other websites, but that the pointing websites are prestigious themselves. Similar to the prestige of a website, the contributing authors' prestige is taken into consideration in those cases where the authors are named and identified (e.g., with their Twitter or Google Plus profile). In this case, an author's prestige is measured by the prestige of the authors who quote or refer to them and by the quantity of referrals those authors receive. Search engines use additional factors to scrutinize a website's prestige. Google's PageRank, for example, looks at factors like link diversification and link dynamics: when too many links come from the same domain or webmaster, there is a risk of Black Hat SEO; when backlinks grow rapidly, this nourishes the suspicion of spam or Black Hat SEO as the origin. In addition, Google looks at factors like the public availability of the WhoIs information of the domain owner, the use of global top-level domains, domain age, and volatility of ownership to assess a domain's apparent prestige. Lastly, search engines look at the traffic and the volume of organic searches for a site, as the amount of traffic should be congruent with the level of prestige that a website has in a certain domain.
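The recursive definition of prestige can be made concrete with a short Python sketch in the style of a simplified PageRank power iteration: a node's score depends on the scores of the nodes linking to it. The damping factor is the conventional value from the PageRank literature, and the link graph is a made-up example; this is an illustration, not Google's actual implementation.

    # Simplified PageRank-style power iteration over a toy link graph.
    # A site's prestige depends on the prestige of the sites linking to it.

    DAMPING = 0.85  # conventional damping factor from the PageRank literature

    def prestige(links: dict[str, list[str]], iterations: int = 50) -> dict[str, float]:
        """links maps each site to the list of sites it points to."""
        sites = set(links) | {t for targets in links.values() for t in targets}
        n = len(sites)
        score = {s: 1.0 / n for s in sites}
        for _ in range(iterations):
            new = {s: (1 - DAMPING) / n for s in sites}
            for site in sites:
                targets = links.get(site, [])
                if targets:
                    # pass a share of this site's prestige to each target
                    share = DAMPING * score[site] / len(targets)
                    for t in targets:
                        new[t] += share
                else:
                    # dangling site: spread its score evenly over all sites
                    for s in sites:
                        new[s] += DAMPING * score[site] / n
            score = new
        return score

    graph = {
        "a.example": ["b.example", "c.example"],
        "b.example": ["c.example"],
        "c.example": ["a.example"],
    }
    print(prestige(graph))  # c.example ends up most prestigious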
Information quality
Information quality describes the value which information provides to the reader. Wang and Strong categorize the assessable dimensions of information into intrinsic, contextual, representational and accessibility-related. Humans can judge quality based on their experience in assessing content, style and grammatical correctness. Information systems like search engines need indirect means to draw conclusions about the quality of information. In 2015, Google's ranking algorithm took approximately 200 ranking factors, combined in a learning algorithm, into account to assess information quality.
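As a hedged sketch of how many individual factors can feed a learned quality estimate, the following Python snippet combines feature values with learned weights through a logistic function. Google's real factors, weights and model architecture are not public; the two features here are invented stand-ins that only show the mechanics of a learned combination.

    # Sketch of a learned combination of ranking factors into a quality score.
    # The features and weights are invented; Google's actual model is not public.

    import math

    def learned_quality(features: list[float], weights: list[float], bias: float) -> float:
        """Logistic combination of feature values into a 0..1 quality estimate."""
        z = bias + sum(w * x for w, x in zip(weights, features))
        return 1.0 / (1.0 + math.exp(-z))

    # e.g., [spelling correctness, citation density], both normalized to 0..1
    print(learned_quality([0.9, 0.4], [2.0, 1.5], bias=-1.2))  # ~0.77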
Centrality of a website
Prominent actors have extensive and active relationships with other actors. This makes them more visible and their content more relevant, interlinked and useful. Centrality, from a graph-theoretical perspective, describes undirected relationships, making no distinction between receiving and sending information. From this point of view, it includes the inbound links considered in the definition of “prestige”, complemented by outgoing links. Another difference between prestige and centrality is that the measure of prestige applies to a complete website or an author, whereas centrality can be considered on a more granular level, such as an individual blog post. Search engines look at various factors to judge the quality of outgoing links, i.e., at link centrality, describing the quality, quantity and relevance of outgoing links and the prestige of their destinations. They also look at the frequency of new content publication to be sure that the website is still an active player in the community.
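The notion of centrality as undirected connectedness can be illustrated with a simple degree count in Python, applied at the granular level of individual pages rather than whole sites. The page graph is an invented example.

    # Degree centrality as described above: inbound plus outbound links are
    # counted together, with no distinction of direction. The edges are a
    # made-up example; centrality is computed per page, not per site.

    def degree_centrality(edges: list[tuple[str, str]]) -> dict[str, int]:
        """Count every edge touching a node, regardless of direction."""
        degree: dict[str, int] = {}
        for source, target in edges:
            degree[source] = degree.get(source, 0) + 1
            degree[target] = degree.get(target, 0) + 1
        return degree

    edges = [
        ("blog/post-1", "docs/intro"),
        ("docs/intro", "blog/post-1"),
        ("blog/post-2", "docs/intro"),
    ]
    print(degree_centrality(edges))  # docs/intro touches the most edges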
Competitive situation around a subject
The domain authority that a website attains is not the only factor that determines its position in the SERPs of search engines. The second important factor is the competitiveness of the specific sector. Subjects like SEO are very competitive. A website needs to outperform the prestige of competing websites to attain domain authority. This prestige, relative to other websites, can be defined as “relative domain authority.”
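A minimal Python sketch of this idea of relative domain authority, assuming invented prestige scores: the same absolute prestige yields a very different relative standing depending on how competitive the niche is.

    # Sketch of "relative domain authority": a site's prestige matters only
    # relative to the sites competing on the same subject. Scores are invented.

    def relative_authority(site: str, prestige: dict[str, float]) -> float:
        """Prestige of one site divided by the best competitor's prestige."""
        best = max(prestige.values())
        return prestige[site] / best if best else 0.0

    # in a competitive niche, the same absolute prestige ranks lower
    seo_niche = {"us.example": 40.0, "rival1.example": 85.0, "rival2.example": 70.0}
    quiet_niche = {"us.example": 40.0, "rival3.example": 25.0}
    print(relative_authority("us.example", seo_niche))    # ~0.47: outranked
    print(relative_authority("us.example", quiet_niche))  # 1.0: top of the niche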