I’ll give you two examples to make this clear. There are several ways to approach it. Imagine we have a blog for which we are going to split the sitemap.xml into different pieces in order to know the indexing status of each type of page. A basic fragmentation would have us create the following sitemaps:
With this we will be able to know what percentage of each type of page we have indexed. If the blog is old and has had little technical SEO work, we will most likely find that each type of content has a large number of URLs that are not indexed.
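To make the basic fragmentation concrete, here is a minimal sketch in Python. The example.com URLs and the /blog/, /category/ and /tag/ path patterns are assumptions for illustration only; it simply splits a flat URL list into one sitemap per page type.

```python
# Minimal sketch: split a flat list of blog URLs into one sitemap per page type.
# URL patterns (/blog/, /category/, /tag/) and the example.com URLs are assumptions;
# adapt them to the site's real URL structure.
from urllib.parse import urlparse
from xml.etree.ElementTree import Element, SubElement, ElementTree

urls = [
    "https://example.com/blog/my-first-post/",
    "https://example.com/category/seo/",
    "https://example.com/tag/sitemaps/",
    "https://example.com/about/",
]

def page_type(url):
    """Rough classifier based on the URL path (an assumption, not a standard)."""
    path = urlparse(url).path
    if path.startswith("/category/"):
        return "categories"
    if path.startswith("/tag/"):
        return "tags"
    if path.startswith("/blog/"):
        return "posts"
    return "pages"

groups = {}
for url in urls:
    groups.setdefault(page_type(url), []).append(url)

# Writes sitemap-posts.xml, sitemap-categories.xml, sitemap-tags.xml, sitemap-pages.xml
for name, group in groups.items():
    urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in group:
        SubElement(SubElement(urlset, "url"), "loc").text = url
    ElementTree(urlset).write(f"sitemap-{name}.xml", encoding="utf-8", xml_declaration=True)
```

Submitting each file separately in Search Console then lets us read indexing coverage per page type rather than for the site as a whole.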
And this is good to know, but if we only see this kind of data, we cannot act on it: we do not know what to do to improve the situation, and the information by itself gives us nothing to work with. This is usually the problem.
Now let’s think about applying SEO logic to our sitemap fragmentation. We consider each type of page and what its indexing problems might be. For example, we pick the posts, where we are seeing the highest percentage of non-indexed content, analyze them and conclude that it is likely an age problem: the older the post, the more likely it is to have been deindexed… So we fragment our sitemap to extract information along exactly this criterion and, instead of uploading a single post sitemap, we upload the following collection:
The indexing rate of each of these sitemaps is actionable data. I may find that posts start to lose indexation after 6 months and discover that I have deep-crawl problems. Or that posts from more than 5 years ago are still indexed while the recent ones are not, which points to a problem of authority and content quality.
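As a hedged illustration of this kind of age-based split (not necessarily the exact collection described above), and assuming we can pull each post’s URL and publication date from the CMS, the posts could be grouped into one sitemap per publication year:

```python
# Minimal sketch of the age-based split: one post sitemap per publication year,
# so indexing coverage can later be compared year by year.
# The (url, publish_date) pairs are placeholders; in practice they would come
# from the CMS or an export of the blog's posts.
from datetime import date
from xml.etree.ElementTree import Element, SubElement, ElementTree

posts = [
    ("https://example.com/blog/old-post/", date(2015, 3, 1)),
    ("https://example.com/blog/newer-post/", date(2021, 6, 12)),
    ("https://example.com/blog/recent-post/", date(2024, 1, 20)),
]

by_year = {}
for url, published in posts:
    by_year.setdefault(published.year, []).append(url)

# One file per year: sitemap-posts-2015.xml, sitemap-posts-2021.xml, ...
for year, year_urls in sorted(by_year.items()):
    urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in year_urls:
        SubElement(SubElement(urlset, "url"), "loc").text = url
    ElementTree(urlset).write(f"sitemap-posts-{year}.xml", encoding="utf-8", xml_declaration=True)
```

Comparing indexed versus submitted URLs across these yearly files is what surfaces patterns like the 6-month drop or the old-but-still-indexed posts described above.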