A data catalog keeps your cloud migration agile

jrineakter
Posts: 810
Joined: Thu Jan 02, 2025 7:15 am


Post by jrineakter »

Once your on-premises data is cataloged, you can identify which data delivers the highest business value and which data sees the most use. From there, you can build a prioritized backlog of resources to migrate, then iterate through that backlog in an agile manner.

You should prioritize your data using a two-by-two matrix. The axes are business value (low to high) and complexity (low to high), giving you four quadrants to sort your data into.
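The quadrant sorting above can be sketched in a few lines of Python. This is a minimal illustration, not a real scoring system: the dataset names, the (value, complexity) scores, and the 0.5 threshold are all assumed for the example.

```python
# Hypothetical sketch of the two-by-two prioritization matrix.
# Dataset names, scores, and the threshold are illustrative assumptions.

def quadrant(value: float, complexity: float, threshold: float = 0.5) -> str:
    """Place a dataset in the value/complexity two-by-two matrix."""
    v = "high-value" if value >= threshold else "low-value"
    c = "high-complexity" if complexity >= threshold else "low-complexity"
    return f"{v}/{c}"

# Migration order: quick wins first, then complex high-value work.
PRIORITY = [
    "high-value/low-complexity",
    "high-value/high-complexity",
    "low-value/low-complexity",
    "low-value/high-complexity",
]

datasets = {
    "sales_dashboard_feed": (0.9, 0.2),  # (value, complexity) scores
    "customer_orders": (0.8, 0.7),
    "legacy_audit_logs": (0.2, 0.8),
}

backlog = sorted(
    datasets,
    key=lambda name: PRIORITY.index(quadrant(*datasets[name])),
)
print(backlog)  # sales_dashboard_feed lands first in the backlog
```

The ordered `PRIORITY` list encodes the strategy described below: high-value, low-complexity data first, then the rest.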

Start by identifying high-value data. How? Focus on the most important business use cases: What are the most visible pain points? Which business users are complaining most about slow data delivery or critical broken dashboards?

Next, identify low-complexity data so you can focus on low-complexity, high-value data to start, moving on to more complex data after you’ve shown quick success and value to your team. By demonstrating momentum, your business leaders will feel more secure in your organization’s investment in a data catalog, and be more inclined to support future data governance initiatives.

Using the broken dashboard as an example, your enterprise data catalog’s automated lineage viewer—powered by a knowledge graph—lets you understand which data sources inform it; these are the data you should prioritize for unraveling, cleaning, and migrating. With any luck, the node in your knowledge graph that represents the dashboard is receiving data from a few easily viewed and understood data sources, connected to it by edges. If so, you can consider this “low complexity” data.

If, on the other hand, the node is receiving data from more sources than you want to count, and the edges flowing into it look like a plate of spaghetti, this is “high complexity,” and it should be prioritized after your high-value, low-complexity data.
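The low- versus high-complexity call above boils down to counting a dashboard node’s direct upstream sources in the lineage graph. Here is a stdlib-only sketch over an invented edge list; the node names and the cutoff of three sources are assumptions for illustration, not catalog output.

```python
# Hedged sketch: judging dashboard complexity from lineage edges.
# The edge list and the MAX_SIMPLE_SOURCES cutoff are illustrative.

lineage_edges = [
    ("orders_table", "sales_dashboard"),
    ("customers_table", "sales_dashboard"),
    ("inventory_view", "ops_dashboard"),
    ("shipping_table", "ops_dashboard"),
    ("returns_table", "ops_dashboard"),
    ("pricing_table", "ops_dashboard"),
    ("promo_table", "ops_dashboard"),
]

def upstream_sources(node, edges):
    """Direct upstream data sources feeding a node."""
    return {src for src, dst in edges if dst == node}

MAX_SIMPLE_SOURCES = 3  # assumed cutoff for "low complexity"

for dashboard in ("sales_dashboard", "ops_dashboard"):
    n = len(upstream_sources(dashboard, lineage_edges))
    label = "low complexity" if n <= MAX_SIMPLE_SOURCES else "high complexity"
    print(dashboard, n, label)
```

In this toy graph, `sales_dashboard` draws from two sources (low complexity) while `ops_dashboard` draws from five (the spaghetti case); a real catalog would also walk transitive lineage, not just direct edges.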

Your knowledge-graph-powered data catalog gives you insight into this lineage, not just visually but also through graph querying and analysis. For example, nodes with high centrality in the graph often represent bottlenecks in the lineage (imagine a view that joins many tables and is consumed by many other resources).
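The centrality idea can be shown with a small stdlib-only computation of degree centrality (each node’s share of edges, in plus out). The edge list below is an invented example, not a real catalog export: `sales_view` joins several tables and feeds several consumers, so it surfaces as the bottleneck.

```python
# Hedged sketch: flagging a lineage bottleneck via degree centrality.
# The edge list is an illustrative assumption, not real catalog data.

lineage_edges = [
    ("orders", "sales_view"), ("customers", "sales_view"),
    ("products", "sales_view"), ("regions", "sales_view"),
    ("sales_view", "exec_dashboard"), ("sales_view", "finance_report"),
    ("sales_view", "ml_feature_store"),
    ("clickstream", "web_report"),
]

def degree_centrality(edges):
    """Each node's edge count (in + out), normalized by the
    number of other nodes in the graph."""
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for src, dst in edges:
        deg[src] += 1
        deg[dst] += 1
    return {n: d / (len(nodes) - 1) for n, d in deg.items()}

centrality = degree_centrality(lineage_edges)
bottleneck = max(centrality, key=centrality.get)
print(bottleneck)  # sales_view: joins many tables, feeds many consumers
```

In practice a graph database or a library like NetworkX would compute this at catalog scale, along with richer measures such as betweenness centrality.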

Analyzing the metadata knowledge graph gives you an opportunity to reorganize and unravel complex flows and make them easier to maintain. This is your chance to sort it all out, eliminate the spaghetti mess, and build a clear, strong link from data sources to a business-critical resource.

After establishing a plan and starting to execute on the migration of high-value data, it’s time to decide what to do with the low-value data. Perhaps this data doesn’t need to be migrated at all, thus avoiding unnecessary expenses and effort.