Best Practices — Populating Big Data Repositories from DB2, IMS and VSAM
Project and Program: Enterprise Data Center, Integrating Innovative Technologies
Tags: Proceedings, 2017, SHARE San Jose 2017
As Big Data repositories become more strategic to the enterprise, the need to populate them from legacy mainframe databases is rising to the top of the priority list. Moving business-critical mainframe data into Hadoop, NoSQL or other Big Data databases presents a number of challenges compared with populating a relational data warehouse. Determining the degree of normalization for segment hierarchies and repeating groups, processing incremental changes, and handling general mainframe data issues are some of the key areas that must be addressed. If these are not handled correctly, the results will not match expectations.
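To make the "general mainframe data issues" concrete, here is a minimal Python sketch (not from the session) of decoding a fixed-width mainframe record containing two of the most common cases: EBCDIC text and a COMP-3 packed-decimal amount. The record layout, field names, and `unpack_comp3` helper are invented for illustration.

```python
def unpack_comp3(raw: bytes, scale: int = 0) -> float:
    """Decode an IBM packed-decimal (COMP-3) field.

    Each byte holds two decimal digits; the low nibble of the
    final byte is the sign (0xD = negative, 0xC/0xF = positive).
    """
    digits = []
    for byte in raw[:-1]:
        digits.append((byte >> 4) & 0x0F)
        digits.append(byte & 0x0F)
    last = raw[-1]
    digits.append((last >> 4) & 0x0F)
    sign = -1 if (last & 0x0F) == 0x0D else 1
    value = 0
    for d in digits:
        value = value * 10 + d
    return sign * value / (10 ** scale)

# A 12-byte sample record: 8 bytes of EBCDIC text + 4 bytes COMP-3 amount.
record = bytes([0xC1, 0xC3, 0xD4, 0xC5, 0x40, 0x40, 0x40, 0x40,  # "ACME    " in EBCDIC
                0x00, 0x12, 0x34, 0x5C])                          # +1234.5 with scale 1
name = record[:8].decode("cp037").rstrip()   # code page 037 (US EBCDIC)
amount = unpack_comp3(record[8:], scale=1)
print(name, amount)  # ACME 1234.5
```

Doing this conversion once, at extraction time, avoids pushing EBCDIC and packed-decimal handling into every downstream Big Data job.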
Attendees will learn some of the best practices for populating Big Data repositories from mainframe databases, including general design considerations, common data issues, initial data loads and near-real-time updates.
Larry Strickland, DataKinetics
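One common pattern for the near-real-time updates mentioned above is applying a log-based change-data-capture feed to the target store. The sketch below is an assumed illustration only: the change-record shape, `apply_change` helper, and dictionary stand-in for a NoSQL table are invented, not the presenter's design.

```python
target = {}  # stand-in for a NoSQL table keyed by record id

def apply_change(change: dict) -> None:
    """Apply one CDC record: op is 'I' (insert), 'U' (update), 'D' (delete)."""
    op, key = change["op"], change["key"]
    if op == "D":
        target.pop(key, None)
    else:
        # Treating insert and update both as upserts makes replays idempotent,
        # which matters when a feed is restarted after a failure.
        target[key] = change["data"]

feed = [
    {"op": "I", "key": "0001", "data": {"name": "ACME", "balance": 1234.5}},
    {"op": "U", "key": "0001", "data": {"name": "ACME", "balance": 999.0}},
    {"op": "D", "key": "0001", "data": None},
]
for change in feed:
    apply_change(change)
print(target)  # {} -- the insert/update pair was superseded by the delete
```

The idempotent upsert is the key design choice: replaying part of the feed after a restart converges on the same target state instead of producing duplicates.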