Richard Grondin, SAND Technology: Intelligent Information Management for the Real World
October 2008 by Richard Grondin, VP R&D and Product Deployment, SAND Technology
Intelligent Information Management (IIM) is not really a new concept. What is new is the scope and volume of enterprise data that needs to be managed, and the more stringent data governance regulations by which organizations must abide. Now more than ever, enterprise data assets need to be managed carefully to ensure accessibility, immutability, privacy, monitoring, auditability, and business value over the complete information lifecycle. This is precisely what IIM is about, and to be successful it must be implemented with a focus on the data itself rather than on the specific ways the data will be used.
This requires a paradigm shift at the enterprise level: a realignment of IT architectures from an application-centric to a data-centric approach. Business needs change quickly, and IT architectures should be able to satisfy them within reasonable timeframes and at acceptable cost, all while protecting enterprise data assets.
The Corporate Information Factory model, developed by Bill Inmon, is currently in use by a variety of organizations. Typically, the starting point for a data warehouse implementation is a business requirement for specific reports. The data architects then identify what information is available, how to get access to it, and the level of transformation required to produce those reports (this is the data preparation phase). While this approach has brought significant benefits to many enterprises, it also has some weaknesses, the most important being that it covers only the data associated with a specific need at a specific point in time. For this reason, such an approach can be termed application-centric.
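The data preparation phase described above can be illustrated with a minimal sketch: source rows are extracted, cleansed and transformed, then aggregated into report-ready figures. The field names (`region`, `amount`) and the rejection rule are hypothetical, chosen only to show the shape of the process, not any particular warehouse implementation.

```python
# Illustrative sketch of a data preparation phase: extract raw source
# rows, transform them (type casting, rejecting bad records), and
# aggregate into report-ready totals. Field names are hypothetical.

def prepare_report(raw_rows):
    """Transform raw source rows into sales totals per region."""
    totals = {}
    for row in raw_rows:
        # Transformation step: cast the amount field; skip malformed rows.
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue
        region = row.get("region", "UNKNOWN")
        totals[region] = totals.get(region, 0.0) + amount
    return totals

rows = [
    {"region": "EU", "amount": "100.50"},
    {"region": "NA", "amount": "200.00"},
    {"region": "EU", "amount": "49.50"},
    {"region": "EU", "amount": "n/a"},  # rejected during preparation
]
print(prepare_report(rows))  # {'EU': 150.0, 'NA': 200.0}
```

The point of the sketch is the weakness noted above: the logic is written for one report's requirements, so any data it filters out or never extracts is simply lost to later needs.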
Now, new legal regulations are putting increased pressure on organizations to retain, and maintain access to, a greater variety of data. Data not associated with any specific business requirement must now be kept around "just in case" it is needed, and at the same time data governance policies need to be introduced. The easiest way to respond to these new data requirements would be to store the information on tape, but then the problem becomes how to get access to it. For this reason, some organizations have opted to turn their data warehouses into de facto storage devices. A side effect of this approach is that DBA teams come under increasing pressure to maintain the Service Level Agreements (SLAs) already in place, since keeping ever more data in the warehouse while preserving good performance is a difficult proposition. This is one reason why so many RDBMS products are currently available on the market.
An Intelligent Information Management implementation can help organizations to overcome these new challenges without going through multiple “revolutions” in their current data architecture. IIM can help satisfy data governance requirements and at the same time improve “data agility” for efficient delivery of Business Intelligence value. This type of implementation requires a shared vision and best practices program supported by a flexible data-centric architecture, along with an iterative transition process to review the various data requirements in the organization.
Many organizations have already deployed multiple data warehouses, data marts and cubes to satisfy their business intelligence requirements. Much has been invested in such deployments, and to protect this investment, IIM has to be implemented as an evolution of the infrastructure currently in place rather than a revolution requiring everything to be rebuilt from the ground up. Typically, the first step organizations take when implementing IIM best practices is to introduce an Information Lifecycle Management (ILM) infrastructure.
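At its core, an ILM infrastructure applies a tiering policy: each record is placed on a storage tier according to its lifecycle stage, so that rarely used "just in case" data no longer competes with active data for warehouse resources. The sketch below shows one such policy keyed on last access date; the thresholds (30 and 365 days) and tier names are illustrative assumptions, not a SAND Technology recommendation.

```python
from datetime import date

# Hypothetical ILM tiering policy: assign each record to a storage tier
# based on how recently it was accessed. Thresholds are illustrative.

def assign_tier(last_access, today):
    """Return the storage tier for a record given its last access date."""
    age_days = (today - last_access).days
    if age_days <= 30:
        return "online"    # active warehouse storage, full SLA
    if age_days <= 365:
        return "nearline"  # cheaper storage, still directly queryable
    return "archive"       # long-term retention, kept "just in case"

today = date(2008, 10, 1)
print(assign_tier(date(2008, 9, 20), today))  # online
print(assign_tier(date(2008, 3, 1), today))   # nearline
print(assign_tier(date(2006, 1, 1), today))   # archive
```

In practice the policy would also encode governance rules, such as minimum retention periods before deletion, but even this simple classification relieves the warehouse of data that only needs to remain accessible, not fast.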