The way ahead

Editorial Type: Research | Date: 05-2018
Alan Pelz-Sharpe, founder of Deep Analysis, discusses how organisations can move from ECM 1.0 to 2.0

When any technology sector becomes 'mature' it gets into a rut. It's as if there is a collective question of "Why change things, aren't we all doing just fine?". Even so, over the past few years things have begun to change, and the fundamentals of how an optimal ECM system might work in the future are being reimagined. The change is driven by:

• Artificial Intelligence and Machine Learning
• An increased regulatory burden
• Easy access to cloud storage and distributed processing capabilities
• The need to manage, route, and control multiple sources of inputs
• Managing an overwhelming amount of unstructured data
• Increased need to process and act on data quickly
• The cumulative cost of storing large volumes of unmanaged and rarely accessed data
• The move to API and service-based architectures

Together, these factors create a tipping point at which new technical approaches to managing and drawing value from both active and dormant files can emerge. We thought it would be useful to provide some structure for this move from ECM 1.0 to 2.0, and so we came up with the diagram above.

ECM 2.0
ECM 2.0 systems will take a contrary position to the 1.0 systems that focused on the importance of a single repository. ECM 2.0 systems will take for granted that although some business documents and files may be closely managed in a closed repository, most will not be, and never will be. 2.0 also accepts that there will be multiple workflows and integration points in play within a single organisation.

Again, rather than one workflow system optimised for one repository, there will be multiple and sometimes competing automation and process systems. In our architecture outline, 2.0 also extensively utilises traditional content analytics, AI and ML (and in some cases Deep Learning) at three layers in the stack.

CORPUS ANALYSIS
Most established firms have millions (in some cases billions) of stored historic documents sitting in legacy systems and document repositories. Few have any real insight into what is in these documents or what value (or risk) these documents carry. Machine Learning and Deep Learning systems can be trained to analyse this corpus of data for legal discovery, the identification of risks, duplicates, and redundancy.
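As a minimal sketch of one corpus-analysis task mentioned above, the snippet below finds exact duplicates across a document store by hashing normalised content. The function names and the sample corpus are hypothetical; a production system would add near-duplicate detection (e.g. shingling or trained similarity models) rather than relying on exact matches.

```python
import hashlib
from collections import defaultdict

def fingerprint(text):
    """Hash normalised text so whitespace and case differences don't hide duplicates."""
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def find_duplicates(corpus):
    """Group document IDs that share identical normalised content."""
    groups = defaultdict(list)
    for doc_id, text in corpus.items():
        groups[fingerprint(text)].append(doc_id)
    return [ids for ids in groups.values() if len(ids) > 1]

# Hypothetical corpus: doc-002 is a whitespace variant of doc-001.
corpus = {
    "doc-001": "Contract renewal terms for 2018.",
    "doc-002": "Contract renewal  terms for 2018.",
    "doc-003": "Quarterly risk assessment summary.",
}
print(find_duplicates(corpus))  # → [['doc-001', 'doc-002']]
```

Even this crude pass can surface redundancy at scale, which is often the cheapest first win when auditing a legacy repository.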

ACTIVE ANALYSIS
Active files and documents in ECM 2.0 leverage machine learning to ensure that intelligent capture, document classification, summarisation, and insight are applied. It's here that we see most of the current activity in the market occurring, whether that is improving capture efficiency or simply applying accurate classifications to meet increasing regulatory oversight requirements.
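To make "intelligent capture" concrete, here is a deliberately simple field-extraction step over an inbound document. The field patterns and sample text are hypothetical, and real capture products use trained models rather than fixed regular expressions; the sketch only shows the shape of the task: unstructured text in, structured fields out.

```python
import re

# Hypothetical patterns for a toy invoice-capture step.
FIELDS = {
    "invoice_number": re.compile(r"Invoice\s*(?:No\.?|#)\s*(\S+)", re.IGNORECASE),
    "date": re.compile(r"Date:\s*(\d{4}-\d{2}-\d{2})"),
    "total": re.compile(r"Total:\s*\$?([\d,]+\.\d{2})"),
}

def capture(text):
    """Extract structured fields from unstructured document text."""
    extracted = {}
    for name, pattern in FIELDS.items():
        match = pattern.search(text)
        if match:
            extracted[name] = match.group(1)
    return extracted

sample = "Invoice #INV-2041\nDate: 2018-05-14\nTotal: $1,250.00"
print(capture(sample))
# → {'invoice_number': 'INV-2041', 'date': '2018-05-14', 'total': '1,250.00'}
```

Once fields are extracted, downstream classification and routing can act on structured data instead of raw text, which is where the regulatory and efficiency gains the article describes come from.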

ACTIVITY ANALYSIS
To date, external activity analysis has been primarily concerned with web content, but there is growing interest in analysing who engaged with the content, how they engaged with it and when, and in extracting key business insights in the process.
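The who/how/when questions above reduce to aggregating an engagement event log. The sketch below, with an entirely hypothetical log format, tallies views per document, actions per user, and activity by hour of day using only the standard library.

```python
from collections import Counter
from datetime import datetime

# Hypothetical engagement log: (user, document, action, timestamp).
events = [
    ("alice", "whitepaper.pdf", "view",     datetime(2018, 5, 1, 9, 15)),
    ("bob",   "whitepaper.pdf", "download", datetime(2018, 5, 1, 10, 2)),
    ("alice", "pricing.html",   "view",     datetime(2018, 5, 2, 14, 40)),
    ("carol", "whitepaper.pdf", "view",     datetime(2018, 5, 3, 8, 5)),
]

def engagement_summary(events):
    """Answer who / what / when: tallies per document, per user, and per hour."""
    per_doc = Counter(doc for _, doc, _, _ in events)
    per_user = Counter(user for user, _, _, _ in events)
    by_hour = Counter(ts.hour for _, _, _, ts in events)
    return per_doc, per_user, by_hour

per_doc, per_user, by_hour = engagement_summary(events)
print(per_doc.most_common(1))  # → [('whitepaper.pdf', 3)]
```

In practice these events would stream from web analytics or repository audit trails, but the insight-extraction step is the same kind of aggregation shown here.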

ECM 2.0 opens up a range of possibilities for organisations to leverage the rich yet currently unloved legacy silos of data they have accumulated, while also extracting more value from new content and automating many more activities down the line. The shift will be a major but worthwhile undertaking, and it will play out over many years. But do not be overwhelmed: in many cases the first step may simply be moving your files to cloud storage. As the Chinese saying goes, a journey of a thousand miles begins with a single step. Carpe diem!

If you would like a copy of our new report "Intelligent Information Management - from ECM 1.0 to 2.0" send us an email at info@deep-analysis.net.
More info: www.deep-analysis.net