www.Plesums.com

How to mess up your image and workflow project

As a consultant, I have seen a lot of things I would call errors - sometimes really big errors, sometimes little errors that mess things up badly. As time allows, I expect to expand this list of topics, and to expand each topic into a short article about why I consider it an error and what I recommend you do about it. In the meantime, here is a brief paragraph about each of the items.

If you have questions or disagree with my opinions, or want to suggest additional topics, I urge you to share your ideas with me at

Imaging without workflow - scanning documents only after they have been processed on paper. Electronic imaging has become the low-cost record-keeping technique (cheaper than paper or microfilm). Imaging is good - if you are not using it, you need to get with the program. But once imaging is in place (or as part of an imaging installation), there are also substantial benefits from the efficiency and management provided by automated work management. The benefits of imaging are typically savings in space, supplies, and clerical support. The benefit of workflow is often a 30% or larger improvement in the productivity of the professionals using the system.

Indexing by Professionals. Nobody ever expected an executive, manager, underwriter, or other senior professional to sort the mail every morning to be sure it was delivered to the right place, yet an amazing number of companies expect their professionals to index documents so that they are delivered to the right place. Can't we let the mail room get it right 95% of the time, as they have for the last 100 years, and then correct the few errors?

Too much resolution. An image scanned at 300 pixels per inch is much more attractive than one scanned at 200 pixels per inch. When a system is being planned, the committee may casually choose the "prettier" images, even though the lower resolution is perfectly usable. "Storage is cheap" is a popular mantra, but the higher-resolution images are 2.25 times as large (300 is 1.5 times 200, in each of two dimensions), with slower response time and a higher load on the network and processor as well as extra storage. When the cost-benefit ratio is calculated, some projects that added the extra cost of "pretty images" have been killed.
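
To put numbers on the storage difference, here is a rough back-of-envelope calculation; the US-letter page size is standard, but the 10:1 compression ratio is an illustrative assumption (real bitonal compression varies widely with page content):

```python
# Rough storage arithmetic for a bitonal (1 bit per pixel) letter-size page.
# The 10:1 compression ratio is an illustrative assumption, not a measured
# figure; actual ratios depend heavily on page content.
PAGE_W_IN, PAGE_H_IN = 8.5, 11.0

def raw_bytes(dpi: int) -> int:
    """Uncompressed size of a 1-bit-deep page scanned at the given resolution."""
    pixels = (PAGE_W_IN * dpi) * (PAGE_H_IN * dpi)
    return int(pixels / 8)  # 8 pixels per byte at 1 bit per pixel

for dpi in (200, 300):
    raw = raw_bytes(dpi)
    print(f"{dpi} dpi: {raw:,} bytes raw, ~{raw // 10:,} bytes at assumed 10:1 compression")

print(f"size ratio 300/200 dpi: {raw_bytes(300) / raw_bytes(200):.2f}x")
```

The 2.25x ratio holds regardless of the compression assumption, since both resolutions compress similar content at similar ratios.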

Gotta have color. If you are talking about a photograph of the wrecked car or the tree that fell on the house, I agree - color photos take only about 20% more space than "black and white" (really gray-scale) photos, and may help visualize the problem. On the other hand, some people argue that they cannot process ordinary office documents electronically until they have color. These are often the same people who make a (black and white) copy of the document as part of their procedures, or fax a document, or use a microfilm file copy. Our office traditions make binary documents (black ink on white paper, red ink on yellow paper, or any other pair of ink and paper colors) acceptable, and have allowed us to use "Xerox" copies for 30 years, and photocopies or carbon paper for many years before that. Color image scanners are getting faster, and storage is getting cheaper, but we aren't there yet!

High volume requires fast scanners. Fast scanners are sexy! How can I run an image operation if I don't have a fast scanner? If the documents are relatively homogeneous, I agree - a fast scanner is a worthwhile investment. But if there are many dissimilar documents - if each envelope contains a surprise - it is sometimes cheaper to use slower scanners, where each page is fed into the scanner as it is extracted and analyzed, with virtually no advance document preparation. Multiple slow scanners are sometimes cheaper than one high-speed scanner, even considering the labor involved, once document preparation is adjusted as well.
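
A crude cost sketch can make the trade-off concrete. All of the dollar figures and staffing counts below are hypothetical assumptions, chosen only to show the shape of the comparison, not vendor or labor-market data:

```python
# Hypothetical daily-cost comparison: one high-speed scanner that needs
# dedicated document-prep staff, versus a pool of slow scanners fed page
# by page during extraction (no separate prep step). Every number here
# is an assumption for illustration.

def daily_cost(scanner_cost_per_day: float, scanners: int,
               operators: int, prep_staff: int,
               labor_per_day: float = 160.0) -> float:
    """Equipment amortization plus labor for one day of scanning."""
    return scanners * scanner_cost_per_day + (operators + prep_staff) * labor_per_day

# Assumed: fast scanner amortizes at $400/day and needs 3 prep clerks.
fast = daily_cost(scanner_cost_per_day=400.0, scanners=1, operators=1, prep_staff=3)
# Assumed: four slow scanners at $40/day each, operators do their own prep.
slow = daily_cost(scanner_cost_per_day=40.0, scanners=4, operators=4, prep_staff=0)

print(f"fast-scanner line: ${fast:.2f}/day, slow-scanner pool: ${slow:.2f}/day")
```

Under these made-up numbers the slow-scanner pool wins; with homogeneous documents (little prep needed), the same formula tips the other way.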

Image is just another form of data. This statement is often made by someone building his or her own image system. Each image is put into a database (yes, the major databases will support "blobs" of binary data like images, but we won't discuss efficiency). Or each document becomes a separate file (yes, file servers handle lots of individual files, but complex directory structures are required for efficiency when the number of files gets large). Images are very large compared to most data. Images are rarely changed, compared to typical database data. Usage patterns are drastically different. Network load is different. Processing requirements are different. Back-up and archive requirements are different. There are some very neat development tool kits that make it easy to add images to a system - such as adding the ID photo to the personnel system. However, every project that I have seen that starts with the proclamation that "image is just another form of data," then tries to build a custom high-volume image system with these tools, has failed. A far more successful approach is to say "images are different," and then use program packages that support those differences.
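
One concrete example of "images are different": when each document becomes a separate file, a flat directory degrades as counts grow into the millions, so image storage typically shards the path. A minimal sketch of that idea follows; the hashing scheme, root path, and file naming are assumptions for illustration, not any particular product's design:

```python
# Sketch of the "complex directory structures" mentioned above: shard
# file paths by a hash of the document identifier so no single directory
# grows unboundedly. The layout here is a hypothetical example.
import hashlib
from pathlib import Path

def image_path(root: Path, doc_id: str) -> Path:
    """Derive a sharded storage path for one document image."""
    digest = hashlib.sha256(doc_id.encode()).hexdigest()
    # Two levels of 256-way fan-out keep each directory small:
    # one million files average ~15 per leaf directory.
    return root / digest[:2] / digest[2:4] / f"{doc_id}.tif"

print(image_path(Path("/images"), "claim-000123"))
```

The path is computed, never searched, so retrieval cost stays constant no matter how many images are stored - one of the properties a general-purpose database or flat file share does not give you for free.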


Analysis and modeling of the workflow. Some companies want to do a thorough analysis of the current workflow, and completely redesign how it will be performed with an electronic work manager. They buy products, take classes, and hire consultants to simulate how the new workflow will perform under seasonal variations and other assumptions about the workload. After a year or two of analysis, they either (a) give up the project because they can't handle the rare special cases they have found, or (b) finally implement the system - and a few weeks after implementation the users start suggesting improvements. Other companies make few if any changes to the workflow, and implement in weeks rather than months or years. Users still suggest improvements shortly after implementation, but (a) the system is operational a year earlier, and (b) the company didn't spend a fortune on simulations. I normally recommend a middle ground: look at the improvements and changes to the workflow that are made possible by a work management system, then get the system implemented as quickly as possible, and cope with the inevitable changes.

Who controls the workflow? In the paper processing environment, where we have 100 years' experience, managers would watch the flow of paper, and if a backlog occurred (whether due to changing workload, missing employees, or errors in planning), they would change the workflow (often just by telling the clerk distributing the work). The change was immediate - the supervisor would watch the impact of the first change and, as required, would change it again. With today's automated workflows, some companies feel that the systems analysts know more about how the work is processed than the manager does. Changes to the workflow must be proposed, analyzed, programmed, and tested before they can be implemented. Do we wonder why the user-managers, who have a problem NOW, hate these workflow systems? Be sure the manager retains control.

We want an inbox. "Our people have selected their work from their inbox for years, and they are very efficient." One of the disadvantages of paper was that it had to be somewhere - but only one place at a time - typically in someone's inbox. If someone needed a particular document (the customer was on the phone), it was difficult to find. If someone was ready to do some work, they could spend a significant portion of their time rummaging through the inbox to select the most interesting (but probably not optimum) item to work on next. Some work management systems simulate an inbox for people who cannot consider a new paradigm. But I have seen FAR more efficiency with a shared queue of work, where the system automatically assigns the optimum "next item of work" on request. This is often called the "push" paradigm, where the system pushes work to the user. If the customer calls, the user can search for the work in the queue, and if authorized can "pull" the work to themselves for processing.
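
The push paradigm with a pull override can be sketched in a few lines. The priority scheme, item identifiers, and user names below are assumptions for illustration only:

```python
# Minimal sketch of a shared work queue: the system "pushes" the best
# next item to whoever asks, and an authorized user can "pull" a
# specific item (the customer-on-the-phone case). Priorities and IDs
# are hypothetical.
import heapq

class WorkQueue:
    def __init__(self):
        self._heap = []       # (priority, arrival_sequence, item_id)
        self._seq = 0
        self._claimed = {}    # item_id -> user now working on it

    def add(self, item_id: str, priority: int = 5):
        """Lower priority number means more urgent; ties go to oldest work."""
        heapq.heappush(self._heap, (priority, self._seq, item_id))
        self._seq += 1

    def push_next(self, user: str) -> str:
        """System chooses the optimum next item for the requesting user."""
        _, _, item_id = heapq.heappop(self._heap)
        self._claimed[item_id] = user
        return item_id

    def pull(self, item_id: str, user: str) -> bool:
        """Authorized user pulls a specific item out of the shared queue."""
        for i, (_, _, iid) in enumerate(self._heap):
            if iid == item_id:
                del self._heap[i]
                heapq.heapify(self._heap)
                self._claimed[item_id] = user
                return True
        return False

q = WorkQueue()
q.add("claim-17", priority=2)
q.add("claim-18", priority=1)
q.add("claim-19", priority=3)
print(q.push_next("alice"))   # system pushes the most urgent item
q.pull("claim-19", "bob")     # bob pulls a specific claim on request
```

The point of the sketch is that "next item" policy lives in one place (`push_next`), where a manager-controlled rule can replace it, rather than in each worker's rummaging.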


Indexing has many definitions. My definition is that enough data must be entered about any document to

If the document will always be handled within minutes or a few hours (as required for some financial transactions), we may be willing to skip the identification until after the work is processed, and only need to route the work to the proper processing area.

Index by Customer Name. Many organizations want to be able to look up a document by customer name, so the name (or last name) becomes one of the primary index fields. This requires the name to be entered for each document. With the chance of misspelling a difficult name, or finding too many entries for a common name, this is actually inadequate as a search tool; with name changes (marriage, divorce, etc.) the maintenance is prohibitive. A far better approach is to put a sophisticated name search in the initial customer identification, with a cross-reference to particular products, and then use an account or contract identifier as the primary index for each document.
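
The recommended structure amounts to two small lookups: a customer file that maps (changeable) names to accounts, and a document index keyed by the stable account identifier. A sketch, with all names, identifiers, and the simple substring search standing in as assumptions for a real name-search facility:

```python
# Name search happens once, against the customer file; documents are
# indexed only by the stable account identifier. All data here is
# invented for illustration.
customers = {
    "C-1001": {"name": "Maria Gonzales", "accounts": ["ACC-778", "ACC-901"]},
}
documents = {
    "ACC-778": ["doc-1.tif", "doc-2.tif"],
    "ACC-901": ["doc-3.tif"],
}

def find_documents_by_name(fragment: str) -> list:
    """Resolve a name to accounts, then accounts to documents."""
    hits = []
    for cust in customers.values():
        if fragment.lower() in cust["name"].lower():
            for acct in cust["accounts"]:
                hits.extend(documents.get(acct, []))
    return hits

print(find_documents_by_name("gonz"))   # documents across both accounts
```

When the customer's name changes, one customer record is updated and every document stays findable; with name-indexed documents, every document would need re-indexing.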

Numerous index fields. If one value to find a document is good, then two are probably twice as good. And while we are at it, why not 5 or 10 index fields? The cost of indexing skyrockets, as does the chance for error. Far better to have a few carefully maintained values, such as contract number, social security number (or other personal identifier), type of document (since we need that for the workflow anyway), and date received (cheap to capture), and to use other systems to cross-reference to these values.
