ERP gone bad: a case study
The following is based on a true story.
A manufacturer of specialized metal parts—let’s call the company “Customer”—not long ago realized it needed an enterprise resource planning (ERP) system.
For years, it had instead used custom programs written in FoxPro by one of its executives. By 2005, those programs were no longer adequate. The business had grown, and with it the volume of orders. New production equipment offered new automated features, including the opportunity to create one tightly integrated manufacturing system.
Finally, the creator of the custom programs was preparing to retire, and would take his knowledge of those systems with him. As a result, Customer went shopping for an ERP system.
Customer’s search led it to “Vendor,” and it purchased Vendor’s product (we’ll call it simply “Software”) after an on-site demonstration. Frustrations quickly followed. Year One breaks out this way:
June: Software installed
July: Training at Customer’s location
October: Further training at Customer’s location. The Software exhibited many flaws, requiring hours of repairs by Vendor’s technical expert. Little actual training performed.
December: One of Vendor’s staff indicated that the Software had not been properly tested by Vendor before it was released for general use.
Problems continued throughout Year Two. Among them:
Difficulty installing SQL software;
Difficulty exporting from the Software to Customer’s accounting program;
Software was unable to support multiple users;
Specific modules, such as Orders and Invoices, did not function properly or reliably; and
Attempts to run key reports generated fatal errors.
As of the end of Year Three, Customer had paid more than $42,000, but the Software still did not work as promised.
What went wrong?
This case is particularly interesting because Customer was not unsophisticated in IT matters. As mentioned, one of its executives had built Customer’s existing systems “in-house.” Indeed, this executive spent a great deal of time with Vendor’s personnel, helping them solve problems in the Software itself, not merely building interfaces between the Software and Customer’s existing systems.
Given that level of skill, the question “What went wrong?” is even more compelling.
With the benefit of hindsight, it appears that one factor was haste: Customer placed its order after an on-site demo, but without substantive testing. Nor did Customer check Vendor’s references. In addition, the contract turned out to be very one-sided, in favor of Vendor:
There are no detailed performance specifications.
In the event of defect, Vendor’s sole obligation is to fix the error or replace the Software.
Vendor’s liability for any loss “shall be limited to the purchase price” for the component alleged to have caused the loss.
In practical terms, Customer is required to pay, and Vendor is required to deliver a product and make it perform according to Vendor’s base specifications. Extra features are extra costs. Also, there is no guarantee or warranty that Vendor’s base specifications will be adequate for Customer’s needs.
Significantly, the contract does not give Customer the right to call the project off and receive a refund. Such a provision could have spared Customer two years of frustration, as serious defects in the Software became apparent early on.
Vendor, of course, had its own explanation for the difficulties:
“Tinkering” by Customer’s IT expert;
Defects in Customer’s hardware and software—including, oddly enough, a server that Customer purchased on Vendor’s recommendation;
Lack of a strong commitment from Customer management; and
Customer’s inability, or failure, to assign sufficient personnel to the project.
What might have been
Early in Year Three, Vendor suggested it could successfully complete the project if hired to do so full-time. It noted that the process is complex and gave the example of another customer that had spent $1 million to achieve a successful installation. That statement contrasts oddly with Vendor’s early representations that the Software was an inexpensive, easily installed, “turnkey” product. Had Customer checked Vendor’s references, Customer might have been better prepared for what lay ahead—or might have selected a different vendor.
It appears Customer believed it was acquiring a proven, stable, “off-the-shelf” product that could be installed and implemented relatively easily. That did not prove to be the case, resulting in headaches for Customer, and a number of valuable lessons for students of contracts.
1. Do your homework.
Identify the problem you seek to solve.
Test, and test thoroughly, vendor’s proposed solution.
Recognize that, in all likelihood, you hope to buy a solution, while vendor hopes to sell technology. Therefore take the time to ensure that the contract clearly spells out how the technology is expected to perform.
Check vendor’s references.
2. Do not pay up front, or based on the passage of time.
Tie payments to the successful completion of clear, objectively measurable objectives.
3. Spell out performance expectations. These should be tied to standards that can be measured objectively.
4. Provide meaningful remedies. This includes the right to terminate the agreement if performance standards are not met.
5. Ensure clearly defined responsibilities. When in doubt, the vendor should be responsible; after all, it is the expert.
As of this writing, the dispute between Vendor and Customer is still unresolved. Customer believes it has made substantial payments and has not received what it was promised. Vendor argues Customer’s expectations were unreasonable and that Vendor’s best efforts were impeded by Customer’s missteps.
Had Customer been more careful in its search, it might have selected another vendor and avoided these difficulties—although another vendor might have presented other challenges.
Alternatively, a better contract would have given Customer far more leverage to drive Vendor to acceptable performance, and far more protection against substandard work.