Dealing with system integration issues
Generally speaking, system integrators ensure that components, systems, subsystems, processes, and workflows (to name a few) function together reliably, efficiently, and economically. They can serve process industries, discrete industries, and everything in between. While striving to please all the stakeholders, integrators can find it challenging to make everyone happy at the same time. From time to time, issues arise that can make system integration seem like Mission Impossible. To ensure your project finds its way to mission accomplished, consider these challenges and how to handle them.
A good beginning makes a good ending
One of the early issues system integrators face is understanding the business issues that drive a project. Successful integration projects not only result in solidly performing automation systems, but also provide solutions to business concerns. Too often, the conversations are solely about technical issues with technical people. This narrowed view can lead to projects that succeed when viewed through a technology lens, but fail miserably in meeting their prime objectives. Only after the why, what, when, and who of a project are thoroughly understood is it wise to determine how to apply the best technical solution.
There are also many nontechnical requirements that must be sorted out early because they can influence technical decisions. Most projects will likely require a variety of:
- Materials and system resources
- Unencumbered access to production systems
- A reasonable timeframe for design/build activities
- Adequate production downtime for installation
- Sufficient time and access to operations and maintenance staff for training.
Each of these items has one or more stakeholders, whose interests can often conflict with each other and your project requirements. Navigating these competing interests can challenge even the most seasoned system integration firm.
The best approach begins with effective communication and planning. When you have a firm understanding of the business drivers, communicate your requirements early with all of the stakeholders. For example, it may not be obvious to a facilities manager that you need plant utilities operational during system installation—even if plant production is otherwise down. You must identify to these stakeholders all of the resources you need to accomplish your project, while accommodating any issues they may have with meeting these needs.
A significant challenge for all parties is that system integration activities often cause some disruption to operations during installation. It is essential to develop a plan for this early. From a system integrator’s standpoint, performing all installation activities in one continuous downtime window usually requires the least calendar time and can be delivered at the lowest cost. This approach is often the most disruptive to operations—particularly production.
Getting stakeholders on the same page can require creative negotiations. Phased systems installation, around-the-clock installation, multishift training, parallel operations between new and old systems, advanced finished-product inventory accumulation, and even production outsourcing can come into play. With some up-front effort, it is possible to strike the right balance among lost production, installation savings, resources, access, and business concerns. After the plan is in place, keep the stakeholders engaged throughout the process—this is the best pathway to a successful ending.
After the project logistics are sorted out, a system integrator can begin to design the physical and functional aspects of the system. There are risks to manage throughout this process and throughout the lifecycle of the system. It often falls on the system integrator to help manage them, sometimes long after the project is completed. Security, controls, software, and testing are important considerations.
Security risks: Managing security risks is becoming increasingly challenging for system integrators. This is due in part to the availability and presence of smart devices on the plant floor, and the desire to take advantage of and integrate the information they provide across systems and platforms. This is a large and growing issue, with many layers and nuances. PLCs, HMIs, servers, workstations, networks, and applications all have security vulnerabilities. Balancing the advantages of access and integration with the concerns of security is becoming increasingly complex. Understanding the risks, defining the security zones, identifying the data that move between the zones, and identifying the physical and logical entry points for each zone are essential aspects of developing a robust security strategy.
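The zone-and-conduit approach described above can be sketched as a simple data model. This is a minimal illustration only (zone and asset names are hypothetical), not a full security implementation:

```python
# Minimal sketch of a zone-and-conduit model: zones group assets by
# trust level, and data may only cross between zones through an
# explicitly defined conduit. All names here are hypothetical.

ZONES = {
    "plant_floor": {"PLC-101", "HMI-1"},
    "supervisory": {"SCADA-server", "historian"},
    "enterprise": {"ERP-app"},
}

# Each conduit is a (source_zone, destination_zone) pair traffic may use.
CONDUITS = {
    ("plant_floor", "supervisory"),
    ("supervisory", "enterprise"),
}

def zone_of(asset: str) -> str:
    """Find which zone an asset belongs to."""
    for zone, assets in ZONES.items():
        if asset in assets:
            return zone
    raise KeyError(f"unknown asset: {asset}")

def flow_allowed(src_asset: str, dst_asset: str) -> bool:
    """A flow is allowed within a zone, or across a defined conduit."""
    src, dst = zone_of(src_asset), zone_of(dst_asset)
    return src == dst or (src, dst) in CONDUITS

print(flow_allowed("PLC-101", "SCADA-server"))  # allowed via conduit
print(flow_allowed("PLC-101", "ERP-app"))       # no direct conduit defined
```

Even a simple model like this forces the questions the text raises: which zones exist, which data cross between them, and through which entry points.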
Control equipment operation: The safe and effective operation of the control equipment over the lifecycle of the system is another risk to manage. Control equipment must be vetted to ensure conformance with intended use, input/output signal types, power requirements, corporate standards, and more. Where this equipment is physically installed must also be considered. There is a big difference between equipment designed to be dust resistant and equipment designed to resist 80 lb of hose-directed water with airborne corrosives. During shipping, excessive vibration can damage delicate electronic components, and the effects might not be immediately apparent. Shipping via air-ride van or other vibration-reducing methods will help mitigate the risk of in-transit damage.
Software: Systems usually include various types of software. One type is referred to as commercial off-the-shelf (COTS) software, which includes operating systems, security software, visualization platforms, database platforms, and more. There is a unique risk associated with how various COTS software products interact with each other over time. For example, a routine service pack upgrade by your IT staff on your workstation COTS operating system may render your COTS user interface software inoperable. System integrators frequently deal with issues like this. They can be a great resource to help manage these activities over the life of the system.
Customized software: One of the most challenging parts of any system integration project is customized software. It can be found running plant-floor controllers, graphical user interfaces, customized scripting, database management, and much more. Ensuring this software meets the needs of all stakeholders is one of the largest risks to manage. A solid functional specification, thoroughly reviewed by all stakeholders, is the best approach. This details all automated operations including:
- Interlocks, sequencing, and device activations
- How people interact with the system
- HMI screen definitions, security, overrides, alarms, configuration, and navigation
- What data are collected, how they are queried, and how they are presented
- Any information exchanged with other systems.
A well-written functional specification will serve as a basis to measure software development as well as to ensure the system is operating properly over time.
Testing, commissioning: Ensuring equipment, software, and systems are installed and tested properly presents its own challenges. A site installation plan—based on the functional specification and electrical design documents—is needed to keep track of which devices and functions were tested. This is very important on large-scale integration projects involving hundreds or thousands of devices and operations. It is not uncommon on large projects that certain smaller areas are initially untested during installation. This can be due to lack of equipment, utilities, raw material, or other items necessary to test the operation. It is essential to keep track of these areas and to involve the system integrator prior to putting such areas into production.
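The tracking problem described above can be sketched as a simple checklist structure. The device tags and area names below are hypothetical; this is an illustration of the bookkeeping, not a commissioning tool:

```python
# Illustrative sketch of a site-installation test tracker: each device
# (tag names hypothetical) maps to whether its checkout was completed,
# so untested areas can be flagged before an area goes into production.

checklist = {
    "mixer_area": {"MX-101": True, "MX-102": True},
    "filler_area": {"FL-201": True, "FL-202": False},  # raw material unavailable
}

def untested(checklist: dict) -> dict:
    """Return, per area, the devices whose checkout is still incomplete."""
    return {
        area: sorted(tag for tag, done in devices.items() if not done)
        for area, devices in checklist.items()
        if not all(devices.values())
    }

print(untested(checklist))  # {'filler_area': ['FL-202']}
```

On a project with thousands of devices, some structured record like this, derived from the functional specification and electrical design documents, is what makes it possible to involve the integrator before an untested area goes live.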
The central design challenge
A core design challenge is how much to centralize or decentralize a system, for both control and I/O systems. It’s a more complicated issue than it appears on the surface, and one that is often overlooked. The general rule is that equipment groups that operate reasonably independently are good candidates for distributed control. Highly intertwined equipment groups work best in centralized schemes. The degree of I/O system centralization is based on the physical layout of the equipment. These are the rules; now consider when it’s appropriate to break them.
If installation is done in phases, things get a bit more complicated. For I/O systems, the physical location of the process equipment is not the sole determining factor in this regard. You will also need to consider how the equipment is phased into operation. If installed in multiple phases, it may make sense to distribute the I/O per phase. It really comes down to how much the equipment in the new phase can affect the operation of equipment in a previous phase. Distributing the I/O will be the least disruptive to existing equipment. From a control standpoint, phased installation of highly intertwined areas does not generally create a need for distributed control. Phased installation of independent process areas would strengthen the case for distributed control, but rarely create one for a centralized scheme.
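The rules of thumb above can be condensed into a rough decision heuristic. This is a deliberate simplification for illustration, not a substitute for engineering judgment:

```python
# Rough heuristic capturing the control- and I/O-architecture rules of
# thumb described in the text. A simplification for illustration only.

def control_scheme(independent_equipment: bool, phased_install: bool) -> str:
    """Suggest a control architecture for an equipment group."""
    if independent_equipment:
        # Independent groups suit distributed control; phased installation
        # only strengthens that case.
        return "distributed"
    # Highly intertwined groups generally work best centralized,
    # even when installed in phases.
    return "centralized"

def io_scheme(equipment_colocated: bool, phased_install: bool) -> str:
    """Suggest an I/O architecture based on layout and phasing."""
    if phased_install:
        # Distributing I/O per phase is least disruptive to equipment
        # already running from an earlier phase.
        return "distributed per phase"
    return "centralized" if equipment_colocated else "distributed"

print(control_scheme(independent_equipment=False, phased_install=True))
print(io_scheme(equipment_colocated=True, phased_install=True))
```

The real decision also weighs cost, testing risk, and the installation window, as discussed below; the heuristic only captures the starting point.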
It gets more interesting when you install an inherently centralized process in phases and expect it to run in some temporary manner with manual equipment. After the final automated equipment is eventually installed, it could be incorporated into a centralized control system, or be distributed with its own controls. The centralized approach will have fewer failure points and a simpler structure when it is up and running, but a distributed design could potentially be installed much quicker.
With the centralized approach, you must thoroughly retest each unit operation to ensure safe operation of both the existing and new equipment. In a distributed design, much of this testing could be done in advance without risk to production. This approach can significantly reduce overall installation time, and could be considered if your installation window is tight.
Money makes the world go round, and system integration is no exception. For each approach, there are hardware, installation, and startup costs to contend with. Centralized I/O systems tend to have lower I/O component costs due to economy-of-scale factors. Distributed schemes tend to have more hardware components. Installation costs for traditional hardwired I/O platforms require a separate wiring run between each field device and its associated I/O module. Distributed network platforms can allow you to wire from device to device without the need for individualized runs back to the I/O system. You will tend to pay more for I/O hardware and cabling media, but your overall costs could be significantly lower.
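The trade-off above can be made concrete with some back-of-the-envelope arithmetic. All numbers below are hypothetical, chosen only to illustrate how higher per-foot media costs can still produce lower overall costs when home runs are eliminated:

```python
# Illustrative cost comparison (all numbers hypothetical) of hardwired
# home-run I/O wiring versus a device-to-device distributed network.

n_devices = 200
avg_homerun_ft = 150        # average run per device back to the I/O rack
avg_daisy_ft = 25           # average device-to-device hop on a network trunk
cost_per_ft_hardwired = 1.50
cost_per_ft_network = 3.00  # network media tends to cost more per foot
io_hw_central = 40_000      # centralized I/O hardware (economy of scale)
io_hw_distributed = 55_000  # more, smaller distributed I/O components

hardwired = io_hw_central + n_devices * avg_homerun_ft * cost_per_ft_hardwired
networked = io_hw_distributed + n_devices * avg_daisy_ft * cost_per_ft_network

print(f"hardwired: ${hardwired:,.0f}")   # $85,000
print(f"networked: ${networked:,.0f}")   # $70,000
```

With these (invented) figures, the distributed scheme pays roughly double per foot of media and carries a higher I/O hardware bill, yet comes out ahead because it eliminates 150-foot home runs for 200 devices. Actual projects require real quotes and labor rates.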
Centralized control systems tend to have lower control hardware costs compared to distributed systems due to fewer controllers in centralized systems. System startup costs for both control schemes are more complicated to determine. Installation phasing and operational expectations can significantly impact these costs. It is important to determine up front to what extent this might be the case, as it can be a large influence on the overall costs.
As mentioned earlier, phasing the system installation is one tool to minimize production downtime as well as a consideration in a centralized or distributed design. However, there are some other technical issues to address under such circumstances. Keeping portions of an existing I/O system intact while phasing in newer portions has its challenges. This will likely require new wiring to the new system and parallel operations of old and new systems. For devices electrically daisy-chained using a common power supply, if at all possible, try to keep the entire loop intact on one system or the other. This will greatly simplify your life.
Where individual instrumentation input signals must be shared between both systems, there are generally a couple of options available. Having signals networked between the old and new systems will be the simplest solution from a wiring standpoint, and there is no loss of signal fidelity with this method. The trade-off is a signal time-lag between systems compared to direct I/O module connections. There is also a potential hardware cost hit because this approach may require specialty communication modules to accomplish. For certain critical or high-speed parameters, this method may not suffice. In that case, you will need to pick one system to control and measure, and have the other request and/or monitor these results.
For output devices that must be controlled from both systems, there are a couple of options you can consider depending on the device type. Networking between the old and new systems is one common option. One system would physically control the device; the other would request activation or deactivation of the device. This is the most secure method of operation and the easiest to troubleshoot should problems occur during this transition process. This works well for both digital and analog devices.
If networking is not an option, interposing relays with dual contacts can be used to allow both systems to simultaneously control a single digital output device, but care must be taken with this approach. If either system requests the device, it will unconditionally activate unless otherwise interlocked. If wired and designed properly, when the cutover is complete, the old system can be disconnected from the relay, the relay coil can be removed (a potential point of failure), and the relay base simply exists as a terminal block between the I/O module and field device in the new system.
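The hazard in the dual-contact relay arrangement is that the parallel contacts behave as a logical OR. A simple truth-model (an illustration of the logic, not a wiring design) makes the failure mode explicit:

```python
# The dual-contact interposing-relay arrangement behaves as a logical OR:
# the output device activates if EITHER system requests it, unless an
# interlock blocks it.

def device_active(old_system_request: bool,
                  new_system_request: bool,
                  interlock_ok: bool = True) -> bool:
    """Relay contacts from both systems are effectively wired in parallel."""
    return (old_system_request or new_system_request) and interlock_ok

# During cutover, a forgotten request left on in the old system still
# drives the device, even if the new system never asked for it:
print(device_active(old_system_request=True, new_system_request=False))  # True
print(device_active(False, False))                                       # False
```

This is why the text stresses care with the approach: until the old system is disconnected from the relay, both programs must be kept honest about when they request the device.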
Analog output devices are more problematic and not as easily amenable to this approach due to potential loss of signal fidelity. If you must control an analog device from two control systems that cannot be networked, place the control (and associated measurement, if necessary) in one system, and hardwire the setpoint or output signal (and any associated permissives) between them. Old-school techniques would typically do this with binary-coded-decimal modules on both ends, although other approaches are also possible.
For pneumatically actuated digital devices, pneumatic switching may be used, although this is a less common option. Parallel operation is a common and viable installation option, but not without its challenges.
Making the right connections
As plant floor devices continue to get smarter, and the costs of bandwidth and storage continue to plummet, information exchange on the plant floor and across the enterprise is becoming an essential part of most system integration projects. Industrial networks on the plant floor have special requirements not normally found in traditional corporate IT environments. The design and installation of robust industrial networks that operate reliably over the lifecycle of your system present unique challenges to system integration firms.
Unlike an office or home environment, industrial settings are often electrically noisy. In addition to high electromagnetic interference (EMI), these environments are subject to wide temperature swings, dust, humidity, and a host of other factors not normally found in a home or office.
The first step in designing a robust industrial network is choosing cable media. So, what’s the right choice? ANSI/TIA-1005: Telecommunications Infrastructure Standard for Industrial Premises states that Category 6 or better cabling should be used for hosts or devices that are exposed to an industrial environment. Category 6 cable supports 1 Gb/s at 100 meters, and Category 6A cable supports 10 Gb/s at that distance.
When installing network cable, shielded Ethernet cable may perform better in high EMI environments if run outside of conduit. The key to using shielded cable is in proper grounding. Using one single ground reference is essential. Multiple ground connections can cause what is referred to as ground loops, where the difference in voltage potential at the ground connections can induce noise into the cable. Use a grounded RJ45 connector only on one end of the cable. On the other end, use a nonconductive RJ45 connector to eliminate the possibility of ground loops.
If your Ethernet cable must cross power lines, always do so at right angles. Separate parallel Ethernet and power cables by at least 8 to 12 in., with more distance for higher voltages and longer parallel runs. If the Ethernet cable is in a metal pathway or conduit, each section of the pathway or conduit must be bonded to the adjacent section to ensure electrical continuity along the entire path.
In general, route Ethernet cables away from equipment that generates EMI, such as motors, motor control equipment, lighting, and power conductors. Within panels, separate Ethernet cables from conductors by at least 2 in. When routing away from EMI sources within a panel, follow the recommended bend radius for the cable, which can be obtained from the cable manufacturer.
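The separation rules above can be expressed as a simple checker. The 8 in. and 2 in. figures come from the guidelines in the text and should be treated as floors, not a complete specification (higher voltages and longer parallel runs demand more distance):

```python
# Sketch of the Ethernet-to-power separation guidelines from the text:
# at least 8 in. for parallel runs, at least 2 in. within panels.
# These minimums are floors, not a complete engineering specification.

def separation_ok(separation_in: float, inside_panel: bool) -> bool:
    """Check Ethernet-to-power separation against the stated minimums."""
    minimum = 2.0 if inside_panel else 8.0
    return separation_in >= minimum

print(separation_ok(10.0, inside_panel=False))  # True
print(separation_ok(1.0, inside_panel=True))    # False
```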
From a network architecture perspective, never use hubs in this type of environment. If your network contains multicast traffic, a managed switch is an absolute must. Multicast traffic typically comes from smart devices on plant floor process networks in a connection-oriented producer/consumer-based technology. EtherNet/IP (Ethernet industrial protocol), managed by ODVA, is an application layer communication protocol that uses this technology. In addition to handling multicast traffic, this class of switch usually provides error logs, control of individual port speeds, duplex settings, and the ability to mirror ports. This functionality will help optimize and maintain the industrial network over time.
If your network is very small and has no multicast traffic, you could consider unmanaged switches. They are generally less expensive than managed switches because they lack much of the core functionality and features of their managed switch counterparts. As a general practice for networks of more than a few nodes, if cost is not a primary concern, go with a managed switch. You will be happy you did.
Trending toward the future
System integrators face many challenges when dealing with clients, including project management best practices, stakeholder engagement, and ensuring stakeholder expectations are aligned with business goals. Risk abatement is also a big part of the system integrator’s world, and an ever-challenging one, particularly on technology fronts. System integrators must follow core design principles and face the inherent challenges when it comes time to put these principles into practice. Finally, with the explosion of information on the plant floor, system integrators must face the challenges of putting an industrial network infrastructure in place to best leverage what these technologies offer.
While you are likely to encounter many of these issues on a system integration project, they are just a sample of the entire universe of issues we deal with on a regular basis. Technology will continue to evolve, as will the particulars of the technical issues encountered by the system integration community. What remains constant is the approach to dealing with these issues. These best practices should serve you well now and into the future.
David McCarthy is president and CEO of TriCore Inc., a national system integration firm based in Racine, Wis., with offices in Santa Fe Springs, Calif., and Mesa, Ariz. Before he founded TriCore in 1991, McCarthy served in various capacities at Alfa Laval/Tetra Pak, including manager of engineering for its U.S.-based food engineering company. McCarthy, who has more than 30 years of experience in automation, is a computer scientist from Rochester Institute of Technology.
TriCore is a CSIA member as of 4/17/2015