Suit Yourself: Tailoring a Best-Fit Network

By Jim Montague, Control Engineering, August 1, 2001

Most control engineers don’t give a rat’s rear-end about historical fieldbus conflicts and shifting market shares of networking equipment. They care about immediate implementation, monitoring, maintaining, and updating their individual applications because their jobs depend on objectively measured performance of those applications.

Although the desire for efficiency and savings is the same, engineers travel many paths to achieve optimization because each facility, plant, application and process is different. This means users must be intimately aware of their application’s functions and requirements, as well as many useful networking methods, equipment and software. Then they need to compare, select and adapt the best set of technologies to fulfill their application’s specific needs.

Specific site characteristics, such as the required frequency of system checks, can determine whether a refinery-based network with many long-distance runs can accept a spread-spectrum radio system instead of a fiber-optic network. Likewise, where an offshore oil and gas pipeline may be able to use a cellular telephone-based system that downloads archival data once a week, shorter-run water/wastewater lines may need only infrequent pressure checks and can benefit from lower-cost telemetry using a dial-up or radio-based system.

Implementation basics

Besides following good installation practices, as well as checking cabling, connections, terminations, switches and software, users need to be increasingly aware of network loading, traffic, and data integrity. They must make sure they have a good communications infrastructure, and verify that components they think are in place are actually there.

“The network is the backbone that everything else is built on,” says Ray Bachelor, president, Bachelor Controls Inc. (Sabetha, Kan.). “So developers have to ask: Is network speed and throughput adequate? Is the network installed properly? Is it designed to handle overloads? Can it be modeled and run before installation?”

When selecting a network technology, users typically follow one of two general routes, though there are often exceptions. The first category includes I/O block-focused networks, such as Profibus DP, Actuator Sensor Interface (AS-i), Seriplex, and others. The second consists of configurable device networks, such as EtherNet/IP, FOUNDATION fieldbus, ControlNet, Profibus PA, and others, which support I/O exchanges, explicit messaging, program downloads, and diagnostics. DeviceNet is used in both categories. [For more Ethernet options, see “What’s Left to Say About Ethernet?” CE, May ’01, p. 72.]

“Users must then follow their network’s installation guidelines, which define operating specifications, such as total trunk length, maximum number of nodes, how many drop-lines are allowed, maximum drop-line length, and accumulated distance of all drop-lines,” says Dave VanGompel, technology consultant, Rockwell Automation’s NetLinx-Logix business (Mayfield Heights, O.). “Careful attention to all the guidelines is crucial because once you start cutting and splicing, it’s easy to lose track of the lengths you’ve installed.

“For example, your network technology may allow individual droplines up to 20 feet. Unfortunately, you’ve done 30 nodes that way, which equals 600 feet of dropline, and your network only allows 512 total feet of dropline. You need to know and take this into account when planning a network.

“Likewise, even though it’s convenient to drop a line from a rafter, doing so eats up dropline length faster. One possible, though inconvenient, solution is to loop the trunkline closer to the cabinets.”
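To make the arithmetic concrete, here is a minimal Python sketch of a pre-installation wiring-budget check. The 20-ft drop and 512-ft cumulative limits mirror Mr. VanGompel’s example; the 64-node ceiling is an assumed placeholder, and all three values should be replaced with your own network’s guideline figures.

```python
# Minimal pre-installation wiring-budget check. Limits mirror the
# example in the text (20 ft per drop, 512 ft cumulative); the node
# ceiling is an assumption. Substitute your network's guideline values.

MAX_DROP_FT = 20          # longest single dropline allowed
MAX_CUM_DROP_FT = 512     # cumulative dropline budget for the trunk
MAX_NODES = 64            # assumed node limit from the installation guide

def check_drops(drop_lengths_ft):
    """Validate a list of per-node dropline lengths against the guideline."""
    problems = []
    if len(drop_lengths_ft) > MAX_NODES:
        problems.append(f"{len(drop_lengths_ft)} nodes exceeds {MAX_NODES}")
    for i, d in enumerate(drop_lengths_ft):
        if d > MAX_DROP_FT:
            problems.append(f"node {i}: {d} ft drop exceeds {MAX_DROP_FT} ft")
    total = sum(drop_lengths_ft)
    if total > MAX_CUM_DROP_FT:
        problems.append(f"cumulative drop {total} ft exceeds {MAX_CUM_DROP_FT} ft")
    return problems

# 30 nodes at the 20-ft maximum: every individual drop is legal, but the
# cumulative total (600 ft) blows the 512-ft budget, as in the text.
print(check_drops([20] * 30))
```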

Mr. VanGompel says a similar problem can occur when nodes are added after a network has been running for a while, but users forget to move the terminators. Neglecting this can increase retries, which adds jitter or causes nodes to “drop offline” intermittently. Eliminating these problems is crucial: newer networks have less margin for error because they’re operating five to 10 times faster than older ones. “The higher the bandwidth, the touchier the physical installation,” says Mr. VanGompel.

Increasing complexity

Beyond the basics, users must also be aware of increasingly complicated issues as the networks they’re implementing become more complex. For example, besides accounting for total loop distances and basic voltage drops, they must also add up capacitance and inductance values to stay within tolerances in intrinsic safety applications.
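As a rough illustration of that bookkeeping, the following sketch totals device and cable capacitance and inductance against a barrier’s allowed entity parameters. All figures are invented for illustration, not taken from any real datasheet.

```python
# Hypothetical entity-parameter check for an intrinsically safe loop:
# the devices' internal capacitance/inductance plus the cable's must
# stay under the barrier's allowed values. All numbers are illustrative.

BARRIER_CA_NF = 165.0      # assumed allowed capacitance (nF)
BARRIER_LA_MH = 4.0        # assumed allowed inductance (mH)
CABLE_C_NF_PER_KM = 80.0   # assumed cable capacitance per km
CABLE_L_MH_PER_KM = 0.7    # assumed cable inductance per km

def loop_is_within_tolerance(device_ci_nf, device_li_mh, cable_km):
    """Sum device and cable contributions and compare to barrier limits."""
    total_c = sum(device_ci_nf) + CABLE_C_NF_PER_KM * cable_km
    total_l = sum(device_li_mh) + CABLE_L_MH_PER_KM * cable_km
    return total_c <= BARRIER_CA_NF and total_l <= BARRIER_LA_MH

# Two transmitters on a 1.2 km homerun:
print(loop_is_within_tolerance([5.0, 5.0], [0.2, 0.2], 1.2))
```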

“Traditional bridging protocols, such as Highway Addressable Remote Transducer (HART) or FoxCom, are more forgiving and tend to be conceptually easier to lay out and install because users don’t have to worry about terminations or a lot of electrical engineering issues. This means users can delay selecting instruments and wiring techniques until relatively late in the implementation process,” says Jim Gray, director of marketing for I/A Series, Foxboro Co. (Foxboro, Mass.).

The essential appeal of a fieldbus network is that multiple devices can be dropped off one pair of wires, saving time, labor and money. Higher-level networks also generate more sophisticated data to enable predictive maintenance, while improved interoperability allows devices from multiple vendors to work on one network.

“When you go with H1 FOUNDATION fieldbus, you must move instrument selection and functional decisions up to the front end of the engineering process,” adds Mr. Gray. “You’ve got to know ahead of time what devices you’re connecting to H1 because that will determine the layout. You also have to decide on bus topology, timing, power, functions, and capabilities, such as how algorithms will affect control speed.”
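One of those front-end decisions, the segment power budget, can be roughed out before layout is frozen. The sketch below checks whether the farthest device on an H1 segment still sees the roughly 9-V minimum the fieldbus physical layer requires; the supply voltage, cable resistance, and device currents are assumptions to replace with real design values.

```python
# Rough H1 segment power-budget sketch: does the farthest device still
# see the ~9 V minimum the physical layer requires? Supply voltage,
# cable resistance, and device currents below are assumptions.

SUPPLY_V = 24.0
MIN_DEVICE_V = 9.0           # H1 minimum device operating voltage
LOOP_R_OHM_PER_KM = 44.0     # typical Type A fieldbus cable, loop resistance

def farthest_device_voltage(trunk_km, device_currents_ma):
    # Worst case: assume every device's current flows the full trunk length.
    total_a = sum(device_currents_ma) / 1000.0
    drop_v = total_a * LOOP_R_OHM_PER_KM * trunk_km
    return SUPPLY_V - drop_v

# Twelve assumed 20-mA devices at the end of a 1-km trunk:
v = farthest_device_voltage(trunk_km=1.0, device_currents_ma=[20] * 12)
print(f"{v:.1f} V at end of segment; OK: {v >= MIN_DEVICE_V}")
```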

To simplify network applications and promote desired interoperability, Mr. Gray adds that using H1 for data acquisition can help reduce required engineering, and allow devices, for example, from Emerson Process Management (Austin, Tex.), Smar International (Houston, Tex.), and Yokogawa Corp. (Newnan, Ga.) to work together on the same network segment.

Further simplification will likely occur as more calculation functions migrate into the field with controllers. Already, fieldbus protocols and embedded tags on field devices allow users to remotely check network points, rather than “ringing out” 4-20 mA wires in the field.

“Still, marrying the plant-floor’s real-time priorities and IT’s data needs is a challenge. Manufacturing execution system (MES) integrators must be able to allow the process to continue as designed without losing data integration,” says Mr. Bachelor of Bachelor Controls. “So, network developers have to decide, if something happens to the system, do they continue to let the process run, and let the data get away? Or do they hold up the system at some point? Deciding on a data imperative can help determine the rules you’ll set.”
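A toy sketch of that data-imperative decision: a bounded buffer that either sheds the oldest data to keep the process running, or signals a hold once it fills. The policy names and class are invented for illustration, not any vendor’s design.

```python
# Sketch of the "data imperative" choice described above: when the
# link to MES/IT fails, keep running and shed data, or hold the
# process once the buffer fills. Policy names are invented.
from collections import deque

class DataBuffer:
    def __init__(self, capacity, policy="keep_running"):
        self.queue = deque()
        self.capacity = capacity
        self.policy = policy   # "keep_running" sheds oldest data;
                               # "hold_process" signals a halt instead

    def record(self, sample):
        if len(self.queue) >= self.capacity:
            if self.policy == "keep_running":
                self.queue.popleft()      # let the data get away
            else:
                return "HOLD_PROCESS"     # stop production, keep data
        self.queue.append(sample)
        return "OK"

buf = DataBuffer(capacity=3, policy="hold_process")
for i in range(5):
    print(i, buf.record({"t": i, "value": 100 + i}))
```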

Tests and tools

After thoroughly researching and specifying networking solutions, users should also test-drive new equipment whenever possible. Many suppliers and fieldbus organizations run interoperability labs where users can try out potential solutions for their applications.

“Timing parameters often need careful adjustment. Some field devices have quirks that only show up in testing. For instance, normal network settings might work fine with five devices, but testing could reveal that adding a sixth causes communication problems,” says Mr. Gray.
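Mr. Gray’s five-versus-six-device example boils down to a timing budget: scheduled per-device transactions plus a reserve for unscheduled traffic must fit inside the bus cycle. A simplified check, with made-up cycle and per-device times rather than real device figures:

```python
# Illustrative timing check behind the five-versus-six-device example.
# All times are made-up placeholders, not real device figures.

CYCLE_MS = 250.0                # configured bus cycle
PER_DEVICE_MS = 35.0            # assumed time per device transaction
UNSCHEDULED_RESERVE_MS = 60.0   # headroom for acyclic/diagnostic traffic

def cycle_fits(n_devices):
    """True if scheduled traffic plus reserve fits within the cycle."""
    used = n_devices * PER_DEVICE_MS + UNSCHEDULED_RESERVE_MS
    return used <= CYCLE_MS

for n in (5, 6):
    print(n, "devices:", "fits" if cycle_fits(n) else "overruns the cycle")
```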

Users can also avoid potential problems by ensuring their devices and networks comply with the latest libraries, symbol tables, and parameter tables for the protocol they’re using. This will allow hosts to read all data from the network’s field devices.

Sometimes suppliers proactively test and certify interoperability of some network solutions. For instance, the Fieldbus Foundation’s (Austin, Tex.) recently released Interoperability Test Kit 4.0 evaluates compliance with FOUNDATION fieldbus and tests for interoperability with other certified devices. A capabilities file then tells future hosts what the devices can do. Parameter tables for HART, DeviceNet and Profibus protocols accomplish the same basic task.
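In spirit, a capabilities-file or parameter-table check is a set comparison: what the host expects to read versus what the device file declares. A hypothetical sketch, with purely illustrative parameter names and no real file format:

```python
# Hedged sketch of what a capabilities-file check buys a host: compare
# the parameters the host expects against what the device file
# declares. Parameter names are purely illustrative.

def missing_parameters(host_needs, device_declares):
    """Return host-required parameters the device file doesn't declare."""
    return sorted(set(host_needs) - set(device_declares))

host_needs = ["PV", "SV", "STATUS", "RANGE_HI", "RANGE_LO"]
device_declares = ["PV", "STATUS", "RANGE_HI"]

print("Host cannot read:", missing_parameters(host_needs, device_declares))
# -> Host cannot read: ['RANGE_LO', 'SV']
```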

To help users examine and adjust existing networks, many software and hardware tools provide views of network traffic and help determine if devices are communicating properly. Some of these include:

Relcom Inc.’s (Forest Grove, Ore.) Fieldbus Monitor FBT-3 and Fieldbus Wiring Validator FBT-5;

SST’s (Waterloo, Ontario, Canada) NetAlert NetMeter for DeviceNet, NetAlert Traffic Monitor Tee, and NetAlert Power Monitor Tee;

Rockwell Automation’s (Milwaukee, Wis.) Allen-Bradley DeviceView Configurator; and

Synergetic Micro Systems’ (Downers Grove, Ill.) DeviceNet Detective.

In addition, similar diagnostic tools from the information technology (IT) realm, called network sniffers, monitor network traffic, bandwidth issues, broadcast traffic and other parameters for whole network segments, or capture and analyze individual data frames.
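A toy version of the bandwidth and broadcast tallies such a sniffer reports, computed here from hand-made frame records rather than a live capture:

```python
# Toy segment summary of the kind a sniffer produces: throughput and
# broadcast share. Frames are hand-made tuples; a real tool would
# capture them from the wire.

frames = [
    # (timestamp_s, length_bytes, destination)
    (0.00, 64,   "ff:ff:ff:ff:ff:ff"),   # broadcast
    (0.01, 512,  "00:a0:45:12:34:56"),
    (0.02, 128,  "ff:ff:ff:ff:ff:ff"),   # broadcast
    (0.95, 1024, "00:a0:45:12:34:57"),
]

span_s = frames[-1][0] - frames[0][0]
total_bits = sum(length * 8 for _, length, _ in frames)
bcast = sum(1 for _, _, dst in frames if dst == "ff:ff:ff:ff:ff:ff")

print(f"throughput ~ {total_bits / span_s / 1000:.0f} kbps over {span_s:.2f} s")
print(f"broadcast frames: {bcast} of {len(frames)}")
```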

Adapting “patchworks”

In addition to serving an application’s present requirements, truly suitable networks should also accommodate future needs—though few appear designed to do so. “The biggest problem we’ve found is that people go with software specific to one application, but often don’t address future applications or add-ons they’ll eventually need. Especially in municipalities, which seek low bids and often have a succession of vendors, users are eventually faced with patchworks of equipment and software networks that sometimes interact successfully and sometimes not,” says Christopher LaFavers, control engineer, Automation Solutions (Houston, Tex.). “Nowadays, people are starting to step away from DCS and larger mainframes, and are going to smaller, PC-based systems with Microsoft Windows NT. This means even more problems as users try to integrate field equipment into new environments.”

Mr. LaFavers adds that these multi-vendor, multi-protocol patchwork networks also proliferate as users seek to reduce I/O points and downsize their industrial systems, for example, from larger GE Fanuc 90-70s and Rockwell Automation’s Allen-Bradley (A-B) PLC 5s to smaller GE Fanuc 90-30s and A-B SLC 5/04s.

One solution for patchwork networks, which usually contain equipment from several vendors and 5-10 different software packages, is a dynamic data exchange (DDE) or OPC-based server, which uses one software package to support multiple communication media and varied equipment types. Automation Solutions’ AuCS universal communications server helps manage communication protocols and software, while its newer AuES enterprise server runs within Microsoft Windows NT itself, rather than requiring another software program to be running.

“This means a network isn’t required to run separate servers per application, so several utilities and programs don’t have to compete for the same network resources, which lessens potential for conflicts on the system,” adds Mr. LaFavers. “AuCS and AuES know the protocols they support, so they’ll know what’s on their network, and won’t generate conflicts within themselves.”
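The single-server idea can be sketched as one process that owns all the protocol drivers, so applications don’t each open their own connections and compete for the same network resources. The driver classes below are hypothetical stand-ins, not Automation Solutions’ actual code:

```python
# Sketch of the single-server idea described above: one process owns
# all protocol drivers. Driver classes and protocol names are
# hypothetical illustrations.

class ModbusDriver:
    def read(self, address):
        return f"modbus[{address}]"       # placeholder for a real poll

class DeviceNetDriver:
    def read(self, address):
        return f"devicenet[{address}]"    # placeholder for a real poll

class CommServer:
    """One shared server instead of one server per application."""
    def __init__(self):
        self.drivers = {"modbus": ModbusDriver(),
                        "devicenet": DeviceNetDriver()}

    def read(self, protocol, address):
        # All applications route requests through this single point,
        # so the server knows everything on its network.
        return self.drivers[protocol].read(address)

server = CommServer()
print(server.read("modbus", 40001))
print(server.read("devicenet", "node3/input2"))
```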

Incentive-based partnerships

Because many networking technologies emerge and evolve so fast, there often aren’t many experienced users or guidelines for young products. Consequently, vendors can be one of the best resources for implementing new solutions, if users manage the relationship intelligently.

“I think more users are working out cooperative agreements in which suppliers are sharing some of the risk. This can help reassure users that a supplier is willing to stand behind the performance of its products,” says Steve Loranger, Emerson’s central U.S. area vp. Some risk-sharing methods include:

An agreement by the supplier to provide technical resources, maintenance and support for the first year of a product’s implementation;

A performance clause in the contract that assesses fees and/or waivers if certain performance criteria aren’t achieved; and

For early adopters, free or very low-cost installation of pilot equipment and systems.

“As new technologies become proven in the three-to-five years after they’re introduced, it becomes obvious which suppliers truly understand how to apply them,” says Mr. Loranger. “The key is to personally network with other users in your industry to find these successful suppliers.”

Getting fitted

Beyond cost and component availability, numerous variables must be addressed to make a network successfully fit and serve its application. Here are some of the most important:

Determine data volume, speed and other parameters required by the network;

Decide on network layout and physical size—number of devices, sensors, cables, connectors, actuators, and other equipment—required for efficient operation and future expansion;

Examine networking methods used in similar applications. For example, fieldbus protocols for discrete applications normally run many small data packets at high speed and high volume, while networks for process applications typically run fewer data packets more slowly, but with far more data per packet (see the sketch after this list);

Evaluate inherent tradeoffs in a chosen networking method or fieldbus protocol, such as internal speed and throughput versus data traffic and integrity; then adjust the design to optimize application efficiency prior to implementation;

Talk with other users about their experiences with specific network solutions. Actively check word-of-mouth related to particular equipment and systems;

Ensure that performance and support claims meet expectations. For example, if a fieldbus protocol can accept 200 parameters from a device, make sure the device can also handle more than just a few of those parameters;

Form a cooperative agreement in which the supplier shares some risk along with the user implementing a new network;

Enlist a system integrator and/or experienced suppliers to help adapt the network to individual application needs. Users should be comfortable and have a good rapport with their integrator, be able to go over all problems together, and not feel like products are being pushed down their throats; and

Simulate and test network components, software and operation as much as possible before installation.
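The traffic-profile contrast noted in the checklist above can be made concrete with a worked comparison; the packet counts and sizes are illustrative placeholders, not measurements from any network:

```python
# Worked comparison of the two traffic profiles from the checklist:
# discrete networks move many small packets quickly; process networks
# move fewer, larger ones. Numbers are illustrative only.

def kbps(packets_per_s, bytes_per_packet):
    """Convert a packet rate and size into kilobits per second."""
    return packets_per_s * bytes_per_packet * 8 / 1000.0

discrete = kbps(packets_per_s=1000, bytes_per_packet=8)    # e.g. I/O bits
process  = kbps(packets_per_s=10,   bytes_per_packet=256)  # e.g. PV + status

print(f"discrete profile: ~{discrete:.0f} kbps in 1000 small packets/s")
print(f"process profile:  ~{process:.1f} kbps in 10 large packets/s")
```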

Schneider Electric’s distributed I/O aids OEM’s plastic loaders

To lessen network wiring and installation costs and enable improved software and hardware, Conair Group (Franklin, Pa.) recently replaced centralized programmable logic controllers (PLCs) in its plastic pellet loaders with Schneider Electric Automation’s (North Andover, Mass.) Modicon Momentum distributed I/O system. Conair also wanted to integrate the loaders’ multiple communications protocols into one platform.

Momentum’s Modbus Plus network protocol provides up to 20,000 registers/sec, peer-to-peer communication, and a design that allows users to see and set Modbus Plus addresses on the module’s front. Conair uses a CPU with an optional Modbus Plus adapter and I/O base as the master, and uses additional I/O bases and adapters for individual loader control. Conair’s typical loader configuration consists of an I/O base with a Modbus Plus communication “top hat,” essentially an add-on communications card.
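Taking the quoted 20,000 registers/sec at face value, a back-of-envelope calculation shows how scan rate trades against loader count; the registers-per-loader figure is an assumption, not a Conair specification:

```python
# Back-of-envelope use of the 20,000 registers/sec figure quoted above:
# how often can the master refresh every loader? Registers per loader
# is an assumed illustrative value.

REGISTERS_PER_SEC = 20_000

def scans_per_second(n_loaders, regs_per_loader):
    """Full-network refresh rate given the register throughput budget."""
    return REGISTERS_PER_SEC / (n_loaders * regs_per_loader)

for loaders in (10, 50, 100):
    rate = scans_per_second(loaders, regs_per_loader=20)
    print(f"{loaders} loaders: ~{rate:.0f} full scans/sec")
```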

For users with hundreds of loaders, networks longer than 1,500 ft, or more than 65 nodes, Conair adds repeaters and bridges to the configuration. Similarly, if a program reaches a processor’s memory limit, users can add another processor to the network and divide control responsibilities between processors.

Conair reports that installation and startup time for commissioning a system was significantly reduced compared to a traditional centralized approach. Instead of running discrete wire from the loaders to a central PLC, Conair’s electricians now pull Modbus Plus cable to each loader, then make the connection to the Modbus Plus tap and drop cable.