Model-based design of CANopen systems: System level modeling
Designing complex mechatronic systems can be easier with model-based design. The multiple co-existing disciplines in mechatronic system design hinder the use of software-oriented modeling principles, such as UML, but modern tools can be integrated into a working tool chain. Part 2 of 2. This is a Digital Edition Exclusive in the Control Engineering January issue. See part 1, linked below.
Part one of this article covered traditional distributed automation and existing modeling approaches. Part two covers how model-based design can be integrated into a working tool chain.
System-level modeling in Simulink
A system model typically consists of models of a whole signal command chain, whether at the system or subsystem level. The model may also contain submodels describing the behavior of hydraulics and mechanics, for example, enabling multidisciplinary design and simulation. The main benefit of model-based design is that errors are typically found earlier than in traditional approaches. Models are executable specifications, enabling continuous testing. When whole command chains, systems, or subsystems can be tested, more practical test scenarios can be used to reveal the problems typically found only during integration tests.
The multidisciplinary system model can also be used for initial tuning of control behavior if dynamic behavior of hydraulics and mechanics is included. After finalizing the design and initial tuning, the control behavior of each device can be automatically exported into executable programs to the final hardware (HW). Incremental modeling and development becomes efficient because of the ease of use and automated transformations.
A node model in a system contains the CANopen mapping and a referenced application behavior submodel. In the early stages of development, parameter and signal descriptors are not required; they do not affect behavior, but simply tag the signal or parameter to be published. Signal names and data types are taken directly from the model into the descriptions. Simulation models are commonly used for documentation and communication of interface descriptions. It is important to define the interfaces systematically, because the control functions communicate through the interfaces and any inconsistency can have more severe global consequences than an erroneous internal behavior.
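The descriptor described above can be thought of as a small record per published signal. The following sketch is illustrative only; the field names and values are assumptions, not the schema of any specific tool:

```python
# Hypothetical signal descriptor as exported from a node model.
# All field names and the example signal are illustrative.
descriptor = {
    "name": "boom_angle",       # signal name taken directly from the model
    "data_type": "INTEGER16",   # data type taken from the model signal
    "direction": "input",       # direction fixed by the marker block used
    "published": True,          # the marker only tags the signal for export;
                                # it does not affect simulated behavior
}
```

Because the marker carries no behavior, the same model simulates identically with or without such descriptors attached.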
Configuration management is required for simulation models. One approach to arrange well-documented and proven configuration management is to implement generic simulation models and publish all the configuration parameters. This approach enables the utilization of configuration management features provided by a system integration framework. If CANopen is used, various model configurations can be stored as profile databases where parameter values can be imported to the new models. Potential conflicts can be detected and solved outside the model, in the corresponding design tools.
The main benefits of the referenced models are that they are faster in simulation, enable parallel development of submodels, and can be used directly from other top-level models, such as in rapid control prototyping (RCP). RCP can significantly speed up development since final processing performance, memory, and I/O constraints do not apply. Model referencing can also be used as a method of reusing the application behavior in other models.
Preparing for export
Code generation from simulation models is a proven technology, but managing system-level interfaces has not been included until now. After completing the application behavior, signals and parameters need to be defined. A dedicated blockset was developed for such purposes. The blocks shown in Figure 2 of the first part of this article are only markers, invisible to the code generation. The simulation model is independent of the integration framework, and therefore only application interface descriptions are exported to framework specific tools. This approach enables the full utilization of the framework-specific tool chain for integrating the application-specific descriptions with hardware and software platform-specific interface descriptions.
Signals and parameters behave differently and need to be managed accordingly. Because the process image is thoroughly defined, signals are automatically assigned into the object dictionary, but most devices have a default PDO (process data object) mapping affecting the organization of the signals. The safest option was to provide a manual override of the automatic object assignment for signals and parameters. The access type of signals is fixed by using direction-specific blocks, and the object type does not need to be defined for the process image. Signals can also be introduced into device profile-specific objects, for example, when standard behavior is developed. In this part of the process, compatibility with existing programmable logic controllers (PLCs) is as important as CANopen conformance.
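The automatic assignment with a manual override described above can be sketched as follows. This is a minimal illustration, assuming sequential allocation in the CANopen manufacturer-specific area (0x2000 and up); the signal names and the device-profile index used for the override are invented for the example:

```python
MFR_BASE = 0x2000  # start of the CANopen manufacturer-specific profile area

def assign_objects(signals, overrides=None):
    """Assign each signal an object-dictionary index.

    Manually overridden assignments win; remaining signals are
    allocated sequentially, skipping any index already taken.
    """
    overrides = overrides or {}
    assignment = {}
    used = set(overrides.values())
    next_index = MFR_BASE
    for name in signals:
        if name in overrides:
            assignment[name] = overrides[name]
        else:
            while next_index in used:
                next_index += 1
            assignment[name] = next_index
            used.add(next_index)
            next_index += 1
    return assignment

signals = ["boom_angle", "pump_pressure", "valve_cmd"]
auto = assign_objects(signals)
# A manual override pins one signal to a device-profile object instead.
manual = assign_objects(signals, overrides={"valve_cmd": 0x6200})
```

The override path is the important part: it lets a signal land in a device-profile object (or match an existing PLC layout) without disturbing the automatic allocation of the rest.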
Parameter management deviates even more among implementations, so it should be possible to select the main attributes manually. If grouping is required by the applications, the manual assignment enables parameter grouping into records and arrays. Access type and retain attributes are available only for parameters, and their values are related to the parameter's purpose. If a parameter is intended to indicate a status, it needs to be read-only. If a parameter's purpose is to adapt the behavior of a function, read-write access and retain are needed. Some parameters, such as output forces, require read-write access but should not support retain storage, because forces should be cleared during restart for safety reasons.
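The access/retain rules above amount to a small lookup from a parameter's purpose to its attributes. The purpose categories below are illustrative labels, not a standardized taxonomy:

```python
def parameter_attributes(purpose):
    """Map a parameter's purpose to access and retain attributes,
    following the rules described in the text.

    Purpose labels ("status", "tuning", "output_force") are
    illustrative categories for this sketch.
    """
    rules = {
        # status indicator: read-only, no need to survive restart
        "status":       {"access": "ro", "retain": False},
        # tuning value adapting a function: writable and retained
        "tuning":       {"access": "rw", "retain": True},
        # output force: writable, but cleared at restart for safety
        "output_force": {"access": "rw", "retain": False},
    }
    return rules[purpose]
```

Encoding the rules once, rather than deciding per parameter, keeps the safety-relevant case (forces cleared at restart) from being overridden by accident.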
Automation can be implemented later, for example, by using target files describing object assignment rules specific to a target hardware. Development of interface standards and exchange formats will help further development. Currently, there are too many variations, especially in the management of parameter objects, to be covered by automatic assignment without a potential need for further editing.
Minimum, maximum, and default values can be assigned for each object. It is important that they be defined, because they can be efficiently reused during later steps of the process. Those values can be given either as plain values or as variables in the Matlab workspace. This approach enables sharing the same metadata with application function blocks as constants linked to the same variables, but may add to the complexity of the model. Value fields can be left empty, in which case default values are used automatically to speed up modeling. The minimum and maximum possible values according to the object's data type are used as the minimum and maximum values by default. If a default value is not defined, then zero is used.
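The defaulting rules just described (type limits for empty min/max fields, zero for an empty default) can be sketched as follows, assuming a small subset of CANopen basic data types:

```python
# Value ranges for a few CANopen basic data types (illustrative subset).
TYPE_LIMITS = {
    "INTEGER8":   (-128, 127),
    "INTEGER16":  (-32768, 32767),
    "UNSIGNED8":  (0, 255),
    "UNSIGNED16": (0, 65535),
}

def complete_object(data_type, minimum=None, maximum=None, default=None):
    """Fill empty value fields per the rules in the text:
    data-type limits for min/max, zero for the default value."""
    lo, hi = TYPE_LIMITS[data_type]
    return {
        "min": lo if minimum is None else minimum,
        "max": hi if maximum is None else maximum,
        "default": 0 if default is None else default,
    }
```

For example, `complete_object("INTEGER16")` yields the full INTEGER16 range with a default of zero, while `complete_object("UNSIGNED8", maximum=100)` keeps the explicit maximum and fills in the rest.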
The generated application behavior needs to be isolated in a separate submodel. Source code cannot be generated directly from the root of the referenced model. The structure of the generated code strongly depends on the internal structure of the source model. The IEC 61131-3 code generation results in a single function block, where the behavior of the selected block is included. Depending on the model structure, other functions and function blocks may also be generated.
A completely fixed interface is mentioned in a case example in which application development was improved by fully automated code generation. The more generic approach expects the interfaces to be managed from the model. However, only application-specific interfaces can be managed in the model; both hardware and software platform-specific interfaces need to be managed according to the management process of the selected integration framework. Applications can be developed as separate models and mapped onto the same physical node as part of the system design process. The level of modularity can be selected according to the application field. In the presented approach, configuration management of the applications is supported by published application interface descriptions. The configuration management of the target system is done in a CANopen process, with better support on the system level. Calling the application interface export of Application A is presented in Figure 1.
The resulting application parameter and signal object descriptions are shown in Figure 2. The file format in the example is a CANopen profile database (CPD) because CANopen was selected as the example system integration framework. Application interface descriptions are combined with the descriptions of other optional applications to be integrated into the same device, and with the communication interface of the target device. The resulting electronic data sheet (EDS) file can be used in system design as a template defining the communication capabilities of the device. System-structure-specific communication parameters are assigned during the system design process.
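To make the EDS step concrete, the sketch below renders one object as an EDS-style section in the spirit of the CiA 306 format. It is a simplified illustration: the exact field set emitted by the actual tool chain is not shown in the article, and the example object is invented:

```python
def eds_section(index, name, data_type_code, access, default):
    """Render one object-dictionary entry as an EDS-style INI section
    (simplified; modeled loosely on the CiA 306 EDS format)."""
    return "\n".join([
        f"[{index:04X}]",                     # section named by the hex index
        f"ParameterName={name}",
        "ObjectType=0x7",                     # 0x7 = VAR
        f"DataType=0x{data_type_code:04X}",   # e.g. 0x0003 = INTEGER16
        f"AccessType={access}",
        f"DefaultValue={default}",
        "PDOMapping=1",                       # signal is PDO-mappable
        "",
    ])

section = eds_section(0x2000, "boom_angle", 0x0003, "rw", 0)
```

Concatenating such sections for every published signal and parameter, plus the device's communication objects, yields the EDS template that the system design tools consume.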
There are four requirements:
1. All tools must be compatible with each other. Based on experience, using standard interfaces is the easiest method to achieve a sufficient level of compatibility.
2. Thoroughly defined interfaces are needed to make co-development projects work. It cannot be assumed that all development is performed within one company or department, or with a uniform methodology.
3. Outputs must integrate with manually written, existing code to enable either a smooth transition into model-based development or a flexible mix of automatically generated and manually written code.
4. Although CANopen is currently the best integration framework in machinery applications, upgrade paths and additional supported integration frameworks should also be possible.
A generic approach does not support the predefined signaling abstraction used in some implementations. Application-specific abstractions need to be generated from the model and developed further in the CANopen process, where physical platform-specific and communication-specific details can be integrated most efficiently into a complete description of a device's communication interface, including the necessary information from the rest of the system. Finally, the communication abstraction is imported as IEC code into a development tool. Manual coding is required only for connecting the exported application behavior to the communication and I/O abstraction layers. The approach follows a standardized process enabling integration of commonly used tools, which is also recommended in the relevant literature. Relying on a standardized process enables a simple adaptation that is natively supported by the tools, and heavy tool customizations are avoided.
The remaining manual integration work is minimal, mainly consisting of connecting application signals and parameters to the communication abstraction layer.
Signal and parameter metadata (minimum, maximum, and default values, and signal validity), if used by the application behavior, also need to be connected manually to the relevant application function blocks. Fixed connections are not made, because such information is not necessarily required for all signals and parameters. Including complete metadata for all signals and parameters with plausibility checking may require too much memory and processing power. An automatic connection would also violate the requirement of flexibly mixing manually written and automatically generated code.