Global vision standards drive industry’s success
Machine vision original equipment manufacturers (OEMs) and system integrators may not think about the how or why of technical standards on a regular basis, but without consensus on these interface standards, the industry wouldn’t be nearly as strong.
"From a user perspective, vision standards guarantee component interoperability," said Bob McCurrach, AIA director of standards development. "For component manufacturers, standards allow them to develop products that can get spread out over many more users."
Volunteer committees meet several times a year to ensure that AIA’s vision industry standards address—and stay ahead of—the needs identified by machine vision and imaging stakeholders on a global scale.
GigE Vision tightens its focus
As it migrates toward version 2.1, with a target release date of July 2018, GigE Vision will strengthen its capabilities while continuing its focus on user-friendliness and efficiency. Multipart image transfer is at the heart of the standard’s newest iteration, aligning with the growth of 3-D imaging.
"3-D imaging has a more complex data structure," said Eric Bourbonnais, chairman of the GigE Vision committee and software design leader at Teledyne Dalsa. "One of the parts is very similar to the 2D image with the pixel format, but more information needs to be transferred with a 3-D image. GigE Vision’s new multipart transfer mode allows transmission of all that data in one block."
Scheduled for release in mid-2019, version 2.2 will include support for GenICam’s GenDC module.
Furthermore, GigE Vision’s "Certified to Use in GigE Vision Systems" program allows the registration of accessories that are designed to supplement GigE Vision products but do not implement the full GigE Vision standard. One example is a Power-over-Ethernet switch that offers locking connectors compatible with GigE Vision cameras.

Camera Link HS quickens the pace
The committee behind Camera Link HS is keeping busy with the release of version 1.2 and continuing work on CLHS revision 2. Version 1.2 takes greater advantage of the capabilities of the X protocol, which capitalizes on the higher bandwidth and higher speeds found in the telecommunications industry.
"Another thing we’ve leveraged from the telecoms is that the X protocol IP core now includes forward error correction (FEC), which simplifies system-level architecture without impacting the bandwidth," said Mike Miethig, technical manager at Teledyne Dalsa and chairman of the CLHS standards technical subcommittee. "The X protocol also uses 64b/66b encoding, which is much more efficient encoding than 8b/10b, so there is less overhead."
The shared IP core ensures products are compatible and reduces development time and product support. "It is a huge value to have access to a proven, unencrypted source code that is FPGA vendor-agnostic, and it makes the core that much more portable," Miethig said. "You can use it on older-generation and newer-generation FPGAs and get the same performance."
While CLHS can run over copper or fiber optic cables, the fiber optics used in telecom allow faster performance, as well as low cost, robustness, and reliability. "CLHS is a great platform because once you invest in it, you know that several years from now, there’s a possibility to go faster on fiber optic," Miethig said.
On the X protocol, the CLHS committee has agreed to go to 15.9 Gbps per lane – up from 10.3 Gbps – on the F2 (SFP+) connector, "because that’s where the mid-tier FPGAs top out, and our technology basically moves with FPGA technology," Miethig said. Version 1.2 also includes use of the X protocol on the new C3 (SFF-8473) connector at 10.3 and 12.5 Gbps, enabling 87.5 Gbps in a single cable with an effective data throughput of 10.2 Gbytes per second.
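The quoted figures are consistent with a multi-lane cable: seven lanes at 12.5 Gbps would give the stated 87.5 Gbps aggregate (the lane count here is an inference from the article's numbers, not stated in it). A rough sketch of the arithmetic, assuming 64b/66b line coding:

```python
# Assumed lane count: 7 x 12.5 Gbps matches the article's 87.5 Gbps aggregate.
lanes = 7
lane_rate_gbps = 12.5
raw_gbps = lanes * lane_rate_gbps
print(raw_gbps)  # 87.5

# Payload rate after 64b/66b line coding, converted to gigabytes per second.
payload_gbytes = raw_gbps * (64 / 66) / 8
print(round(payload_gbytes, 2))  # ~10.61 GB/s before packet framing
```

The remaining gap between ~10.6 GB/s and the quoted 10.2 GB/s effective throughput would be accounted for by protocol framing and control traffic on top of the line coding.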
Another change to the CLHS interface is the release of the revision 1.3 IP core, which fixes minor bugs found by users when pushing the IP core to its maximum bandwidth.
Meanwhile, the roadmap for CLHS revision 2 supports 3-D cameras with new types and bit depths using GenDC terminology; supports multiple regions of interest, each with different pixel type and bit depth and all within a single frame; and measures propagation delays, enabling precise triggering on multiple cameras.
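The propagation-delay feature on the roadmap addresses a practical multi-camera problem: cables of different lengths delay triggers by different amounts. One common compensation scheme, sketched here with made-up delay values (the actual CLHS mechanism is not detailed in the article), is to hold back the trigger to each camera by the difference between its link delay and the slowest link's delay, so all triggers arrive simultaneously:

```python
# Hypothetical measured one-way propagation delays (ns) per camera link.
delays_ns = {"cam0": 120, "cam1": 310, "cam2": 205}

# Delay each camera's trigger send by the gap between the slowest link
# and that camera's own link, so every trigger lands at the same instant.
slowest = max(delays_ns.values())
send_offsets = {cam: slowest - d for cam, d in delays_ns.items()}
print(send_offsets)  # {'cam0': 190, 'cam1': 0, 'cam2': 105}

arrival = {cam: send_offsets[cam] + delays_ns[cam] for cam in delays_ns}
assert len(set(arrival.values())) == 1  # all triggers arrive together
```

Measuring the delays in the interface itself, rather than requiring matched cable lengths, is what makes the precise multi-camera triggering practical.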
The upcoming version 2.1 of Camera Link aims to improve upon areas of ambiguity in version 2.0. For example, the new version will clarify HDR/SDR standoff requirements and the skew budget for Channel Link signals, and provide new high-bit-depth definitions.
Version 2.1 also will allow Camera Link on FPGA implementations, but only with a qualification process, or a PlugFest. The first version of that verification took place during the International Vision Standards Meeting (IVSM) in Hiroshima, Japan, in fall 2017. The second iteration of PlugFest will occur in May during IVSM 2018 in Frankfurt, Germany.
Other changes designed to bring clarity to the standard include an improved pin assignment chart that is much easier to read and will reduce the learning curve for new users, as well as a new CL Serial Read function, since the previous one was poorly documented and resulted in multiple interpretations from users. Furthermore, the technical committee continues to pursue Camera Link 3.0 to formalize the standard on FPGA requirements and testing, in addition to providing basic GenICam functionality, CRC error detection, and device discovery.
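CRC error detection, one of the Camera Link 3.0 goals, works by appending a checksum to each transmitted frame so the receiver can detect corruption. The article does not specify which CRC variant the standard will use; this sketch uses CRC-32 purely to illustrate the mechanism:

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    """Append a CRC-32 checksum so the receiver can detect corruption."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, received = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == received

frame = frame_with_crc(b"\x10\x20\x30\x40")
print(check_frame(frame))                        # True: frame intact
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
print(check_frame(corrupted))                    # False: single bit flipped
```

A CRC only detects errors; a link that finds a bad checksum must discard or re-request the frame, which is why forward error correction, as in the CLHS X protocol, is the stronger option when retransmission is undesirable.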
USB3 Vision addresses cable spec
Version 1.1 of the USB3 Vision standard will offer multistream support and add USB 3.2 compatibility with multilane 10/20 Gbit operation. But perhaps the bigger story coming from the standard is the update to the cable specification, including cable compliance testing details.
"When we launched USB3 Vision in 2013, we didn’t have a robust electrical cable qualification process," McCurrach said. "And doing detailed electrical testing on passive copper cables is very complicated and expensive. But we have put together a simplified cable qualification process that we will be implementing by the end of the year."
McCurrach also pointed to the broad range of optical converters on the market. "Those are coming down in price, so I expect that those will really uncap the potential on USB3 Vision for longer-distance applications."
As embedded applications look to be the next frontier in machine vision, standardization becomes a natural topic of discussion. But that discussion has challenges at a fundamental level.
"By definition, an embedded application is unique to its particular application," McCurrach said. "Because of the disparity of the application areas, it will be a harder road to come up with a standardized protocol or physical layout. But it will happen."
Indeed, developing an embedded interface standard will rely on global cooperation from vision industry trade associations, including AIA, Japan’s JIIA, Europe’s EMVA, CMVU in China, and VDMA MV in Germany. If superhero movies have taught us anything, when the good guys consolidate their powers, everybody wins.
Winn Hardin is contributing editor for AIA. This article originally appeared in Vision Online. AIA is a part of the Association for Advancing Automation (A3), a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, email@example.com.