Networking used to monitor and control superconducting magnets

The world's largest and most powerful particle accelerator, the Large Hadron Collider (LHC) at CERN, has restarted circulating beams of protons. It relies on numerous CANopen networks to control its high-energy physics experiments.

By CAN in Automation (CiA) June 7, 2017

After a longer winter sleep than usual, the Large Hadron Collider (LHC) has been restarted. A superconducting magnet of the LHC has been replaced, and a new beam dump has been installed in the Super Proton Synchrotron (SPS). Additionally, the researchers removed and exchanged some cables.

Among other things, these upgrades will allow the collider to reach a higher integrated luminosity—the higher the luminosity, the more data the experiments can gather to allow them to observe rare processes. Last year, the machine was able to run with stable beams—beams from which the researchers can collect data—for around 49% of the time, compared to just 35% the previous year.

The LHC first started up in 2008, and remains the latest addition to CERN’s accelerator complex. It consists of a 27-km ring of superconducting magnets with a number of accelerating structures to boost the energy of the particles along the way. Inside the accelerator, two high-energy particle beams travel at close to the speed of light before they are made to collide. The beams travel in opposite directions in separate beam pipes, two tubes kept at an ultrahigh vacuum.

They are guided around the accelerator ring by a strong magnetic field maintained by superconducting electromagnets. The electromagnets are built from coils of a special electric cable that operates in a superconducting state, efficiently conducting electricity without resistance or loss of energy. This requires chilling the magnets to -271.3 °C, a temperature colder than outer space. For this reason, much of the accelerator is connected to a distribution system of liquid helium, which cools the magnets, as well as to other supply services.

Thousands of magnets of different varieties and sizes are used to direct the beams around the accelerator. These include 1 232 dipole magnets, each 15 m in length, which bend the beams, and 392 quadrupole magnets, each 5 to 7 m long, which focus the beams. Just prior to collision, another type of magnet is used to "squeeze" the particles closer together to increase the chances of collisions. The particles are so tiny that the task of making them collide is akin to firing two needles 10 km apart with such precision that they meet halfway.

All the controls for the accelerator, its services, and technical infrastructure are housed under one roof at the CERN control center. From here, the beams inside the LHC are made to collide at four locations around the accelerator ring, corresponding to the positions of four particle detectors: Atlas, CMS, Alice, and LHCb.

The CERN engineers are using CANopen-networked modules to control and monitor the power supplies. These embedded local monitoring board (ELMB) modules are multifunction devices providing analog inputs/outputs (I/Os), digital inputs/outputs, SPI connectivity, and custom functionality. They are installed in the Atlas, CMS, LHCb, Alice, and Totem experiments. The units are based on Atmel ATmega128 microcontrollers with 128 KiB of flash memory and 4 KiB of SRAM. The PCA82C250 transceiver chips are produced by NXP. The modules are plugged into ELMB or application-specific motherboards.
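To give a flavor of the CANopen traffic such modules exchange, the sketch below composes an SDO "upload" (read) request as defined in CiA 301, the kind of frame a host might send to poll an analog input. The node ID (0x3F) and object index (0x6401, the CiA 401 "read analogue input 16-bit" object) are illustrative assumptions, not CERN's actual configuration.

```python
def sdo_upload_request(node_id: int, index: int, subindex: int):
    """Return (COB-ID, 8-byte payload) for a CANopen SDO upload request."""
    cob_id = 0x600 + node_id                # client-to-server SDO COB-ID
    payload = bytes([
        0x40,                               # command: initiate upload request
        index & 0xFF, (index >> 8) & 0xFF,  # object index, little-endian
        subindex,
        0, 0, 0, 0,                         # unused in the request
    ])
    return cob_id, payload

cob_id, frame = sdo_upload_request(0x3F, 0x6401, 1)
print(hex(cob_id))    # 0x63f
print(frame.hex())    # 4001640100000000
```

The server (here, the ELMB) would answer on COB-ID 0x580 + node ID with the requested value in an expedited response.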

In some applications, Ethernet-to-CAN gateways are used. These gateways are designed to offer an alternative to PCI and USB interface boards connecting the CANopen networks to the CERN control system.
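Such gateways typically tunnel CAN frames inside TCP or UDP packets. The framing below (big-endian 11-bit identifier, a data-length byte, and data padded to 8 bytes) is a generic illustration of the idea, not the wire protocol of any particular gateway product.

```python
import struct

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Wrap a classic CAN frame into an 11-byte tunnel packet."""
    assert can_id <= 0x7FF and len(data) <= 8
    header = struct.pack(">HB", can_id, len(data))  # ID + DLC
    return header + data.ljust(8, b"\x00")          # pad data to 8 bytes

def unpack_can_frame(packet: bytes):
    """Recover (can_id, data) from a tunnel packet."""
    can_id, dlc = struct.unpack(">HB", packet[:3])
    return can_id, packet[3:3 + dlc]

pkt = pack_can_frame(0x63F, bytes.fromhex("4001640100000000"))
print(len(pkt))   # 11
```

A round trip through `pack_can_frame` and `unpack_can_frame` returns the original identifier and data unchanged.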

This article originally appeared on CAN in Automation’s (CiA’s) website. CAN in Automation is a CFE Media content partner. Edited by Chris Vavra, production editor, Control Engineering, CFE Media.
