Model Continuity From Offline Simulation to Real-Time Testing

October 09, 2019 by Jost Allmeling
The dream of many engineering managers is having a single simulation model or “digital twin” for a newly developed product or application. This model would serve as a reference for all technical aspects during design, verification, and maintenance. Although working with a single model seems impossible, many benefits of a digital twin can be realized today with the right development process and an appropriate toolchain.
Simulations help developers in many ways during the entire development process of a new power electronics application. The development process typically comprises the functional concept, the component selection, the mechanical design, the controller implementation, and finally, the test and verification under normal and faulty conditions.
Ideally, the development process follows the familiar V-model (Figure 1), where the design and implementation proceed top-down and the verification proceeds bottom-up. At each level, the developed component is verified with appropriate test tools and procedures before being integrated into a larger system.
For verification, it is important that the same specifications are used as for the design. Although the design and verification tools may use models with different levels of detail, care must be taken to ensure that these models produce comparable results. This article explains why a single model is not sufficient for all applications and how to ensure that multiple models deliver consistent results despite their different implementations and use cases.
In the first phase of a new design, the feasibility of a particular power circuit and a proper control scheme can be assessed quickly with ideal circuit components and generic control blocks. During this conceptual phase, no manufacturer-specific information is needed to model the individual components of a power converter. Instead, power semiconductors such as MOSFETs and diodes are represented by simple on/off switches that correspond to their functional behavior. The parasitic properties of passive components like winding resistance and inductor saturation might be neglected unless they play a functional role in the circuit operation.
Likewise, the controls are represented by functional block diagrams or state machines, independent of their final implementation. These controls may later be realized as analog circuitry, digital logic, micro-controller code, or hardware PWM generators.
Simulation software used in the conceptual phase must provide generic models for all types of electrical components encountered in power electronic applications. As power semiconductors are normally operated in switch mode, they should be modeled as ideal switches to reduce the complexity of the simulation model. The user must be able to connect arbitrary components and to create their own customized components by means of subsystems.
During the conceptual phase, the system model can grow in complexity and size. In order to obtain fast transient simulations without significant integration errors, the use of a variable time-step solver is strongly recommended. A variable-step solver automatically adjusts the time step so that the maximum integration error is not exceeded and switching instants and other discontinuous events are hit exactly.
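The idea behind variable-step error control can be sketched in a few lines. The following is a minimal, illustrative example (not the PLECS solver) that integrates a first-order RC circuit with step-doubling error control: each step is computed once with a full Euler step and once with two half steps, and the difference serves as the local error estimate that steers the step size. All names and tolerances are assumptions for illustration.

```python
def simulate_rc(v0=0.0, vin=1.0, rc=1e-3, t_end=5e-3, tol=1e-6):
    """Integrate dv/dt = (vin - v)/rc with simple step-doubling error control."""
    def f(x):
        return (vin - x) / rc

    t, v, dt = 0.0, v0, 1e-5
    while t_end - t > 1e-12:
        dt = min(dt, t_end - t)               # land on the end time exactly
        v_full = v + dt * f(v)                # one full Euler step
        v_half = v + 0.5 * dt * f(v)          # two half steps over the same span
        v_two = v_half + 0.5 * dt * f(v_half)
        err = abs(v_two - v_full)             # local error estimate
        if err > tol:
            dt *= 0.5                         # reject the step: error too large
            continue
        t += dt
        v = 2.0 * v_two - v_full              # accept, with local extrapolation
        if err < tol / 4.0:
            dt *= 2.0                         # error comfortably small: grow step
    return v
```

After five time constants the capacitor voltage should be close to the exact value 1 − e⁻⁵ ≈ 0.9933, while the solver has taken large steps wherever the waveform was smooth.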
The simulation software PLECS has been designed from the beginning for the simulation of power electronic systems. One distinguishing feature is its capability to model power semiconductors as true ideal switches, which allows for fast and robust simulations of large circuits without the need to tune solver settings.
Based on the current and voltage requirements determined by the circuit design, suitable components with specific part numbers are selected, and the mechanical layout is created. In this second phase, simulations help predict thermal losses and resulting temperatures. In a continuous workflow, the thermal model presents itself as a new layer placed over the electrical circuit. The thermal domain computes the losses generated by the electrical circuit and models the heat transfer.
Switching and conduction losses can be simulated efficiently using a set of loss tables for each semiconductor. Tables for switching losses provide the dissipated energy depending on the blocking voltage, the ON-state current and the junction temperature of the device. Tables for computing conduction losses show the voltage drop as a function of on-state current and device temperature. In each simulation step, the thermal losses are calculated and fed into a thermal equivalent network that comprises the semiconductor device, the heat sink, and the external cooling arrangement.
The advantage of using loss tables over simulating voltage and current transients during switching is that ideal switching, and thus high simulation speed, is maintained. The computation of the thermal domain adds only modest extra effort to the electrical simulation based on ideal components.
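The loss-table mechanism described above can be sketched as follows: the switching energy is looked up by bilinear interpolation over blocking voltage and on-state current, and the resulting average loss feeds a simple thermal RC network. All table values and parameters below are illustrative placeholders, not data for any specific device or the exact PLECS implementation.

```python
def interp1(x, xs, ys):
    """Piecewise-linear interpolation, clamped at the table edges."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if x <= xs[i + 1]:
            a = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + a * (ys[i + 1] - ys[i])

def eon_lookup(v_block, i_on, v_grid, i_grid, table):
    """Bilinear lookup: interpolate each voltage row along current,
    then interpolate the results along voltage."""
    row = [interp1(i_on, i_grid, r) for r in table]
    return interp1(v_block, v_grid, row)

# Illustrative turn-on energy table E_on [J]; rows follow v_grid, columns i_grid
v_grid = [300.0, 600.0]            # blocking voltage [V]
i_grid = [0.0, 50.0, 100.0]        # on-state current [A]
e_on = [[0.0, 1.0e-3, 2.2e-3],     # at 300 V
        [0.0, 2.1e-3, 4.6e-3]]     # at 600 V

def junction_temp_step(t_j, p_loss, t_case, r_th, c_th, dt):
    """One explicit-Euler step of C_th * dT/dt = P - (T_j - T_case)/R_th,
    i.e. a one-pole junction-to-case thermal equivalent network."""
    return t_j + dt * (p_loss - (t_j - t_case) / r_th) / c_th
```

In a simulation, the per-event energies from the tables are summed over a step, divided by the step length to obtain an average power, and injected into the thermal network; a real model would chain several such RC poles for device, heat sink, and cooling arrangement.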
The mechanical design, which is carried out in the next phase, includes the placement of the components, the electrical layout, the design of the cooling system, and the integration into a housing.
Because CAD programs and PCB layout software require many parameters other than the ones used for system simulations, the model toolchain is typically broken at this point. In the mechanical design, the developer must adhere to the parameters used in the system simulation or modify them accordingly in the original data set.
Using the spatial layout information from the mechanical design, the developer can determine the influence of parasitic inductances and EMI effects on the electrical simulation. This is only possible if detailed semiconductor models are used that accurately reproduce the switching transients. Since these models are computationally very intensive, they can only be simulated over a few switching cycles and are therefore not suitable for system simulations.
Due to the enormous effort involved in using spatial information and obtaining detailed semiconductor models, as well as the limited gain of new insights, such low-level simulation is often omitted.
In a power electronics application, the development of the controls often requires more effort than the design of the power stage, especially if a microcontroller is employed. The development of the controller should, therefore, begin immediately after the conceptual phase has been completed, before the actual power hardware is built.
In the classical approach, experienced software developers will implement the control code for a specific target MCU according to the specifications of the controls engineers. Implementing embedded control code, though, requires continuous testing. This can be simplified by compiling excerpts of the control code into a DLL destined for the host computer instead of the target MCU. Most system simulation software can include DLLs so that the control code can be verified against a model of the controlled system. In addition, PLECS offers the possibility of pasting C code directly into the editor of the C-Script block. This code is then automatically compiled and executed along with the model.
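The workflow described above can be sketched in miniature: a control routine stands in for the control code compiled into a host DLL, and it is exercised in closed loop against a simple model of the controlled system instead of real hardware. The PI gains and the first-order plant below are arbitrary assumptions chosen only to make the loop settle.

```python
def pi_controller(state, reference, measurement, kp=0.5, ki=50.0, dt=1e-4):
    """Discrete PI controller; `state` carries the integrator between calls,
    just as a compiled control routine would carry static state."""
    error = reference - measurement
    state += ki * error * dt
    return state, kp * error + state       # new integrator state, actuator command

def run_sil(reference=1.0, steps=2000, dt=1e-4, tau=5e-3):
    """Closed loop: the controller output drives a first-order plant
    dy/dt = (u - y)/tau, which replaces the real power stage."""
    y, integ = 0.0, 0.0
    for _ in range(steps):
        integ, u = pi_controller(integ, reference, y, dt=dt)
        y += dt * (u - y) / tau            # plant model instead of hardware
    return y
```

Because the same routine can later be compiled for the target MCU, any discrepancy found here is found before hardware exists.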
In order to ensure that the implemented control code and the functional block diagram of the conceptual phase produce the same results, a Configurable Subsystem can be used in PLECS, one implementation of which contains the block diagram and the other the code. Simulating multiple implementations of the same functionality and overlaying the results in a PLECS Scope is a crucial test for model continuity.
A more modern approach to implementing controls on an MCU is to generate target-specific C code from the functional block diagram automatically. In addition to its offline implementation, each block must provide a method for outputting real-time capable C code. The block diagram typically contains generic signal processing blocks as well as target I/O blocks to configure the on-chip peripherals such as ADCs and PWM generators. Especially for beginners, auto-coding speeds up the development enormously, because peripheral configuration through program code is a daunting task that requires in-depth knowledge about the MCU or extensive study of the manual. With respect to model continuity, auto-coding has the advantage that the generated code always adheres to the block diagram definition. However, the responsibility for an efficient implementation of the controls is now shifted from the software developer to the controls engineer.
If the target MCU is not yet selected or the entire control board with the signal conditioning electronics is not yet available, the MCU can be temporarily replaced by a much more powerful real-time processing platform such as the PLECS RT Box. This approach is known as Rapid Control Prototyping (RCP) and helps to quickly get a working setup consisting of the power stage and controller.
In PLECS, by switching from a particular MCU or the RT Box to a generic target, the generated code can be used instead of the original block diagram in the offline simulation. This feature allows for quick verification of the generated code by overlaying the simulation results.
We have seen how handwritten or automatically generated code can be verified against the block diagram definition in an offline simulation by replacing the block diagram with the compiled code. This type of verification is called software-in-the-loop (SIL) testing and allows for short turnaround times since no MCU needs to be flashed. However, the proper configuration of MCU peripherals cannot be verified with SIL, nor can timing issues, processor utilization, or resource corruption be detected on the target MCU.
Testing not only the control code but the entire control hardware, including the MCU peripherals, normally requires the real power stage to be connected to the controller. However, this is often impractical because the power stage and its protection may not be fully developed before final commissioning, and certain faulty operating conditions may even damage the power stage.
To test the control hardware independently of the power stage, the controller can be connected to a real-time simulator that mimics the behavior of the power stage. This approach is referred to as hardware-in-the-loop (HIL) simulation, as the actual control hardware is part of a closed-loop simulation. Replacing the power stage with a real-time simulation has the advantage that the behavior of the controller can be extensively tested under a large number of normal and faulty conditions.
Controller HIL testing has become very popular because the connection at the signal level provides a clearly defined interface between the controller and the power stage, and only a few modifications are needed to replace the power stage with a real-time simulator.
The challenge with HIL simulations of power electronics is the small time constants prevailing in electrical circuits. Only a short additional latency, which is inevitably introduced by the real-time simulator, is acceptable. The computational latency between capturing the PWM signals and providing simulated sensor signals to the controller should be kept to a minimum since the controller should still behave as if it were connected to the real power stage. This requires not only dedicated real-time hardware such as the PLECS RT Box but also converter models optimized for fast and accurate execution in fixed time-step simulations.
Real-time simulations are always a race against advancing time since the computation of each time step must be completed within one discretization period. On a modern real-time platform, the minimum achievable time step is on the order of a few microseconds and may depend on the model size. Even with specialized converter models computed entirely on an FPGA, the time step can hardly be reduced below 0.5 μs.
The switching frequencies of today’s power converters are mostly between 10 kHz and 100 kHz. If the PWM signals were sampled only once per simulation step, the resolution of the captured duty cycle would be insufficient. In order to capture the duty cycle more accurately, the PWM signals are usually sampled at much smaller intervals of about 10 ns and averaged over one simulation step. The average value represents the relative on-time of the PWM signal during that step.
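The oversampling scheme can be illustrated numerically: a PWM input is sampled on a fine 10 ns grid and averaged over one 1 μs simulation step, so each step sees the relative on-time rather than a single 0/1 sample. The waveform generator below is only a stand-in assumption for a real PWM pin; a 50 kHz carrier with a 37% duty cycle is used as an example.

```python
SAMPLE_NS = 10            # fine PWM sampling interval [ns]
STEP_NS = 1000            # real-time simulation step, 1 us [ns]

def pwm_level(t_ns, period_ns=20000, duty=0.37):
    """Ideal PWM waveform: high for the first `duty` fraction of each period."""
    return 1 if (t_ns % period_ns) < duty * period_ns else 0

def averaged_duty(t_start_ns):
    """Relative on-time of the PWM signal during one simulation step."""
    samples = [pwm_level(t_start_ns + k * SAMPLE_NS)
               for k in range(STEP_NS // SAMPLE_NS)]
    return sum(samples) / len(samples)

def captured_duty_over_period(period_ns=20000):
    """Average the per-step duties over one full PWM period."""
    steps = period_ns // STEP_NS
    return sum(averaged_duty(k * STEP_NS) for k in range(steps)) / steps
```

With single-sample capture, a 1 μs step could only resolve the duty cycle in steps of 5% at 50 kHz; with 10 ns oversampling, the 37% duty cycle is recovered exactly over one period.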
Averaged PWM signals can accurately represent the semiconductor gate signals only if they are applied to appropriate converter models. These models are based on controllable voltage and current sources instead of the ideal ON/OFF switches (Figure 2). Additional logic is employed to model discontinuous conduction mode and semiconductor blanking time. Since the simulation time step, and thus the averaging interval, is usually much smaller than one PWM switching cycle, in PLECS this modeling approach is referred to as sub-cycle averaging. Such converter models are also sometimes referred to as time-stamped bridges.
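A minimal sketch of the source-based modeling idea, assuming a half-bridge leg driving an RL load: the switches are replaced by a controlled voltage source d·Vdc on the AC side and a controlled current source d·i_load on the DC side, where d is the averaged duty from the previous step. The blanking-time and discontinuous-conduction logic mentioned above is deliberately omitted, and all parameter values are illustrative.

```python
def averaged_halfbridge_step(i_load, duty, vdc=400.0, r=1.0, l=1e-3, dt=1e-6):
    """One fixed-step update of a sub-cycle-averaged half-bridge leg.

    The AC terminal is modeled as a controlled voltage source duty*vdc
    feeding a series RL branch; the DC side draws duty*i_load."""
    v_out = duty * vdc                  # controlled voltage source (AC side)
    i_dc = duty * i_load                # controlled current source (DC side)
    di = (v_out - r * i_load) / l       # RL load dynamics
    return i_load + dt * di, i_dc
```

Run at a fixed 1 μs step with a constant 50% duty, the load current settles to Vdc·d/R and the DC-side current to half of that, matching the average behavior of the switched circuit.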
While sub-cycle average models are recommended for high-fidelity real-time applications, converter models based on ideal switches continue to be preferred in offline simulations. Because the sub-cycle averaging makes certain assumptions and simplifications, the simulation result might not always match exactly. Also, when using the generated C code for fixed time-step simulations, the results can deviate slightly from the continuous-time model.
It is important that the same circuit model developed in the design phase is used when verifying the controller during HIL testing. However, a versatile offline model usually differs from a computationally efficient real-time implementation. To solve this dilemma, the power modules for various types of converter and inverter bridges in the PLECS library are all equipped with two implementations. One is based on ideal switches, and the other uses sub-cycle averaging. The user can easily switch between them. To make sure that the two implementations behave the same even when discretized, the user should compare the results of the generated real-time code with the continuous model in an offline simulation, as shown in Figure 3. This can already be done before uploading the code onto the RT Box.
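The kind of comparison suggested here can be sketched with a deliberately simple example: a fixed-step forward-Euler discretization of an RL branch, standing in for the generated real-time code, is checked against the exact continuous-time solution, standing in for the offline model. Circuit values are arbitrary assumptions.

```python
import math

def rl_fixed_step(v=10.0, r=2.0, l=1e-3, dt=1e-6, t_end=5e-3):
    """Discretized model, as generated real-time code would compute it."""
    i = 0.0
    for _ in range(int(t_end / dt)):
        i += dt * (v - r * i) / l
    return i

def rl_exact(v=10.0, r=2.0, l=1e-3, t_end=5e-3):
    """Continuous-time reference: i(t) = V/R * (1 - exp(-R*t/L))."""
    return v / r * (1.0 - math.exp(-r * t_end / l))
```

The two results agree closely but not exactly, which is precisely why the overlay comparison in the offline simulation is worthwhile before the code goes onto the RT Box.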
PLECS RT Box
Real-time platforms for HIL testing are traditionally divided into universal systems based on multi-purpose CPUs and highly specialized systems for parallel computing on FPGAs. Conventional CPUs connect peripherals such as digital and analog I/Os via PCI Express, a serial bus protocol that has a communication latency of at least 10 μs. Despite their high processing power, the simulation time step is thus limited to approximately 20 μs — too much for most power electronics applications. FPGA-based systems, on the other hand, allow simulation time steps of 1 μs or less and can access peripherals directly. Although ideal for massively parallel computing, FPGAs perform poorly when they execute sequential code, restricting possible applications.
To address the shortcomings of conventional CPUs and FPGAs, the RT Box uses a Zynq System-on-Chip (SoC) from Xilinx that incorporates multiple ARM CPU cores alongside an FPGA. The tight integration of the CPU cores with the FPGA enables I/O latencies of a few hundred nanoseconds. The CPU cores can compute any simulation model, while the FPGA is employed as a co-processor for streaming tasks. With typical simulation step sizes between 1 μs and 10 μs for power electronic circuits, the RT Box closes the performance gap left by conventional HIL simulators.
The real advantage of the RT Box lies in its end-to-end interoperability with the PLECS simulation software. At the push of a button, a PLECS model is converted into real-time C code, compiled, uploaded, and started on the RT Box. With the External Mode, real-time simulation data from the RT Box can be displayed in PLECS Scopes and compared with results from offline simulations.
Plexim is currently expanding its product portfolio of RT Boxes (Figure 4). The updated RT Box 1 remains the most cost-effective HIL platform for power electronics and now features two CAN transceivers. The new RT Box variants 2 and 3 deploy the next-generation multi-processing SoC from Xilinx. Besides additional connectivity for industrial communication, the RT Box 2 and 3 feature interfaces for magnetic resolvers. Compared to the RT Box 2, the RT Box 3 has twice the number of analog and digital I/Os. The main differences between the new RT Box variants are outlined in Table 1. With the new hardware offerings, Plexim will expand its position as a provider of HIL systems and offer its customers even more comprehensive solutions supporting the continuity of simulation models.
|                    | RT Box 1 | RT Box 2 | RT Box 3 |
|--------------------|----------|----------|----------|
| CPU Clock Speed    | 1 GHz    | 1.5 GHz  | 1.5 GHz  |
| Analog I/Os        | 16/16    | 16/16    | 32/32    |
| Digital I/Os       | 32/32    | 32/32    | 64/64    |
| Resolver I/Os      | -        | 1/1      | 2/2      |
| SFP+ Interconnects | 4        | 8        | 8        |
| USB 2.0/3.0        | 1/-      | -/1      | -/1      |
Using the same model for all simulations throughout the development process from design to verification is an ideal goal that is difficult to achieve in practice. Different aspects, such as system or device behavior, may require various simulation tools and models with very different levels of detail. Obtaining a single “digital twin” that represents the entire system and using this twin as the only source of truth is, therefore, illusory.
Nevertheless, it is perfectly realistic to establish a development process in which all system parameters are stored in a central location and referenced by all models. This central location could be a comprehensive database or just a simple initialization script. Certain parameters will be used only by some models for simulating selected aspects. Using an integrated toolchain like PLECS helps to verify models for different purposes against each other and thus enables model continuity.
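The "simple initialization script" idea can be sketched as follows: one central parameter set is referenced by both an electrical and a thermal calculation, so changing a value in one place propagates to every model that uses it. All names, values, and the two derived quantities are illustrative assumptions.

```python
# Central parameter set, shared by all models (an "initialization script")
PARAMS = {
    "vdc": 400.0,        # DC-link voltage [V]
    "l_filter": 1e-3,    # filter inductance [H]
    "r_filter": 0.1,     # winding resistance [Ohm]
    "f_sw": 20e3,        # switching frequency [Hz]
    "r_th_jc": 0.5,      # junction-to-case thermal resistance [K/W]
}

def electrical_ripple(p=PARAMS):
    """Peak-to-peak inductor current ripple of a buck leg at 50 % duty:
    dI = Vdc * D * (1 - D) / (L * f_sw)."""
    return p["vdc"] * 0.25 / (p["l_filter"] * p["f_sw"])

def thermal_rise(p_loss, p=PARAMS):
    """Steady-state junction temperature rise over the case."""
    return p_loss * p["r_th_jc"]
```

Note that each model reads only the parameters it needs, which is exactly the point: the models differ, but their common inputs come from one place.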
About the Author
Jost Allmeling received his M.S. degree in electrical engineering from Aachen University in 1996, and his Ph.D. degree from the Swiss Federal Institute of Technology in 2001. In 1996, he became a Research Associate with the Power Electronics Laboratory. From 2001 to 2003, he was a Postdoctoral Researcher with the Power Systems Laboratory. In 2002, he cofounded Plexim, a spin-off company from ETH Zurich that develops the software PLECS for fast simulation of power electronic systems. He is currently the Managing Director of Plexim.