
Parameter Dependencies: Problems and Solutions



This master’s thesis concerns the development of embedded control systems. The development process for embedded control systems involves several steps, such as control design, rapid prototyping, fixed-point implementation and hardware-in-the-loop simulation.

Another step, which Volvo does not currently use within climate control, is on-line tuning. One reason for not using this technique today is that the available tools for the task (ATI Vision, INCA from ETAS or CalDesk from dSPACE) do not handle parameter dependencies in a satisfactory way. Under these constraints it is not possible to use on-line tuning, and the controller development process is more laborious and time consuming.

The main task of this thesis is to solve the problem of parameter dependencies and to make on-line tuning possible.


1.1 Background

Volvo Technology (VTEC) is an innovation company that provides expert functions and develops new technology for “hard” as well as “soft” products within the transport and vehicle industry. Among other things, VTEC works with embedded control systems. For one embedded control system in particular, the Climate Control Module (CCM), VTEC works with the whole chain. VTEC does this for Volvo Cars, Volvo Trucks, Volvo Construction Equipment, Renault Trucks and Land Rover.

The work process for embedded control system development is typically as follows:

  1. Control Design
  2. Rapid Control Prototyping
  3. Fixed-Point Implementation
  4. Hardware-in-the-Loop Simulation
  5. Online Tuning.

It is an iterative process, but there is one problem with the last step, which limits the possibilities of working iteratively. Control design is typically done in MATLAB/Simulink, and fixed-point implementation is typically done with a tool such as TargetLink. During these steps the parameters may be handled in an m-file. When going to the on-line tuning step, however, the parameters are handled in a tool such as ATI Vision, INCA or CalDesk. Once this step has been taken, the connection to the m-file is lost. The last step is therefore somewhat of a one-way step. It is not completely impossible to go back to the earlier steps in the development chain, but the iterative process is not well supported by the on-line tuning tools available today.

The following m-script instructions are examples of parameter dependencies that will cause the mentioned problems:

Heating = [ -100, -20, 0, 20, 100 ];

BlowerHt = [ 12, 5, 4, 5, 10 ];

Blower_min = min( BlowerHt );

Defrosting = [ 0, 20, 100 ];

BlowerDef = [ Blower_min, Blower_min, 10 ];

Using the above vectors in two interpolation tables, one with Heating as input vector and BlowerHt as output vector, and another with Defrosting as input vector and BlowerDef as output vector, would cause problems during the on-line tuning process. Three of the elements are meant to have identical values, but the tools as they are today would allow them to be tuned individually. This is just one of many constructs that are very useful as long as you stay in the MATLAB environment but cause problems during the on-line tuning process.
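To make the dependency concrete, the scenario above can be sketched in a few lines (Python is used here purely for illustration; the actual parameters live in an m-script):

```python
# Illustrative sketch of the dependency in the m-script above.
# In MATLAB this is Blower_min = min(BlowerHt); here we re-evaluate it explicitly.

Heating = [-100, -20, 0, 20, 100]
BlowerHt = [12, 5, 4, 5, 10]
Blower_min = min(BlowerHt)                 # evaluates to 4
Defrosting = [0, 20, 100]
BlowerDef = [Blower_min, Blower_min, 10]   # first two elements depend on BlowerHt

# After tuning: raising the smallest BlowerHt element should update both
# dependent elements of BlowerDef. An on-line tuning tool that has lost
# the dependency would instead let the three values drift apart.
BlowerHt[2] = 6
Blower_min = min(BlowerHt)                 # now 5
BlowerDef = [Blower_min, Blower_min, 10]
```

The point of the sketch is that the last two assignments happen automatically when the m-file is re-run in MATLAB, but nothing corresponding happens inside today's calibration tools.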

1.2 Goals and objectives

The main goals of this master’s thesis are:

  • To investigate the problem of parameter dependencies.
  • To find possible solutions.
  • To make online tuning possible for dependency parameters in the development process of embedded control systems.



2.1.1 History of Embedded Systems

In the era of the earliest computers, i.e. the 1930s–40s, computers were generally capable of doing a single task. Over time, with advances in technology, traditional electromechanical sequencers gave way to programmable controllers built from solid-state devices.

“One of the first recognizably modern embedded systems was the Apollo Guidance Computer, developed by Charles Stark Draper at the MIT Instrumentation Laboratory.”[1]

Since the early applications of the 1960s, the prices of embedded systems have come down and their processing power has increased dramatically. A standard for programmable microcontrollers was released in 1978 by the National Electrical Manufacturers Association. This standard covered almost any computer-based controller, for example event-based controllers and single-board computers.

When the production cost of microprocessors and microcontrollers fell, it became feasible to replace old, big and expensive components such as potentiometers and varicaps with microprocessor-read knobs.

With the integration of microcontrollers, the application of embedded systems has increased further. Embedded systems are being used in areas where computers would generally not have been considered. Since most of the complexity is contained within the microcontroller itself, very few additional components are needed, and most of the development effort therefore lies in the software.

2.1.2 Common Characteristics

Embedded systems have several common characteristics:

  • Uni-functional: Embedded systems are usually designed to execute only one program, repeatedly. For example, an ordinary scientific calculator will only ever do calculations. A laptop computer, on the other hand, can execute an enormous number of different programs, such as web browsers, word processors, programming tools and video games, and new programs are added frequently.
  • Tightly constrained: All computing systems have constraints on design metrics, but for embedded systems these constraints can be very tight. A design metric is defined as “a measure of an implementation’s features, such as cost, size, performance, and power”. Embedded systems are often required to cost just a few dollars, must be designed to a minimum size to fit on a single chip, must process real-time data fast enough, and must consume minimal power in order to extend battery life or to avoid the need for a cooling fan.
  • Reactive and real-time: Many embedded systems must be able to react continually to changes in the system’s environment. They must also compute certain results in real time, without excessive delay. For example, a cruise controller in a car has to monitor and react to speed and brake sensors continuously. It must compute accelerations or decelerations repeatedly within a quite limited time; a delay in computing the results could lead to a fatal failure to maintain control of the car. A desktop computer, on the other hand, generally focuses on computations with comparatively infrequent reactions to input devices, and a delay in those computations may be inconvenient to the user but does not result in a system failure.

2.2 Model Based Design

Model-based design (MBD) is a mathematical and visual method of addressing the problems associated with designing complex control systems. It is used in the design of much industrial equipment and in automotive and aerospace applications. In this thesis, the focus is on climate control for new vehicles. The methodology is used in designing embedded software.

Embedded software development consists of four steps:

  1. Modeling a plant.
  2. Analyzing and synthesizing a controller for the plant.
  3. Simulating the plant and controller.
  4. Integrating all these phases by developing the controller.

Model-based design is quite different from the conventional design method. In this methodology the designer uses continuous-time and discrete-time building blocks instead of long and complex hand-written code.

Model-based design enables fast prototyping, testing and verification. In addition to these advantages, dynamic effects on the system can be tested in hardware-in-the-loop (HIL) simulation mode.

Some important steps in model-based design approach are:

  1. By choosing an appropriate algorithm and acquiring real-world system data, various types of simulations and analyses can be performed before producing a real controller.
  2. The model produced in step one is used to identify characteristics of the plant model. A controller can then be designed based on these characteristics.
  3. Using this model, the effect of time-varying inputs can be analyzed. In this way possible errors can be eliminated, and it is very convenient to change and test other parameters.
  4. The last step is deployment.

Advantages of model based design compared with the conventional approach are as follows:

  • Model-based design provides a common design environment, which is important for development groups both for general communication and specifically for data analysis and system verification.
  • Model-based design enables engineers to detect and correct errors in an early phase of development, which is crucial for minimizing the time and financial impact of errors.
  • A model-based design can be reused later for upgrades and for derivative systems with expanded capabilities.


2.3.1 Conventional Approach for ECU Development

The conventional approach to electronic control unit (ECU) development is summarized in the following four steps:

  1. Experienced personnel define the functions and the system architecture, and the hardware engineers design the hardware circuit.
  2. Control engineers design the control algorithms, and a programmer writes code for those algorithms by hand.
  3. The control algorithm code and the hardware are then integrated and tested by a system engineer, or perhaps a hardware engineer.
  4. Finally, the complete system is tested on the engine test bench.

There are a few problems with this conventional approach to ECU development.

The first and most serious problem is that the hardware circuits are made before the control rules and results have been confirmed. This factor alone adds a large risk to the ECU development process.

Secondly, if an error is encountered during testing of the program code, it is very difficult to judge whether the error lies in the software code or in the control algorithms. Programming the control algorithm is in itself a very time-consuming process, and it takes additional time when errors are encountered and must be debugged. Since many people from different fields of work are involved in this process, coordination between them also takes time and increases the development cost. [2]

This is why the conventional development process cannot satisfy the demands and requirements of the modern age.

2.3.2 Modern ECU Development

On the basis of an integrated development environment, the modern development of electronic control units can be completed and tested efficiently. Using model-based simulation and hardware-in-the-loop simulation it is very easy and convenient to eliminate software errors and to modify the control algorithms. As a result, the development cost is reduced and development efficiency is improved. This modern development process is called the V-cycle development process.

This process is illustrated in Fig. 1.

Fig. 1. The V-Cycle of model-based software development. [2]

This process is summarized as follows:

  • Using sophisticated tools such as MATLAB/Simulink/Stateflow and dSPACE TargetLink, the control algorithms are modeled. These control algorithms are verified using off-line simulations.
  • ANSI C code is produced using a code generation tool; the one used here is dSPACE TargetLink.
  • The code produced in the above step is compiled and downloaded into the control module, and simulation is done in hardware-in-the-loop mode, which confirms the credibility of the control algorithms.
  • The tested program code of the control algorithms is downloaded into the electronic control unit for further test and modification.
  • Finally, calibration of the whole control system is performed.

2.4 Universal measurement and Calibration Protocol (XCP)

“XCP is a standardized and universally applicable protocol with much rationalization potential. It is not only used in ECU development, calibration and programming, it is also used to integrate any desired measurement equipment for prototype development, functional development with bypassing and at SIL (software-in-the-loop) and HIL (hardware-in-the-loop) test stands.”[16]

For calibration and measurement, it is common practice to connect electronic control units in a CAN* network. For this purpose the CAN Calibration Protocol has been used extensively. With increasing demands for more sophisticated controllers, new electronic control units are becoming more and more complex, and for that reason new networks such as FlexRay and TTCAN are being developed.

To meet the needs of these new networks, the measurement and calibration protocol must be more generalized and flexible. This generalized and flexible protocol is XCP (Universal Measurement and Calibration Protocol).

XCP is independent of the transport layer. The “X” in XCP generalizes the various transport layers that are used by the members of the protocol family [9], e.g.:

XCP on FlexRay

XCP on Ethernet

XCP on USB, and so on.

* Details about CAN are provided in Appendix A.


Fig. 2. XCP support for different transport layers [10].


This chapter will give answers to the following questions:

  • What is the parameter dependency problem?
  • What is the effect of the parameter dependency problem on the tuning of embedded control systems?
  • What are the difficulties in solving the problem on different platforms?

Note: All examples used in this report are for illustration purposes only and are NOT the actual parameters used in the climate control modules of Volvo Cars and Volvo Trucks.

3.1 Complete process for developing embedded control systems

The complete process for developing embedded control systems is illustrated in Fig. 3. The first step of the development process is to define the parameters, which can be done in the m-file. The parameter values are loaded into the MATLAB base workspace, from where the TargetLink/Simulink model fetches them to simulate the process.

After checking the simulation results and making modifications if required, C-code is generated by TargetLink. This C-code contains all the information about the control algorithm and the input values. In the next step the auto-generated C-code is compiled using the Green Hills suite.

Fig. 3. Complete production process.

The Green Hills software, together with GNU Make and the VBF converter, is used to generate a map file and a VBF file (Volvo Binary Format). The VBF file is downloaded into the embedded controller. The map file is used by TargetLink to generate an A2L file. The A2L file is required by the calibration tool (in this project ATI VISION is used for calibration), and using the calibration tool we can then modify parameters in the ECU. These modifications are also called tuning.

3.2 Parameter Dependency

As all parameters are defined in an m-file, some parameters depend on the values of other parameters. It is also possible that values obtained as the result of a calculation between two or more parameters are used in the definition of other parameters. All parameters whose definitions contain other parameters, or calculations on other parameters, are therefore called dependent parameters, e.g.

Among the parameters in the above example:

  • Parameter 2 is dependent on parameter 1.
  • Parameter 4 is dependent on parameter 2 and 3.
  • Parameter 6 is dependent on parameter 2 and 3.
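Using hypothetical names in place of “parameter 1..6”, this dependency structure can be sketched as a small graph (Python for illustration):

```python
# Hypothetical parameter names standing in for "parameter 1..6" above.
# Each entry maps a parameter to the parameters its definition uses.
depends_on = {
    "Par1": [],
    "Par2": ["Par1"],
    "Par3": [],
    "Par4": ["Par2", "Par3"],
    "Par5": [],
    "Par6": ["Par2", "Par3"],
}

# Base parameters are those that depend on nothing;
# the rest are the dependent parameters.
base = [p for p, deps in depends_on.items() if not deps]
dependent = [p for p, deps in depends_on.items() if deps]
```

Note that a dependent parameter such as Par4 may itself depend on another dependent parameter (Par2), so changing one base parameter can ripple through several definitions.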

3.3 Reasons for introducing parameter dependencies

Thinking about parameter dependencies, a question may arise: “Why do we need to introduce parameter dependencies in the first place?”

The answer is that, when designing a control algorithm in a tool such as Simulink, it is convenient to use named parameters (variables) instead of hard-coded numbers (constants).

For instance, suppose the highest fan level available corresponds to a voltage of 13.5 V. The designer may want a parameter for this, so that instead of using the value 13.5 at many points in the algorithm, the name of the parameter can be used. If one day the hardware needs to be changed, and for the new hardware 13.4 V is the maximum that can be used for the highest fan level, it is easier to change one parameter value than to change many hard-coded values in different places.

Sometimes it is also useful to have one parameter depend on another. For instance, a look-up table contains several values in each vector, and these values may depend on other parameters. It would be rather limiting if a vector or a matrix could only contain hard-coded numbers.

So the use of dependent parameters helps to keep a good structure in the algorithm and makes it easier to work with the parameters.

3.4 Statistics about parameter dependency

There is quite a significant number of parameters that depend on other parameters. For instance, in Climate Control P3 the total number of parameters is 1618, out of which 227 (about 14%) are dependent on other parameters and 1391 are independent. We call the independent parameters “base parameters”.

Fig. 4. Percentage of dependent parameters.

3.5 Parameter dependency problem in development process

To analyze the problem of parameter dependency, let us walk through the development process of embedded control systems and find out what exactly the problem with parameter dependencies is.

As the process starts with the parameter definitions in the m-file, the investigation starts from the m-file; see Fig. 5. To visualize this, an example of a parameter with dependencies in its definition is shown below:

Fig. 5. Example of a parameter definition in the m-file.

After all parameters have been defined, the m-file is run in MATLAB. In this step the values of all dependency parameters are evaluated by MATLAB and loaded into the MATLAB base workspace. Precisely during this loading process, the dependencies are replaced by their values, and all information about the relation between a parameter and its dependency parameters is lost.

Fig. 6. Dependency loss in the MATLAB base workspace.

Since the dependency information is now lost, this loss propagates through all further steps, for example into the generated C-code, the A2L file and the strategy file.
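The loss can be mimicked in a few lines (Python for illustration; the parameter names FanMax and FanDef are made up): once the definitions are evaluated, only numbers remain in the workspace and the names are gone.

```python
# Symbolic definitions as they might appear in the m-file:
definitions = {
    "FanMax": "13.5",
    "FanDef": "[FanMax, FanMax, 10.0]",   # depends on FanMax
}

# "Running the m-file": evaluate every definition into plain values,
# analogous to how MATLAB loads them into the base workspace.
workspace = {}
for name, expr in definitions.items():
    workspace[name] = eval(expr, {}, dict(workspace))

# The workspace now holds only numbers. The fact that FanDef was built
# from FanMax cannot be recovered from the workspace contents alone.
```

Everything downstream (C-code generation, the A2L file) reads from this evaluated workspace, which is why the loss propagates.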

Fig. 7 shows the propagation of the dependency information loss. Thus, in the C-code there is no information with which the dependency parameters can be traced.

Fig. 7. Propagation of dependency loss from MATLAB to C-code.

3.6 Effect of parameter dependencies on development process

The problem caused by parameter dependencies comes to the surface during the calibration step.

During calibration the values of the parameters are tuned. When the parameter dependency information is lost, each parameter value must be tuned individually. This is shown in Fig. 8.

Fig. 8. Effect of dependency loss on the development process.

So if a parameter is used, for instance, in the definitions of five different parameters, its value must be tuned at those five locations individually. If any calculation is involved in a parameter definition, it must be done manually and the value updated by hand. This process of changing values manually is very time consuming and error prone.

There is another possibility that avoids these manual calculations and individual tunings: changing the parameter values in the original m-file, where all the parameter definitions reside, and repeating the complete process. This is very laborious and also takes a lot of time, so it is not a feasible option.


Since the complete process for developing embedded controllers is a multistage process that depends on four highly sophisticated software platforms, there are different possible approaches to solving the dependency information loss. The following are the possible platforms for making modifications in order to handle the dependency loss problem:

  • TargetLink model
  • C-code
  • Calibration tool
  • Separate windows application

Below is an in-depth analysis of the above-mentioned platforms and of the possibility of finding a feasible solution on each.

4.1 Parameter dependencies and MATLAB

When the m-script containing all parameter definitions is run in MATLAB, all parameter values are evaluated and stored in the MATLAB base workspace. Right at this first step, the dependency information in the m-script is lost. The reason for this loss is that an ordinary array in the MATLAB base workspace holds values belonging to a single class type, such as “char” or “double”; the elements cannot belong to a mixture of two or more class types, i.e. two elements of an array cannot belong to the “char” class while the other elements belong to the “double” class. (A struct or cell array can mix classes, but the numeric parameter arrays used here cannot.)

Fig. 9. Supported class types in the MATLAB base workspace.

In our example of parameter dependency, we have an array of eight elements. The second and eighth elements of the array are names of other parameters, so these belong to the char class, while the rest of the elements are numerical values belonging to the double class. MATLAB therefore evaluates the values of the dependency parameters and replaces all names with their corresponding values, and our dependency information is lost.

There is a function in MATLAB called “eval”, and this function could be used in place of a dependency parameter name, but it does not solve our problem: the function evaluates the values of those parameters, and eventually it is the value of the parameter that is stored in the base workspace, so the dependency information is still filtered out.

The conclusion is that nothing can be done in MATLAB to save our dependency information unless MathWorks changes MATLAB so that the base workspace supports values belonging to different classes within the same definition.

4.2 Parameter dependencies and TargetLink

In TargetLink we can use custom look-up tables and include custom code. Suppose for a moment that by adding such custom look-up tables and some extra blocks we manage to reintroduce the lost dependency information into the TargetLink model. When TargetLink generates C-code, however, it will most probably evaluate all those values and include the resulting values in the C-code.

There are two reasons for this behavior of TargetLink:

  • First, TargetLink works inside MATLAB, so all calculations are done in MATLAB and we face the same problem as described previously.
  • Second, dSPACE states that TargetLink generates C-code in the most efficient way possible: since this C-code is flashed into the controller in binary format, TargetLink makes every effort to keep the C-code as small as possible because of the limited memory of the ECU and the demand for high operational speed.

So TargetLink will not generate extra variables and pointers in the C-code unless significant changes are made to TargetLink by dSPACE.

4.3 Parameter dependencies and C-code

The C-code generated by TargetLink can be modified, and it is possible to add any kind of extra information, but two reasons make this possibility impracticable:

  • First, this C-code will be flashed into the ECU, where memory is very limited, and larger C-code results in a less efficient embedded controller.
  • Second, it requires a lot of manual labor every time something changes, which is also error prone.

4.4 Parameter dependencies and Calibration tool

In a calibration tool such as ATI VISION, there is an option to use scripts written in the VISION scripting language or in Visual Basic. Instead of doing manual calibration, the calibration can be automated using a script.

In our case, we have matrices with dependencies. In order to calibrate using the scripting option, we would have to write functions for matrix calculations, and the script would have to be able to evaluate the dependencies according to the new values. This option is therefore not very feasible.

4.5 Separate windows application

After analyzing all the possibilities, only one option is left: to develop a separate Windows application that extracts the dependency information from the m-script, calculates the values of the dependency parameters according to the values tuned in the calibration tool, and writes those new dependency values back into the calibration tool.


After analysis of all possible solutions, it is concluded that the most feasible solution to the dependency loss problem is a separate Windows application which:

  • Extracts dependency information from m-file.
  • Gets tuned parameter values from calibration tool.
  • Calculates all values corresponding to those tuned parameter values.
  • Implements the updated values of the dependency parameters back in the calibration tool.
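A minimal sketch of this update cycle, with hypothetical parameter names and the VISION interface left out, might look as follows (Python for illustration):

```python
# Dependency definitions as extracted by the parser (hypothetical names).
# Each dependent parameter maps to a function of the base parameters.
formulas = {
    "BlowerDef0": lambda base: min(base["BlowerHt"]),
    "BlowerDef1": lambda base: min(base["BlowerHt"]),
}

def recalibrate(tuned_base, formulas):
    """Recompute every dependent value from the tuned base parameters."""
    return {name: f(tuned_base) for name, f in formulas.items()}

# One cycle: base values tuned in the calibration tool come in,
# updated dependent values go back out to the tool.
tuned = {"BlowerHt": [12, 5, 6, 5, 10]}   # as if edited by the user
updated = recalibrate(tuned, formulas)
```

In the real application, `tuned` would be read from VISION and `updated` written back to it; the sketch only shows the recomputation step in between.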

5.1 Reasons for selecting this solution

Among the possible solutions we have selected the development of a separate Windows application as the feasible one. The major reasons for this choice are as follows:

  • The selected solution, a separate Windows application, does not require any modification of existing software. It is fast, requires no extra licenses, and works exactly according to our requirements.
  • Any solution that involves modifying the software tools requires the involvement of the tool makers. Convincing the tool makers to modify their software to our requirements, and then waiting for a new version to be developed and released if they agree, could take a very long time.
  • The tool makers would charge a great deal of money to make the specified changes or to build an add-on application for their software.

5.2 Overview of solution

The solution is an application named “Dependency Calibrator”. It works in two steps.

In the first step the m-file is parsed, and the information about the dependency parameters, along with their locations in the parent parameters, is extracted and rearranged so that it can be used in the second step, calibration.

In the second step, the application first imports data from VISION, so that any values the user has tuned in the calibration tool are updated in MATLAB. The application then performs the calculations in MATLAB, after which the new values obtained from those calculations are written back to VISION. This cyclic process, from VISION to MATLAB and back to VISION, updates the parameter values. If the user has changed values that are used by other parameters, the new values are updated at all locations where they are used. This is shown in Fig. 10.

Fig. 10. Overview of solution.

The application “Dependency Calibrator” is divided into two parts.

  • Parser
  • Calibrator

A detailed explanation of how the application works follows.

5.3 Required Software

The parser works without any external software, but in order to run the “Calibrator” the following software must be installed on your system:

  • MATLAB R2007b
  • ATI VISION 3.5.3

MATLAB is launched automatically by the application, but make sure to launch ATI VISION before you use the “Calibrator” part of the “Dependency Calibrator” application.

5.4 Project file

The project file is the key to controlling the “Dependency Calibrator” application. Instead of using hard-coded paths for the different files used by the application, users are given the option to select their desired locations. These locations are specified in a separate file, called the project file.

In the project file, the instructions are given after certain tags. One must be careful not to alter these tags; user input is given after the symbol “@”.

The “Dependency Calibrator” application is in fact capable of handling multiple m-files and multiple C files. The directory paths for these files can be specified in the project file.

The project file contains the following tags:

  • VISION’s Device Name @ : After this tag, the name of the hardware device used in the VISION device tree should be given. For example,
    VISION’s Device Name @ PCM
    VISION’s Device Name @ CCM
  • Path of m File @ : After this tag the full path of the m-file should be given. If there is more than one m-file, this tag followed by the file path should be repeated on a new line for each. The parser reads all these files and merges them into one. For example,
    Path of m File @ C:\FolderName\subFolder\File_Name.m
    Path of m File @ C:\FolderName2\subFolder2\File_Name2.m
  • Root directory for c files @ : In general, C files may be generated in different folders, but their root directory remains the same. In order to avoid repeating the same address and to minimize the chance of error, this tag is introduced in the project file. After this tag the path of the root directory for the C files should be specified. Please note that there should be no “\” at the end of the root directory path. For example:
    Root directory for c files @ D:\ABC_XYZ\subFolder\subSub
  • Folders containing c-files @ : After this tag the names of the folders containing C files should be specified. If there is more than one such folder, the folder names should be separated by a comma “,”. The parser then searches these folders for all C files they contain. For example:
    Folders containing c-files @ FolderMedCfiles,subFoldercFolder
  • Root Output Directory @ : This tag should be followed by the path where the user wants the application to generate all files. For example:
    Root Output Directory @ C:\
  • Extra File for calibrating non-calibratable parameters @ : After this tag there should be the path of the file containing the names of those parameters which are not calibratable but which the user wants to calibrate in VISION. These names should be exactly as defined in the m-file, followed by an underscore “_” and then any desired word or character. For example:
    Extra File for calibrating non-calibratable parameters @ C:\ExtraParNames.txt
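A minimal reader for this “Tag @ value” format might look like the following sketch (Python for illustration; the tag names follow the list above, the file paths are made up):

```python
def parse_project_file(lines):
    """Collect the values given for each 'Tag @ value' line.

    Repeated tags (e.g. several m-file paths) accumulate into a list.
    Lines without an '@' are ignored.
    """
    entries = {}
    for line in lines:
        if "@" not in line:
            continue
        tag, _, value = line.partition("@")
        entries.setdefault(tag.strip(), []).append(value.strip())
    return entries

# Example content mirroring the tags described above (paths are made up):
sample = [
    "VISION's Device Name @ CCM",
    "Path of m File @ C:\\Folder\\File_Name.m",
    "Path of m File @ C:\\Folder2\\File_Name2.m",
    "Root Output Directory @ C:\\out",
]
project = parse_project_file(sample)
```

Splitting on the first “@” is what makes the tag text itself off-limits to the user, which is why the tags must not be altered.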

5.5 Parser

The first part of the complete dependency calibration process is the parser. When “Parser” is executed, a window appears showing two options, “Load Project Fil
