
Complex Eng Syst 2022;2:10. 10.20517/ces.2022.13 © The Author(s) 2022.
Open Access Research Article

Guaranteed consistency between measurements and parameter systems with correlated additive and multiplicative uncertainties

Université de Lorraine, Centre de Recherche en Automatique de Nancy, Vandœuvre-les-Nancy Cedex 54518, France.

Correspondence to: Prof. José Ragot, Université de Lorraine, Centre de Recherche en Automatique de Nancy, CNRS UMR 7039, 2, Avenue de la forêt de Haye, Vandœuvre-les-Nancy Cedex 54518, France. E-mail: José

    Academic Editor: Qichun Zhang, Hamid Reza Karimi | Copy Editor: Fanglin Lan | Production Editor: Fanglin Lan

    © The Author(s) 2022. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


    The estimation of the parameters of a system by a set-membership approach consists in characterizing the set of parameters fully compatible with all the measurements made on the system, with the model of this system, and with the characteristics of the errors and uncertainties that affect the measurements and the system. In this context, the error affecting the measurements is assumed to be bounded and to belong to a set defined a priori. The estimation problem to be solved then consists in finding the set of admissible values of the model parameters consistent with the measurements, the errors, and the uncertainties. These uncertainties are handled by an approach that accounts for two unknowns: the structural error of the model and the values of its parameters. From a practical point of view, the result obtained is a time-varying parameter domain characterized by its bounds. The volume of this domain is minimized by optimizing a precision criterion, so that the proposed model explains the measurements made at each time instant.

    1. Introduction

    1.1. Historical point of view

    Obtaining a representative image of the functioning of a system remains an important step in its management. Indeed, even data-driven techniques rely in one way or another on the synthesis of a model capable of representing the functioning of a system. Several difficulties complicate this synthesis: the uncertainties on the data and measurements, the variations of the parameters characterizing the functioning of the system, and the assumptions made on the structure of the model. Taken separately, each of these difficulties has received many proposed solutions; taking them into account simultaneously, however, remains a major challenge. In what follows, we are interested in identifying a system from uncertain data while accounting for the variability of its parameters over time. More precisely, the aim is to estimate the bounds of the variable parameters simultaneously with those of the measurement errors.

    From a historical point of view, the first works on parametric estimation taking bounds into account were published in the 1980s, and the proposed strategies sought to circumscribe the domain describing the model uncertainties by a simple form. This estimation problem then leads to the determination of the set of admissible parameters known as the Feasible Parameter Set (FPS). This approach was initially designed to deal with a linear model with uncertain parameters and bounded errors. The estimation procedure amounts to determining the set of parameter values explaining all available observations. In this way we can guarantee that these observations are consistent with the bounds of the errors and the structure of the model. Among these sets are, for example, polytopes, zonotopes and ellipsoids. Generally, the ellipsoidal set is favoured in the literature for its simplicity. More recently, constrained zonotopes have been introduced [1-3]; they provide a new set representation that combines the flexibility of convex polytopes with the efficiency of zonotopes. For that reason they have been used extensively in several fields of automatic control, including state estimation, reachability analysis, identification, fault detection and isolation, and diagnosis [3-8].

    For models that are linear with respect to their parameters, the FPS is characterized by a convex polytope that can be easily approximated by an ellipsoid [9] or by an orthotope [10] containing it in the least pessimistic way. One can in particular refer to the works by Walter et al. [11] and Mo et al. [12], which made use of polytopic domains for an exact and recursive characterization. In 2014, a recursive approach was developed to define an orthotopic approximation containing a set of parameters belonging to a polytope [13]. The main idea is to select a restricted number of constraints providing a quantified approximation of the exact set.

    Many results have been published by Milanese et al. [14]. In this historical reference, the authors' idea was to abandon the representation of uncertainties by Gaussian stochastic variables and to substitute a set of possible values of which only the bounds are known. In other works [15, 16], the authors use the same idea of interval representation and develop it for the estimation of the parameters of autoregressive-moving-average-exogenous (ARMAX) models. This approach has of course been extended to models that are nonlinear with respect to their parameters. In order to reduce complexity, various methods have been proposed to determine an approximation of the FPS, and some linear techniques have been extended to the nonlinear case using a succession of linearizations of the model [17].

    To solve the problem of nonlinear estimation with bounded error, the study by Jaulin et al. [18] proposed set inversion techniques based on interval analysis, the idea being to characterize the FPS by means of boxes enclosing it externally and internally. We can also refer to the study by Bravo et al. [19], which uses a bounded description of the measurement noise and represents the set of parameters by a zonotope. The dimension of the zonotope is adapted recursively as the measurements are acquired. The article [20] proposes a minimax estimation of the parameters of nonlinear parametric models using experimental data. After choosing model structures, it is then possible to exhibit sets of linear inequalities describing a domain approximating the FPS. The proposed algorithm effectively combines a local search procedure to decrease the upper bound of the solution with a pruning procedure based on the propagation of interval constraints.

    Of course, identification is not an end in itself, and many works use bounded-parameter models for control, observer synthesis and diagnosis. For example, the article [21] adopts a Bayesian framework to address the problem of fault identification and detection. A new approach to interval fault estimation is proposed in the study by Zhou et al. [8]: a zonotope is used to represent a discrete linear time-varying system in the presence of bounded parametric uncertainties, measured disturbances and system disturbances. More generally, many articles use polytopic representations for the synthesis of state observers. For example, in the study by Valero et al. [22], the authors present an alternative state estimation method using convex polyhedra. The Kalman filter state estimate has also been discussed in light of the zonotope representation [6] and [4], in which, based on a new zonotope dimension criterion and an observer gain design combining optimality and robustness criteria, a zonotopic Kalman filter is proposed with a robust convergence proof. The recent paper [23] also exploits the interval representation of stochastic uncertainties affecting a system in order to synthesize a Kalman filter dedicated to sensor fault detection.

    Despite a possible resemblance, the problem considered in the study by Ploix et al. [24] is significantly different in that the uncertain parameters depend on time; more precisely, they are defined by random variables whose realizations have limited amplitudes. In addition, the proposed method does not use a probabilistic formalism to determine the imprecision with which each model parameter is known. Only a class of linear, structured and static models with uncertain parameters is considered. As already mentioned, the measurement errors are bounded while the system parameters fluctuate within a limited, time-invariant convex domain.

    1.2. Aim of the paper

    Thus, the proposed paper deals with parameter estimation in a bounded-error context for models that are linear in their parameters. The parameters may vary in a bounded domain and the measurement errors also belong to a bounded domain, but neither domain is known a priori. The objective of the proposed method is to determine the geometric characteristics of these domains (centre and radii, for example). The idea is to determine the nominal value of the parameter vector and some time-varying uncertainties making it possible to explain the current observation. The maximal magnitudes of these uncertainties then yield the characteristics of the considered domain: by fluctuating inside it, the parameters can explain all measurements. Moreover, in order to obtain the most precise model, the estimation problem is to find the smallest such domain.

    In the following, section 2 formulates the variable parameter estimation problem and section 3 defines an accuracy criterion to obtain the smallest possible parametric uncertainty domain while guaranteeing the adequacy of the data to the system model. Sections 4 and 5 are related to the implementation of the proposed approach and to the presentation and discussion of numerical results.

    What are the contributions of the proposed paper? As indicated by the previous bibliographic references, the bounded-error approach for identifying the parameters of a system is not new. The contribution of our proposal therefore lies in the following points:

    ● The representation of the uncertainties in additive as well as multiplicative form, according to whether they affect the measurements of the outputs or the parameters of the considered system;

    ● The consideration of the coupling between the outputs of the system due to the presence of parametric uncertainties, so that the parametric domain has a minimal volume;

    ● The consideration of a matrix characterizing the direction of the parametric uncertainties; this matrix can be known a priori or obtained experimentally;

    ● The joint identification of the parameters of the system and of the bounds of the uncertainties, this estimation guaranteeing that all the measurements belong to the identified model.

    2. Problem formulation

    We describe in subsection 2.1 the structure of an uncertain system and its modelling. Subsection 2.2 gives the principle of estimating the parameters of the model.

    2.1. Modelling of an uncertain system

    Let us consider an uncertain model of a system with several outputs, linear in parameters and observations, and represented by the following structure at each time instant $$ k $$:

    $$ \begin{equation} \begin{gathered} y(k)=X(k)\theta(k)+b(k)+ e(k) \quad k=1, \dots, N \\ \theta(k)=\theta_c+M(\lambda) v (k)\\ e(k)=Z(\delta)w(k) \\ \parallel v(k) \parallel_\infty \leq 1 \quad \parallel w(k) \parallel_\infty \leq 1 \end{gathered} \end{equation} $$

    where $$ y(k) \in \mbox{IR}^n $$ is the measured output, $$ X(k) \in \mbox{IR}^{n \times p} $$ and $$ b(k) \in \mbox{IR}^n $$ are the explanatory variables at time $$ k $$, and $$ \theta(k) \in \mbox{IR}^p $$ defines the model parameters. The bounded vector $$ e(k) \in \mbox{IR}^n $$ defines the error accounting both for the uncertainties due to the measuring process and for modeling errors. The vector $$ v(k) $$ varies inside a unit hypercube denoted $$ \mathcal{H}_q $$:

    $$ \mathcal{H}_q=\{ v \in \mbox{IR}^q , \parallel v \parallel _{\infty} \leq 1\} $$

    This vector represents the uncertain nature of the model parameters. These uncertainties are distributed over the various components of the vector $$ \theta $$ via a full row rank matrix $$ M(\lambda) \in \mbox{IR}^{p \times q} $$ (in general $$ q\geq p $$) depending on the vector $$ \lambda=(\lambda_1\dots \lambda_q)^T $$. The matrices $$ M(\lambda) $$ and $$ Z(\delta) $$ are assumed to have the following structure:

    $$ \begin{equation} \begin{array} {ll l l l} M(\lambda) &=& M \, Diag(\lambda), & \lambda \in \mbox{IR}_+^q , & M \in \mbox{IR}^{p \times q}\\ Z(\delta) &=& Z \, Diag(\delta), & \delta \in \mbox{IR}_+^n, & Z \in \mbox{IR}^{n \times n} \end{array} \end{equation} $$

    where $$ M=\begin{bmatrix} m_1 & m_2 & \dots & m_q \end{bmatrix} $$ and $$ Z=\begin{bmatrix} z_1 & z_2 & \dots & z_n \end{bmatrix} $$. When the uncertainties affect each output independently, $$ Z $$ reduces to a diagonal matrix; the outputs of the system are then not correlated by the uncertainties $$ \delta_i, i=1, \dots, n $$. On the other hand, the matrix $$ M(\lambda) $$ of the uncertainties has no particular structure, which can cause coupling between its rows via the parameters $$ \lambda_i, i=1, \dots, q $$.

    In the numerical applications of section 5, without loss of generality, the matrix $$ Z $$ will be taken equal to the identity matrix. The vector $$ \delta=\begin{bmatrix} \delta_1 & \dots & \delta_n \end{bmatrix}^T $$ defines the magnitude of the additive uncertainties, which are considered bounded. In summary, the uncertainties define two time-invariant domains:

    $$ \begin{align} \mathcal{P}_e(\delta)&=\{ e(k)=Z(\delta)w(k), \parallel w(k)\parallel _\infty \leq 1 \} \end{align} $$

    $$ \begin{align} \mathcal{P}_\theta(\lambda, \theta_c)&= \{ \theta(k)=\theta_c+M(\lambda)v(k), \parallel v(k)\parallel_\infty \leq1 \} \end{align} $$

    This type of model includes the particular case of multi-input single-output (MISO) systems and that of multi-input multi-output (MIMO) systems. However, in the MIMO case, owing to the presence of the uncertain parameters $$ \theta(k) $$ in (1), the components of the output $$ y(k) $$ can be coupled by some of the uncertain parameters $$ v(k) $$, which can lead to difficulties in the estimation problem. This important point is discussed in Section 3.
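    To make the structure of (1) concrete, the following pure-Python sketch (our own illustration, not part of the paper; the function name and the uniform sampling of $$ v $$ and $$ w $$ inside the unit hypercube are assumptions) draws one output $$ y(k) $$ with $$ \theta(k) $$ taken in $$ \mathcal{P}_\theta(\lambda, \theta_c) $$ and $$ e(k) $$ in $$ \mathcal{P}_e(\delta) $$:

```python
import random

def sample_model_output(X, theta_c, b, M, lam, Z, delta):
    """One realization of model (1): y = X theta + b + e, with
    theta = theta_c + M diag(lam) v   (zonotope P_theta) and
    e = Z diag(delta) w               (error domain P_e),
    where v and w are drawn in the unit hypercube (||.||_inf <= 1)."""
    p, q, n = len(theta_c), len(lam), len(delta)
    v = [random.uniform(-1.0, 1.0) for _ in range(q)]
    w = [random.uniform(-1.0, 1.0) for _ in range(n)]
    theta = [theta_c[i] + sum(M[i][j] * lam[j] * v[j] for j in range(q))
             for i in range(p)]
    e = [sum(Z[i][j] * delta[j] * w[j] for j in range(n)) for i in range(n)]
    return [sum(X[i][j] * theta[j] for j in range(p)) + b[i] + e[i]
            for i in range(n)]
```

    Setting $$ \lambda = \delta = 0 $$ collapses both domains to their centres, so the sample reduces to the deterministic output $$ X\theta_c + b $$, a quick sanity check of the implementation.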

    Remark 1. The reader will note that the model describing $$ \theta(k) $$ in (1) is that of a zonotope, usually defined as a Minkowski sum but formulated here from its center and its generating vectors [7]. The values of $$ M $$ and $$ \lambda $$, as well as the number of faces of the zonotope linked to $$ q $$, affect the shape of the domain $$ \mathcal{P}_\theta(\lambda, \theta_c) $$, which allows a large number of situations to be described. As an example, with $$ M=\begin{bmatrix} 1 & -1 & 0 \\ 1 & 1 & -1\end{bmatrix} $$ and $$ \theta_c = \begin{bmatrix} 0 \\ 0\end{bmatrix} $$, Figure 1 visualizes the shapes obtained for two values of $$ \lambda $$. The left part shows the $$ 3 $$ generating vectors $$ m_1, m_2, m_3 $$ (the columns of the matrix $$ M $$) and the right part shows the deformation of the zonotope according to a modification of its parameters.

    Figure 1. Two shapes of zonotope $$ \mathcal{P}_\theta(\lambda, \theta_c) $$.
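    As a complement to Figure 1, the shape of $$ \mathcal{P}_\theta(\lambda, \theta_c) $$ can be explored numerically: the zonotope is the convex hull of the images of the corners of $$ \mathcal{H}_q $$. A minimal sketch (our own code, using the $$ M $$ and $$ \theta_c $$ of Remark 1):

```python
from itertools import product

def zonotope_corner_images(M, lam, theta_c):
    """Images of the hypercube corners v in {-1, +1}^q under
    theta = theta_c + M diag(lam) v; the zonotope P_theta is
    the convex hull of these points."""
    p, q = len(M), len(M[0])
    return [tuple(theta_c[i] + sum(M[i][j] * lam[j] * v[j] for j in range(q))
                  for i in range(p))
            for v in product([-1.0, 1.0], repeat=q)]

# M and theta_c from Remark 1, lam = (1, 1, 1):
pts = zonotope_corner_images([[1.0, -1.0, 0.0], [1.0, 1.0, -1.0]],
                             [1.0, 1.0, 1.0], [0.0, 0.0])
```

    Here $$ \theta_1 = v_1 - v_2 $$ ranges over $$ [-2, 2] $$ and $$ \theta_2 = v_1 + v_2 - v_3 $$ over $$ [-3, 3] $$; rescaling $$ \lambda $$ deforms the zonotope as illustrated in Figure 1.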

    Remark 2. Obviously, the formulation (1) of the system output also applies to a dynamic model. Indeed, without loss of generality, let us consider a stable system with a completely measured state:

    $$ \begin{equation} \begin{array} {ll l l l l l} x(k+1)&=& A(k) x(k)+Bu(k) \\ y(k) &=& x(k) + v(k) \end{array} \end{equation} $$

    By restricting the estimation problem to the coefficients of the matrix $$ A(k) $$, we can always write (5) in the form :

    $$ \begin{equation} \begin{array} {ll l l l l l} x(k+1)&=& F(x(k)) \theta(k) +Bu(k) \\ y(k) &=& x(k) + v(k) \end{array} \end{equation} $$

    with $$ \theta=Vec(A) $$, where the $$ Vec(.) $$ operator stacks the columns of the matrix $$ A $$ and where $$ F(x(k)) $$ is expressed linearly in terms of the vector $$ x $$. Substituting $$ x $$ as a function of the measurement $$ y $$ leads to:

    $$ \begin{equation} \begin{array} {ll l l l l l} y(k+1)&=& F(y(k)) \theta(k) +Bu(k) +e(k) \\ e(k) &=& v(k+1)-F(v(k)) \theta (k) \end{array} \end{equation} $$

    the structure of this equation being then consistent with (1).
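    The rewriting of Remark 2 can be checked numerically. Assuming $$ Vec $$ stacks the columns of $$ A $$, one valid choice is $$ F(x) = x^T \otimes I_n $$, which satisfies $$ F(x)\, Vec(A) = A\, x $$. The sketch below (our own helper names, not from the paper) verifies this identity on a small example:

```python
def vec(A):
    """Vec operator: stack the columns of A into one vector."""
    n, m = len(A), len(A[0])
    return [A[i][j] for j in range(m) for i in range(n)]

def F_of(x, n):
    """F(x) = x^T (Kronecker product) I_n, linear in x, so that
    F(x) applied to vec(A) reproduces the product A x."""
    m = len(x)
    F = [[0.0] * (n * m) for _ in range(n)]
    for j in range(m):
        for i in range(n):
            F[i][j * n + i] = x[j]
    return F

A = [[1.0, 2.0], [3.0, 4.0]]
x = [5.0, -1.0]
theta = vec(A)                                           # [1, 3, 2, 4]
F = F_of(x, 2)
Fx = [sum(F[i][k] * theta[k] for k in range(4)) for i in range(2)]
Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
assert Fx == Ax                                          # F(x) vec(A) == A x
```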

    2.2. Principle of parameter estimation

    The problem involved with parameter estimation is to characterize the unknown parameters $$ \theta(k) $$ of a model using measured data $$ \tilde{X} (k) $$ and $$ \tilde{y} (k) $$. In other words, the aim is to determine the parameter domain containing all possible values consistent with data. Moreover, this domain must be, for reasons of precision, of minimal volume.

    In the case of time-invariant parameters, Milanese and Belforte [10] suggest approximating the parameter domain by an orthotope aligned with the parameter coordinate axes and finding the minimal and maximal values of the components $$ \theta_i, i=1, \dots, p $$ by linear programming. Fogel and Huang [9] propose a recursive ellipsoidal outer-bounding algorithm: the centre of the ellipsoid and the positive definite symmetric matrix which defines it are considered, respectively, as the central value of the parameters and its uncertainty measure.

    In our formulation, the parameter estimation problem consists in finding the values of the vectors $$ \theta_c $$, $$ \lambda $$ and $$ \delta $$ which define the parameter domain $$ \mathcal{P}_\theta(\lambda, \theta_c) $$ (4) and the measurement error domain $$ \mathcal{P}_e(\delta) $$ (3), so that the characterized model explains all the available measurements $$ \tilde y(k) $$ as precisely as possible:

    $$ \begin{equation} \tilde{y}(k) \in \mathcal {P}_y(\lambda, \delta, \theta_c), \quad k=1, \dots, N \end{equation} $$


    $$ \begin{equation} \begin{gathered} \mathcal {P}_y(\lambda, \delta, \theta_c)=\{y(k) \, \mid \, y(k)=\tilde{X}(k)\theta_c+b(k)+ \tilde{X}(k)M(\lambda)v (k)+Z(\delta)w(k) \\ \parallel v(k)\parallel _\infty \leq 1, \parallel w(k) \parallel _\infty \leq 1 \} \end{gathered} \end{equation} $$

    $$ \mathcal {P}_y(\lambda, \delta, \theta_c) $$ defines all possible values of the variables $$ \tilde y(k) $$ consistent with the measurements $$ \tilde{X}(k) $$ of the variables $$ X(k) $$ and with the description of the model uncertainties given by the vectors $$ \lambda $$ and $$ \delta $$. Thus, $$ \mathcal {P}_y(\lambda, \delta, \theta_c) $$ is an interval estimation of the measurements $$ \tilde{y}(k) $$.

    In this paper, it is assumed that a certain level of knowledge about the parameter domain is available, in the sense that $$ \mathcal{P}_\theta(\lambda, \theta_c) $$ (4) has either a pre-determined shape (the matrix $$ M $$ is chosen a priori by the user) or an undetermined shape ($$ M $$ must be determined), with the structure given in the previous section. As shown in Figure 1, the domain $$ \mathcal P_{\theta}(\lambda, \theta_c) $$ is characterized by the position of its center $$ \theta_c $$ and by its shape due to the matrix $$ M(\lambda) $$. The same holds for the domain $$ \mathcal {P}_y(\lambda, \delta, \theta_c) $$, which is in addition sensitive to the uncertainties $$ \delta $$. These parameters can have complementary or competing effects depending on their respective magnitudes, as illustrated by the following three cases.

    In the first case, if the parameters of the model are considered constant (the $$ \lambda_i $$ are equal to $$ 0 $$), then for a given value of $$ \theta_c $$ the bounds $$ \delta_i $$ can be chosen as large as desired. This increases the volume of the uncertainty domain affecting the model until it becomes compatible with all measurements;

    In the second case, if the measurements are not affected by errors (the bounds $$ \delta_i $$ are equal to $$ 0 $$), then the model can be made totally compatible with the measurements by sufficiently increasing the magnitude of the bounds $$ \lambda_i $$;

    In all other cases, it is possible to define a criterion for adjusting the bounds which is representative of the accuracy of the model, the latter being linked to the size of the domain. Indeed, increasing "arbitrarily" the values of the bounds $$ \lambda_i $$ and $$ \delta_i $$ makes it possible to explain all the measurements, but is not a satisfactory solution in terms of accuracy. This is why the volume of this domain must be controlled, and even optimized.

    In view of the above remarks, it is necessary to define an indicator that is sensitive to the difference between the actual measurements and their model-generated estimates; this indicator should depend explicitly on the model parameters. Ragot [25] defined a criterion based on interval arithmetic [26] for a model with only one output. In this paper, a MIMO model is studied and the aim is to characterize the uncertainties while minimizing a precision criterion related to the size of the output domain $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$. An obvious and intuitive choice is to consider the volume of the domain. If $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$ has a pre-determined form, it is easy to show that its volume is proportional to the components of $$ \lambda $$; the solution is then the smallest $$ \lambda $$ which explains all measurements.

    The extension of this domain characterization procedure to the case where $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$ has an indeterminate form leads to some computational difficulties. Indeed, the evaluation of the volume of a polytope leads to an expression containing symbolic functions [27], which are unusable for finding a solution and make the calculation very delicate. It is therefore necessary to establish a criterion which is representative of the precision of the model yet does not lead to major computational difficulties.

    3. Criteria for estimating the model parameters

    The aim of this section is to define a mathematical criterion which provides a solution $$ (\lambda_s, \delta_s, \theta_{c, s}) $$ with a twofold objective. The first is that the domain $$ \mathcal{P}_y(\lambda_s, \delta_s, \theta_{c, s}) $$, corresponding to the estimation of $$ y(k) $$, contains all the measurements $$ \tilde{y}(k) $$. The second concerns the accuracy of the model and aims at a minimal-size domain. In the following, the general case where the parameter domain $$ \mathcal{P}_\theta(\lambda, \theta_c) $$ (4) has an undetermined shape (and consequently $$ \mathcal {P}_y(\lambda, \delta, \theta_c) $$ too) is considered. To start with, we give the definition of a vertex $$ S $$ of $$ \mathcal {P}_y(\lambda, \delta, \theta_c) $$, vertices being a good way of characterizing the shape, and further the volume, of $$ \mathcal {P}_y(\lambda, \delta, \theta_c) $$. This section also gives the way to characterize the data domain and the construction of the precision criterion.

    3.1. Output domain

    Now we are interested in the computation of all the vertices of $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$. Due to the definition of the uncertainties (which are centered), the set of the vertices and pseudo-vertices of $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$ is symmetrically distributed around its centre $$ \tilde{y}_c(\theta_c, k) $$. Since its structure depends on $$ \lambda $$ and $$ \delta $$, the shape of this set is directly related to the model uncertainties. The distances between the centre of $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$ and its vertices (which can be easily computed) can therefore also describe this shape, and it is then possible to consider these distances as a criterion of the model precision. The expression which generates the domain $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$, parametrized by $$ \lambda $$, $$ \delta $$ and $$ \theta_c $$ and given in (9), can also be expressed as:

    $$ \begin{equation} \begin{gathered} \tilde{y}(k)\in \mathcal{P}_y(\lambda, \delta, \theta_c) \Leftrightarrow \tilde{y}(k)=\tilde{y}_c(\theta_c, k)+\tilde T(k, \lambda, \delta)\xi(k) \end{gathered} \end{equation} $$


    $$ \begin{align} \tilde T(k, \lambda, \delta)&=\begin{bmatrix} \tilde{X}(k)M(\lambda) & Z(\delta) \end{bmatrix}, \qquad \tilde T(k, \lambda, \delta) \in \mbox{IR} ^{n \times (q+n)} \end{align} $$

    $$ \begin{align} \tilde{y}_c(\theta_c, k)&= \tilde{X}(k)\theta_c + b(k) \end{align} $$

    $$ \begin{align} \xi(k)&=\begin{bmatrix} v(k) \\ w(k) \end{bmatrix}, \qquad \xi(k)\in\mathcal{H}_{q+n} \end{align} $$

    Expression (10) clearly shows that the components of $$ \tilde y(k) $$ are coupled with respect to the variables $$ \xi(k) $$. It is therefore necessary to take these couplings into account in order to describe the domain $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$ correctly and, especially, not to overdimension it. Highlighting these couplings requires the search for matrices $$ R(k) $$ reducing the number of uncertain variables present in the product $$ T(k, \lambda, \delta)\xi(k) $$ (10). The matrix $$ R(k) $$ must therefore be orthogonal to a set of columns of the matrix $$ T(k, \lambda, \delta) $$. In this way, a certain number of components of the uncertainty vector $$ \xi(k) $$ will not appear in the new equation thus generated:

    $$ \begin{equation} \begin{gathered} \tilde{y}(k)\in\mathcal{P}_y(\lambda, \delta, \theta_c) \Leftrightarrow R(k)(\tilde{y}(k)-\tilde{y}_c(\theta_c, k))= d(k, \lambda, \delta, \theta_c) \\ d(k, \lambda, \delta, \theta_c) = R(k)\, T(k, \lambda, \delta)\, \xi(k) \end{gathered} \end{equation} $$

    with $$ R(k)\in \mbox{IR}^{\overline {r} \times n} $$ (where $$ \overline r $$ will be defined later). Before detailing how to obtain $$ R $$ and $$ d $$, here is an example illustrating the problem of dependence between the outputs of a system. We consider the free-input system described by:

    $$ \begin{equation} \begin{gathered} \tilde X = \begin{bmatrix} 1 & 2 \\ 0 & 1\end{bmatrix}, \ M(\lambda) = \begin{bmatrix} \lambda_1 & -\lambda_2 \\ 2 \lambda_1 & 0\end{bmatrix}, \quad Z(\delta)= \begin{bmatrix} \delta_1 & 0 \\ 0 & \delta_2 \end{bmatrix}, \ \theta_c = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \ \delta= \begin{bmatrix} 1\\ 2 \end{bmatrix} \end{gathered} \end{equation} $$

    the time $$ k $$ being voluntarily omitted. Given the definition (1), the components of the measured output are given by:

    $$ \begin{equation} \begin{array} {l l l l } \tilde y_1 &=&5 \lambda_1 v_1 -\lambda_2 v_2 + \delta_1w_1 \\ \tilde y_2 &=& 2 \lambda_1 v_1+ \delta_2 w_2 \end{array} \end{equation} $$

    These equations being coupled only with respect to $$ v_1 $$, a simple linear combination using $$ R = \begin{bmatrix} 2 & -5\end{bmatrix} $$, which is orthogonal to the first column of $$ \tilde X M $$, yields from (14):

    $$ \begin{equation} 2 \tilde y_1 - 5\tilde y_2 =-2 \lambda_2 v_2 +2 \delta_1 w_1-5 \delta_2 w_2 \\ \end{equation} $$

    whose structure is in accordance with (12). Given the bounds on the uncertainties $$ v $$ and $$ w $$, we deduce from (14) and (15) the bounds of $$ \tilde y_1, \tilde y_2 $$ and $$ 2\tilde y_1-5 \tilde y_2 $$, which are collected as:

    $$ \begin{equation} \begin{bmatrix} \ \ \ 1 & \ \ \ 0 \\ -1 & \ \ \ 0 \\ \ \ \ 0 & \ \ \ 1 \\ \ \ \ 0 & -1 \\ \ \ \ 2 & -5 \\ -2 & \ \ \ 5 \end{bmatrix} \tilde y\quad {\leq} \quad \begin{bmatrix} 5 \lambda_1 + \lambda_2 + \delta_1 \\ 5 \lambda_1 + \lambda_2 + \delta_1 \\ 2 \lambda_1+ \delta_2 \\ 2 \lambda_1+ \delta_2 \\ 2 \lambda_2 +2 \delta_1+5 \delta_2 \\ 2 \lambda_2 +2 \delta_1+5 \delta_2 \end{bmatrix} \end{equation} $$

    In the space $$ \{\tilde y_2, \tilde y_1\} $$, the domain $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$ can then be constructed by looking for the intersections of the half-spaces defined by (16). This amounts to determining the support lines (hyperplanes in the general case) of the domain $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$. This search is difficult from an analytical point of view because the considered half-spaces are functions of the unknowns $$ \lambda, \delta $$. Of course, if the parameters $$ \lambda $$ and $$ \delta $$ are known, the situation is quite different and the determination of the intersections of the half-spaces is trivial. With $$ \lambda=\begin{bmatrix} 1 & 0.5 \end{bmatrix}^T $$ and $$ \delta=\begin{bmatrix} 1 &2 \end{bmatrix}^T $$, Figure 2 visualizes, in the space $$ \{\tilde y_2, \tilde y_1\} $$, the $$ 12 $$ intersections: the green points correspond to the vertices of the polytope $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$, while the red ones can be qualified as pseudo-vertices. Again, if the shape parameters $$ \lambda, \delta $$ are unknown, the distinction between vertices and pseudo-vertices is complex. For this reason, the following section proposes an alternative characterization of the domain $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$ without necessarily trying to distinguish vertices from pseudo-vertices. Figure 2 clearly highlights the pessimism resulting from not taking into account the coupling (15) of the model outputs: the real domain (in green), which takes this coupling into account, is to be compared with the red rectangle obtained without it.
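    The inequalities (16) can be verified numerically for the values of $$ \lambda $$ and $$ \delta $$ used in Figure 2. The standalone sketch below (ours, not from the paper) samples the uncertainties of (14) and checks that every resulting output satisfies the six half-space constraints:

```python
import random

lam1, lam2, d1, d2 = 1.0, 0.5, 1.0, 2.0    # lambda and delta of Figure 2

# Right-hand sides of (16):
b1 = 5 * lam1 + lam2 + d1                  # bound on |y1|         -> 6.5
b2 = 2 * lam1 + d2                         # bound on |y2|         -> 4.0
b3 = 2 * lam2 + 2 * d1 + 5 * d2            # bound on |2y1 - 5y2|  -> 13.0

random.seed(0)
for _ in range(1000):
    v1, v2 = random.uniform(-1, 1), random.uniform(-1, 1)
    w1, w2 = random.uniform(-1, 1), random.uniform(-1, 1)
    y1 = 5 * lam1 * v1 - lam2 * v2 + d1 * w1   # first line of (15)
    y2 = 2 * lam1 * v1 + d2 * w2               # second line of (15)
    assert abs(y1) <= b1 and abs(y2) <= b2
    assert abs(2 * y1 - 5 * y2) <= b3          # coupled constraint of (16)
```

    The coupled constraint on $$ 2\tilde y_1 - 5\tilde y_2 $$ is precisely the one that cuts the red rectangle of Figure 2 down to the green polytope.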

    Figure 2. Domain $$ P_y(\lambda, \delta, \theta_c) $$, vertices in green, pseudo-vertices in red, true domain in green.

    In the foregoing, the reader's attention has been drawn to the problem of the coupling of the equations by the parametric uncertainties. Of course, this coupling could also be taken into account depending on how the noise $$ w $$ affects the measurements. In this case, the matrix $$ Z(\delta) $$ (13) would have to be restructured like the matrix $$ M(\lambda) $$.

    3.2. Data parallelotope characterisation

    Following the previous observation, in what follows all the intersections of the hyperplanes defining the half spaces will be considered without distinguishing between vertices and pseudo vertices of the domain $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$.

    In fact, three steps are considered to characterize the domain: the first analyzes the components of $$ \tilde{y}(k) $$ separately, the second takes into account the coupling of the components of $$ \tilde{y}(k) $$ through the uncertain variables, and, finally, the results of the two analyses are aggregated. Moreover, this characterization of the domain $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$ is to be done for all the available information, i.e., for $$ k=1, \dots, N $$. Let us now specify the implementation of these three steps.

    For the first step, knowing that $$ \xi(k) $$ (11c) varies in $$ \mathcal{H}_{q+n} $$ ($$ \parallel \xi(k) \parallel _\infty \leq 1 $$), it is possible to calculate the lower and upper bounds of each component of $$ \tilde{y}(k) $$. Indeed, from (10), one obtains the following half-spaces:

    $$ \begin{gather} \tilde{y}(k)\leq \tilde{y}_c(\theta_c, k)+\mid \tilde T(k, \lambda, \delta) \mid \mathcal{I}_{q+n} \end{gather} $$

    $$ \begin{gather} \tilde{y}(k)\geq \tilde{y}_c(\theta_c, k)-\mid \tilde T(k, \lambda, \delta) \mid \mathcal{I}_{q+n} \end{gather} $$

    where $$ \tilde T(k, \lambda, \delta) $$ has been defined in (11a) and where $$ \mid . \mid $$ denotes the absolute value operator and $$ \mathcal{I}_{q+n} $$ is the unity vector in $$ { \mbox{IR}}^{q+n} $$ (all its elements are equal to 1). In order to point out the role played by the parameters $$ \delta $$ and $$ \lambda $$ in (17), let us define:

    $$ \begin{align} \alpha&= \begin{bmatrix} \lambda \\ \delta \end{bmatrix} \in \mbox{IR}^{q+n} \end{align} $$

    $$ \begin{align} \tilde{T}(k)&=\begin{bmatrix} \tilde{X}(k)m_1& \dots& \tilde{X}(k)m_q & Z\end{bmatrix} \end{align} $$

    Then $$ \mathcal{P}_y(\lambda, \delta, \theta_c) $$ and $$ d(k, \lambda, \delta, \theta_c) $$ become respectively $$ \mathcal{P}_y(\alpha, \theta_c) $$ and $$ d(k, \alpha, \theta_c) $$. Using definitions (18), relations (17) are grouped under the form:

    $$ \begin{equation} \begin{bmatrix} I_n\\-I_n\\ \end{bmatrix} \quad (\tilde{y}(k)-\tilde{y}_c(\theta_c, k))\leq \begin{bmatrix} \mid \tilde{T}(k) \mid\\\mid \tilde{T}(k) \mid\\ \end{bmatrix} \alpha \end{equation} $$

    and highlights the influence matrix of the $$ \alpha $$ uncertainties. The relations (19) define an aligned orthotope in $$ \mbox{IR}^n $$ centred on $$ \tilde{y}_c(\theta_c, k) $$. However, these relations do not take into account the dependencies between the components of $$ \tilde{y}(k) $$ generated by the elements of $$ \xi(k) $$ (10). Indeed, the $$ j $$th component of $$ \xi(k) $$ generally appears in the expression of several components of the vector $$ \tilde{y}(k) $$, thus it creates a dependency between the components of $$ \tilde{y}(k) $$ where it occurs.
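As a numerical illustration, the orthotope bounds (19) can be evaluated directly. The sketch below reuses the matrices of the static example of section 5.1 ($$ \tilde T = \begin{bmatrix}\tilde X M & Z\end{bmatrix} $$ and the generating values of $$ \lambda, \delta $$); the centre is $$ \tilde y_c = \tilde X \theta_c $$.

```python
import numpy as np

# Sketch of the first step: the aligned orthotope (19) that contains y~(k).
# All numbers reuse the static example of section 5.1: T~ = [X~M  Z] and
# alpha = [lambda; delta]; y_c stands for the centre y~_c(theta_c, k) = X~ theta_c.
X = np.array([[-1.5, 0.5],
              [-1.0, 3.0]])
T_tilde = np.array([[0.35, 0.10, 0.50, 1.0, 0.0],
                    [-0.30, 1.40, 0.60, 0.0, 1.0]])
alpha = np.array([1.0, 1.5, 2.0, 1.0, 1.0])   # [lambda_1..3, delta_1, delta_2]
theta_c = np.array([5.0, 5.0])

y_c = X @ theta_c                             # centre of the orthotope
half_width = np.abs(T_tilde) @ alpha          # |T~(k)| alpha, one value per output
lower, upper = y_c - half_width, y_c + half_width
```

Each pair `(lower[i], upper[i])` is one pair of inequalities in (19).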

Thus, the second step consists in reducing the coupling effect between the components of the output $$ \tilde y(k) $$, with the objective of reducing the volume of the domain $$ \mathcal P_{\theta} $$. First of all, recall that the influence of $$ v(k) $$ and $$ w(k) $$ on the output $$ \tilde y(k) $$ is due to the matrix $$ \tilde T(k, \lambda, \delta) $$ (10):

    $$ \begin{gather} \tilde y(k) =\tilde y_c(\theta_c, k)+ \tilde T(k, \lambda, \delta) \xi(k) \end{gather} $$

$$ \begin{gather} \quad =\tilde y_c(\theta_c, k)+ \begin{bmatrix} \tilde X(k) M & Z \end{bmatrix} \begin{bmatrix} Diag(\lambda)v(k) \\ Diag(\delta) w(k)\end{bmatrix} \end{gather} $$

For that purpose, we try to eliminate, as much as possible, subsets of the components of the uncertainties $$ v(k) $$ and $$ w(k) $$. The uncertainties to be eliminated are identified by the indices:

    $$ \begin{equation} \tau = \begin{Bmatrix} j_1 & j_2 & \dots & j_s\end{Bmatrix}, \quad 1 \leq s \leq n-1 \end{equation} $$

where $$ j_p (p=1, \dots, s) $$ are the indices of the components of $$ \xi(k) $$ that one seeks to eliminate and where $$ s $$ is the number of components to eliminate, necessarily lower than the number $$ n $$ of equations serving for this elimination. The influence of the selected components $$ \xi_{\tau}(k) $$ on $$ \tilde y(k) $$ is then due to the submatrix $$ \tilde T_{:, \tau}(k) $$ extracted from $$ \tilde T(k) $$, where the indices $$ \{:, \tau\} $$ select respectively all the rows of $$ \tilde T(k) $$ and its columns defined by $$ \tau $$. Let us define the co-kernel $$ g_s(k) $$ of $$ \tilde T_{:, \tau}(k) $$:

    $$ \begin{equation} g_s^T(k)\ \tilde T_{:, \tau}(k)=0 \end{equation} $$

It goes without saying that the dimension of this co-kernel cannot be stated a priori because it depends on the rank of the matrix $$ \tilde T_{:, \tau}(k) $$; it is, however, at least $$ n-s $$.
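Numerically, a co-kernel basis is simply a basis of the null space of $$ \tilde T_{:, \tau}^T(k) $$. A minimal sketch, reusing the $$ \tilde T $$ matrix of the static example of section 5.1:

```python
import numpy as np

# Sketch of the co-kernel computation (22): a vector g with g^T T_{:,tau} = 0
# is a basis vector of the null space of T_{:,tau}^T, obtained here by SVD.
# T~ reuses the static example of section 5.1.
T_tilde = np.array([[0.35, 0.10, 0.50, 1.0, 0.0],
                    [-0.30, 1.40, 0.60, 0.0, 1.0]])

def cokernel(T, tau, tol=1e-10):
    """Basis of {g : g^T T[:, tau] = 0}, returned as columns."""
    _, s, vt = np.linalg.svd(T[:, tau].T)     # left null space of T[:, tau]
    rank = int(np.sum(s > tol))
    return vt[rank:].T                        # dimension >= n - s

g1 = cokernel(T_tilde, [0])                   # eliminates the first uncertainty
```

The returned `g1` is, up to normalization, the direction orthogonal to the first column of $$ \tilde T $$.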

The elimination of a part of the uncertain parameters $$ \xi(k) $$ is made effective by pre-multiplying (20a) by $$ g_s^T(k) $$, which leads to:

    $$ \begin{equation} g_s^T(k) \ \tilde y(k) = g_s^T(k)\ \tilde y_c(\theta_c, k)+g_s^T(k)\ \tilde T(k, \lambda, \delta) \xi(k) \end{equation} $$

    Taking into account the bounds of $$ \xi(k) $$ and using definition (18b) we deduce from (23):

    $$ \begin{gather} g_s^T (k)\tilde{y}(k)\leq g_s^T(k)\tilde{y}_c(\theta_c, k)+\mid g_s^T(k)\tilde{T}(k) \mid \alpha \end{gather} $$

    $$ \begin{gather} g_s^T (k)\tilde{y}(k)\geq g_s^T(k)\tilde{y}_c(\theta_c, k)-\mid g_s^T(k) \tilde{T}(k) \mid \alpha \end{gather} $$

By iterating this procedure for all possible sets $$ \tau $$ (21) of bounded variables to eliminate and aggregating the pairs of inequalities (24), one obtains:

$$ \begin{equation} \begin{bmatrix} g_1^T(k) \\ \vdots \\ g_{n_y}^T(k) \\ -g_1^T(k) \\ \vdots \\ -g_{n_y}^T(k) \end{bmatrix} \quad (\tilde{y}(k)-\tilde{y}_c(\theta_c, k))\leq \begin{bmatrix} \mid g_1^T(k) \tilde{T}(k) \mid \\ \vdots \\ \mid g_{n_y}^T(k)\tilde{T}(k) \mid \\ \mid g_1^T(k) \tilde{T}(k) \mid \\ \vdots \\ \mid g_{n_y}^T(k)\tilde{T}(k) \mid \end{bmatrix} \alpha \end{equation} $$

    where $$ n_y $$ is the total number of eliminations. As indicated by the definition (21), the elimination can concern a single component ($$ s=1 $$) or a group of components ($$ 2 \le s \le n-1 $$). As a particular case, if we consider only the case of a group of $$ s $$ components we have $$ n_y\leq \mathrm{C}_{q+n}^{s} $$. In section 4, we will come back to this point which can be the source of some complexity.

In the last step, gathering inequalities (19) and (25) allows the domain $$ \mathcal{P}_y(\alpha, \theta_c) $$ to be described by:

    $$ \begin{equation} R(k)(\tilde{y}(k)-\tilde{y}_c(\theta_c, k))\leq D(k)\, \alpha \end{equation} $$


$$ \begin{align} R(k)&=\begin{bmatrix} g_1(k) &\dots &g_{n_y}(k) &-g_1(k) &\dots &-g_{n_y}(k) &I_n &-I_n \end{bmatrix}^T, \quad R(k)\in \mbox{IR}^{2(n_y+n)\times n} \end{align} $$

    $$ \begin{align} D(k)&= \mid R(k)\tilde{T}(k) \mid , \quad D(k)\in \mbox{IR}^{2(n_y+n)\times (n+q)} \end{align} $$

Note that this domain can also be defined by the hyperplanes of equation:

    $$ \begin{align} R(k)(\tilde{y}(k)-\tilde{y}_c(\theta_c, k)) &= D(k)\, \alpha \end{align} $$

    $$ \begin{align} D(k)&= \mid R(k)\tilde{T}(k) \mid \end{align} $$

which depend linearly on the parameters $$ \theta_c, \alpha = \begin{bmatrix} \lambda^T, &\delta ^T \end{bmatrix}^{T} $$ of the model. Therefore, the convex domain defined by the inequalities (26) also depends on these parameters. As recalled at the beginning of section 3, it is necessary to adjust these parameters to guarantee that the measurements $$ \tilde y(k) $$ belong to the domain $$ \mathcal{P}_y(\alpha, \theta_c) $$.

    3.3. Precision criterion

The main result of section 3.2 provides the bounded domain to which the measurements $$ \tilde{y}(k) $$ belong. Remember that this domain is characterized by several parameters, i.e., the center $$ \theta_c $$ of the parameter domain, the shape of the domain described by the $$ \lambda $$ parameter and the bound $$ \delta $$ of the error. Adjusting these parameters is an identification problem, for which we have to define a criterion to be optimized. It is clear that the "best" model is the one that explains all the measurements with the smallest fluctuations of its parameters, these fluctuations depending on $$ \lambda $$ and $$ \delta $$.

    For that purpose, we propose to compute the distances between the centre $$ \tilde y_c(\theta_c, k) $$ of $$ \mathcal{P}_y(\alpha, \theta_c) $$ and its vertices without any distinctions between vertices and pseudo-vertices (see Figure 2 in the toy example in section 3.1). This evaluation of the accuracy consists of 3 steps.

First, it is necessary to determine the intersections of the hyperplanes (28) defining the domain $$ P_y(\alpha, \theta_c) $$, in order to identify the vertices of this domain. As the hyperplanes are defined in a space of dimension $$ n $$, finding these intersections reduces to solving linear systems of $$ n $$ equations in $$ n $$ unknowns. Since the objective here is not to propose the most computationally efficient approach for this vertex search, we adopt a naive brute-force approach that enumerates all the full-rank matrices $$ \Gamma_i(k) $$ extracted from the matrix $$ R(k) $$, defined by:

$$ \begin{equation} \Gamma_i(k)=\begin{bmatrix} r_{i_1}^T(k)& \dots &r_{i_n}^T(k)\end{bmatrix}^T, i_j \in \{1:2(n_y+n) \}, \qquad \Gamma_i(k) \in \mbox{IR}^{n \times n} \end{equation} $$

which consists of $$ n $$ linearly independent row vectors of the matrix $$ R(k) $$ (27a) (in a formal way, we can note that, if we consider the space $$ \mathcal E $$ generated by the column vectors of $$ R(k) $$, the column vectors of the matrices $$ \Gamma_i(k) $$ also generate $$ \mathcal E $$). We cannot know a priori the number $$ n_{\gamma, k} $$ of matrices $$ \Gamma_i(k) $$, due to the rank condition on these matrices, but necessarily $$ n_{\gamma, k} \leq \rm{C}_{2(n_y+n)}^n $$.

    Then, using (28a) in which $$ R(k) $$ has been reduced to $$ \Gamma_i(k) $$, determine the points $$ S_i(k) $$ such that:

    $$ \begin{equation} \Gamma_i(k)(S_i(k)-\tilde{y}_c(\theta_c, k))=\mid \Gamma_i(k)\tilde{T}(k) \mid \alpha, \qquad i=1, \dots, n_{\gamma, k} \end{equation} $$

    We then have the coordinates of the intersections of hyperplanes:

    $$ \begin{equation} S_i(k)=\tilde{y}_c(\theta_c, k)+\Gamma_i^{-1}(k)\mid \Gamma_i(k)\tilde{T}(k) \mid \alpha, \qquad i=1, \dots, n_{\gamma, k} \end{equation} $$

    which is a linear expression in respect to the parameters $$ \theta_c $$ and $$ \alpha $$.
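This brute-force vertex search is short to express in code. The sketch below reuses the static example of section 5.1 (where $$ n=2 $$, so each co-kernel vector is simply the direction orthogonal to one column of $$ \tilde T $$, and only the three $$ \tilde X M $$ columns yield directions not already among $$ \pm I_n $$):

```python
import numpy as np
from itertools import combinations

# Naive brute-force sketch of (29)-(31): enumerate full-rank n x n
# row-submatrices Gamma_i of R(k) and solve for the intersection points S_i.
# Data reuse the static example of section 5.1.
T_tilde = np.array([[0.35, 0.10, 0.50, 1.0, 0.0],
                    [-0.30, 1.40, 0.60, 0.0, 1.0]])
alpha = np.array([1.0, 1.5, 2.0, 1.0, 1.0])
y_c = np.array([-5.0, 10.0])                    # X~ theta_c of section 5.1

perp = lambda c: np.array([-c[1], c[0]])        # co-kernel of a 2-vector
G = np.array([perp(T_tilde[:, j]) for j in range(3)])  # the X~M columns only
R = np.vstack([G, -G, np.eye(2), -np.eye(2)])   # as in (27a)

n = R.shape[1]
vertices = []
for rows in combinations(range(R.shape[0]), n):
    Gamma = R[list(rows)]
    if np.linalg.matrix_rank(Gamma) < n:        # discard rank-deficient picks
        continue
    rhs = np.abs(Gamma @ T_tilde) @ alpha       # |Gamma_i T~| alpha of (30)
    vertices.append(y_c + np.linalg.solve(Gamma, rhs))  # S_i of (31)
```

With these data, $$ 40 $$ of the $$ \mathrm{C}_{10}^2 = 45 $$ row pairs are full rank, so 40 intersection points (vertices and pseudo-vertices) are produced.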

    Second, the quadratic distance $$ \Delta_i(k) $$ between the point $$ S_i(k) $$ and the centre $$ \tilde{y}_c(\theta_c, k) $$ of the domain $$ \mathcal{P}_y(\alpha, \theta_c) $$ is evaluated :

    $$ \begin{equation} \Delta_i(k)=\parallel S_i(k)-\tilde{y}_c(\theta_c, k)\parallel ^2_2, \qquad i=1, \dots, n_{\gamma, k} \end{equation} $$

Given (31), the distance is directly expressed in terms of the parameters $$ \alpha $$:

    $$ \begin{equation} \Delta_i(k)= \alpha^TQ_i(k)\alpha, \qquad i=1, \dots, n_{\gamma, k} \end{equation} $$


    $$ \begin{equation} Q_i(k)=\mid \Gamma_i(k)\tilde{T}(k) \mid^T\Gamma_i^{-T}(k)\Gamma_i^{-1}(k)\mid \Gamma_i(k)\tilde{T}(k) \mid , \qquad i=1, \dots, n_{\gamma, k} \end{equation} $$
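The computation of one quadratic form $$ Q_i $$ can be sketched as follows, using $$ \Gamma_1 $$ of the static example of section 5.1 (built from $$ g_1 $$ and $$ g_2 $$); the resulting entries reproduce the published $$ Q_1 $$ values, e.g. $$ 0.213 $$ and $$ 1.000 $$, up to rounding:

```python
import numpy as np

# Sketch of (33)-(34): the quadratic form Q_i linking the squared
# vertex-to-centre distance to alpha, for one full-rank submatrix Gamma_1.
T_tilde = np.array([[0.35, 0.10, 0.50, 1.0, 0.0],
                    [-0.30, 1.40, 0.60, 0.0, 1.0]])
Gamma_1 = np.array([[0.30, 0.35],
                    [-1.40, 0.10]])

B = np.abs(Gamma_1 @ T_tilde)                 # |Gamma_1 T~|
Ginv = np.linalg.inv(Gamma_1)
Q_1 = B.T @ Ginv.T @ Ginv @ B                 # Q_1 of (34), a 5 x 5 matrix

alpha = np.array([1.0, 1.5, 2.0, 1.0, 1.0])
Delta_1 = alpha @ Q_1 @ alpha                 # squared distance (33)
```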

    Consequently, the quadratic mean of $$ \Delta_i(k) $$ at a time $$ k $$ is:

$$ \overline\Delta(k)=\alpha^T \left( \frac{1}{n_{\gamma, k}}\sum\limits_{i=1}^{n_{\gamma, k}}Q_i(k) \right)\alpha $$

    Third, taking into account all the available data, the criterion of precision may be expressed:

$$ \begin{equation} J(\alpha)=\alpha^T \sum\limits_{k=1}^N \left( \frac{1}{n_{\gamma, k}} \sum\limits_{i=1}^{n_{\gamma, k}} Q_i(k)\right) \alpha \end{equation} $$

    This criterion is clearly a quadratic function of the magnitude of the uncertainties $$ \alpha $$ which include those affecting the model outputs and its parameters (1). In order to improve the accuracy of the model, it is therefore necessary to estimate the value of $$ \alpha $$ which makes the criterion $$ J $$ minimal.

    4. Problem solving

    Given the previous formulations, the characterization of the uncertain model takes into account two objectives.

The first one concerns the consistency of the data with the bounds of the model domain, i.e., the parameter domain must be designed in order to explain all the available data. The second is an accuracy constraint, as the model parameters must control the volume of the domain.

For the first objective, the principle of parameter estimation is to explain all the measurements. Thus, the vector $$ \alpha $$ (18a) must be calculated in such a way that $$ \tilde{y}(k)\in\mathcal{P}_y(\alpha, \theta_c) $$. Then $$ R(k)\tilde{y}(k)\leq d(k, \alpha, \theta_c) $$ describes all the values of $$ \alpha $$ and $$ \theta_c $$ which are consistent with the measurements at each instant $$ k $$. Taking into account the definition (11b) of $$ \tilde{y}_c(\theta_c, k) $$, we have from (26):

    $$ \begin{equation} R(k)\tilde{X}(k)\theta_c+\mid R(k)\tilde{T}(k) \mid \alpha \geq R(k)\tilde{y}(k), \qquad k=1, \dots, N. \end{equation} $$

    Thus, from (36), all the measurements $$ \tilde{y}(k) $$, $$ k=1, \dots, N $$, belong to $$ \mathcal{P}_y(\alpha, \theta_c) $$ if the values of $$ \theta_c $$ and $$ \alpha $$ are such that the following inequality holds:

    $$ \begin{equation} A_N \begin{bmatrix} \alpha \\ \theta_c \end{bmatrix} \geq b_N \end{equation} $$


$$ \begin{equation} A_N = \begin{bmatrix} \mid R(1)\tilde{T}(1) \mid & R(1)\tilde{X}(1) \\ \dots & \dots \\ \mid R(N)\tilde{T}(N) \mid & R(N)\tilde{X}(N) \end{bmatrix} \quad b_N= \begin{bmatrix} R(1)\tilde{y}(1) \\ \dots \\ R(N)\tilde{y}(N) \end{bmatrix} \end{equation} $$
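The stacking of the per-instant constraints (36) into (37) and (38) is mechanical; a sketch, where the sequences $$ R(k), \tilde T(k), \tilde X(k), \tilde y(k) $$ are filled with random placeholders for illustration only:

```python
import numpy as np

# Sketch of stacking the per-instant constraints (36) into the global system
# (37)-(38). The sequences R(k), T~(k), X~(k), y~(k) are random placeholders
# here; in practice they come from the previous construction steps.
rng = np.random.default_rng(1)
N, n, p, q, n_y = 3, 2, 2, 3, 3
R_list = [rng.standard_normal((2 * (n_y + n), n)) for _ in range(N)]
T_list = [rng.standard_normal((n, q + n)) for _ in range(N)]
X_list = [rng.standard_normal((n, p)) for _ in range(N)]
y_list = [rng.standard_normal(n) for _ in range(N)]

# Each block row of A_N is [ |R(k) T~(k)|  R(k) X~(k) ], acting on [alpha; theta_c].
A_N = np.vstack([np.hstack([np.abs(R @ T), R @ X])
                 for R, T, X in zip(R_list, T_list, X_list)])
b_N = np.concatenate([R @ y for R, y in zip(R_list, y_list)])
```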

Then, for the second objective, the procedure of parameter estimation reduces to a convex optimisation problem that consists in minimizing the criterion (35) under the linear inequality constraints (37), which define a domain in $$ \mbox{IR}^{p+q+n} $$ imposed by the measurements. In other words, with respect to $$ \theta, \lambda, \delta $$ we have to minimise:

$$ \begin{equation} J(\alpha)=\alpha^T \left (\sum\limits_{k=1}^N\frac{1}{n_{\gamma, k}}\sum\limits_{i=1}^{n_{\gamma, k}}Q_i(k) \right ) \alpha \end{equation} $$

under the constraint (37). The search for the solution $$ \hat \theta, \hat \lambda, \hat \delta $$ relies on algorithms for convex optimisation problems, in particular on quadratic programming theory, widely discussed in the literature [28, 29]. Finally, the proposed procedure is summarized by Algorithms 1 and 2, the first dedicated to the synthesis of the model and the second to its validation.
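The shape of this constrained problem can be sketched with a generic solver; here $$ H $$, $$ A_N $$ and $$ b_N $$ are small illustrative stand-ins, not the matrices of the paper's examples, and SciPy's SLSQP method is one possible choice among QP solvers:

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of the constrained estimation (39): minimise a quadratic
# criterion in alpha under linear inequality constraints on z = [alpha; theta_c].
n_alpha, n_theta = 2, 1
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])                     # stands for the aggregated Q_i(k)
A_N = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])              # A_N z >= b_N as in (37)
b_N = np.array([0.5, 0.2])

def J(z):
    a = z[:n_alpha]
    return a @ H @ a                            # criterion depends on alpha only

cons = {"type": "ineq", "fun": lambda z: A_N @ z - b_N}
res = minimize(J, x0=np.ones(n_alpha + n_theta), constraints=cons)
alpha_hat, theta_hat = res.x[:n_alpha], res.x[n_alpha:]
```

In this toy instance both inequality constraints are active at the optimum, so $$ \hat\alpha $$ lands exactly on the bounds $$ (0.5, 0.2) $$.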

    Algorithm 1 Model Identification
1: Collect a set of $$ N $$ measurements $$ \tilde y(k) $$ and $$ \tilde X(k) $$ for the system in normal operation.
    2: Define the model structure (1): choose the structure (2) of the matrices $$ M(\lambda) $$ and $$ Z(\delta) $$.
    3: Define matrix $$ \tilde T(k) $$, k=1, …, N (16)
    4: Define co-kernels $$ g_s(k) $$ (22)
    5: Define matrices $$ R(k) $$ and $$ D(k) $$, k=1, …, N (27)
6: Define matrices $$ \Gamma_i(k) $$ containing linearly independent rows of $$ R(k) $$ (29)
    7: Define matrix $$ Q_i(k) $$ (34)
    8: Solve problem (39) and obtain model parameters $$ \hat \theta, \hat \lambda, \hat \delta $$

    The implementation of the proposed procedure does not present any particular difficulty, except for the choice of the initial values of the parameters to be estimated. However, the reader's attention should be drawn to the possible numerical complexity which may be due, on the one hand, to the dimension of the matrices involved in the generation of the vertices of the domain $$ P_y $$, and, on the other hand, to the number of inequality constraints to be taken into account.

    Algorithm 2 Identified model validation
    1: Adapt model (1) with identified parameters $$ \hat \theta, \hat \lambda, \hat \delta $$
    2: Construct domain $$ \mathcal P_{\theta}(\hat \lambda, \hat \theta) $$ (4)
    3: Using data $$ X(k) $$ construct domain $$ \mathcal P_y (\hat \lambda, \hat \delta, \hat \theta) $$
    4: Verify that $$ P_y(\hat \lambda, \hat \delta, \hat \theta) $$ contains all the output measurements $$ \tilde y(k) $$.

We also mentioned the dimension problem in the second step of section 3.2 about the coupling of the $$ \tilde y(k) $$ outputs to the uncertain parameters. For this purpose, ad-hoc combinations of the $$ n $$ outputs allowed the influence of a number $$ s $$ of uncertain parameters from a predefined list (21) to be eliminated, this number being between $$ 1 $$ and $$ n-1 $$. If all the eliminations were carried out, their number would be $$ C_{q+n}^1+C_{q+n}^2+ \dots + C_{q+n}^{n-1} $$. Although these eliminations are simple to implement, their number can become large, for example $$ 967 $$ for $$ n=8 $$ and $$ q=2 $$, knowing that they must be done at each measurement time $$ k $$. A possible solution consists in applying this elimination principle only for the largest value of $$ s $$, i.e., $$ n-1 $$, which limits the number of eliminations to $$ C_{q+n}^{n-1} $$, i.e., $$ 120 $$ with the preceding values of $$ n $$ and $$ q $$.
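The combinatorial counts quoted above are easily checked:

```python
from math import comb

# Quick check of the elimination counts for n = 8, q = 2.
n, q = 8, 2
all_eliminations = sum(comb(q + n, s) for s in range(1, n))  # s = 1 .. n-1
only_largest_s = comb(q + n, n - 1)                          # s = n-1 only
```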

Regarding the search for the intersections $$ S_i(k) $$ (31) of the hyperplanes (28) which define the domain $$ P_y(\lambda, \delta, \theta_c) $$, the computations involve solving linear systems of dimension $$ n $$, the dimension of the system output. The number $$ n_{\gamma, k} $$ of intersections is bounded by $$ C_{2(n_y+n)}^n $$, where $$ n_y $$ is itself bounded by $$ C_{q+n}^s $$, whose order of magnitude we have already mentioned. Finally, let us recall that this search for intersections, although it remains a simple operation to implement, must be performed at each time instant.

    The above dimensions can be partly explained by the fact that the polytopic domains are accurate, since on the one hand no approximations were made for their evaluation and on the other hand the dependencies between variables due to uncertainties were taken into account. In the end, it is still possible to use a quantified simplification technique for polytopic domains. For standard zonotopes, this problem is addressed by applying reduction techniques that overapproximate a given zonotope by another with fewer generators [30, 31].

    5. Examples

The first three examples that follow are based on the same system but differ in the nature of the generated data: a pseudo-static system, i.e., one where the matrix $$ X(k) $$ is constant; a system where the directions of the parametric uncertainties are not known a priori; and a dynamic system. The last example is based on the more realistic case of a DC motor whose model has two uncertain parameters.

    5.1. A static system

    The system (1) is used with the definitions

$$ \begin{equation} \begin{array} {c} \tilde X(k) = \begin{bmatrix} -1.5 & 0.5 \\ -1.0 & 3.0 \end{bmatrix}, \quad M =\begin{bmatrix} -0.3 & 0.1 & -0.3 \\ -0.2 & 0.5 & 0.1 \end{bmatrix}, \quad Z = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \theta_c = \begin{bmatrix} 5 \\ 5 \end{bmatrix}, \quad \delta = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad \lambda= \begin{bmatrix} 1 & 1.5 & 2 \end{bmatrix} \end{array} \end{equation} $$

It is considered static because the coefficients of the matrix $$ X(k) $$ are constant; its dimensions are $$ n=2, p=2, q=3 $$. This example of deliberately reduced complexity is used for pedagogical purposes, to simplify the presentation of the results. Moreover, the low dimensions allow a graphical visualization of the parameter and output domains. By referring to the definitions (20) to (27), the reader will easily be able to verify some intermediate results:

$$ \begin{equation} \begin{array} {c} \tilde XM =\begin{bmatrix} 0.35 & 0.10 & 0.50 \\ -0.30 & 1.40 & 0.60 \end{bmatrix} \quad g_1 = \begin{bmatrix} 0.30 \\ 0.35 \end{bmatrix} \quad g_2 = \begin{bmatrix} -1.40 \\ \ \ \ 0.10 \end{bmatrix} \quad g_3 = \begin{bmatrix} -0.60 \\ \ \ \ 0.50 \end{bmatrix} \quad R = \begin{bmatrix} \ \ \ 0.30 &\ \ \ 0.35 \\ -1.40 & \ \ \ 0.10 \\ -0.60 & \ \ \ 0.50 \\ -0.30 &-0.35 \\ \ \ \ 1.40 & - 0.10 \\ \ \ \ 0.60 & -0.50 \\ \ \ \ 1 & \ \ \ 0 \\ \ \ \ 0 & \ \ \ 1 \\ -1 & \ \ \ 0 \\ \ \ \ 0 & -1 \end{bmatrix} \end{array} \end{equation} $$

The reader will have noted that there are five vectors orthogonal to the columns of the $$ \tilde T $$ matrix (one per column), but only the first three are useful for the elimination of uncertain variables. Hereafter we give the first matrix $$ \Gamma_1 $$ (29) extracted from $$ R $$ and the resulting matrix $$ Q_1 $$ (34).

    $$ \begin{equation} \begin{array} {c} \Gamma_1 = \begin{bmatrix} \ \ \ 0.30 & 0.35 \\ -1.40 & 0.10 \end{bmatrix}, \qquad Q_1 =\begin{bmatrix} 0.213& 0.385& 0.528& 0.794& 0.300 \\ 0.385& 1.970& 1.838& 2.173& 1.400 \\ 0.528& 1.838& 1.922& 2.482& 1.338 \\ 0.794& 2.173& 2.482& 3.392& 1.615 \\ 0.300& 1.400& 1.338& 1.615& 1.000 \end{bmatrix} \end{array} \end{equation} $$

The matrices $$ \Gamma_i $$ and $$ Q_i $$, which are deduced from $$ R $$, do not depend on the time $$ k $$, since $$ X $$ is a constant matrix.

    The variables $$ v(k), w(k) $$, ($$ k=1, \dots, 250 $$), were generated from a uniform distribution. Since this is a simulation, we can compare the domains constructed directly from the data and those identified by the procedure proposed above. This comparison can be made on the one hand on the parameters and on the other hand on the outputs of the system.

Figure 3, in the $$ \{ \theta_1, \theta_2\} $$ space, visualizes the values taken by $$ \theta(k) $$ over time (blue points), the orthotope $$ \mathcal P(\theta) $$ calculated from the values of the parameters used for the generation of the data, and the parallelotope $$ \mathcal P_{\theta} (\hat \lambda, \hat \theta, \hat \delta) $$ obtained after identification of the model parameters. We note that $$ \mathcal P_{\theta} (\hat \lambda, \hat \theta, \hat \delta) $$ perfectly frames the values of the parameters $$ \theta(k) $$ with a minimal volume, whose reduction with respect to the orthotope $$ \mathcal P(\theta) $$ can be appreciated.

    Figure 3. Parameters domain (static system).

Figure 4, in the $$ \{ \tilde y_1, \tilde y_2 \} $$ space, relative to the measurement domain, compares the position of the output measurements $$ \tilde y(k) $$ to the domain $$ \mathcal P_{ y}(\hat \lambda, \hat \delta, \hat \theta) $$ constructed after identification of the model parameters. In addition to the visual comparison, a quantitative appreciation of the domains can be made from their volumes [27]: $$ 10.1 $$ for the convex hull defined from the output measurements, $$ 12.1 $$ for the one evaluated from the identified model. Obviously, the difference observed is directly linked to the number of vertices of the polytopic domain, which itself depends on the zonotope of the parameter model, reduced here to three generating vectors for the $$ M $$ matrix.

    Figure 4. Output domain (static system).

In conclusion, independently of the values of the identified parameters, we can see that the domains $$ \mathcal P_{\theta} (\hat \lambda, \hat \theta, \hat \delta) $$ and $$ \mathcal P_{y} (\hat \lambda, \hat \delta, \hat \theta) $$ resulting from these parameters are in accordance with the objective, i.e., they contain all the measurements and their volumes respect the desired precision condition. Table 1 compares the values of the identified parameters to their true values, i.e., those used for the generation of the data. As in any identification method, an important factor is the richness of the information, which concerns both the quantity of data and its distribution in space. It is clear that the nature of the excitations applied to the system is a key element for obtaining pertinent information and, consequently, parameters representative of the real system. We have used several other databases; in particular, for the example treated, the use of 1000 observations makes it possible to obtain identified parameters whose relative deviation from the real parameters is of the order of 1 percent.

    Table 1

    True and identified parameters

$$ \lambda_1 $$ $$ \lambda_2 $$ $$ \lambda_3 $$ $$ \delta_1 $$ $$ \delta_2 $$ $$ \theta_1 $$ $$ \theta_2 $$

Remark 3. In the same spirit, a priori knowledge about the magnitudes of the parameters $$ \lambda $$ and $$ \delta $$ can also be taken into account. If this knowledge is expressed as the inequalities

$$ \begin{equation} \begin{array} {c} \lambda_{min} \leq \lambda \leq \lambda_{max} \\ \delta_{min} \leq \delta \leq \delta_{max} \end{array} \end{equation} $$

it is then sufficient to modify the system (37) by appending the constraints (43).

    5.2. About uncertainty directions

The formulation (1) uses the matrix $$ M $$, which is an image of the directions of the uncertainty influence. If, for many physical systems, a phenomenological modeling makes it possible to know these directions, a certain number of situations are more complex. The case where this matrix $$ M $$ is unknown is presented in the following example, where its search is done in a heuristic way. The following structure was adopted, the 7 directions having been chosen arbitrarily:

    $$ \begin{equation} M^s =\begin{bmatrix} \ \ 0.3 & 0.2 & -0.3 & 0.0 & 0.2 & 0.4 & 0.1\\ -0.1 & 0.5 &\ \ 0.2 & 0.2 & 0.0 & 0.1 & 0.6 \end{bmatrix} \end{equation} $$

The other data of the previous example have been kept. The previously proposed procedure was applied to determine the center $$ \theta_c $$ of the parameter domain and the bounds $$ \lambda $$ and $$ \delta $$ of the uncertainties. Figure 5 visualizes the domain relative to the parameters. The blue markers locate the variable parameters; the solid and dashed contours correspond respectively to the domain $$ \mathcal P_{\theta}(\hat \theta, \hat \lambda, \hat \delta) $$ constructed from the exact directions of the uncertainties and the domain $$ \mathcal P^s_{\theta}(\hat \theta, \hat \lambda, \hat \delta) $$ constructed from the $$ M^s $$ directions. It is clear that the $$ \mathcal P^s_{\theta}(\hat \theta, \hat \lambda, \hat \delta) $$ domain contains all the realizations of the variable parameters of the model.

Figure 5. Parameters domain (overdimensioned model).

    Figure 6 visualizes the identified domain of the outputs. The blue markers locate the measured outputs, the solid and dashed contours correspond respectively to the $$ \mathcal P_y(\hat \theta, \hat \lambda, \hat \delta) $$ domain constructed from the exact directions of the uncertainties and to the $$ \mathcal P^s _y(\hat \theta, \hat \lambda, \hat \delta) $$ domain constructed from the $$ M^s $$ directions. The $$ \mathcal P^s _y(\hat \theta, \hat \lambda, \hat \delta) $$ domain does contain all the measurements and its volume is quite comparable to the actual volume of $$ \mathcal P_y(\hat \theta, \hat \lambda, \hat \delta) $$. Thus, the lack of knowledge of the effective directions of uncertainty was compensated by an over-dimensioned choice of their number and by the ad-hoc identification of the bounds of the uncertainties.

Figure 6. Output domain (overdimensioned model).

    With regard to the identification of the parameters, we obtained $$ \hat{\theta_c}= \begin{bmatrix} 4.978 & 5.056 \end{bmatrix} $$, values very close to the true values. The values of the $$ \lambda $$ parameters are not given, as they cannot be compared to the true values given the chosen dimensions.

    5.3. A time-varying system

    This third example concerns a system simulated with a matrix $$ X $$ of which two terms vary over time:

    $$ \begin{equation} \begin{array} {l l l l} X(k) &=\begin{bmatrix} x_{11}(k) &.5 \\-1 &x_{22}(k) \end{bmatrix}, \quad k=1, \dots, 200 \\ x_{11}(k) & =-0.5+0.5 \cos(0.08 \, k \, \sin(0.08\, k)) \\ x_{22}(k) &= 1+0.5\sin(0.04\, k\, \cos(0.02\, k)) \end{array} \end{equation} $$

    The output $$ y(k) $$ was generated using the same matrices $$ M, Z, \theta_c, \lambda $$ as those used in the example in section 5.1. The matrix $$ X(k)M $$ is now expressed:

$$ X(k)M = \begin{bmatrix} -0.3x_{11}(k)-0.1 & 0.1x_{11}(k)+0.25 & -0.3 x_{11}(k) +0.05 \\ 0.3-0.2x_{22}(k) & -0.1+0.5x_{22}(k) & 0.3+0.1x_{22}(k) \end{bmatrix} $$

and the vectors $$ g_i(k) $$ can be computed analytically at each time instant. Figure 7 shows the evolution of the two components of the output $$ y $$ of the system. After implementing the procedure proposed in sections 3 and 4, the system parameters were identified. Table 2 gathers the values of the identified parameters and those of the parameters used to generate the data. As before, we notice a significant proximity of the identified values to the real values, while knowing that an increase in the number of measurements and their richness can still improve the quality of the estimation. Figures 8 and 9 are relative to the parameter domain on the one hand and to the output domain on the other.

    Figure 7. System outputs (time varying system).

    Table 2

    True and identified parameters

$$ \lambda_1 $$ $$ \lambda_2 $$ $$ \lambda_3 $$ $$ \theta_1 $$ $$ \theta_2 $$

    Figure 8. Parameter domain. Time varying system.

    Figure 9. Output domain. Time varying system.

    In conclusion, we can see that the domains $$ \mathcal P_{\theta} (\hat \lambda, \hat \theta, \hat \delta) $$ and $$ \mathcal P_{y} (\hat \lambda, \hat \delta, \hat \theta) $$ resulting from these parameters are in accordance with the objective, i.e., they contain all the measurements and their volumes respect the desired precision condition.

    5.4. A dynamical system

    The following continuous system

    $$ \begin{equation} \begin{array} {l l l l} { \begin{bmatrix} {\frac{dx_1(t)}{dt} } \\ {\ \frac{dx_2(t)}{dt}} \end{bmatrix} } &=& \begin{bmatrix} { \frac{-f}{J}} & { \frac{K_m}{J} } \\ { \frac{-K_e}{L} }& { \frac{-R}{L} } \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ { \frac{1}{L} } \end{bmatrix} u(t) \\ y_1(t)&=&x_1(t) \\ y_2(t)&=&x_2(t) \end{array} \end{equation} $$

    describes in a conventional way the simplified dynamics of a $$ DC $$ motor where $$ x_1 $$ and $$ x_2 $$ are respectively the motor rotation speed and its excitation current, $$ R $$ and $$ L $$ being the resistance and the inductance of the excitation circuit. The parameters $$ K_m $$ and $$ K_e $$, respectively related to the motor torque and the counter electromotive force, are considered uncertain, because they depend on the temperature of the environment and on disturbances affecting the magnetic flux in the motor. For the simulation, the system is set in discrete form with a sampling period equal to $$ 0.1s $$:

$$ \begin{equation} \begin{array} {l l l l} x_{1, k+1} &= &-0.2\, x_{1, k} +(0.5 -0.3\lambda_1 v_{1, k} +0.1 \lambda_2v_{2, k} -0.3\lambda_3v_{3, k} ) x_{2 , k} \\ x_{2 , k+1} &= &(-0.2-0.2\lambda_1v_{1, k} +0.5 \lambda_2v_{2, k} +0.1\lambda_3v_{3, k} )\, x_{1, k} - 0.9\, x_{2 , k} +0.1\, u_k\\ y_{1, k}&=&x_{1, k}+\delta_1w_{1, k} \\ y_{2, k}&=&x_{2, k}+\delta_2w_{2, k} \end{array} \end{equation} $$

    where the uncertainties affect the two parameters $$ K_m $$ and $$ K_e $$ through $$ \lambda=\begin{bmatrix} \lambda_1& \lambda_2& \lambda_3 \end{bmatrix}^T $$ and the measurements of the two outputs via $$ \delta = \begin{bmatrix} \delta_1 & \delta_2 \end{bmatrix}^T $$. Figure 10 visualizes the input $$ u $$ and the measurements of the two outputs. To keep the presentation simple, we limit ourselves to the estimation of the uncertain parameters of the system. Therefore the model used for this estimation is written :

$$ \begin{equation} \begin{array} {l l l l} x_{1, k+1} &=&-0.2 \, x_{1, k} + \theta_{1, k}\, x_{2, k} \\ x_{2, k+1} &= &\theta_{2, k}\, x_{1, k} -0.9 \, x_{2, k}+ 0.1 \, u_k \\ \theta_{1, k} &=& \theta_{1, c} + m_1^T Diag(\lambda) v_k\\ \theta_{2, k} &=& \theta_{2, c} + m_2^T Diag(\lambda) v_k \\ y_{1, k}&=&x_{1, k}+\delta_1w_{1, k} \\ y_{2, k}&=&x_{2, k}+\delta_2w_{2, k} \end{array} \end{equation} $$

    Figure 10. Input and outputs.
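A simulation of the uncertain discrete-time model (47) can be sketched as follows; the uncertainty magnitudes $$ \lambda, \delta $$, the step input and the horizon are illustrative assumptions, not values from the experiments above:

```python
import numpy as np

# Minimal simulation sketch of the discretised DC-motor model (47).
rng = np.random.default_rng(0)
lam = np.array([0.1, 0.1, 0.1])        # assumed parametric uncertainty bounds
delta = np.array([0.1, 0.1])           # assumed measurement-noise bounds
m1 = np.array([-0.3, 0.1, -0.3])       # uncertainty directions of theta_1 in (47)
m2 = np.array([-0.2, 0.5, 0.1])        # uncertainty directions of theta_2 in (47)
N = 100
x = np.zeros((N + 1, 2))
y = np.zeros((N, 2))
u = np.ones(N)                         # assumed step input

for k in range(N):
    v = rng.uniform(-1, 1, 3)          # unit uncertainties on the parameters
    w = rng.uniform(-1, 1, 2)          # unit measurement noises
    th1 = 0.5 + m1 @ (lam * v)         # theta_{1,k}
    th2 = -0.2 + m2 @ (lam * v)        # theta_{2,k}
    x[k + 1, 0] = -0.2 * x[k, 0] + th1 * x[k, 1]
    x[k + 1, 1] = th2 * x[k, 0] - 0.9 * x[k, 1] + 0.1 * u[k]
    y[k] = x[k] + delta * w            # y_{i,k} = x_{i,k} + delta_i w_{i,k}
```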

It is now necessary to structure this model in the form defined in (1). Following Remark 2, after eliminating the states using the measurements, we obtain:

    $$ \begin{equation} \begin{array} {l l l l} \begin{bmatrix} \overline y_{1, k+1} \\ \overline y_{2, k+1} \end{bmatrix} = \begin{bmatrix} y_{2, k} & 0 \\ 0 & y_{1, k} \end{bmatrix} \begin{bmatrix} \theta_{1, c} \\ \theta_{2, c} \end{bmatrix} + \begin{bmatrix} y_{2, k} & 0 \\ 0 & y_{1, k} \end{bmatrix} \begin{bmatrix} m_{12, \lambda}^T \\ m_{21, \lambda}^T \end{bmatrix} v_k + \begin{bmatrix} 1 &0 \\ 0& 1 \end{bmatrix} \begin{bmatrix} \delta_1 \\ \delta_2 \end{bmatrix} \end{array} \end{equation} $$

    where the terms $$ \delta_1 $$ and $$ \delta_2 $$ are bounded, and with:

    $$ \begin{equation} \begin{array} {l l l l} \begin{bmatrix} \overline y_{1, k+1} \\ \overline y_{2, k+1} \end{bmatrix} = \begin{bmatrix} y_{1, k+1} + 0.2\, y_{1, k} \\ y_{2, k+1} +0.9\, y_{2, k}-0.1\, u_k\end{bmatrix} \end{array} \end{equation} $$
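The change of variables above can be sketched with a hypothetical helper (name and interface are illustrative, not from the paper) that builds $$ \overline y $$ and the diagonal regressors directly from the measured sequences:

```python
import numpy as np

def regression_form(y, u):
    """Rewrite the model as ybar_{k+1} = Phi_k theta_c + bounded terms.

    y : (N, 2) output measurements, u : (N,) input sequence.
    Returns ybar of shape (N-1, 2) and the diagonal regressors Phi of shape (N-1, 2, 2).
    """
    ybar = np.column_stack([
        y[1:, 0] + 0.2 * y[:-1, 0],                 # ybar_1,k+1 = y_1,k+1 + 0.2 y_1,k
        y[1:, 1] + 0.9 * y[:-1, 1] - 0.1 * u[:-1],  # ybar_2,k+1 = y_2,k+1 + 0.9 y_2,k - 0.1 u_k
    ])
    Phi = np.zeros((len(y) - 1, 2, 2))
    Phi[:, 0, 0] = y[:-1, 1]   # theta_1,c multiplies y_2,k
    Phi[:, 1, 1] = y[:-1, 0]   # theta_2,c multiplies y_1,k
    return ybar, Phi
```

With noise-free data and constant parameters, `ybar` equals `Phi @ theta_c` exactly, which gives a quick sanity check of the rearrangement.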

    The $$ M $$ matrix is identical to the one used in Section 5.1. To limit the number of graphical and numerical results, Figure 11 presents only the parameter domain. The blue points, in the $$ \theta_{1}, \theta_{2} $$ plane, correspond to the values of $$ \theta (k) $$ from the simulation. The blue line is the contour of the minimal-volume convex domain containing the values of $$ \theta (k) $$. The red line is the contour of the identified domain $$ P_{\theta}(\hat \lambda, \hat \delta, \hat \theta) $$, which is a very satisfactory approximation of the domain of the parameters used in the simulation.

    Figure 11. Parameter domain, taking into account the interactions between the model parameters.
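The blue contour of Figure 11, i.e. the minimal-volume convex set containing the cloud of simulated $$ \theta (k) $$ values, can be approximated by the convex hull of those points. The sketch below uses `scipy.spatial.ConvexHull` with illustrative $$ \lambda $$ magnitudes; it is only a visualization aid, not the identification algorithm of the paper.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

# Simulated time-varying parameters theta_k = theta_c + m_i^T Diag(lambda) v_k
# (the lambda magnitudes are illustrative assumptions).
lam = np.array([0.05, 0.05, 0.05])
v = rng.uniform(-1, 1, (500, 3))
theta = np.column_stack([
    0.5 + v @ (np.array([-0.3, 0.1, -0.3]) * lam),
    -0.2 + v @ (np.array([-0.2, 0.5, 0.1]) * lam),
])

hull = ConvexHull(theta)        # smallest convex set containing the theta_k cloud
area = hull.volume              # in 2-D, .volume is the enclosed area
contour = theta[hull.vertices]  # vertices of the bounding polygon, for plotting
```

Because each $$ \theta (k) $$ is an affine image of $$ v_k \in [-1, 1]^3 $$, the cloud fills a zonotope, and the sample hull converges to that zonotope as the number of points grows.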

    The last graph, Figure 12, compares the results of our approach with those of a simpler approach that does not take into account the couplings between model parameters induced by the uncertainties. The latter is implemented by using only the inequalities (19) to define the parameter domain. Comparing Figures 11 and 12 shows the benefit of taking the parameter couplings into account.

    Figure 12. Parameter domain without taking into account the interactions between the model parameters.

    6. Conclusion

    An approach that explains the whole set of measurements while optimizing an accuracy criterion has been proposed in the most general case where the parameters vary in time, without any assumption on their rate of variation. Moreover, characterizing the uncertainties of a MIMO model highlights the dependencies between the model outputs, dependencies created by the parameters to be estimated. A technique taking these dependencies into account, combined with the computation of an accuracy criterion, has been proposed. It provides an optimal solution (with respect to the accuracy criterion) in the form of a set of parameters, its central value and the bounds of the equation error.

    In the future, this approach could be extended to other model structures. Moreover, it seems relevant to revisit and deepen a remark made earlier about the richness of the excitation signals, namely: how should one choose, when possible, the excitations applied to a system with uncertain parameters so as to best characterize the bounds of these parameters? A second direction to explore concerns the presence of nonlinearities. When these nonlinearities make a bounded contribution to the evolution of the state variables of the system, to what extent can their effects also be modeled by bounded time-varying parameters?


    Authors' contributions

    Wrote and reviewed the manuscript: Ragot J

    Availability of data and materials

    Not applicable.

    Financial support and sponsorship


    Conflicts of interest

    The author declared that there are no conflicts of interest.

    Ethical approval and consent to participate

    Not applicable.

    Consent for publication

    Not applicable.

    Copyright

    © The Author(s) 2022.


    • 1. Bertin E, Hérissé B, dit Sandretto JA, Chapoutot A. Spatio-temporal constrained zonotopes for validation of optimal control problems. In: CDC 2021: 60th Conference on Decision and Control Dec 2021, Austin, United States. Available from: [Last accessed on 17 Jun 2022].

    • 2. Scott JK, Raimondo DM, Marseglia GM, Braatz RD. Constrained zonotopes: A new tool for set-based estimation and fault detection. Automatica 2016;69:126-36.

    • 3. Rego BS, Raffo GV, Scott JK, Raimondo DM. Guaranteed methods based on constrained zonotopes for set-valued state estimation of nonlinear discrete-time systems. Automatica 2020:111.

    • 4. Combastel C. Zonotopes and Kalman observers: Gain optimality under distinct uncertainty paradigms and robust convergence. Automatica 2015;55:265-273.

    • 5. Jianwang H, Ramirez-Mendoza RA. Zonotope parameter identification for piecewise affine system. Syst Sci Control Eng 2020;8:232-40.

    • 6. Wang Y, Puig V, Cembran G. Set-membership approach and Kalman observer based on zonotopes for discrete-time descriptor systems. Automatica 2018;93:435-43.

    • 7. Zhang W, Wang Z, Raissi T, Dinh TN, Dimirovski G. Zonotope-based interval estimation for discrete-time linear switched systems. In: 21st IFAC World Congress, Berlin, Germany, 2020. Available from: [Last accessed on 17 Jun 2022].

    • 8. Zhou M, Cao Z, Zhou M, Wang J, Wang Z. Zonotoptic fault estimation for discrete-time LPV systems with bounded parametric uncertainty. IEEE Trans Intell Transport Syst 2020;21:690-700.

    • 9. Fogel E, Huang YF. On the value of information in system identification-bounded noise case. Automatica 1982;18:229-38.

    • 10. Milanese M, Belforte G. Estimation theory and uncertainty intervals evaluation in presence of unknown but bounded errors: linear families of models. IEEE Trans Automat Control 1982;27:408-13.

    • 11. Walter E, Piet-Lahanier H. Exact and recursive description of the feasible parameter set for bounded error models. In: 26th IEEE Conference on Decision and Control. IEEE 1987;26: 1921-22.

    • 12. Mo SH, Norton JP. Fast and robust algorithm to compute exact polytope parameter bounds. Math Comput Simul 1990;32:481-93.

    • 13. Casini M, Garulli A, Vicino A. A constraint selection technique for recursive set membership identification. IFAC Proceedings 2014;47:1790-5.

    • 14. Milanese M, Norton JP, Piet-Lahanier H, Walter E. Bounding approaches to system identification. Springer Science & Business Media, 2013. Available from: [Last accessed on 17 Jun 2022].

    • 15. Clement T, Gentil S. Reformulation of parameter identification with unknown-but-bounded errors. Math Comput Simul 1988;30:257-70.

    • 16. Norton JP. Identification of parameter bounds for ARMAX models from records with bounded noise. Int J Control 1987;45:375-90.

    • 17. Belforte G, Bona B, Cerone V. Parameter estimation algorithm for a set-membership description of uncertainty. Automatica 1990;26:887-98.

    • 18. Jaulin L, Walter E. Set inversion via interval analysis for nonlinear bounded-error estimation. Automatica 1993;29:1053-64.

    • 19. Bravo JM, Alamo T, Camacho EF. Bounded error identification of systems with time-varying parameters. IEEE Trans Automat Control 2006;51:1144-50.

    • 20. Jaulin L. Reliable minimax parameter estimation. Reliable Comput 2001;7:231-46.

    • 21. Fernández-Cantí RM, Blesa J, Puig V, Tornil-Sin S. Set-membership identification and fault detection using a bayesian framework. Int J Syst Sci 2016;47:1710-24.

    • 22. Valero CE, Villanueva ME, Houska B, Paulen R. Set-based state estimation: a polytopic approach. IFAC-PapersOnLine 2020;53:11277-82.

    • 23. Lu QH, Fergani S, Jauberthie C. A new scheme for fault detection based on Optimal Upper Bounded Interval Kalman Filter. IFAC-PapersOnLine 2021;54:292-7.

    • 24. Ploix S, Adrot O, Ragot J. Parameter uncertainty computation in static linear models. Proceedings of the 38th IEEE Conference on Decision and Control (Cat. No. 99CH36304). IEEE 1999;2:1916-21.

    • 25. Ragot J, Maquin D, Adrot O. Parameter uncertainties characterisation for linear models. IFAC Proceedings Volumes 2006;39:581-6.

    • 26. Neumaier A. Interval methods for systems of equations. Cambridge: Cambridge University Press; 1990.

    • 27. Lasserre JB. An analytical expression and an algorithm for the volume of a convex polyhedron. J Optim Theory Appl 1983;39:363-77.

    • 28. Aderemi AO, Olusola AA. A new approach for kuhn-tucker conditions to solve quadratic programming problems with linear inequality constraints. Math Comput Sci 2020;5:86-92.

    • 29. Gill PE, Murray W, Wright MH. Practical Optimization. Society for Industrial and Applied Mathematics 2019.

    • 30. Seladji Y, Qu Z. Int J Comput Math: Comput Syst Theory 2018;3:215-29.

    • 31. Yang X, Scott JK. A comparison of zonotope order reduction techniques. Automatica 2018;95:378-84.


    Cite This Article

    Ragot J. Guaranteed consistency between measurements and parameter systems with correlated additive and multiplicative uncertainties. Complex Eng Syst 2022;2:10.



