Quadratic PLS regression applied to external preference mapping


Food Quality and Preference 32 (2014) 28–34


V. Cariou, S. Verdun, E.M. Qannari
LUNAM University, ONIRIS, USC "Sensometrics and Chemometrics Laboratory", Nantes F-44322, France
INRA, Nantes F-44316, France


Article history: Received 2 October 2012; Received in revised form 26 June 2013; Accepted 8 July 2013; Available online 18 July 2013.
Keywords: Quadratic PLS regression; External preference mapping; Multivariate modeling; Multivariate mapping; Sensometrics

Abstract

In sensory analysis, preference mapping covers several modeling techniques applied to hedonic data for a better understanding of consumers' liking and product optimization. External preference mapping aims at relating consumers' hedonic data to sensory data in order to identify liking drivers. Classically, preference mapping proceeds in two steps: the first step consists in defining the perceptual space on which preference data are regressed, and the second step identifies the predictive model according to this perceptual space. The strategy of analysis (quadratic PLS regression) presented herein fits within the framework of PLS regression. An optimal perceptual space is sought by taking account of the linear and the quadratic relationships between hedonic and sensory data. Quadratic PLS is compared to other methods of analysis on the basis of a case study related to coffee data.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

In sensory analysis, preference mapping covers several modeling techniques applied to hedonic data in order to better understand consumers' liking (Greenhoff & MacFie, 1994). The basis of preference mapping is to achieve a multidimensional representation of the products as a decision tool to relate the acceptability of the sensory aspects of a product (McEwan, 1996). There are two kinds of methods: internal and external analysis (Carroll, 1972). "Internal preference analysis gives precedence to consumers' preferences and uses perceptual information as a complementary source of information. External analysis, on the other hand, gives priority to perceptual information by building the product map based on attribute ratings and only fits consumer preferences at a later stage." (van Kleef, van Trijp, & Luning, 2006, p. 388). In internal preference mapping (MDPref), only hedonic data are considered to produce this multidimensional representation. In external preference mapping (PrefMap), the aim is to relate liking data to a perceptual map of the products obtained from some potential explanatory variables such as sensory descriptors (Schiffman, Reynolds, & Young, 1981). The multidimensional unfolding approach, also known as ideal point based preference mapping (Coombs, 1964), is yet another popular method to analyze hedonic data. The goal is to estimate coordinates in a multidimensional space to represent the products and consumers with the


understanding that products close to a consumer are preferred by that consumer to products far removed from that consumer point. Two ideal point models that have been applied recently to sensory and preference studies are Landscape Segmentation Analysis (LSA) (Rousseau & Ennis, 2008) and Euclidean Distance Ideal Point Mapping (EDIPM) (Meullenet, Lovely, Threlfall, Morris, & Striegler, 2008). The PrefMap strategy aims at relating consumers’ liking data to sensory data in order to uncover hedonic drivers and eventually achieve product optimization (Danzart, 1998; McEwan, 1996). The basic idea is to project consumer liking onto a perceptual map of the products generally obtained from sensory descriptors. Various approaches have been proposed to address this issue. Among these methods, we single out the most commonly used one which consists in applying principal component analysis (PCA) to the sensory data (McEwan, 1996). Then, each consumer’s liking scores are regressed onto the space spanned by the first principal components using a vector or a polynomial model. More recently, new developments have been made in order to achieve a balance between the complexity of the fitting model and the number of retained principal components for the product space (Faber, Mojet, & Poelman, 2003). It is also worth mentioning that a method of analysis called Probabilistic Unfolding (Rousseau, Ennis, & Rossi, 2011) was proposed as an alternative to PrefMap. Both methods (i.e. PrefMap and Probabilistic Unfolding) use the same data (i.e. sensory and hedonic data) but Probabilistic Unfolding follows a reversed process since it starts by setting up the product space from the hedonic data and, thereafter, regresses the sensory data upon this space.


Since our strategy of analysis fits within the general approach of PrefMap, we shall stick, from now on, to this framework. As mentioned above, in PrefMap we set up individual models by fitting each consumer’s hedonic scores onto the perceptual map obtained by means of sensory data. In order to sum up all these individual models, Danzart (1998) proposed a strategy of analysis which consists in aggregating all the models into a single model which is, thereafter, depicted as a response surface plot. Other alternative methods have also been proposed (Faber et al., 2003; Meullenet et al., 2008). A review and a comparison of these different techniques are presented in (Yenket, Chambers, & Adhikari, 2011). In the most commonly used approach of PrefMap, the relationships of hedonic data and sensory characteristics are investigated by means of multiple linear regression. More precisely, the scores of liking given by the consumers are regressed upon the first two or three principal components of the sensory data. Several models can be set up among which we distinguish the vector model and the quadratic model. The vector model consists in a regression of the liking scores on the retained principal components. In the quadratic model, squared and interaction terms are included in order to take account of non-linear relationships between liking and sensory data (Greenhoff & MacFie, 1994). Obviously, the limitation of this approach is that the retained principal components may not be the most relevant for explaining the liking scores variations (Meullenet et al., 2008). In order to overcome this shortcoming, it was advocated using PLS regression (Martens & Næs, 1989) instead of PCA in order to determine the perceptual space (Helgesen, Solheim, & Næs, 1997; Husson & Pagès, 2003; Tenenhaus, Pagès, Ambroisine, & Guinot, 2005; Sveinsdóttir et al., 2009). However, this strategy of analysis is relevant insofar as the vector model is considered. Indeed, the PLS components take account of only the linear relationships between sensory and liking data. As a result, some directions in the sensory space which might explain non-linear variations are overlooked. The aim of this paper is precisely to move one step further than PLS regression by proposing a strategy of determination of latent variables (or components) in the sensory space that take account of both linear and non-linear relationships between sensory and liking data. This is achieved by performing quadratic PLS following (and extending) the strategy of analysis proposed by Höskuldsson (1992). In the scope of non-linear PLS regression, the originality of this paper is to show how to set up an ideal point model using a quadratic PLS2 regression. Quadratic PLS2 was introduced by Wold, Kettaneh-Wold, and Skagerberg (1989). Given two blocks of data, called X and Y, the basic idea is to relate the block scores T (summarizing X) and U (summarizing Y) with a quadratic polynomial function. It is worth noting that in quadratic PLS2 regression, it is not the original variables of X that are squared as this is done by some authors (Clementi, Cruciani, Curti, & Skagerberg, 1989). In the proposed algorithm, PLS components together with their squared effects and their interactions are used to explain the consumers’ hedonic data. The approach adopted herein is based on Höskuldsson’s quadratic PLS1 algorithm (Höskuldsson, 1992) revisited by Verdun, Hanafi, Cariou, and Qannari (2012). 
The present paper outlines how to extend this approach to the case of a block of Y data and how to apply it to the particular case of preference mapping. The rest of the paper is organized as follows. In Section 2, we start by defining the quadratic PLS regression when applied to several Y responses. In Section 3, we give an outline of PrefMap and we show how to use quadratic PLS regression within this framework. Thereafter, quadratic PLS regression is compared to other methods on the basis of a case study related to coffee data. Finally, the results are discussed.
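As an aside for readers who wish to reproduce the classical PrefMap workflow described above (PCA on the sensory data followed by a vector or quadratic model for each consumer), a minimal sketch in Python is given below. It is not part of the original study; the data shapes (8 products, 23 attributes) and variable names are illustrative only.

```python
# Classical external preference mapping sketch: PCA on the sensory profiles,
# then a vector (linear) and a quadratic model regressing one consumer's
# liking scores on the first two principal components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_sensory = rng.normal(size=(8, 23))   # products x sensory attributes (placeholder data)
y_liking = rng.normal(size=8)          # one consumer's (centered) liking scores

X_centered = X_sensory - X_sensory.mean(axis=0)
pcs = PCA(n_components=2).fit_transform(X_centered)   # perceptual map (PC1, PC2)

# Vector model: liking ~ PC1 + PC2
vector_model = LinearRegression().fit(pcs, y_liking)

# Quadratic model: liking ~ PC1 + PC2 + PC1^2 + PC2^2 + PC1*PC2
quad_terms = np.column_stack([pcs, pcs[:, 0] ** 2, pcs[:, 1] ** 2, pcs[:, 0] * pcs[:, 1]])
quadratic_model = LinearRegression().fit(quad_terms, y_liking)

print(vector_model.score(pcs, y_liking),            # R2 of the vector model
      quadratic_model.score(quad_terms, y_liking))  # R2 of the quadratic model
```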


2. Quadratic PLS revisited

2.1. Quadratic PLS models

Given two datasets supposed to be column centered, X (e.g. sensory data) and Y (liking scores), PLS2 regression of Y upon X consists in determining, in a first step, a latent variable (or component), denoted by t, in X (i.e. a linear combination of the variables in X) and a latent variable, denoted by u, in Y (i.e. a linear combination of the variables in Y) such that cov²(u, t) is as large as possible, where cov(u, t) stands for the covariance between t and u. In subsequent steps, other pairs of latent variables are determined following the same procedure after imposing orthogonality constraints. Two remarks are worth mentioning. The first is that, unlike PCA, the components in X are determined in such a way as to take account of Y. The second is that, being based on the covariance criterion, the components determined by PLS2 regression take account of only the linear relationships between X and Y.

In order to take account of non-linear relationships, several approaches have been advocated. We single out those approaches that fit within the general framework of PLS regression. Durand and Sabatier (1997) proposed a strategy of analysis called PLSS which consists in transforming the X variables by B-spline functions prior to performing a PLS2 regression. An illustration of this technique of analysis within the context of preference mapping was outlined in de Kermadec, Durand, and Sabatier (1997). A more straightforward approach (Clementi et al., 1989) consists in augmenting the dataset X by including the squared and cross-product terms (interactions) and, thereafter, performing a PLS2 regression of Y upon the augmented matrix thus obtained. As noted by Höskuldsson (1992), this strategy tends to set up PLS components mostly dominated by squared and cross-product terms because, on the one hand, these latter terms are more numerous than the linear terms and, on the other hand, their variances may be much larger than the variances of the original X variables. Wold et al. (1989) proposed an extension of PLS2 regression to take account of quadratic effects. In the first step, this strategy of analysis seeks two components t (in X) and u (in Y) where u is approximated by a quadratic function of t. However, this approach involves rather complicated computations and has some convergence problems. Moreover, it is not clear which criterion is being optimized. Baffi, Martin, and Morris (1999) proposed an improved algorithm for this strategy of analysis, but the rationale (criterion being optimized) behind this method remains unclear.

The method of analysis that we propose herein is in line with the method proposed by Höskuldsson (1992) under the name quadratic PLS1 regression. This latter method concerns the case where there is only one variable to be predicted (i.e. univariate Y). Our contribution to this approach is twofold:

i. We propose a new and, although iterative, simple algorithm whose convergence is guaranteed.
ii. We extend this approach to the multivariate case (i.e. multivariate Y), which is indeed more appropriate for preference mapping.

2.2. Höskuldsson's QPLS1 revisited

In the case of one response variable y, Höskuldsson (1992) extended the linear criterion C_PLS = cov²(u, t) to the quadratic case by introducing squared and cross-product terms. Let us assume that k − 1 components (linear combinations of the X variables) have already been determined. The subsequent step consists in determining a component tk.
This is achieved by maximizing the following criterion:


C_{QPLS} = \mathrm{cov}^2(y, t_k) + \mathrm{cov}^2(y, t_k^2) + \mathrm{cov}^2(y, t_k t_1) + \dots + \mathrm{cov}^2(y, t_k t_{k-1})     (1)
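As an illustration, criterion (1) can be evaluated numerically for a candidate loading vector. The sketch below (Python, not part of the original paper) assumes column-centered X and y and a list of previously extracted components; the function name is ours.

```python
# Evaluate criterion (1) for a candidate loading vector w: squared covariances of
# y with t_k = Xw, with t_k^2, and with the cross-products t_k * t_j.
import numpy as np

def qpls1_criterion(w, X, y, previous_ts):
    """C_QPLS of Eq. (1) for a unit-length loading vector w."""
    w = w / np.linalg.norm(w)
    t_k = X @ w
    cov2 = lambda a, b: np.cov(a, b)[0, 1] ** 2
    value = cov2(y, t_k) + cov2(y, t_k ** 2)
    for t_j in previous_ts:          # cross-product terms t_k * t_j
        value += cov2(y, t_k * t_j)
    return value
```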

It is clear that, in QPLS1, components are sought in such a way that they take account of the linear and the quadratic relationships with y. In order to solve this apparently tricky problem, Höskuldsson (1992) proposed a simple algorithm. However, the convergence of this algorithm is not guaranteed and we observed that, indeed, it has some convergence problems. This is the reason why we proposed a revised algorithm (Verdun et al., 2012) for which the convergence is proven. The details regarding this algorithm are reported in Appendix A. A further adjustment to QPLS1 is also needed. In the first step, the criterion to be maximized (C_QPLS) takes account of the variance of the latent variable together with the variance of its squared term. Moreover, when more than one latent variable is sought, the cross-product terms of each latent variable with the latent variables previously determined are also considered in the QPLS1 criterion. From a practical point of view, the latent variable is not expressed in the same scale unit as the remaining terms. Therefore, depending on the data at hand, one of the terms may dominate the criterion to be maximized because its associated variance may be relatively larger than the variance of the other terms. In order to cope with this problem, we propose to introduce a scaling factor a (with a between 0 and 1) to accommodate the latent variable in the criterion and (1 − a) to accommodate the squared and cross-product terms. More precisely, the expression to be maximized is:

a \mathrm{cov}^2(y, t_k) + (1-a) \mathrm{cov}^2(y, t_k^2) + (1-a) \mathrm{cov}^2(y, t_k t_1) + \dots + (1-a) \mathrm{cov}^2(y, t_k t_{k-1})     (2)

Clearly, the rationale behind the scaling factor is to balance the contribution of the linear term on the one hand and the quadratic terms on the other hand. Note that for a = 1, only the linear term is considered and the method of analysis amounts to PLS1 regression. Conversely, for a = 0, only the quadratic terms are considered. More generally, the parameter a needs to be tuned to the data at hand. For this purpose, we propose to test several values of a between 0 and 1 (e.g. increasing a from 0 to 1 in steps of 0.1). For each value of a, the R-squared criterion associated with the quadratic regression of y upon t1, t2, ..., tk is computed. Eventually, we retain the value of the parameter a leading to the best fit (i.e. the largest R-squared). Some further clarification regarding the tuning parameter a is needed. Without this parameter, either the linear or the quadratic term in the criterion would dominate, depending on the scaling effect. Ultimately, however, the latent variables obtained by means of QPLS are used in a quadratic regression to predict the hedonic scores. This is the reason why we propose to balance the contribution of the linear and quadratic terms in the optimisation criterion, the final choice of the tuning parameter being made on the basis of the prediction ability of the (quadratic) model.
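A minimal sketch of this tuning procedure is given below. It assumes a fitting routine fit_qpls1(X, y, a, n_components) returning the QPLS1 components as columns of a matrix (a hypothetical name, not from the paper), and selects a on the grid 0, 0.1, ..., 1 according to the R-squared of the quadratic regression.

```python
# Grid search over the scaling factor a: for each value, fit QPLS1 components
# and score them with the R2 of the quadratic regression of y on the components.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def quadratic_r2(T, y):
    """R2 of the quadratic regression of y on the components in T (linear,
    squared and interaction terms)."""
    Z = PolynomialFeatures(degree=2, include_bias=False).fit_transform(T)
    return LinearRegression().fit(Z, y).score(Z, y)

def select_a(X, y, n_components, fit_qpls1):
    best_a, best_r2 = None, -np.inf
    for a in np.arange(0.0, 1.01, 0.1):
        T = fit_qpls1(X, y, a, n_components)   # components t_1, ..., t_k (n x k)
        r2 = quadratic_r2(T, y)
        if r2 > best_r2:
            best_a, best_r2 = a, r2
    return best_a, best_r2
```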

2.3. Extension to the multivariate case

We propose an extension of QPLS to the multivariate setting, which is more appropriate to the context of preference mapping. For this purpose, we consider the same criterion as for QPLS1 regression but we introduce a latent variable associated with the Y variables instead of a single y. More precisely, let us assume that at stage k, components t1, t2, ..., tk−1 have been determined. We seek a component tk (linear combination of the X variables) and a component u (linear combination of the Y variables) so as to maximize the criterion:

C_{QPLS2} = a \mathrm{cov}^2(u, t_k) + (1-a)\{ \mathrm{cov}^2(u, t_k^2) + \mathrm{cov}^2(u, t_k t_1) + \dots + \mathrm{cov}^2(u, t_k t_{k-1}) \}     (3)

The solution of this optimisation problem is detailed in Appendix B.

3. Application of QPLS in the scope of PrefMap

The data, which are used to illustrate QPLS and to compare this method to alternative strategies of analysis, pertain to a preference study on coffees (ESN, 1996). A panel of 160 French consumers was asked to express their liking of eight varieties of coffee by assigning a liking score on a 9-point scale. Moreover, the sensory profiles of the eight varieties of coffee were established by a panel of trained assessors using 23 sensory attributes mainly related to taste (e.g. burnt, sour, chocolate, metallic, ...) and odor (e.g. chocolate, intense, moisty, sweet, ...).

3.1. Segmentation of consumers

In a preliminary study, the liking scores (centered for each consumer) were subjected to a cluster analysis in order to group the consumers into homogeneous segments (Vigneau, Qannari, Punter, & Knoops, 2001). Eventually, three clusters were retained, containing respectively 50, 45 and 65 consumers. The choice of a partition into three clusters stems from the evolution of the aggregation criterion (Ward criterion), which showed a clear jump between a partition into three clusters and a partition into two clusters (results not shown herein). In each cluster, the liking scores were averaged over consumers, leading to a vector of scores which is assumed to be representative of the scores in the cluster under consideration. While the variances of the latent variables associated with the first and the second clusters are larger than 2, the latent variable associated with the third one has a variance equal to 0.2. Further analyses (not shown herein) indicated that the third cluster was composed of those consumers who did not express any marked preference for any of the products (i.e. small variances of the scores). This cluster of consumers is not considered any further in the subsequent study. Eventually, the matrix Y to be related to the sensory data is formed of two vectors of scores respectively associated with the two remaining segments of consumers. The (centered) scores in matrix Y are depicted in Fig. 1. It can be seen that the consumers in group 1 do not like coffees 2, 5 and 7 and prefer coffees 3 and 8. The consumers in group 2 do not like coffees 2, 4 and 7 and prefer coffees 1, 6 and 8.

3.2. Quadratic PLS2 model

Since a global interpretation was sought, QPLS2 regression was applied considering the sensory descriptors as X matrix and

Fig. 1. Centred average scores in cluster 1 and cluster 2.


the average vectors of the two groups as Y matrix. A model with two components was eventually retained. The selected parameter a was 0 for the first component and 0.25 for the second one. The resulting model can be interpreted as a pure quadratic model for the first component and a compromise between a linear model and a pure quadratic one for the second component. Regarding the tuning parameter, the precise value a = 0.25 cannot be directly interpreted as such. It merely stands as a scaling factor which balances the variances of the linear and the quadratic effect. The interesting fact is that the latent variable thus obtained takes account of a linear and a quadratic effect and ensures a good fit of the quadratic model beyond the discrepancy of the variance of this latent variable and its squared values. Outputs of QPLS2 regression can be interpreted in the same way as the usual PLS regression results. The percentage of total variance in X (resp. Y) explained by the first latent variable t1 is 60% (resp. 34%). The first two QPLS components t1 and t2 explain up to 79% (resp. 82%) of the total variance in X (resp. Y). Similarly to PLS2 regression, VIP (Variable Importance in the Projection) indices can be computed for QPLS2 regression. These indices highlight the importance of the sensory variables in explaining the variation in the liking data. For a quadratic model based on the two first QPLS2 components t1 and t2, VIP indices are depicted in Fig. 2. From a practical point of view, VIP indices above 1 should be retained. Namely, these variables include: ‘odor of chocolate’, ‘green odor’, ‘odor of grilled coffee’, ‘odor of moist’, ‘goat odor’, ‘sour taste’, ‘green taste’ and ‘aftertaste intensity’. In Fig. 3, we show a biplot representing the correlations of the sensory variables with a VIP index larger than 1 with QPLS2 components t1 and t2 and the configuration of the eight varieties of coffee. It can be seen that the first axis opposes the coffees having an intense taste and odor to coffees having a green or chocolate odor and green taste. The second axis opposes the coffees having a chocolate or a grilled coffee odor to the coffees having a sour taste. A quadratic model was fitted to predict y1 (the average vector of scores in group 1) and y2 (the average vector of scores in group 2) from QPLS2 components (R2 = 0.986 for y1 and R2 = 0.832 for y2). The response surfaces associated with these

Fig. 2. Variable Importance in the Projection of the sensory variables associated with QPLS2 (The prefix ‘o’ stands for odor, ‘t’ for taste, ‘inm’ for in mouth and ‘aftert’ for aftertaste).


two models are depicted in Fig. 4, which represents contour plots drawn in the plane spanned by the first two QPLS2 components. Each contour represents a curve of constant level of liking. It turns out that the consumers in the first group prefer the coffees with a sweet taste and odor. They reject very intense coffees and also coffees with a high green taste and odor. The consumers in the second group prefer coffees with green odor and taste and reject sour coffees. The response surfaces for both groups of consumers correspond to saddle points, meaning that they do not identify ideal points but anti-ideal points. The further we move away from these anti-ideal points, the higher the liking becomes.

3.3. Comparison of models

By way of comparing methods, the standard method of PrefMap, based on setting up vector or quadratic models using the first two components obtained by means of PCA applied to the sensory data, was performed. We also performed on the same data other alternative methods of analysis:

i. Vector and quadratic models established on the basis of the first two PLS components;
ii. Quadratic PLS regression introduced by Wold et al. (1989);
iii. Quadratic PLS regression introduced by Baffi et al. (1999).

For each method of analysis, we retained two components and we set up a vector model and a quadratic one on the basis of these components. The models were compared on the basis of the R2 coefficients associated with the average scores in cluster 1 (y1) and in cluster 2 (y2) (Table 1). From Table 1, it can be seen that for cluster 1, the quadratic models obtained by means of all the methods perform rather well. Quadratic QPLS outperforms all the models, closely followed by quadratic PCA. For the second cluster of consumers, quadratic QPLS again outperforms all the methods, followed by quadratic PCA. It can also be seen that for this cluster, the performance of the remaining models is very poor. The poor performance of the PLS models is rather surprising. As we mentioned before, the liking data are dominated by non-linear relationships with the sensory data. In this particular case, it can be seen that the determination of latent variables based solely on a linear model (e.g. PLS) degrades the global performance of PrefMap. Conversely, the more straightforward approach consisting in applying PCA for the determination of the latent variables seems to be more efficient in the present case study.

In this application, we have focused on setting up a global model by applying QPLS2 on the average vectors of the scores of the clusters of consumers. This approach (i.e. QPLS2 regression) is relevant when there are several Y variables (several consumers or several clusters of consumers). By performing QPLS2, a unique perceptual space is obtained instead of the separate spaces that would result from using QPLS1 for each group. Indeed, since the latent variables are common to all these Y variables, we can provide an overall and synthetic graphical display. Moreover, one of the referees suggested taking account of the different sizes of the clusters by weighting the average liking scores associated with the various clusters. This can be achieved by considering, for cluster k, the weighted variable \sqrt{n_k/n}\, y_k instead of y_k, where n_k stands for the size of cluster k and n stands for the size of the whole panel. For a closer investigation of each cluster, we suggest applying QPLS2 to the subset of consumers belonging to the same cluster.
In this approach, a graphical display is associated with each group of consumers. By way of comparison, we applied this approach to each of the two clusters. The results (not shown herein) bear a


Fig. 3. Biplot representing the correlations of a subset of sensory variables with the first two QPLS2 components and the configuration of the eight varieties of coffee.

Fig. 4. Response surfaces associated with the two segments of consumers.
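Response surfaces such as those in Fig. 4 can be reproduced with a few lines of code. The sketch below (Python, not from the paper) fits a quadratic surface of the liking scores on two component scores and draws iso-liking contours; the arrays t1, t2 and y are placeholders for the QPLS2 scores and a cluster's centered average liking scores.

```python
# Fit a quadratic surface y ~ t1 + t2 + t1^2 + t2^2 + t1*t2 and draw contour lines
# of constant predicted liking over the plane spanned by the two components.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

def quad_design(t1, t2):
    return np.column_stack([t1, t2, t1 ** 2, t2 ** 2, t1 * t2])

rng = np.random.default_rng(1)
t1, t2 = rng.normal(size=8), rng.normal(size=8)   # component scores of the 8 coffees (placeholders)
y = rng.normal(size=8)                            # centered average liking scores (placeholder)

model = LinearRegression().fit(quad_design(t1, t2), y)

g1, g2 = np.meshgrid(np.linspace(t1.min(), t1.max(), 100),
                     np.linspace(t2.min(), t2.max(), 100))
surface = model.predict(quad_design(g1.ravel(), g2.ravel())).reshape(g1.shape)

plt.contour(g1, g2, surface, levels=10)   # curves of constant predicted liking
plt.scatter(t1, t2)                       # product positions in the component plane
plt.xlabel("t1"); plt.ylabel("t2")
plt.show()
```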

Table 1
R-squared from different models associated with the average scores in cluster 1 and the average scores in cluster 2.

Model              Cluster 1    Cluster 2
PCA vectorial      0.45         0.421
PCA quadratic      0.961        0.733
PLS vectorial      0.603        0.483
PLS quadratic      0.855        0.59
Wold vectorial     0.611        0.458
Wold quadratic     0.86         0.561
Baffi vectorial    0.593        0.402
Baffi quadratic    0.885        0.453
QPLS vectorial     0.392        0.406
QPLS quadratic     0.986        0.832

high similarity to those depicted in Fig. 4. Notwithstanding, the model for cluster 2 was easier to interpret when applying QPLS2 on the whole subset of consumers belonging to this cluster.
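A sketch of the comparison harness behind Table 1 is given below (Python, not from the paper). Only the PCA and PLS2 perceptual spaces are shown; the quadratic PLS variants would plug into the same loop. X_sensory and Y_clusters are placeholders for the sensory data and the two cluster-average score vectors.

```python
# For each two-component perceptual space, fit a vector and a quadratic model
# to each cluster's average liking scores and collect the R2 values.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X_sensory = rng.normal(size=(8, 23))    # 8 coffees x 23 attributes (placeholder)
Y_clusters = rng.normal(size=(8, 2))    # average scores of clusters 1 and 2 (placeholder)

Xc = X_sensory - X_sensory.mean(axis=0)
spaces = {
    "PCA": PCA(n_components=2).fit_transform(Xc),
    "PLS": PLSRegression(n_components=2).fit(Xc, Y_clusters).x_scores_,
}

def quad(T):
    return np.column_stack([T, T[:, 0] ** 2, T[:, 1] ** 2, T[:, 0] * T[:, 1]])

for name, T in spaces.items():
    for j, y in enumerate(Y_clusters.T, start=1):
        r2_vec = LinearRegression().fit(T, y).score(T, y)
        r2_quad = LinearRegression().fit(quad(T), y).score(quad(T), y)
        print(f"{name} cluster {j}: vector R2={r2_vec:.3f}, quadratic R2={r2_quad:.3f}")
```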

4. Conclusion

Several authors have advocated using PLS2 regression instead of PCA in external preference mapping. Their motivation is that, unlike PCA, PLS2 regression takes account of the hedonic data when determining the perceptual space upon which the liking scores are projected. Therefore, this space is likely to be more linked to the preference data than the perceptual space defined by PCA. As we have seen in this application, PLS regression is not recommended within the context of PrefMap when non-linear relationships with the sensory data are present. Further investigations should be undertaken to assess whether this remark can be generalized to other datasets. Notwithstanding, this leads us to consider that the determination of latent variables within the context of PrefMap should take account of quadratic effects. This argument motivates the jump from PLS2 regression to QPLS2 regression. Indeed, this latter method takes account not only of the linear relationships between sensory and preference data, but also of the quadratic and


interaction effects. Therefore, the quadratic model determined on the basis of the QPLS2 components is likely to explain more variation in the preference data than the model obtained from the PLS2 regression components, as shown on the basis of the coffee dataset. Another asset of QPLS2 regression is that it leads to outcomes that can be visualized and interpreted in a very similar way to the external preference mapping outcomes obtained either by means of PCA or PLS2 regression. More investigations are needed to further explore the properties of this method of analysis and confirm its advantages in comparison to alternative methods. More investigations are also needed to set up a clear strategy of analysis consisting of applying QPLS2 on the centroids of pre-defined segments or, alternatively, performing QPLS2 on the consumers from each segment.

Appendix A.

The purpose of this appendix is to outline the revised algorithm proposed by Verdun et al. (2012) to perform QPLS1 regression. Let us assume that at stage k, the latent variables (or components) t1, ..., tk−1 are already determined. We seek a component tk which is a linear combination of the variables in X. Formally, tk can be written as tk = Xw, where w is the vector of loadings (or weights, or coefficients) assumed to be of unit length (||w|| = 1). We aim at maximizing the following criterion:

a \mathrm{cov}^2(y, t_k) + (1-a) \mathrm{cov}^2(y, t_k^2) + (1-a) \mathrm{cov}^2(y, t_k t_1) + \dots + (1-a) \mathrm{cov}^2(y, t_k t_{k-1})     (4)

Following the notations and the developments given by Höskuldsson (1992), let x_i be the column vector composed of the elements in the ith row of matrix X. The ith element of y is denoted y_i. We can show (see Höskuldsson (1992) for more details) that the criterion given in (2) can be written in matrix form as:

maximize  f(w) = w^T B w + (w^T G w)^2   subject to ||w|| = 1     (5)

where:

B = a \Big(\sum_i y_i x_i\Big)\Big(\sum_i y_i x_i\Big)^T + (1-a)\Big\{\Big(\sum_i y_i t_{1i} x_i\Big)\Big(\sum_i y_i t_{1i} x_i\Big)^T + \dots + \Big(\sum_i y_i t_{k-1,i} x_i\Big)\Big(\sum_i y_i t_{k-1,i} x_i\Big)^T\Big\}

and

G = \sqrt{1-a}\, \{X^T \mathrm{diag}(y)\, X\}.
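A minimal sketch of how B and G can be assembled is given below (Python, not from the paper). It assumes column-centered X and y, previously determined components stored as columns of T_prev, and drops the constant factors of the covariances, as in the matrix form above.

```python
# Build the matrices B and G of criterion (5) from X, y, the previous components
# and the scaling factor a.
import numpy as np

def build_B_G(X, y, T_prev, a):
    v = X.T @ y                                    # sum_i y_i x_i
    B = a * np.outer(v, v)
    for j in range(T_prev.shape[1]):               # cross-product terms with t_j
        vj = X.T @ (y * T_prev[:, j])              # sum_i y_i t_{ji} x_i
        B += (1.0 - a) * np.outer(vj, vj)
    G = np.sqrt(1.0 - a) * (X.T @ np.diag(y) @ X)  # sqrt(1-a) * X^T diag(y) X
    return B, G
```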

On the basis of this criterion, Verdun et al. (2012) proposed the following iterative algorithm to determine the vector of loadings w:

1. Set a_0 = \sqrt{\lambda_{max}(B^T B) + 2 \lambda_{max}(G^T G)}, where \lambda_{max}(B^T B) (resp. \lambda_{max}(G^T G)) is the largest eigenvalue of matrix B^T B (resp. G^T G).
2. Choose an initial vector w^{(0)} of unit length.
3. Repeat until convergence (i.e. insignificant increase of criterion (4)), for n = 1, 2, ...:

   H(w^{(n)}) = B + 2 (w^{(n)T} G w^{(n)}) G + a_0 I,  where I is the identity matrix,

   w^{(n+1)} = H(w^{(n)}) w^{(n)} / || H(w^{(n)}) w^{(n)} ||,

where ||x|| denotes the length of vector x. At convergence, the vector of loadings w is set to w^{(n)} (the stationary vector of the iterative algorithm).
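The iteration above translates almost directly into code. The sketch below (Python, our own and not from the paper) takes the matrices B and G from the previous sketch and uses a simplified stopping rule based on the increase of criterion (5).

```python
# Iterative determination of the loading vector w: shift by a0*I, multiply and
# normalize until the criterion stops increasing.
import numpy as np

def qpls1_weights(B, G, tol=1e-10, max_iter=500):
    a0 = np.sqrt(np.linalg.eigvalsh(B.T @ B)[-1] + 2 * np.linalg.eigvalsh(G.T @ G)[-1])
    w = np.ones(B.shape[0]) / np.sqrt(B.shape[0])   # initial unit-length vector
    f = lambda v: v @ B @ v + (v @ G @ v) ** 2      # criterion (5)
    for _ in range(max_iter):
        H = B + 2 * (w @ G @ w) * G + a0 * np.eye(B.shape[0])
        w_new = H @ w
        w_new /= np.linalg.norm(w_new)
        if f(w_new) - f(w) < tol:                   # insignificant increase: converged
            return w_new
        w = w_new
    return w
```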

For the determination of subsequent latent variables, we propose to perform a quadratic regression of y on the latent variables determined up to stage k and to replace the variable y by the vector of residuals from this regression. This means that we are seeking new directions in the X space which explain (linear and quadratic) variations in y that have not been captured so far by the latent variables.

Appendix B.

The purpose of this appendix is to outline the algorithm of QPLS2 regression. At stage k, we seek to determine a component tk = Xw (linear combination of the variables in X) and a component u = Yc (linear combination of the variables in Y) so as to maximize the following criterion:

C_{QPLS2} = a \mathrm{cov}^2(u, t_k) + (1-a)\{ \mathrm{cov}^2(u, t_k^2) + \mathrm{cov}^2(u, t_k t_1) + \dots + \mathrm{cov}^2(u, t_k t_{k-1}) \}     (6)

It is clear that for a fixed vector u, the solution to this maximization problem is given by performing QPLS1 of u on X, and for this purpose the algorithm outlined in Appendix A can be used. As in the PLS2 algorithm, u can be initialized by the first column of matrix Y. Conversely, for a fixed vector tk, we can show that criterion (3) can be written in matrix form as c^T A c, where matrix A is given by:

A = a \Big(\sum_i t_{ki} Y_i\Big)\Big(\sum_i t_{ki} Y_i\Big)^T + (1-a)\Big(\sum_i t_{ki}^2 Y_i\Big)\Big(\sum_i t_{ki}^2 Y_i\Big)^T + (1-a)\Big(\sum_i t_{ki} t_{1i} Y_i\Big)\Big(\sum_i t_{ki} t_{1i} Y_i\Big)^T + \dots + (1-a)\Big(\sum_i t_{ki} t_{k-1,i} Y_i\Big)\Big(\sum_i t_{ki} t_{k-1,i} Y_i\Big)^T
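The update of u for a fixed t_k (Step 2 of the algorithm summarized below) amounts to building A and extracting its leading eigenvector. A minimal sketch (Python, not from the paper) is as follows; T_prev holds t_1, ..., t_{k-1} as columns.

```python
# Update of u for a fixed component t_k: build matrix A as above and take the
# eigenvector associated with its largest eigenvalue as c, giving u = Yc.
import numpy as np

def update_u(Y, t_k, T_prev, a):
    def outer_term(weights):
        v = Y.T @ weights                    # sum_i weights_i * Y_i
        return np.outer(v, v)
    A = a * outer_term(t_k) + (1.0 - a) * outer_term(t_k ** 2)
    for j in range(T_prev.shape[1]):         # cross-product terms with previous components
        A += (1.0 - a) * outer_term(t_k * T_prev[:, j])
    eigvals, eigvecs = np.linalg.eigh(A)     # A is symmetric
    c = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue
    return Y @ c, c
```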

In this notation, Y_i is the vector formed of the entries in the ith row of matrix Y and t_{ki} is the ith entry of vector t_k. It follows that the optimal vector c is given by the eigenvector of A associated with the largest eigenvalue. To sum up, the proposed algorithm runs as follows:

Step 0: Choose an initial vector u, which can be the first column or any column of matrix Y.
Step 1: Perform the QPLS1 algorithm of u on X (Appendix A) to determine component tk.
Step 2: Compute matrix A defined above, extract its eigenvector c associated with the largest eigenvalue, and set the latent variable associated with Y to u = Yc.
Step 3: Iterate the algorithm (Steps 1 and 2) until convergence (i.e. insignificant variation in criterion (3) between two successive iterations).

For the subsequent stages, we propose, as for QPLS1, to perform a quadratic regression of each variable in Y and to replace it by its residuals.

References

Baffi, G., Martin, E. B., & Morris, A. J. (1999). Non-linear projection to latent structures revisited: The quadratic PLS algorithm. Computers & Chemical Engineering, 23(3), 395–411.
Carroll, J. D. (1972). Individual differences and multidimensional scaling. In R. N. Shepard, A. K. Romney, & S. B. Nerlove (Eds.), Multidimensional scaling: Theory and applications in the behavioral sciences. New York: Academic Press.
Clementi, S., Cruciani, G., Curti, G., & Skagerberg, B. (1989). PLS response surface optimization: The CARSO procedure. Journal of Chemometrics, 3(3), 499–509.
Coombs, C. H. (1964). A theory of data. New York: Wiley.
Danzart, M. (1998). Quadratic model in preference mapping. In Proceedings of the 4th Sensometrics meeting. Copenhagen, Denmark, August.
de Kermadec, F. H., Durand, J. F., & Sabatier, R. (1997). Comparison between linear and nonlinear PLS methods to explain overall liking from sensory characteristics. Food Quality and Preference, 8(5/6), 395–402.
Durand, J. F., & Sabatier, R. (1997). Additive splines for partial least squares regression. Journal of the American Statistical Association, 92(440), 1546–1554.


ESN (1996). A European sensory and consumer study: A case study on coffee. Chipping Campden, UK: European Sensory Network ESN, CCFRA.
Faber, N. M., Mojet, J., & Poelman, A. A. M. (2003). Simple improvement of consumer fit in external preference mapping. Food Quality and Preference, 14, 455–461.
Greenhoff, K., & MacFie, H. J. H. (1994). Preference mapping in practice. In H. J. H. MacFie & D. M. H. Thompson (Eds.), Measurements of food preferences (pp. 137–166). Glasgow: Blackie Academic and Professional.
Helgesen, H., Solheim, R., & Næs, T. (1997). Consumer preference mapping of dry fermented lamb sausages. Food Quality and Preference, 8(2), 97–109.
Höskuldsson, A. (1992). Quadratic PLS regression. Journal of Chemometrics, 6(6), 307–334.
Husson, F., & Pagès, J. (2003). Nuage plan d'individus et variables supplémentaires. Revue de Statistique Appliquée, 4(LI), 83–93.
Martens, H., & Næs, T. (1989). Multivariate calibration. Chichester: John Wiley & Sons Ltd.
McEwan, J. A. (1996). Preference mapping for product optimization. In T. Næs & E. Risvik (Eds.), Multivariate analysis of data in sensory science (pp. 71–102). London: Elsevier Science.
Meullenet, J. F., Lovely, C., Threlfall, R., Morris, J. R., & Striegler, R. K. (2008). An ideal point density plot method for determining an optimal sensory profile for Muscadine grape juice. Food Quality and Preference, 19(2), 210–219.
Rousseau, B., & Ennis, D. M. (2008). An application of landscape segmentation analysis to blind and branded data. IFPress, 11(3), 2–3.

Rousseau, B., Ennis, D. M., & Rossi, F. (2011). Internal preference mapping and the issue of satiety. Food Quality and Preference, 24(1), 67–74.
Schiffman, S. S., Reynolds, M. L., & Young, F. W. (1981). Introduction to multidimensional scaling. New York: Academic Press.
Sveinsdóttir, K. N., Martinsdóttir, E. A., Green-Petersen, D., Hyldig, G., Schelvis, R., & Delahunty, C. (2009). Sensory characteristics of different cod products related to consumer preferences and attitudes. Food Quality and Preference, 20(2), 120–132.
Tenenhaus, M., Pagès, J., Ambroisine, L., & Guinot, C. (2005). PLS methodology to study relationships between hedonic judgements and product characteristics. Food Quality and Preference, 16(4), 315–325.
van Kleef, E., van Trijp, H. C. M., & Luning, P. (2006). Internal versus external preference analysis: An exploratory study on end-user evaluation. Food Quality and Preference, 17, 387–399.
Verdun, S., Hanafi, M., Cariou, V., & Qannari, E. M. (2012). Quadratic PLS1 regression revisited. Journal of Chemometrics, 26(7), 384–389.
Vigneau, E., Qannari, E. M., Punter, P. H., & Knoops, S. (2001). Segmentation of a panel of consumers using clustering of variables around latent directions of preference. Food Quality and Preference, 12(5/7), 359–363.
Wold, S., Kettaneh-Wold, N., & Skagerberg, B. (1989). Nonlinear PLS modeling. Chemometrics and Intelligent Laboratory Systems, 7(1/2), 53–65.
Yenket, R., Chambers, E., IV, & Adhikari, K. (2011). A comparison of seven preference mapping techniques using four software programs. Journal of Sensory Studies, 26(2), 135–150.