WebGL enabled smart avatar warping for body weight animated evolution

Georgios Bardis a,⁎, Yiannis Koumpouros a, Nikolaos Sideris a, Athanasios Voulodimos a, Nikolaos Doulamis b

a Department of Informatics and Computer Engineering, University of West Attica, Athens, Greece
b School of Rural and Surveying Engineering, National Technical University of Athens, Athens, Greece
Keywords: Image warping, Intelligent graphics, WebGL, Obesity

ABSTRACT
Obesity represents one of the most important health risks for a steadily increasing part of the population. The effort to prevent or fight it is often hindered by the fact that a considerable percentage of individuals perceive their body image as healthier than it actually is. It is, therefore, essential to enhance awareness of one's actual body figure and weight state. In the current work we attempt to achieve this goal through visual feedback on the individual's actual image, increasing the impact by animating the transition from the initial state to the future outcome. The proposed module commences from a current body figure image and, relying on lifestyle and dietary information, gradually warps it to reflect expected changes. The mechanism relies on pure WebGL/Javascript, animating the transition at a selectable pace and to the extent chosen by the user, based on triangulation of the body figure outline and warping of the resulting polygon and corresponding texture according to the input parameters. The WebGL/Javascript platform, in combination with the absence of external libraries, allows for a small footprint of the module while offering high portability due to its native support by most modern-day browsers.

1. Introduction

Computational or digital morphing is now almost thirty years old, at least in its photo-realistic form, considering its extensive use in the movie "Willow" in 1988, one of its earliest implementations in wide public entertainment [1,2]. Initially considered as the generation of intuitively believable intermediate images between a given source and a destination [3], morphing has been gradually facilitated and accelerated by the rapid progress in computer hardware, which, in turn, has offered fertile ground for innovative algorithmic approaches [4,5]. As a result, computational morphing mechanisms have been extended to include 3D meshes and point clouds as well as texture, offering increasingly efficient solutions to problems in entertainment, engineering, medicine, etc. [2,6]. Similarly to other tasks supported by digital means, a technique once exclusive to the state-of-the-art movie industry has now become a tool readily available to the interested user [8–10]. The aforementioned progress inevitably inspires the advancement of applications pertaining to custom user images and their dynamic adjustment, compliant to specific requirements, aiming to offer an innovative and intuitive visual outcome.

In this context, the current work employs image warping techniques to offer a morphing experience of an individual's body figure evolution along the course of a dietary/exercise plan, adjusted to the plan's parameters and their respective outcome, visualizing and animating an otherwise numerical and, thus, less intuitive progress. The typical indices for the presence or absence of obesity, as well as its intermediate degrees, e.g. the Body Mass Index (BMI), offer, in any case, only a partial view of the overall impact of a certain nutritional or training program. According to the World Health Organization, obesity has more than doubled since 1980, leading to over 600 million individuals, i.e. 13% of the adult population, being obese in 2014 [11]. Several studies have revealed that obese individuals perceive their body image as less obese than it actually is [12,13]. Indicatively, a study among 18-to-25-year-old women in reproductive clinics of Texas, USA showed that 23% of overweight women misperceived their weight as normal [14]. A survey among undergraduate students at the University for Development Studies, Ghana revealed that 20.6% of the participants misperceived their weight status and, among them, 78.9% underestimated it [13]. It has also been shown that the probability of overweight/obese adolescents actually perceiving themselves as overweight has declined by 29% between two groups of similar characteristics, interviewed during 1988–1994 and 2007–2012 respectively [12].

☆ The research leading to these results has received funding from the European Commission under grant agreement no. 691218.
⁎ Corresponding author. E-mail addresses: [email protected] (G. Bardis), [email protected] (Y. Koumpouros), [email protected] (N. Sideris), [email protected] (A. Voulodimos), [email protected] (N. Doulamis).

https://doi.org/10.1016/j.entcom.2019.100324 Received 15 January 2019; Received in revised form 10 September 2019; Accepted 12 October 2019 Available online 19 October 2019 1875-9521/ © 2019 Elsevier B.V. All rights reserved.


Obesity suits, designed to offer empathy with the obese version of oneself through the experience of the everyday reality of obesity, are the closest one may come to obesity simulation in the real world [15]. However, in the digital realm, obesity simulators are mostly limited to chart representations of the evolution of the corresponding indices, e.g. BMI (Body Mass Index, a function of height and weight), SBSI (Surface-based Body Shape Index, a function of body surface area, height, waist circumference and vertical trunk circumference), weight, body fat percentage, etc. [16]. The actual mapping of the evolution of these indices to the analogous gradual transformation of specific body parts, in order to achieve a realistic visual outcome, is far from trivial, with only a single example of a visualization attempt relying on a limited set of characteristics and values [17]. Nevertheless, an improved, custom-adjusted avatar could have considerable positive impact on the affected individuals' motivation and commitment, offering an intuitively accessible, highly relatable projection of their future personal image [18].

The difficulty of producing a realistic depiction also stems from the fact that the same sets of consecutive values for the aforementioned indices may be expressed differently for different individuals, thus implying a large degree of ambiguity in their interpretation towards a visual outcome. It is, nonetheless, feasible, based on recorded statistics and relevant anthropometric data, to offer an acceptably realistic visual body representation/alteration of the evolution of these indices. The approach employed herein relies on a set of control points defined on users' submitted images, used as markers for the image warping process. The points are pre-defined in terms of physiological representation, i.e. the body detail each represents, but custom-defined in terms of placement in each user-submitted image. Path interpolation is addressed by taking into account the impact of the aforementioned indices on body parts, and their relation to the plan or lifestyle followed. The implementation of the proposed approach relies on WebGL/JavaScript in order to offer maximum portability among platforms, adaptability to the overall application in present and future stages, and a minimal footprint in terms of requirements for processing and storage resources. The limited scope of the technical requirements was chosen as a means to lower the adoption threshold in order to include older and low-end devices.

The remainder of this paper is structured as follows: Section 2 provides a brief review of relevant work. In Section 3, the proposed system is presented, while in Section 4, an evaluation of the system is provided. Finally, Section 5 concludes the paper with a summary of findings as well as future directions.

2. Related work

2.1. Other approaches

A recent study has identified more than 28,000 mobile applications pertaining to the topics of diet, physical activity, recording/monitoring of exercise, calorie intake and body weight. However, none of these apps mentioned specifically "obesity-prevention" or "prevent weight-gain" [19]. Even the most popular applications related to this purpose [20–23], despite combining a series of key elements, fail to incorporate a visualization of the relevant indices and data in the form of an actual body figure representation [24,25]. Implementations involving simulations that could be used against obesity are restricted to chart-based tools, similar to the one presented in Fig. 1(a). Such tools provide a slightly enhanced view of the evolution of the related indices [16], according to a proposed Dynamic Mathematical Model. The latter relies on a number of laboratory measured quantities [26]. An alternative approach employs agent-based modeling, aiming to identify and assess the influence of several environmental factors contributing to child obesity [27].

The "Model my Diet" site attempts to offer body weight gain/loss visualization, allowing parameterization of pre-defined images based on a limited set of characteristics, offering detailed categories for height and weight but only broad categories of age, body shape and bust size [17]. The approach relies on visual appeal yet demonstrates a limited degree of realism. This is mainly due to the lack of actual user images, as well as to limited or no consideration of characteristics contributing to a generic body figure state. Moreover, despite using a 3D computer generated human model, the effect of the parameters seems uniform, not utilizing the full potential of the three-dimensional source material. Finally, lifestyle and medical parameters are not taken into account, thus preventing a finer interpretation of the input.

A more promising approach, although not concentrating on weight gain/loss, has been presented as a prototype, attempting to fully exploit the benefits of 3D scanning and large dataset availability [28,29]. The scope is considerably wider, relying on anthropometric data obtained from more than 10,000 individuals. It is expected to offer a smartphone application which will suggest to the user a body figure outline that his/her custom image must fit into. The user is expected to submit the custom image through the device's camera, adjust it to the proposed outline and approve it for further use; part of the interface is presented in Fig. 1(b).

An approach using an avatar of varying weight is presented in [30]. The focus is on player motivation in the context of a game aimed at promoting exercise (an exergame). The objective is to direct the player to physical activity through avatar alteration exhibiting higher or lower weight in response to idleness or movement, respectively. The scope does not include any reference to the relevant indices (BMI, WHR, etc.) and the authors aim for an intuitive rather than realistic depiction, simply adding a photo of the player's face as texture to an otherwise pre-defined generic body avatar.

Dedicated avatar reshaping approaches focus principally on realism and successful rigging, while typically inducing high processing costs. The work in [31,32] uses previously scanned models for training machine learning mechanisms which allow the generation of realistic alternative poses of parametric body shapes. The emphasis is on the generated model's realism, in terms of the visual result as well as posing and motion, without any concern regarding the accurate mapping of a certain individual's dietary habits and energy consumption to the corresponding body shape change. Similarly, with respect to the latter point, the work in [33] relies on a body shape space to morph human scanned avatars to variable heights and body figures. The outcome is subsequently automatically rigged to allow for avatar animation while preserving texture and individual characteristics of the original scan. Again, the approach does not incorporate user profile information for the morphing operation, since dietary and exercise habits do not form part of the relevant interactive process. Moreover, due to the complexity of the models necessary to cover its scope, the computational requirements' footprint of the approach is significantly larger than that of the current work. The pipeline presented in [34] includes 3D scanning of real persons and subsequent reconstruction of avatars.
The latter are then mapped to a skeletal rig in order to allow for smooth integration in standard game engines and VR frameworks. The scope of the approach does not include a morphing or reshaping stage that could benefit from lifestyle and dietary user profile information. Overall, the combination presented in the current work, i.e. (a) an animated 2D avatar featuring a custom user image, (b) reflecting weight gain/loss guided by WHO indices based on anthropometric, lifestyle and dietary data, (c) in a small footprint application appropriate even for low-end devices with no need for installation of plug-ins or libraries, appears to occupy a niche not present in the current literature.


2.2. Morphing and warping

The former term, in the field of Computer Graphics, initially signified the transformation, i.e. the meta-morph-osis, of a 2D source image to a target one, through a series of automatically generated images, intuitively acceptable as intermediate stages of this transformation.


Fig. 1. (a) Body Weight Planner Result [16], (b) Suggested smartphone application prototype [28].

A basic method to achieve this, i.e. mesh warping [1], relies on the presence of a mesh in both source and target images and proceeds by (a) identifying the position of the mesh vertices (or landmarks) as common features in both images, (b) defining a path for each of these vertices connecting its locations in the original image and the target one, (c) generating intermediate warped images for both the source and destination images, each containing the vertices in their intermediate locations along their paths, and (d) blending each pair of intermediate images to yield a single image by coloring each pixel in a weighted manner from both contributing images.
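As a rough illustration of steps (b)–(d), the following sketch linearly interpolates landmark positions and cross-dissolves two already-warped images at an intermediate time t ∈ [0, 1]. It is only a sketch under our own naming, not code from the cited works.

```javascript
// Steps (b)/(c): landmark positions at intermediate time t along linear paths.
function interpolateLandmarks(srcPoints, dstPoints, t) {
  // srcPoints/dstPoints: arrays of {x, y} marking the same features in both images.
  return srcPoints.map((p, i) => ({
    x: (1 - t) * p.x + t * dstPoints[i].x,
    y: (1 - t) * p.y + t * dstPoints[i].y,
  }));
}

// Step (d): cross-dissolve two equally sized RGBA pixel arrays
// (e.g. canvas ImageData.data of the warped source and destination images).
function crossDissolve(warpedSrcPixels, warpedDstPixels, t) {
  const out = new Uint8ClampedArray(warpedSrcPixels.length);
  for (let i = 0; i < out.length; i++) {
    out[i] = Math.round((1 - t) * warpedSrcPixels[i] + t * warpedDstPixels[i]);
  }
  return out;
}
```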


A range of methods have evolved altering one or more of the aforementioned steps. Field morphing, for example, relinquishes the requirement of a mesh in the participating images, replacing it by a set of curves signifying common features to be mapped between the images and influencing the rest of the image pixels according to their distance from the features [3]. Scattered data interpolation generalizes the method, resolving lines and curves to points and generating surfaces to blend for the intermediate stages [4]. Regenerative morphing addresses the problem as an optimization one, employing an objective function that encompasses source similarity and temporal coherence [7]. In any case, image warping forms an intrinsic part of the morphing process and is now considered a special case, since the term morphing has been extended to imply any transformation of a source image or model to a target one, through intermediate, intuitively acceptable images/models [6]. The applications of morphing under its extended interpretation are numerous, including animation, medicine, engineering, etc. [2,6]. Morphing functionality is currently readily available either through dedicated applications or in photo/video processing software [8–10].

3. Proposed system

3.1. Functionality

The proposed system aims to allow the user to foresee the evolution of his/her body image according to a given diet plan and/or exercise program and in compliance with the initial values of the indices connected with body weight. The functionality is based upon input of a user's data relevant to the weight control and monitoring process, i.e. height, weight, waist circumference, etc. In addition to this information, the user submits the values for certain parameters connected with the problem, e.g. current and planned daily energy input, current and planned daily exercise program or habits, etc.

In order to visualize the anticipated results of the aforementioned information, a body image conforming to specific requirements is employed and accordingly adapted. This image is either submitted by the user or originates from a set of pre-defined images available in the environment. In the former case, where the user submits a custom image, he/she has to invoke an additional interface component to suggest the position of certain key features and sketch, through a limited set of points, the outline of the body figure in the submitted image. In the latter case, where a pre-defined image is used, the additional input is optional: the choice of the image is followed by the user's anthropometric, dietary and exercise information with respect to features and body figure outline in order to allow adaptation to each individual user's characteristics.

The smart avatar module, which is the focus of the current work, relies on a set of units, each pertaining to a subset of the tasks required in order to achieve the intended functionality, namely:

• Image Import and Initialization unit
• User Anthropometric Profile unit
• Transformation unit
• Interface and Visualization unit

In the following we elaborate on the role of each unit and its interconnection with the rest, within the scope of the module as well as the overall application. The overall structure of the module in regard to the aforementioned functionality appears in Fig. 2.

3.2. Platform choice rationale

Due to the widespread nature of the obesity issue addressed, our aim has been to make the application accessible to the widest possible range of devices. While this suggested a low level of technical complexity, it was counterbalanced by the desired functionality, discussed in the previous section. In particular, the chosen platform had to be able to support the visualization and realistic animated transformation of a user-submitted image. Other than that, the aim was the lowest possible requirements in terms of computational capability, network availability as well as storage capacity. The first part of the aforementioned scope suggested not only the inclusion of mobile devices in the target group of applicable hardware but also ensuring support for the low-end ones. In the same spirit, given the requirement for a minimum but existent capability of realistic graphics representation, it was decided to avoid a stand-alone application. Instead, we chose to take advantage of software already existing in the vast majority of the devices, to gain access to a quite powerful graphics engine, namely the available web browser offering access to the WebGL API.

WebGL represents the result of the need to access two- and three-dimensional graphical content on the Web natively, without having to resort to plug-ins. It is the fruit of the cooperation of major web browser vendors – namely Apple (Safari), Google (Chrome), Microsoft (Internet Explorer/Edge), and Mozilla (Firefox) – under the umbrella of the WebGL Working Group, in turn part of the Khronos Group [35]. Technically, it is a specification for a low level 3D graphics API which is implemented by the aforementioned vendors in their corresponding browsers, starting from 2010 for Chrome, 2011 for Firefox, 2013 for Internet Explorer, 2014 for Safari and 2015 for Edge, as well as in their mobile versions. Other popular browsers like Opera or Vivaldi are also compliant with the specification. WebGL is a slight variation of OpenGL ES (Embedded Systems), which in turn is a subset of OpenGL, the open standard API for 3D graphics applications. The WebGL API is accessible through Javascript/HTML and nothing additional is required in a WebGL enabled browser [36]. Any other solution offering 3D graphics on the web requires either a plug-in (X3D/VRML, Stage3D/Flash, Silverlight) or a Javascript library available during execution (X3DOM, three.js). As a side effect, it also offers an extremely viable alternative to stand-alone graphics applications.

In the context of the current work, the combination of Javascript/HTML/WebGL has offered a platform (a) with minimal computational, network and memory footprint, (b) readily available in the majority of users' devices and (c) offering the desired capability of displaying and processing the user's custom images. Moreover, given the wide and active support by major stakeholders, as well as the penetration to the wide public of all components of the combination, the platform appears as reasonably future-proof as possible. Finally, due to the inherent graphical capabilities of WebGL, future enhancements of the graphical part of the application (e.g. 3D avatars and transformations) are feasible within the scope of the same unaltered platform.
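In practice, the only runtime prerequisite of this platform choice is a WebGL-capable browser. A minimal sketch of the usual context acquisition, with a fallback to the experimental prefix used by some older browsers, follows (illustrative code, not the module's actual implementation):

```javascript
// Obtain a WebGL rendering context from a canvas, or fail gracefully.
function getGL(canvas) {
  const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  if (!gl) {
    throw new Error('WebGL is not supported by this browser/device.');
  }
  return gl;
}

const canvas = document.createElement('canvas');
const gl = getGL(canvas); // ready for shader, buffer and texture setup
```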


3.3. Architecture

The functionality presented in Section 3.1 is implemented as a module within a wider web-based responsive application, aiming to motivate and support users in their struggle against overweight and obesity. The application combines the benefits of the realistic visual feedback offered by a warping avatar, within an integrated environment incorporating additional modules. These modules are responsible for communication with nutrition and exercise experts as well as individuals with common goals. Their functionality includes monitoring progress in textual and chart-based forms combined with goal setting and virtual rewards for accomplishing them. The functional features also comprise maintenance of profile data and calendar-based information while providing news and feedback from external sources. For the remainder of this section, we focus on the implementation specifics of the module units, discussing the issues addressed in order to effectuate the desired functionality.


3.3.1. Image Import and Initialization

The specific unit is responsible for allowing the user to submit a custom image of his/her body figure. The stance has to be similar, but not rigidly identical, to what is known as the anatomical position [37], which refers to a standing position with open arms and uncrossed legs.


Fig. 2. Module functionality and corresponding components.

In particular, before upload, the user is explicitly made aware of specific characteristics that the image, as well as the body stance it depicts, has to fulfil:

• It has to depict the entire body from a front view.
• Any clothing present has to be form-fitting, i.e. closely follow the outline of the body, in order for the results to be plausible.
• The posture adopted by the individual in the picture must be upright standing with extended arms.
• It must not comprise body part overlaps (e.g. crossed legs, arm over chest, hand in pocket, etc.).

As soon as the image file is specified, it is imported and presented to the user. The next step is to assign specific pre-requisite features to image pixels, largely based on the homology concept [38] and the list provided by the EUROFIT reference body template and skeleton [39]. These features, summarized in Table 1, aid the subsequent image alteration process. The feature-to-pixel mapping is achieved through an interactive environment where the user is requested to click on the appropriate image location representing a certain feature at a time. Once this process is complete, the user is requested to trace a rough outline of the body figure within the image. The coordinates gathered from this step will be used to triangulate the image and subsequently allow its warping according to the user's profile parameters. A continuous input method for the body outline was initially considered, based on real-time recording of free mouse movement, yielding degraded results.

Table 1 Custom image features grouped by location.
Feature(s) | Informal name(s)
Vertex | Head top
Acromion L, Acromion R | Left and right shoulder
Armpit L, Armpit R |
Palm L, Palm R |
Waist Girth L, Waist Girth R | Waist extremes
Crotch | Inner leg connection point
Knee L, Knee R |
Hip L, Hip R |
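A minimal sketch of how the feature-to-pixel mapping and the subsequent outline tracing could be captured with click events is given below, assuming an imageCanvas element displaying the uploaded picture (identifiers and the abbreviated feature list are illustrative, not the module's actual code):

```javascript
// Phase 1: one click per key feature of Table 1; Phase 2: clicks trace the outline.
const FEATURES = ['Vertex', 'Acromion L', 'Acromion R', 'Armpit L', 'Armpit R' /* ... */];
const featurePoints = {};   // feature name -> {x, y} in image coordinates
const outline = [];         // ordered body outline vertices

let featureIndex = 0;
imageCanvas.addEventListener('click', (ev) => {
  const rect = imageCanvas.getBoundingClientRect();
  const p = { x: ev.clientX - rect.left, y: ev.clientY - rect.top };
  if (featureIndex < FEATURES.length) {
    featurePoints[FEATURES[featureIndex++]] = p;   // assign the next expected feature
  } else {
    outline.push(p);                               // append to the traced outline
  }
});
```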

3.3.2. Triangulation

Once the user has submitted an image, further processing is required in order to transform the suggested set of points into a mesh. The user traces the proposed outline in a sequential manner, indicating a set of edges connecting the selected points. Due to the nature of the human body figure and the requested stance for the input image, no intersections are necessary for its contour, at least at an accuracy deemed adequate for the scope of this work. Based on these, the user-defined outline can be considered as one corresponding to a planar straight line graph (PSLG). Given the fact that WebGL natively supports triangular meshes, we have produced a constrained Delaunay triangulation of the input PSLG, preserving the implied edges of the outline without generating additional vertices [40]. Given the low complexity of the graph, the triangulation task is not taxing in terms of module performance.

Our aim is to visualize the image by applying it as texture on the triangulated mesh through the WebGL API. This allows for the warping of the image in a controlled yet computationally flexible manner, by deliberately adjusting the position of the points participating in the outline. We have chosen to pay the price of thin triangles partaking in the mesh, in exchange for higher flexibility and control in the warping potential of the picture. In particular, the visual result is achieved by gradually altering the position of selected vertices, i.e. those participating in the representation of the body outline, according to the interpretation and mapping of the dietary and exercise parameters. However, this is executed while keeping the references of these vertices to the corresponding pixels of the input image/texture unaltered. In this manner, body parts that may expand or contract during the process will rely on the initial corresponding visual information, deformed up to a degree, to achieve a plausible visualization of their new state.

The mesh would have to be extended along the borders of the image if we wanted to preserve the background around the body figure and warp it accordingly. We have chosen to relinquish the background of the submitted image for the benefit of having a clear outline to triangulate. An advantageous side-effect of this choice is a generally more realistic rendering, in the sense that anything other than a plain background in the original image would seem unnatural when warped, even if the body itself sustained plausibility due to its relatively homogeneous texture. We instead use another image as a background layer to retain realism through consistency of the background, in contrast to and during the body figure warping.
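A condensed sketch of this idea follows: the triangulated outline is drawn as a textured mesh whose texture coordinates are uploaded once and never change, while vertex positions are rewritten for every warping step (buffer handling only, shader setup omitted; identifiers are illustrative, not the module's actual code):

```javascript
// Positions are dynamic (rewritten while warping), texture coordinates are static,
// so every triangle keeps sampling the same pixels of the original photograph.
function createWarpableMesh(gl, positions, texCoords) {
  const posBuf = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, posBuf);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.DYNAMIC_DRAW);

  const texBuf = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, texBuf);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(texCoords), gl.STATIC_DRAW);

  return { posBuf, texBuf, vertexCount: positions.length / 2 };
}

// Called on every animation step with the adjusted outline vertex positions.
function updatePositions(gl, mesh, newPositions) {
  gl.bindBuffer(gl.ARRAY_BUFFER, mesh.posBuf);
  gl.bufferSubData(gl.ARRAY_BUFFER, 0, new Float32Array(newPositions));
}
```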



Fig. 3. (a) Textured user-defined body figure and (b) plain vertex color user-defined body figure outline (Human image Source [41]).

The results of the user's input, after triangulation of the submitted vertex sequence, appear in Fig. 3. The fragmented, paper-cut-like outline of the body figure is evident mainly in Fig. 3(a), stemming from the inaccuracy of the input method used (consecutive mouse clicks), whereas Fig. 3(b) presents the mesh with plain colored vertices and accordingly shaded triangles. The background of the original picture also appears near the edges of the figure. An original image with no particular human characteristics was chosen as an example here in order to stress the aforementioned defects, despite lacking realism (admittedly, the specific background image also does not contribute to realism!). An alternative input method was initially considered and evaluated by a number of users. It was based on real-time recording of free mouse movement, as opposed to clicking, for tracing the outline, and it was tested with variable resolutions. The results were of considerably decreased accuracy, evidently due to the continuous nature of the input action, which suffered from the unintentional erratic movement of the human hand.


3.3.3. User profile

In order to offer a realistic alteration of an appropriately selected pre-defined image or the custom submitted one, a number of parameters pertaining to the specific individual have to be input. The user's profile details that are relevant to the smart avatar described herein include (a) anthropometric information, (b) basic personal information and (c) dietary plan and exercise information. The anthropometric parameters include a set of body measurements connected to one or more relevant indices, namely:

• Height
• Weight
• Waist circumference
• Hip circumference
• Chest circumference
• Bicep circumference
• Thigh circumference

Basic personal information comprises age and gender. At this stage, the dietary plan is directly submitted by the user as the total amount of energy intake per day in calories. Energy consumption is submitted either explicitly, as a number, or implicitly, through the user's input concerning intuitive overall energy expenditure per day, based on lifestyle, as defined by FAO/WHO/UNU [42]. This information is available in the profile and used for body figure adjustment.

The aforementioned quantities contribute to the calculation of indices including BMI (Body Mass Index) and WHR (Waist to Hip Ratio), placing the current state of the individual in the appropriate category regarding body weight (underweight/normal/overweight/obese 1 or 2). Moreover, they contribute to the calculation of the anticipated body weight evolution, which guides the corresponding body figure alteration. This mapping of body weight to body features has been based upon anthropometric statistics recorded in relevant studies [43]. According to this study, reduction in energy intake (i.e. dieting) is generally more efficient towards BMI reduction (i.e. weight reduction, since height is constant) than increased energy consumption (i.e. exercise). In the case of men, and concentrating on the reduction of WHR, the opposite is true, implying that body figure, which is of interest herein, is more affected by exercise than by diet for male individuals [44]. In other words, a specific diet plan may have the same effect on a man's BMI as a specific exercise plan, but his WHR will benefit more (i.e. will be reduced more) from the exercise plan compared to the BMI-equivalent diet plan. For women, there is no systematic evidence of a stronger connection of WHR reduction with increased energy consumption vs. reduced intake. The parameters available as input for the user profile module are summarized in Table 2.

The assessment of the recommended energy intake (REI) to maintain a healthy BMI, in accordance with lifestyle, yields a range instead of a specific value, due to the inherent differences among individuals from different population groups as well as the range of the healthy BMI per se. The specifics of the calculations require the determination of the Physical Activity Level (PAL), which is defined as the ratio of the Total Energy Expenditure (TEE) over the Basal Metabolic Rate (BMR), usually over the duration of a typical day. It is, therefore, enough to request or assess the PAL of an individual, according to daily professional and leisure activity, and combine it with the BMR to obtain the required energy intake.

Table 2 Profile input parameters.
Age Groups | 1: ≥60, 2: [30, 60), 3: [18, 30)
Body Weight Groups (by BMI) | Obese 2: ≥35, Obese 1: [30, 35), Overweight: [25, 30), Normal: [18.5, 25), Underweight: [0, 18.5)
Energy Consumption (implicit) | 1: Idle, 2: Light Activity, 3: Moderate Activity, 4: Vigorous Activity
Energy Consumption (explicit) | Amount in calories
Energy Intake | Amount in calories


The PAL values that can be sustained for a long period of time by free-living adult populations range from about 1.40 to 2.40, this range being usually subdivided into the three sub-ranges appearing in Table 3. The actual calculation commences from the estimation of the BMR (in kcal/day) according to gender and age, as shown in Table 4, concentrating here only on adults, which is also the main focus of the relevant study [42]. The BMR value is subsequently multiplied by the PAL suggested by the individual's lifestyle to yield the total energy expenditure, a value that indicates the daily energy necessary to maintain current weight. A healthy BMI is accepted to be between 18.5 and less than 25 which, given the height of an individual, implies a range of healthy weights. It is this range of healthy weights that, through the corresponding BMR, leads to a range of recommended energy intake.

Table 3 Physical Activity Level (PAL) index values.
Category | PAL value
Sedentary or light activity lifestyle | 1.40–1.69
Active or moderately active lifestyle | 1.70–1.99
Vigorous or vigorously active lifestyle | 2.00–2.40

Table 4 Basal Metabolic Rate (BMR) formula according to age and gender.
Age | Male (weight in kg, result in kcal/day) | Female (weight in kg, result in kcal/day)
18–30 | 15.057⋅weight + 692.2 | 14.818⋅weight + 486.6
30–60 | 11.472⋅weight + 873.1 | 8.126⋅weight + 845.6
≥60 | 11.711⋅weight + 587.7 | 9.082⋅weight + 658.5

In particular, if a is age, g is gender, w is weight (in kg), h is height (in m), and f is the function of BMR calculation according to the abovementioned table, we follow the sequence:

bmi = w / h²
w = bmi ⋅ h²
bmr = f(a, g, w)
rei = pal ⋅ bmr

Namely, for the range of healthy BMIs, 18.5 up to 24.9, and given the height of an individual, we may calculate a range of healthy weights. Using this range and according to the individual's gender and age, we obtain the corresponding BMR. Finally, given the individual's lifestyle, which implies the respective PAL index, we may calculate the recommended energy intake. For example, for a 25-year-old male individual of height 1.80 m we obtain a range of healthy weights from ~60 kg (for BMI 18.5) to ~81 kg (for BMI 24.9). This, in turn, leads to a BMR range between 1595 and 1907 kcal/day. If we assume an active or moderately active lifestyle (e.g. PAL 1.85), the calculation eventually yields a recommended energy intake between 2950 and 3528 kcal/day. Energy intake within this range maintains weight, if the latter already falls within the corresponding recommended range. In any case, the difference between the energy intake necessary to maintain current weight and the actual energy intake suggests the body weight alteration trend, at a rate connected to age, gender and lifestyle.

It is worth noting an intricacy regarding visualization, further detailed in the next section: the individual's current BMI, revealing his/her current status (underweight-normal-overweight-obese), has to be combined with the ideal energy intake to correctly depict body change. The ideal energy intake, adjusted for any specific age and height in order to maintain a healthy BMI (practically, a healthy weight), is calculated as described above. The combination with the current BMI is necessary because, regarding the task of visualizing body alteration, as well as the expected future body state, an already obese individual who is practicing a high energy diet (i.e. receiving a lot of calories daily) is expected to experience less alteration to his/her body outline when compared to an individual of normal weight practicing the same high energy diet. Moreover, body weight alteration leads to a modified influence of an otherwise steady energy intake since, for example, a normal weight individual receiving a high energy intake will eventually gain weight. Hence, the same high energy intake will gradually tend to maintain, instead of increasing, the already high weight. Therefore, the contribution of the actual energy intake against the recommended ones, as well as the actual BMI against the recommended ones, is also a part of the visualization and warping process, detailed in the next section.
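A minimal sketch of this calculation sequence, reproducing the worked example above, is given below (the coefficients are those of Table 4; function names are ours, not the module's actual code):

```javascript
// BMR according to Table 4 (adults only), in kcal/day.
function bmr(age, gender, weightKg) {
  const male = [[18, 15.057, 692.2], [30, 11.472, 873.1], [60, 11.711, 587.7]];
  const female = [[18, 14.818, 486.6], [30, 8.126, 845.6], [60, 9.082, 658.5]];
  const rows = gender === 'male' ? male : female;
  const [, a, b] = rows.filter(([lowerAge]) => age >= lowerAge).pop(); // matching age band
  return a * weightKg + b;
}

// Recommended energy intake range for a healthy BMI (18.5–24.9), following
// the sequence bmi -> w -> bmr -> rei described above.
function reiRange(age, gender, heightM, pal) {
  const wLow = 18.5 * heightM * heightM;
  const wHigh = 24.9 * heightM * heightM;
  return [pal * bmr(age, gender, wLow), pal * bmr(age, gender, wHigh)];
}

// 25-year-old male, 1.80 m, active/moderately active lifestyle (PAL 1.85):
// prints approximately [2950, 3528] kcal/day, matching the example above.
console.log(reiRange(25, 'male', 1.80, 1.85).map(Math.round));
```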


Fig. 4. (a) Initial state, (b), (c) high energy diet/low exercise after 3 and 6 months, (d), (e) low energy diet/low exercise after 3 and 6 months (Human image source [41], Background image source [45]).


Fig. 5. Distribution of evaluation participants with respect to body weight, age, gender and device used.

Table 5 Statistic indices.
Answer population: A = {a1, a2, …, aN}, N = |A|, with a1 ≤ a2 ≤ … ≤ aN
Mean: μ = (1/N) ⋅ Σ_{i=1..N} ai
Median: M = ak, k = (N+1)/2, if N = 2n+1; M = (ak + ak+1)/2, k = N/2, if N = 2n
Variance: σ² = (1/N) ⋅ Σ_{i=1..N} (ai − μ)²
Standard Deviation: σ = √[(1/N) ⋅ Σ_{i=1..N} (ai − μ)²]

of the visualization and warping process, detailed in the next section. 3.3.4. Warping and animation After the mesh has been triangulated and the user’s profile details are available, the next tasks are (a) to isolate the vertices – also implying the corresponding triangles – which will be affected by respective body figure changes and (b) to apply these changes customized accordingly as implied by the individual's characteristics included in his/her user profile. 8


Fig. 6. Results of questions 1–4 concerning the impact and realism achieved.


Table 6 Statistic indices for questions 1–4.
| Mean | Median | Variance | Standard deviation
Q1 | 3.67 | 4 | 0.89 | 0.94
Q2 | 3.29 | 3 | 1.35 | 1.16
Q3 | 3.57 | 4 | 1.10 | 1.05
Q4 | 3.05 | 3 | 1.00 | 1.00

The identification of the areas to be adjusted is assisted by the features indicated by the user in the previous stage. For example, the location of the left armpit, the left waist extreme and the left hip extreme allow isolation and grouping of the vertices belonging to the left edge of the main body. The features mentioned in the aforementioned example also offer the capability to gradate the influence of body weight gain/loss in order to achieve a realistic deformation of the body outline and, thus, the corresponding warping of the image texture. Key feature vertices, like the top of the head or the crotch, allow distinction of interesting groups of vertices between, e.g., the left and right parts of the body.

Relying on checks of x and y coordinates against such key features, employing the relatively straightforward relation between vertex coordinates for purposes of identification and subsequent alteration, would be useful in the general case of a point cloud or a non-planar graph. However, given the nature of the body figure contour (a simple, non-convex polygon), all vertices falling between, for example, the left armpit and left hip belong to the same chain of the overall polygon. It becomes evident that the identification check that has to take place during the warping process benefits from the vertex sequence inherent in the polygon. The acceleration stems from the graph planarity and from the fact that each vertex belongs to, at most, one group, altered in a specific manner, e.g. the left waist extreme and its neighbors fall between the left armpit and the left hip. Each group consists of consecutive vertices, all participating in the alteration. It is, therefore, efficient to identify these groups based on their order in the vertex sequence. The only hindrance to this idea is the fact that the user-submitted key features do not necessarily coincide exactly with the vertices in the body figure polygon, since the latter is submitted as a subsequent step. In order to overcome this, we transparently replace the user-suggested feature points with their closest counterparts in the body figure polygon, retaining certain geometric properties, e.g. being local maxima/minima. In this way, the key features preserve their identity and are also used as markers in the polygon chain of vertices.
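A minimal sketch of the two operations described above, i.e. snapping a user-clicked feature to its closest outline vertex and enumerating the chain of vertices between two such delimiters, follows (identifiers are illustrative, not the module's actual code):

```javascript
// Snap a clicked feature point to the index of the closest outline vertex.
function snapToOutline(outline, point) {
  let bestIndex = 0, bestDist = Infinity;
  outline.forEach((v, i) => {
    const d = (v.x - point.x) ** 2 + (v.y - point.y) ** 2;
    if (d < bestDist) { bestDist = d; bestIndex = i; }
  });
  return bestIndex;
}

// A body chain is simply the run of outline indices strictly between two delimiters.
function chainBetween(fromIndex, toIndex) {
  const [a, b] = fromIndex <= toIndex ? [fromIndex, toIndex] : [toIndex, fromIndex];
  const indices = [];
  for (let i = a + 1; i < b; i++) indices.push(i);
  return indices;
}
```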

Fig. 7. Results of questions 5–7 concerning the functionality and interface (free text questions not included).



Table 7 Statistic indices for questions 5–8.
| Mean | Median | Variance | Standard deviation
Q5 | 3.10 | 3 | 1.32 | 1.15
Q6 | 2.90 | 3 | 1.51 | 1.23
Q7 {1,3,5} | 4.33 | 5 | 1.27 | 1.13
Q8 {3,4,5} | 4.73 | 5 | 0.20 | 0.44


These markers play the role of group delimiters or separators, offering instant access to the extremities of vertex groups and a convenient way of enumerating the vertices falling between two delimiter feature points.

The transition itself is dictated by the anthropometric data submitted by the user and the details of the current or future dietary and exercise plan he/she follows or intends to follow. During everyday use of the application, the initial user-submitted image is gradually warped to simulate the gradual outcome of the aforementioned energy information. Alternatively, the user may select a specific time period in the future to yield a larger scale change and foresee his/her anticipated body form. In this case, the transformation is animated at a selectable rate, gradually adjusting each vertex towards its calculated final position. In any case, the overall energy information is translated into targeted changes connected with specific body features [43,44]. To be visualized, these changes are translated into fine adjustments of the coordinates of the relevant vertices in the body figure outline. The alterations are variable and explicit for each vertex, depending on the point's distance from the corresponding key features and the response of the relevant body parts to the energy intake/consumption balance and the calculated weight changes, as suggested by the corresponding user profile information.

Depending on the body outline chain considered, a modular sinusoid function serves as the basis to achieve the effect of a varied response to weight change for each individual vertex within the same group, in an efficient and realistic manner. The function has been appropriately parameterized to allow customization for each user, by taking into account the proximity to the expected outcome of the contour alteration for individual body areas, through varied contributions of the examined parameters to the overall visual result. The individual custom parameter values for each user are calculated upon the user's body characteristics, as implied by the original image contour and the explicitly submitted body indices and metrics. Moreover, the body outline evolution is accompanied by the estimated weight alteration which, in turn, yields modified parameter values taken into account for the remaining course of the animation. An instance of the function for the armpit-to-hip body outline appears below. A set of contributing factors and metrics are calculated based on the user's profile and outline and contribute to the image warping, namely:

• fat factor f: an adjusted measure of the distance of the user's current weight from the recommended range (based on BMI) for his/her height
• energy intake factor e: an adjusted measure of the distance of the user's current energy intake against the recommended range (based on BMR and PAL) for his/her height
• warp factor w: a measure of the expected alteration of the user's current body outline
• curve factor c(p, b): the custom degree of warping for each point p = (xp, yp) belonging to a specific body outline chain b defined by extreme points rb and sb; this function incorporates custom adjustments stemming from each user's individual characteristics
• body chain tuning t(b): the custom degree of adjustment needed for similar but not equivalent body chains, typically left vs. right side, also covering asymmetries in the submitted outlines
• animation denominator d: controls the evolution of the change in terms of warping rate

Based on the above, we have:

w = f ⋅ e
p′ = p ⋅ (w / d) ⋅ cos(t(b) ⋅ c(p, b))

The latter represents the basic warping step for the coordinates of a point p over time, which is repeatedly applied to achieve the alteration for the desired time period and is adjusted with respect to the animation rate.
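A minimal sketch of how such a warping step could be applied to one body chain per animation frame follows. The factor values are placeholders (in the module they derive from the user profile), and applying the step as a small proportional displacement of each vertex is our own reading of its repeated application, not the module's actual code:

```javascript
// One warping step for a single body chain: each vertex is displaced
// proportionally to (w / d) * cos(t(b) * c(p, b)).
function warpChainStep(positions, chainIndices, w, d, tOfChain, curveFactor) {
  chainIndices.forEach((i) => {
    const p = positions[i];                                  // {x, y}
    const step = (w / d) * Math.cos(tOfChain * curveFactor(p));
    p.x += p.x * step;
    p.y += p.y * step;
  });
}
```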

An example of the module's results appears in Fig. 4, where the evolution of the body figure for an individual following a high and a low energy diet, combined with minimal exercise, is shown after 3 and 6 months respectively.

4. Module evaluation

The module has undergone an evaluation process by a number of volunteers of various ages and body states. The aim was to assess the efficiency of the approach in terms of its impact towards initiating or maintaining a healthier lifestyle, as well as regarding the extent to which it met the users' expectations with respect to the visual outcome.

Fig. 8. Results of questions 11–12 concerning the technical aspects.


4.1. Evaluation process

Table 8 Statistic indices for questions 11–12.
| Mean | Median | Variance | Standard deviation
Q11 | 4.52 | 5 | 0.73 | 0.85
Q12 | 3.76 | 4 | 0.85 | 0.92

The evaluation comprised accessing and using the module on each participant's device, with no requirement for any additional installation. The participants were able to input custom information regarding lifestyle and energy intake to evaluate use with the available default image or, if desired, submit a custom image according to the reference pose, accompanied by personal information including gender, age, weight and height. Questionnaires were completed after the evaluation to record and process the results, while preserving the confidentiality of the participants' personal information.

The module was made available as a pilot web application where the users could directly test compatibility issues and experiment with various input data and images. In the following we present the collected results. The evaluation module was transparently available for mobile as well as other computing devices, due to the inherent compatibility of the WebGL/Javascript combination with most modern-day browsers. Indicatively, WebGL is natively supported by Google Chrome (since 2010), Mozilla Firefox (since 2011), Microsoft Internet Explorer (since 2013), Apple iOS Safari (since 2014), Microsoft Edge (since 2015), etc.

4.1.1. Participants' characteristics

A number of characteristics have been used to categorize the participants of the module evaluation, their values appearing in Table 2. These are the same characteristics incorporated in the module interface and follow the WHO categorization regarding obesity indices. The participants were requested to evaluate the outcome of the module according to various input values regarding energy consumption and intake, as applied either to the default image appearing in Fig. 4, corresponding to an individual of age group 2 and with a normal BMI, optionally processed by the users regarding body outline and key points, or to an image of their choice, accompanied by the corresponding profile information and processing. The visual result was evaluated according to visual realism and correspondence to the expected body figure alteration according to the input values. The distribution of the overall 21 participants with respect to body weight, age group, gender as well as platform used for the evaluation is summarized in Fig. 5.

Fig. 9. Results of Questions 1,2 per body weight group of participants.


Fig. 10. Results of Questions 3,4 per body weight group of participants.

4.1.2. Questionnaire

Most of the questions the participants had to answer were on a 5-level scale (Poor, Fair, Average, Good, Excellent), where the lowest level always represented the negative aspect (e.g. no motivation inspired by the application, poor realism, high difficulty in use, etc.). The first group of questions concerned the impact of the module and the realism it demonstrated in the depiction of body weight gain or loss. The second group concentrated on the functionality and the interface used to achieve this functionality. Two free-text questions in this group requested the most interesting and the most lacking part of the module. The last group comprised two questions regarding the technical aspects of installation and execution of the module. The complete list of questions follows:

Table 9 Statistic indices for questions 1–4 by weight group.
| Mean | Median | Variance | Standard deviation
Underweight | 3.00 | 3 | 0.90 | 0.95
Normal | 3.14 | 3 | 0.91 | 0.95
Overweight | 3.50 | 4 | 1.35 | 1.16
Obese 1 | 4.50 | 4.5 | 0.25 | 0.50
Obese 2 | 4.17 | 4 | 0.64 | 0.80


4.1.2.1. Impact and realism

1. To what extent do you think the module could motivate you towards a healthier lifestyle?
2. To what extent do you think the module realistically depicts the result of the input parameters?
3. To what extent do you think the module realistically depicts the result of body weight gain?
4. To what extent do you think the module realistically depicts the result of body weight loss?



4.1.2.2. Functionality and interface


5. How do you evaluate the range of input parameters with respect to the desired functionality?
6. How do you evaluate the input method for the body image outline and key points?
7. How do you consider the contribution of the image animation to the application impact?
8. To what degree do you consider the animated image improves the application impact?
9. Which aspect of the application do you consider the most interesting?
10. Which aspect of the application do you consider needs the most improvement?

4.1.2.3. Technical aspects


11. How easy was it to install the module on your device?
12. How easy was it to use the module on your device?

4.2. Evaluation results

The results from the first part of the evaluation reflect the users' impressions after operating the module with custom parameters for a default body image. The results are presented below per question, as well as with respect to key overall aspects of the module content and functionality. A set of statistical indices is also presented for completeness, summarized in Table 5. The statistical processing has been based upon the assumption of a direct mapping of the 5-level scale {Poor, Fair, Average, Good, Excellent} to the {1,2,3,4,5} set. There are two exceptions to this mapping. In particular, question 7 offers the set {Negative, Neutral, Positive} as potential answers, which have been mapped to the {1,3,5} set. Moreover, question 8 is a sub-question of question 7 when the answer to the latter is "Positive", thus offering only {Moderately, Adequately, Ideally} as options, mapped to {3,4,5}.
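A minimal sketch of the statistic indices of Table 5 as applied to the mapped answer values (illustrative code, not part of the module):

```javascript
// Mean, median, variance and standard deviation of mapped Likert answers.
function describe(answers) {
  const a = [...answers].sort((x, y) => x - y);
  const n = a.length;
  const mean = a.reduce((sum, v) => sum + v, 0) / n;
  const median = n % 2 === 1 ? a[(n - 1) / 2] : (a[n / 2 - 1] + a[n / 2]) / 2;
  const variance = a.reduce((sum, v) => sum + (v - mean) ** 2, 0) / n;
  return { mean, median, variance, std: Math.sqrt(variance) };
}

// Example: seven answers on the {1..5} scale.
console.log(describe([4, 3, 5, 4, 2, 3, 4]));
```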

4.2.1. Impact and realism (Questions 1–4)

The first four questions concern the impact of the module and the realism of its depiction of the body outline alteration with respect to the submitted input; their results are presented in Fig. 6. The vast majority of the participants in the evaluation appear to consider positively (11/21) or at least neutrally (8/21) the potential of the module to trigger their motivation for a healthier lifestyle (Q1), thus justifying to a considerable extent the effort of the present work. Realism is also considered good or excellent (10/21) or at least average (6/21) with respect to the depiction of the input values (Q2), and even more so for body weight gain (average 6/21, good or excellent 12/21) (Q3), whereas the depiction of body weight loss (Q4) demonstrates a balanced evaluation among the participants. The relevant statistic indices for the first four questions are summarized in Table 6.

4.2.2. Functionality and interface

The second part of the questionnaire concerns the evaluation of the interface and the functionality offered through it. The question regarding the range of input information receives an almost completely balanced evaluation. As also becomes evident in the free text questions, the formalized information employed by the module, dictated almost entirely by the standardized definitions of the BMI, BMR and PAL indices for a healthy weight and corresponding energy intake, seems restricting for the average user, who appears to desire a more declarative yet more detailed way to input lifestyle and energy information. Question 6, concerning the body outline input method through tracing and key point highlighting, also receives a balanced review. Again, the free text answers that follow reveal the need for a more transparent way of body outline extraction, with less user intervention, especially on mobile devices where the ability to trace the outline with high granularity is limited. Nevertheless, the animated evolution of the body image according to the input values seems to receive a highly positive evaluation, considered by 15/21 participants as a clear positive contribution to the module's impact. Of these evaluators, 11/15 consider the effect of this feature the highest possible. The free text answers regarding the most positive aspect of the module actually verify the animation feature as the most appealing one of the module, whereas the ability to submit custom images also receives positive mentions. The free text answers regarding aspects that could be improved suggest an automated body outline input method, the improvement of the alteration of the interior of the body image, as well as the elimination or relaxation of the restrictive requirement for the reference body pose regarding the custom submitted images. The results for the multiple-choice questions of this section appear in Fig. 7 and the statistic indices are summarized in Table 7.

4.2.3. Technical aspects

The last part of the questionnaire focused on the technical aspects of installation and use of the module, yielding the results presented in Fig. 8. The question regarding the ease of installation was practically a question of availability of a WebGL compatible browser on the user's device. Given the large penetration of up-to-date web browsers, the vast majority of the participants appear to have met no problem accessing the module (18/21 positive, 2/21 neutral), thus largely justifying the choice of pure WebGL/Javascript for the implementation. Ease of use also received a positive evaluation (13/21 positive, 6/21 neutral), although not to the extent of the previous question, possibly hindered by the body outline input method as a use case impeding the overall ease of use of the module. The relevant statistical indices appear in Table 8.

4.2.4. Results per body weight group

The first section of the questionnaire examines the effectiveness of the application in terms of the motivation inspired to the user and the realism achieved. It is, therefore, interesting to further analyze the responses of this specific section, identifying their distribution according to body weight group. This allows us to observe the effect of the application on the Overweight, Obese 1 and Obese 2 groups, who represent the main focus of the outcome of this work. The results of the first section of the questionnaire, presented per body weight group, appear in Figs. 9 and 10.

Concentrating on the three groups comprising individuals of a weight higher than normal (i.e. Overweight, Obese 1 and 2), it is interesting to observe that all of their members (9/9) evaluate as average or better the motivation triggered by the module towards a healthier body weight. In regard to the second question, the majority evaluates the realism of the representation of the input parameters as average (1/9) or better (6/9). Similar is the opinion of the specific groups' members regarding the realism achieved in the body weight gain visualization: 2/9 consider it average whereas 6/9 consider it good or excellent. The opinions with respect to the body weight loss representation appear to still be positive, with 5/9 members of the group favoring the relevant visualization whereas 2/9 consider it average. Overall, these three groups, which may be considered as representatives of the main target group of the application, appear to appreciate the motivation and realism achieved by the module in the effort to provide a more tangible and, therefore, more impactful form of assistance in fighting obesity. The collective statistic indices for questions 1 to 4 are summarized in Table 9.

5. Future work

In regard to future module enhancements and with respect to the interface, the acquisition process of the user-defined contour could be aided or entirely replaced by an image processing mechanism relying on the user-submitted anthropometric data and benefiting from the anatomical position requirement.

Entertainment Computing 32 (2019) 100324

G. Bardis, et al.

anatomical position requirement. In terms of functionality, a subsequent stage is to improve the body contour alteration function by extending the set of parameters and by more finely targeting the alteration on specific areas of the contour, as well as on the internal area, through a machine learning mechanism. The warping mechanism could also incorporate the texture more actively in the overall image alteration, since certain body areas not only contribute to the contour modification but also carry visual information that, appropriately distorted, could contribute to realism. The user could additionally be offered the option to submit two images corresponding to different, preferably distant, body states. These pictures could then yield plausible intermediate states and, based on observed feature changes and incremental learning benefiting from multiple users' submissions, lead to extrapolated states towards body weights outside the range of the submitted images. Another axis for future development would be to relinquish the requirement for the module to be deployable on low-end mobile devices and explore the potential for animated warping of a custom 3D avatar. Such an approach would require additional input from the user, in the form of alternative images of the same anatomical position already required by the current version of the module, in order to reconstruct the 3D model of the user's body figure. To this end, a variety of methods could be employed, ranging from simple techniques, such as Structure from Motion (e.g. VisualSFM [46]), to state-of-the-art approaches, such as Convolutional Neural Networks in conjunction with volumetric regression [47] or with a dense semantic representation [48] generated from a skinned multi-person linear model (SMPL) [49]. Moreover, the reconstruction process could be based, similarly to the current version, on user feedback regarding the position of key body features in the submitted images. The computational part of body figure warping would have to be enhanced to realistically depict, in the 3D realm, the alteration corresponding to the submitted exercise and energy intake habits. Finally, in order to maximize the impact of the animated avatar, the user interface should offer continuous change of view and aspect adjustment during the animation. This could be achieved by offering interactive control of the angle of view, orientation and trajectory of the scene camera, coupled with the control of the warping speed already available in the current version of the module. Such combined functionality would allow the user to inspect the outcome from multiple vantage points and at various speeds, focusing on specific body areas as well as on specific stages of the evolution of the warping process. Alternatively, a best viewing trajectory method could be employed [50] in order to offer automatic, intelligent navigation of the animated avatar evolution, based on human anatomy morphological information combined with the geometry alteration computed for the specific user's 3D model warping and relevant profile information.
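As a rough illustration of how such combined control could be wired together in the same WebGL/Javascript setting, the sketch below advances a warp interpolation factor at a user-selected pace while an orbit camera follows user-controlled yaw and pitch angles. All names (controls, drawAvatar, initialVertices, targetVertices, etc.) are hypothetical placeholders; the snippet is only an outline under these assumptions, not the module's actual implementation, and the eventual 3D version would feed the interpolated geometry and camera parameters to the WebGL draw calls.

// Illustrative sketch only (hypothetical names): combined user control of warping
// speed and camera orientation for an animated avatar, following the render-loop
// pattern common in WebGL applications.

// User-adjustable parameters, e.g. bound to sliders or pointer-drag handlers.
const controls = { warpSpeed: 0.25, yaw: 0.0, pitch: 0.0, distance: 3.0 };

// Placeholder geometry: initial and target vertex positions (x, y, z triplets).
const initialVertices = new Float32Array([0, 0, 0, 1.0, 0, 0, 0, 1.0, 0]);
const targetVertices  = new Float32Array([0, 0, 0, 1.2, 0, 0, 0, 1.1, 0]);

let warpT = 0.0;   // interpolation factor: 0 = current body state, 1 = predicted state
let lastTime = null;

// Linear per-vertex interpolation between the two body states.
function interpolateVertices(a, b, t) {
  const out = new Float32Array(a.length);
  for (let i = 0; i < a.length; i++) out[i] = (1 - t) * a[i] + t * b[i];
  return out;
}

// Orbit-camera position derived from the user-controlled yaw, pitch and distance.
function cameraPosition({ yaw, pitch, distance }) {
  return [
    distance * Math.cos(pitch) * Math.sin(yaw),
    distance * Math.sin(pitch),
    distance * Math.cos(pitch) * Math.cos(yaw)
  ];
}

// Stub standing in for the WebGL draw call that would render the avatar.
function drawAvatar(vertices, camPos) {
  console.log('camera at', camPos, 'first warped x:', vertices[3]);
}

function frame(now) {
  if (lastTime !== null) {
    const dt = (now - lastTime) / 1000;
    warpT = Math.min(1, warpT + controls.warpSpeed * dt); // advance at the chosen pace
  }
  lastTime = now;
  drawAvatar(interpolateVertices(initialVertices, targetVertices, warpT),
             cameraPosition(controls));
  requestAnimationFrame(frame);
}

requestAnimationFrame(frame);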

6. Conclusions
The current work presents a web-based application module offering visualization of the expected weight gain/loss resulting from variable energy intake and exercise information. Visualization is achieved through computational warping of a custom image serving as a smart avatar, relying on user input for a trace of the body figure, the identification of key features and simple anthropometric data. The module is implemented in WebGL/JavaScript, eliminating the need for additional visualization plugins or libraries due to native WebGL support in most modern web browsers, thus making it lightweight, deployable even on low-end mobile devices and highly portable. Animating long-term body change through warping offers the chance to witness body figure transformation as a smooth process rather than as static images. To the extent of our knowledge, this combination does not appear in any current application reported in the relevant literature. The module has been evaluated regarding the impact and realism of the visual outcome, the functionality and interface provided and the technical aspects of its use. The results in the first section are encouraging, demonstrating a highly positive reception of the approach, also largely reflected in the subgroup of participants belonging to the overweight or obese part of the population. The second section has highlighted the animated weight gain or loss feature as the most impressive of the module, reported to greatly contribute to the module's appeal, whereas it has pinpointed the body outline tracing process as the functionality most in need of improvement. Finally, the highly positive results of the third section serve as a justification for our choice of a pure WebGL/Javascript approach for the implementation. Future enhancements could focus on the user image input mechanism, reducing the user's participation in the definition of the body contour, as well as on the body alteration functionality, using machine learning for improved contour and texture warping. At the cost of an increased application footprint, the avatar could be elevated to the 3D realm, combined with automated intelligent trajectory control or dynamic interactive control of the viewpoint during animation.
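To make the core idea of contour warping concrete, the brief JavaScript sketch below displaces the traced outline points horizontally, away from or toward a vertical body axis, according to a signed weight-change factor; animating the transition then amounts to interpolating between the original and warped outlines (and the corresponding texture) over time, at the pace selected by the user. The names (warpOutline, axisX, weightFactor) are hypothetical and the deliberately simplified computation is a sketch under these assumptions, not the module's actual alteration function, which also takes into account the submitted anthropometric data and key body features.

// Simplified illustration (hypothetical names), not the module's actual warping code:
// displace each traced outline point horizontally with respect to a vertical body
// axis according to a signed weight-change factor (positive = gain, negative = loss).

function warpOutline(outline, axisX, weightFactor) {
  // outline: array of {x, y} points tracing the body figure
  return outline.map(p => ({
    x: axisX + (p.x - axisX) * (1 + weightFactor),
    y: p.y
  }));
}

// Example: a crude four-point torso outline warped towards a 15% "gain" state.
const traced = [
  { x: 80, y: 10 }, { x: 120, y: 10 },
  { x: 130, y: 150 }, { x: 70, y: 150 }
];
console.log(warpOutline(traced, 100, 0.15));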

Declaration of Competing Interest
The authors declare that there is no conflict of interest.

References
[1] T. Beier, S. Neely, Feature-based image metamorphosis, SIGGRAPH Comput. Graph. 26 (2) (1992) 35–42, https://doi.org/10.1145/142920.134003.
[2] M.J. Wolf, A brief history of morphing, in: Meta-Morphing: Visual Transformation and the Culture of Quick Change, University of Minnesota Press, 2000.
[3] G. Wolberg, Image morphing: a survey, Visual Comput. 14 (1998) 360–372.
[4] B. Zope, S.B. Zope, A survey of morphing techniques, Int. J. Adv. Eng. Manage. Sci. (IJAEMS) 3 (2) (2017) 81–87.
[5] M.L. Staten, S.J. Owen, S.M. Shontz, A.G. Salinger, T.S. Coffey, A comparison of mesh morphing methods for 3D shape optimization, in: W.R. Quadros (Ed.), Proceedings of the 20th International Meshing Roundtable, Springer, 2011, pp. 293–311.
[6] K. Penska, L. Folio, R. Bunger, Medical applications of digital image morphing, J. Digit. Imaging 20 (3) (2007) 279–283, https://doi.org/10.1007/s10278-006-1050-5.
[7] E. Shechtman, A. Rav-Acha, M. Irani, S.M. Seitz, Regenerative morphing, in: Proceedings of Computer Vision and Pattern Recognition (CVPR), 2010, pp. 615–622.
[8] Abrasoft FantaMorph, http://www.fantamorph.com/, retrieved May 2019.
[9] http://www.makehumancommunity.org/, retrieved May 2019.
[10] Morpheus Photo Warper, http://www.morpheussoftware.net/, retrieved May 2018.
[11] World Health Organization, Fact sheet N°311 – Obesity and Overweight, updated January 2015.
[12] H. Lu, Y.N. Tarasenko, F.C. Asgari-Majd, C. Cottrell-Daniels, F. Yan, J. Zhang, More overweight adolescents think they are just fine, Am. J. Prev. Med. 49 (5) (2015) 670–677, https://doi.org/10.1016/j.amepre.2015.03.024.
[13] V. Mogre, S. Aleyira, R. Nyaba, Misperception of weight status and associated factors among undergraduate students, Obes. Res. Clin. Pract. 9 (5) (2015) 466–474.
[14] M. Rahman, A. Berenson, Self-perception of weight and its association with weight-related behaviors in young, reproductive-aged women, Obstet. Gynecol. 116 (6) (2010) 1274–1280.
[15] http://www.simusuit.com/, retrieved May 2019.
[16] U.S. Department of Health and Human Services, Body Weight Planner, https://www.supertracker.usda.gov/bwp/index.html, retrieved May 2018.
[17] http://www.modelmydiet.com/, retrieved May 2019.
[18] F.T. Waddell, S.S. Sundar, J. Auriemma, Can customizing an avatar motivate exercise intentions and health behaviors among those with low health ideals?, Cyberpsychol. Behav. Soc. Netw. 18 (11) (2015) 687–690.
[19] C.K. Nikolaou, M.E.J. Lean, Mobile applications for obesity and weight management: current market characteristics, Int. J. Obes. 41 (1) (2017) 200–202, https://doi.org/10.1038/ijo.2016.186.
[20] Lose Belly Fat in 30 Days - Flat Stomach, Google Play, 2019.
[21] Lose It! - Calorie Counter, Google Play, 2019.
[22] AktiBMI, Google Play, 2019.
[23] My Weight Tracker, BMI, Google Play, 2019.
[24] J.M. Peregrín-Alvarez, Long-term weight loss by mobile app: current status and future perspectives, EC Nutrition SI.01 (2017) 41–46.
[25] S. Zaidan, E. Roehrer, Popular mobile phone apps for diet and weight loss: a content analysis, JMIR mHealth uHealth 4 (3) (2016) e80.
[26] K.D. Hall, G. Sacks, D. Chandramohan, C. Carson, Y. Chow, C. Wang, et al., Quantification of the effect of energy imbalance on bodyweight, Lancet 378 (2011) 826–837.
[27] J. Zhang, L. Tong, P.J. Lamberson, R.A. Durazo-Arvizu, A. Luke, D.A. Shoham, Leveraging social influence to address overweight and obesity using agent-based models: the role of adolescent social networks, Soc. Sci. Med. 125 (2015) 203–213, https://doi.org/10.1016/j.socscimed.2014.05.049.
[28] A. Ballester, E. Parrilla, J.A. Vivas, S. Alemany, Low-cost data-driven 3D reconstruction and applications, in: 6th International Conference and Exhibition on 3D Body Scanning Technologies, 2015.
[29] Institute of Biomechanics of Valencia, Anthropometry Research Group, http://anthropometry.ibv.org/en/3d-surveys.html, retrieved May 2018.
[30] A. Barenbrock, M. Herrlich, K.M. Gerling, J.D. Smeddinck, R. Malaka, Varying avatar weight to increase player motivation: challenges of a gaming setup, in: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, 2018.
[31] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, M.J. Black, SMPL: a skinned multi-person linear model, ACM Trans. Graph. (Proc. SIGGRAPH Asia) 34 (6) (2015) 248:1–248:16.
[32] M. Kim, G. Pons-Moll, S. Pujades, S. Bang, J. Kim, M.J. Black, S.H. Lee, Data-driven physics for human soft tissue animation, ACM Trans. Graph. (TOG) 36 (4) (2017) 54.
[33] A. Feng, D. Casas, A. Shapiro, Avatar reshaping and automatic rigging using a deformable model, in: Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games, Paris, ISBN 978-1-4503-3991-9, 2015, pp. 57–64.
[34] Fast Generation of Realistic Virtual Humans, in: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, ISBN 978-1-4503-5548-3, Article No. 12, 2017, pp. 1–10.
[35] https://www.khronos.org/webgl/, retrieved May 2019.
[36] A. Evans, M. Romeo, A. Bahrehmand, J. Agenjo, J. Blat, 3D graphics on the web: a survey, Comput. Graph. 41 (2014) 43–61.
[37] A. Ebneshahidi, Anatomical Terminology, http://www.lamission.edu/lifesciences/AliAnat1/Chap1-anatomical terminology.pdf, retrieved May 2018.
[38] G.P. Wagner, The biological homology concept, Annu. Rev. Ecol. Syst. 20 (1989) 51–69.
[39] EUROFIT reference body template and skeleton, http://www.eurofit-project.eu/cms/upload/Public_Downloads/Body_reference_package.zip, retrieved May 2018.
[40] J.R. Shewchuk, Delaunay refinement algorithms for triangular mesh generation, Comput. Geom. Theory Appl. 22 (1–3) (2002) 21–74.
[41] 3D Modeling reference by Digital Wide Resource, https://www.pinterest.com/pin/185906626879275/, retrieved May 2018.
[42] Human Energy Requirements, Report of a Joint FAO/WHO/UNU (Food and Agriculture Organization/World Health Organization/United Nations University) Expert Consultation, Rome, 17–24 October 2001.
[43] A. Trichopoulou, C. Gnardellis, A. Lagiou, V. Benetou, A. Naska, et al., Physical activity and energy intake selectively predict the waist-to-hip ratio in men but not in women, Am. J. Clin. Nutr. 74 (5) (2001) 574–578.
[44] R.T. Sabo, C. Ren, S.S. Sun, Comparing height-adjusted waist circumference indices: the Fels Longitudinal Study, Open J. Endocrine Metabol. Dis. 2 (3) (2012) 40–48.
[45] Room Made of Brick Wall and Wooden Floor, created by Rawpixel.com - Freepik.com, retrieved January 2019.
[46] C. Wu, Towards linear-time incremental structure from motion, in: International Conference on 3D Vision (3DV), 2013.
[47] A.S. Jackson, C. Manafas, G. Tzimiropoulos, 3D human body reconstruction from a single image via volumetric regression, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018.
[48] Z. Zheng, T. Yu, Y. Wei, Q. Dai, Y. Liu, DeepHuman: 3D human reconstruction from a single image, arXiv preprint arXiv:1903.06473, 2019.
[49] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, M.J. Black, SMPL: a skinned multi-person linear model, ACM Trans. Graph. (Proc. SIGGRAPH Asia) 34 (6) (2015) 248:1–248:16.
[50] D. Sokolov, D. Plemenos, Virtual world explorations by using topological and semantic knowledge, Visual Comput. 23 (2007) 173–185.
