Neurocomputing 177 (2016) 26–32
Data-driven humanlike reaching behaviors synthesis
Pei Lv a, Mingliang Xu a,*, Bailin Yang b, Mingyuan Li a, Bing Zhou a
a School of Information Engineering, Zhengzhou University, Zhengzhou 450000, China
b School of Computer Science & Information Engineering, Zhejiang Gongshang University, Hangzhou 310018, China
E-mail addresses: [email protected] (P. Lv), [email protected] (M. Xu), [email protected] (B. Yang), [email protected] (M. Li), [email protected] (B. Zhou).
http://dx.doi.org/10.1016/j.neucom.2015.10.118
Article info

Article history: Received 10 June 2015; Received in revised form 11 October 2015; Accepted 29 October 2015; Available online 10 November 2015. Communicated by Yue Gao.

Keywords: Data-driven; Reaching; Controller; Optimization; Transition

Abstract

Reaching is one of the most important behaviors in our daily life and has attracted many researchers in both the computer animation and robotics communities. However, existing methods either lack flexibility or produce results that are not convincing. In this paper, we present a novel controller-based framework for reaching motion synthesis. Our framework consists of four stationary controllers that generate concrete reaching motion and three transition controllers that stitch the stationary controllers together automatically. Each stationary controller can be applied alone or combined with the others. Owing to this design, our method effectively imitates the inherently tentative process of human reaching, and our controllers can generate continuous reaching motion from the virtual character's previous status, with no need to start from one fixed initial pose. Moreover, we embed a gaze simulation model in each controller, which guarantees consistency between head and hand movement. Experiments show that our framework is easy to implement and generates natural-looking reaching motion in real time.

© 2015 Elsevier B.V. All rights reserved.
1. Introduction

Although the problem of controlling a full-body character to reach a given target has been explored extensively, the results are still not convincing because the following details are missing. (1) Human reaching is often a tentative process that chains different behaviors into a sequence: when a person wants to reach a high target, he or she usually tries first while standing and, on finding the target out of reach, chooses to tiptoe or jump. Most existing techniques cannot reflect this process. (2) For humans it is natural to reach multiple targets continuously in a given reaching mode, yet this instinct is rarely seen in existing results except in the scenario where the character reaches targets while standing. (3) The eyes of human subjects are highly correlated with hand movement, but most existing mocap data does not record gaze movement in detail, so researchers often ignore this important fact.

To solve these problems and synthesize natural-looking reaching motion, we present a novel controller-based reaching motion synthesis framework. The framework is easy to implement: its input is a new target position and the character's current state, and its output is a piece of reaching motion. Inspired by the biomechanics literature, we incorporate four kinds of
reaching strategies (standing, stepping, tiptoeing and jumping), each encapsulated as a stationary reaching controller, into our system. In addition, three transition controllers, described in the controller formulation section, are adopted to stitch these stationary controllers together. All of these controllers live in a uniform framework, so the tentative process of human reaching can be simulated well.

Compared to previous approaches, our reaching motion can start naturally from any status during the reaching process, without special tricks: any controller in our framework can treat the character's current state as the starting point for reaching a new target. If the target cannot be touched by a single controller, the framework switches to another controller to continue the task, which gives our method a great advantage for multiple continuous reaches.

For humans, the movement of the head is closely related to that of the hand, and people easily recognize inconsistencies between head and hand movement. We therefore embed an effective gaze simulation model in each controller: at each time step, the controller calculates the head orientation and updates the status of the head, so the synthesized reaching motion looks more natural.
2. Related work

2.1. Reaching and grasping motion synthesis
E-mail addresses:
[email protected] (P. Lv),
[email protected] (M. Xu),
[email protected] (B. Yang),
[email protected] (M. Li),
[email protected] (B. Zhou). http://dx.doi.org/10.1016/j.neucom.2015.10.118 0925-2312/& 2015 Elsevier B.V. All rights reserved.
There exists a large amount of work on motion synthesis for human reaching and manipulation. Here we select some
representative works for a brief review. Aydin and Nakajima [1] use forward and inverse kinematics to generate grasping motion with a large motion database. Raunhardt and Boulic [2] handle reaching motion synthesis in a low-dimensional data space and use a numerical iterative method to refine the final result. Kallmann et al. [3] combine path-planning techniques with mocap data to generate flexible and collision-free manipulation motion. Huang et al. [4] explore a blendable example space with a sampling-based planner to produce realistic reaching motion around obstacles. To create a highly skilled virtual character, Feng et al. [5] combine path planning, locomotion, reaching and grasping in one motion system that can apply these skills in an interactive setting. Recently, researchers have noticed that unnatural reaching results are often caused by over-simplification of the shoulder [6,7], so they try to simulate or reproduce more accurate shoulder movement with different approaches, such as [8]. These approaches can generate continuous reaching but are often limited to one particular reaching scenario, such as reaching while the character is standing. By introducing different reaching strategies, Lv et al. [9,10] present a biomechanics-based lifelike reaching controller that generates different reaching motions; the advantage of their approach is that it greatly extends the reachable space of the virtual character. However, it cannot generate continuous and tentative reaching, because its starting point is always a whole piece of a certain motion. Blending and splicing techniques have also been used for upper-body motion synthesis, such as [11-13]. Like the approaches above, they still cannot realize continuous and tentative reaching at the same time, and their biggest problem is a very limited reachable space.

2.2. Model-based motion synthesis

The methods in the last section are designed specifically for reaching motion synthesis, while model-based algorithms are more general across various types of motion synthesis. Reaching is such a common behavior in daily life that these algorithms can also be used for reaching motion synthesis. Rose et al. [14] explore an efficient inverse-kinematics method based on the interpolation of example motions and positions. Mukai and Kuriyama [15] treat motion interpolation as statistical prediction of missing data in an arbitrarily definable parametric space and introduce universal kriging to statistically estimate the correlation between the dissimilarity of motions and the distance in the parametric space. Grochow et al. [16] present a scaled Gaussian process latent variable model that learns human poses in an inverse kinematics system and can produce different styles of IK by training the model on different input data. Min et al. [17] present a new low-dimensional deformable motion model for human motion modeling and synthesis; by continuously adjusting the deformable parameters to match user-specified constraints, their method generates the desired animation. Levine et al. [18] present a probabilistic motion model that drives the character to perform user-specified tasks, using a low-dimensional space learned from example motion to control the virtual character to accomplish special tasks continuously.
The above methods are all built upon motion capture data; their advantage is that a small amount of data suffices to generate new motion through prediction or interpolation. However, generating flexible and natural reaching motion requires a large amount of example motion data, and training these models on that much data is extremely time-consuming, while the efficiency of motion synthesis also decreases greatly. If the models are trained on different reaching
behavior data separately, the training data per model is reduced, but combining the separate models to generate flexible reaching motion naturally becomes very complex and difficult.

2.3. Composite controllers for motion synthesis

Controller-based motion synthesis methods show great power on complex problems, and their controllers can often be combined or reused. Faloutsos et al. [19] propose a framework for composing controllers to enhance the motor abilities of dynamic, anthropomorphic figures; the key contribution is an explicit model of the preconditions under which motor controllers are expected to function properly. Sok et al. [20] develop an optimization method that transforms either motion-captured or kinematically synthesized biped motion into a physically feasible, balance-maintaining simulated motion, and their controller learning algorithm facilitates the creation and composition of robust dynamic controllers. da Silva et al. [21] use linear Bellman combination to reuse existing controllers: given a set of controllers for related tasks, their combination approach creates an appropriate controller for a new task. Muico et al. [22] realize composite controllers by tracking multiple trajectories in parallel instead of switching sequentially from one controller to another; the composite controllers can blend or transition between different paths at arbitrary times according to the current system state. Huang et al. [23] propose a hierarchical control system that coordinates the movement of the arms, spine and legs of an articulated character through a novel controller scheduling algorithm. Note that these controllers are derived from analytical formulas for the movement of body joints. Inspired by these methods, we also solve reaching motion synthesis with a composite controller approach: we modify and extend the work in [9] and present a controller-based framework that generates more natural-looking reaching motion.
3. Framework overview

Our reaching framework consists of four stationary controllers (standing, stepping, tiptoeing and jumping) and three transition controllers (stand2step, stand2tiptoe and tiptoe2jump). Each stationary controller handles reaching targets in its own reachable space: if a new target is reachable by the current controller, it continues the reaching task from the last reached position; if not, our framework switches to another controller automatically and continues the reaching process. In Fig. 1, the initial state can be any pose during the reaching process, not only the default standing pose as in [9]. As the arrows show, the transient processes are all unidirectional, a design based on two considerations: first, most human beings follow these unidirectional behaviors when reaching; second, each stationary controller can recover to the default standing pose directly, so no transition controller is needed for that purpose. To imitate realistic human reaching behavior, when a new target is given our framework usually begins motion synthesis with the standing controller, but users can also select a specific controller for the job.

Fig. 1. Framework overview.
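To make the controller graph concrete, the following minimal Python sketch encodes the unidirectional transitions of Fig. 1. It is purely illustrative: the names and the reachable-space test are our own assumptions, not taken from the paper's implementation.

```python
# Minimal sketch of the controller graph in Fig. 1 (names are illustrative).
# Transitions are unidirectional; every stationary controller can fall back
# to the default standing pose directly, so no reverse transitions exist.
STATIONARY = {"stand", "step", "tiptoe", "jump"}

# stationary controller -> [(transition controller, next stationary controller)]
TRANSITIONS = {
    "stand": [("stand2step", "step"), ("stand2tiptoe", "tiptoe")],
    "tiptoe": [("tiptoe2jump", "jump")],
    "step": [],
    "jump": [],
}

def next_controller(current, reachable):
    """Pick the first outgoing transition whose target stationary
    controller can reach the goal; None if no further attempt exists."""
    for transition, target in TRANSITIONS.get(current, []):
        if reachable(target):
            return transition, target
    return None
```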
4. Controller formulation

As mentioned before, our data-driven reaching controllers fall into two classes: stationary controllers and transition controllers. The first kind generates the different reaching motions; the second realizes the transient process between different stationary controllers. The definitions of these controllers are given in detail below.

4.1. Stationary controller

While a stationary controller is running, the pose at the next time step is inferred from the character's current state and the target position. Before describing the controllers, we introduce some notation. $P(t) = \{p_{root}, q_1, q_2, \ldots, q_N\}$ describes the character pose at time $t$, where $p_{root}$ is the global root position of the virtual character, $q_i$ is a joint angle represented as a quaternion, and $N$ is the number of joints. The difference between character poses is measured by the Euclidean distance of the feature $\xi = \langle p_{rwrist}, p_{lknee}, p_{rknee}, p_{lheel}, p_{rheel}, p_{CoM}, q_{head}, gc \rangle$, where $p_{rwrist}$ is the right wrist position, the most important constraint for reaching motion; $p_{lknee}$ and $p_{rknee}$ mainly distinguish whether the character is stepping or bending a knee; $p_{lheel}$ and $p_{rheel}$ indicate whether the character is tiptoeing or jumping; $p_{CoM}$, the character's center of mass, roughly discriminates among all of these controllers; $q_{head}$ is the head orientation represented as a quaternion, an important term for guaranteeing that the character's eyes move along with the hand; and $gc$ describes the contact between the character and the ground as a 4-component boolean (one or zero) array encoding the contact information for the heel and toe of both feet. To obtain the $gc$ vector, we first compute the positions of the heel and toe of both feet using forward kinematics. Once these global positions $(x, y, z)$ are computed, we compare their values along the $y$-axis with the predefined height of the ground, set here to 0.01: if the value is larger than the ground height, we regard the joint as above the ground and record its mark as one; otherwise we record it as zero. Note that we only use the right hand to reach targets in this paper; since the human body is highly symmetric, the same motion is easily mirrored to the left hand.

Given the current pose $P(t)$ at time $t$, our controller predicts a new pose $P(t+1)$ by searching the $k$ nearest neighboring samples $\{P_i \mid i = 1, 2, \ldots, k\}$ around this pose and combining them with different weights:

$$P(t+1) = \frac{\sum_i \omega_i P_i}{\sum_i \omega_i} \qquad (1)$$

where $\omega_i = 1/d_i + 1/D_i$, $d_i$ is the Euclidean distance between $\xi(t)$ and $\xi_i$ (the feature $\xi$ is pre-computed for each frame using forward kinematics), and $D_i$ is the distance between the right wrist position and the target. Note that Eq. (1) differs slightly between the jumping controller and the other controllers: we introduce a time threshold $t_{threshold}$ in the jumping controller, and when $t < t_{threshold}$ the weight is set to $\omega_i = 1/d_i + D_i$, while for $t \geq t_{threshold}$ it is set to $\omega_i = 1/d_i + 1/D_i$. This threshold is based on the fact that when people begin to jump, they first bend their knees, so the wrist moves away from the target during this period; after the lowest position is reached, the wrist approaches the target gradually. The threshold is computed by formula (2), in which $f(P(t))$ is the forward kinematics function computing the position of the right wrist:

$$t_{threshold} = \arg\max_t \; \| f(P(t)) - p_{target} \| \qquad (2)$$

To locate the $k$ nearest neighbors efficiently, we adopt the neighbor graph presented in [24].
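To make Eq. (1) concrete, the following Python sketch shows one prediction step under simplifying assumptions: brute-force neighbor search instead of the neighbor graph of [24], and poses blended as plain vectors. All function and variable names are ours, not from the authors' implementation.

```python
import numpy as np

def gc_vector(heel_toe_y, ground_height=0.01):
    """4-component contact code for heel/toe of both feet (Section 4.1):
    a joint whose y value exceeds the ground height is marked 1, else 0."""
    return np.array([1 if y > ground_height else 0 for y in heel_toe_y])

def predict_next_pose(xi_t, p_target, samples, k=15,
                      t=None, t_threshold=None, eps=1e-6):
    """One prediction step of a stationary controller (Eq. (1)).

    samples: (xi_i, wrist_i, P_i) triples precomputed per mocap frame,
    with P_i the pose vector and wrist_i the right wrist position.
    The paper indexes samples with the neighbor graph of [24]; this
    sketch uses brute-force k-NN purely for clarity."""
    nn = sorted(samples, key=lambda s: np.linalg.norm(xi_t - s[0]))[:k]
    weighted_sum, weight_total = 0.0, 0.0
    for xi_i, wrist_i, P_i in nn:
        d_i = np.linalg.norm(xi_t - xi_i) + eps         # feature distance
        D_i = np.linalg.norm(wrist_i - p_target) + eps  # wrist-to-target
        if t is not None and t_threshold is not None and t < t_threshold:
            w = 1.0 / d_i + D_i        # jumping controller, knee-bend phase
        else:
            w = 1.0 / d_i + 1.0 / D_i  # standard Eq. (1) weight
        weighted_sum += w * np.asarray(P_i)
        weight_total += w
    # Linear blend; joint quaternions would need renormalization in practice.
    return weighted_sum / weight_total
```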
4.2. Transition controller

The features in each category of reaching behavior form a feature space, and the purpose of a transition controller is to build a bridge between these spaces. Without transition controllers, the search space of each stationary controller would cover at least two feature spaces and search efficiency would drop considerably. As shown in Fig. 1, three kinds of transition controller are constructed: stand2tiptoe ($T_{st}$), stand2step ($T_{ss}$) and tiptoe2jump ($T_{tj}$). They are similar to the stationary controllers but are trained on different datasets; note that $T_{tj}$ is divided into two stages, similar to the jumping controller. Given sufficient mocap data, e.g. of reaching from standing to stepping, we obtain the appropriate transition controller $T$ automatically using the feature $\xi$. We observe that the transition from standing to tiptoeing usually occurs at the highest position the character's hand can reach while standing, so our algorithm monitors the wrist position and the heels simultaneously: if the wrist position keeps changing along the $y$-axis while the heels lose contact with the ground, the transition is recognized to be at its starting point. The other transition controllers are computed similarly, the differences being that for $T_{ss}$ the positions of the right foot and knee are monitored, while for $T_{tj}$ the position of the CoM and the status of the heels and toes are monitored.
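As an illustration of this detection rule, here is a small sketch; the $gc$ component ordering and the function name are hypothetical:

```python
def stand2tiptoe_started(wrist_y_prev, wrist_y_curr, gc):
    """Detect the starting point of the stand2tiptoe transition (Section
    4.2): the wrist keeps rising along the y-axis while both heels lose
    contact with the ground. Per Section 4.1, a gc mark of one means the
    joint is above the ground. The component ordering
    (left heel, left toe, right heel, right toe) is our assumption."""
    wrist_still_rising = wrist_y_curr > wrist_y_prev
    heels_off_ground = gc[0] == 1 and gc[2] == 1
    return wrist_still_rising and heels_off_ground
```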
5. Controller composition for reaching

Before synthesizing the reaching motion, we must determine which controller or controllers to involve. To solve this problem, we first divide the stationary controller feature $\xi$ into the different arm reachable spaces used in [9], according to the wrist position $p_{rwrist}$, and index them with a KD-tree. Once the target is given, we quickly know which stationary controller can reach it successfully. We then have two ways to synthesize the final reaching motion: the first is to select that controller alone to do the job, as in [9]; the other is to combine different controllers to complete it. We determine this process automatically with the following method:

$$P(t) = \alpha(t) P_{stand}(t) + \beta_i(t) T_i(t) + \gamma(t) P_{final}(t) \qquad (3)$$

In formula (3), the controller $P_{stand}(t)$ ensures that all reaching motion can start from the standing pose, $T_i(t)$ represents the transition controllers (there may be more than one), and $P_{final}(t)$ is the stationary controller that can reach the target. The key to our automatic method is the two-valued functions $\alpha(t)$, $\beta_i(t)$ and $\gamma(t)$, which concatenate our controllers in sequence seamlessly. If we want to select the first way mentioned above, we can set $\alpha(t) = 0$ and $\beta_i(t) = 0$; otherwise we use $P(t)$ to determine them automatically. These functions are defined as follows:

$$\alpha(t) = \begin{cases} 1 & t \leq t_{P'_{stand}} \\ 0 & t > t_{P'_{stand}} \end{cases} \qquad (4)$$

$$\beta_i(t) = \begin{cases} 1 & t_{P'_i} \leq t < t_{P_{i+1}} \\ 0 & t \geq t_{P_{i+1}} \end{cases} \qquad (5)$$

$$\gamma(t) = \begin{cases} 1 & t \geq t_{P_{final}} \\ 0 & t < t_{P_{final}} \end{cases} \qquad (6)$$

where $t_i$ and $t'_i$ are the starting and ending times of one controller. These parameters are computed easily by monitoring the status of the virtual character's right wrist. For either a stationary or a transition controller, if $D_i$ changes by less than a threshold $\varepsilon = 0.92$ (measured in centimeters) over three iterations (three frames), the framework switches to another controller following the transitions in Fig. 1. Our framework can thus generate tentative reaching motion toward targets at different places, by single or multiple reaches. To make the process complete, we also synthesize a piece of motion driving the character from the target-reached status back to its initial pose; this is realized by setting the initial wrist position as the final target and using the last controller to reach it.

Fig. 2. Stationary controller. (a) Single reaching while standing. (b) Multiple reaching while standing. (c) Single reaching while stepping. (d) Multiple reaching while stepping.

Because our data-driven method is based on nearest neighbor search, the target may not be reached exactly and there may be jerky artifacts, especially at the transition points, so we introduce a nonlinear optimization to refine the final result:

$$\arg\min \int \left( \ddot{p}(t)^2 + \sum_{i=1}^{N} \ddot{q}_i(t)^2 \right) dt \quad \text{s.t.} \quad \| f(P(t_{success})) - p_{target} \| = 0 \qquad (7)$$

where $t_{success}$ is the time when the character reaches the target, which can be inferred from formula (3).
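Because $\alpha$, $\beta_i$ and $\gamma$ are binary gates that never overlap, Eq. (3) amounts to running the chained controllers one after another. The sketch below, a minimal Python illustration under a controller interface of our own devising (none of the names come from the authors' code), mirrors that scheduling together with the $\varepsilon$ stall test described above:

```python
EPSILON = 0.92   # cm; stall threshold from Section 5
STALL_FRAMES = 3

def compose_reaching(controllers, state, p_target, max_frames=1000):
    """Run the controller chain of Eq. (3). 'controllers' is the ordered
    list P_stand, T_1, ..., P_final; each is an assumed callable
    (state, target) -> (new_state, D) where D is the wrist-to-target
    distance. A controller is abandoned when D changes by less than
    EPSILON over STALL_FRAMES consecutive frames, and control passes to
    the next controller in the chain."""
    motion, idx, recent_D = [], 0, []
    for _ in range(max_frames):
        state, D = controllers[idx](state, p_target)
        motion.append(state)
        if D < 1e-3:                      # target reached
            break
        recent_D.append(D)
        if len(recent_D) > STALL_FRAMES:
            recent_D.pop(0)
        stalled = (len(recent_D) == STALL_FRAMES and
                   max(recent_D) - min(recent_D) < EPSILON)
        if stalled and idx + 1 < len(controllers):
            idx += 1                      # hand over to the next controller
            recent_D.clear()
    return motion
```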
6. Experiments

6.1. Experiment setup

The computing environment is a common PC with a quad-core 2.4 GHz CPU and 4 GB memory. The skeleton model used in our experiments has 26 bones; the hand is treated as having fixed length and orientation, and its position is computed by adding a translation to the wrist position. The mocap data is captured at 120 frames per second by a Vicon system and then downsampled to 30 frames per second for real-time display. In total, about 200 labeled pieces of reaching motion (100,000 frames) are used to construct the stationary controllers, and 50 pieces of mocap data (5,000 frames) are used to construct the transition controllers. 15 nearest neighbor poses are used in our controller. Researchers have proposed many methods [25–29] to solve nonlinear optimization problems; we find the Sequential Quadratic Programming method most suitable for the optimization problem in formula (7). Thanks to the good initial motion sequence generated by our controllers, the optimization converges very quickly, taking only about 45–65 ms to process 100 frames of reaching motion.
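For reference, Eq. (7) can be discretized and handed to an off-the-shelf SQP solver. The sketch below uses SciPy's SLSQP routine on a finite-difference version of the objective; it is a simplified stand-in for the authors' solver, and `fk_wrist` and all other names are our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def refine_motion(P0, fk_wrist, p_target, t_success):
    """Post-process of Eq. (7) as an SQP problem: minimize the summed
    squared (finite-difference) accelerations of the pose trajectory
    subject to the wrist hitting the target at frame t_success.
    P0: (frames, dofs) initial motion from the controllers;
    fk_wrist: forward kinematics returning the right wrist position of
    one pose. A discretized sketch, not the authors' exact formulation."""
    F, D = P0.shape

    def smoothness(x):
        P = x.reshape(F, D)
        acc = P[2:] - 2 * P[1:-1] + P[:-2]   # second differences
        return np.sum(acc ** 2)

    cons = {"type": "eq",
            "fun": lambda x: fk_wrist(x.reshape(F, D)[t_success]) - p_target}

    # Small problems only; a sparse solver would be needed at scale.
    res = minimize(smoothness, P0.ravel(), method="SLSQP", constraints=[cons])
    return res.x.reshape(F, D)
```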
6.2. Single reaching and continuous reaching by stationary controller

Compared to the previous work in [9], our stationary controller is more powerful in two respects. First, the synthesized motion has more variation. This is because the essential objective of our controller is to find a smooth path through the database toward the target by weighting appropriate character poses, and each pose in the reaching motion is determined by its predecessor. Given different starting poses, the final results therefore differ slightly while still reaching the target, as shown in our supplementary video. Fig. 2(a) and (c) show single reaching processes in which the character both starts and ends with the same standing pose. Second, our controller supports multiple continuous reaches naturally: once a new target is given, the controller predicts the next pose of the virtual character by searching its nearest neighbors, no matter what the current pose is. See Fig. 2(b) and (d) for results generated by the standing and stepping controllers. Note that continuous reaching usually happens while people are standing, tiptoeing or stepping. For tiptoeing, during mocap data collection the actor usually tried 3–6 times in this way before becoming tired, and too much continuous reaching by tiptoeing looks unnatural. Moreover, ordinary people cannot reach multiple targets while jumping, except those with special training. More continuous reaching results can be seen in our video.

6.3. Tentative reaching by transition controller

A stationary controller is in charge of a concrete reaching task; by itself it cannot reflect the tentative process of human reaching. The transition controllers guarantee this inherent process by stitching the stationary controllers together: when the target cannot be reached by one stationary controller, our reaching framework adopts a new controller whose reachable space is larger. In our experiments, the $T_{ss}$ controller usually takes about 40–60 frames to switch from the standing to the stepping controller. In Fig. 3(a), the third character pose is the starting point of the $T_{ss}$ controller: besides the hand stretching out, the right leg is lifted slightly and the right knee is bending, an obvious signal that the transition is proceeding. In Fig. 3(b), the second and fourth poses are the starting points for $T_{st}$ and $T_{tj}$ respectively. $T_{st}$ takes about 30 frames on average to hand over to the next stationary controller, while $T_{tj}$ often takes about 30 frames more, because there is more variation from the ending pose of tiptoeing to the starting pose of jumping than from standing to tiptoeing.

6.4. Gaze simulation model

As mentioned in [30], the eyes play an important role in people's daily life, and eye contact is one of the most common ways for people to communicate with each other [31]. Different from Yeo et al. [32], who deal with the coordinated dynamics of sensing and movement of the human upper body (eyes, head and torso), our approach mainly focuses on the full-body movement of the character. We therefore use an approximate model, different from [30], based on the following assumptions: (1) the eyes always move along with the hand, i.e., in our model the eyes do not look at the target first, as if the target location were already in the character's mind; (2) between two consecutive time steps, the eyes lie on a plane with the two wrist positions and rotate only about the axis perpendicular to this plane; (3) after the target is reached, the eyes move back to the origin along their original trajectory. At each time step, every stationary or transition controller first computes the hand's position and then updates the head orientation. The change of head orientation is obtained by computing the angle between the two straight lines connecting the two wrist positions to the point midway between the eyes. Compared with the approximate model in [30], our gaze simulation model is simpler, yet the result in Fig. 5 is acceptable and better than the result without this mechanism shown in Fig. 4.
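A minimal sketch of this head update, assuming the eye midpoint and the two wrist positions are 3-D NumPy vectors (the function name is ours):

```python
import numpy as np

def head_rotation(eye_mid, wrist_prev, wrist_curr):
    """Gaze update of Section 6.4: the head rotates by the angle between
    the two lines joining the eye midpoint to the wrist positions of two
    consecutive frames, about the axis perpendicular to their plane.
    Returns (angle in radians, unit rotation axis)."""
    u = (wrist_prev - eye_mid)
    v = (wrist_curr - eye_mid)
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    axis = np.cross(u, v)                 # perpendicular to the u-v plane
    n = np.linalg.norm(axis)
    if n < 1e-9:                          # wrist barely moved: no rotation
        return 0.0, np.array([0.0, 0.0, 1.0])
    angle = np.arctan2(n, np.dot(u, v))   # robust angle between u and v
    return angle, axis / n
```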
7. Conclusion and future work

We have developed a controller-based reaching framework to synthesize human reaching motion. Our controllers are divided into stationary and transition controllers, and the framework composes them in sequence to generate realistic reaching motion in real time. Compared with previous methods, our composite controller method has several advantages. A single stationary controller is more powerful: it can generate four kinds of reaching motion (standing, tiptoeing, stepping or jumping), and for the same target our method can synthesize different reaching sequences by starting from different poses. Moreover, our method can generate continuous reaching without starting from a default pose. With the transition controllers, the framework can imitate the tentative reaching process of common people, which has not been presented in previous work and makes the synthesized motion more realistic and natural. We also incorporate a gaze simulation model to synchronize the movement of the eyes with the hand.

Although our method generates more natural-looking reaching than previous methods, some limitations remain. Ours is a data-driven method, so the usual limitations of this kind of approach still exist; however, since our mocap data is obtained using the method of [9], the reachable space is much larger and more accurate, which reduces this limitation efficiently. At present our method uses only one hand to reach given targets; if both hands were involved and two or more targets were given, the problem would become more interesting and complex. To extend this system, a locomotion planner based on an approach similar to ours could easily be incorporated into our framework. Moreover, if the model from [32] were substituted for our current gaze simulation model, our controller could be greatly improved and could handle more complex tasks, such as catching a flying ball.
Fig. 4. Without eye-hand collaboration.
Fig. 3. Stationary controller stitched by transition controller. (a) Transition between standing and stepping controller. (b) Transition between standing and tiptoeing, tiptoeing and jumping controller.
Fig. 5. With eye-hand collaboration.
Acknowledgments

We thank all anonymous reviewers for their constructive comments. This work is supported by the Natural Science Foundation of China (NSFC) (61502433, 61202207, 61472370, 61379079, 61170214) and the Zhejiang Province Natural Science Foundation for Distinguished Young Scientists (LR12F02001).
References

[1] Y. Aydin, M. Nakajima, Database guided computer animation of human grasping using forward and inverse kinematics, Comput. Graph. 23 (1) (1999) 145–154.
[2] D. Raunhardt, R. Boulic, Motion constraint, Vis. Comput. 25 (2009) 509–518.
[3] M. Kallmann, A. Aubel, T. Abaci, D. Thalmann, Planning collision-free reaching motions for interactive object manipulation and grasping, Comput. Graph. Forum 22 (3) (2003) 313–322.
[4] Y. Huang, M. Mahmudi, M. Kallmann, Planning humanlike actions in blending spaces, in: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Piscataway, 2011, pp. 2653–2659.
[5] A.W. Feng, Y. Xu, A. Shapiro, An example-based motion synthesis technique for locomotion and object manipulation, in: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, ACM, New York, 2012, pp. 95–102.
[6] N. Klopcar, J. Lenarcic, Bilateral and unilateral shoulder girdle kinematics during humeral elevation, Clin. Biomech. 21 (2006) 20–26.
[7] N. Klopcar, M. Tomsic, J. Lenarcic, A kinematic model of the shoulder complex to evaluate the arm-reachable workspace, J. Biomech. 40 (1) (2007) 86–91.
[8] S. Lee, E. Sifakis, D. Terzopoulos, Comprehensive biomechanical modeling and simulation of the upper body, ACM Trans. Graph. (TOG) 28 (4) (2009) 99.
[9] P. Lv, M. Zhang, M. Xu, H. Li, P. Zhu, Z. Pan, Biomechanics-based reaching optimization, Vis. Comput. 27 (6–8) (2011) 613–621.
[10] P. Lv, Generating character motion and posture based on low-dimensional feature (Ph.D. thesis), Zhejiang University, 2013.
[11] R. Heck, L. Kovar, M. Gleicher, Splicing upper-body actions with locomotion, Comput. Graph. Forum 25 (3) (2006) 459–466.
[12] R. Heck, M. Gleicher, Parametric motion graphs, in: Proceedings of the 2007 Symposium on Interactive 3D Graphics and Games, I3D '07, ACM, New York, NY, USA, 2007, pp. 129–136.
[13] B. van Basten, A. Egges, Flexible splicing of upper-body motion spaces on locomotion, Comput. Graph. Forum 30 (7).
[14] C. Rose III, P. Sloan, M. Cohen, Artist-directed inverse-kinematics using radial basis function interpolation, Comput. Graph. Forum 20 (3) (2001) 239–250.
[15] T. Mukai, S. Kuriyama, Geostatistical motion interpolation, ACM Trans. Graph. (TOG) 24 (3) (2005) 1062–1070.
[16] K. Grochow, S. Martin, A. Hertzmann, Z. Popović, Style-based inverse kinematics, ACM Trans. Graph. (TOG) 23 (3) (2004) 522–531.
[17] J. Min, Y. Chen, J. Chai, Interactive generation of human animation with deformable motion models, ACM Trans. Graph. (TOG) 29 (1) (2009) 1–12.
[18] S. Levine, J.M. Wang, A. Haraux, Z. Popović, V. Koltun, Continuous character control with low-dimensional embeddings, ACM Trans. Graph. (TOG) 31 (4) (2012) 28.
[19] P. Faloutsos, M. Van De Panne, D. Terzopoulos, Composable controllers for physics-based character animation, in: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, ACM, New York, 2001, pp. 251–260.
[20] K. Sok, M. Kim, J. Lee, Simulating biped behaviors from human motion data, ACM Trans. Graph. (TOG) 26 (3) (2007) 107.
[21] M. da Silva, F. Durand, J. Popović, Linear Bellman combination for control of character animation, ACM Trans. Graph. (TOG) 28 (3) (2009) 82.
[22] U. Muico, J. Popović, Z. Popović, Composite control of physically simulated characters, ACM Trans. Graph. (TOG) 30 (3) (2011) 16.
[23] W. Huang, M. Kapadia, D. Terzopoulos, Full-body hybrid motor control for reaching, in: 3rd International Conference on Motion in Games, MIG 2010, 2010, pp. 36–47.
[24] J. Chai, J. Hodgins, Performance animation from low-dimensional control signals, ACM Trans. Graph. (TOG) 24 (3) (2005) 686–696.
[25] Y. Zhang, Y. Yang, G. Ruan, Performance analysis of gradient neural network exploited for online time-varying quadratic minimization and equality-constrained quadratic programming, Neurocomputing 74 (10) (2011) 1710–1719.
[26] M. Wang, X. Yan, H. Shi, Spatiotemporal prediction for nonlinear parabolic distributed parameter system using an artificial neural network trained by group search optimization, Neurocomputing 113 (2013) 234–240.
[27] A. Nazemi, N. Tahmasbi, A high performance neural network model for solving chance constrained optimization problems, Neurocomputing 121 (2013) 540–550.
[28] K. Li, S. Kwong, A general framework for evolutionary multiobjective optimization via manifold learning, Neurocomputing 146 (1) (2014) 65074.
[29] B. Niu, J. Wang, H. Wang, Bacterial-inspired algorithms for solving constrained optimization problems, Neurocomputing 148 (2015) 54–62.
[30] K. Yamane, J. Kuffner, J. Hodgins, Synthesizing animations of human manipulation tasks, ACM Trans. Graph. (TOG) 23 (3) (2004) 532–539.
[31] S. Lee, J. Badler, N. Badler, Eyes alive, ACM Trans. Graph. (TOG) 21 (3) (2002) 637–644.
[32] S.H. Yeo, M. Lesmana, D.R. Neog, D.K. Pai, Eyecatch: simulating visuomotor coordination for object interception, ACM Trans. Graph. (TOG) 31 (4) (2012) 42:1–42:10.
Pei Lv is an assistant professor in the School of Information Engineering of Zhengzhou University, China. His research interests include character animation, crowd simulation and tracking. He received his Ph.D. in 2013 from the State Key Lab of CAD & CG, Zhejiang University, China. Before that, he received his B.S. degree in computer science from Zhengzhou University, Zhengzhou, China, in 2008.
Mingliang Xu is an associate professor in the School of Information Engineering of Zhengzhou University, China. His research interests include computer graphics and computer vision. Xu got his Ph.D. degree in computer science and technology from the State Key Lab of CAD & CG at Zhejiang University.
Bailin Yang is a professor in the School of Computer Science & Information Engineering, Zhejiang Gongshang University, China. His research interests include computer graphics, virtual reality and augmented reality. Yang got his Ph.D. degree in computer science and technology from the State Key Lab of CAD & CG at Zhejiang University in 2007.
Mingyuan Li received the B.Sc. degree from Zhengzhou University, China, in 2008. He is currently a Ph.D. candidate in the School of Information Engineering at Zhengzhou University. His current research interests include 3-D printing and model simulation.
Bing Zhou received the B.S. and M.S. degrees from Xi'an Jiao Tong University in 1986 and 1989, respectively, and the Ph.D. degree from Beihang University in 2003, all in computer science. He is currently a professor at the School of Information Engineering, Zhengzhou University, Henan, China. His research interests cover video processing and understanding, surveillance, computer vision, and multimedia applications.