The first initial–boundary value problem for Hessian equations of parabolic type on Riemannian manifolds

Nonlinear Analysis 143 (2016) 45–63

Gejun Bao, Weisong Dong, Heming Jiao*
Department of Mathematics, Harbin Institute of Technology, Harbin, 150001, China

* Corresponding author. E-mail addresses: [email protected] (G. Bao), [email protected] (W. Dong), [email protected] (H. Jiao).

http://dx.doi.org/10.1016/j.na.2016.05.005

Article history: Received 13 September 2015; Accepted 9 May 2016; Communicated by Enzo Mitidieri.

MSC: 35B45; 35R01; 35K20; 35K96

Keywords: Riemannian manifolds; Fully nonlinear parabolic equations; First initial–boundary value problem; a priori estimates

Abstract. In this paper, we are concerned with the first initial–boundary value problem for a class of fully nonlinear parabolic equations on Riemannian manifolds. As usual, the main task is to establish the a priori $C^2$ estimates. Based on these estimates, the existence of classical solutions is proved under conditions which are nearly optimal.

1. Introduction

In this paper, we study Hessian equations of parabolic type of the form
\[
  f(\lambda(\nabla^2 u + \chi), -u_t) = \psi(x, t) \quad \text{in } M_T = M \times (0, T] \subset \bar M \times \mathbb{R},
  \tag{1.1}
\]
subject to the boundary condition
\[
  u = \varphi \quad \text{on } \mathcal{P}M_T,
  \tag{1.2}
\]
where $(M, g)$ is a compact Riemannian manifold of dimension $n \geq 2$ with smooth boundary $\partial M$ and $\bar M := M \cup \partial M$, $\mathcal{P}M_T = BM_T \cup SM_T$ is the parabolic boundary of $M_T$ with $BM_T = M \times \{0\}$ and $SM_T = \partial M \times [0, T]$, $f$ is a smooth symmetric function of $n + 1$ variables, $\nabla^2 u$ denotes the Hessian of $u(x, t)$ with respect to $x \in M$, $u_t = \partial u/\partial t$ is the derivative of $u(x, t)$ with respect to $t \in [0, T]$, $\chi$ is a smooth $(0, 2)$ tensor on $\bar M$, and $\lambda(\nabla^2 u + \chi) = (\lambda_1, \ldots, \lambda_n)$ denotes the eigenvalues of $\nabla^2 u + \chi$ with respect to the metric $g$.
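Since the eigenvalues are taken with respect to the metric, in local coordinates they solve the generalized eigenvalue problem $\det(A - \lambda g) = 0$, where $A$ is the coordinate matrix of $\nabla^2 u + \chi$. The following minimal numerical sketch is ours and not part of the paper; the matrix entries are placeholders chosen only for illustration.

import numpy as np
from scipy.linalg import eigh

# Coordinate matrices at a point: A represents (Hessian of u + chi), G the metric g.
# These entries are illustrative placeholders, not data from the paper.
A = np.array([[2.0, 0.3],
              [0.3, 1.5]])
G = np.array([[1.2, 0.1],
              [0.1, 1.0]])   # symmetric positive definite

# Eigenvalues of A with respect to G, i.e. solutions of det(A - lam * G) = 0.
lam = eigh(A, G, eigvals_only=True)
print(lam)   # these are the numbers lambda_1, ..., lambda_n entering f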


We assume $f$ to be defined in an open convex cone $\Gamma \subset \mathbb{R}^{n+1}$ with vertex at the origin satisfying
\[
  \Gamma_{n+1} \equiv \{\lambda \in \mathbb{R}^{n+1} : \text{each component } \lambda_i > 0,\ 1 \leq i \leq n+1\} \subseteq \Gamma \neq \mathbb{R}^{n+1},
\]
and, furthermore, $\Gamma$ is invariant under interchange of any two $\lambda_i$, i.e. it is symmetric. In this work, $f$ is assumed to satisfy the following structural conditions as in [3] (see also [9]):
\[
  f_i \equiv \frac{\partial f}{\partial \lambda_i} > 0 \quad \text{in } \Gamma, \ 1 \leq i \leq n+1,
  \tag{1.3}
\]
\[
  f \text{ is concave in } \Gamma,
  \tag{1.4}
\]
and
\[
  \delta_{\psi, f} \equiv \inf_{M_T} \psi - \sup_{\partial\Gamma} f > 0,
  \tag{1.5}
\]
where
\[
  \sup_{\partial\Gamma} f \equiv \sup_{\lambda_0 \in \partial\Gamma} \limsup_{\lambda \to \lambda_0} f(\lambda).
\]
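As a concrete illustration (our remark, not part of the original text): for $f = \sigma_k^{1/k}$ defined on the Gårding cone introduced below, $\sigma_k$ vanishes on the boundary of the cone, so
\[
  \sup_{\partial\Gamma_k} \sigma_k^{1/k}
  = \sup_{\lambda_0 \in \partial\Gamma_k} \limsup_{\lambda \to \lambda_0} \sigma_k^{1/k}(\lambda) = 0,
  \qquad\text{and (1.5) reduces to } \inf_{M_T} \psi > 0 .
\]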

In this work we are interested in the existence of classical solutions to (1.1)–(1.2). Recent research on Hessian equations of elliptic type (see [9,7]),
\[
  f(\lambda(\nabla^2 u + \chi)) = \psi(x),
  \tag{1.6}
\]
provides some ideas to deal with our Eq. (1.1) under nearly minimal restrictions on $f$. The most typical examples of $f$ satisfying (1.3)–(1.5) are $f = \sigma_k^{1/k}$ and $f = (\sigma_k/\sigma_l)^{1/(k-l)}$, $1 \leq l < k \leq n+1$, defined in the Gårding cone $\Gamma_k = \{\lambda \in \mathbb{R}^{n+1} : \sigma_j(\lambda) > 0,\ j = 1, \ldots, k\}$, where the $\sigma_k$ are the elementary symmetric functions
\[
  \sigma_k(\lambda) = \sum_{i_1 < \cdots < i_k} \lambda_{i_1} \cdots \lambda_{i_k}, \quad k = 1, \ldots, n+1.
\]
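A small computational sketch (ours; the helper names and sample values are made up purely for illustration) of evaluating $\sigma_k$ and testing membership in $\Gamma_k$:

from itertools import combinations
from math import prod

def sigma(k, lam):
    """Elementary symmetric function sigma_k of the tuple lam."""
    return sum(prod(c) for c in combinations(lam, k))

def in_garding_cone(k, lam):
    """Check lam in Gamma_k, i.e. sigma_j(lam) > 0 for j = 1, ..., k."""
    return all(sigma(j, lam) > 0 for j in range(1, k + 1))

lam = (3.0, 1.0, -0.2)             # sample point in R^3 (so n + 1 = 3)
print(in_garding_cone(2, lam))      # True: sigma_1 and sigma_2 are positive
print(sigma(2, lam) ** 0.5)         # the value of f = sigma_2^{1/2} at lam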
When $f = \sigma_{n+1}^{1/(n+1)}$, Eq. (1.1) can be written as the parabolic Monge–Ampère equation
\[
  -u_t \det(\nabla^2 u + \chi) = \psi^{n+1},
  \tag{1.7}
\]
which was introduced by Krylov in [19] when $\chi = 0$ in Euclidean space. Instead of the determinant in (1.7), Ren [25] studied equations of the form
\[
  -u_t f(\lambda(\nabla^2 u)) = \psi(x, t).
  \tag{1.8}
\]
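To make the reduction to (1.7) explicit (a short verification we add here; it is not in the original text): with $f = \sigma_{n+1}^{1/(n+1)}$ applied to the $n+1$ numbers $(\lambda(\nabla^2 u + \chi), -u_t)$, Eq. (1.1) reads
\[
  \Big( (-u_t) \prod_{i=1}^{n} \lambda_i(\nabla^2 u + \chi) \Big)^{1/(n+1)} = \psi,
  \qquad\text{i.e.}\qquad
  -u_t \, \det(\nabla^2 u + \chi) = \psi^{\,n+1},
\]
where the determinant is taken with respect to the metric $g$, which is exactly (1.7).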

Our interest in studying (1.1) comes from its natural connection to the deformation of surfaces by curvature functions. For example, Eq. (1.7) plays a key role in the study of contraction of surfaces by Gauss–Kronecker curvature (see Firey [5] and Tso [28]). For the study of more general curvature flows, the reader is referred to [1,2,14,24] and their references. Eq. (1.7) is also relevant to a maximum principle for parabolic equations (see Tso [29]). In [23], Lieberman studied the first initial–boundary value problem for Eq. (1.1) when $\chi \equiv 0$ and $\psi$ may depend on $u$ and $\nabla u$, in a bounded domain $\Omega \subset \mathbb{R}^{n+1}$, under various conditions. Jiao and Sui [18] considered parabolic Hessian equations of the form
\[
  f(\lambda(\nabla^2 u + \chi)) - u_t = \psi(x, t)
  \tag{1.9}
\]
on Riemannian manifolds under an additional condition which was introduced in [10]:
\[
  T_\lambda \cap \partial\Gamma^\sigma \text{ is a nonempty compact set}, \quad \forall\, \lambda \in \Gamma \text{ and } \sup_{\partial\Gamma} f < \sigma < f(\lambda),
  \tag{1.10}
\]
where $\partial\Gamma^\sigma = \{\lambda \in \Gamma : f(\lambda) = \sigma\}$ is the boundary of $\Gamma^\sigma = \{\lambda \in \Gamma : f(\lambda) > \sigma\}$ and $T_\lambda$ denotes the tangent plane to $\partial\Gamma^{f(\lambda)}$ at $\lambda$, for $\sigma > \sup_{\partial\Gamma} f$ and $\lambda \in \Gamma$. Eq. (1.9) in domains of $\mathbb{R}^n$ was also studied by Ivochkina


and Ladyzhenskaya in [15] (for the Monge–Ampère case) and [16]. A generalization of (1.9) was considered in [17]. Guan, Shi and Sui [12] extended the work of [18] using the idea of [7]. As is well known, works on elliptic Hessian equations usually provide useful techniques for dealing with the parabolic version; the reader is referred to [21,30,8,9,7,10,11] and their references for examples. The motivation for assuming that $f$ is defined in the cone $\Gamma$ is that many equations are elliptic (or parabolic) with respect to solutions in a cone, but not in general (see the examples above). We call a function $u \in C^2(M_T)$ admissible if $(\lambda(\nabla^2 u + \chi), -u_t) \in \Gamma$ in $M_T$, where $C^k(M_T)$ denotes the space of functions defined on $M_T$ which are $k$-times continuously differentiable with respect to $x \in M$ and $[k/2]$-times continuously differentiable with respect to $t \in (0, T]$, $[k/2]$ being the largest integer not greater than $k/2$. We see that (1.1) is parabolic for admissible solutions (see [3]). We first recall the following notation:
\[
  |u|_{C^k(M_T)} = \sum_{|\beta| + 2r \leq k} \sup_{M_T} |\nabla^\beta D_t^r u|,
\]
\[
  |u|_{C^{k+\alpha}(M_T)} = |u|_{C^k(M_T)}
  + \sum_{|\beta| + 2r = k} \;
    \sup_{\substack{(x,s),(y,t) \in M_T \\ (x,s) \neq (y,t)}}
    \frac{|\nabla^\beta D_t^r u(x, s) - \nabla^\beta D_t^r u(y, t)|}{\big(|x - y| + |s - t|^{1/2}\big)^{\alpha}}
\]

and $C^{k+\alpha}(M_T)$ denotes the subspace of $C^k(M_T)$ defined by $C^{k+\alpha}(M_T) := \{u \in C^k(M_T) : |u|_{C^{k+\alpha}(M_T)} < \infty\}$. Our main result is stated in the following theorem.

Theorem 1.1. Suppose that $\psi \in C^\infty(M_T)$, $\varphi \in C^\infty(\mathcal{P}M_T)$ for $0 < T < \infty$, and there exists a function $\Theta \in C^2(BM_T)$ such that $\Theta = -\varphi_t$ on $\partial M \times \{0\}$ and
\[
  (\lambda(\nabla^2 \varphi(x, 0) + \chi(x)), \Theta(x)) \in \Gamma \quad \text{for all } x \in M,
  \tag{1.11}
\]
and that
\[
  f(\lambda(\nabla^2 \varphi(x, 0) + \chi(x)), -\varphi_t(x, 0)) = \psi(x, 0) \quad \text{for all } x \in \partial M.
  \tag{1.12}
\]
In addition to (1.3)–(1.5), assume that
\[
  f_j(\lambda) \geq \nu_0 \Big(1 + \sum_{i=1}^{n+1} f_i(\lambda)\Big) \quad \text{for any } \lambda \in \Gamma \text{ with } \lambda_j < 0
  \tag{1.13}
\]
for some positive constant $\nu_0$,
\[
  \sum_{i=1}^{n+1} f_i \lambda_i \geq -K_0 \Big(1 + \sum_{i=1}^{n+1} f_i\Big), \quad \forall\, \lambda \in \Gamma
  \tag{1.14}
\]
for some constant $K_0 \geq 0$, and that there exists an admissible subsolution $\underline{u} \in C^2(M_T)$ satisfying
\[
  \begin{cases}
    f(\lambda(\nabla^2 \underline{u} + \chi), -\underline{u}_t) \geq \psi(x, t) & \text{in } M_T, \\
    \underline{u} = \varphi & \text{on } SM_T, \\
    \underline{u} \leq \varphi & \text{on } BM_T.
  \end{cases}
  \tag{1.15}
\]

Then there exists a unique admissible solution $u \in C^\infty(M_T)$ of (1.1)–(1.2).

Remark 1.2. Conditions (1.13) and (1.14) are only used to derive the gradient estimates; such conditions are commonly used, see [23,13,8,22,26,30] for examples. It would be an interesting problem to establish the gradient estimates without (1.13) and (1.14).


If $f > \sup_{\partial\Gamma} f > -\infty$ in $\Gamma$, it is easy to show that
\[
  \sum_{i=1}^{n+1} f_i \lambda_i \geq \sup_{\partial\Gamma} f \geq -K_0, \quad \forall\, \lambda \in \Gamma
\]
by (1.3) and (1.4).

Remark 1.3. As in [7], the existence of $\underline{u}$ is useful for constructing barrier functions which are crucial to our estimates. We can prove the short time existence as in Theorem 15.9 of [23]. So, without loss of generality, we may assume that $\varphi$ is defined on $\bar M \times [0, t_0]$ for some small constant $t_0 > 0$ and
\[
  f(\lambda(\nabla^2 \varphi(x, 0) + \chi(x)), -\varphi_t(x, 0)) = \psi(x, 0) \quad \text{for all } x \in \bar M.
  \tag{1.16}
\]

As usual, the main part of this paper is to derive the a priori $C^2$ estimates. By (1.3) and (1.5), we see that (1.1) is uniformly parabolic once the $C^2$ estimates are established. The $C^{2,\alpha}$ estimates can then be obtained by applying the Evans–Krylov theorem (see [4,20]). Finally, Theorem 1.1 can be proved as Theorem 15.9 of [23]. The rest of this paper is devoted to the a priori $C^2$ estimates for admissible solutions of (1.1)–(1.2). In Section 2, we introduce some notation and one useful lemma. $C^1$ estimates are derived in Section 3. An a priori bound for $|u_t|$ is obtained in Section 4. In Sections 5 and 6 we deal with the global and boundary estimates for second order derivatives, respectively.

2. Preliminaries

Let $F$ be the function defined by $F(A, \tau) = f(\lambda(A), \tau)$ for $A \in \mathcal{S}^n$, $\tau \in \mathbb{R}$ with $(\lambda(A), \tau) \in \Gamma$, where $\mathcal{S}^n$ is the set of $n \times n$ symmetric matrices. It was shown in [3] that (1.4) implies the concavity of $F$. Throughout this paper $\nabla$ denotes the Levi-Civita connection of $(M, g)$. For simplicity we shall use the notations $U = \nabla^2 u + \chi$, $\underline{U} = \nabla^2 \underline{u} + \chi$ and, under an orthonormal local frame $e_1, \ldots, e_n$,
\[
  U_{ij} \equiv U(e_i, e_j) = \nabla_{ij} u + \chi_{ij}, \qquad
  \underline{U}_{ij} \equiv \underline{U}(e_i, e_j) = \nabla_{ij} \underline{u} + \chi_{ij}.
\]
Thus, (1.1) can be written locally in the form
\[
  F(U, -u_t) = f(\lambda(\{U_{ij}\}), -u_t) = \psi.
  \tag{2.1}
\]
Let
\[
  F^{ij} = \frac{\partial F}{\partial A_{ij}}(U, -u_t), \qquad
  F^{\tau} = \frac{\partial F}{\partial \tau}(U, -u_t),
\]
\[
  F^{ij,kl} = \frac{\partial^2 F}{\partial A_{ij} \partial A_{kl}}(U, -u_t), \qquad
  F^{ij,\tau} = \frac{\partial^2 F}{\partial A_{ij} \partial \tau}(U, -u_t), \qquad
  F^{\tau\tau} = \frac{\partial^2 F}{\partial \tau^2}(U, -u_t).
\]
By (1.3) we see that $F^\tau > 0$ and $\{F^{ij}\}$ is positive definite. We shall also denote the eigenvalues of $\{F^{ij}\}$ by $f_1, \ldots, f_n$ when there is no possible confusion. We note that $\{U_{ij}\}$ and $\{F^{ij}\}$ can be diagonalized simultaneously and that
\[
  \sum_{ij} F^{ij} U_{ij} = \sum_i f_i \tilde\lambda_i, \qquad
  \sum_{ijk} F^{ij} U_{ik} U_{kj} = \sum_i f_i \tilde\lambda_i^2,
\]
where $\lambda(\{U_{ij}\}) = (\tilde\lambda_1, \ldots, \tilde\lambda_n)$. We write
\[
  \mu(x, t) = (\lambda(\underline{U}(x, t)), -\underline{u}_t(x, t)), \qquad
  \lambda(x, t) = (\lambda(U(x, t)), -u_t(x, t)),
\]


and $\nu_\lambda \equiv Df(\lambda)/|Df(\lambda)|$ is the unit normal vector to the level hypersurface $\partial\Gamma^{f(\lambda)}$ for $\lambda \in \Gamma$. Since $K \equiv \{\mu(x, t) : (x, t) \in M_T\}$ is a compact subset of $\Gamma$, there exists a uniform constant $\beta \in \big(0, \tfrac{1}{2\sqrt{n+1}}\big)$ such that
\[
  \nu_{\mu(x,t)} - 2\beta \mathbf{1} \in \Gamma_{n+1}, \quad \forall\, (x, t) \in M_T,
  \tag{2.2}
\]
where $\mathbf{1} = (1, \ldots, 1) \in \mathbb{R}^{n+1}$ (see [7]). We need the following lemma, which is Lemma 2.1 in [7]. (A more general version can be found in [12].)

Lemma 2.1. Suppose that $|\nu_\mu - \nu_\lambda| \geq \beta$. Then there exists a uniform constant $\varepsilon > 0$ such that
\[
  \sum_{i=1}^{n+1} f_i(\lambda)(\mu_i - \lambda_i) \geq \varepsilon \Big(1 + \sum_{i=1}^{n+1} f_i(\lambda)\Big).
  \tag{2.3}
\]
Define the linear operator $L$ locally by
\[
  L v = \sum_{ij} F^{ij} \nabla_{ij} v - F^\tau v_t, \quad \text{for } v \in C^2(M_T).
\]
By Lemma 2.1 and Lemma 6.2 of [3] it is easy to derive that, when $|\nu_{\mu(x,t)} - \nu_{\lambda(x,t)}| \geq \beta$,
\[
  L(\underline{u} - u) \geq \varepsilon \Big(1 + \sum F^{ii} + F^\tau\Big).
  \tag{2.4}
\]
If $|\nu_\mu - \nu_\lambda| < \beta$, we have $\nu_\lambda - \beta \mathbf{1} \in \Gamma_{n+1}$. It follows that
\[
  f_i \geq \frac{\beta}{\sqrt{n+1}} \sum_{j=1}^{n+1} f_j, \quad \forall\, 1 \leq i \leq n+1.
  \tag{2.5}
\]
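For the reader's convenience we indicate why (2.5) follows (a one-line argument added here, not in the original): $\nu_\lambda - \beta\mathbf{1} \in \Gamma_{n+1}$ means $f_i/|Df(\lambda)| > \beta$ for every $i$, while by the Cauchy–Schwarz inequality $|Df(\lambda)| \geq \frac{1}{\sqrt{n+1}} \sum_j f_j$, so
\[
  f_i \;>\; \beta\,|Df(\lambda)| \;\geq\; \frac{\beta}{\sqrt{n+1}} \sum_{j=1}^{n+1} f_j,
  \qquad 1 \leq i \leq n+1.
\]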

3. The $C^1$ estimates

First we note that $\Gamma \subset \{\lambda \in \mathbb{R}^{n+1} : \sum_{i=1}^{n+1} \lambda_i > 0\}$. Indeed, if there exists $\lambda \in \Gamma$ such that $\sum_{i=1}^{n+1} \lambda_i < 0$, we can conclude that $\Gamma = \mathbb{R}^{n+1}$ since it is a symmetric and convex cone, which contradicts the fact that $\Gamma \neq \mathbb{R}^{n+1}$. Thus, $\Gamma \subset \{\lambda \in \mathbb{R}^{n+1} : \sum_{i=1}^{n+1} \lambda_i \geq 0\}$ and, by the openness of $\Gamma$, we have $\Gamma \subset \{\lambda \in \mathbb{R}^{n+1} : \sum_{i=1}^{n+1} \lambda_i > 0\}$. It follows that $u$ is a subsolution of
\[
  \begin{cases}
    \triangle h - h_t + \operatorname{tr}(\chi) = 0 & \text{in } M_T, \\
    h = \varphi & \text{on } \mathcal{P}M_T,
  \end{cases}
  \tag{3.1}
\]
since $u$ is admissible. Let $h$ be the solution of (3.1). It follows from the maximum principle that $\underline{u} \leq u \leq h$ on $M_T$. Therefore, we have
\[
  \sup_{M_T} |u| + \sup_{\mathcal{P}M_T} |\nabla u| \leq C_0,
  \tag{3.2}
\]
where $C_0$ depends on $|\underline{u}|_{C^1(M_T)}$ and $|h|_{C^1(M_T)}$. For the global gradient estimates, we can prove the following maximum principle.

Theorem 3.1. Suppose that (1.3), (1.4), (1.13) and (1.14) hold. Let $u \in C^3(M_T)$ be an admissible solution of (1.1) in $M_T$. Then
\[
  \sup_{M_T} |\nabla u| \leq C_1 \Big(1 + \sup_{\mathcal{P}M_T} |\nabla u|\Big),
  \tag{3.3}
\]
where $C_1$ depends on $|\psi|_{C^1(M_T)}$, $|u|_{C^0(M_T)}$ and other known data.


Proof. Set
\[
  W = \sup_{(x,t) \in M_T} w e^{\phi},
\]
where $w = \frac{|\nabla u|^2}{2}$ and $\phi$ is a function to be determined. It suffices to estimate $W$, and we may assume that $W$ is achieved at $(x_0, t_0) \in M_T - \mathcal{P}M_T$. Choose a smooth orthonormal local frame $e_1, \ldots, e_n$ about $x_0$ such that $\nabla_{e_i} e_j = 0$ at $x_0$ and $U(x_0, t_0)$ is diagonal. We see that the function $\log w + \phi$ attains its maximum at $(x_0, t_0)$. Therefore, at $(x_0, t_0)$, we have
\[
  \frac{\nabla_i w}{w} + \nabla_i \phi = 0, \quad \text{for each } i = 1, \ldots, n,
  \tag{3.4}
\]
\[
  \frac{w_t}{w} + \phi_t \geq 0,
  \tag{3.5}
\]
and
\[
  \frac{\nabla_{ii} w}{w} - \Big(\frac{\nabla_i w}{w}\Big)^2 + \nabla_{ii} \phi \leq 0.
  \tag{3.6}
\]
Differentiating Eq. (1.1), we get
\[
  \sum_{i=1}^{n} F^{ii} \nabla_k U_{ii} - F^\tau \nabla_k u_t = \nabla_k \psi \quad \text{for } k = 1, \ldots, n,
  \tag{3.7}
\]
and
\[
  \sum_{i=1}^{n} F^{ii} (U_{ii})_t - F^\tau u_{tt} = \psi_t.
  \tag{3.8}
\]

Note that
\[
  \nabla_i w = \sum_k \nabla_k u \nabla_{ik} u, \qquad
  w_t = \sum_k \nabla_k u (\nabla_k u)_t, \qquad
  \nabla_{ii} w = \sum_k \big((\nabla_{ik} u)^2 + \nabla_k u \nabla_{iik} u\big),
  \tag{3.9}
\]
and that
\[
  \nabla_{ijk} u - \nabla_{jik} u = \sum_l R^l_{kij} \nabla_l u,
  \tag{3.10}
\]
where $R^i_{jkl} = g^{im} R_{mjkl}$ and $R_{mjkl} = g(R(e_k, e_l) e_j, e_m)$ are the coefficients of the curvature tensor. We have, by (3.5), (3.7), (3.9) and (3.10),
\[
  \begin{aligned}
  \sum_i F^{ii} \nabla_{ii} w
  &\geq \sum_{i,k} \nabla_k u F^{ii} \nabla_{iik} u
   = \sum_{i,k} \nabla_k u F^{ii} \Big(\nabla_{kii} u - \sum_l R^l_{iik} \nabla_l u\Big) \\
  &\geq \sum_{i,k} \nabla_k u F^{ii} \nabla_k U_{ii} - C |\nabla u|^2 \sum_i F^{ii} \\
  &\geq -C|\nabla u| - C|\nabla u|^2 \sum_i F^{ii} + \sum_k F^\tau \nabla_k u \nabla_k u_t \\
  &\geq -C|\nabla u| - C|\nabla u|^2 \sum_i F^{ii} - w F^\tau \phi_t,
  \end{aligned}
  \tag{3.11}
\]
provided $|\nabla u|$ is sufficiently large. Combining (3.4), (3.6) and (3.11), we obtain
\[
  0 \geq -\frac{C}{|\nabla u|} - C \sum F^{ii} - \sum F^{ii} (\nabla_i \phi)^2 + L\phi.
  \tag{3.12}
\]



Let $\phi = \delta v^2$, where $v = u + \sup_{M_T} |u| + 1$ and $\delta$ is a small positive constant to be chosen. Thus, choosing $\delta$ sufficiently small such that $2\delta - 4\delta^2 v^2 \geq c_0 > 0$ for some uniform constant $c_0$, and by (1.14), we see
\[
  L\phi - \sum F^{ii} (\nabla_i \phi)^2
  = 2\delta v L u + (2\delta - 4\delta^2 v^2) \sum F^{ii} (\nabla_i u)^2
  \geq c_0 \sum F^{ii} (\nabla_i u)^2 - C\delta \Big(1 + \sum F^{ii} + F^\tau\Big).
  \tag{3.13}
\]
It follows from (3.12) and (3.13) that
\[
  c_0 \sum F^{ii} (\nabla_i u)^2 \leq C \Big(1 + \sum F^{ii} + F^\tau\Big),
  \tag{3.14}
\]
provided $|\nabla u|$ is sufficiently large. We may assume $|\nabla_1 u(x_0, t_0)| = \max_{1 \leq i \leq n} |\nabla_i u(x_0, t_0)|$. It follows that $|\nabla u(x_0, t_0)| \leq n |\nabla_1 u(x_0, t_0)|$. Recalling that $\{U_{ij}\}$ is diagonal, by (3.4), we have
\[
  U_{11} = -2\delta v w + \frac{\sum_k \nabla_k u \, \chi_{1k}}{\nabla_1 u} < 0,
\]
provided $w > \frac{n}{2\delta} \max_{\bar M} \sum_k |\chi_{1k}|$. Then we can derive from (1.13) that
\[
  F^{11} \geq \nu_0 \Big(1 + \sum F^{ii} + F^\tau\Big).
\]
Therefore, we obtain a bound $|\nabla u(x_0, t_0)| \leq Cn/\sqrt{c_0 \nu_0}$ by (3.14), so that (3.3) is proved. $\Box$

Remark 3.2. We see that in the proof of Theorem 3.1 we do not need the existence of $\underline{u}$. By (3.2) and (3.3), the $C^1$ estimates are established.

4. The estimates for $|u_t|$

In this section, we derive the estimates for $|u_t|$.

Theorem 4.1. Suppose that (1.3), (1.4) and (1.15) hold. Let $u \in C^3(M_T)$ be an admissible solution of (1.1) in $M_T$. Then there exists a positive constant $C_2$ depending on $|u|_{C^1(M_T)}$, $|\underline{u}|_{C^2(M_T)}$, $|\psi|_{C^2(M_T)}$ and other known data such that
\[
  \sup_{M_T} |u_t| \leq C_2 \Big(1 + \sup_{\mathcal{P}M_T} |u_t|\Big).
  \tag{4.1}
\]
Proof. We first show that
\[
  \sup_{M_T} (-u_t) \leq C \Big(1 + \sup_{\mathcal{P}M_T} |u_t|\Big),
  \tag{4.2}
\]
for which we set
\[
  W = \sup_{M_T} (-u_t) e^{\phi},
\]
where $\phi$ is a function to be chosen. We may assume that $W$ is attained at $(x_0, t_0) \in M_T - \mathcal{P}M_T$. As in the proof of Theorem 3.1, we choose an orthonormal local frame $e_1, \ldots, e_n$ about $x_0$ such that $\nabla_{e_i} e_j = 0$ and $\{U_{ij}(x_0, t_0)\}$ is diagonal. We may assume $-u_t(x_0, t_0) > 0$. At $(x_0, t_0)$, where the function $\log(-u_t) + \phi$ achieves its maximum, we have
\[
  \frac{\nabla_i u_t}{u_t} + \nabla_i \phi = 0, \quad \text{for each } i = 1, \ldots, n,
  \tag{4.3}
\]
\[
  \frac{u_{tt}}{u_t} + \phi_t \geq 0,
  \tag{4.4}
\]



and
\[
  0 \geq \sum_i F^{ii} \Big(\frac{\nabla_{ii} u_t}{u_t} - \Big(\frac{\nabla_i u_t}{u_t}\Big)^2 + \nabla_{ii} \phi\Big).
  \tag{4.5}
\]
Combining (4.3)–(4.5), we find
\[
  0 \geq \frac{1}{u_t} \Big(\sum F^{ii} \nabla_{ii} u_t - F^\tau u_{tt}\Big) - \sum F^{ii} (\nabla_i \phi)^2 + L\phi.
  \tag{4.6}
\]
By (3.8) and (4.6),
\[
  L\phi \leq -\frac{\psi_t}{u_t} + \sum F^{ii} (\nabla_i \phi)^2.
  \tag{4.7}
\]
Fix a positive constant $\alpha \in (0, 1)$ and let
\[
  \phi = \frac{\delta^{1+\alpha}}{2} |\nabla u|^2 + \delta u + b(\underline{u} - u),
\]
where $\delta \ll b \ll 1$ are positive constants to be determined. By straightforward calculations, we have
\[
  \begin{aligned}
  \nabla_i \phi &= \delta^{1+\alpha} \sum_k \nabla_k u \nabla_{ik} u + \delta \nabla_i u + b \nabla_i (\underline{u} - u), \\
  \phi_t &= \delta^{1+\alpha} \sum_k \nabla_k u (\nabla_k u)_t + \delta u_t + b (\underline{u} - u)_t, \\
  \nabla_{ii} \phi &= \delta^{1+\alpha} \sum_k (\nabla_{ik} u)^2 + \delta^{1+\alpha} \sum_k \nabla_k u \nabla_{iik} u + \delta \nabla_{ii} u + b \nabla_{ii} (\underline{u} - u).
  \end{aligned}
\]
It follows that, in view of (3.7) and (3.10),
\[
  \begin{aligned}
  L\phi &\geq \delta^{1+\alpha} \sum_k \nabla_k u \Big(\sum_i F^{ii} \nabla_{iik} u - F^\tau (\nabla_k u)_t\Big)
          + \frac{\delta^{1+\alpha}}{2} \sum F^{ii} U_{ii}^2 \\
        &\quad + \delta \sum F^{ii} \nabla_{ii} u - \delta F^\tau u_t - C\delta^{1+\alpha} \sum F^{ii} + bL(\underline{u} - u) \\
        &\geq -C\delta^{1+\alpha} \Big(1 + \sum F^{ii}\Big) + \frac{\delta^{1+\alpha}}{2} \sum F^{ii} U_{ii}^2 + \delta L u + bL(\underline{u} - u).
  \end{aligned}
  \tag{4.8}
\]
Next,
\[
  (\nabla_i \phi)^2 \leq C\delta^{2(1+\alpha)} U_{ii}^2 + Cb^2,
  \tag{4.9}
\]
since $b \gg \delta$. Thus, we can derive from (4.7)–(4.9) that
\[
  bL(\underline{u} - u) + \frac{\delta^{1+\alpha}}{4} \sum F^{ii} U_{ii}^2 + \delta L u
  \leq -\frac{C}{u_t} + C\delta^{1+\alpha} \Big(1 + \sum F^{ii}\Big) + Cb^2 \sum F^{ii},
  \tag{4.10}
\]
provided $\delta$ is sufficiently small. Now we use the idea of [7] to consider two cases: (i) $|\nu_{\mu_0} - \nu_{\lambda_0}| \geq \beta$ and (ii) $|\nu_{\mu_0} - \nu_{\lambda_0}| < \beta$, where $\mu_0 = \mu(x_0, t_0)$ and $\lambda_0 = \lambda(x_0, t_0)$.

In case (i), by Lemma 2.1, we see that (2.4) holds. Since $-u_t(x_0, t_0) > 0$, we have
\[
  \begin{aligned}
  \delta L u &= \delta \Big(\sum F^{ii} U_{ii} - \sum F^{ii} \chi_{ii} - F^\tau u_t\Big)
  \geq \delta \sum F^{ii} U_{ii} - C\delta \sum F^{ii} \\
  &\geq -\delta^{1-\alpha} \sum F^{ii} - \frac{\delta^{1+\alpha}}{4} \sum F^{ii} U_{ii}^2 - C\delta \sum F^{ii}.
  \end{aligned}
  \tag{4.11}
\]
Combining (4.11) and (4.10), we have
\[
  bL(\underline{u} - u) \leq -\frac{C}{u_t} + C\delta^{1-\alpha} \Big(1 + \sum F^{ii}\Big) + Cb^2 \sum F^{ii}.
  \tag{4.12}
\]
By (2.4), we can obtain a bound $-u_t(x_0, t_0) \leq \frac{C\delta^{1-\alpha}}{b\varepsilon}$ provided $\delta \ll b \ll 1$.



In case (ii), we see that (2.5) holds. By (4.10), we find
\[
  bL(\underline{u} - u) + \frac{\delta^{1+\alpha}}{4} \sum F^{ii} U_{ii}^2 + \delta \Big(\sum F^{ii} U_{ii} - F^\tau u_t\Big)
  \leq -\frac{C}{u_t} + C\delta^{1+\alpha} \Big(1 + \sum F^{ii}\Big) + C(\delta + b^2) \sum F^{ii}.
  \tag{4.13}
\]
Note that
\[
  \frac{\delta^{1+\alpha}}{8} \sum F^{ii} U_{ii}^2 + \delta \sum F^{ii} U_{ii} \geq -2\delta^{1-\alpha} \sum F^{ii}
  \tag{4.14}
\]
and
\[
  L(\underline{u} - u) \geq 0
  \tag{4.15}
\]
by the concavity of $F$. Therefore, by (4.13)–(4.15), we have
\[
  \frac{\delta^{1+\alpha}}{8} \sum F^{ii} U_{ii}^2 - \delta F^\tau u_t \leq -\frac{C}{u_t} + C\delta^{1+\alpha} + C \sum F^{ii}.
  \tag{4.16}
\]
By the concavity of $f$, recalling that $u_t < 0$, we get
\[
  \begin{aligned}
  -u_t \Big(\sum F^{ii} + F^\tau\Big)
  &\geq f(-u_t \mathbf{1}) - f(\lambda(U), -u_t) + \sum F^{ii} U_{ii} - F^\tau u_t \\
  &\geq f(-u_t \mathbf{1}) - f(\lambda(U), -u_t) + u_t \Big(\sum F^{ii} + F^\tau\Big)
   + \frac{1}{4u_t} \Big(\sum F^{ii} U_{ii}^2 + F^\tau u_t^2\Big),
  \end{aligned}
  \tag{4.17}
\]
where $\mathbf{1} = (1, \ldots, 1) \in \mathbb{R}^{n+1}$. Note that there exists a constant $R > 0$ depending only on $|\underline{u}|_{C^2(M_T)}$ such that $|(\lambda(\underline{U}), -\underline{u}_t)| \leq R$ on $M_T$. It follows from (1.3) that
\[
  f(2R\mathbf{1}) - f(\lambda(U), -u_t) \geq f(2R\mathbf{1}) - f(R\mathbf{1}) := 2b_0.
  \tag{4.18}
\]
We may assume $-u_t(x_0, t_0) > 2R$, for otherwise we are done. Therefore, combining (4.17) and (4.18), we obtain
\[
  -u_t \Big(\sum F^{ii} + F^\tau\Big) \geq b_0 + \frac{1}{8u_t} \Big(\sum F^{ii} U_{ii}^2 + F^\tau u_t^2\Big).
  \tag{4.19}
\]
It follows from (2.5) and (4.19) that
\[
  \begin{aligned}
  -F^\tau u_t &\geq -\gamma_0 u_t \Big(\sum F^{ii} + F^\tau\Big) + \gamma_0 b_0
    + \frac{\gamma_0}{8u_t} \Big(\sum F^{ii} U_{ii}^2 + F^\tau u_t^2\Big) \\
  &= -\gamma_0 u_t \sum F^{ii} - \frac{7}{8} \gamma_0 u_t F^\tau + \gamma_0 b_0 + \frac{\gamma_0}{8u_t} \sum F^{ii} U_{ii}^2 \\
  &\geq -\gamma_0 u_t \sum F^{ii} + \gamma_0 b_0 + \frac{\gamma_0}{8u_t} \sum F^{ii} U_{ii}^2,
  \end{aligned}
  \tag{4.20}
\]
where $\gamma_0 := \frac{\beta}{2\sqrt{n+1}} > 0$. Substituting (4.20) in (4.16) we obtain
\[
  \Big(\frac{\delta^{1+\alpha}}{8} + \frac{\delta\gamma_0}{8u_t}\Big) \sum F^{ii} U_{ii}^2
  - \delta\gamma_0 u_t \sum F^{ii} + \delta\gamma_0 b_0
  \leq -\frac{C}{u_t} + C\delta^{1+\alpha} + C \sum F^{ii}.
  \tag{4.21}
\]
Choose $\delta$ sufficiently small such that
\[
  \delta\gamma_0 b_0 - C\delta^{1+\alpha} \geq c_1 > 0
\]



for some constant $c_1$. We can derive from (4.21) that
\[
  -u_t(x_0, t_0) \leq \max\Big\{\frac{\gamma_0}{\delta^\alpha}, \frac{C}{\delta\gamma_0}, \frac{C}{c_1}\Big\},
\]
and therefore (4.2) holds. Similarly, we can show
\[
  \sup_{M_T} u_t \leq C \Big(1 + \sup_{\mathcal{P}M_T} |u_t|\Big),
  \tag{4.22}
\]
by setting
\[
  W = \sup_{M_T} u_t e^{\phi} \quad \text{and} \quad
  \phi = \frac{\delta^{1+\alpha}}{2} |\nabla u|^2 - \delta u + b(\underline{u} - u).
\]
Combining (4.2) and (4.22), we see that (4.1) holds. $\Box$

Since $u_t = \varphi_t$ on $SM_T$ and (1.16) holds, we can derive the estimate
\[
  \sup_{M_T} |u_t| \leq C_3,
  \tag{4.23}
\]

where the constant $C_3$ depends on $C_2$ in (4.2) and $|\varphi|_{C^2(M_T)}$.

5. Global estimates for second order derivatives

In this section, we derive the global estimates for the second order derivatives. In particular, we prove the following maximum principle.

Theorem 5.1. Let $u \in C^4(M_T)$ be an admissible solution of (1.1) in $M_T$. Suppose that (1.3), (1.4) and (1.15) hold. Then
\[
  \sup_{M_T} |\nabla^2 u| \leq C_4 \Big(1 + \sup_{\mathcal{P}M_T} |\nabla^2 u|\Big),
  \tag{5.1}
\]
where $C_4 > 0$ depends on $|u|_{C^1(M_T)}$, $|u_t|_{C^0(M_T)}$, $|\psi|_{C^2(M_T)}$ and other known data.

Proof. Set
\[
  W = \max_{(x,t) \in M_T} \, \max_{\xi \in T_x M,\, |\xi| = 1} \big(\nabla_{\xi\xi} u + \chi(\xi, \xi)\big) e^{\phi},
\]
where $\phi$ is a function to be determined. We may assume $W$ is achieved at $(x_0, t_0) \in M_T - \mathcal{P}M_T$ and $\xi_0 \in T_{x_0} M$. Choose a smooth orthonormal local frame $e_1, \ldots, e_n$ about $x_0$ as before such that $\xi_0 = e_1$, $\nabla_{e_i} e_j = 0$, and $\{U_{ij}(x_0, t_0)\}$ is diagonal. We see that $W = U_{11}(x_0, t_0) e^{\phi(x_0, t_0)}$. We may also assume that $U_{11} \geq \cdots \geq U_{nn}$ at $(x_0, t_0)$. Since the function $\log(U_{11}) + \phi$ attains its maximum at $(x_0, t_0)$, we have, at $(x_0, t_0)$,
\[
  \frac{\nabla_i U_{11}}{U_{11}} + \nabla_i \phi = 0 \quad \text{for each } i = 1, \ldots, n,
  \tag{5.2}
\]
\[
  \frac{(\nabla_{11} u)_t}{U_{11}} + \phi_t \geq 0,
  \tag{5.3}
\]
and
\[
  0 \geq \sum F^{ii} \Big(\frac{\nabla_{ii} U_{11}}{U_{11}} - \Big(\frac{\nabla_i U_{11}}{U_{11}}\Big)^2 + \nabla_{ii} \phi\Big).
  \tag{5.4}
\]


Therefore, by (5.3) and (5.4), we find
\[
  L\phi \leq -\frac{1}{U_{11}} \Big(\sum F^{ii} \nabla_{ii} U_{11} - F^\tau (\nabla_{11} u)_t\Big)
  + \sum F^{ii} \Big(\frac{\nabla_i U_{11}}{U_{11}}\Big)^2.
  \tag{5.5}
\]
By the formula
\[
  \begin{aligned}
  \nabla_{ijkl} v - \nabla_{klij} v
  &= \sum_m R^m_{ljk} \nabla_{im} v + \sum_m \nabla_i R^m_{ljk} \nabla_m v + \sum_m R^m_{lik} \nabla_{jm} v \\
  &\quad + \sum_m R^m_{jik} \nabla_{lm} v + \sum_m R^m_{jil} \nabla_{km} v + \sum_m \nabla_k R^m_{jil} \nabla_m v,
  \end{aligned}
  \tag{5.6}
\]

we have ∇ii U11 ≥ ∇11 Uii − CU11 .

(5.7)

F ij ∇11 Uij − F τ ∇11 ut + F ij,kl ∇1 Uij ∇1 Ukl + F τ τ (∇1 ut )2 − 2F ij,τ ∇1 Uij ∇1 ut = ∇11 ψ ≥ −C.

(5.8)

Differentiating Eq. (1.1) twice, we have

It follows from (5.5), (5.7) and (5.8) that Lφ ≤

 C +C F ii + E, U11

(5.9)

where E=

   ∇ U 2  1  ij,kl i 11 . F ∇1 Uij ∇1 Ukl − 2 F ij,τ ∇1 Uij ∇1 ut + F τ τ (∇1 ut )2 + F ii U11 U 11 ij ijkl

E can be estimated as in [9] using an idea of Urbas [30] to which the following inequality proved by Andrews [1] and Gerhardt [6] is crucial. Lemma 5.2. For any symmetric matrix η = {ηij } we have   ∂2f  fi − fj 2 F ij,kl ηij ηkl = ηii ηjj + ηij . ∂λ ∂λ λ − λ i j i j ij ijkl

i̸=j

The second term on the right hand side is nonpositive if f is concave, and is interpreted as a limit if λi = λj . ˆ by To proceed we define the (n + 1) × (n + 1) matrix U   U (x , t ) 0 ij 0 0 ˆ= U . 0 −ut (x0 , t0 ) Let J = {i : 3Uii ≤ −U11 } and K = {i : 3Uii > −U11 }. Therefore, by Lemma 5.2, we have   − F ij,kl ∇1 Uij ∇1 Ukl + 2F ij,τ ∇1 Uij ∇1 ut − F τ τ ∇1 ut ∇1 ut ij

ijkl





1≤i,j≤n,i̸=j

F ii − F jj (∇1 Uij )2 Ujj − Uii

 F ii − F 11 (∇1 Ui1 )2 U11 − Uii 2≤i≤n 3  ii ≥ (F − F 11 )(∇1 Ui1 )2 2U11 ≥2

i∈K

1  ii ≥ (F − F 11 )((∇i U11 )2 − C), U11 i∈K

(5.10)



where the last inequality is derived from the fact that 2 2 (∇i U11 )2 ≤ (∇1 U1i )2 + C 3 3 which follows from (3.10). Thus, by (5.10) and (5.2), we obtain C  ii F 11  1  ii F (∇i U11 )2 + 2 F + 2 (∇i U11 )2 E ≤ 2 U11 U11 U11 i∈J i∈K i∈K    ii 2 ii 11 2 ≤ F (∇i φ) + C F +F (∇i φ) . i∈J

i∈K

(5.11)

(5.12)

i∈K

Let δ|∇u|2 + b(u − u), 2 where δ ≪ 1 ≪ b are positive constants to be determined. We have φ=

∇i φ ≤ δ∇i uUii + Cb. Thus, we can derive from (5.12) that    2 E ≤ Cb2 F ii + Cδ 2 F ii Uii2 + C F ii + C(δ 2 U11 + b2 )F 11 . i∈J

(5.13)

i∈K

On the other hand, by (3.7) and (3.10),    Lφ = δ F ii (∇ik u)2 + δ ∇k uF ii ∇iik u − δ ∇k uF τ (∇k u)t + bL(u − u) ik

≥δ



ik

k



F ii Uii2 + bL(u − u) − Cδ 1 +





F ii .

(5.14)

Combining (5.9), (5.13) and (5.14), we obtain     δ  ii 2 C F Uii + bL(u − u) ≤ + Cb2 F ii + Cb2 F 11 + C 1 + F ii 2 U11

(5.15)

i∈J

provided δ is sufficiently small. Note that |Ujj | ≥ 13 U11 , for j ∈ J. Therefore, by (5.15), we have    δ  ii 2 F ii F Uii + bL(u − u) ≤ C 1 + 4 2 ≥ max{12Cb2 /δ, 1}. when U11

(5.16)

Now let µ0 = µ(x0 , t0 ) and λ0 = λ(x0 , t0 ). If |λ0 − µ0 | ≥ β, we can obtain a bound of U11 (x0 , t0 ) by (2.4) as in [9] when b is sufficiently large.  = λ(U (x0 , t0 )). We may assume |λ|  ≥ |ut (x0 , t0 )|. By the If |λ0 − µ0 | < β, we see that (2.5) holds. Let λ concavity of f , we have      − f (λ(U ), −ut ) + |λ| F ii + F τ ≥ f (|λ|1) F ii Uii − F τ ut    − f (λ(U ), −u ) − |λ|  ≥ f (|λ|1) F ii + F τ . (5.17) t  ≥ 2R for otherwise we are done. By (5.17) and As in the estimate of |ut | (Section 4), we may assume |λ| (4.18), we have    |λ| F ii + F τ ≥ b0 , (5.18) where b0 is the constant defined in (4.18).


By (2.5), (4.15) and (5.16), we see that      2 F ii , 2c2 |λ| F ii + F τ ≤ C 1 +


(5.19)

where δβ c2 := √ . 8 n+1  from (5.18). Then we can derive a bound of |λ| Therefore in both cases we have U11 (x0 , t0 ) ≤ C.



6. Boundary estimates for second order derivatives

In this section, we consider the estimates for second order derivatives on $SM_T$. We may assume $\varphi \in C^4(M_T)$. We shall establish the following estimate

max |∇2 u| ≤ C5 ,

(6.1)

PMT

for some positive constant C5 depending on |u|C 1 (MT ) , |ut |C 0 (MT ) , |u|C 2 (MT ) , |ϕ|C 4 (MT ) and other known data. Fix a point (x0 , t0 ) ∈ SMT . We choose smooth orthonormal local frames e1 , . . . , en around x0 such that when restricted to ∂M , en is normal to ∂M . Since u − u = 0 on SMT , we have ∇αβ (u − u) = −∇n (u − u)Π (eα , eβ ),

∀ 1 ≤ α, β < n on SMT ,

(6.2)

where Π denotes the second fundamental form of ∂M . Therefore, |∇αβ u| ≤ C,

∀ 1 ≤ α, β < n on SMT .

(6.3)

Let ρ(x) and d(x) denote the distance from x ∈ M to x0 and ∂M respectively and set MTδ = {X = (x, t) ∈ M × (0, T ] : ρ(x) < δ}. Clearly ρ(x) < δ implies that d(x) < δ. We shall use the following barrier function as in [9]. Ψ = A1 v + A2 ρ 2 − A3



|∇γ (u − ϕ)|2 ,

(6.4)

γ
where v = (u − u) + ad −

N d2 . 2

The following lemma is crucial to construct barrier functions and the idea is mainly from [7,10] (see [12] also). Lemma 6.1. Suppose that (1.3), (1.4) and (1.15) hold. Then for any constant K > 0, there exist uniform positive constants a, δ sufficiently small, and A1 , A2 , A3 , N sufficiently large such that Ψ ≥ K(d + ρ2 ) in MTδ and 

LΨ ≤ −K 1 +

n  i=1

i | + fi |λ

n  i=1

fi + F τ



in MTδ .

(6.5)



Proof. For any fixed (x, t) ∈ MTδ , we may assume that Uij and F ij are both diagonal at (x, t). Firstly, we have (see [9] for details), n n     i | + L(∇k (u − ϕ)) ≤ C 1 + fi |λ fi + F τ , i=1

∀ 1 ≤ k ≤ n.

(6.6)

i=1

Therefore, 



L(|∇l (u − ϕ)|2 ) ≥

l
n n     i | + F ij Uil Ujl − C 1 + fi |λ fi + F τ . i

l
(6.7)

i

Using the same method of Proposition 2.19 in [9], we can show  1  2 fi λi , F ij Uil Ujl ≥ 2

(6.8)

i̸=r

l
 −ut ), where for some index r. Write µ = µ(x, t) and λ = λ(x, t) and note that µ = ( µ, −ut ) and λ = (λ, µ  = λ(U ). We shall consider two cases as before: (a) |νµ − νλ | < β and (b) |νµ − νλ | ≥ β. Case (a). By (2.5), we have n

fi ≥ √

 β  fk + F τ , n + 1 k=1

∀ 1 ≤ i ≤ n.

(6.9)

Now we make a little modification of the proof of Lemma 3.1 in [7] to show the following inequality      2 ≥ c3 2 − 1 fi λ fi + F τ fi λ (6.10) i i c3 i i̸=r

r < 0, we have for some c3 > 0. If λ 2 ≤ n λ r



2 + C, λ i

(6.11)

i̸=r

where C depends on the bound of ut since 

i − ut > 0. λ

Therefore, by (6.9) and (6.11), we have 2 ≤ nfr fr λ r

 i̸=r

√  n n + 1  2 2  fi λi + C fi λi + Cfr ≤ β i

(6.12)

i̸=r

and (6.10) holds. r ≥ 0. By the concavity of f , Now suppose λ  r ≤ fr µ fr λ r − F τ (ut − ut ) +



i ). fi ( µi − λ

(6.13)

i̸=r

Thus, by (6.9) and Schwarz inequality, we have       2  βf λ 2 2 2 τ 2 2 ≤ C f 2 µ  √ r r f ( µ + λ ) + (F ) fi + F τ ≤ fr2 λ  + f k i i r r r i n+1 k̸=r i̸=r      2 , fi λ ≤C fi + F τ fi + F τ + i i̸=r

(6.14)


where C may depend on the bound of |ut |. It follows that    2 ≤ C 2 + C fr λ fi λ fi + F τ r i


(6.15)

i̸=r

and (6.10) holds. Next, recall that d < δ in MTδ . Since |∇d| ≡ 1, in view of (6.9), we have n



F ij ∇i d∇j d ≥ (min fi )|∇d|2 = min fi ≥ √ i

i,j

i

 β  fk + F τ . n + 1 k=1

It follows that when a and δ are sufficiently small such that C(a + N d) < 2√βN , we have, n+1     fi − N F ij ∇i d∇j d Lv ≤ L(u − u) + C(a + N d) ij

βN ≤− √ 2 n+1





fk + F τ .

(6.16)

 ≥ R for R sufficiently large. By (2.5) and (5.18), we see We first suppose |λ| 

 2 ≥ √ β b0 |λ| fi λ i n+1

(6.17)

when R is sufficiently large. Note that for any σ > 0, 

i | ≤ σ fi |λ



 2 + 1 fi λ fi . i σ

(6.18)

Therefore, It follows from (6.10), (6.16) and (6.18) that for any σ > 0,       βA1 N  A3   2 i | + fi λi + CA3 1 + fi |λ fi + F τ fk + F τ + CA2 LΨ ≤ − √ fi − 2 2 n+1 i̸=r  A c       βA1 N  3 3 2 + CA2 i | + ≤− √ fk + F τ − fi λ fi + CA3 1 + fi |λ fi + F τ i 2 2 n+1     A3 c 3    2 βA1 N fi λi fk + F τ + A3 σ − ≤− √ 2 2 n+1    A3   + C A2 + fi + CA3 1 + F τ . (6.19) σ Let σ = c3 /4, we find  A c    βA1 N  3 3 2 + C(A2 + A3 ) LΨ ≤ − √ fk + F τ − fi λ fi + F τ + CA3 i 4 2 n+1      A3 βc3 b0  βA1 N A3   ≤− √ fk + F τ − √ |λ| − fi |λi | + C(A2 + A3 ) fi + F τ + CA3 2 2 n+1 8 n+1         βA1 N A3 i | + C(A2 + A3 ) ≤− √ fi + F τ − 1+ fi |λ fi + F τ (6.20) 2 2 n+1 √ by choosing R ≥ 8 n + 1C/βc3 b0 + 1.  ≤ R, by (1.3) and (1.5), we have If |λ| cR I ≤ {F ij } ≤ CR ,

cR ≤ F τ ≤ CR



for some uniform positive constants cR , CR which may depend on R. Therefore, by (6.16), we have     LΨ ≤ C(−A1 + A2 + A3 ) 1 + fi + fi |λi | + F τ (6.21) where C depends on cR and CR . Case (b). By Lemma 2.1, we may fix a and δ sufficiently small such that v ≥ 0 in MTδ and   ε in MTδ . (6.22) fi + F τ Lv ≤ − 1 + 2 r ≥ 0. By (6.7), (6.8) and (6.22) we see for any 0 < B < A1 (see [10]), We first consider the case that λ      A3   2 i | + LΨ ≤ (A1 + B)Lv − BLv + CA2 fi − fi λi + CA3 1 + fi |λ fi + F τ 2 i̸=r         (A1 + B)ε A3   2 i + CB 1 + 1+ fi + F τ − B fi λ fi λi ≤− fi + F τ + CA2 fi − 2 2 i̸=r     i | + + CA3 1 + fi |λ fi + F τ    (A1 + B)ε  2  r − A3 fi λ 1+ fi + F τ − (B − CA3 )fr λ ≤− i 2 2 i̸=r     i |. + C(B + A2 + A3 ) 1 + fi + F τ + (B + CA3 ) fi |λ (6.23) i̸=r

Notice that 2   A3   2 i | − 2(B + CA3 ) fi λi ≥ 2(B + CA3 ) fi . fi |λ 2 A3 i̸=r

(6.24)

i̸=r

Thus, we derive from (6.23) and (6.24) that    (A1 + B)ε  i | LΨ ≤ − 1+ fi + F τ − (B − CA3 ) fi |λ 2   2(B + CA )2   3 + C(B + A2 + A3 ) 1 + fi + F τ + fi . A3

(6.25)

r < 0, similar to (6.25), we have If λ    (A1 − B)ε  i | 1+ fi + F τ − (B − CA3 ) fi |λ 2   2(B + CA )2   3 + C(B + A2 + A3 ) 1 + fi + F τ + fi . A3

LΨ ≤ −

(6.26)

Checking (6.20), (6.21), (6.25) and (6.26), we can choose A1 ≫ A2 ≫ A3 ≫ 1 and A1 − B ≫ B ≫ A3 in (6.25) and (6.26) such that (6.5) holds and Ψ ≥ K(d + ρ2 ) in MTδ . Therefore, Lemma 6.1 is proved.  By (6.6), we can use Lemma 6.1 to choose suitable A1 ≫ A2 ≫ A3 ≫ 1 such that in MTδ , L(Ψ ± ∇α (u − ϕ)) ≤ 0, and Ψ ± ∇α (u − ϕ) ≥ 0 on PMTδ . Then it follows from the maximum principle that Ψ ± ∇α (u − ϕ) ≥ 0 in MTδ and therefore |∇nα u(x0 , t0 )| ≤ ∇n Ψ (x0 , t0 ) ≤ C,

∀ α < n.

It remains to establish the estimate sup ∇nn u ≤ C SMT

(6.27)



˜ (x, t) be the restriction of U (x, t) to Tx ∂M , the tangent since −ut + △u + trχ ≥ 0. For (x, t) ∈ SMT , let U ′ ˜ space of ∂M at x, and λ (U (x, t)) be the eigenvalues with respect to the induced metric. Similarly one can ˜ (x, t) and λ′ (U ˜ (x, t)). The proof of (6.27) is similar to that of the elliptic case using an idea of define U Trudinger [27] (see [9,12]), so we only provide a sketch here. Define ˜ , −ut ) := lim f (λ′ (U ˜ ), R, −ut ) F˜ (U R→∞

on SMT . We shall show that the following quantity   ˜ , −ut )(x, t) − ψ(x, t) F˜ (U m := min (x,t)∈SMT

is positive. Without loss of generality we assume that m is finite. It is easy to see that   ˜ , −u )(x, t) − ψ(x, t) > 0, ~ := min F˜ (U t (x,t)∈SMT

and we remark that ~ may be +∞. We may assume m < ~/2 (otherwise we are done). Suppose m is achieved at a point (x0 , t0 ) ∈ SMT . Choose a local orthonormal frame e1 . . . , en around x0 as before, and therefore ˜ = {Uαβ }, where 1 ≤ α, β ≤ n − 1. locally U From (6.2) we see Uαβ = U αβ − ∇n (u − u)σαβ

(6.28)

on SMT , where σαβ = ⟨∇α eβ , en ⟩, since σαβ = Π (eα , eβ ) on SMT . Let F˜0αβ and F˜0τ be the derivatives of ˜αβ (x0 , t0 ) and −ut (x0 , t0 ) respectively. By the concavity of F and that u = ut = ϕt on F˜ with respect to U t SMT , we have, at (x0 , t0 ), ∇n (u − u)F˜0αβ σαβ = F˜0αβ (U αβ − Uαβ ) ≥ F˜ (U αβ , −ut ) − F˜ (Uαβ , −ut ) ~ = F˜ (U αβ , −ut ) − ψ − m ≥ ~ − m ≥ . 2 Setting η =



αβ

(6.29)

F˜0αβ σαβ which is well defined in MTδ , by (6.29) we obtain η(x0 ) ≥

~ ≥ ϵ1 ~ > 0 2∇n (u − u)(x0 , t0 )

(6.30)

for some uniform constant ϵ1 > 0. Next, since F˜ is concave, we have     αβ  F˜0 Uαβ − Uαβ (x0 , t0 ) − F˜0τ ut − ut (x0 , t0 ) + ψ(x0 , t0 ) − ψ(x, t) αβ

≥ F˜ (Uαβ , −ut ) − ψ(x, t) − m ≥ 0 on SMT . Define Φ = −η∇n (u − ϕ) + Q, where Q≡



    F˜0αβ U αβ − Uαβ (x0 , t0 ) + F˜0τ ut (x0 , t0 ) − ϕt + ψ(x0 , t0 ) − ψ(x, t).

αβ

Therefore, by (6.31) and (6.28) we see that Φ(x0 , t0 ) = 0 and Φ ≥ 0 on SMT . Note that ∇n (u − ϕ) = 0 on BMT and we can derive as in [12] that Φ(x, 0) ≥ −Cd(x)

(6.31)



for (x, 0) ∈ BMTδ = MTδ ∩ M × {0}. Furthermore, by straightforward calculations, we have n n     i | + F τ . LΦ ≤ C 1 + fi + fi |λ i=1

(6.32)

i=1

Now applying Lemma 6.1 again, we get  L(Ψ + Φ) ≤ 0 in MTδ , Ψ + Φ ≥ 0 on PMTδ

(6.33)

for some A1 ≫ A2 ≫ A3 ≫ 1. Thus, by the maximum principle we have ∇n Φ(x0 , t0 ) ≥ −∇n Ψ (x0 , t0 ) ≥ −C and we obtain − C ≤ ∇n Φ(x0 , t0 ) ≤ −η(x0 )∇nn u(x0 , t0 ) + C.

(6.34)

Combining with (6.30), we see ∇nn u(x0 , t0 ) ≤

C . ϵ1 ~

By now we have obtained a priori upper bounds for all eigenvalues of $\{U_{ij}(x_0, t_0)\}$; hence, by (1.5), they are contained in a compact subset of $\Gamma$. It follows from (1.3) that $m > 0$. Consequently, there exist positive constants $c_4$ and $R_0$ such that
\[
  (\lambda'(\widetilde{U}(x, t)), R, -u_t(x, t)) \in \Gamma
\]
and
\[
  f(\lambda'(\widetilde{U}(x, t)), R, -u_t(x, t)) \geq \psi(x, t) + c_4
\]
for all $R > R_0$ and $(x, t) \in SM_T$, and therefore (6.27) holds by Lemma 1.2 in [3]. $\Box$

Acknowledgment

The third author is supported by the Program for Innovation Research of Science in Harbin Institute of Technology, No. PIRS OF HIT Q201501.

References

[1] B. Andrews, Contraction of convex hypersurfaces in Euclidean space, Calc. Var. Partial Differential Equations 2 (1994) 151–171.
[2] B. Andrews, J. McCoy, Y. Zheng, Contracting convex hypersurfaces by curvature, Calc. Var. Partial Differential Equations 47 (2013) 611–665.
[3] L.A. Caffarelli, L. Nirenberg, J. Spruck, Dirichlet problem for nonlinear second order elliptic equations III: Functions of the eigenvalues of the Hessian, Acta Math. 155 (1985) 261–301.
[4] L.C. Evans, Classical solutions of fully nonlinear, convex, second order elliptic equations, Comm. Pure Appl. Math. 25 (1982) 333–363.
[5] W.J. Firey, Shapes of worn stones, Mathematika 21 (1974) 1–11.
[6] C. Gerhardt, Closed Weingarten hypersurfaces in Riemannian manifolds, J. Differential Geom. 43 (1996) 612–641.
[7] B. Guan, The Dirichlet problem for fully nonlinear elliptic equations on Riemannian manifolds, arXiv:1403.2133.
[8] B. Guan, The Dirichlet problem for Hessian equations on Riemannian manifolds, Calc. Var. Partial Differential Equations 8 (1999) 45–69.
[9] B. Guan, Second order estimates and regularity for fully nonlinear elliptic equations on Riemannian manifolds, Duke Math. J. 163 (2014) 1491–1524.
[10] B. Guan, H.-M. Jiao, Second order estimates for Hessian type fully nonlinear elliptic equations on Riemannian manifolds, Calc. Var. Partial Differential Equations 54 (2015) 2693–2712.
[11] B. Guan, H.-M. Jiao, The Dirichlet problem for Hessian type elliptic equations on Riemannian manifolds, Discrete Contin. Dyn. Syst. A 36 (2016) 701–714.



[12] B. Guan, S.-J. Shi, Z.-N. Sui, On estimates for fully nonlinear parabolic equations on Riemannian manifolds, Anal. PDE 8 (2015) 1145–1164.
[13] B. Guan, J. Spruck, Interior gradient estimates for solutions of prescribed curvature equations of parabolic type, Indiana Univ. Math. J. 40 (1991) 1471–1481.
[14] Q. Han, Deforming convex hypersurfaces by curvature functions, Analysis 17 (1997) 113–127.
[15] N.M. Ivochkina, O.A. Ladyzhenskaya, On parabolic equations generated by symmetric functions of the principal curvatures of the evolving surface or of the eigenvalues of the Hessian. Part I: Monge–Ampère equations, St. Petersburg Math. J. 6 (1995) 575–594.
[16] N.M. Ivochkina, O.A. Ladyzhenskaya, Flows generated by symmetric functions of the eigenvalues of the Hessian, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 221 (1995) 127–144, 258 (in Russian); English transl. in J. Math. Sci. 87 (1997) 3353–3365.
[17] H.-M. Jiao, Second order estimates for Hessian equations of parabolic type on Riemannian manifolds, J. Differential Equations 259 (2015) 7662–7680.
[18] H.-M. Jiao, Z.-N. Sui, The first initial–boundary value problem for a class of fully nonlinear parabolic equations on Riemannian manifolds, Int. Math. Res. Not. 2015 (9) (2015) 2576–2595.
[19] N.V. Krylov, Sequences of convex functions and estimates of the maximum of the solution of a parabolic equation, Sibirsk. Mat. Zh. 17 (1976) 290–303 (in Russian); English transl. in Siberian Math. J. 17 (1976) 226–236.
[20] N.V. Krylov, Boundedly inhomogeneous elliptic and parabolic equations in a domain, Izv. Akad. Nauk SSSR Ser. Mat. 47 (1) (1983) 75–108; English transl. in Math. USSR-Izv. 22 (1984) 67–98.
[21] Y.-Y. Li, Some existence results of fully nonlinear elliptic equations of Monge–Ampère type, Comm. Pure Appl. Math. 43 (1990) 233–271.
[22] Y.-Y. Li, Interior gradient estimates for solutions of certain fully nonlinear elliptic equations, J. Differential Equations 90 (1991) 172–185.
[23] G.M. Lieberman, Second Order Parabolic Differential Equations, World Scientific Publ., Singapore, 1996.
[24] J. McCoy, Mixed volume preserving curvature flows, Calc. Var. Partial Differential Equations 24 (2005) 131–154.
[25] C.-Y. Ren, The first initial boundary value problem for parabolic Hessian equations on Riemannian manifolds, Comm. Math. Res. 29 (2013) 305–319.
[26] N.S. Trudinger, The Dirichlet problem for the prescribed curvature equations, Arch. Ration. Mech. Anal. 111 (1990) 153–179.
[27] N.S. Trudinger, On the Dirichlet problem for Hessian equations, Acta Math. 175 (1995) 151–164.
[28] K. Tso, Deforming a hypersurface by its Gauss–Kronecker curvature, Comm. Pure Appl. Math. 38 (1985) 867–882.
[29] K. Tso, On an Aleksandrov–Bakel'man type maximum principle for second-order parabolic equations, Comm. Partial Differential Equations 10 (1985) 543–553.
[30] J.I.E. Urbas, Hessian equations on compact Riemannian manifolds, in: Nonlinear Problems in Mathematical Physics and Related Topics II, Kluwer/Plenum, New York, 2002, pp. 367–377.