Vector Exponential Penalty Function Method for Nondifferentiable Multiobjective Programming Problems
Tadeusz Antczak
Faculty of Mathematics and Computer Science, University of Łódź, Banacha 22, 90-238 Łódź, Poland
Abstract In this paper, a new vector exponential penalty function method for nondifferentiable multiobjective programming problems with inequality constraints is introduced. First, a sequence of vector penalized optimization problems with the vector exponential penalty function constructed for the original multiobjective programming problem is considered, and the convergence of this method is established. Further, the exactness property of a vector exact penalty function method is defined and analyzed in the context of the introduced vector exponential penalty function method. Conditions are given guaranteeing the equivalence of the sets of (weak) Pareto solutions of the considered nondifferentiable multiobjective programming problem and the associated vector penalized optimization problem with the vector exact exponential penalty function. This equivalence is established for nondifferentiable vector optimization problems with inequality constraints in which the functions involved are r-invex. Communicated by Anton Abdulbasah Kamil.
Keywords Multiobjective programming; (weak) Pareto solution; Vector exact exponential penalty function; Penalized vector optimization problem; r-Invex function

Mathematics Subject Classification 49M37 · 90C29 · 90C30 · 90C26
1 Introduction
The field of multiobjective programming, also known as vector programming, has attracted a lot of attention, since many real-world problems in decision theory, economics, engineering, game theory, management science, physics and optimal control can be modeled as nonlinear vector optimization problems. Therefore, many approaches have been developed in the literature to address these problems. The properties of the objective function and the constraints determine the applicable technique. Considerable attention has been given recently to devising new methods which solve the given multiobjective programming problem by means of some associated optimization problem (see, for example, [1–3]).
Exact penalty function methods are important analytic and algorithmic techniques
in nonlinear mathematical programming for solving a nonlinear constrained scalar
optimization problem. Exact penalty function methods transform the considered
optimization problem into a single unconstrained optimization problem or into a finite
sequence of unconstrained optimization problems, avoiding thus the infinite
sequential process of the classical penalty function methods. Nondifferentiable exact penalty
functions were introduced by Zangwill [4] and Pietrzykowski [5]. Much of the
literature on nondifferentiable exact penalty functions is devoted to the study of scalar
convex optimization problems (see, for example, [6–16], and others). However, some
results on exact penalty functions used for solving various classes of nonconvex
optimization problems have been proved in the literature recently (see, for example,
[17,18]). Namely, in [17], Antczak introduced a new approach for solving nonconvex differentiable optimization problems involving r-invex functions. He defined a new exact penalty function method, called the exact exponential penalty function method, for solving nonconvex constrained scalar optimization problems. Further, under r-invexity hypotheses, Antczak established the equivalence between the sets of optimal solutions of the original scalar optimization problem with both inequality and equality constraints and of its associated penalized optimization problem with the exact exponential penalty function. Furthermore, in [17], a lower bound on the penalty parameter was provided such that this result holds whenever the penalty parameter is larger than this value.
In [19], Antczak defined a new vector exact l₁ penalty function method and used it for solving nondifferentiable convex multiobjective programming problems. He gave conditions guaranteeing the equivalence of the sets of (weak) Pareto solutions of the considered convex nondifferentiable multiobjective programming problem and its associated vector penalized optimization problem with the vector exact l₁ penalty function.
An exponential penalty function method was proposed by Murphy [20] for solving
nonlinear differentiable scalar optimization problems. Exponential penalty function
methods have been used widely in optimization theory by several authors for solving
optimization problems of various types (see, for example, [21–29], and others).
The aim of this paper is to show that unconstrained global optimization methods can also be used for solving nondifferentiable constrained multiobjective programming problems, by resorting to an exact penalty approach. Namely, we extend the exact exponential penalty function method introduced by Antczak [17] to the vectorial case. Hence, we introduce a new vector exponential penalty function method, and we use it for solving a class of nondifferentiable multiobjective programming problems involving r-invex functions (with respect to the same function η). The method is based on the construction of an exact exponential penalty function, which is minimized in the exponential penalized optimization problem constructed in this method. It is the sum of a certain "merit" function (which reflects the objective function of the original problem) and a penalty term which reflects the constraint set. The merit function is chosen as the composition of the exponential function and the original objective function, while the penalty term is obtained by multiplying a suitable function, which represents the constraints (here, it is the sum of compositions of the exponential function with the functions representing the individual constraints), by a positive parameter c, called the penalty parameter.
This work is organized as follows. In Sect. 2, some preliminary results are given that are useful in proving the main results of the paper. In Sect. 3, a new vector exponential penalty function method is introduced, and its algorithmic aspect is presented. The convergence of the sequence of weak Pareto solutions of the vector subproblems generated by the described method is established. In Sect. 4, the exactness of the penalization is extended to the case of an exact vector penalty function method. Then the results for the vector exterior exponential penalty function algorithm are reviewed, and the relationship between the weak Pareto solutions in the original multiobjective programming problem and the weak Pareto solutions in the associated penalized optimization subproblems is discussed. Thus, the exactness property is defined for the introduced vector exponential penalty function method. Namely, we prove that there exists a finite lower bound on the penalty parameter c such that, for every penalty parameter c exceeding this threshold, a (weak) Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem coincides with an unconstrained (weak) Pareto solution in its associated vector penalized optimization problem with the vector exact exponential penalty function. Also, under nondifferentiable r-invexity, the converse result is established for sufficiently large penalty parameters exceeding the finite threshold. Hence, the equivalence between the considered nonconvex nondifferentiable multiobjective programming problem and its associated vector penalized optimization problem is established for sufficiently large penalty parameters under the assumption that all functions constituting the considered nonsmooth multiobjective programming problem are r-invex (with respect to the same function η). The results established in the paper are illustrated by suitable examples of nonconvex nondifferentiable vector optimization problems which we solve by using the vector exact exponential penalty function method defined in this paper. Finally, in Sect. 5, we discuss the consequences of extending the exact exponential penalty function method defined by Antczak [17] for scalar optimization problems to the vectorial case and its significance for vector optimization.
2 Preliminaries
The following convention for equalities and inequalities will be used throughout the
paper.
For any x = (x_1, x_2, …, x_n)^T, y = (y_1, y_2, …, y_n)^T ∈ R^n, we define:
(i) x = y if and only if x_i = y_i for all i = 1, 2, …, n;
(ii) x < y if and only if x_i < y_i for all i = 1, 2, …, n;
(iii) x ≦ y if and only if x_i ≦ y_i for all i = 1, 2, …, n;
(iv) x ≤ y if and only if x ≦ y and x ≠ y.
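In code, these componentwise relations can be sketched as follows (the helper names are ours, for illustration only):

```python
# Componentwise order relations on R^n used throughout the paper (a sketch).
def eq(x, y):   # (i)  x = y: equality in every component
    return all(xi == yi for xi, yi in zip(x, y))

def lt(x, y):   # (ii) x < y: strict inequality in every component
    return all(xi < yi for xi, yi in zip(x, y))

def leq(x, y):  # (iii) x ≦ y: weak inequality in every component
    return all(xi <= yi for xi, yi in zip(x, y))

def le(x, y):   # (iv) x ≤ y: x ≦ y and x ≠ y
    return leq(x, y) and not eq(x, y)
```

Relation (iv) is the one that defines Pareto dominance below: y dominates x when le(y, x) holds.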
Definition 1 A function f : R^n → R is locally Lipschitz at a point x ∈ R^n if there exist scalars K_x > 0 and ε > 0 such that the inequality

| f(y) − f(z)| ≤ K_x ‖y − z‖

holds for all y, z ∈ x + εB, where B signifies the open unit ball in R^n, so that x + εB is the open ball of radius ε about x.
Definition 2 [30] The Clarke generalized directional derivative of a locally Lipschitz function f : X → R at x ∈ X in the direction v ∈ R^n, denoted f⁰(x; v), is given by

f⁰(x; v) = lim sup_{y→x, λ↓0} [f(y + λv) − f(y)]/λ.
Definition 3 [30] The Clarke generalized subgradient of a locally Lipschitz function f : X → R at x ∈ X, denoted ∂f(x), is defined as follows:

∂f(x) = {ξ ∈ R^n : f⁰(x; d) ≥ ⟨ξ, d⟩ for all d ∈ R^n}.
Lemma 4 [30] Let f : X → R be a locally Lipschitz function on a nonempty open
set X ⊂ Rn, u be an arbitrary point of X and λ ∈ R. Then
∂ (λ f ) (u) = λ∂ f (u) .
Proposition 5 [30] Let f_i : X → R, i = 1, …, k, be locally Lipschitz functions on a nonempty set X ⊂ R^n, and let u be an arbitrary point of X. Then

∂(Σ_{i=1}^k f_i)(u) ⊂ Σ_{i=1}^k ∂f_i(u).

Equality holds in the above relation if all but at most one of the functions f_i are strictly differentiable at u.
Corollary 6 [30] For any scalars λ_i, one has

∂(Σ_{i=1}^k λ_i f_i)(u) ⊂ Σ_{i=1}^k λ_i ∂f_i(u),

and equality holds if all but at most one of the functions f_i are strictly differentiable at u.
Theorem 7 [30] Let the function f : Rn → R be locally Lipschitz at a point x ∈ Rn
and attain its (local) minimum at x . Then
0 ∈ ∂ f (x ) .
Proposition 8 [30] Let the functions f_i : R^n → R, i ∈ I = {1, …, k}, be locally Lipschitz at a point x ∈ R^n. Then the function f : R^n → R defined by f(x) := max_{i=1,…,k} f_i(x) is also locally Lipschitz at x. In addition,

∂f(x) ⊂ conv{∂f_i(x) : i ∈ I(x)},

where I(x) := {i ∈ I : f(x) = f_i(x)}.
Now, for the reader's convenience, we give the definition of a nondifferentiable vector-valued (strictly) r-invex function (see [31] for the scalar case and [32] for the vectorial case).

Definition 9 Let X be a nonempty subset of R^n and f : R^n → R^k be a vector-valued function such that each of its components is locally Lipschitz at a given point x̄ ∈ X. If there exist a function η : X × X → R^n and a real number r such that, for i = 1, …, k, the following inequalities

(1/r)e^{r f_i(x)} ≥ (1/r)e^{r f_i(x̄)}[1 + r⟨ξ_i, η(x, x̄)⟩], if r ≠ 0,
f_i(x) − f_i(x̄) ≥ ⟨ξ_i, η(x, x̄)⟩, if r = 0,   (1)

hold for each ξ_i ∈ ∂f_i(x̄) and all x ∈ X, then f is said to be a nondifferentiable r-invex vector-valued function at x̄ on X (with respect to η). If inequalities (1) are satisfied at any point x̄ ∈ X, then f is said to be a nondifferentiable r-invex function on X (with respect to η). Each function f_i, i = 1, …, k, satisfying (1) is said to be locally Lipschitz r-invex at x̄ on X (with respect to η).
Definition 10 Let X be a nonempty subset of R^n and f : R^n → R^k be a vector-valued function such that each of its components is locally Lipschitz at a given point x̄ ∈ X. If there exist a function η : X × X → R^n and a real number r such that, for i = 1, …, k, the following inequalities

(1/r)e^{r f_i(x)} > (1/r)e^{r f_i(x̄)}[1 + r⟨ξ_i, η(x, x̄)⟩], if r ≠ 0,
f_i(x) − f_i(x̄) > ⟨ξ_i, η(x, x̄)⟩, if r = 0,   (2)

hold for each ξ_i ∈ ∂f_i(x̄) and all x ∈ X, x ≠ x̄, then f is said to be a nondifferentiable vector-valued strictly r-invex function at x̄ on X (with respect to η). If inequalities (2) are satisfied at any point x̄ ∈ X, then f is said to be a nondifferentiable strictly r-invex function on X (with respect to η).
Remark 11 In order to define an analogous class of vector-valued r-incave functions with respect to η, the direction of the inequalities (1) should be reversed.
Remark 12 Note that, in the case when r = 0, the definition of a (strictly) r-invex vector-valued function reduces to the definition of a nondifferentiable (strictly) invex vector-valued function (see, for example, [33,34]).

Remark 13 For more details on the properties of nondifferentiable r-invex functions, we refer, for example, to Antczak [31] for the scalar case and Antczak [32] for the vectorial case.

Now, we prove a useful result which we will use in proving the main result of the paper.

Theorem 14 Let x̄ ∈ X ⊂ R^n and q : X → R be a locally Lipschitz r-invex function at x̄ ∈ X on X with respect to η : X × X → R^n, where r ≠ 0. Further, let (1/r)(e^{r q^+(x)} − 1) := max{0, (1/r)(e^{r q(x)} − 1)}. Then the function (1/r)(e^{r q^+(·)} − 1) is a locally Lipschitz invex function at x̄ ∈ X on X with respect to the same function η.
Proof We consider the following cases.

(1) q(x̄) > 0. Then (1/r)(e^{r q^+(x)} − 1) = (1/r)(e^{r q(x)} − 1) on some neighborhood of x̄, and so ∂[(1/r)(e^{r q^+(·)} − 1)](x̄) = ∂[(1/r)(e^{r q(·)} − 1)](x̄). By assumption, q is a locally Lipschitz r-invex function at x̄ ∈ X on X with respect to η. Therefore, for any ζ^+ ∈ ∂[(1/r)(e^{r q^+(·)} − 1)](x̄) and all x ∈ X, we have

(1/r)(e^{r q(x)} − 1) − (1/r)(e^{r q(x̄)} − 1) ≥ ⟨ζ^+, η(x, x̄)⟩.   (3)

Since (1/r)(e^{r q^+(x)} − 1) ≥ (1/r)(e^{r q(x)} − 1) for all x ∈ X and (1/r)(e^{r q^+(x̄)} − 1) = (1/r)(e^{r q(x̄)} − 1), inequality (3) yields

(1/r)(e^{r q^+(x)} − 1) − (1/r)(e^{r q^+(x̄)} − 1) ≥ ⟨ζ^+, η(x, x̄)⟩.   (4)

(2) q(x̄) < 0. Then, by definition, (1/r)(e^{r q^+(x)} − 1) = 0 on some neighborhood of x̄, and so ∂[(1/r)(e^{r q^+(·)} − 1)](x̄) = {0}. Therefore, for any ζ^+ ∈ ∂[(1/r)(e^{r q^+(·)} − 1)](x̄) = {0} and all x ∈ X,

0 = ⟨ζ^+, η(x, x̄)⟩ ≤ (1/r)(e^{r q^+(x)} − 1) = (1/r)(e^{r q^+(x)} − 1) − (1/r)(e^{r q^+(x̄)} − 1),   (5)

since (1/r)(e^{r q^+(x)} − 1) ≥ 0 for all x ∈ X and (1/r)(e^{r q^+(x̄)} − 1) = 0.

(3) q(x̄) = 0. Then, by Proposition 8, ∂[(1/r)(e^{r q^+(·)} − 1)](x̄) ⊂ conv{∂[(1/r)(e^{r q(·)} − 1)](x̄), {0}}, so that every ζ^+ ∈ ∂[(1/r)(e^{r q^+(·)} − 1)](x̄) can be written as ζ^+ = λζ + (1 − λ)0 for some ζ ∈ ∂[(1/r)(e^{r q(·)} − 1)](x̄) and λ ∈ [0, 1]. By assumption, q is a locally Lipschitz r-invex function at x̄ ∈ X on X with respect to the function η. Therefore, for any ζ ∈ ∂[(1/r)(e^{r q(·)} − 1)](x̄) and all x ∈ X, we have

⟨ζ, η(x, x̄)⟩ ≤ (1/r)(e^{r q(x)} − 1) − (1/r)(e^{r q(x̄)} − 1) = (1/r)(e^{r q(x)} − 1) ≤ (1/r)(e^{r q^+(x)} − 1) = (1/r)(e^{r q^+(x)} − 1) − (1/r)(e^{r q^+(x̄)} − 1)   (6)

and, trivially,

⟨0, η(x, x̄)⟩ = 0 ≤ (1/r)(e^{r q^+(x)} − 1) = (1/r)(e^{r q^+(x)} − 1) − (1/r)(e^{r q^+(x̄)} − 1).   (7)

Hence, by (6) and (7), the relations

⟨λζ + (1 − λ)0, η(x, x̄)⟩ ≤ λ[(1/r)(e^{r q^+(x)} − 1) − (1/r)(e^{r q^+(x̄)} − 1)] + (1 − λ)[(1/r)(e^{r q^+(x)} − 1) − (1/r)(e^{r q^+(x̄)} − 1)] = (1/r)(e^{r q^+(x)} − 1) − (1/r)(e^{r q^+(x̄)} − 1)   (8)

hold for every λ ∈ [0, 1]. Thus, (8) implies that, for any ζ^+ ∈ ∂[(1/r)(e^{r q^+(·)} − 1)](x̄) and all x ∈ X, the inequality

(1/r)(e^{r q^+(x)} − 1) − (1/r)(e^{r q^+(x̄)} − 1) ≥ ⟨ζ^+, η(x, x̄)⟩   (9)

holds. Hence, by (3), (4), (9) and Remark 12, we conclude that (1/r)(e^{r q^+(·)} − 1) is a nondifferentiable invex function at x̄ ∈ X on X with respect to η. The proof of this theorem is completed.
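Theorem 14 can be spot-checked numerically. As an illustration (our example, not from the paper), take r = 1 and the convex, hence 1-invex (with η(x, x̄) = x − x̄), function q(x) = x² − 1, and test the invexity inequality for φ(x) = max{0, e^{q(x)} − 1} at the nonsmooth point x̄ = 1, where q(x̄) = 0 and, by Proposition 8, the Clarke subdifferential of φ is contained in [0, 2]:

```python
import math

def q(x):
    # a convex (hence 1-invex with eta(x, xbar) = x - xbar) sample function
    return x * x - 1.0

def phi(x):
    # phi = max{0, e^{q(x)} - 1} = (1/r)(e^{r q^+(x)} - 1) with r = 1
    return max(0.0, math.exp(q(x)) - 1.0)

xbar = 1.0  # q(xbar) = 0: the nonsmooth case; subdifferential of phi is in [0, 2]
grid = [-2.0 + i * 0.01 for i in range(501)]  # test points x in [-2, 3]

# Invexity inequality phi(x) - phi(xbar) >= zeta * (x - xbar) for sample
# subgradients zeta in {0, 1, 2} (endpoints and midpoint of [0, 2]).
ok = all(phi(x) - phi(xbar) >= zeta * (x - xbar) - 1e-12
         for zeta in (0.0, 1.0, 2.0)
         for x in grid)
```

The flag `ok` comes out true on this grid, consistent with the conclusion of Theorem 14 for this particular q.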
In general, an unconstrained nonsmooth vector optimization problem is represented as follows:

f(x) := (f_1(x), …, f_k(x)) → V-min
subject to x ∈ X,   (UVP)

where the objective functions f_i : X → R, i ∈ I = {1, …, k}, are locally Lipschitz on X, and X is a nonempty open subset of R^n.
In general, the concept of an optimal solution used in scalar optimization does not carry over to multiobjective programming problems. For such multicriteria optimization problems, an optimal solution is defined in terms of a (weak) Pareto solution [(weakly) efficient solution] in the following sense:
Definition 15 A feasible point x̄ is said to be a Pareto solution (efficient solution) for a vector optimization problem if and only if there exists no other feasible solution x such that

f(x) ≤ f(x̄).

Definition 16 A feasible point x̄ is said to be a weak Pareto solution (weakly efficient solution, weak minimum) for a vector optimization problem if and only if there exists no other feasible solution x such that

f(x) < f(x̄).
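To make Definitions 15 and 16 concrete, the following sketch (our illustration, not part of the paper) filters a finite list of objective vectors:

```python
def pareto(points):
    # Definition 15: keep x iff no other y satisfies f(y) ≤ f(x),
    # i.e. y ≦ x componentwise with y ≠ x.
    def le(y, x):
        return (all(a <= b for a, b in zip(y, x))
                and any(a < b for a, b in zip(y, x)))
    return [x for x in points if not any(le(y, x) for y in points)]

def weak_pareto(points):
    # Definition 16: keep x iff no other y satisfies f(y) < f(x)
    # strictly in every component.
    def lt(y, x):
        return all(a < b for a, b in zip(y, x))
    return [x for x in points if not any(lt(y, x) for y in points)]

pts = [(1, 2), (2, 1), (1, 1), (2, 2), (1, 3)]
```

On this sample, `pareto(pts)` keeps only (1, 1), while `weak_pareto(pts)` also keeps (1, 2), (2, 1) and (1, 3): every Pareto solution is a weak Pareto solution, but not conversely.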
It is easy to verify that every Pareto solution is a weak Pareto solution. The following result gives a necessary optimality condition for the unconstrained vector optimization problem (UVP) (see [35]).

Theorem 17 A necessary condition for the point x̄ to be (weak) Pareto optimal in the nondifferentiable vector optimization problem (UVP) is that there exists a multiplier vector λ ∈ R^k with λ ≥ 0 such that

0 ∈ Σ_{i=1}^k λ_i ∂f_i(x̄).
Often, the feasible set of a multiobjective programming problem is given by functional inequalities and, therefore, we consider the nondifferentiable constrained vector optimization problem in the following form:

f(x) := (f_1(x), …, f_k(x)) → V-min
subject to g(x) := (g_1(x), …, g_m(x)) ≦ 0,   (VP)
x ∈ X,

where f_i : X → R, i ∈ I = {1, …, k}, and g_j : X → R, j ∈ J = {1, …, m}, are locally Lipschitz functions on a nonempty open set X ⊂ R^n. Let

D := {x ∈ X : g(x) ≦ 0}

denote the set of all feasible solutions of the constrained multiobjective programming problem (VP).
It is well known (see, for example, [33–38]) that the following conditions, known as the generalized form of the Karush–Kuhn–Tucker conditions, are necessary for a (weak) Pareto solution in the considered nondifferentiable vector optimization problem (VP).

Theorem 18 Let x̄ ∈ D be a (weak) Pareto solution in problem (VP) and let a constraint qualification (see, for example, [30,36,39]) be satisfied at x̄. Then, there exist Lagrange multipliers λ̄ ∈ R^k and μ̄ ∈ R^m such that

0 ∈ Σ_{i=1}^k λ̄_i ∂f_i(x̄) + Σ_{j=1}^m μ̄_j ∂g_j(x̄),   (10)
μ̄_j g_j(x̄) = 0, j ∈ J,   (11)
λ̄ ≥ 0, Σ_{i=1}^k λ̄_i = 1, μ̄ ≧ 0.   (12)
Definition 19 The point x̄ ∈ D is said to be a Karush–Kuhn–Tucker point in the considered multiobjective programming problem (VP) if there exist Lagrange multipliers λ̄ ∈ R^k, μ̄ ∈ R^m such that the Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) are satisfied at x̄.
3 Convergence of a New Vector Exponential Penalty Function Method
for a Multiobjective Programming Problem
For the considered nonlinear multiobjective programming problem (VP), we introduce
a new vector exponential penalty function method as follows:
P_r(x, c) = (1/r)e^{r f(x)} + c [Σ_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1)] e → V-min,   (VP_r(c)) (13)

where r is a finite real number not equal to 0 and e = (1, …, 1) ∈ R^k. Note that, for a given constraint g_j(x) ≤ 0, the function (1/r)(e^{r g_j^+(·)} − 1) defined by

(1/r)(e^{r g_j^+(x)} − 1) = { 0, if g_j(x) ≤ 0; (1/r)(e^{r g_j(x)} − 1), if g_j(x) > 0 }   (14)

is equal to zero for all x that satisfy the constraint, and it has a positive value whenever this constraint is violated. Moreover, large violations of the constraint g_j(x) ≤ 0 result in large values of (1/r)(e^{r g_j^+(x)} − 1). Thus, the function (1/r)(e^{r g_j^+(·)} − 1) has the penalty features relative to the single inequality constraint g_j. However, observe that, at points where g_j(x) = 0, the foregoing objective function might not be differentiable, even though g_j is differentiable.
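A minimal numerical sketch of the penalty function defined above (the function names are ours; any objective and constraint callables can be plugged in):

```python
import math

def exp_penalty_term(x, g_list, r):
    # sum_j (1/r)(e^{r g_j^+(x)} - 1); zero iff x satisfies every g_j(x) <= 0
    return sum((math.exp(r * max(0.0, g(x))) - 1.0) / r for g in g_list)

def vector_exp_penalty(x, c, r, f_list, g_list):
    # Vector exponential penalty function: component i is
    # (1/r) e^{r f_i(x)} + c * (penalty term), the same term for every i.
    pen = exp_penalty_term(x, g_list, r)
    return [math.exp(r * f(x)) / r + c * pen for f in f_list]
```

For a feasible x the penalty term vanishes and each component reduces to (1/r)e^{r f_i(x)}; for an infeasible x every component is inflated by the same amount c times the violation measure.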
As follows from (13), the vector penalized problem (VP_r(c)) constructed in the vector exponential penalty function method is an unconstrained vector optimization problem whose objective is the sum of a certain vector "merit" function (which reflects the vector objective function of the given multiobjective programming problem) and a penalty term, added to each component of the vector merit function, which reflects the constraint set. The vector merit function is chosen as the composition of the exponential function and the original vector objective function, while the penalty term (the same for each component of the merit function) is obtained by multiplying a suitable function, which represents the constraints, by a positive parameter c, called the penalty parameter.
Remark 20 Note that P_r : R^n × R_+ → R^k and P_r(x, c) = (P_{r1}(x, c), …, P_{rk}(x, c)), where P_{ri}(x, c) := (1/r)e^{r f_i(x)} + c Σ_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1), i = 1, …, k.
Remark 21 In the case when r = 0, the vector penalized problem (VP_r(c)) reduces to the following form:

P_0(x, c) = f(x) + c [Σ_{j=1}^m g_j^+(x)] e → V-min.   (VP_0(c)) (15)

Thus, in the case when r = 0, we recover the classical vector l₁ penalty function method (see Antczak [19]).
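The reduction in Remark 21 rests on the elementary limit (1/r)(e^{rt} − 1) → t as r → 0, which is easy to check numerically (a sketch; `phi` is our name for the scalar building block of the penalty term):

```python
import math

def phi(t, r):
    # (1/r)(e^{rt} - 1): the building block of the exponential penalty term
    return (math.exp(r * t) - 1.0) / r

# As r -> 0 (from either side), phi(t, r) tends to t, so the exponential
# penalty term c * sum_j phi(g_j^+(x), r) tends to the l1 term
# c * sum_j g_j^+(x) of the vector penalized problem (VP_0(c)).
for r in (1.0, 0.1, 0.001, -0.001):
    print(r, phi(0.7, r))  # values approach t = 0.7 as r shrinks
```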
Now, we show that a weak Pareto solution in the considered multiobjective programming problem (VP) can be obtained by solving a sequence of vector penalized problems of the form (13) with the penalty parameter c selected from an increasing sequence of parameters (c_n). Therefore, for the considered multiobjective programming problem (VP), we now construct a sequence of vector penalized optimization problems (VP_r(c_n)), n = 1, 2, …, with the vector exponential penalty function as follows:

P_r(x, c_n) = (1/r)e^{r f(x)} + c_n [Σ_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1)] e → V-min,   (VP_r(c_n)) (16)

where c_n > 0 and lim_{n→∞} c_n = ∞. Moreover, we denote by x̄_n an approximate weak Pareto solution in the vector penalized optimization problem (VP_r(c_n)) with the vector exact exponential penalty function.
An algorithmic framework that forms the basis for the introduced vector exponential penalty function method is as follows:

Exponential penalty function method for a multiobjective programming problem (VP).
Given c_0 > 0, tolerance δ > 0, starting point x_0^s;
FOR n = 0, 1, 2, …
  Find an approximate weak Pareto solution x_n of P_r(x, c_n), starting at x_n^s;
  IF Σ_{j=1}^m (1/r)(e^{r g_j^+(x_n)} − 1) ≤ δ THEN
    STOP with approximate weak Pareto solution x_n;
  ELSE
    Choose new penalty parameter c_{n+1} > c_n;
    Choose new starting point x_{n+1}^s;
  END IF
END FOR
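The framework above can be sketched as follows (our illustration: the inner solver is replaced by a crude grid search on a weighted-sum scalarization of P_r(·, c_n); in practice any unconstrained solver can be substituted):

```python
import math

def penalty_term(x, g_list, r):
    # feasibility measure: sum_j (1/r)(e^{r g_j^+(x)} - 1); zero iff x is feasible
    return sum((math.exp(r * max(0.0, g(x))) - 1.0) / r for g in g_list)

def penalty_method(f_list, g_list, r=1.0, c0=1.0, delta=1e-6,
                   growth=10.0, max_iter=20, grid=None):
    # Outer loop of the exponential penalty algorithm: at each iteration,
    # minimize a weighted-sum scalarization of P_r(., c_n) (grid search
    # stands in for a real solver), then test the feasibility measure
    # against the tolerance delta; otherwise grow the penalty parameter.
    if grid is None:
        grid = [i / 1000.0 for i in range(3001)]  # 1-D grid on [0, 3]
    c = c0
    xn = grid[0]
    for _ in range(max_iter):
        xn = min(grid, key=lambda x, c=c: sum(math.exp(r * f(x)) / r
                                              for f in f_list) / len(f_list)
                                          + c * penalty_term(x, g_list, r))
        if penalty_term(xn, g_list, r) <= delta:
            break
        c *= growth
    return xn, c

# Example-24-style data (see Sect. 3): f(x) = (x - 1, x - 1), g(x) = 1 - x <= 0.
x_star, c_final = penalty_method([lambda x: x - 1.0, lambda x: x - 1.0],
                                 [lambda x: 1.0 - x], c0=0.05)
```

Starting from the deliberately small c_0 = 0.05, the loop grows the penalty parameter until the unconstrained minimizer becomes feasible, at which point it returns an (approximate) weak Pareto solution.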
Now, we prove the convergence theorem for the introduced vector exponential penalty function method. Namely, we show that if (x̄_{n_s}) is any convergent subsequence of (x̄_n) and lim_{s→∞} x̄_{n_s} = x̄ ∈ D, then x̄ is a weak Pareto solution in the considered multiobjective programming problem (VP).

First, we show that any limit point x̄ of the sequence (x̄_n), that is, of the sequence of approximate weak Pareto solutions in the vector penalized optimization problems (VP_r(c_n)) with the vector exact exponential penalty function, is feasible in the considered multiobjective programming problem (VP).

Lemma 22 Let c_n > 0 and lim_{n→∞} c_n = ∞. If x̄ = lim_{n→∞} x̄_n, then x̄ is feasible in the considered multiobjective programming problem (VP).

Proof Let x̄ = lim_{n→∞} x̄_n. Thus, there exists a subsequence {x̄_{n_s}} of {x̄_n} such that x̄_{n_s} is an approximate weak Pareto solution in the vector penalized problem (VP_r(c_{n_s})), s = 1, 2, …, with the vector exponential penalty function and, moreover, lim_{s→∞} x̄_{n_s} = x̄.
We proceed by contradiction. Suppose, contrary to the result, that x̄ ∉ D. If we take x ∈ D, then, according to the definition of the vector penalized problem (VP_r(c_{n_s})) and Definition 16, there exists i_{n_s} ∈ {1, …, k} such that

(1/r)e^{r f_{i_{n_s}}(x)} + c_{n_s} Σ_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1) ≥ (1/r)e^{r f_{i_{n_s}}(x̄_{n_s})} + c_{n_s} Σ_{j=1}^m (1/r)(e^{r g_j^+(x̄_{n_s})} − 1).   (17)

Since x ∈ D, by (14), we have

Σ_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1) = 0.   (18)

By assumption, x̄ ∉ D. This means that there exists j ∈ {1, 2, …, m} such that g_j(x̄) > 0. Then, by (14), it follows that

Σ_{j=1}^m (1/r)(e^{r g_j^+(x̄)} − 1) > 0.   (19)

Hence, (19) implies that

Σ_{j=1}^m (1/r)(e^{r g_j^+(x̄)} − 1) > ε   (20)

for some ε > 0. By assumption, lim_{s→∞} x̄_{n_s} = x̄. Since Σ_{j=1}^m (1/r)(e^{r g_j^+(x̄_{n_s})} − 1) → Σ_{j=1}^m (1/r)(e^{r g_j^+(x̄)} − 1), for all s sufficiently large, by (20), we have

Σ_{j=1}^m (1/r)(e^{r g_j^+(x̄_{n_s})} − 1) > ε.   (21)

Therefore, using (18) and (21), for s sufficiently large, we have

(1/r)e^{r f_{i_{n_s}}(x̄_{n_s})} + c_{n_s} Σ_{j=1}^m (1/r)(e^{r g_j^+(x̄_{n_s})} − 1) − [(1/r)e^{r f_{i_{n_s}}(x)} + c_{n_s} Σ_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1)] > (1/r)e^{r f_{i_{n_s}}(x̄_{n_s})} − (1/r)e^{r f_{i_{n_s}}(x)} + c_{n_s} ε → ∞, as s → ∞.

This is a contradiction to (17). This means that x̄ ∈ D, and the proof of this lemma is completed.
The following theorem shows that if a sequence of approximate weak Pareto
solutions in the vector penalized problem (VPr (cn)) with the vector exponential penalty
function converges to x , then x is also a weak Pareto solution in the considered
multiobjective programming problem (VP).
Theorem 23 Let x̄_n be an approximate weak Pareto solution in the vector penalized optimization problem (VP_r(c_n)) with the vector exponential penalty function, n = 1, 2, …. If (x̄_{n_s}) is any convergent subsequence of (x̄_n) and lim_{s→∞} x̄_{n_s} = x̄, then x̄ is a weak Pareto solution in the considered multiobjective programming problem (VP).

Proof Let (x̄_{n_s}) be any convergent subsequence of (x̄_n) and lim_{s→∞} x̄_{n_s} = x̄. By Lemma 22, it follows that x̄ is feasible in the considered multiobjective programming problem (VP). We proceed by contradiction. Suppose, contrary to the result, that x̄ is not a weak Pareto solution in the considered multiobjective programming problem (VP). Hence, by Definition 16, it follows that there exists x ∈ D such that

f_i(x) < f_i(x̄), i = 1, …, k.   (22)

Thus,

(1/r)e^{r f_i(x)} < (1/r)e^{r f_i(x̄)}, i = 1, …, k.   (23)

Since x̄_{n_s} is a weak Pareto solution in the vector penalized problem (VP_r(c_{n_s})), there exists i_{n_s} ∈ {1, …, k} such that

(1/r)e^{r f_{i_{n_s}}(x)} + c_{n_s} Σ_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1) ≥ (1/r)e^{r f_{i_{n_s}}(x̄_{n_s})} + c_{n_s} Σ_{j=1}^m (1/r)(e^{r g_j^+(x̄_{n_s})} − 1).   (24)

Since x ∈ D, by (14), we have

Σ_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1) = 0.   (25)

Also by (14), it follows that

Σ_{j=1}^m (1/r)(e^{r g_j^+(x̄_{n_s})} − 1) ≥ 0.   (26)

Combining (24)–(26), we get

(1/r)e^{r f_{i_{n_s}}(x)} ≥ (1/r)e^{r f_{i_{n_s}}(x̄_{n_s})}.   (27)

By (23), it follows that

(1/r)e^{r f_{i_{n_s}}(x)} < (1/r)e^{r f_{i_{n_s}}(x̄)}.   (28)

Since lim_{s→∞} x̄_{n_s} = x̄, for sufficiently large s, (28) implies that the inequality

(1/r)e^{r f_{i_{n_s}}(x)} < (1/r)e^{r f_{i_{n_s}}(x̄_{n_s})}   (29)

holds, contradicting (27). This means that any limit point of any convergent subsequence of (x̄_n) is a weak Pareto solution in the considered multiobjective programming problem (VP). The proof of this theorem is completed.
It turns out that the strategy for choosing the penalty parameter c_n is crucial to the practical success of the algorithm presented above. If the initial choice c_0 is too small, many cycles of the algorithm may be needed to determine an appropriate solution. In order to illustrate the difficulties caused by an inappropriate value of c, we consider the following multiobjective programming problem.

Example 24 Consider the following vector optimization problem:

f(x) = (x − 1, x − 1) → V-min
subject to g(x) = 1 − x ≤ 0,   (VP0)
x ∈ X = {x ∈ R : x ≥ 0}.

Note that D = {x ∈ R : x ≥ 1} and x̄ = 1 is a Pareto solution in the considered multiobjective programming problem (VP0). Note that all functions constituting problem (VP0) are 1-invex on R with respect to the same function η, where η(x, x̄) = x − x̄. We define the vector exponential penalty function P_1(·, c) as follows:

P_1(x, c) = (e^{x−1} + c(e^{(1−x)^+} − 1), e^{x−1} + c(e^{(1−x)^+} − 1)), where (1 − x)^+ = max{0, 1 − x}.

Now, we consider various values of the penalty parameter c and draw the graph of each component of the vector exponential penalty function P_1(·, c) for the considered multiobjective programming problem (VP0) according to these values of the penalty parameter c (Fig. 1).
Note that each component of the vector exponential penalty function P_1(·, c) is a monotonically increasing function when c is smaller than 0.15. On the other hand, when c is chosen from the interval (0.15, 1), the vector exponential penalty function has a Pareto solution, but not at the feasible solution x̄ = 1, while the vector exponential penalty function P_1(·, c) has a Pareto solution at x̄ = 1 when c > 1. Therefore, if, for example, the current iterate in the above algorithm is x_n = 1/4 and the penalty parameter c_n is chosen to be less than 1, then almost any implementation of the vector exponential penalty method will give a step that moves away from the Pareto solution x̄ = 1. This behavior of the algorithm will be repeated, producing increasingly poorer iterates, until the penalty parameter c is increased above the threshold equal to 1. It turns out in our further considerations that this value of the threshold is not accidental (see Remark 37 below).
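Under the data of Example 24 (r = 1), the location of the minimizer of each component of P_1(·, c) can be verified numerically: on (0, 1) the stationarity condition e^{x−1} = c e^{1−x} gives x = 1 + (ln c)/2, so the minimizer reaches the Pareto solution x̄ = 1 only once c exceeds the threshold 1 (a sketch; the grid search is our simplification):

```python
import math

def P1(x, c):
    # One component of the vector penalty function for Example 24
    # (both components agree): e^{x-1} + c (e^{max(0, 1-x)} - 1)
    return math.exp(x - 1.0) + c * (math.exp(max(0.0, 1.0 - x)) - 1.0)

def minimizer(c, lo=0.0, hi=3.0, n=30000):
    # crude grid search for the unconstrained minimizer of P1(., c)
    grid = [lo + i * (hi - lo) / n for i in range(n + 1)]
    return min(grid, key=lambda x: P1(x, c))

# Small c (below e^{-2} ~ 0.135): P1 is increasing, the minimum sits at x = 0.
# Intermediate c in (e^{-2}, 1): spurious interior minimum at x = 1 + ln(c)/2 < 1.
# c > 1: the minimum lands at the Pareto solution x = 1.
```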
4 Exactness of the Introduced Vector Exponential Penalty Function
Method
In order to avoid the need for an unbounded sequence of penalty parameters, in other words, an infinite sequence of penalized optimization problems, we now prove that the introduced vector exponential penalty function method is exact in the sense that a (weak) Pareto solution in the original multiobjective programming problem is equivalent to a (weak) Pareto solution in the associated vector penalized problem for a finite (sufficiently large) value of the penalty parameter. In order to prove this result, we assume that all functions constituting the considered multiobjective programming problem (VP) are locally Lipschitz r-invex with respect to the same function η.
Now, in a natural way, we extend the well-known definition of the exactness property for a scalar exact penalty function to the vectorial case.
Definition 25 If a threshold value c̄ ≥ 0 exists such that, for every c > c̄,

arg (weak) Pareto {P_r(x, c) : x ∈ R^n} = arg (weak) Pareto {f(x) : x ∈ D},

then the function P_r(x, c) is termed a vector exact exponential penalty function. According to this definition, we call (VP_r(c)), defined by (13), the vector penalized problem with the vector exact exponential penalty function.
It is clear that, conceptually, if P_r(x, c) is a vector exact exponential penalty function, we can find the constrained (weak) Pareto solutions of the considered multiobjective programming problem (VP) by looking for the unconstrained (weak) Pareto solutions of the function P_r(x, c), for sufficiently large (but finite) values of the penalty parameter c.
Now, for sufficiently large values of the penalty parameter c, we prove the
equivalence between the sets of (weak) Pareto solutions of problem (VP) and the vector
penalized problem (VPr (c)) with the vector exact exponential penalty function.
First, we establish that a Karush–Kuhn–Tucker point in the considered
multiobjective programming problem is a weak Pareto solution of the vector exact exponential
penalty function in the associated vector penalized problem (VPr (c)), for sufficiently
large penalty parameters c greater than the given threshold.
Theorem 26 Let x̄ be a feasible solution in the nonsmooth multiobjective programming problem (VP) at which the generalized Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) are satisfied with the Lagrange multipliers λ̄_i, i ∈ I, and μ̄_j, j ∈ J. Furthermore, assume that the objective function f and the constraint function g are r-invex at x̄ on X with respect to the same function η, and let M = max{e^{r f_i(x̄)} : i ∈ I}. If the penalty parameter c is sufficiently large (it is sufficient to set c ≥ M max{μ̄_j : j ∈ J}), then x̄ is also a weak Pareto solution in the associated vector penalized optimization problem (VP_r(c)) with the vector exact exponential penalty function.
Proof We proceed by contradiction. Suppose, contrary to the result, that x̄ is not a weak Pareto solution in the associated vector penalized problem (VP_r(c)) with the vector exact exponential penalty function. Therefore, by Definition 16, there exists x ∈ X such that

P_r(x, c) < P_r(x̄, c).   (30)

By the definition of the vector penalized problem (VP_r(c)) [see (13)], we have

(1/r)e^{r f_i(x)} + c Σ_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1) < (1/r)e^{r f_i(x̄)} + c Σ_{j=1}^m (1/r)(e^{r g_j^+(x̄)} − 1), i = 1, …, k.   (31)

Since x̄ is a feasible solution in the nonsmooth multiobjective programming problem (VP), (14) yields Σ_{j=1}^m (1/r)(e^{r g_j^+(x̄)} − 1) = 0. Hence, (31) gives

(1/r)e^{r f_i(x)} − (1/r)e^{r f_i(x̄)} + c Σ_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1) < 0, i = 1, …, k.   (32)

By assumption, M = max{e^{r f_i(x̄)} : i ∈ I} and, moreover, c ≥ M max{μ̄_j : j ∈ J}. Dividing (32) by e^{r f_i(x̄)} > 0 and using the relations c/e^{r f_i(x̄)} ≥ c/M ≥ μ̄_j, j ∈ J, together with Σ_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1) ≥ 0, we obtain

(1/r)(e^{r(f_i(x) − f_i(x̄))} − 1) + (1/r) Σ_{j=1}^m μ̄_j (e^{r g_j^+(x)} − 1) < 0, i = 1, …, k.   (33)

By the Karush–Kuhn–Tucker necessary optimality condition (12), λ̄_i ≥ 0 for every i ∈ I; multiplying (33) by λ̄_i, it follows that

(1/r) λ̄_i (e^{r(f_i(x) − f_i(x̄))} − 1) + (1/r) λ̄_i Σ_{j=1}^m μ̄_j (e^{r g_j^+(x)} − 1) ≤ 0, i = 1, …, k,   (34)

and, since Σ_{i=1}^k λ̄_i = 1, there exists at least one i* ∈ I with λ̄_{i*} > 0, for which

(1/r) λ̄_{i*} (e^{r(f_{i*}(x) − f_{i*}(x̄))} − 1) + (1/r) λ̄_{i*} Σ_{j=1}^m μ̄_j (e^{r g_j^+(x)} − 1) < 0.   (35)

Adding the inequalities (34), i ∈ I, and taking into account the strict inequality (35), we get

(1/r) Σ_{i=1}^k λ̄_i (e^{r(f_i(x) − f_i(x̄))} − 1) + (1/r) (Σ_{i=1}^k λ̄_i) Σ_{j=1}^m μ̄_j (e^{r g_j^+(x)} − 1) < 0.   (36)

By the Karush–Kuhn–Tucker necessary optimality condition (12), we have Σ_{i=1}^k λ̄_i = 1. Hence, (36) yields

(1/r) Σ_{i=1}^k λ̄_i (e^{r(f_i(x) − f_i(x̄))} − 1) + (1/r) Σ_{j=1}^m μ̄_j (e^{r g_j^+(x)} − 1) < 0.   (37)

By assumption, the objective function f and the constraint function g are locally Lipschitz r-invex at x̄ on X with respect to the same function η. Then, by Definition 9, the inequalities

(1/r)e^{r f_i(x)} ≥ (1/r)e^{r f_i(x̄)}[1 + r⟨ξ_i, η(x, x̄)⟩], ξ_i ∈ ∂f_i(x̄), i = 1, …, k,
(1/r)e^{r g_j(x)} ≥ (1/r)e^{r g_j(x̄)}[1 + r⟨ζ_j, η(x, x̄)⟩], ζ_j ∈ ∂g_j(x̄), j = 1, …, m,

hold for all x ∈ X; in particular, they are satisfied at the point x above. Dividing these inequalities by e^{r f_i(x̄)} > 0 and e^{r g_j(x̄)} > 0, respectively, multiplying by λ̄_i ≥ 0 and μ̄_j ≥ 0, and adding both sides of the resulting inequalities, we get that the inequality

(1/r) Σ_{i=1}^k λ̄_i (e^{r(f_i(x) − f_i(x̄))} − 1) + (1/r) Σ_{j=1}^m μ̄_j (e^{r(g_j(x) − g_j(x̄))} − 1) ≥ ⟨Σ_{i=1}^k λ̄_i ξ_i + Σ_{j=1}^m μ̄_j ζ_j, η(x, x̄)⟩

holds for every ξ_i ∈ ∂f_i(x̄), i = 1, …, k, and ζ_j ∈ ∂g_j(x̄), j = 1, …, m. By the Karush–Kuhn–Tucker necessary optimality condition (10), the subgradients ξ_i ∈ ∂f_i(x̄) and ζ_j ∈ ∂g_j(x̄) can be chosen so that Σ_{i=1}^k λ̄_i ξ_i + Σ_{j=1}^m μ̄_j ζ_j = 0. Hence,

(1/r) Σ_{i=1}^k λ̄_i (e^{r(f_i(x) − f_i(x̄))} − 1) + (1/r) Σ_{j=1}^m μ̄_j (e^{r(g_j(x) − g_j(x̄))} − 1) ≥ 0.

Using the Karush–Kuhn–Tucker necessary optimality condition (11), that is, μ̄_j g_j(x̄) = 0, j ∈ J, we obtain μ̄_j (e^{r(g_j(x) − g_j(x̄))} − 1) = μ̄_j (e^{r g_j(x)} − 1). Since g_j^+(x) ≥ g_j(x) and the function t ↦ (1/r)(e^{rt} − 1) is increasing, it follows that

(1/r) Σ_{i=1}^k λ̄_i (e^{r(f_i(x) − f_i(x̄))} − 1) + (1/r) Σ_{j=1}^m μ̄_j (e^{r g_j^+(x)} − 1) ≥ 0,

contradicting (37). Hence, the proof of this theorem is completed.
The following corollary follows directly from Theorem 26.
Corollary 27 Let x̄ be a weak Pareto solution in the multiobjective programming problem (VP) and all hypotheses of Theorem 26 be fulfilled. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set c ≥ M max{μ_j : j ∈ J}), then x̄ is also a weak Pareto solution in the associated vector penalized optimization problem (VP_r(c)) with the vector exact exponential penalty function.
Now, under stronger assumptions, we establish the relationship between a Karush–Kuhn–Tucker point in the considered multiobjective programming problem (VP) and a Pareto solution in its associated vector penalized problem (VP_r(c)) with the vector exact exponential penalty function.
Theorem 28 Let x̄ be a feasible solution in the multiobjective programming problem (VP) and the Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) be satisfied at x̄ with the Lagrange multipliers λ_i, i ∈ I, μ_j, j ∈ J. Furthermore, assume that one of the following hypotheses is satisfied:
(i) the Lagrange multipliers λ_i, i ∈ I, associated to the objectives f_i, are positive real numbers and, moreover, the objective function f and the constraint function g are r-invex at x̄ on X with respect to η;
(ii) the objective function f is strictly r-invex at x̄ on X with respect to η and the constraint function g is r-invex at x̄ on X with respect to η.
If M = max{e^{r f_i(x̄)} : i ∈ I} and the penalty parameter c is assumed to be sufficiently large (it is sufficient to set c ≥ M max{μ_j : j ∈ J}), then x̄ is also a Pareto solution in the associated vector penalized problem (VP_r(c)) with the vector exact exponential penalty function.
Proof The proof of this theorem is similar to that of Theorem 26.
Corollary 29 Let x̄ be a Pareto solution in the multiobjective programming problem (VP) and all hypotheses of Theorem 28 be fulfilled. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set c ≥ M max{μ_j : j ∈ J}), then x̄ is also a Pareto solution in the associated vector penalized problem (VP_r(c)) with the vector exact exponential penalty function.
Now, under stronger assumptions, we establish the converse results to those proved above. Namely, we prove that, for sufficiently large values of the penalty parameter c, if x̄ is a (weak) Pareto solution in the vector penalized problem (VP_r(c)) with the vector exact exponential penalty function, then it is also a (weak) Pareto solution in the original multiobjective programming problem (VP). To prove this result, we assume that both the objective functions and the constraint functions are r-invex at x̄ on X with respect to the same function η. We also show that there exists a finite threshold c̄ of the penalty parameter c such that, for every penalty parameter c exceeding this threshold, a (weak) Pareto solution in the associated vector penalized optimization problem (VP_r(c)) with the vector exact exponential penalty function is a (weak) Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP).
Theorem 30 Let D be a compact subset of R^n and x̄ be a weak Pareto solution in the vector penalized problem (VP_r(c)) with the vector exact exponential penalty function, where c is assumed to be sufficiently large. Further, assume that the objective function f and the constraint function g are r-invex at x̄ on X with respect to the same function η. Then x̄ is also a weak Pareto solution in the given multiobjective programming problem (VP).
Proof By assumption, x̄ is a weak Pareto solution in the vector penalized problem (VP_r(c)) with the vector exact exponential penalty function. We consider two cases.
First, assume that x̄ ∈ D. Then, by definition of the vector penalized problem (VP_r(c)) with the vector exact exponential penalty function, it follows that
∼∃_{x∈D} (1/r)e^{r f(x)} + c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x)} − 1) < (1/r)e^{r f(x̄)} + c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1). (38)
Hence, since, by (14), the penalty terms vanish on D and the function t ↦ (1/r)e^{rt} is strictly increasing, (38) implies that the relation
∼∃_{x∈D} f(x) < f(x̄) (39)
holds, by which we conclude that x̄ is weakly Pareto optimal in the considered multiobjective programming problem (VP). Then, for any c ≥ c̄ (where c̄ is equal to the penalty parameter for which the vector penalized problem (VP_r(c)) is defined), a weak Pareto solution x̄ in each vector penalized problem (VP_r(c)) with the vector exact exponential penalty function is a weak Pareto solution also in the considered multiobjective programming problem (VP). Thus, in the case when x̄ ∈ D, the result follows directly from (39).
Now, suppose that x̄ ∉ D. Since x̄ is a weak Pareto solution in the vector penalized problem (VP_r(c)), by Theorem 17, there exists λ ∈ R^k, λ ≥ 0, Σ_{i=1}^{k} λ_i = 1, such that
0 ∈ Σ_{i=1}^{k} λ_i ∂P_{ri}(x̄, c). (40)
By definition of the vector exact exponential penalty function, it follows that
0 ∈ Σ_{i=1}^{k} λ_i ∂((1/r)e^{r f_i(x̄)} + c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1)). (41)
By assumption, all functions g_j, j = 1, …, m, are locally Lipschitz on X. Then, by definition, the functions (1/r)(e^{r g_j^+(·)} − 1), j = 1, …, m, are also locally Lipschitz on X. Since all λ_i are nonnegative, equality holds in Corollary 6. Thus, (41) yields
0 ∈ Σ_{i=1}^{k} λ_i ∂((1/r)e^{r f_i(x̄)}) + c Σ_{i=1}^{k} λ_i ∂(Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1)). (42)
Then, by Lemma 4, it follows that
0 ∈ Σ_{i=1}^{k} λ_i ∂((1/r)e^{r f_i(x̄)}) + c Σ_{j=1}^{m} ∂((1/r)(e^{r g_j^+(x̄)} − 1)) Σ_{i=1}^{k} λ_i. (43)
Hence, by Proposition 5 and Theorem 2.3.9 [30], it follows that
0 ∈ Σ_{i=1}^{k} λ_i e^{r f_i(x̄)} ∂f_i(x̄) + c Σ_{j=1}^{m} ∂((1/r)(e^{r g_j^+(x̄)} − 1)) Σ_{i=1}^{k} λ_i. (44)
By assumption, the objective function f and the constraint function g are r-invex at x̄ on X with respect to the same function η. Since the constraint functions g_j, j ∈ J, are locally Lipschitz on X and r-invex at x̄ on X with respect to the same function η, by Theorem 14, the functions (1/r)(e^{r g_j^+(·)} − 1), j ∈ J, are invex at x̄ on X with respect to the same function η. Then, the following inequalities
(1/r)e^{r f_i(x)} ≥ (1/r)e^{r f_i(x̄)} [1 + r ⟨ξ_i, η(x, x̄)⟩], ∀ξ_i ∈ ∂f_i(x̄), i = 1, …, k, (45)
(1/r)(e^{r g_j^+(x)} − 1) − (1/r)(e^{r g_j^+(x̄)} − 1) ≥ ⟨ζ_j^+, η(x, x̄)⟩, ∀ζ_j^+ ∈ ∂((1/r)(e^{r g_j^+(x̄)} − 1)), j = 1, …, m, (46)
hold for all x ∈ X. Multiplying (46) by c > 0, we get
c (1/r)(e^{r g_j^+(x)} − 1) − c (1/r)(e^{r g_j^+(x̄)} − 1) ≥ c ⟨ζ_j^+, η(x, x̄)⟩, ∀ζ_j^+ ∈ ∂((1/r)(e^{r g_j^+(x̄)} − 1)), j = 1, …, m. (47)
Adding both sides of the inequalities (47), we obtain
c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x)} − 1) − c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1) ≥ c Σ_{j=1}^{m} ⟨ζ_j^+, η(x, x̄)⟩, ∀ζ_j^+ ∈ ∂((1/r)(e^{r g_j^+(x̄)} − 1)), j = 1, …, m. (48)
Further, multiplying (45) by λ_i ≥ 0, we get
λ_i (1/r)e^{r f_i(x)} − λ_i (1/r)e^{r f_i(x̄)} ≥ λ_i e^{r f_i(x̄)} ⟨ξ_i, η(x, x̄)⟩, ∀ξ_i ∈ ∂f_i(x̄), i = 1, …, k. (49)
Thus, by (48) and (49), for any i = 1, …, k, the following inequalities
λ_i [(1/r)e^{r f_i(x)} + c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x)} − 1)] − λ_i [(1/r)e^{r f_i(x̄)} + c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1)] ≥ ⟨λ_i e^{r f_i(x̄)} ξ_i + c λ_i Σ_{j=1}^{m} ζ_j^+, η(x, x̄)⟩ (50)
hold for every ξ_i ∈ ∂f_i(x̄), i = 1, …, k, and ζ_j^+ ∈ ∂((1/r)(e^{r g_j^+(x̄)} − 1)), j = 1, …, m. Adding both sides of the above inequalities, we get
(1/r) Σ_{i=1}^{k} λ_i e^{r f_i(x)} + c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x)} − 1) Σ_{i=1}^{k} λ_i − (1/r) Σ_{i=1}^{k} λ_i e^{r f_i(x̄)} − c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1) Σ_{i=1}^{k} λ_i ≥ ⟨Σ_{i=1}^{k} λ_i e^{r f_i(x̄)} ξ_i + c Σ_{j=1}^{m} ζ_j^+ Σ_{i=1}^{k} λ_i, η(x, x̄)⟩. (51)
Since Σ_{i=1}^{k} λ_i = 1, (51) implies that the inequality
(1/r) Σ_{i=1}^{k} λ_i e^{r f_i(x)} + c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x)} − 1) − (1/r) Σ_{i=1}^{k} λ_i e^{r f_i(x̄)} − c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1) ≥ ⟨Σ_{i=1}^{k} λ_i e^{r f_i(x̄)} ξ_i + c Σ_{j=1}^{m} ζ_j^+, η(x, x̄)⟩ (52)
holds for every ξ_i ∈ ∂f_i(x̄), i = 1, …, k, and ζ_j^+ ∈ ∂((1/r)(e^{r g_j^+(x̄)} − 1)), j = 1, …, m. Hence, by (44), choosing ξ_i and ζ_j^+ such that Σ_{i=1}^{k} λ_i e^{r f_i(x̄)} ξ_i + c Σ_{j=1}^{m} ζ_j^+ = 0, the following inequality
(1/r) Σ_{i=1}^{k} λ_i e^{r f_i(x)} + c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x)} − 1) − (1/r) Σ_{i=1}^{k} λ_i e^{r f_i(x̄)} − c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1) ≥ 0 (53)
holds for all x ∈ X. By (14), for each x ∈ D, it follows that
Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x)} − 1) = 0. (54)
Combining (53) and (54), we get that the following inequality
(1/r) Σ_{i=1}^{k} λ_i e^{r f_i(x)} − (1/r) Σ_{i=1}^{k} λ_i e^{r f_i(x̄)} − c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1) ≥ 0 (55)
holds for all x ∈ D. Since x̄ is not feasible in the given multiobjective programming problem (VP), by (14), it follows that
Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1) > 0. (56)
By assumption, c is sufficiently large. Let c satisfy
c > c̄ = max{ [(1/r)e^{r f_i(x)} − (1/r)e^{r f_i(x̄)}] / [Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1)] : i ∈ I, x ∈ D }. (57)
Now, we prove that c̄ ≥ 0. Indeed, by assumption, x̄ is a weak Pareto solution in the vector penalized optimization problem (VP_r(c)) with the vector exact exponential penalty function. Thus, by (39), it follows that, for every x ∈ D, there exists at least one i ∈ I such that (1/r)e^{r f_i(x)} − (1/r)e^{r f_i(x̄)} ≥ 0. Hence, in fact, (57) implies that c̄ ≥ 0.
We now show that
c̄ ≥ M max{μ_j : j ∈ J}. (58)
Suppose, contrary to the result, that
c̄ < M max{μ_j : j ∈ J}. (60)
Since (57) is fulfilled for all c > c̄, there exists the penalty parameter c* > c̄ such that
c* = M max{μ_j : j ∈ J}. (61)
Hence, (57) together with (56) and (61) gives
∀_{x∈D} ∀_{i∈I} (1/r)e^{r f_i(x)} < (1/r)e^{r f_i(x̄)} + c* Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1). (62)
Since Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x)} − 1) = 0 for all x ∈ D, (62) implies that the following inequality
∀_{x∈D} ∀_{i∈I} (1/r)e^{r f_i(x)} + c* Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x)} − 1) < (1/r)e^{r f_i(x̄)} + c* Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1) (63)
holds, contradicting the weak efficiency of x̄ in the vector penalized optimization problem (VP_r(c*)). Thus, (58) is satisfied.
By assumption, x̄ is a weak Pareto solution in the vector penalized optimization problem (VP_r(c)) with the vector exact exponential penalty function for sufficiently large c (as it follows by (57), for all c > c̄). Hence, by Definition 16, it follows that
∼∃_{x∈X} P_r(x, c) < P_r(x̄, c), (64)
that is,
∼∃_{x∈X} (1/r)e^{r f(x)} + c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x)} − 1) < (1/r)e^{r f(x̄)} + c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1). (65)
Since D ⊂ X and the penalty terms vanish on D, the relation (65) yields
∼∃_{x∈D} (1/r)e^{r f(x)} < (1/r)e^{r f(x̄)} + c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1). (66)
The above relation is equivalent to
∀_{x∈D} ∃_{i∈I} (1/r)e^{r f_i(x)} ≥ (1/r)e^{r f_i(x̄)} + c Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1). (67)
Thus, by (56), the relation (67) implies that
∀_{x∈D} ∃_{i∈I} c ≤ [(1/r)e^{r f_i(x)} − (1/r)e^{r f_i(x̄)}] / [Σ_{j=1}^{m} (1/r)(e^{r g_j^+(x̄)} − 1)]
holds, contradicting (57). Therefore, the case x̄ ∉ D is impossible. This means that x̄ is feasible in the multiobjective programming problem (VP). Hence, by the feasibility of x̄ in the constrained optimization problem (VP), (66) yields
∀_{x∈D} ∃_{i∈I} f_i(x) ≥ f_i(x̄).
By Definition 16, the above inequality implies that x̄ is a weak Pareto solution in the multiobjective programming problem (VP). This completes the proof of this theorem.
Theorem 31 Let D be a compact subset of R^n and x̄ be a Pareto solution in the vector penalized problem (VP_r(c)) with the vector exact exponential penalty function. Further, assume that the objective function f is strictly r-invex at x̄ on X and the constraint function g is r-invex at x̄ on X with respect to the same function η. If the penalty parameter c is sufficiently large, then x̄ is also a Pareto solution in the considered multiobjective programming problem (VP).
Proof The proof of this theorem is similar to the proof of Theorem 30.
Corollary 32 Let all hypotheses of Corollary 27 (or Corollary 29) and Theorem 30 (or Theorem 31) be fulfilled. Then the sets of weak Pareto solutions (Pareto solutions) in the given multiobjective programming problem (VP) and its associated vector penalized optimization problem (VP_r(c)) with the vector exact exponential penalty function coincide.
The importance of this result lies in the fact that it guarantees the existence of a finite penalty parameter c such that the sets of weak Pareto solutions (Pareto solutions) in the given multiobjective programming problem (VP) and its associated vector penalized optimization problem (VP_r(c)) with the vector exact exponential penalty function are the same.
Remark 33 Note that, since the lower bound for the penalty parameter is finite, the sequence of vector penalized subproblems (16) generated by the presented algorithm is also finite, in contrast to inexact penalty function methods.
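The finite termination noted in this remark can be sketched in code. The sketch below is not the paper's algorithm generating the subproblems (16): it is a schematic exterior-penalty loop, under purely illustrative assumptions, which minimizes by crude grid search an equal-weight scalarization of an r = 1 exponential penalized problem for a hypothetical two-objective problem with one constraint, and doubles the penalty parameter c until the computed minimizer is feasible. Exactness is what allows the loop to stop at a finite value of c.

```python
# Hypothetical data (illustrative only, not from the paper):
# f1(x) = ln((x - 2)^2 + 1), f2(x) = ln((x - 3)^2 + 1), g(x) = ln(x^2 - x + 1) <= 0,
# so the feasible set is D = [0, 1].  For r = 1 the exponential penalty term
# e^{g^+(x)} - 1 reduces to max(0, x^2 - x).
def violation(x):
    return max(0.0, x * x - x)

def penalized_weighted_sum(x, c, w=(0.5, 0.5)):
    # Equal-weight scalarization of the exponential penalized objectives:
    # e^{f_i(x)} + c * (e^{g^+(x)} - 1), i = 1, 2 (with r = 1).
    p1 = (x - 2.0) ** 2 + 1.0 + c * violation(x)
    p2 = (x - 3.0) ** 2 + 1.0 + c * violation(x)
    return w[0] * p1 + w[1] * p2

grid = [i / 1000.0 for i in range(-1000, 4001)]  # crude search over [-1, 4]
c = 1.0
for _ in range(60):  # safety cap on the number of penalty updates
    x_star = min(grid, key=lambda x: penalized_weighted_sum(x, c))
    if violation(x_star) <= 1e-8:  # minimizer is feasible: exact penalty reached
        break
    c *= 2.0
```

For this toy data the loop stops after finitely many doublings of c, with the minimizer landing on the boundary of the feasible set D = [0, 1]; with an inexact (e.g. quadratic) penalty the violation would only tend to zero as c → ∞.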
Now, we illustrate the results established above by means of a nondifferentiable multiobjective programming problem with r-invex functions, which we solve by using the introduced vector exact exponential penalty function method.
Example 34 Consider the following nonsmooth multiobjective programming problem:
f(x) = (ln(x_1^2 + |x_1| + x_2^2 + 1), ln(x_1^2 + x_2^2 − x_2 + 1), ln(x_1^2 + arctan^2(x_2) + |x_2| + 1)) → V-min
g_1(x) = ln(x_1^2 − x_1 + 1) ≤ 0,
g_2(x) = ln(x_2^2 − x_2 + 1) ≤ 0. (VP1)
Note that D = {(x_1, x_2) ∈ R^2 : 0 ≤ x_1 ≤ 1 ∧ 0 ≤ x_2 ≤ 1} and x̄ = (0, 0) is a Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP1). Further, it is not difficult to prove, by Definition 10, that the objective function f is strictly 1-invex on R^2 with respect to the function η : R^2 × R^2 → R^2 and, by Definition 9, that the constraints g_1 and g_2 are 1-invex on R^2 with respect to the same function η, where
η(x, x̄) = (η_1(x, x̄), η_2(x, x̄)).
We use the vector exact exponential penalty function method for solving the considered nonconvex nondifferentiable vector optimization problem (VP1). Hence, we construct the following unconstrained vector penalized problem (VP1_1(c)) with the vector exact exponential penalty function (for r = 1):
P_1(x, c) = (x_1^2 + |x_1| + x_2^2 + 1 + c p(x), x_1^2 + x_2^2 − x_2 + 1 + c p(x), x_1^2 + arctan^2(x_2) + |x_2| + 1 + c p(x)) → V-min,
where p(x) = Σ_{j=1}^{2} (e^{g_j^+(x)} − 1) = max{0, x_1^2 − x_1} + max{0, x_2^2 − x_2}.
Further, the Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) are satisfied at x̄ = (0, 0) with the Lagrange multipliers λ = (λ_1, λ_2, λ_3) ≥ 0 and μ = (μ_1, μ_2) satisfying λ_1 ξ_1 − μ_1 = 0, −λ_2 + λ_3 ξ_3 − μ_2 = 0, λ_1 + λ_2 + λ_3 = 1, where ξ_1 ∈ [−1, 1], ξ_3 ∈ [−1, 1]. As it follows from these relations, max{μ_j : j = 1, 2} = 1 and M = 1. Therefore, if we set c ≥ 1, then, by Corollary 27, x̄ = (0, 0), being a Pareto solution in the considered multiobjective programming problem (VP1), is also a Pareto solution in each of its associated vector penalized optimization problems (VP1_1(c)) with the vector exact exponential penalty function. Since the hypotheses of Theorem 31 are also fulfilled, the converse result is true as well. Note that, for the considered constrained multiobjective programming problem (VP1), it is not possible to use the similar result established under convexity assumptions by Antczak [17]. This follows from the fact that none of the functions constituting the considered nonsmooth vector optimization problem (VP1) is convex on R^2. It is also difficult to show that the functions involved in problem (VP1) are invex with respect to the same function η : R^2 × R^2 → R^2. Therefore, it would be difficult to prove a similar result under the invexity assumption. However, the results proved in the paper are applicable to the considered nonconvex multiobjective programming problem (VP1), since the functions involved in it are 1-invex on R^2 with respect to the same function η. Thus, the introduced vector exact exponential penalty function method is applicable to a larger class of nonconvex vector optimization problems than the classical vector exact penalty function method considered in [19].
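The conclusion of this example can be checked numerically. The sketch below evaluates the r = 1 exponential penalized objectives of (VP1) with c = 1 on a coarse grid and verifies that no grid point strictly dominates x̄ = (0, 0); the reading of the nondifferentiable terms as |x_1| and |x_2|, as well as the grid itself, are assumptions of this illustration.

```python
import math

def penalized_vp1(x1, x2, c=1.0):
    # r = 1 exponential penalty term: e^{g_j^+(x)} - 1 = max(0, x_j^2 - x_j)
    pen = max(0.0, x1 * x1 - x1) + max(0.0, x2 * x2 - x2)
    return (
        x1 * x1 + abs(x1) + x2 * x2 + 1.0 + c * pen,
        x1 * x1 + x2 * x2 - x2 + 1.0 + c * pen,
        x1 * x1 + math.atan(x2) ** 2 + abs(x2) + 1.0 + c * pen,
    )

p_bar = penalized_vp1(0.0, 0.0)  # value at the candidate Pareto solution (0, 0)
steps = [i / 20.0 for i in range(-40, 41)]  # grid over [-2, 2]^2
dominating = sum(
    1
    for x1 in steps
    for x2 in steps
    if all(p < q for p, q in zip(penalized_vp1(x1, x2), p_bar))
)
```

No grid point improves all three penalized objectives simultaneously; indeed the first penalized objective is bounded below by 1 = P_1(0, 0) everywhere.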
Now, we consider an example of a nondifferentiable vector optimization problem in which not all of the involved functions are r-invex. We show that, in such a case, there is no equivalence between the sets of Pareto solutions in the considered nondifferentiable vector optimization problem and its associated vector penalized problem constructed in the introduced vector exponential penalty function method.
Example 35 Consider the following nondifferentiable multiobjective programming problem:
f(x) = (ln(x^3 + 27), ln(3x^3 + 81)) → V-min
g(x) = ln(x^2 − x + 1) ≤ 0,
x ∈ X = {x ∈ R : x > −3}. (VP2)
Note that D = {x ∈ X : 0 ≤ x ≤ 1} and x̄ = 0 is a Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP2) with the optimal value f(x̄) = (ln 27, ln 81). Further, none of the objective functions is r-invex with respect to any function η : X × X → R (see Theorem 12 [31]). However, we use the vector exact exponential penalty function method for solving the considered nonconvex nondifferentiable vector optimization problem (VP2). Therefore, we construct the following vector penalized optimization problem (VP2_r(c)) with the vector exact exponential penalty function:
P_r(x, c) = ((1/r)(x^3 + 27)^r + c max{0, (1/r)((x^2 − x + 1)^r − 1)}, (1/r)(3x^3 + 81)^r + c max{0, (1/r)((x^2 − x + 1)^r − 1)}) → V-min. (VP2_r(c))
It is not difficult to see that P_r(·, c) does not have a Pareto solution at x̄ = 0 for any c > 0. This follows from the fact that the downward order of growth of f exceeds the upward order of growth of g at x̄ when moving from x̄ towards smaller values. Indeed, note that, for any r > 0, P_r(x, c) → ((1/r)c(13^r − 1), (1/r)c(13^r − 1)) < ((1/r)27^r, (1/r)81^r) = P_r(0, c) when x → −3 for c ∈ (0, 27^r/(13^r − 1)), whereas, for any r < 0, P_r(x, c) → (−∞, −∞) when x → −3 for any c > 0. As it follows even from this example, the r-invexity notion is an essential assumption in proving the equivalence between the sets of (weak) Pareto solutions in the original multiobjective programming problem and its exact penalized vector optimization problem with the vector exact exponential penalty function for penalty parameters exceeding the given threshold.
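A short numerical check illustrates this failure of exactness. Under the reading of (VP2) used above (f(x) = (ln(x^3 + 27), ln(3x^3 + 81)) and g(x) = ln(x^2 − x + 1), an assumption of this reconstruction), the sketch below takes r = 1 and c = 1 < 27/12 and exhibits a point just inside X = {x ∈ R : x > −3} that is strictly better than x̄ = 0 in both penalized objectives.

```python
def penalized_vp2(x, c=1.0, r=1.0):
    # Exponential penalized objectives of (VP2) as reconstructed above (r = 1)
    pen = max(0.0, (1.0 / r) * ((x * x - x + 1.0) ** r - 1.0))
    p1 = (1.0 / r) * (x ** 3 + 27.0) ** r + c * pen
    p2 = (1.0 / r) * (3.0 * x ** 3 + 81.0) ** r + c * pen
    return p1, p2

p_bar = penalized_vp2(0.0)     # value at the candidate solution x = 0
p_near = penalized_vp2(-2.99)  # a point just inside X = {x in R : x > -3}
```

Both components of p_near lie below those of p_bar, so x̄ = 0 is dominated in the penalized problem even though it solves (VP2) itself; by the analysis above this happens for every c ∈ (0, 27^r/(13^r − 1)) when r > 0.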
In the next example, we compare the presented vector exact exponential penalty function method with the classical vector exact l_1 penalty function method.
Example 36 Consider the following nondifferentiable multiobjective programming problem:
f(x) = (ln(x^2 + |x| + 1), ln(x^2 − (1/2)x + 1)) → V-min
g_1(x) = ln(x^2 − x + 1) ≤ 0,
x ∈ R. (VP3)
Note that D = {x ∈ R : 0 ≤ x ≤ 1} and x̄ = 0 is a Pareto solution in the considered nonconvex nondifferentiable vector optimization problem (VP3). It can be shown by definition that the objective function is strictly 1-invex and the constraint function is 1-invex with respect to the same function η : R × R → R, where η(x, x̄) = x − x̄. If we use the presented vector exact exponential penalty function method for solving (VP3) (with r = 1), then we construct the following unconstrained vector optimization problem (VP3_1(c)):
P_1(x, c) = (x^2 + |x| + 1 + c max{0, x^2 − x}, x^2 − (1/2)x + 1 + c max{0, x^2 − x}) → V-min. (VP3_1(c))
If we use the classical vector exact l_1 penalty function method for solving (VP3), then we have to solve the following unconstrained vector optimization problem (VP3_0(c)):
P_0(x, c) = (ln(x^2 + |x| + 1) + c max{0, ln(x^2 − x + 1)}, ln(x^2 − (1/2)x + 1) + c max{0, ln(x^2 − x + 1)}) → V-min. (VP3_0(c))
Note that, in the first case, we have to solve a convex vector optimization problem, whereas, in the second case, the unconstrained vector optimization problem constructed in the classical exact l_1 penalty function method is not convex. Therefore, it is not possible to use the methods devised for convex unconstrained vector optimization problems to solve it.
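The convexity claim can be probed numerically with a midpoint test. The sketch below compares the first component of each penalized function for c = 1: for the exponential version the midpoint inequality holds on all sampled pairs (it is a sum of convex functions), while for the l_1 version a violating pair is found; the sample set is an arbitrary assumption of this illustration.

```python
import math

C = 1.0  # penalty parameter

def p_exp(x):
    # first component of (VP3_1(c)): x^2 + |x| + 1 + c*max(0, x^2 - x)
    return x * x + abs(x) + 1.0 + C * max(0.0, x * x - x)

def p_l1(x):
    # first component of (VP3_0(c)): ln(x^2 + |x| + 1) + c*max(0, ln(x^2 - x + 1))
    return math.log(x * x + abs(x) + 1.0) + C * max(0.0, math.log(x * x - x + 1.0))

pts = [i / 10.0 for i in range(-30, 41)]  # sample points in [-3, 4]

def worst_midpoint_violation(f):
    # largest violation of the convexity test f((a+b)/2) <= (f(a)+f(b))/2
    return max(
        f((a + b) / 2.0) - (f(a) + f(b)) / 2.0 for a in pts for b in pts
    )

exp_violation = worst_midpoint_violation(p_exp)  # stays <= 0 up to rounding
l1_violation = worst_midpoint_violation(p_l1)    # strictly positive: nonconvex
```

For instance, the pair a = 0, b = 4 already violates the midpoint inequality for p_l1, confirming that the l_1 penalized problem cannot be handed to a convex solver, whereas the exponential penalized problem can.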
Remark 37 Now, let us return to Example 24. One of the strategies to overcome the difficulties associated with a too small value of the penalty parameter in the presented algorithm is simply to set the penalty parameter c_n to be larger than the threshold equal to M max{μ_j : j ∈ J}. For the multiobjective programming problem (VP0) considered in Example 24, μ = 1 and M = e^0 = 1 and, therefore, the threshold of the penalty parameter is equal to 1. Thus, it is now clear why the vector exponential penalty function P_1(·, c) in Example 24 has a Pareto solution at x̄ = 1 for all penalty parameters c > 1.
5 Conclusion
In this paper, the vector exact exponential penalty function method has been used for solving nonconvex nondifferentiable multiobjective programming problems with inequality constraints. The convergence of the introduced vector exponential penalty function method has been established. Further, it has been proved that there exists a finite threshold value c̄ of the penalty parameter c such that, for every penalty parameter c exceeding this threshold, any (weak) Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP) is a (weak) Pareto solution in its associated vector penalized problem (VP_r(c)) with the vector exact exponential penalty function. We have established this result for nondifferentiable multiobjective programming problems involving r-invex functions with respect to the same function η. The converse result has also been established for such nonconvex nondifferentiable multiobjective programming problems under the assumption that the penalty parameter c is sufficiently large. Thus, the equivalence between the sets of (weak) Pareto optimal solutions in the considered nonconvex multiobjective programming problem (VP) and its associated vector penalized problem (VP_r(c)) with the vector exact exponential penalty function has been proved for sufficiently large penalty parameters c. The vector exact exponential penalty function method analyzed in the paper thus turns out to be useful for solving a class of nonconvex nonsmooth multiobjective programming problems with r-invex functions (with respect to the same function η), that is, a larger class of vector optimization problems than the convex and invex ones. Also, in some cases, the vector exact exponential penalty function method turns out to be more useful than the classical vector exact l_1 penalty function method. This is a consequence of the fact that, in some cases, a vector penalized problem with the vector exact exponential penalty function is easier to solve than the vector penalized problem constructed in the classical vector exact l_1 penalty function method. This property of the introduced vector exact exponential penalty function method is, of course, important from the practical point of view. In this way, due to the importance of exact l_1 penalty function methods in nonlinear scalar programming, similar results have been established in the vectorial case, showing that nonconvex nonsmooth vector optimization problems can be solved by using such methods, in the considered case the vector exact exponential penalty function method.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution,
and reproduction in any medium, provided you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Antczak, T.: An η-approximation method in nonlinear vector optimization. Nonlinear Anal. Theory Methods Appl. 63, 225–236 (2005)
2. Antczak, T.: An η-approximation method for nonsmooth multiobjective programming problems. ANZIAM J. 49, 309–323 (2008)
3. Jahn, J.: Scalarization in vector optimization. Math. Program. 29, 203–218 (1984)
4. Zangwill, W.I.: Nonlinear programming via penalty functions. Manag. Sci. 13, 344–358 (1967)
5. Pietrzykowski, T.: An exact potential method for constrained maxima. SIAM J. Numer. Anal. 6, 299–304 (1969)
6. Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms. Wiley, New York (1991)
7. Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press Inc., New York (1982)
8. Charalambous, C.: On conditions for optimality of the nonlinear l1 problem. Math. Program. 17, 123–135 (1979)
9. Charalambous, C.: A lower bound for the controlling parameters of the exact penalty functions. Math. Program. 15, 278–290 (1978)
10. Di Pillo, G., Grippo, L.: Exact penalty functions in constrained optimization. SIAM J. Control Optim. 27, 1333–1360 (1989)
11. Evans, J.P., Gould, F.J., Tolle, J.W.: Exact penalty functions in nonlinear programming. Math. Program. 4, 72–97 (1973)
12. Fletcher, R.: An exact penalty function for nonlinear programming with inequalities. Math. Program. 5, 129–150 (1973)
13. Han, S.P., Mangasarian, O.L.: Exact penalty functions in nonlinear programming. Math. Program. 17, 251–269 (1979)
14. Mangasarian, O.L.: Sufficiency of exact penalty minimization. SIAM J. Control Optim. 23, 30–37 (1985)
15. Peressini, A.L., Sullivan, F.E., Uhl Jr., J.J.: The Mathematics of Nonlinear Programming. Springer, New York (1988)
16. Rosenberg, E.: Exact penalty functions and stability in locally Lipschitz programming. Math. Program. 30, 340–356 (1984)
17. Antczak, T.: A new exact exponential penalty function method and nonconvex mathematical programming. Appl. Math. Comput. 217, 6652–6662 (2011)
18. Antczak, T.: The exact l1 penalty function method for nonsmooth invex optimization problems. In: Hömberg, D., Tröltzsch, F. (eds.) System Modelling and Optimization, pp. 461–471. 25th IFIP TC 7 Conference, CSMO 2011, AICT 391. Springer, Berlin (2013)
19. Antczak, T.: The vector exact l1 penalty method for nondifferentiable convex multiobjective programming problems. Appl. Math. Comput. 218, 9095–9106 (2012)
20. Murphy, F.: A class of exponential penalty functions. SIAM J. Control 12, 679–687 (1974)
21. Alvarez, F.: Absolute minimizer in convex programming by exponential penalty. J. Convex Anal. 7, 197–202 (2002)
22. Alvarez, F., Cominetti, R.: Primal and dual convergence of a proximal point exponential penalty method for linear programming. Math. Program. Ser. A 93, 87–96 (2002)
23. Bertsekas, D.P., Tseng, P.: On the convergence of the exponential multiplier method for convex programming. Math. Program. 60, 1–19 (1993)
24. Jayswal, A., Choudhury, S.: An exact l1 exponential penalty function method for multiobjective optimization problems with exponential-type invexity. J. Oper. Res. Soc. China 2, 75–91 (2014)
25. Jayswal, A., Choudhury, S.: Convergence of exponential penalty function method for multiobjective fractional programming problems. Ain Shams Eng. J. 5, 1371–1376 (2014)
26. Mandal, P., Giri, B.C., Nahak, C.: Variational problems and l1 exact exponential penalty function with (p, r)-ρ-(η, θ)-invexity. AMO 16, 243–259 (2014)
27. Liu, S., Feng, E.: The exponential penalty function method for multiobjective programming problems. Optim. Methods Softw. 25, 667–675 (2010)
28. Parwadi, M.: Exponential penalty methods for solving linear programming problems. In: Proceedings of the World Congress on Engineering and Computer Science 2011, Vol. II, WCECS 2011, San Francisco, 19–21 Oct 2011
29. Strodiot, J.J., Nguyen, V.U.: An exponential penalty method for nondifferentiable minimax problems with general constraints. J. Optim. Theory Appl. 27, 205–219 (1979)
30. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)
31. Antczak, T.: Lipschitz r-invex functions and nonsmooth programming. Numer. Funct. Anal. Optim. 23, 265–283 (2002)
32. Antczak, T.: Optimality and duality for nonsmooth multiobjective programming problems with V-r-invexity. J. Glob. Optim. 45, 319–334 (2009)
33. Giorgi, G., Guerraggio, A.: The notion of invexity in vector optimization: smooth and nonsmooth case. In: Crouzeix, J.P., Martinez-Legaz, J.E., Volle, M. (eds.) Generalized Convexity, Generalized Monotonicity. Proceedings of the Fifth Symposium on Generalized Convexity, Luminy. Kluwer Academic Publishers (1997)
34. Kim, D.S., Schaible, S.: Optimality and duality for invex nonsmooth multiobjective programming problems. Optimization 53, 165–176 (2004)
35. Craven, B.D.: Nonsmooth multiobjective programming. Numer. Funct. Anal. Optim. 10, 49–64 (1989)
36. Ishizuka, Y., Shimizu, K.: Necessary and sufficient conditions for the efficient solutions of nondifferentiable multiobjective problems. IEEE Trans. Syst. Man Cybern. 14, 625–629 (1984)
37. Minami, M.: Weak Pareto-optimal necessary conditions in a nondifferentiable multiobjective program on a Banach space. J. Optim. Theory Appl. 41, 451–461 (1983)
38. Luc, D.T.: Theory of Vector Optimization. Lecture Notes in Economics and Mathematical Systems, vol. 319. Springer, Berlin (1989)
39. Craven, B.D.: Invex functions and constrained local minima. Bull. Aust. Math. Soc. 24, 357–366 (1981)