Optimization and Nonsmooth Analysis


In consequence, the dual action, which is a nonsmooth and nonconvex functional, has proven to be a valuable tool in the study of classical Hamiltonian systems, notably in the theory of periodic solutions. The generalized gradient can be defined for a very general class of functions, and will be so defined in Chapter 2. Our purpose here is to give a nontechnical summary of the main definitions for those whose primary interest lies in the results of later chapters. We begin the discussion with the simplest setting, that of a locally Lipschitz real-valued function f defined on ℝⁿ, n-dimensional Euclidean space.

R" n-dimensional Euclidean space. This is also referred to as a Lipschitz condition of rank K. One might conceive of using other expressions in defining a generalized directional derivative. In fact, it is something that one seeks to avoid, just as in differential calculus, where one rarely computes derivatives from the definition. Instead, one appeals to a body of theory that characterizes generalized gradients of certain kinds of functions, and to rules which relate the generalized gradient of some compound function e.

This calculus of generalized gradients is developed in detail in Chapter 2. We shall see that there are several equivalent ways of defining the generalized gradient. One alternative to the route taken above hinges upon Rademacher's Theorem, which asserts that a locally Lipschitz function is differentiable almost everywhere. We shall obtain in Section 2. a characterization of the generalized gradient along these lines. Now let C be a nonempty closed subset of ℝⁿ. An interesting, nonsmooth, Lipschitz function related to C is its distance function d_C, defined by d_C(x) = min{‖x − c‖ : c ∈ C}. The generalized directional derivative defined earlier can be used to develop a notion of tangency that does not require C to be smooth or convex.

The tangent cone T_C(x) to C at a point x in C is defined as follows: a vector v belongs to T_C(x) provided d_C°(x; v) ≤ 0. It can be shown that this condition does in fact specify a closed convex cone. Having defined a tangent cone, the likely candidate for the normal cone N_C(x) is the one obtained from T_C(x) by polarity. We shall prove in Section 2. that one has N_C(x) = cl{∪_{λ≥0} λ ∂d_C(x)}, where cl (or an overbar) denotes closure. This is probably the easiest way to see how the normal cones of Figure 1. arise. It turns out moreover that these analytic and geometric concepts can be linked to one another.
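In the standard development (the precise statements are those of Chapter 2), the analytic and the geometric objects are linked through the distance function and through the epigraph; a schematic summary, in the notation used above, is

\[
T_{C}(x) \;=\; \{\, v : d_{C}^{\circ}(x; v) \le 0 \,\}, \qquad
N_{C}(x) \;=\; \{\, \zeta : \langle \zeta, v \rangle \le 0 \ \text{for all } v \in T_{C}(x) \,\},
\]
\[
\partial f(x) \;=\; \{\, \zeta : (\zeta, -1) \in N_{\operatorname{epi} f}\bigl(x, f(x)\bigr) \,\},
\]

so that the cones attached to C are computable from the Lipschitz function d_C, while the generalized gradient of f is recoverable from the normal cone to its epigraph.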

This is an important factor in their utility, just as chess pieces gain in strength when used cooperatively. Thus far the functions we have been considering have been locally Lipschitz. This turns out to be a useful class of functions, and not only because a very satisfactory theory can be developed for it. As we shall see, the local Lipschitz property is a reasonable and verifiable hypothesis in a wide range of applications, and one that is satisfied in particular when smoothness or convexity is present.

Locally Lipschitz functions possess the important property of being closed under the main functional operations (sums, compositions, etc.). Nonetheless, there are good reasons for wanting to develop the theory for non-Lipschitz functions as well, and we shall do so in Chapter 2.

For now, let us mention only that the key is to use Eq. If the set in Figure 1. is considered, then at E, ∂f is empty, while at F it coincides with the real line. We propose now to describe the central problems that will serve as our paradigms, and to discuss their origins and relative merits. The calculus of variations is a beautiful body of mathematics in which the origins of several of today's mathematical subdisciplines can be found, and to which most of mathematics' historic names have contributed.

The basic problem in the calculus of variations is the following: to minimize, over all functions x in some class mapping [a, b] to ℝⁿ, an integral functional of the form sketched below. In the main developments of the theory (which, along with other matters, will be discussed in the following section), the given function L is smooth (four times continuously differentiable will do). Another classical version of the problem is the so-called problem of Bolza, in which the functional to be minimized is the sum of an endpoint term and such an integral (again, see below). An arc is an absolutely continuous function mapping [a, b] to ℝⁿ.
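In symbols (a sketch in notation chosen here, not the text's own numbered displays), the basic problem asks that

\[
J(x) \;=\; \int_{a}^{b} L\bigl(t, x(t), \dot{x}(t)\bigr)\, dt
\]

be minimized over the admissible functions x, while the problem of Bolza adds an endpoint cost ℓ:

\[
J(x) \;=\; \ell\bigl(x(a), x(b)\bigr) + \int_{a}^{b} L\bigl(t, x(t), \dot{x}(t)\bigr)\, dt .
\]

In the generalized problem of Bolza adopted below, ℓ and L are permitted to take the value +∞; fixed endpoint conditions x(a) = A, x(b) = B, for example, are then imposed simply by letting ℓ vanish when they hold and equal +∞ otherwise.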

We shall adopt the problem of minimizing the functional (2) over all arcs x as our first paradigm; we label it P_B. Thus the fixed endpoint problem is a special case of P_B. In similar fashion, we may define L to take account of further constraints on x and ẋ. While the extension of the problem of Bolza just described takes us beyond the pale of the classical calculus of variations, the formal resemblance that we take with us is a valuable guide in building an extended theory, as we shall see presently.

Let us first define the second paradigm. Let the word control signify any member of some given class of functions u mapping [a, b] to a given subset U of ℝᵐ. The problem P_C consists of choosing u and x₀ (and hence x) so as to minimize a given functional of the type sketched below, subject to the condition that x(b) lie in a given set C₁. A large number of physical systems can be modeled by differential equations in which some parameter values can be chosen within a certain range.
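A representative formulation, kept deliberately schematic (the symbols f and g, and the precise way the initial condition and any integral running cost enter, are stand-ins rather than the text's exact data), is

\[
\text{minimize } g\bigl(x(b)\bigr) \quad \text{subject to} \quad
\dot{x}(t) = f\bigl(t, x(t), u(t)\bigr) \ \text{a.e. on } [a, b], \quad u(t) \in U, \quad x(a) = x_{0}, \quad x(b) \in C_{1}.
\]

Choosing the control u (and, where permitted, the initial value x₀) determines the state x through the differential equation; the cost and the endpoint constraint are then evaluated along the resulting arc.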

This is precisely the framework of Eq. From the strictly mathematical point of view, the optimal control problem subsumes the classical calculus of variations. Accordingly, it seems that optimal control theory can simply be viewed as the modern face of the calculus of variations. Yet we would argue that this fails to reflect the actual state of affairs, and that in fact a "paradigm shift" has taken place which outweighs in importance the mathematical links between the two.

There are significant distinctions in outlook; many of the basic questions differ, and different sorts of answers are required. We have found that by and large the methodology and the areas of application of the two disciplines have little in common.



We believe that this is due in large part to the fact that [as Young has pointed out] the smoothness requirements of the classical variational calculus are inappropriate in optimal control theory. Indeed, we would argue that in establishing a bridge between the two, nondifferentiability intervenes fundamentally; more anon. We now complete our triad of central problems.
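Although its formal definition is deferred, a schematic version of this third problem, the differential inclusion problem P_D, may help fix ideas; the multifunction F and the endpoint data below are placeholders rather than the text's exact formulation:

\[
\text{minimize } g\bigl(x(b)\bigr) \quad \text{subject to} \quad
\dot{x}(t) \in F\bigl(t, x(t)\bigr) \ \text{a.e. on } [a, b], \quad x(a) \in C_{0}, \quad x(b) \in C_{1}.
\]

The multifunction F assigns to each (t, x) a set of admissible velocities, so the dynamics appear as an inclusion rather than as an equation parametrized by controls.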

In contrast to P_B and P_C, this problem has very little history or tradition associated with it. It is in a sense the antithesis of P_B, which suppresses all explicit constraints by incorporating them via extended values in the objective functional. The only explicit minimization in P_D, on the other hand, involves one endpoint; all else is constraint. The optimal control problem P_C is a hybrid from this point of view. We now turn to the relationships among these three problems and their roles in what follows.

Equivalence: Why Three?

For example, let us suppose that a problem P_C is given, and let us attempt to recast it in the mold of P_D. Thus far in the discussion we have been neglecting an important factor: the hypotheses under which each of the three problems is analyzed. For example, the function L produced by Eq. need not satisfy the hypotheses under which P_B is treated. Thus the practical import of such reductions depends upon hypotheses, and these in turn depend upon the issue under discussion.

By and large, the relations embodied in (6) accurately reflect the hierarchy that exists.



An example of a formal reduction that is incompatible with our hypotheses in general is the one in which P_B is phrased in the form of P_C by taking the control to be the velocity itself, with U = ℝⁿ and running cost L (we ignore boundary terms). Why treat three problems if one of them subsumes the other two? One reason is that the differing structure of the problems, together with some prodding from established custom, leads to certain results. Many problems encountered in applications can be phrased as any one of the three central types we have defined. As we shall illustrate, the methods which correspond to one problem may yield more precise information than those of another.

Each of the three paradigms has advantages specific to it. P_B, by its very form, facilitates our drawing inspiration from the calculus of variations. It is possible within the framework of P_B to achieve a unification of the classical calculus of variations and optimal control theory. Indeed, the extension to P_B of classical variational methods, which requires consideration of nonsmoothness, turns out to be a powerful tool in optimal control theory.

A disadvantage of P_B is that a rather high level of technical detail is required to deal with the full generality of the integrands L that may arise. The standard optimal control problem P_C is a paradigm that has proven itself natural and useful in a wide variety of modeling situations. It is the only one of the three in which consideration of nondifferentiability can be circumvented to a great extent by certain smoothness hypotheses. As we shall see in the next section and in Section 5.

The main advantage of P_D derives from the fact that its structure is the simplest of the three. For this reason, most of the results that we shall obtain for the problems P_B and P_C are derived as consequences of the theory developed for P_D in Chapter 3. P_D, being of a novel form, has of course not been used very much in modeling applications. It is our belief that some problems can most naturally be modeled this way; an example is analyzed in Section 3. We turn now to a discussion of the main issues that arise in connection with variational and control problems, and the existing theory of such problems.

We shall then be able to place in context the results of the three chapters on dynamic optimization. There are of course other issues that depend on the particular structure of the problem (such as controllability and sensitivity, which will also be studied), but we shall organize the present discussion around the three themes given above. We shall lead off with necessary conditions; we begin at the beginning.

When L is sufficiently smooth, any local solution x to this problem satisfies the Euler-Lagrange equation. Another condition that a solution x must satisfy is known as the Weierstrass condition; this is the assertion that, for each t, a certain inequality holds (both conditions are recalled below). The other main necessary condition is known as the Jacobi condition; it is described in Section 4. Classical mechanics makes much use of a function called the Hamiltonian.
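In the smooth setting these two conditions take the following familiar forms (a standard rendering of the displays referred to as (2) and (3) in what follows):

\[
\frac{d}{dt}\, \nabla_{v} L\bigl(t, x(t), \dot{x}(t)\bigr) \;=\; \nabla_{x} L\bigl(t, x(t), \dot{x}(t)\bigr)
\qquad \text{(Euler-Lagrange)},
\]
\[
L\bigl(t, x(t), v\bigr) - L\bigl(t, x(t), \dot{x}(t)\bigr) \;\ge\;
\bigl\langle \nabla_{v} L\bigl(t, x(t), \dot{x}(t)\bigr),\, v - \dot{x}(t) \bigr\rangle
\quad \text{for all } v
\qquad \text{(Weierstrass)}.
\]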


It is the function H derived from L (the Lagrangian) via the Legendre transform, as follows. One sets p = ∇_v L(t, x, v) and, assuming this relation can be inverted for v, one then defines H(t, x, p) = ⟨p, v⟩ − L(t, x, v). It follows that if x satisfies the Euler-Lagrange equation (2), then the pair (x, p) satisfies a celebrated system of differential equations called Hamilton's equations. In order to facilitate future comparison, let us depart from classical variational theory by defining the pseudo-Hamiltonian H_P. The reader may verify that the Euler-Lagrange equation (2) and the Weierstrass condition (3) can be summarized in terms of H_P as indicated below. Now suppose that side constraints involving x and ẋ (say, through a function g(x, ẋ)) are imposed. Suppose that we now constrain the arcs x under consideration to satisfy ẋ(t) = f(x(t), u(t)), where u(t) belongs to U. Then L(x, ẋ) can be expressed in the form F(x, u), and the basic problem of minimizing (1), but subject now to (8), is seen to be the optimal control problem P_C described in the preceding section.
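Schematically (suppressing the t-dependence, and with the usual caveat that the classical Legendre transform presupposes that v ↦ ∇_v L can be inverted):

\[
p = \nabla_{v} L(x, v), \qquad H(x, p) = \langle p, v \rangle - L(x, v), \qquad
\dot{x} = \nabla_{p} H(x, p), \quad \dot{p} = -\nabla_{x} H(x, p)
\quad \text{(Hamilton's equations)}.
\]

For the basic problem one may take the pseudo-Hamiltonian to be H_P(x, p, v) = ⟨p, v⟩ − L(x, v); the Euler-Lagrange and Weierstrass conditions can then be summarized as the existence of an arc p satisfying

\[
\dot{x} = \nabla_{p} H_{P}(x, p, \dot{x}), \qquad
\dot{p} = -\nabla_{x} H_{P}(x, p, \dot{x}), \qquad
H_{P}(x, p, \dot{x}) = \max_{v} H_{P}(x, p, v).
\]

When the dynamics ẋ = f(x, u) and running cost F(x, u) are substituted, H_P takes the form ⟨p, f(x, u)⟩ − F(x, u), which is the function appearing in the maximum principle. (The sign conventions here are one common choice, not necessarily the text's display (6).)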

If we take account of Eq. in the definition (6) of the pseudo-Hamiltonian H_P, we arrive at the function that figures in the maximum principle. We have seen two approaches to giving necessary conditions for constrained problems: the multiplier rule (which is a Lagrangian approach) and the maximum principle (which employs the pseudo-Hamiltonian, and which is therefore essentially Lagrangian in nature also). Clearly, neither of these is a direct descendant of the Hamiltonian theory. What in fact is the true Hamiltonian H of a constrained problem?

To answer this question, we first note the extension of the Legendre transform defined by Fenchel. Applied to L, it expresses the Hamiltonian H as follows: H(t, x, p) = sup_v {⟨p, v⟩ − L(t, x, v)}. Why, then, has H itself not been used? Because it is nondifferentiable in general, in contrast to H_P, which is as smooth as the data of the problem. This causes no difficulty, however, if the nonsmooth calculus described in Section 1. is brought to bear. We shall prove that if x solves the optimal control problem, then an arc p exists such that one has the Hamiltonian inclusion recorded below, which reduces to the classical Hamiltonian system (5) when H is smooth.
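The inclusion in question (stated here schematically, with the generalized gradient of H taken jointly in the (x, p) variables and the t-dependence suppressed) reads

\[
\bigl(-\dot{p}(t),\, \dot{x}(t)\bigr) \;\in\; \partial H\bigl(x(t), p(t)\bigr) \quad \text{a.e. on } [a, b].
\]

When H is continuously differentiable, ∂H reduces to the singleton {(∇_x H, ∇_p H)}, and the inclusion collapses to Hamilton's equations ẋ = ∇_p H, ṗ = −∇_x H.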

We shall confirm in discussing existence and sufficiency that H is heir to the rich Hamiltonian theory of the calculus of variations. Thus, the three main problems P_C, P_D, and P_B all admit necessary conditions featuring (13); one need only interpret the Hamiltonian appropriately. We are omitting in this section other components of the necessary conditions that deal with boundary terms. The approach just outlined is different from, for example, that of the maximum principle; the resulting necessary conditions are not equivalent.

Existence and Sufficient Conditions

The classical existence theorem of Tonelli in the calculus of variations postulates two main conditions under which the basic problem of minimizing (1) admits a solution: convexity of L in the velocity variable, and a growth condition on L. It follows that growth conditions on L can be expressed in terms of H; for example, (14) is equivalent to a dual condition on H.
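A standard rendering of Tonelli's two conditions (not necessarily the text's exact display (14)) is convexity of L(t, x, ·) together with superlinear growth in the velocity:

\[
L(t, x, v) \;\ge\; \theta\bigl(\|v\|\bigr) \quad \text{for some function } \theta \ \text{with} \ \theta(r)/r \to \infty \ \text{as } r \to \infty .
\]

Since H(t, x, ·) is the conjugate of L(t, x, ·), such growth of L in v is equivalent to H(t, x, p) being finite for every p, which is one way in which growth conditions on L translate into conditions on H.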

In Section 4. we present the extension, due to Rockafellar, of the classical existence theory to the generalized problem of Bolza P_B. Since P_C is a special case of P_B, this result applies to the optimal control problem as well; see Section 5. Let us now turn to the topic of sufficient conditions. There are four main techniques in the calculus of variations to prove that a given arc actually does solve the basic problem. We examine these briefly with an eye to how they extend to constrained problems. The first of these refers to the situation in which we know that a solution exists, and in which we apply the necessary conditions to eliminate all other candidates.

This method is common to all optimization problems. Note the absolute necessity of an appropriate existence theorem, and of knowing that the necessary conditions apply. Every optimization problem has its version of the general rule that for "convex problems," the necessary conditions are also sufficient. As simple and useful as it is, this fact is not known in the classical theory.

This approach can be extended to derive sufficient conditions for the optimal control problem; see Section 5. Conjugacy and Fields. There is a beautiful chapter in the calculus of variations that ties together certain families of solutions to the Euler-Lagrange equation (fields), the zeros of a certain second-order differential equation (conjugate points), and solutions of a certain partial differential equation (the Hamilton-Jacobi equation), and uses these ingredients to derive sufficient conditions. There has been only partial success in extending this theory to constrained problems.

Recently Zeidan has used the Hamiltonian approach and the classical technique of canonical transformations to develop a conjugacy approach to sufficient conditions for P_B; see Section 4. Hamilton-Jacobi Methods. The classical Hamilton-Jacobi equation is a first-order partial differential equation for a function W(t, x) (recalled below). It has long been known in classical settings that this equation is closely linked to optimality. A discretization of the equation leads to the useful numerical technique known as dynamic programming.
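In one common sign convention (conventions vary, and the display here is a sketch rather than the text's own), the equation reads

\[
W_{t}(t, x) + H\bigl(t, x, \nabla_{x} W(t, x)\bigr) \;=\; 0 ,
\]

with H the Hamiltonian obtained from L by the Fenchel-Legendre transform. The verification technique referred to below replaces this smooth equation by a condition phrased with generalized gradients when W is merely Lipschitz.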

Our main interest in it stems from its role as a verification technique. Working in the context of the differential inclusion problem P_D, we shall use generalized gradients and the true Hamiltonian to define a generalized Hamilton-Jacobi equation; see Section 3. To a surprising extent, it turns out that the existence of a certain solution to this extended Hamilton-Jacobi equation is both a necessary and a sufficient condition for optimality.

Chapter Two: Generalized Gradients

What is now proved was once only imagined.

The theory and the calculus of generalized gradients are developed in detail, beginning with the case of a real-valued "locally Lipschitz" function defined on a Banach space. We also develop an associated geometric theory of normal and tangent cones, and explore the relationship between all of these concepts and their counterparts in smooth and in convex analysis. Later in the chapter special attention is paid to the case in which the underlying space is finite-dimensional, and also to the generalized gradients of certain important kinds of functionals.

Examples are given, and extensions of the theory to non-Lipschitz and vector-valued functions are also carried out. At the risk of losing some readers for one of our favorite chapters, we will concede that the reader who wishes only to have access to the statements of certain results of later chapters may find the introduction to generalized gradients given in Chapter 1 adequate for this purpose.

Those who wish to follow all the details of the proofs in this and later chapters will require a background in real and functional analysis. In particular, certain standard constructs and results of the theory of Banach spaces are freely invoked. Both the prerequisites and the difficulty of many proofs are greatly reduced if it is assumed that the Banach space is finite-dimensional.

The Lipschitz Condition. Let Y be a subset of X. We conclude, which establishes (a). Finally, let any v and w in X be given. We have 2. To prove (c), we calculate: The Generalized Gradient. The Hahn-Banach Theorem asserts that any positively homogeneous and subadditive functional on X majorizes some linear functional on X. Under the conditions of Proposition 2. The following summarizes some basic properties of the generalized gradient. Assertion (a) is immediate from our preceding remarks and Proposition 2.

For illustrative purposes, however, let us nonetheless so calculate the generalized gradient of the absolute-value function on the reals.
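Carrying out that computation for f(x) = |x| on ℝ (a rendering of the example directly from the definitions recalled in Chapter 1) gives

\[
f^{\circ}(0; v) \;=\; \limsup_{\substack{y \to 0 \\ t \downarrow 0}} \frac{|y + t v| - |y|}{t} \;=\; |v|,
\qquad\text{so that}\qquad
\partial f(0) \;=\; \{\, \zeta \in \mathbb{R} : \zeta v \le |v| \ \text{for all } v \,\} \;=\; [-1, 1],
\]

while for x ≠ 0 one has simply ∂f(x) = {sgn x} = {f′(x)}.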


Support Functions. As Proposition 2. indicates, this is an instance of a general fact: closed convex sets are characterized by their support functions. Assertion (a) is evident from Propositions 2. In order to prove (b), let any v in X be given. Part (c) of the proposition is an immediate consequence of (b), so we turn now to (d).

This contradicts assertion (b). The usual one-sided directional derivative of F at x in the direction v is F′(x; v) = lim_{t↓0} [F(x + tv) − F(x)]/t, when this limit exists. Let us note that Gateaux differentiability of F at x is equivalent to saying that the difference quotient converges for each v, that the limit is ⟨DF(x), v⟩ for some element DF(x) of the dual space, and that the convergence is uniform with respect to v in finite sets (the last is automatically true).

If the word "finite" in the preceding sentence is replaced by "compact," the derivative is known as the Hadamard derivative; for "bounded" we obtain the Fréchet derivative. In general, these are progressively more demanding requirements. This last condition is automatic if F is Lipschitz near x. Note that ours is a "Hadamard-type" strict derivative. The following are equivalent: Assume (a). The equality in (b) holds by assumption, so to prove (b) we need only show that F is Lipschitz near x. Thus (b) holds. We now posit (b). Let V be any compact subset of X and ε any positive number.
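To keep the hierarchy straight, one schematic summary (in the notation used here, with ⟨·, ·⟩ the pairing between X* and X) is the following: Gateaux differentiability asks that

\[
\lim_{t \downarrow 0} \frac{F(x + t v) - F(x)}{t} \;=\; \langle DF(x), v \rangle \quad \text{for each } v \in X,
\]

the convergence being automatically uniform for v in finite sets; requiring uniformity for v in compact (respectively, bounded) sets gives the Hadamard (respectively, Fréchet) derivative, while the strict (Hadamard-type) derivative D_sF(x) asks that

\[
\lim_{\substack{y \to x \\ t \downarrow 0}} \frac{F(y + t v) - F(y)}{t} \;=\; \langle D_{s}F(x), v \rangle ,
\]

uniformly for v in compact sets.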

□ We are now ready to explore the relationship between the various derivatives defined above and the generalized gradient. Then Proof. The required conclusion now follows from Proposition 2. To prove the converse, it suffices to show that the condition of Proposition 2. We now calculate: This establishes the limit condition of Proposition 2. Invoke the corollary to Proposition 2. It follows from this and from upper semicontinuity (Proposition 2. ).

Proof (Roberts and Varberg). The former can be written as follows, where δ is any fixed positive number. Let I(x) denote the set of indices i for which f_i(x) = f(x). The proof can be based upon the mean value theorem, Theorem 2. Then f is convex on U iff the multifunction ∂f is monotone on U; that is, iff ⟨ζ₂ − ζ₁, x₂ − x₁⟩ ≥ 0 whenever xᵢ ∈ U and ζᵢ ∈ ∂f(xᵢ) (i = 1, 2). Note that sf is also Lipschitz near x. By Proposition 2. In view of Proposition 2. Corollary 1. Equality holds in Proposition 2. In this case, as is easily seen, the two sets in the statement of the proposition have support functions that coincide, and hence are equal.

Invoke Proposition 2. Regularity. It is often the case that calculus formulas for generalized gradients involve inclusions, such as in Proposition 2. The addition of further hypotheses can serve to sharpen such rules by turning the inclusions into equalities. For instance, equality certainly holds in Proposition 2. However, one would wish for a less extreme condition, one that would cover the convex nondifferentiable case, for example, in which equality does hold. A class of functions that proves useful in this connection is the following.

To see how regularity contributes, let us note the following addendum to Proposition 2. Equality then holds in Corollary 2 as well, if in addition each s_i is nonnegative. The proof made clear that equality would hold provided the two sets had the same support functions. In the future we shall often leave implicit such dual results. Here are some first observations about regular functions. The assertion (d) is evident from Proposition 2.

We shall extend (c) in Section 2. We denote by Lemma. The fact that g is Lipschitz is plain. The two closed convex sets appearing in Eq. We may calculate the latter by appealing to Propositions 2. We deduce the relation which is the assertion of the theorem. As an immediate illustration of its use, consider the function of Example 2. All sums below are from 1 to n. In this case it follows that f is regular at x. In this case the co is superfluous. In this case it follows that f is regular at x, and the co is superfluous.

The support function of either S or co S evaluated at a point v in X is easily seen to be given by the same quantity. It suffices by Proposition 2. These observations combine with Eq. We may now calculate as follows: which completes the proof of the lemma. Having completed the proof of the general formula, we turn now to the additional assertions, beginning with the situation depicted by (i).

Consider the quantity q₀ defined by Eq. We have, since the aᵢ are all nonnegative, 2. The equality follows, and (i) is disposed of. The reader may verify that the preceding argument will adapt to case (iii) as well. This leaves case (ii), in which D_s g(h(x)) is a scalar a. We find, much as above, a relation that is an immediate consequence of the strict differentiability of g at h(x). As before, this yields the equality in the theorem. The co is superfluous in cases (ii) and (iii), since when ∂g(h(x)) or else each ∂hᵢ(x) consists of a single point, the set whose convex hull is being taken is already convex (it fails to be so in general) and closed.

Suppose that F is strictly differentiable at x and that g is Lipschitz near F(x). Equality also holds if F maps every neighborhood of x to a set which is dense in a neighborhood of F(x) (for example, if D_sF(x) is onto). Now suppose that g is regular. Apply the theorem with F being the imbedding map from X to Y. It suffices now to apply Theorem 2. Because g is convex, it is regular at h(x); see Proposition 2. The assertions regarding equality and regularity follow from Theorem 2.

□ The next result is proven very similarly: 2. For regular functions, however, a general relationship does hold between these sets. Since g is regular (in fact, convex), it follows from Theorem 2. The preceding remarks and Proposition 2. The function d_C is certainly not differentiable in any of the standard senses; it is, however, globally Lipschitz, as we are about to prove. We shall use the generalized gradient of d_C to lead us to new concepts of tangents and normals to an arbitrary set C.

Subsequently, we shall characterize these normals and tangents topologically, thus making it clear that they do not depend on the particular norm or distance function that we are using. We shall prove that the new tangents and normals defined here reduce to the known ones in the smooth or convex cases. Since ε is arbitrary, and since the argument can be repeated with x and y switched, the result follows. Tangents. Suppose now that x is a point in C.

The set of all tangents to C at x is denoted T_C(x). Of course, only the local nature of C near x is involved in this definition. It is an immediate consequence of Proposition 2. Normals. We define the normal cone to C at x by polarity with T_C(x). We have the following alternate characterization of N_C(x) in terms of generalized gradients: 2. The definition of T_C(x), along with Proposition 2. Suppose that f attains a minimum over C at x.


Let us prove the first assertion by supposing the contrary. □ Corollary. Suppose that f is Lipschitz near x and attains a minimum over C at x. Thus the result now follows from Proposition 2. We now resume the proof of the proposition. Corollary (C convex). Proof. The proof showed that d_C is convex, and therefore regular by Proposition 2.

□ An Intrinsic Characterization of T_C(x). We now show that the tangency concept defined above is actually independent of the norm (and hence of the distance function) used on X. Knowing this allows us to choose in particular circumstances a distance function which makes calculating tangents or normals more convenient.

We must produce the sequence uᵢ alluded to in the statement of the theorem. Now for the converse. Let v have the stated property concerning sequences, and choose a sequence yᵢ converging to x and tᵢ decreasing to 0 along which the defining upper limit is attained. Our purpose is to prove this quantity nonpositive, for then v ∈ T_C(x) by definition. Let cᵢ in C satisfy the required estimate. It follows that cᵢ converges to x. But then, since d_C is Lipschitz, we deduce that the limit (3) is nonpositive, which completes the proof.
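The characterization being proved can be stated, in the form in which it is usually quoted (a paraphrase rather than the verbatim theorem), as

\[
v \in T_{C}(x) \iff
\text{for every } x_{i} \to x \ (x_{i} \in C) \ \text{and every } t_{i} \downarrow 0, \ \text{there exist } v_{i} \to v \ \text{with } x_{i} + t_{i} v_{i} \in C \ \text{for all } i .
\]

Since this description involves only convergence in X, it makes plain that the tangent cone does not depend on the particular norm chosen.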

Regularity of Sets. In order to establish the relationship between the geometric concepts defined above and the previously known ones in smooth contexts, we require a notion of regularity for sets which will play the role that regularity for functions played in Section 2. We recall first the contingent cone K_C(x) of tangents to a set C at a point x.

It follows immediately from Theorem 2. The latter may not be convex, however. The second corollary to the following theorem will confirm the fact that N_C and T_C reduce to the classical notions when C is a "smooth" set. Suppose that C is defined by an inequality involving a function f, say C = {y ∈ X : f(y) ≤ 0}. If f is regular at x, then equality holds, and C is regular at x.


In order to derive the extra assertions in this case, it will suffice to prove that any member v of K_C(x) belongs to the left-hand side of (5). So let v belong to K_C(x). Corollary 1. If f is regular at x, then equality holds. Since taking polars reverses inclusions, the result follows immediately from (5). Corollary 2. Let C be given as follows: 2. It is possible, however, for there to be no hypertangents at all.

Then the set of all hypertangents to C at x coincides with int T_C(x). Proof (Rockafellar). Let K denote the set of all hypertangents to C at x; it follows easily that K is an open set containing all positive multiples of its elements, and that K ⊂ T_C(x). We therefore establish (6). We claim that (7) is valid for this ε. To see this, let v be any element of the left-hand side of (7).

Since 2. A different link can be forged through the notion of epigraph. The following confirms that tangency is consistent with the generalized directional derivative. By Theorem 2. Thus we may rewrite this, take limits, and recall Eq. We now turn to (ii). The corollary would then guarantee that this new definition is consistent with the previous one for the locally Lipschitz case.

We proceed to succumb to temptation: 2. The regularity assertion follows easily from the definition as well. Our most important result is Theorem 2. It facilitates greatly the calculation of ∂f in finite dimensions. We recall Rademacher's Theorem, which states that a function which is Lipschitz on an open subset of ℝⁿ is differentiable almost everywhere (a.e.).

The meaning of Eq. is the following. Let us note to begin with that there are "plenty" of sequences xᵢ which converge to x and avoid S ∪ Ω_f, since the latter has measure 0 near x. Further, because ∂f is locally bounded near x (Proposition 2. ), the corresponding sequence of gradients ∇f(xᵢ) has convergent subsequences. The limit of any such sequence must belong to ∂f(x) by the closure property of ∂f proved in Proposition 2. It follows that the set in question is contained in ∂f(x) and is nonempty and bounded, and in fact compact, since it is rather obviously closed.
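The formula whose proof is being sketched is, in the notation used here (Ω_f denoting the null set of points where f fails to be differentiable and S an arbitrary set of measure zero),

\[
\partial f(x) \;=\; \operatorname{co}\Bigl\{\, \lim_{i \to \infty} \nabla f(x_{i}) \;:\; x_{i} \to x,\ x_{i} \notin S,\ x_{i} \notin \Omega_{f} \,\Bigr\}.
\]

The role of the set S is to make the point that the formula is insensitive to the removal of any further null set of points.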


Since ∂f(x) is convex, we deduce that the left-hand side of (1) contains the right. Now, the convex hull of a compact set in ℝⁿ is compact, so to complete the proof we need only show that the support function of the left-hand side of (1) (i.e., f°(x; ·)) does not exceed that of the right. This is what the following lemma does: Lemma. Then, since Df exists a.e., we deduce the required inequality, which completes the proof.

□ Corollary 2. For this example, therefore, we have the corresponding formula. We saw in Proposition 2. In view of Theorem 2. We can also use Theorem 2. Recall that d_C was shown to be globally Lipschitz of rank 1 in Proposition 2. Thus x does not lie in cl C, and admits at least one closest point c₀ in cl C (i.e., a point of cl C nearest to x). Then, for all c in cl C, one has the stated inequality. Proof. We have by definition, for all c in cl C, 2. Then ∂d_C(x) equals the convex hull of the origin and a certain set of limits of gradients of d_C. Proof.
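The facts about points outside C that are being used can be summarized as follows (a standard statement rather than the text's numbered proposition; c₀ denotes a point of cl C closest to x ∉ cl C):

\[
d_{C}(x) = \|x - c_{0}\| , \qquad \text{and, if } d_{C} \text{ is differentiable at } x, \quad
\nabla d_{C}(x) = \frac{x - c_{0}}{\|x - c_{0}\|} .
\]

In particular such a gradient, whenever it exists, is a unit vector pointing from the closest point toward x, which is the observation exploited in the next paragraph.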

The gradient of d_C, whenever it exists, is either 0 or a unit vector of the type described in Proposition 2. We immediately derive from Theorem 2. This follows from Proposition 2. Corollary 1. If x lies on the boundary of cl C, then ∂d_C(x) contains nonzero points. Letting yᵢ converge to x gives the result. Corollary 2. If x lies on the boundary of cl C, then N_C(x) contains nonzero points.

A Characterization of Normal Vectors. The alternate characterization of normal vectors to sets in ℝⁿ given below is useful in many particular calculations; it is a geometric analogue of Theorem 2. Then N_C(x) is the closed convex cone generated by the origin and a certain set of limiting directions. Proof. Proposition 2. The result follows from Theorem 2. □ Interior of the Tangent Cone. To see this we shall use the characterization of T_C given by Theorem 2. Let xᵢ be a sequence in C converging to x, tᵢ a sequence decreasing to 0. Then the multifunction N_C is closed at x; that is, it has closed graph at x. Proof.

Invoke Corollary 1 and the corollary to Theorem 2. As before, Rademacher's Theorem asserts that F is differentiable a.e. We shall continue to denote the set of points at which F fails to be differentiable by Ω_F. We shall endow the space of m × n matrices with a norm, and we denote by B_{m×n} the open unit ball in ℝ^{m×n}. We proceed to summarize some properties of ∂F. To be consistent with the generalized Jacobian, ∂f should consist of 1 × n matrices (i.e., row vectors).
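The object whose properties are being summarized is the generalized Jacobian; in the notation above (with J_F(x_i) denoting the usual m × n Jacobian matrix at a point of differentiability), its standard definition reads

\[
\partial F(x) \;=\; \operatorname{co}\Bigl\{\, \lim_{i \to \infty} J_{F}(x_{i}) \;:\; x_{i} \to x,\ x_{i} \notin \Omega_{F} \,\Bigr\} \;\subset\; \mathbb{R}^{m \times n} .
\]

For m = 1 this produces 1 × n matrices, that is, row vectors, which is the point of consistency raised above.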

This distinction is irrelevant as long as we 2. Assertions (a) and (d) together follow easily, the former as in the first part of the proof of Theorem 2. It is clear that (c) subsumes (b). To prove (c), suppose, for a given ε, that no such δ existed. Conceivably (although we doubt it), an altered definition in which the points xᵢ are also restricted to lie outside a null set S could lead to a different generalized Jacobian ∂_S F(x).

The possibility that the generalized Jacobian is nonintrinsic in this sense is unresolved.† In most applications, it is the images of vectors under the generalized Jacobian that enter into the picture. In this sense, we now show that the putative ambiguity of the generalized Jacobian is irrelevant. We shall prove the first of these, the other being analogous. Since both sets in question are compact and convex, and since ∂F includes ∂_S F, we need only show that for any point u in ℝᵐ, the value σ₁ of the support function of ∂F(x)v evaluated at u does not exceed the corresponding value σ₂ for ∂_S F(x)v.

One has the stated relation. († In fact the intrinsic nature of the generalized Jacobian has been confirmed by J. Warga.) Recall that ∇g, when it exists, is a row vector (see Remark 2. ). For such y, this is precisely σ₂. The following is a straightforward extension of the vector mean-value theorem. Note that as a consequence of Proposition 2.

Clarke then applies these methods to obtain a powerful, unified approach to the analysis of problems in optimal control and mathematical programming.

Examples are drawn from economics, engineering, mathematical physics, and various branches of analysis in this reprint volume.
