Stochastic Modelling and Applied Probability

Most recent release: June 24, 2014
Book series · 76 books

Deterministic and Stochastic Optimal Control
Book 1 · Dec 2012 · 0.0
This book may be regarded as consisting of two parts. In Chapters I-IV we present what we regard as essential topics in an introduction to deterministic optimal control theory. This material has been used by the authors for one semester graduate-level courses at Brown University and the University of Kentucky. The simplest problem in calculus of variations is taken as the point of departure, in Chapter I. Chapters II, III, and IV deal with necessary conditions for an optimum, existence and regularity theorems for optimal controls, and the method of dynamic programming. The beginning reader may find it useful first to learn the main results, corollaries, and examples. These tend to be found in the earlier parts of each chapter. We have deliberately postponed some difficult technical proofs to later parts of these chapters. In the second part of the book we give an introduction to stochastic optimal control for Markov diffusion processes. Our treatment follows the dynamic programming method, and depends on the intimate relationship between second order partial differential equations of parabolic type and stochastic differential equations. This relationship is reviewed in Chapter V, which may be read independently of Chapters I-IV. Chapter VI is based to a considerable extent on the authors' work in stochastic control since 1961. It also includes two other topics important for applications, namely, the solution to the stochastic linear regulator and the separation principle.
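The "intimate relationship" between parabolic PDEs and stochastic differential equations mentioned here is, at its heart, the dynamic programming (Hamilton-Jacobi-Bellman) equation of a controlled diffusion. For orientation only, in generic notation (f, σ, L, ψ are placeholders, not symbols taken from the book):

```latex
% Controlled diffusion and value function (generic notation):
%   dX_s = f(X_s, u_s) ds + \sigma(X_s) dW_s,
%   V(t,x) = \inf_u E[ \int_t^T L(X_s, u_s) ds + \psi(X_T) | X_t = x ].
% Dynamic programming leads to the second-order parabolic (HJB) equation
\[
  -\partial_t V(t,x)
  = \min_{u}\Big\{ L(x,u) + f(x,u)\cdot\nabla_x V(t,x)
    + \tfrac12\,\operatorname{tr}\!\big(\sigma(x)\sigma(x)^{\top}\nabla_x^2 V(t,x)\big) \Big\},
  \qquad V(T,x) = \psi(x).
\]
```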
Applied Functional Analysis, Edition 2
Book 3 · Dec 2012 · 4.0
In preparing the second edition, I have taken advantage of the opportunity to correct errors as well as revise the presentation in many places. New material has been included, in addition, reflecting relevant recent work. The help of many colleagues (and especially Professor J. Stoer) in ferreting out errors is gratefully acknowledged. I also owe special thanks to Professor V. Sazonov for many discussions on the white noise theory in Chapter 6. February, 1981, A. V. Balakrishnan. Preface to the First Edition: The title "Applied Functional Analysis" is intended to be short for "Functional analysis in a Hilbert space and certain of its applications," the applications being drawn mostly from areas variously referred to as system optimization or control systems or systems analysis. One of the signs of the times is a discernible tilt toward application in mathematics and conversely a greater level of mathematical sophistication in the application areas such as economics or system science, both spurred undoubtedly by the heightening pace of digital computer usage. This book is an entry into this twilight zone. The aspects of functional analysis treated here are rapidly becoming essential in the training at the advanced graduate level of system scientists and/or mathematical economists. There are of course now available many excellent treatises on functional analysis.
Stochastic Processes in Queueing Theory
Book 4 · Dec 2012 · 0.0
The object of queueing theory (or the theory of mass service) is the investigation of stochastic processes of a special form which are called queueing (or service) processes in this book. Two approaches to the definition of these processes are possible depending on the direction of investigation. In accordance with this fact, the exposition of the subject can be broken up into two self-contained parts. The first of these forms the content of this monograph. The definition of the queueing processes (systems) to be used here is close to the traditional one and is connected with the introduction of so-called governing random sequences. We will introduce algorithms which describe the governing of a system with the aid of such sequences. Such a definition inevitably becomes rather qualitative since under these conditions a completely formal construction of a stochastic process uniquely describing the evolution of the system would require introduction of a complicated phase space, not to mention the difficulties of giving the distribution of such a process on this phase space.
Statistics of Random Processes I: General Theory
Book 5 · Nov 2013 · 0.0
A considerable number of problems in the statistics of random processes are formulated within the following scheme. On a certain probability space (Ω, F, P) a partially observable random process (θ, ξ) = (θ_t, ξ_t), t ≥ 0, is given, with only the second component ξ = (ξ_t), t ≥ 0, observed. At any time t it is required, based on ξ_0^t = {ξ_s, 0 ≤ s ≤ t}, to estimate the unobservable state θ_t. This problem of estimating (in other words, the filtering problem) θ_t from ξ_0^t will be discussed in this book. It is well known that if M(θ_t^2) …
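For orientation, the filtering problem stated above asks for the best mean-square estimate of the unobserved component given the observed path; the standard general answer (a textbook fact, not a statement taken from this blurb) is the conditional expectation:

```latex
% If E[\theta_t^2] < \infty, the mean-square optimal filter is the conditional
% expectation of \theta_t given the observations up to time t:
\[
  m_t = \mathbb{E}\big[\theta_t \mid \mathcal{F}^{\xi}_t\big],
  \qquad
  \mathbb{E}\big[(\theta_t - m_t)^2\big]
  = \min_{g}\, \mathbb{E}\big[\big(\theta_t - g(\xi_s,\ 0 \le s \le t)\big)^2\big],
\]
% where \mathcal{F}^{\xi}_t = \sigma\{\xi_s, 0 \le s \le t\} is the observation
% sigma-algebra and the minimum runs over measurable functionals g of the observed path.
```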
Statistics of Random Processes: I. General Theory, Edition 2
Book 5 · Apr 2013 · 0.0
At the end of the 1960s and the beginning of the 1970s, when the Russian version of this book was written, the 'general theory of random processes' did not operate widely with such notions as semimartingale, stochastic integral with respect to a semimartingale, the Itô formula for semimartingales, etc. At that time in stochastic calculus (theory of martingales), the main object was the square integrable martingale. In a short time, this theory was applied to such areas as nonlinear filtering, optimal stochastic control, and statistics for diffusion type processes. In the first edition of these volumes, the stochastic calculus, based on square integrable martingale theory, was presented in detail with the proof of the Doob-Meyer decomposition for submartingales and the description of a structure for stochastic integrals. In the first volume ('General Theory') these results were used for a presentation of further important facts such as the Girsanov theorem and its generalizations, theorems on the innovation processes, structure of the densities (Radon-Nikodym derivatives) for absolutely continuous measures being distributions of diffusion and Itô-type processes, and existence theorems for weak and strong solutions of stochastic differential equations. All the results and facts mentioned above have played a key role in the derivation of 'general equations' for nonlinear filtering, prediction, and smoothing of random processes.
Statistics of Random Processes II: Applications
Book 6 · Apr 2013 · 0.0
Statistics of Random Processes II: Applications, Edition 2
Book 6 · Mar 2013 · 0.0
At the end of the 1960s and the beginning of the 1970s, when the Russian version of this book was written, the 'general theory of random processes' did not operate widely with such notions as semimartingale, stochastic integral with respect to a semimartingale, the Itô formula for semimartingales, etc. At that time in stochastic calculus (theory of martingales), the main object was the square integrable martingale. In a short time, this theory was applied to such areas as nonlinear filtering, optimal stochastic control, and statistics for diffusion type processes. In the first edition of these volumes, the stochastic calculus, based on square integrable martingale theory, was presented in detail with the proof of the Doob-Meyer decomposition for submartingales and the description of a structure for stochastic integrals. In the first volume ('General Theory') these results were used for a presentation of further important facts such as the Girsanov theorem and its generalizations, theorems on the innovation processes, structure of the densities (Radon-Nikodym derivatives) for absolutely continuous measures being distributions of diffusion and Itô-type processes, and existence theorems for weak and strong solutions of stochastic differential equations. All the results and facts mentioned above have played a key role in the derivation of 'general equations' for nonlinear filtering, prediction, and smoothing of random processes.
Game Theory: Lectures for Economists and Systems Scientists
Book 7 · Dec 2012 · 0.0
The basis for this book is a number of lectures given frequently by the author to third year students of the Department of Economics at Leningrad State University who specialize in economic cybernetics. The main purpose of this book is to provide the student with a relatively simple and easy-to-understand manual containing the basic mathematical machinery utilized in the theory of games. Practical examples (including those from the field of economics) serve mainly as an interpretation of the mathematical foundations of this theory rather than as indications of their actual or potential applicability. The present volume is significantly different from other books on the theory of games. The difference lies both in the choice of mathematical problems and in the nature of the exposition. The realm of the problems is somewhat limited, but the author has tried to achieve the greatest possible systematization in his exposition. Whenever possible the author has attempted to provide a game-theoretical argument with the necessary mathematical rigor and reasonable generality. Formal mathematical prerequisites for this book are quite modest. Only the elementary tools of linear algebra and mathematical analysis are used.
Optimal Stopping Rules
Book 8 · Sep 2007 · 0.0
Along with conventional problems of statistics and probability, the investigation of problems occurring in what is now referred to as stochastic theory of optimal control also started in the 1940s and 1950s. One of the most advanced aspects of this theory is the theory of optimal stopping rules, the development of which was considerably stimulated by A. Wald, whose Sequential Analysis was published in 1947. In contrast to the classical methods of mathematical statistics, according to which the number of observations is fixed in advance, the methods of sequential analysis are characterized by the fact that the time at which the observations are terminated (stopping time) is random and is defined by the observer based on the data observed. A. Wald showed the advantage of sequential methods in the problem of testing (from independent observations) two simple hypotheses. He proved that such methods yield on the average a smaller number of observations than any other method using fixed sample size (and the same probabilities of wrong decisions). Furthermore, Wald described a specific sequential procedure based on his sequential probability ratio criterion which proved to be optimal in the class of all sequential methods. By the sequential method, as applied to the problem of testing two simple hypotheses, we mean a rule according to which the time at which the observations are terminated is prescribed as well as the terminal decision as to which of the two hypotheses is true.
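Wald's sequential probability ratio procedure described above is easy to sketch in code. The following is a minimal illustration, assuming the standard approximate thresholds A ≈ (1 − β)/α and B ≈ β/(1 − α) and a normal-versus-normal pair of hypotheses; these specifics are textbook conventions, not details drawn from this book.

```python
import math
import random

def sprt(sample, log_lr, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for two simple hypotheses.

    sample  -- iterator yielding independent observations
    log_lr  -- function x -> log( f1(x) / f0(x) ), the log-likelihood ratio
    alpha   -- target probability of wrongly rejecting H0
    beta    -- target probability of wrongly rejecting H1
    Returns (decision, n), where n is the random number of observations used.
    """
    # Wald's approximate stopping thresholds for the cumulative log-likelihood ratio.
    upper = math.log((1 - beta) / alpha)   # crossing it -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing it -> accept H0
    s, n = 0.0, 0
    for x in sample:
        n += 1
        s += log_lr(x)
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n

# Example: H0: N(0,1) versus H1: N(1,1); data generated under H1.
def stream(mu):
    while True:
        yield random.gauss(mu, 1.0)

log_lr = lambda x: x - 0.5           # log f1(x)/f0(x) for unit-variance normals
print(sprt(stream(1.0), log_lr))     # typically accepts H1 after a handful of samples
```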
Gaussian Random Processes
Book 9 · Dec 2012 · 0.0
The book deals mainly with three problems involving Gaussian stationary processes. The first problem consists of clarifying the conditions for mutual absolute continuity (equivalence) of probability distributions of a "random process segment" and of finding effective formulas for densities of the equivalent distributions. Our second problem is to describe the classes of spectral measures corresponding in some sense to regular stationary processes (in particular, satisfying the well-known "strong mixing condition") as well as to describe the subclasses associated with "mixing rate". The third problem involves estimation of an unknown mean value of a random process, this random process being stationary except for its mean, i.e., it is the problem of "distinguishing a signal from stationary noise". Furthermore, we give here auxiliary information (on distributions in Hilbert spaces, properties of sample functions, theorems on functions of a complex variable, etc.). Since 1958 many mathematicians have studied the problem of equivalence of various infinite-dimensional Gaussian distributions (detailed and systematic presentation of the basic results can be found, for instance, in [23]). In this book we have considered Gaussian stationary processes and arrived, we believe, at rather definite solutions. The second problem mentioned above is closely related with problems involving ergodic theory of Gaussian dynamic systems as well as prediction theory of stationary processes.
Linear Multivariable Control: A Geometric Approach, Edition 3
Book 10 · Dec 2012 · 0.0
In writing this monograph my aim has been to present a "geometric" approach to the structural synthesis of multivariable control systems that are linear, time-invariant and of finite dynamic order. The book is addressed to graduate students specializing in control, to engineering scientists involved in control systems research and development, and to mathematicians interested in systems control theory. The label "geometric" in the title is applied for several reasons. First and obviously, the setting is linear state space and the mathematics chiefly linear algebra in abstract (geometric) style. The basic ideas are the familiar system concepts of controllability and observability, thought of as geometric properties of distinguished state subspaces. Indeed, the geometry was first brought in out of revulsion against the orgy of matrix manipulation which linear control theory mainly consisted of, around fifteen years ago. But secondly and of greater interest, the geometric setting rather quickly suggested new methods of attacking synthesis which have proved to be intuitive and economical; they are also easily reduced to matrix arithmetic as soon as you want to compute. The essence of the "geometric" approach is just this: instead of looking directly for a feedback law (say u = Fx) which would solve your synthesis problem if a solution exists, first characterize solvability as a verifiable property of some constructible state subspace, say Y. Then, if all is well, you may calculate F from Y quite easily.
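To make the last point concrete, here is a small numerical sketch of "verify a subspace property first, then compute F": it constructs the controllable subspace ⟨A | im B⟩, checks that it is the whole state space, and only then computes a feedback gain. The toy double-integrator system and the use of scipy.signal.place_poles are illustrative choices, not constructions from the book.

```python
import numpy as np
from scipy.signal import place_poles

def controllable_subspace(A, B, tol=1e-9):
    """Orthonormal basis for <A | im B> = im [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    R = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    U, s, _ = np.linalg.svd(R)
    return U[:, : int(np.sum(s > tol))]   # keep directions with non-negligible singular values

# A toy system (illustrative numbers only): a double integrator.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

V = controllable_subspace(A, B)
solvable = V.shape[1] == A.shape[0]       # pole placement is solvable iff the subspace is the whole space
print("controllable subspace dimension:", V.shape[1], "solvable:", solvable)

if solvable:
    # Once solvability is verified, a feedback law u = Fx is easy to compute.
    K = place_poles(A, B, [-1.0, -2.0]).gain_matrix
    F = -K                                # u = Fx places the closed-loop eigenvalues at -1 and -2
    print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ F))
```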
Linear Multivariable Control: A Geometric Approach, Edition 2
Book 10 · Dec 2012 · 0.0
In writing this monograph my aim has been to present a "geometric" approach to the structural synthesis of multivariable control systems that are linear, time-invariant and of finite dynamic order. The book is addressed to graduate students specializing in control, to engineering scientists engaged in control systems research and development, and to mathematicians with some previous acquaintance with control problems. The present edition of this book is a revision of the preliminary version, published in 1974 as a Springer-Verlag "Lecture Notes" volume; and some of the remarks to follow are repeated from the original preface. The label "geometric" in the title is applied for several reasons. First and obviously, the setting is linear state space and the mathematics chiefly linear algebra in abstract (geometric) style. The basic ideas are the familiar system concepts of controllability and observability, thought of as geometric properties of distinguished state subspaces. Indeed, the geometry was first brought in out of revulsion against the orgy of matrix manipulation which linear control theory mainly consisted of, not so long ago. But secondly and of greater interest, the geometric setting rather quickly suggested new methods of attacking synthesis which have proved to be intuitive and economical; they are also easily reduced to matrix arithmetic as soon as you want to compute.
Brownian Motion
Book 11 · Dec 2012 · 0.0
Following the publication of the Japanese edition of this book, several interesting developments took place in the area. The author wanted to describe some of these, as well as to offer suggestions concerning future problems which he hoped would stimulate readers working in this field. For these reasons, Chapter 8 was added. Apart from the additional chapter and a few minor changes made by the author, this translation closely follows the text of the original Japanese edition. We would like to thank Professor J. L. Doob for his helpful comments on the English edition. T. Hida, T. P. Speed. Preface: The physical phenomenon described by Robert Brown was the complex and erratic motion of grains of pollen suspended in a liquid. In the many years which have passed since this description, Brownian motion has become an object of study in pure as well as applied mathematics. Even now many of its important properties are being discovered, and doubtless new and useful aspects remain to be discovered. We are getting a more and more intimate understanding of Brownian motion.
Conjugate Direction Methods in Optimization
Book 12 · Dec 2012 · 0.0
Shortly after the end of World War II high-speed digital computing machines were being developed. It was clear that the mathematical aspects of computation needed to be reexamined in order to make efficient use of high-speed digital computers for mathematical computations. Accordingly, under the leadership of Mina Rees, John Curtiss, and others, an Institute for Numerical Analysis was set up at the University of California at Los Angeles under the sponsorship of the National Bureau of Standards. A similar institute was formed at the National Bureau of Standards in Washington, D. C. In 1949 J. Barkley Rosser became Director of the group at UCLA for a period of two years. During this period we organized a seminar on the study of solutions of simultaneous linear equations and on the determination of eigenvalues. G. Forsythe, W. Karush, C. Lanczos, T. Motzkin, L. J. Paige, and others attended this seminar. We discovered, for example, that even Gaussian elimination was not well understood from a machine point of view and that no effective machine oriented elimination algorithm had been developed. During this period Lanczos developed his three-term relationship and I had the good fortune of suggesting the method of conjugate gradients. We discovered afterward that the basic ideas underlying the two procedures are essentially the same. The concept of conjugacy was not new to me. In a joint paper with G. D.
Stochastic Filtering Theory
Book 13 · Apr 2013 · 0.0
This book is based on a seminar given at the University of California at Los Angeles in the Spring of 1975. The choice of topics reflects my interests at the time and the needs of the students taking the course. Initially the lectures were written up for publication in the Lecture Notes series. However, when I accepted Professor A. V. Balakrishnan's invitation to publish them in the Springer series on Applications of Mathematics it became necessary to alter the informal and often abridged style of the notes and to rewrite or expand much of the original manuscript so as to make the book as self-contained as possible. Even so, no attempt has been made to write a comprehensive treatise on filtering theory, and the book still follows the original plan of the lectures. While this book was in preparation, the two-volume English translation of the work by R. S. Liptser and A. N. Shiryaev has appeared in this series. The first volume and the present book have the same approach to the subject, viz. that of martingale theory. Liptser and Shiryaev go into greater detail in the discussion of statistical applications and also consider interpolation and extrapolation as well as filtering.
Controlled Diffusion Processes
Book 14 · Sep 2008 · 4.0
Stochastic control theory is a relatively young branch of mathematics. The beginning of its intensive development falls in the late 1950s and early 1960s. During that period an extensive literature appeared on optimal stochastic control using the quadratic performance criterion (see references in Wonham [76]). At the same time, Girsanov [25] and Howard [26] made the first steps in constructing a general theory, based on Bellman's technique of dynamic programming, developed by him somewhat earlier [4]. Two types of engineering problems engendered two different parts of stochastic control theory. Problems of the first type are associated with multistep decision making in discrete time, and are treated in the theory of discrete stochastic dynamic programming. For more on this theory, we note in addition to the work of Howard and Bellman, mentioned above, the books by Derman [8], Mine and Osaki [55], and Dynkin and Yushkevich [12]. Another class of engineering problems which encouraged the development of the theory of stochastic control involves time continuous control of a dynamic system in the presence of random noise. The case where the system is described by a differential equation and the noise is modeled as a time continuous random process is the core of the optimal control theory of diffusion processes. This book deals with this latter theory.
Stochastic Storage Processes: Queues, Insurance Risk, Dams, and Data Communication, Edition 2
Book 15 · Dec 2012 · 0.0
This is a revised and expanded version of the earlier edition. The new material is on Markov-modulated storage processes arising from queueing and data communication models. The analysis of these models is based on the fluctuation theory of Markov-additive processes and their discrete time analogues, Markov random walks. The workload and queue length processes, omitted from the earlier edition, are also presented. In addition, many sections have been rewritten, with new results and proofs, as well as further examples. The mathematical level and style of presentation, however, remain the same. Chapter 1 contains a comprehensive treatment of the waiting time and related quantities in a single server queue, combining Chapters 1 and 2 of the earlier edition. In Chapter 2 we treat the (continuous time) workload and queue length processes using their semiregenerative properties. Also included are bulk queues omitted from the earlier edition, but included in its Russian translation. The queue M/M/1 is presented in Chapter 3. This is the so-called simple queue, but its treatment in most of the literature is far from simple. Our analysis of the queue length process is elementary and yields explicit results for various distributions of interest. Continuous time storage models are treated in Chapter 4, combining Chapters 3 and 4 of the earlier edition. We present extensive new material, omitting much of the old Chapter 4. This has resulted in a streamlined account of this important class of models.
Stochastic Storage Processes: Queues, Insurance Risk and Dams
Book 15 · Dec 2012 · 0.0
This book is based on a course I have taught at Cornell University since 1965. The primary topic of this course was queueing theory, but related topics such as inventories, insurance risk, and dams were also included. As a text I used my earlier book, Queues and Inventories (John Wiley, New York, 1965). Over the years the emphasis in this course shifted from detailed analysis of probability models to the study of stochastic processes that arise from them, and the subtitle of the text, "A Study of Their Basic Stochastic Processes," became a more appropriate description of the course. My own research into the fluctuation theory for Lévy processes provided a new perspective on the topics discussed, and enabled me to reorganize the material. The lecture notes used for the course went through several versions, and the final version became this book. A detailed description of my approach will be found in the Introduction. I have not attempted to give credit to authors of individual results. Readers interested in the historical literature should consult the Selected Bibliography given at the end of the Introduction. The original work in this area is presented here with simpler proofs that make full use of the special features of the underlying stochastic processes. The same approach makes it possible to provide several new results. Thanks are due to Kathy King for her excellent typing of the manuscript.
Statistical Estimation: Asymptotic Theory
Book 16 · Nov 2013 · 0.0
when certain parameters in the problem tend to limiting values (for example, when the sample size increases indefinitely, the intensity of the noise approaches zero, etc.). To address the problem of asymptotically optimal estimators consider the following important case. Let X_1, X_2, ..., X_n be independent observations with the joint probability density f(x, θ) (with respect to the Lebesgue measure on the real line) which depends on the unknown parameter θ ∈ Θ ⊂ R^1. It is required to derive the best (asymptotically) estimator θ_n*(X_1, ..., X_n) of the parameter θ. The first question which arises in connection with this problem is how to compare different estimators or, equivalently, how to assess their quality, in terms of the mean square deviation from the parameter or perhaps in some other way. The presently accepted approach to this problem, resulting from A. Wald's contributions, is as follows: introduce a nonnegative function w_n(θ_1, θ_2), θ_1, θ_2 ∈ Θ (the loss function); given two estimators θ_n^(1) and θ_n^(2), the estimator for which the expected loss (risk) E_θ w_n(θ_n^(j), θ), j = 1 or 2, is smallest is called the better with respect to w_n at the point θ (here E_θ is the expectation evaluated under the assumption that the true value of the parameter is θ). Obviously, such a method of comparison is not without its defects.
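The comparison-by-risk scheme described above can be illustrated numerically. Below is a minimal Monte Carlo sketch with squared-error loss, a normal location parameter, and the sample mean versus the sample median as the two competing estimators; these particular choices are standard textbook examples and are not drawn from this book.

```python
import numpy as np

rng = np.random.default_rng(0)

def risk(estimator, theta, n=100, reps=20_000):
    """Monte Carlo estimate of the risk E_theta[(estimator(X) - theta)^2]
    for n i.i.d. N(theta, 1) observations (squared-error loss)."""
    X = rng.normal(loc=theta, scale=1.0, size=(reps, n))
    return np.mean((estimator(X) - theta) ** 2)

theta = 2.0
print("sample mean   risk:", risk(lambda X: X.mean(axis=1), theta))        # about 1/n = 0.010
print("sample median risk:", risk(lambda X: np.median(X, axis=1), theta))  # about pi/(2n), roughly 0.016
```

At this sample size the mean has the smaller risk at every point θ, so it is the better estimator with respect to squared-error loss in the sense defined above.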
Optimization—Theory and Applications: Problems with Ordinary Differential Equations
Book 17 · Dec 2012 · 3.0
This book has grown out of lectures and courses in calculus of variations and optimization taught for many years at the University of Michigan to graduate students at various stages of their careers, and always to a mixed audience of students in mathematics and engineering. It attempts to present a balanced view of the subject, giving some emphasis to its connections with the classical theory and to a number of those problems of economics and engineering which have motivated so many of the present developments, as well as presenting aspects of the current theory, particularly value theory and existence theorems. However, the presentation of the theory is connected to and accompanied by many concrete problems of optimization, classical and modern, some more technical and some less so, some discussed in detail and some only sketched or proposed as exercises. No single part of the subject (such as the existence theorems, or the more traditional approach based on necessary conditions and on sufficient conditions, or the more recent one based on value function theory) can give a sufficient representation of the whole subject. This holds particularly for the existence theorems, some of which have been conceived to apply to certain large classes of problems of optimization. For all these reasons it is essential to present many examples (Chapters 3 and 6) before the existence theorems (Chapters 9 and 11-16), and to investigate these examples by means of the usual necessary conditions, sufficient conditions, and value function theory.