Selected topics in analysis that may be useful in applied mathematics.
Schedule and Syllabus: link
Our seminar requires no presentations, homework, or assigned tasks. You only need a basic grounding in mathematical analysis and advanced algebra, and you will gain working knowledge of applied analysis and stochastic calculus. If you want to join us, contact firstname.lastname@example.org
Time: Monday, Tuesday, Thursday, or Friday night (4-8 hours per week, to be determined @1570)
In mathematics, a partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (Ordinary differential equations (ODEs), which deal with functions of a single variable and their derivatives, are a special case.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand or used to create a relevant computer model. PDEs can describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid dynamics, elasticity, and quantum mechanics; these seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.

In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in a geometrical space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each springtime in a lake. At any given time, a dynamical system has a state given by a tuple of real numbers (a vector) that can be represented by a point in an appropriate state space (a geometrical manifold). The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic: for a given time interval, only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables.
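As a minimal illustration of solving a PDE numerically, the sketch below integrates the one-dimensional heat equation u_t = u_xx with an explicit finite-difference scheme; the grid sizes and stability factor are illustrative choices, not prescribed by the seminar material.

```python
import numpy as np

# Explicit finite-difference scheme for the 1-D heat equation u_t = u_xx
# on [0, 1] with homogeneous Dirichlet boundary conditions.
nx, nt = 51, 2000          # grid points in space, number of time steps
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2           # satisfies the stability condition dt <= dx^2 / 2
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)      # initial temperature profile

for _ in range(nt):
    # u_xx approximated by the second-order central difference
    u[1:-1] = u[1:-1] + dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# Exact solution for this initial datum: u(x, t) = exp(-pi^2 t) * sin(pi x)
t_final = nt * dt
exact = np.exp(-np.pi**2 * t_final) * np.sin(np.pi * x)
print(np.max(np.abs(u - exact)))  # small discretization error
```

The stability restriction dt <= dx^2/2 is what makes explicit schemes expensive on fine grids, one motivation for the implicit methods treated in the finite-difference reference below.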
Evans, L. C. Partial Differential Equations. American Mathematical Society, 2010.
Broer, H. W., van Gils, S. A., Hoveijn, I., et al. Nonlinear Dynamical Systems and Chaos. Birkhäuser, 1996.
The Latin American School of Mathematics (ELAM) is one of the most important mathematical events in Latin America. It has been held every other year since 1968 in a different country of the region, and its theme varies according to the areas of interest of local research groups. The subject of the 1986 school was Partial Differential Equations with emphasis on Microlocal Analysis, Scattering Theory and the applications of Nonlinear Analysis to Elliptic Equations and Hamiltonian Systems.
Stochastic calculus is a branch of mathematics that operates on stochastic processes. It allows a consistent theory of integration to be defined for integrals of stochastic processes with respect to stochastic processes. It is used to model systems that behave randomly. The best-known stochastic process to which stochastic calculus is applied is the Wiener process (named in honor of Norbert Wiener), which is used for modeling Brownian motion as described by Louis Bachelier in 1900 and by Albert Einstein in 1905, and other physical diffusion processes of particles subject to random forces. Since the 1970s, the Wiener process has been widely applied in financial mathematics and economics to model the evolution in time of stock prices and bond interest rates.

The main flavours of stochastic calculus are the Itô calculus and its variational relative, the Malliavin calculus. For technical reasons the Itô integral is the most useful for general classes of processes, but the related Stratonovich integral is frequently useful in problem formulation (particularly in engineering disciplines). The Stratonovich integral can readily be expressed in terms of the Itô integral. The main benefit of the Stratonovich integral is that it obeys the usual chain rule and therefore does not require Itô's lemma. This enables problems to be expressed in a coordinate-system-invariant form, which is invaluable when developing stochastic calculus on manifolds other than R^n. The dominated convergence theorem does not hold for the Stratonovich integral, so it is very difficult to prove results without re-expressing the integrals in Itô form.
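A minimal sketch of stochastic calculus in practice: the Euler-Maruyama scheme below discretizes the Itô SDE dX = μX dt + σX dW (geometric Brownian motion, the stock-price model mentioned above). The parameter values, step counts, and random seed are illustrative assumptions.

```python
import numpy as np

# Euler-Maruyama discretization of the Ito SDE  dX = mu*X dt + sigma*X dW
# (geometric Brownian motion), simulated over many independent paths.
rng = np.random.default_rng(0)
mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
n_steps, n_paths = 500, 20000
dt = T / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)  # Wiener increments ~ N(0, dt)
    x = x + mu * x * dt + sigma * x * dw

# Under the Ito interpretation, E[X_T] = x0 * exp(mu * T); the Monte Carlo
# average over the simulated paths should be close to this value.
print(x.mean(), x0 * np.exp(mu * T))
```

Under the Stratonovich interpretation the same symbolic equation would have mean x0·exp((μ + σ²/2)T); the difference between the two is exactly the Itô correction term discussed above.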
Karatzas, I., Shreve, S. E. Brownian Motion and Stochastic Calculus. Springer, 1991.
Evans, L. C. An Introduction to Stochastic Differential Equations. American Mathematical Society, 2013.
In mathematics and economics, transportation theory is a name given to the study of optimal transportation and allocation of resources. The problem was formalized by the French mathematician Gaspard Monge in 1781. In the 1920s A.N. Tolstoi was one of the first to study the transportation problem mathematically. In 1930, in the collection Transportation Planning Volume I for the National Commissariat of Transportation of the Soviet Union, he published a paper "Methods of Finding the Minimal Kilometrage in Cargo-transportation in space". Major advances were made in the field during World War II by the Soviet/Russian mathematician and economist Leonid Kantorovich. Consequently, the problem as it is stated is sometimes known as the Monge–Kantorovich transportation problem. The linear programming formulation of the transportation problem is also known as the Hitchcock–Koopmans transportation problem.
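A small worked instance of the Monge-Kantorovich problem: in one dimension with cost |x - y|, the optimal transport plan is the monotone rearrangement, so the Wasserstein-1 distance between two equal-size empirical distributions reduces to sorting. The Gaussian samples below are an illustrative assumption.

```python
import numpy as np

# In 1-D with cost |x - y|, the optimal plan matches the two samples in
# sorted order, so the Wasserstein-1 distance between equal-size empirical
# measures is the mean absolute difference of the sorted samples.
def wasserstein1_1d(x, y):
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10000)
b = rng.normal(2.0, 1.0, 10000)
# Translating a distribution by c moves it a transport distance of exactly |c|,
# so this should be close to 2.0.
print(wasserstein1_1d(a, b))
```

In higher dimensions no such sorting shortcut exists, and one must solve the linear program (or its entropic regularization) directly, which is where the general Monge-Kantorovich theory comes in.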
Villani, C. Optimal Transport: Old and New. Grundlehren der Mathematischen Wissenschaften, Springer, 2009.
Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory.

Calculus of variations is a field of mathematical analysis that deals with maximizing or minimizing functionals, which are mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. The interest is in extremal functions that make the functional attain a maximum or minimum value, or stationary functions, those where the rate of change of the functional is zero. A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is obviously a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, where the optical length depends upon the material of the medium. One corresponding concept in mechanics is the principle of least action.

Many important problems involve functions of several variables. Solutions of boundary value problems for the Laplace equation satisfy the Dirichlet principle. Plateau's problem requires finding a surface of minimal area that spans a given contour in space: a solution can often be found by dipping a frame in a solution of soap suds. Although such experiments are relatively easy to perform, their mathematical interpretation is far from simple: there may be more than one locally minimizing surface, and they may have non-trivial topology.
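A minimal numerical sketch of the shortest-curve problem above: discretize the length functional over curves joining (0, 0) and (1, 1) and run gradient descent on the interior nodes; the curve should relax to the straight line y = x. Grid size, step size, and iteration count are illustrative choices.

```python
import numpy as np

# Discretized variational problem: minimize the length functional
#   L[y] = sum_i sqrt(dx^2 + (y_{i+1} - y_i)^2)
# over curves y(x) with fixed endpoints y(0) = 0 and y(1) = 1.
n = 51
dx = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
y = x + 0.3 * np.sin(np.pi * x)       # perturbed initial curve, endpoints fixed

def length(y):
    return np.sum(np.sqrt(dx**2 + np.diff(y)**2))

for _ in range(30000):
    d = np.diff(y)
    s = d / np.sqrt(dx**2 + d**2)     # derivative of each segment's length
    grad = s[:-1] - s[1:]             # dL/dy_i at the interior nodes
    y[1:-1] -= 0.005 * grad           # gradient step; endpoints stay fixed

print(length(y))  # approaches sqrt(2), the straight-line distance
```

Setting the gradient to zero gives s constant along the curve, which is exactly the discrete Euler-Lagrange equation for this functional: constant slope, i.e. a straight line.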
Ekeland, I., Temam, R. Convex Analysis and Variational Problems. SIAM, 1999.
In engineering, mathematics, physics, chemistry, bioinformatics, computational biology, meteorology, and computer science, multiscale modeling or multiscale mathematics is the field of solving problems that have important features at multiple scales of time and/or space. Important problems include multiscale modeling of fluids, solids, polymers, proteins, and nucleic acids, as well as various physical and chemical phenomena (such as adsorption, chemical reactions, and diffusion).
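A toy illustration of the averaging idea behind multiscale methods: a slow variable driven by a rapid oscillation is well approximated by replacing the fast term with its time average, with an error that vanishes as the scale separation grows. The specific equation and parameters are illustrative assumptions.

```python
import numpy as np

# Fast-slow toy problem: the slow variable obeys
#   dx/dt = cos(t / eps)^2,   x(0) = 0,
# which oscillates on the fast O(eps) scale. Averaging replaces the
# right-hand side by its mean, giving dX/dt = 1/2, so x(1) ~ 1/2 + O(eps).
eps = 1e-3
dt = eps / 50.0                          # resolve the fast oscillation
t = np.arange(0.0, 1.0, dt)
x = np.cumsum(np.cos(t / eps)**2) * dt   # forward-Euler integral of the RHS

print(x[-1], 0.5)  # resolved multiscale solution vs averaged solution
```

The cost of resolving the fast scale (here, a time step of eps/50) is exactly what homogenization and averaging methods avoid by deriving the effective equation analytically.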
Pavliotis, G. A., Stuart, A. M. Multiscale Methods: Averaging and Homogenization. Springer, 2008.
A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one recorded by a seismograph or heart monitor. Generally, wavelets are intentionally crafted to have specific properties that make them useful for signal processing. Wavelets can be combined, using a "reverse, shift, multiply and integrate" technique called convolution, with portions of a known signal to extract information from the unknown signal.
As a mathematical tool, wavelets can be used to extract information from many different kinds of data, including – but certainly not limited to – audio signals and images. Sets of wavelets are generally needed to analyze data fully. A set of "complementary" wavelets will decompose data without gaps or overlap, so that the decomposition process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet-based compression/decompression algorithms where it is desirable to recover the original information with minimal loss. In formal terms, this representation is a wavelet series: an expansion of a square-integrable function with respect to either a complete, orthonormal set of basis functions or an overcomplete set (a frame) of the Hilbert space of square-integrable functions. This is accomplished through coherent states.
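The reversible decomposition described above can be sketched with the Haar wavelet, the simplest orthonormal wavelet: one transform level splits a signal into coarse averages and detail coefficients, and the inverse transform reconstructs the signal exactly. The sample signal is an illustrative assumption.

```python
import numpy as np

# One level of the Haar wavelet transform: pairwise averages give the coarse
# approximation, pairwise differences give the detail coefficients. Because
# the basis is orthonormal, the decomposition is perfectly reversible.
def haar_forward(signal):
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    s = np.empty(2 * len(approx))
    s[0::2] = (approx + detail) / np.sqrt(2.0)
    s[1::2] = (approx - detail) / np.sqrt(2.0)
    return s

sig = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_forward(sig)
assert np.allclose(haar_inverse(a, d), sig)  # lossless reconstruction
# Orthonormality preserves energy (a Parseval-type identity):
assert np.isclose(np.sum(sig**2), np.sum(a**2) + np.sum(d**2))
```

Compression schemes exploit the fact that for smooth signals most detail coefficients are near zero and can be discarded with little reconstruction error; recursing on the approximation coefficients gives the full multilevel wavelet transform.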
Daubechies, I. Ten Lectures on Wavelets. SIAM, 1992.
Mallat, S. A Wavelet Tour of Signal Processing. Academic Press, 1999.
Demmel, J. W. Applied Numerical Linear Algebra. SIAM, 1997.
Strikwerda, J. C. Finite Difference Schemes and Partial Differential Equations. Wadsworth & Brooks/Cole, 1989.