Scaling of Symmetric Rank One Method in Solving Unconstrained Optimization Problem


CHAPTER 1

INTRODUCTION

Background of the Study

In mathematics, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from an allowed set and computing the value of the function. In addition, it means solving the problem so that we reach the goal as quickly as possible without wasting resources.

Optimization can also mean deviating from a target by the smallest possible margin. Generally, optimization theory and its techniques for various formulations comprise a large area of applied mathematics. In the simplest case, optimization means finding a good value, or the best available value, of some problem over a defined domain, covering many different types of objective functions and many different types of domains.

In vector calculus, the gradient of a scalar field is a vector field that points in the direction of the greatest rate of increase of the scalar field, and whose magnitude is that rate of increase. In simple terms, the variation in space of any quantity can be represented by a slope; the gradient represents the steepness and the direction of that slope. The gradient of a scalar function f : R^n → R is denoted by ∇f, where ∇ denotes the vector differential operator.
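As a simple illustration (an example chosen here for exposition, not one of the report's test problems), consider f(x_1, x_2) = x_1^2 + 3x_2^2. Its gradient is

\nabla f(x) = \begin{pmatrix} \partial f / \partial x_1 \\ \partial f / \partial x_2 \end{pmatrix} = \begin{pmatrix} 2x_1 \\ 6x_2 \end{pmatrix},

which at the point (1, 1) points in the direction (2, 6), the direction of steepest increase of f.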

The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and was later named after him. The Hessian matrix is the matrix of second derivatives of a multivariate function; in mathematics, it can be viewed as the derivative of the gradient of the function. The Hessian matrix is relevant in many fields, such as economics.

Let a real-valued function f : R^n → R be given. If all the second partial derivatives of f exist and are continuous over the domain of the function, then the Hessian matrix of f is

H(f)_{ij}(x) = D_i D_j f(x) = \frac{\partial^2 f(x)}{\partial x_i \, \partial x_j}

where x = (x_1, x_2, ..., x_n) and D_i is the differentiation operator with respect to the ith argument. Hence

H(f)(x) = \nabla^2 f(x) =
\begin{bmatrix}
\frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\
\frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial^2 f}{\partial x_n \partial x_1} & \frac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n^2}
\end{bmatrix}
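Continuing the illustrative example above, for f(x_1, x_2) = x_1^2 + 3x_2^2 the Hessian is the constant matrix

H(f)(x) = \begin{bmatrix} 2 & 0 \\ 0 & 6 \end{bmatrix},

which is symmetric, as is every Hessian of a function with continuous second partial derivatives.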

The second derivative test determines whether a given critical point of a real function of one variable is a local maximum or a local minimum. Suppose the real function is twice differentiable at a critical point x:

Theorem: Sufficient Conditions for an Extremum.

Let f be twice differentiable at a critical point x. Then:

If f"(x) < 0, then f has a local maximum at x.

If f"(x) > 0, then f has a local minimum at x.

If f"(x) = 0, the test is inconclusive.

For n-dimensional cases, we have the following criteria:

If ∇^2 f(x) is positive definite, then f has a local minimum at x.

If ∇^2 f(x) is negative definite, then f has a local maximum at x.

If ∇^2 f(x) is neither positive definite nor negative definite, then the test is inconclusive.
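To make the n-dimensional criteria concrete, the following minimal Python sketch (an illustration added for exposition; the helper name classify_critical_point is ours) classifies a critical point by the signs of the eigenvalues of the Hessian, since a symmetric matrix is positive definite exactly when all of its eigenvalues are positive:

```python
import numpy as np

def classify_critical_point(hessian, tol=1e-10):
    """Apply the second derivative test via the eigenvalues of the Hessian."""
    eigvals = np.linalg.eigvalsh(hessian)  # eigenvalues of a symmetric matrix
    if np.all(eigvals > tol):
        return "local minimum"   # Hessian is positive definite
    if np.all(eigvals < -tol):
        return "local maximum"   # Hessian is negative definite
    return "inconclusive"        # neither positive nor negative definite

# Hessian of f(x1, x2) = x1^2 + 3*x2^2 (constant for this quadratic)
print(classify_critical_point(np.array([[2.0, 0.0], [0.0, 6.0]])))  # local minimum
```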

Objectives of the Study

In this study, we wish to investigate the effect of scaling within the symmetric rank one method for unconstrained optimization.

The specific objectives are as follows:

To derive scalings for the symmetric rank one update that preserve the positive definiteness of the updated matrix.

To compare the performance of the symmetric rank one method under various choices of scaling.

Outline of the Report

This report is divided into five chapters. Chapter 1 presents the background of the study, which includes an introduction to optimization, the gradient, and the Hessian matrix. We also state the objectives of this study and give an overview of the report.

Chapter 2 gives a brief review of the literature on the Newton method, the quasi-Newton method, and the symmetric rank one method.

We describe the methodology of this study in Chapter 3, where we discuss the methods that are considered in solving the test problems.

In addition, the algorithms for the scaled symmetric rank one method will be provided.

Next, in Chapter 4, we present the calculations and results for a few test problems. We compare the various scalings that we choose and, at the end, discuss which scaling is more effective for solving unconstrained optimization problems.

Finally, we conclude with the results and findings of this study in Chapter 5. In addition, we recommend some areas for further research that may play an important role in future studies.

CHAPTER 2

LITERATURE REVIEW

The Newton method, also called the Newton-Raphson method or Newton iteration, is a root-finding algorithm. However, the Newton method has its drawbacks: it is an expensive method, since each iteration requires the second derivatives of the function, so using it to solve some problems can cost a great deal.

The Newton method assumes that the function can be locally approximated by a quadratic in a region around the optimum, and it uses the first and second derivatives to find the stationary point. In higher dimensions, the Newton method uses the gradient and the Hessian matrix of second derivatives of the function to be minimized.
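As a minimal sketch of this iteration (an illustration, not the implementation used in this report; a practical method would add a line search and other safeguards), the Newton step solves the linear system ∇^2 f(x_k) p_k = ∇f(x_k) and sets x_{k+1} = x_k - p_k:

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
    """Basic Newton iteration: x_{k+1} = x_k - H(x_k)^{-1} grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), g)  # solve H p = g rather than inverting H
        x = x - p
    return x

# Example: f(x1, x2) = x1^2 + 3*x2^2 has its minimizer at the origin
grad = lambda x: np.array([2.0 * x[0], 6.0 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 6.0]])
print(newton_minimize(grad, hess, [1.0, 1.0]))  # -> approximately [0. 0.]
```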

The main purpose of the quasi-Newton method is to find local maxima and minima of functions. The quasi-Newton method is based on the Newton method for finding a stationary point of a function, where the gradient is zero; in mathematics, a stationary point is an input to a function at which the derivative is zero. Rather than computing the exact Hessian as the Newton method does, a quasi-Newton method maintains an approximation to it that is corrected at every iteration, as sketched below.
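For reference, the classical symmetric rank one (SR1) update, the update whose scaling is studied in this report, corrects the approximation B_k by a rank-one matrix so that the secant condition B_{k+1} s_k = y_k holds, where s_k = x_{k+1} - x_k and y_k = ∇f(x_{k+1}) - ∇f(x_k). A minimal sketch follows (the skipping threshold r is a common safeguard choice, not a value taken from this report):

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Symmetric rank one update of the Hessian approximation B.

    s is the step x_{k+1} - x_k; y is the gradient change g_{k+1} - g_k.
    The returned matrix satisfies the secant condition B_new @ s = y.
    """
    v = y - B @ s
    denom = v @ s
    # Skip the update when the denominator is tiny, a standard safeguard
    # against the numerical breakdown SR1 is known for.
    if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
        return B
    return B + np.outer(v, v) / denom
```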

Today, quasi-Newton method is recognized

...

