Choosing the Algorithm

fmincon Algorithms

fmincon has five algorithm options:

  • 'interior-point' (default)

  • 'trust-region-reflective'

  • 'sqp'

  • 'sqp-legacy'

  • 'active-set'

Use optimoptions to set the Algorithm option at the command line.
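
For example, the following minimal sketch selects the 'sqp' algorithm (the objective, starting point, and linear constraint here are illustrative placeholders, not taken from this page):

fun = @(x) x(1)^2 + x(2)^2;      % illustrative objective
x0 = [1; 2];                     % illustrative starting point
A = [1 1]; b = 1;                % illustrative linear inequality A*x <= b
options = optimoptions('fmincon','Algorithm','sqp');
x = fmincon(fun,x0,A,b,[],[],[],[],[],options);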

Recommendations
  • Use the 'interior-point' algorithm first.

    For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.

  • To run an optimization again to obtain more speed on small- to medium-sized problems, try 'sqp' next, and 'active-set' last.

  • Use 'trust-region-reflective' when applicable. Your problem must have an objective function that includes a gradient, and either only bound constraints or only linear equality constraints (but not both).

See Potential Inaccuracy with Interior-Point Algorithms.

Reasoning Behind the Recommendations

  • 'interior-point' handles large, sparse problems, as well as small dense problems. The algorithm satisfies bounds at all iterations, and can recover from NaN or Inf results. It is a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms. The algorithm can use special techniques for large-scale problems. For details, see Interior-Point Algorithm in fmincon options.

  • 'sqp' satisfies bounds at all iterations. The algorithm can recover from NaN or Inf results. It is not a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms.

  • 'sqp-legacy' is similar to 'sqp', but usually is slower and uses more memory.

  • 'active-set' can take large steps, which adds speed. The algorithm is effective on some problems with nonsmooth constraints. It is not a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms.

  • 'trust-region-reflective' requires you to provide a gradient, and allows only bounds or linear equality constraints, but not both. Within these limitations, the algorithm handles both large sparse problems and small dense problems efficiently. It is a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms. The algorithm can use special techniques to save memory usage, such as a Hessian multiply function. For details, see Trust-Region-Reflective Algorithm in fmincon options.

For descriptions of the algorithms, see Constrained Nonlinear Optimization Algorithms.

fsolve Algorithms

fsolve has three algorithms:

  • 'trust-region-dogleg' (default)

  • 'trust-region'

  • 'levenberg-marquardt'

Use optimoptions to set the Algorithm option at the command line.

Recommendations
  • Use the 'trust-region-dogleg' algorithm first.

    For help if fsolve fails, see When the Solver Fails or When the Solver Might Have Succeeded.

  • To solve equations again if you have a Jacobian multiply function, or want to tune the internal algorithm (see Trust-Region Algorithm in fsolve options), try 'trust-region'.

  • Try timing all the algorithms, including 'levenberg-marquardt', to find the algorithm that works best on your problem.
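
As a rough sketch of the timing suggestion, the following code solves one system with each algorithm and reports the time taken (the system F and starting point x0 are illustrative, not from this page):

F = @(x) [2*x(1) - x(2) - exp(-x(1)); -x(1) + 2*x(2) - exp(-x(2))];
x0 = [-5; -5];
algs = {'trust-region-dogleg','trust-region','levenberg-marquardt'};
for k = 1:numel(algs)
    opts = optimoptions('fsolve','Algorithm',algs{k},'Display','off');
    tic
    [~,fval,exitflag] = fsolve(F,x0,opts);
    fprintf('%-22s  time %.3g s  norm(F) %.2e  exitflag %d\n', ...
        algs{k}, toc, norm(fval), exitflag);
end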

Reasoning Behind the Recommendations

  • 'trust-region-dogleg' is the only algorithm that is specially designed to solve nonlinear equations. The others attempt to minimize the sum of squares of the function.

  • The 'trust-region' algorithm is effective on sparse problems. It can use special techniques such as a Jacobian multiply function for large-scale problems.

For descriptions of the algorithms, see Equation Solving Algorithms.

fminunc Algorithms

fminunc has two algorithms:

  • 'quasi-newton' (default)

  • 'trust-region'

Use optimoptions to set the Algorithm option at the command line.

Recommendations
  • If your objective function includes a gradient, set the Algorithm option to 'trust-region' and set the SpecifyObjectiveGradient option to true.

  • Otherwise, use the default 'quasi-newton' algorithm.
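
For example, a minimal sketch with a gradient-supplying objective (the Rosenbrock function and the helper name rosenbrockwithgrad are illustrative only):

opts = optimoptions('fminunc','Algorithm','trust-region', ...
    'SpecifyObjectiveGradient',true);
x = fminunc(@rosenbrockwithgrad,[-1; 2],opts);

function [f,g] = rosenbrockwithgrad(x)
% Objective value and, when requested, its gradient
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
if nargout > 1
    g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
          200*(x(2) - x(1)^2)];
end
end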

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.

For descriptions of the algorithms, see Unconstrained Nonlinear Optimization Algorithms.

Least Squares Algorithms

lsqlin

lsqlin has three algorithms:

  • 'interior-point' (default)

  • 'trust-region-reflective'

  • 'active-set'

Use optimoptions to set the Algorithm option at the command line.

Recommendations
  • Try 'interior-point' first.

    Tip

    For better performance when your input matrix C has a large fraction of nonzero entries, specify C as an ordinary double matrix. Similarly, for better performance when C has relatively few nonzero entries, specify C as sparse. For data type details, see Sparse Matrices. You can also set the internal linear algebra type by using the 'LinearSolver' option. A brief sketch follows this list.

  • If you have no constraints or only bound constraints, and want higher accuracy, more speed, or want to use a Jacobian Multiply Function with Linear Least Squares, try 'trust-region-reflective'.

  • If you have a large number of linear constraints and not a large number of variables, try 'active-set'.
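
The tip above about the storage type of C, as a brief sketch (the matrix sizes and densities are illustrative):

% Dense C (many nonzeros): keep it as an ordinary double matrix
C = rand(500,300);  d = rand(500,1);
x = lsqlin(C,d,[],[],[],[],zeros(300,1),[]);

% Mostly zero C: store it as sparse, and optionally request sparse linear algebra
Cs = sprand(5000,3000,0.001);  ds = rand(5000,1);
opts = optimoptions('lsqlin','LinearSolver','sparse');
xs = lsqlin(Cs,ds,[],[],[],[],zeros(3000,1),[],[],opts);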

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.

See Potential Inaccuracy with Interior-Point Algorithms.

For descriptions of the algorithms, see Least-Squares (Model Fitting) Algorithms.

lsqcurvefit and lsqnonlin

lsqcurvefit and lsqnonlin have three algorithms:

  • 'trust-region-reflective' (default for unconstrained or bound-constrained problems)

  • 'levenberg-marquardt'

  • 'interior-point' (default for problems with linear or nonlinear constraints)

Use optimoptions to set the Algorithm option at the command line.

Recommendations
  • For problems with bound constraints only, try 'trust-region-reflective' or 'levenberg-marquardt' first.

  • If your bound-constrained problem is underdetermined (fewer equations than dimensions), try 'levenberg-marquardt' first.

  • If your problem has linear or nonlinear constraints, use 'interior-point'.
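
For example, a minimal sketch of matching the algorithm to the problem structure (the residual functions, starting points, and bounds are illustrative):

% Bound-constrained residuals: 'trust-region-reflective'
fun = @(x) [10*(x(2) - x(1)^2); 1 - x(1)];
opts = optimoptions('lsqnonlin','Algorithm','trust-region-reflective');
x = lsqnonlin(fun,[2; 2],[-5; -5],[5; 5],opts);

% Underdetermined system (fewer residuals than variables): try 'levenberg-marquardt'
fun2 = @(x) x(1)^2 + x(2)^2 - 1;
opts2 = optimoptions('lsqnonlin','Algorithm','levenberg-marquardt');
x2 = lsqnonlin(fun2,[0.5; 0.5],[],[],opts2);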

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.

For descriptions of the algorithms, see Least-Squares (Model Fitting) Algorithms.

Linear Programming Algorithms

linprog has four algorithms:

  • 'dual-simplex-highs' (default)

  • 'dual-simplex-legacy'

  • 'interior-point'

  • 'interior-point-legacy'

Use optimoptions to set the Algorithm option at the command line.

Recommendations

Use the 'dual-simplex-highs' algorithm, the 'dual-simplex-legacy' algorithm, or the 'interior-point' algorithm first. It is difficult to predict which algorithm will work most effectively on a given problem.
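
One way to decide is to try more than one algorithm on the same problem, as in this brief sketch (f, A, b, and lb are illustrative data; the 'dual-simplex-highs' name follows the list above, and in earlier releases the corresponding value is 'dual-simplex'):

f = [-1; -2];  A = [1 1; 2 1];  b = [4; 5];  lb = [0; 0];
for alg = {'dual-simplex-highs','interior-point'}
    opts = optimoptions('linprog','Algorithm',alg{1},'Display','off');
    tic
    [x,fval] = linprog(f,A,b,[],[],lb,[],opts);
    fprintf('%-20s  fval %.4f  time %.3g s\n', alg{1}, fval, toc);
end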

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.

See Potential Inaccuracy with Interior-Point Algorithms.

Reasoning Behind the Recommendations

  • Often, the 'dual-simplex-highs', 'dual-simplex-legacy', and 'interior-point' algorithms are fast, and use relatively little memory.

  • The 'interior-point-legacy' algorithm is similar to 'interior-point', but 'interior-point-legacy' can be slower, less robust, or use more memory.

For descriptions of the algorithms, see Linear Programming Algorithms.

Mixed-Integer Linear Programming Algorithms

intlinprog has two algorithms:

  • 'highs' (default)

  • 'legacy'

Use optimoptions to set the Algorithm option at the command line.

Recommendations

Use the 'highs' algorithm first.

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.

For descriptions of the algorithms, see Mixed-Integer Linear Programming (MILP) Algorithms.

Quadratic Programming Algorithms

quadprog has three algorithms:

  • 'interior-point-convex' (default)

  • 'trust-region-reflective'

  • 'active-set'

Use optimoptions to set the Algorithm option at the command line.

Recommendations
  • If you have a convex problem, or if you don't know whether your problem is convex, use 'interior-point-convex'.

    Tip

    For better performance when your Hessian matrix H has a large fraction of nonzero entries, specify H as an ordinary double matrix. Similarly, for better performance when H has relatively few nonzero entries, specify H as sparse. For data type details, see Sparse Matrices. You can also set the internal linear algebra type by using the 'LinearSolver' option.

  • If you have a nonconvex problem with only bounds, or with only linear equalities, use 'trust-region-reflective'.

  • If you have a positive semidefinite problem with a large number of linear constraints and not a large number of variables, try 'active-set'.
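
For example, a minimal convex QP using the default 'interior-point-convex' algorithm (H, f, A, and b are illustrative data):

H = [2 0; 0 4];  f = [-4; -6];        % convex quadratic objective 0.5*x'*H*x + f'*x
A = [1 1];  b = 3;                    % linear inequality A*x <= b
opts = optimoptions('quadprog','Algorithm','interior-point-convex','Display','off');
[x,fval] = quadprog(H,f,A,b,[],[],[],[],[],opts);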

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.

See Potential Inaccuracy with Interior-Point Algorithms.

For descriptions of the algorithms, see Quadratic Programming Algorithms.

Large-Scale vs. Medium-Scale Algorithms

An optimization algorithm is large scale when it uses linear algebra that does not need to store, nor operate on, full matrices. This may be done internally by storing sparse matrices, and by using sparse linear algebra for computations whenever possible. Furthermore, the internal algorithms either preserve sparsity, such as a sparse Cholesky decomposition, or do not generate matrices, such as a conjugate gradient method.

In contrast, medium-scale methods internally create full matrices and use dense linear algebra. If a problem is sufficiently large, full matrices take up a significant amount of memory, and the dense linear algebra may require a long time to execute.

Don't let the name “large scale” mislead you; you can use a large-scale algorithm on a small problem. Furthermore, you do not need to specify any sparse matrices to use a large-scale algorithm. Choose a medium-scale algorithm to access extra functionality, such as additional constraint types, or possibly for better performance.

Potential Inaccuracy with Interior-Point Algorithms

Interior-point algorithms in fmincon, quadprog, lsqlin, and linprog have many good characteristics, such as low memory usage and the ability to solve large problems quickly. However, their solutions can be slightly less accurate than those from other algorithms. The reason for this potential inaccuracy is that the (internally calculated) barrier function keeps iterates away from inequality constraint boundaries.

For most practical purposes, this inaccuracy is quite small.

To reduce the inaccuracy, try to:

  • Rerun the solver with smaller StepTolerance, OptimalityTolerance, and possibly ConstraintTolerance tolerances (but keep the tolerances sensible). See Tolerances and Stopping Criteria.

  • Run a different algorithm, starting from the interior-point solution. This can fail, because some algorithms can use excessive memory or time, and all linprog and some quadprog algorithms do not accept an initial point.

For example, try to minimize the function x when bounded below by 0. Using the fmincon default interior-point algorithm:

options = optimoptions(@fmincon,'Algorithm','interior-point','Display','off');
x = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)

x = 2.0000e-08

Using the fmincon sqp algorithm:

options.Algorithm = 'sqp';
x2 = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)

x2 = 0

Similarly, solve the same problem using the linprog interior-point-legacy algorithm:

opts = optimoptions(@linprog,'Display','off','Algorithm','interior-point-legacy');
x = linprog(1,[],[],[],[],0,[],1,opts)

x = 2.0833e-13

Using the linprog dual-simplex algorithm:

opts.Algorithm = 'dual-simplex';
x2 = linprog(1,[],[],[],[],0,[],1,opts)

x2 = 0

In these cases, the interior-point algorithms are less accurate, but the answers are quite close to the correct answer.
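
As a sketch of the first suggestion above, you could also rerun the interior-point solve with tighter tolerances, warm-starting from the earlier interior-point result (the tolerance values here are illustrative, not a recommendation):

opts3 = optimoptions('fmincon','Algorithm','interior-point','Display','off', ...
    'StepTolerance',1e-14,'OptimalityTolerance',1e-10);
x3 = fmincon(@(x)x,2.0000e-08,[],[],[],[],0,[],[],opts3);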
