
Optimisation Using Scipy

In the following script:

import numpy as np
from scipy.optimize import minimize

a = np.array(range(4))
b = np.array(range(4, 8))

def sm(x, a, b):
    sm = np.zeros(1)
    a = a * np.exp(

Solution 1:

Your function sm appears to be unbounded: as you increase x, sm becomes ever more negative, which is why the optimiser drives it towards -inf.

Re: comment - if you want to make sm() as close to zero as possible, modify the last line in your function definition to read return abs(sm).

This minimises the absolute value of the function, bringing it as close to zero as possible; a sketch of the full modified script is given below.
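Because the question's definition of sm is truncated above, the following is only a minimal sketch: the body of the objective (accumulating b - a * np.exp(x)) and the starting point x0 are assumptions made for illustration, not the original code. The key point is simply where return abs(sm) goes.

import numpy as np
from scipy.optimize import minimize

a = np.array(range(4))
b = np.array(range(4, 8))

def sm(x, a, b):
    # assumed reconstruction of the truncated objective:
    # scale a by exp(x) element-wise and accumulate the gap to b
    sm = np.zeros(1)
    sm += np.sum(b - a * np.exp(x))
    # returning the absolute value turns "as close to zero as possible"
    # into a bounded minimisation target instead of one that runs to -inf
    return abs(sm)

x0 = np.zeros(4)  # assumed starting guess; the original x0 is not shown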

Result for your example:

>>> opt = minimize(sm, x0, args=(a, b), method='nelder-mead', options={'xtol': 1e-8, 'disp': True})
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 153
         Function evaluations: 272
>>> opt
  status: 0
    nfev: 272
 success: True
     fun: 2.8573836630130245e-09
       x: array([-1.24676625,  0.65786454,  0.44383101,  1.73177358])
 message: 'Optimization terminated successfully.'
     nit: 153

Solution 2:

Modifying FuzzyDuck's proposal, I replaced the accumulation with sm += ((b - a)**2), which returns the desired result. A sketch of this variant is shown below.
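
Again, since the original loop body is not shown, the sketch below is an assumption: np.sum is used so the squared residuals collapse to a single accumulated value. Squaring each term keeps the objective non-negative, so it is bounded below and cannot run off to -inf.

def sm(x, a, b):
    # sum-of-squares variant: each term (b - a*exp(x))**2 is non-negative
    a = a * np.exp(x)
    sm = np.zeros(1)
    sm += np.sum((b - a) ** 2)
    return sm

It is called exactly like the minimize() example above, e.g. minimize(sm, x0, args=(a, b), method='nelder-mead').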

