
Bending of a simply supported beam

  • simply supported beam under a uniform load (self-weight and imposed load)
  • steel I-section beam, grade S235
In [1]:
from __future__ import print_function

import openturns as ot
from openturns.viewer import View
import math as m
from collections import OrderedDict
import re
import matplotlib.pyplot as plt
import numpy as np
In [2]:
#%matplotlib nbagg
In [3]:
def plot_grid(marginals, inp, out, DP=None):
    # scatter plots of the input samples: green = safe (G >= 0), red = failure (G < 0);
    # only the first pair of variables is plotted, the full pairwise loops are commented out
    n = len(marginals)
    fig, ax = plt.subplots(nrows=1, ncols=int((n-1)*n/2), figsize=(10*n, 4), tight_layout=True)
    k = 0
    for i in [0]:  # range(n-1):
        for j in [1]:  # range(i+1, n):
            mask = out[:, 0] < 0
            ax[k].plot(inp[:, i][~mask], inp[:, j][~mask], 'g.', ms=1)
            ax[k].plot(inp[:, i][mask], inp[:, j][mask], 'r.', ms=1)
            ax[k].set_xlabel(marginals[i].getDescription()[0])
            ax[k].set_ylabel(marginals[j].getDescription()[0])
            if DP is not None:  # mark the design point, if provided
                ax[k].plot(DP[i], DP[j], 'co')
            k += 1

Calculation

  • ultimate limit state $$ Z = R - E $$ $$ E = M_E = \dfrac{(g + q) l^2}{8} $$ $$ R = M_R = f_y w_\text{pl} $$

  • serviceability limit state $$ Z_w = R - E $$ $$ R = w_\text{lim} = \dfrac{l}{200} $$ $$ E = w = \dfrac{5 (g + q) l^4}{384 E I_y} $$ (a quick deterministic check of both margins at the mean values is sketched below)
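Before running the probabilistic analysis, a quick deterministic check (a minimal sketch in plain Python; the numbers simply restate the mean values defined further below, and the factor of 1000 is assumed to be the unit conversion used in the limit-state expressions) confirms that both margins are positive at the mean point:

# mean values, identical to mu_* defined below
q, g, l = 4.0, 2.9, 3.25
wpl, fy = 6.36e-5, 265.0
e, iy = 220.0, 3.27e-3

# ultimate limit state: bending resistance minus bending moment
Z_M = fy * wpl * 1000 - (g + q) * l**2 / 8.0
# serviceability limit state: deflection limit minus midspan deflection
Z_w = l / 200.0 * 1000 - 5 / 384.0 * (g + q) * l**4 / e / iy

print(Z_M, Z_w)  # both positive (approx. 7.74 and 2.32, matching the outputs below)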

Random variables

All random variables are assumed to be mutually independent.

Solution using openturns

In [4]:
dim = 7 # number of random variables

# parameters of the input random variables (mean, coefficient of variation, standard deviation)
mu_q = 4
cov_q = 0.15
std_q = mu_q * cov_q

mu_g = 2.9
cov_g = 0.075
std_g = mu_g * cov_g

mu_l = 3.25
cov_l = 0.001
std_l = mu_l * cov_l

mu_wpl = 6.36e-5
cov_wpl = 0.04
std_wpl = mu_wpl * cov_wpl

mu_fy = 265
cov_fy = 0.07
std_fy = mu_fy * cov_fy

mu_e = 220
cov_e = 0.03
std_e = mu_e * cov_e

mu_iy = 3.27e-3
cov_iy = 0.001
std_iy = mu_iy * cov_iy

Definition of the safety margin

Variant 1 - more readable, but slower

In [5]:
def regularFunc(x):
    # ultimate limit state margin G_M = M_R - M_E; only the first five of the seven inputs are used
    q = x[0]
    g = x[1]
    l = x[2]
    wpl = x[3]
    fy = x[4]

    R = fy * wpl * 1000      # bending resistance M_R
    E = (g + q) * l**2 / 8   # bending moment M_E
    return [R - E]

limitStatePy = ot.PythonFunction(dim, 1, regularFunc)

Variant 2 - faster

In [6]:
GM = '''(fy * wpl * 1000) - ((g + q) * l*l / 8.)'''               # ultimate limit state (bending)
Gw = '''(l / 200. * 1000) - (5/384.*  (g + q) * l^4 / e / iy)'''  # serviceability limit state (deflection)

G = Gw  # the serviceability limit state is analysed below
var_list = ['q', 'g', 'l', 'wpl', 'fy', 'e', 'iy']

limitState = ot.SymbolicFunction(var_list, [G])
limitState = ot.MemoizeFunction(limitState)  # store the input/output history for the plots below

Testing the functions at the mean values

Note that limitState evaluates the serviceability margin $Z_w$ (G = Gw), while limitStatePy evaluates the ultimate limit state margin $Z$, so the two printed values differ.

In [7]:
x = [mu_q, mu_g, mu_l, mu_wpl, mu_fy, mu_e, mu_iy]
print('x =', x)
%timeit limitState(x)
print('G(x) =', limitState(x))
%timeit limitStatePy(x)
print('G(x) =', limitStatePy(x))
x = [4, 2.9, 3.25, 6.36e-05, 265, 220, 0.00327]
3.05 µs ± 514 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
G(x) = [2.3168]
7.55 µs ± 1.18 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
G(x) = [7.74384]

Definition of the probability distributions of the random variables

In [8]:
q_dist = ot.LogNormalMuSigma(mu_q, std_q).getDistribution()
q_dist.setDescription(['q'])

g_dist = ot.Normal(mu_g, std_g)
g_dist.setDescription(['g'])

l_dist = ot.Normal(mu_l, std_l)
l_dist.setDescription(['l'])

wpl_dist = ot.Normal(mu_wpl, std_wpl)
wpl_dist.setDescription(['wpl'])

fy_dist = ot.LogNormalMuSigma(mu_fy, std_fy).getDistribution()
fy_dist.setDescription(['fy'])

e_dist = ot.LogNormalMuSigma(mu_e, std_e).getDistribution()
e_dist.setDescription(['e'])

iy_dist = ot.Normal(mu_iy, std_iy)
iy_dist.setDescription(['iy'])

marginals = [q_dist, g_dist, l_dist, wpl_dist, fy_dist, e_dist, iy_dist]
In [9]:
fig, ax = plt.subplots(ncols=3, nrows=3, figsize=(10,6), tight_layout=True)
View(q_dist.drawPDF(), axes=[ax[0, 0]], legend_kwargs=dict(loc='best'))
View(g_dist.drawPDF(), axes=[ax[0, 1]], legend_kwargs=dict(loc='best'))
View(l_dist.drawPDF(), axes=[ax[0, 2]], legend_kwargs=dict(loc='best'))
View(wpl_dist.drawPDF(), axes=[ax[1, 0]], legend_kwargs=dict(loc='best'))
View(fy_dist.drawPDF(), axes=[ax[1, 1]], legend_kwargs=dict(loc='best'));
View(e_dist.drawPDF(), axes=[ax[1, 2]], legend_kwargs=dict(loc='best'));
View(iy_dist.drawPDF(), axes=[ax[2, 0]], legend_kwargs=dict(loc='best'));
[figure: PDFs of the marginal distributions q, g, l, wpl, fy, e, iy]

Joint probability distribution

In [10]:
# Create a copula : IndependentCopula (no correlation)
aCopula = ot.IndependentCopula(dim)
aCopula.setName('Independent copula')

# Instantiate one distribution object
myDistribution = ot.ComposedDistribution(marginals, aCopula)
myDistribution.setName('myDist')
In [11]:
RS = ot.CorrelationMatrix(dim)
# RS[2, 3] = -0.2 # change correlation
# Evaluate the correlation matrix of the Normal copula from RS
R = ot.NormalCopula.GetCorrelationFromSpearmanCorrelation(RS)
# Create the Normal copula parametrized by R
copula = ot.NormalCopula(R)
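The Normal copula built above is not used further; the analysis keeps the independent copula. For illustration only, a correlated joint distribution could be assembled as follows (a minimal sketch assuming the commented-out Spearman correlation of -0.2 between l and wpl; the names RS_corr, R_corr and correlatedDistribution are hypothetical):

# hypothetical correlated alternative to myDistribution (not used in the rest of the notebook)
RS_corr = ot.CorrelationMatrix(dim)
RS_corr[2, 3] = -0.2  # illustrative Spearman correlation between l and wpl
R_corr = ot.NormalCopula.GetCorrelationFromSpearmanCorrelation(RS_corr)
correlatedDistribution = ot.ComposedDistribution(marginals, ot.NormalCopula(R_corr))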

Random vectors

  • $G$ the safety margin
  • myEvent the failure event $G < 0$
In [12]:
# We create a 'usual' RandomVector from the Distribution
vect = ot.RandomVector(myDistribution)

# We create a composite random vector
G = ot.CompositeRandomVector(limitState, vect) # to use variant 1, pass limitStatePy instead

# We create an Event from this RandomVector
myEvent = ot.Event(G, ot.Less(), 0.0)

Monte Carlo method

In [13]:
limitState.clearHistory()
In [14]:
cv = 0.05
NbSim = 100000

experiment = ot.MonteCarloExperiment()
algoMC = ot.ProbabilitySimulationAlgorithm(myEvent, experiment)
algoMC.setMaximumOuterSampling(NbSim)
#algoMC.setBlockSize(4)
algoMC.setMaximumCoefficientOfVariation(cv)
# For statistics about the algorithm
initialNumberOfCall = limitState.getEvaluationCallsNumber()

# Perform the analysis:
%time algoMC.run()
CPU times: user 87.5 ms, sys: 0 ns, total: 87.5 ms
Wall time: 87.5 ms
In [15]:
# Results:
resultMC = algoMC.getResult()
pf_MC = resultMC.getProbabilityEstimate()
cov_MC = resultMC.getCoefficientOfVariation()
print('MonteCarlo result=', resultMC)
print('Number of executed iterations =', resultMC.getOuterSampling())
print('Number of calls to the limit state =', limitState.getEvaluationCallsNumber() - initialNumberOfCall)
print('Pf = ', pf_MC)
print('CV =', cov_MC)
algoMC.drawProbabilityConvergence()
MonteCarlo result= probabilityEstimate=5.135829e-02 varianceEstimate=6.584757e-06 standard deviation=2.57e-03 coefficient of variation=5.00e-02 confidenceLength(0.95)=1.01e-02 outerSampling=7399 blockSize=1
Number of executed iterations = 7399
Number of calls to the limit state = 7399
Pf =  0.05135829166103531
CV = 0.049964242696757005
Out[15]:
[figure: Monte Carlo probability convergence]
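For reference, the 95% confidence interval reported above can also be recovered from the result object (a minimal sketch; getConfidenceLength returns the full interval length):

ci = resultMC.getConfidenceLength(0.95)  # full length of the 95% confidence interval
print('Pf 95% CI = [%.5f, %.5f]' % (pf_MC - ci / 2.0, pf_MC + ci / 2.0))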
In [16]:
inpMC = np.asarray(limitState.getInputHistory())
outMC = np.asarray(limitState.getOutputHistory())
#display(inp)
#display(out)
plot_grid(marginals, inpMC, outMC)
[figure: q vs g samples, green = safe, red = failure (Monte Carlo)]

FORM method - First Order Reliability Method

In [17]:
limitState.clearHistory()
In [18]:
# Using FORM analysis

# We create a NearestPoint algorithm
myCobyla = ot.Cobyla()
# Resolution options:
eps = 1e-3
neval = 1000

myCobyla.setMaximumEvaluationNumber(neval)
myCobyla.setMaximumAbsoluteError(eps)
myCobyla.setMaximumRelativeError(eps)
myCobyla.setMaximumResidualError(eps)
myCobyla.setMaximumConstraintError(eps)

# For statistics about the algorithm
initialNumberOfCall = limitState.getEvaluationCallsNumber()

# We create a FORM algorithm
# The first parameter is a NearestPointAlgorithm
# The second parameter is an event
# The third parameter is a starting point for the design point search

algoFORM = ot.FORM(myCobyla, myEvent, myDistribution.getMean())

# Perform the analysis:
algoFORM.run()
In [19]:
# Results:
resultFORM = algoFORM.getResult()
pf_FORM = resultFORM.getEventProbability()
beta_FORM = resultFORM.getHasoferReliabilityIndex()
print('Number of calls to the limit state =', limitState.getEvaluationCallsNumber() - initialNumberOfCall)
print('beta =', beta_FORM)
print('Pf =', pf_FORM)

standardSpaceDesignPoint = resultFORM.getStandardSpaceDesignPoint()

# Graphical result output
resultFORM.drawImportanceFactors()
Number of calls to the limit state = 112
beta = 1.6156295335153799
Pf = 0.053087217296660286
Out[19]:
[figure: FORM importance factors]
In [20]:
inpFORM = np.asarray(limitState.getInputHistory())
outFORM = np.asarray(limitState.getOutputHistory())
#display(inp)
#display(out)

DP = resultFORM.getPhysicalSpaceDesignPoint()
plot_grid(marginals, inpFORM, outFORM, DP=DP)
[figure: q vs g samples with the FORM design point]

LHS method - Latin Hypercube Sampling

In [21]:
limitState.clearHistory()
#hist = ot.HistoryStrategy(ot.Full())
In [22]:
cv = 0.05
nsim = 100000

algoLHS = ot.LHS(myEvent)
#algoLHS.setConvergenceStrategy(hist)
algoLHS.setBlockSize(4)
algoLHS.setMaximumCoefficientOfVariation(cv)
algoLHS.setMaximumOuterSampling(nsim)

# For statistics about the algorithm
initialNumberOfCall = limitState.getEvaluationCallsNumber()

%time algoLHS.run()
CPU times: user 390 ms, sys: 16 ms, total: 406 ms
Wall time: 404 ms
In [23]:
# retrieve results
resultLHS = algoLHS.getResult()
pf_LHS = resultLHS.getProbabilityEstimate()
cov_LHS = resultLHS.getCoefficientOfVariation()

print('Number of calls to the limit state =', limitState.getEvaluationCallsNumber() - initialNumberOfCall)
print('Number of executed iterations =', resultLHS.getOuterSampling())
print('CoV=', cov_LHS)
print('Pf=', pf_LHS)
#algoLHS.getConvergenceStrategy().getSamples()
algoLHS.drawProbabilityConvergence()
Number of calls to the limit state = 9036
Number of executed iterations = 2259
CoV= 0.04996788975433318
Pf= 0.05245683930942897
Out[23]:
[figure: LHS probability convergence]
In [24]:
inpLHS = np.asarray(limitState.getInputHistory())
outLHS = np.asarray(limitState.getOutputHistory())
#display(inp)
#display(out)
plot_grid(marginals, inpLHS, outLHS)
[figure: q vs g samples (LHS)]

Importance Sampling method

  • the design point from the FORM analysis is used to locate the failure region
  • the weighting function myImportance is a multivariate normal distribution centred at the standard-space design point, with standard deviation impstd in every dimension
In [25]:
limitState.clearHistory()
In [26]:
impstd = 1.0
myImportance = ot.Normal(standardSpaceDesignPoint, [impstd] * dim, ot.CorrelationMatrix(dim))

# Create a simulation algorithm
cv = 0.05
nsim = 10000

experiment = ot.ImportanceSamplingExperiment(myImportance)
algoIS = ot.ProbabilitySimulationAlgorithm(ot.StandardEvent(myEvent), experiment)
algoIS.setMaximumCoefficientOfVariation(cv)
algoIS.setMaximumOuterSampling(nsim)
algoIS.setConvergenceStrategy(ot.Full())

# For statistics about the algorithm
initialNumberOfCall = limitState.getEvaluationCallsNumber()

algoIS.run()
In [27]:
# retrieve results
resultIS = algoIS.getResult()
pf_IS = resultIS.getProbabilityEstimate()
cov_IS = resultIS.getCoefficientOfVariation()
print('Number of calls =', resultIS.getOuterSampling())
print('CoV=', cov_IS)
print('Pf=', pf_IS)
Number of calls = 736
CoV= 0.049982151247421176
Pf= 0.05612106276589862
In [28]:
inpIS = np.asarray(limitState.getInputHistory())
outIS = np.asarray(limitState.getOutputHistory())
#display(inp)
#display(out)
DP = resultFORM.getPhysicalSpaceDesignPoint()
plot_grid(marginals, inpIS, outIS, DP=DP)
[figure: q vs g samples with the design point (importance sampling)]

SORM method - Second Order Reliability Method

In [29]:
limitState.clearHistory()
In [30]:
# Using SORM analysis

# We create a NearestPoint algorithm
myCobyla = ot.Cobyla()
# Resolution options:
eps = 1e-3
niter = 1000

myCobyla.setMaximumEvaluationNumber(niter)
myCobyla.setMaximumAbsoluteError(eps)
myCobyla.setMaximumRelativeError(eps)
myCobyla.setMaximumResidualError(eps)
myCobyla.setMaximumConstraintError(eps)

# For statistics about the algorithm
initialNumberOfCall = limitState.getEvaluationCallsNumber()

# We create a SORM algorithm
# The first parameter is a NearestPointAlgorithm
# The second parameter is an event
# The third parameter is a starting point for the design point search

algoSORM = ot.SORM(myCobyla, myEvent, myDistribution.getMean())

# Perform the analysis:
algoSORM.run()
In [31]:
# Results:
resultSORM = algoSORM.getResult()
beta_SORM = resultSORM.getGeneralisedReliabilityIndexBreitung()
pf_SORM = resultSORM.getEventProbabilityBreitung()
print('Number of calls to the limit state =', limitState.getEvaluationCallsNumber() - initialNumberOfCall)
print('beta =', beta_SORM)
print('Pf =', pf_SORM)
print('Pf =', resultSORM.getEventProbabilityHohenBichler())
print('Pf =', resultSORM.getEventProbabilityTvedt())

# Graphical result output
resultSORM.drawImportanceFactors()
Number of calls to the limit state = 112
beta = 1.609992043944165
Pf = 0.053699796595093864
Pf = 0.053865443525820254
Pf = 0.05385970797959911
Out[31]:
[figure: SORM importance factors]
In [32]:
inpSORM = np.asarray(limitState.getInputHistory())
outSORM = np.asarray(limitState.getOutputHistory())
#display(inp)
#display(out)
DP = resultSORM.getPhysicalSpaceDesignPoint()
plot_grid(marginals, inpSORM, outSORM, DP=DP)
[figure: q vs g samples with the design point (SORM)]

Directional Sampling

In [33]:
limitState.clearHistory()
In [34]:
# Using Directional sampling

# Resolution options:
cv = 0.05
NbSim = 100000

algoDS = ot.DirectionalSampling(myEvent)
algoDS.setMaximumOuterSampling(NbSim)
algoDS.setBlockSize(1)
algoDS.setMaximumCoefficientOfVariation(cv)
# For statistics about the algorithm
initialNumberOfCall = limitState.getEvaluationCallsNumber()

# Perform the analysis:
algoDS.run()
In [35]:
# Results:
resultDS = algoDS.getResult()
pf_DS = resultDS.getProbabilityEstimate()
cov_DS = resultDS.getCoefficientOfVariation()
print('Number of executed iterations =', resultDS.getOuterSampling())
print('Number of calls to the limit state =', limitState.getEvaluationCallsNumber() - initialNumberOfCall)
print('Pf = ', pf_DS)
print('CV =', cov_DS)
algoDS.drawProbabilityConvergence()
Number of executed iterations = 1420
Number of calls to the limit state = 25395
Pf =  0.054261977258449334
CV = 0.049977751499418986
Out[35]:
[figure: directional sampling probability convergence]
In [36]:
inpDS = np.asarray(limitState.getInputHistory())
outDS = np.asarray(limitState.getOutputHistory())
#display(inp)
#display(out)
plot_grid(marginals, inpDS, outDS)
[figure: q vs g samples (directional sampling)]

Fitting a distribution to the MC sample of G

In [37]:
factories = [ot.LogNormalFactory(), ot.NormalFactory(), ot.WeibullFactory()]
In [38]:
estimated_distribution, test_result = ot.FittingTest.BestModelKolmogorov(outMC, factories)
test_result
Out[38]:

class=TestResult name=Unnamed type=Kolmogorov WeibullMin binaryQualityMeasure=false p-value threshold=0.5 p-value=0.5 statistic=0.0131433 description=[WeibullMin(beta = 6.66715, alpha = 5.23743, gamma = -3.8253) vs sample Unnamed]

In [39]:
print('mu =', estimated_distribution.getMean())
print('sigma =', estimated_distribution.getStandardDeviation())
print('gamma_1 =', estimated_distribution.getSkewness())
print('gamma_2 =', estimated_distribution.getKurtosis())

pf_fit_MC = estimated_distribution.computeCDF(0)
print('pf =', pf_fit_MC)
mu = [2.31265]
sigma = [1.34752]
gamma_1 = [-0.285816]
gamma_2 = [2.91656]
pf = 0.05303481728796552
In [40]:
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(8,4), tight_layout=True)
View(ot.HistogramFactory().build(outMC).drawPDF(), axes=[ax], legend_kwargs=dict(loc='best'), plot_kwargs=dict(label='G histogram'))
View(estimated_distribution.drawPDF(), axes=[ax], legend_kwargs=dict(loc='best'), plot_kwargs={'color':'b', 'label': 'fitted PDF'})
ax.set_yscale('log')
ax.set_ylim(1e-6, 1)
Out[40]:
(1e-06, 1)
[figure: histogram of G and fitted PDF (MC sample), log scale]
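As a cross-check of the fitted tail, the failure probability can also be estimated directly as the empirical fraction of negative margins in the memoized MC sample (a minimal sketch; it should reproduce pf_MC):

# empirical failure fraction of the stored MC sample, for comparison with pf_fit_MC
print('empirical pf =', (outMC[:, 0] < 0).mean())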

Fitting a distribution to the LHS sample of G

In [41]:
factories = [ot.LogNormalFactory(), ot.NormalFactory(), ot.WeibullFactory()]
In [42]:
estimated_distribution, test_result = ot.FittingTest.BestModelKolmogorov(outLHS, factories)
test_result
Out[42]:

class=TestResult name=Unnamed type=Kolmogorov WeibullMin binaryQualityMeasure=true p-value threshold=0.5 p-value=1 statistic=0.0115075 description=[WeibullMin(beta = 7.20134, alpha = 5.68172, gamma = -4.36622) vs sample Unnamed]

In [43]:
print('mu =', estimated_distribution.getMean())
print('sigma =', estimated_distribution.getStandardDeviation())
print('gamma_1 =', estimated_distribution.getSkewness())
print('gamma_2 =', estimated_distribution.getKurtosis())

pf_fit_LHS = estimated_distribution.computeCDF(0)
print('pf =', pf_fit_LHS)
mu = [2.29429]
sigma = [1.35702]
gamma_1 = [-0.339149]
gamma_2 = [2.98576]
pf = 0.056589190660749814
In [44]:
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(8,4), tight_layout=True)
View(ot.HistogramFactory().build(outLHS).drawPDF(), axes=[ax], legend_kwargs=dict(loc='best'), plot_kwargs=dict(label='G histogram'))
View(estimated_distribution.drawPDF(), axes=[ax], legend_kwargs=dict(loc='best'), plot_kwargs={'color':'b', 'label': 'fitted PDF'})
ax.set_yscale('log')
ax.set_ylim(1e-6, 1)
Out[44]:
(1e-06, 1)
[figure: histogram of G and fitted PDF (LHS sample), log scale]

Subset Sampling

In [45]:
limitState.clearHistory()
In [46]:
cv = 0.05
NbSim = 1000

algoSS = ot.SubsetSampling(myEvent)

algoSS.setBlockSize(1)
#algoSS.setMaximumOuterSampling(NbSim)
#algoSS.setMaximumCoefficientOfVariation(cv)
# For statistics about the algorithm
initialNumberOfCall = limitState.getEvaluationCallsNumber()

algoSS.run()
In [47]:
resultSS = algoSS.getResult()
pf_SS = resultSS.getProbabilityEstimate()
cov_SS = resultSS.getCoefficientOfVariation()
print('Subset Sampling result=', resultSS)
print('Number of executed iterations =', resultSS.getOuterSampling())
print('Number of calls to the limit state =', limitState.getEvaluationCallsNumber() - initialNumberOfCall)
print('Pf = ', pf_SS)
print('CV =', cov_SS)
s = algoSS.getConvergenceStrategy()
algoSS.drawProbabilityConvergence()
Subset Sampling result= probabilityEstimate=5.235000e-02 varianceEstimate=9.273775e-07 standard deviation=9.63e-04 coefficient of variation=1.84e-02 confidenceLength(0.95)=3.77e-03 outerSampling=20000 blockSize=1
Number of executed iterations = 20000
Number of calls to the limit state = 20000
Pf =  0.05234999999999984
CV = 0.018395499776154622
Out[47]:
[figure: subset sampling probability convergence]
In [48]:
inpSS = np.asarray(limitState.getInputHistory())
outSS = np.asarray(limitState.getOutputHistory())
#display(inp)
#display(outSS)
print(outSS.shape)
n = len(marginals)
nit = resultSS.getOuterSampling()
ns = algoSS.getNumberOfSteps()
print(ns, nit)
fig, ax = plt.subplots(nrows=int((n-1)*n/2), ncols=1, figsize=(4, n*4), tight_layout=True)
k = 0
for i in range(n-1):
    for j in range(i+1, n):
        mask = outSS[:,0] < 0
        for l in range(ns):
            r = np.s_[int(l*nit/ns):int(nit/ns*(l+1))]
            ax[k].plot(inpSS[:, i][r], inpSS[:, j][r], ms=1, marker='.', ls='None')
        #ax[k].plot(inpSS[:, i][mask], inpSS[:, j][mask], ms=1, marker='.', ls='None')
        ax[k].set_xlabel(marginals[i].getDescription()[0])
        ax[k].set_ylabel(marginals[j].getDescription()[0])
        k+=1
(20000, 1)
2 20000
[figure: pairwise scatter plots of the samples coloured by subset sampling step]

Adaptive Directional Sampling

In [49]:
limitState.clearHistory()
In [50]:
cv = 0.05
NbSim = 1000

algoADS = ot.AdaptiveDirectionalSampling(myEvent)
#algoADS.setMaximumOuterSampling(NbSim)
#algoADS.setBlockSize(1)
#algoADS.setMaximumCoefficientOfVariation(cv)
# For statistics about the algorithm
initialNumberOfCall = limitState.getEvaluationCallsNumber()

# Perform the analysis:
%time algoADS.run()
CPU times: user 511 ms, sys: 3.94 ms, total: 514 ms
Wall time: 513 ms
In [51]:
# Results:
resultADS = algoADS.getResult()
pf_ADS = resultADS.getProbabilityEstimate()
cov_ADS = resultADS.getCoefficientOfVariation()
print('Number of executed iterations =', resultADS.getOuterSampling())
print('Number of calls to the limit state =', limitState.getEvaluationCallsNumber() - initialNumberOfCall)
print('Pf = ', pf_ADS)
print('CV =', cov_ADS)
#algoADS.drawProbabilityConvergence()
Number of executed iterations = 860
Number of calls to the limit state = 8252
Pf =  0.042400524761365264
CV = 0.01725850996168097
In [52]:
inpADS = np.asarray(limitState.getInputHistory())
outADS = np.asarray(limitState.getOutputHistory())
#display(inp)
#display(out)
plot_grid(marginals, inpADS, outADS)
[figure: q vs g samples (adaptive directional sampling)]
In [54]:
print('MC =', pf_MC)
print('LHS =', pf_LHS)
print('FORM =', pf_FORM)
print('SORM =', pf_SORM)
print('SS =', pf_SS)
print('DS =', pf_DS)
print('ADS =', pf_ADS)
print('fit MC =', pf_fit_MC)
print('fit LHS =', pf_fit_LHS)
MC = 0.05135829166103531
LHS = 0.05245683930942897
FORM = 0.053087217296660286
SORM = 0.053699796595093864
SS = 0.05234999999999984
DS = 0.054261977258449334
ADS = 0.042400524761365264
fit MC = 0.05303481728796552
fit LHS = 0.056589190660749814
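For easier comparison with the FORM and SORM reliability indices, each estimate can be converted to an equivalent reliability index beta = -Phi^{-1}(pf); a minimal sketch using the standard normal distribution from openturns:

std_normal = ot.Normal()  # standard normal distribution
for name, pf in [('MC', pf_MC), ('LHS', pf_LHS), ('FORM', pf_FORM), ('SORM', pf_SORM),
                 ('SS', pf_SS), ('DS', pf_DS), ('ADS', pf_ADS)]:
    # beta = -Phi^{-1}(pf)
    print(name, 'beta =', -std_normal.computeQuantile(pf)[0])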