Guide Objective Assisted Particle Swarm Optimization and its Application to History Matching
Alan P. Reynolds1*, Asaad Abdollahzadeh2, David W. Corne1, Mike Christie2, Brian Davies3 and Glyn Williams3
1 School of Mathematical and Computer Sciences (MACS), Heriot-Watt University, Edinburgh, Scotland
2 Institute of Petroleum Engineering (IPE), Heriot-Watt University, Edinburgh, Scotland
3 BP

Motivation

History matching is the improvement of parameterized oil reservoir models via minimization of the misfit between real-world observations and those obtained through simulation. We wish to automate the history matching of oil reservoirs, incorporating reservoir experts' domain knowledge into a metaheuristic. However, this is difficult to do in a generally applicable way. We note that, given a suitable model parameterization, certain model parameters affect certain misfit components to a greater degree than others. This suggests that the problem might be roughly decomposed, with subsets of misfit components used to create a guide objective for subsets of the model parameters. We show how PSO can be adapted to use both the guide objectives and the true objective in a single optimization run.

Optimization and separable problems

If we have a separable objective function, e.g. f(x, y) = x² + y², we should optimize x and y separately. Minimizing f directly results in good values for x being missed when coupled with poor choices for y, and vice versa. Note that it may not always be obvious when the objective can be separated, e.g. f(x, y) = x⁴ + 2x²y² + y⁴ = (x² + y²)². Separate optimization is also appropriate for roughly separable problems, e.g. minimizing f(x, y) = x⁴ + 2x²y² + y⁴ + εx³y, where ε is small: a near-optimal solution is quickly found that can be improved further by optimizing f directly if desired. Here we refer to g(x) = x² and h(y) = y² as the guide objectives for x and y.

Fig. 2: Standard and guided PSO updates, minimizing x² + y². On such a separable problem, the best values for x (minimizing x²) and y (minimizing y²) provide better guidance than the overall best solution.

PSO and guide objectives

Basic PSO. Let vij denote the velocity of particle i in component j, xij the particle position, pij the best position visited by particle i (particle best), and gj the best position visited by the swarm (global best). Then:

vij ← wvij + αr1(pij − xij) + βr2(gj − xij) ,
vij ← min(vij, Vmax,j) ,
vij ← max(vij, −Vmax,j) ,
xij ← xij + vij .

Using guide objectives. Let pij(j) denote the particle best according to the guide objective for decision variable j, and gj(j) the swarm best according to that objective. Then:

vij ← wvij + αr1(pij(j) − xij) + βr2(gj(j) − xij) .

This performs separate optimizations concurrently in a single run of PSO.

Using guide objectives and the true objective:

vij ← wvij + αr1(pij − xij) + βr2(gj − xij) + γr1(pij(j) − xij) + δr2(gj(j) − xij) .

Changing the values of α, β, γ and δ allows the influence of the guide objectives on the search to be controlled.

Test function

20-variable Rosenbrock function, to be minimized. The problem is roughly separable, with g(xi) acting as the guide objective for xi.

Fig. 3: The highly multimodal function, g(x), of Kvasnicka et al.

Fig. 4: Results for the test function, with 95% confidence intervals. Best results are obtained using both guide objectives and the true objective. (Results obtained using only the true objective are of considerably poorer quality and are omitted for clarity.)

History matching: PUNQ-S3

Porosity values for 9 regions in 5 layers give 45 decision variables. The guide objective for a variable is the sum of misfits for a subset of wells associated with the respective region.

Fig. 1: The PUNQ-S3 case study.

Table 1: The 9 regions (A–I) in the PUNQ-S3 reservoir and the wells most likely to be affected by changes in those regions. Guide wells: 5; 5, 12; 4, 5, 12; 1, 4, 15; 1, 4, 11, 15; 1, 11, 15; 1, 11.

Fig. 5: Results for the PUNQ-S3 history matching problem for 3000 and 1000 function evaluations, with 95% confidence intervals.

References

Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proc. IEEE Int. Conf. on Neural Networks, Vol. 4 (1995)
Kvasnicka, V., Pelikan, M., Pospichal, J.: Hill climbing with learning (an abstraction of genetic algorithm). Neural Network World 6(5) (1995)
Mohamed, L., Christie, M., Demyanov, V.: Reservoir model history matching with particle swarms. In: SPE Oil and Gas India Conf. and Exhibition, Mumbai, India (2010)
Rosenbrock, H.H.: An automatic method for finding the greatest or least value of a function. The Computer Journal 3(3) (1960)
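The separable-objective idea — optimizing x against g(x) = x² and y against h(y) = y² rather than attacking f(x, y) = x² + y² jointly — can be illustrated with a minimal sketch. The grid search and candidate range here are illustrative choices, not part of the poster:

```python
# Illustrative sketch: on a separable function f(x, y) = x^2 + y^2,
# optimizing each variable against its own guide objective finds the
# same optimum as a joint search, at additive rather than
# multiplicative cost in the number of candidate values.
candidates = [i / 10.0 for i in range(-50, 51)]

def f(x, y):
    return x * x + y * y

# Joint search over all (x, y) pairs: |candidates|^2 evaluations.
best_joint = min(((x, y) for x in candidates for y in candidates),
                 key=lambda p: f(*p))

# Separate searches using the guide objectives g(x) = x^2, h(y) = y^2:
# only 2 * |candidates| evaluations.
best_x = min(candidates, key=lambda x: x * x)
best_y = min(candidates, key=lambda y: y * y)

print(best_joint, (best_x, best_y))  # both find (0.0, 0.0)
```

The same saving is what the guide objectives aim to exploit inside PSO, where the decomposition is only approximate.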

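The basic PSO update with velocity clamping can be sketched as follows. The sphere objective and all parameter values (w, α, β, Vmax, swarm size, iteration count) are illustrative choices, not those used in the study:

```python
import random

# Minimal sketch of the basic PSO update, minimizing the sphere
# function. Parameters are illustrative, not the study's settings.
random.seed(0)
DIM, SWARM, ITERS = 5, 20, 200
W, ALPHA, BETA, VMAX = 0.7, 1.5, 1.5, 1.0

def sphere(x):
    return sum(xi * xi for xi in x)

X = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
V = [[0.0] * DIM for _ in range(SWARM)]
P = [list(x) for x in X]            # particle bests p_i
G = list(min(X, key=sphere))        # global best g

for _ in range(ITERS):
    for i in range(SWARM):
        for j in range(DIM):
            r1, r2 = random.random(), random.random()
            # v_ij <- w v_ij + alpha r1 (p_ij - x_ij) + beta r2 (g_j - x_ij)
            V[i][j] = (W * V[i][j] + ALPHA * r1 * (P[i][j] - X[i][j])
                       + BETA * r2 * (G[j] - X[i][j]))
            # Clamp: v_ij <- min(v_ij, Vmax); v_ij <- max(v_ij, -Vmax)
            V[i][j] = max(min(V[i][j], VMAX), -VMAX)
            X[i][j] += V[i][j]
        if sphere(X[i]) < sphere(P[i]):
            P[i] = list(X[i])
            if sphere(P[i]) < sphere(G):
                G = list(P[i])

print(sphere(G))
```

The clamping step keeps each velocity component within [−Vmax, Vmax], preventing the swarm from exploding early in the run.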

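The combined update — attracting each particle toward its bests under both the true objective and the per-variable guide objectives — can be sketched on a roughly separable toy problem. The toy objective, the choice of x_j² as the guide objective for variable j, and all coefficient values are illustrative assumptions, not the poster's test function:

```python
import random

# Sketch of the guided-and-true combined update on a roughly separable
# toy problem: f(x) = sum_j x_j^2 + eps * (coupling). The guide
# objective for variable j is simply x_j^2. All settings illustrative.
random.seed(1)
DIM, SWARM, ITERS = 4, 20, 200
W, A, B, C, D, VMAX, EPS = 0.7, 0.75, 0.75, 0.75, 0.75, 1.0, 0.01

def true_f(x):
    coupling = sum(x[j] * x[j + 1] for j in range(DIM - 1))
    return sum(xi * xi for xi in x) + EPS * coupling

X = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
V = [[0.0] * DIM for _ in range(SWARM)]
P = [list(x) for x in X]                 # particle bests, true objective
G = list(min(X, key=true_f))             # swarm best, true objective
PG = [list(x) for x in X]                # p_i(j): particle bests per guide objective
GG = [min((x[j] for x in X), key=abs)    # g(j): swarm bests per guide objective
      for j in range(DIM)]

for _ in range(ITERS):
    for i in range(SWARM):
        for j in range(DIM):
            r1, r2 = random.random(), random.random()
            # v_ij <- w v_ij + a r1 (p_ij - x_ij)    + b r2 (g_j - x_ij)
            #                + c r1 (p_ij(j) - x_ij) + d r2 (g_j(j) - x_ij)
            V[i][j] = (W * V[i][j]
                       + A * r1 * (P[i][j] - X[i][j])
                       + B * r2 * (G[j] - X[i][j])
                       + C * r1 * (PG[i][j] - X[i][j])
                       + D * r2 * (GG[j] - X[i][j]))
            V[i][j] = max(min(V[i][j], VMAX), -VMAX)
            X[i][j] += V[i][j]
        if true_f(X[i]) < true_f(P[i]):
            P[i] = list(X[i])
            if true_f(P[i]) < true_f(G):
                G = list(P[i])
        for j in range(DIM):             # update guide-objective bests
            if X[i][j] ** 2 < PG[i][j] ** 2:
                PG[i][j] = X[i][j]
                if PG[i][j] ** 2 < GG[j] ** 2:
                    GG[j] = X[i][j]

print(true_f(G))
```

Setting C = D = 0 recovers the basic update, while A = B = 0 gives the purely guided update, mirroring how α, β, γ and δ control the guide objectives' influence.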