
case how to characterize the uncertainty in the future predicted response.

Assuming that a nonlinear model can be linearized around the optimum parameter vector, and that the measurement errors on the past observed data and on the future data to be observed are independent, identically distributed random variables with mean zero and

variance $\sigma_d^2$ (i.e., $C_D$ in Eq. 3 is a diagonal matrix with all diagonal entries equal to $\sigma_d^2$, or $\sigma_{d,i}^2 = \sigma_d^2$ in Eq. 4), we can show that the variance of the predicted value of $y_{p,i}$ at a given time $t_i$ such that $t_i > t_{N_d}$ is given by (Bard, 1974; Dogru et al., 1977; Sen and Srivastava, 1990)

$$\sigma_{p,i}^2 = \sigma_d^2 + s^2\, g_i^T \left( G_{m^*}^T C_D^{-1} G_{m^*} \right)^{-1} g_i . \qquad (8)$$

Here, $g_i$ is the $M$-dimensional sensitivity vector of the predicted response at any time $t_i$:

$$g_i^T = \left[ \frac{\partial y_{p,i}}{\partial m_1}, \frac{\partial y_{p,i}}{\partial m_2}, \ldots, \frac{\partial y_{p,i}}{\partial m_M} \right], \qquad (9)$$

where each sensitivity is evaluated at the optimized parameter vector $m^*$ obtained from the history-matching period. $\left( G_{m^*}^T C_D^{-1} G_{m^*} \right)^{-1}$ is the inverse of the $M \times M$ approximate Hessian matrix evaluated at $m^*$ and also represents the covariance matrix of the estimated $m^*$. The diagonal entries of this matrix represent the variances of the model parameters, while its off-diagonal entries represent the covariances (or correlations) between pairs of model parameters. This matrix does not vary with the prediction time $t_i$ because it is determined from the history-matching period by minimizing Eq. 3. If $\sigma_d^2$ is not known, it can be estimated from the history-matching period:

$$\sigma_d^2 = \frac{\displaystyle\sum_{i=1}^{N_d} \left[ y_{obs,i} - f_i(m^*) \right]^2}{N_d - M}. \qquad (10)$$
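To make Eqs. 8-10 concrete, here is a minimal numerical sketch assuming independent, identically distributed measurement errors ($C_D = \sigma_d^2 I$) and taking the Jacobian $G_{m^*}$ and the sensitivity vector $g_i$ as given (in practice they come from finite-difference or adjoint calculations); the function and variable names are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def prediction_variance(residuals, G, g_i, sigma_d2=None):
    """Eqs. 8-10: variance of the predicted response y_{p,i} at a time t_i.

    residuals : (Nd,) history-match residuals y_obs - f(m*)
    G         : (Nd, M) sensitivity (Jacobian) matrix evaluated at m*
    g_i       : (M,) sensitivity vector of the predicted response at t_i
    sigma_d2  : measurement-error variance; if None, the Eq. 10 estimate
                (the quality of fit s^2) is used in its place
    """
    Nd, M = G.shape

    # Eq. 10: quality of fit from the history-matching period
    s2 = float(residuals @ residuals) / (Nd - M)
    if sigma_d2 is None:
        sigma_d2 = s2

    # With C_D = sigma_d^2 I, the approximate covariance matrix of the
    # estimated m* is (G^T C_D^{-1} G)^{-1} = sigma_d^2 (G^T G)^{-1}
    cov_m = sigma_d2 * np.linalg.inv(G.T @ G)

    # Eq. 8: measurement noise plus parameter uncertainty propagated through
    # the sensitivity of the predicted response to the estimated parameters
    return sigma_d2 + s2 * float(g_i @ cov_m @ g_i)
```

The diagonal of `cov_m` gives the parameter variances and its off-diagonal entries the covariances discussed above.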

As is clear from Eq. 8, the uncertainty (or $\sigma_{p,i}^2$) in the predicted response is controlled by the variance and covariance of the estimated parameters through the matrix $\left( G_{m^*}^T C_D^{-1} G_{m^*} \right)^{-1}$ and the quality of fit $s^2$ computed from the history-matching period, as well as by the sensitivity of the predicted response to the estimated model parameters through the vector $g_i$ computed in the prediction period. In general, the behavior of $\sigma_{p,i}^2$ can be quite complicated, depending on the magnitudes of $\sigma_d^2$ (or the quality of match, $s^2$) and $g_i^T \left( G_{m^*}^T C_D^{-1} G_{m^*} \right)^{-1} g_i$. However, theoretically, we would expect that as the complexity of the model increases, the magnitude of $s^2$ (or the variance $\sigma_d^2$ in the case where it is unknown and estimated from Eq. 10) decreases, while that of $g_i^T \left( G_{m^*}^T C_D^{-1} G_{m^*} \right)^{-1} g_i$ increases. For lumped models, we usually expect the behavior of the predicted response to be controlled more by the magnitude of $g_i^T \left( G_{m^*}^T C_D^{-1} G_{m^*} \right)^{-1} g_i$ than by that of $s^2$. In fact, as shown later, our results show that when an over-parameterized model is used for history matching instead of the correct lumped model, the uncertainty in predicted performance is overestimated, while using an under-parameterized model gives an underestimated variance in the predicted response. Approximate confidence limits for the predicted responses, based on Eq. 8, can also be constructed (Dogru et al., 1977; Sen and Srivastava, 1990). These limits can characterize the uncertainty in future predicted responses.
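For example, a minimal sketch of such approximate limits, assuming a two-sided Student-t quantile with $N_d - M$ degrees of freedom applied to $\sigma_{p,i}$ from Eq. 8 (the quantile choice here is an assumption for illustration):

```python
from scipy.stats import t

def confidence_limits(y_pred, sigma_p, Nd, M, alpha=0.05):
    """Approximate two-sided (1 - alpha) confidence limits for a predicted
    response, using sigma_p = sqrt(sigma_{p,i}^2) from Eq. 8 and a Student-t
    quantile with Nd - M degrees of freedom (an illustrative assumption)."""
    half_width = t.ppf(1.0 - alpha / 2.0, Nd - M) * sigma_p
    return y_pred - half_width, y_pred + half_width
```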

As mentioned above, Eq. 8 is based on linearization of the predicted response around the optimal parameter vector. Hence, Eq. 8 may not provide a reliable estimate of the uncertainty in the predicted response if the linearization is not valid.

For nonlinear problems, it has been shown that, although it is approximate, the randomized maximum likelihood (RML) method does a good job of assessing the uncertainty in the predicted response (Kitanidis et al., 1995; Oliver et al., 1996; Liu and Oliver, 2003; Gao et al., 2005). These authors considered the RML within the Bayesian estimation framework for under-determined problems (i.e., where the number of unknown model parameters far exceeds the number of observed data, $M > N_d$), with a prior model for the parameters. Here, we apply the RML method to lumped-parameter modeling without a prior model, which usually constitutes an over-determined problem ($N_d \gg M$). In this case, the RML provides sampling of the likelihood probability density function for the model conditional to the observed data, given by (Bard, 1974)

$$p\left( m \mid y_{obs} \right) = c \exp\left\{ -\frac{1}{2} O(m) \right\}, \qquad (11)$$

where $O(m)$ is given by Eq. 1 and $c$ is a normalizing constant.
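As a rough illustration of what sampling from Eq. 11 can look like in this over-determined, no-prior setting, the sketch below generates realizations by perturbing the observed data with noise consistent with $C_D = \sigma_d^2 I$ and re-minimizing the least-squares misfit for each perturbed data set; the model function `f`, the optimizer, and all names are illustrative assumptions and do not reproduce the specific procedure described next.

```python
import numpy as np
from scipy.optimize import least_squares

def rml_realizations(f, m0, y_obs, sigma_d, n_real=100, seed=0):
    """Approximate samples of p(m | y_obs) in Eq. 11 by randomized maximum
    likelihood: perturb the data with noise drawn from C_D = sigma_d^2 I,
    then minimize the misfit for each perturbed data set. Here f(m) returns
    the model response at the observation times; all names are illustrative."""
    rng = np.random.default_rng(seed)
    realizations = []
    for _ in range(n_real):
        y_pert = y_obs + sigma_d * rng.standard_normal(y_obs.shape)
        fit = least_squares(lambda m: (f(m) - y_pert) / sigma_d, m0)
        realizations.append(fit.x)
    return np.array(realizations)
```

The spread of predicted responses computed from such realizations then characterizes the uncertainty in the forecast.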

In the RML sampling procedure of Eq. 11, a realization of the model parameters conditional to the observed data can be generated as follows: (i)
