SOOCHOW JOURNAL OF MATHEMATICS
Volume 28, No. 2, pp. 223-234, April 2002

CHARACTERIZATION OF RANKED SET SAMPLING BAYES ESTIMATORS WITH APPLICATION TO THE NORMAL DISTRIBUTION

BY MOHAMMAD FRAIWAN AL-SALEH AND JAWAHER YOUSEF ABUHAWWAS

Abstract. McIntyre's (1952) concept of ranked set sampling (RSS) was found useful for increasing the efficiency of estimating the population mean when the distribution is unknown. The procedure is applicable when the observations are easier to rank than to measure. In this paper, Bayesian estimation of the parameter of a distribution using RSS is considered. Using the notion of multiple imputation, a characterization that relates the Bayes estimators using RSS to those using a simple random sample (SRS) is obtained. This characterization turns out to be useful for studying some of the properties of these estimators, which usually have complicated forms. It is also used to approximate these estimators. The characterization is applied to the case of the normal distribution with a conjugate prior.

Key words. ranked set sampling, Bayes estimator, Bayes risk, efficiency, multiple imputation, simple random sampling.

Received February 16, 2001; revised September 14, 2001; revised October 23, 2001.

1. Introduction

Ranked set sampling (RSS) was first introduced by McIntyre (1952). The aim was to improve the efficiency of the sample mean as an estimator of the population mean µ. RSS is largely used in situations where the sample units can be ranked with respect to the variable of interest, or a concomitant variable, more easily than they can be quantified. McIntyre's procedure measures the first visually ordered unit from the first set, the second visually ordered unit from the second set, and so on up to the maximum unit from the last set, where each set consists of m units. Let X_(ii) denote the ith quantified unit from the ith set, i = 1, ..., m; then {X_(11), ..., X_(mm)} is called a ranked set sample.
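To make McIntyre's scheme concrete, the following sketch (an illustration only, not part of the paper; the helper name ranked_set_sample is ours) draws one cycle of a balanced ranked set sample of set size m, assuming perfect ranking and a user-supplied sampler for the parent distribution.

```python
import numpy as np

def ranked_set_sample(rng, sampler, m):
    """One cycle of a balanced RSS of set size m under perfect ranking.

    sampler(rng, size) draws iid observations from the parent distribution.
    From the i-th set of m units, only the i-th smallest unit is quantified.
    """
    rss = np.empty(m)
    for i in range(m):
        batch = np.sort(sampler(rng, m))  # rank the i-th set of m units
        rss[i] = batch[i]                 # measure its i-th ordered unit
    return rss

# Example: a ranked set sample of size m = 3 from N(0, 1).
rng = np.random.default_rng(0)
print(ranked_set_sample(rng, lambda r, n: r.normal(0.0, 1.0, n), m=3))
```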


The whole procedure can be repeated, if necessary, r times to obtain a RSS of size rm units. McIntyre proposed the mean of these m units, denoted by µ̂_RSS, as a competitor of the mean of a simple random sample (SRS) of size m, denoted by µ̂_SRS, for estimating µ.

Takahasi and Wakimoto (1968) provided the statistical theory for the RSS procedure. Assuming that sampling is from an absolutely continuous distribution, they showed that µ̂_RSS is unbiased for µ and has higher efficiency than µ̂_SRS. They established the following bounds for the relative precision (efficiency):

    1 \le RP = \frac{\mathrm{Var}(\hat{\mu}_{SRS})}{\mathrm{Var}(\hat{\mu}_{RSS})} \le \frac{m+1}{2}.

The upper bound is achieved when the underlying distribution is uniform.

RSS has many applications in different areas such as biology, medicine and agriculture (see Al-Saleh et al. (2000), Takahasi and Wakimoto (1968)). Many authors have evaluated the performance of RSS in real applications (see Halls and Dell (1966), Evans (1967), Martin et al. (1980), Al-Saleh et al. (2000), Al-Saleh and Al-Shrafat (2000), Zheng and Al-Saleh (2001) and Al-Saleh and Zheng (2001)). Stokes (1976) examined the potential of RSS for a variety of inferential problems, including estimation of location and scale parameters, interval estimation, variance estimation and estimation of the correlation coefficient. In 1995, she considered the location-scale family. An annotated bibliography on RSS was provided by Kaur et al. (1995). Double ranked set sampling was considered by Al-Saleh and Al-Kadiri (2000). This was generalized to multistage ranked set sampling by Al-Saleh and Al-Omari (2001). See also Al-Saleh and Samawi (2000).

Al-Saleh and Muttlak (1998) made a numerical comparison between the Bayes risks of the Bayes estimators obtained from RSS and from SRS for the case of the exponential distribution; no theoretical results were provided. Lavine (1999) explored different aspects of Bayesian ranked set sampling, examined the procedure from a Bayesian point of view and investigated some optimality questions. Bayesian methods combined with RSS were also studied by Al-Saleh et al. (2000), who found that for an exponential family with conjugate prior, the RSS Bayes estimator has smaller Bayes risk than the SRS Bayes estimator. In a general setup, they showed that the SRS Bayes estimator is a weighted average of the Bayes estimators of the m^m possible RSS plans.


Furthermore, the Bayes risk of the RSS Bayes estimator is smaller than the Bayes risk of the SRS Bayes estimator for at least one plan. The authors noted that Bayes estimators using RSS have very complicated forms even for small sample sizes; their computations in the simple exponential case for samples of size 2 and 3 demonstrated this fact. Bayesian estimation of specified parameters under both balanced and generalized RSS was accomplished using the Gibbs sampler by Kim and Arnold (1999).

The reason that RSS Bayes estimators have not been popular is that the likelihood function becomes very complicated, which in turn makes the posterior very complicated, no matter how simple the prior may be. These complications are due to the fact that there is no minimal sufficient statistic of lower dimension than the dimension of the data itself; such a lower-dimensional sufficient statistic usually exists in the case of SRS. In this paper we use the concept of multiple imputation proposed by Rubin (1978, 1987) to obtain a formula that relates the posterior distribution of a parameter θ given the RSS data to that given the full data x. Using this formula, it is possible to obtain a characterization of the RSS Bayes estimator in terms of the SRS Bayes estimator based on the full data. This formula facilitates the study of some of the theoretical properties of these estimators and provides clues for approximating their complicated exact forms. The characterization is applied to the case of the normal distribution in Section 3.

2. General Setup and Some Useful Results

Assume X_1, X_2, ..., X_m is a SRS with common density f(x | θ) and absolutely continuous distribution function F(x | θ). Let Y_1, Y_2, ..., Y_m be a RSS from this distribution, obtained from a full data set of m^2 observations:

    \begin{array}{ccc}
    X_{11}, \ldots, X_{1m} & \longrightarrow & X_{(11)} = Y_1 \\
    \vdots & & \vdots \\
    X_{m1}, \ldots, X_{mm} & \longrightarrow & X_{(mm)} = Y_m
    \end{array}

The m^2 observations on the left form the full data; the quantified units Y_1, ..., Y_m on the right form the RSS. Let Ȳ = (1/m) Σ_{i=1}^m Y_i and X̄ = (1/m) Σ_{i=1}^m X_i.
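The correspondence between the full data and the RSS can be mimicked directly. The sketch below (an illustration under the paper's setup; function name ours; N(θ, 1) parent assumed) generates the m x m array of the full data and keeps, from the i-th row, the i-th order statistic.

```python
import numpy as np

def full_data_and_rss(rng, m, theta=0.0):
    """Generate the m x m full data X_ij from N(theta, 1) and extract the RSS Y_1, ..., Y_m."""
    full = rng.normal(theta, 1.0, size=(m, m))                 # row i is the i-th set
    rss = np.sort(full, axis=1)[np.arange(m), np.arange(m)]    # Y_i = X_(ii)
    return full, rss

rng = np.random.default_rng(0)
full, y = full_data_and_rss(rng, m=3)
print(y, y.mean())   # the RSS and the McIntyre estimator of the mean
```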


It is assumed throughout this paper that the judgemental identification of the ranks is perfect and has negligible cost; this assumption is essential for most published results on RSS. Under this assumption, Y_i has the same distribution as X_(i), where X_(i) is the ith order statistic of a random sample of size m. Also, the squared error loss function is used to compare the estimators.

Let π(θ) be a prior for θ, and let π(θ | y) and π(θ | x) denote the posterior density of θ given y and the posterior density of θ given x, respectively. Throughout the paper we use the following notation:

θ̂_SRS(x): the Bayes estimator of θ based on a SRS X_1, X_2, ..., X_m, using any prior density of θ.
θ̂_RSS(y): the Bayes estimator of θ based on a RSS Y_1, Y_2, ..., Y_m, using any prior density of θ.
θ̂^J_SRS(x): the generalized Bayes estimator of θ based on a SRS X_1, X_2, ..., X_m, using the Jeffreys prior of θ.
θ̂^J_RSS(y): the generalized Bayes estimator of θ based on a RSS Y_1, Y_2, ..., Y_m, using the Jeffreys prior of θ.
θ̂*_SRS(x): the Bayes estimator of θ based on the full data X_11, ..., X_1m, ..., X_mm, using any prior density of θ.
θ̂*J_SRS(x): the generalized Bayes estimator of θ based on the full data X_11, ..., X_1m, ..., X_mm, using the Jeffreys prior of θ.

For more about Bayesian terminology, see Berger (1980).

Next, we state the following identities, which are similar to those provided by Rubin (1978, 1987) for grouped data. The proofs of these identities are straightforward and are therefore omitted.

Identity 1.

    \pi(\theta \mid y) = \int_x \pi(\theta \mid x)\, m(x \mid y)\, dx,

where m(x | y) = m(x, y)/m(y),

    m(x, y) = \int \pi(\theta)\, f(x, y \mid \theta)\, d\theta \quad \text{and} \quad m(y) = \int_x m(x, y)\, dx.


This identity says that the posterior density of θ given the RSS data of size m is the expected value of the posterior density of θ given the SRS data of size m^2 (the full data), where the expectation is taken with respect to the predictive distribution m(x | y).

Identity 2.

    \hat{\theta}_{RSS}(y) = \int_x \hat{\theta}^{*}_{SRS}(x)\, m(x \mid y)\, dx.

This identity says that the RSS Bayes estimator is the expected value, with respect to the predictive distribution, of the SRS Bayes estimator based on the full data.

Identity 3.

    \mathrm{Var}(\theta \mid y) = E[\mathrm{Var}(\theta \mid X) \mid y] + \mathrm{Var}[\hat{\theta}^{*}_{SRS}(X) \mid y].

Let r(θ̂_RSS(Y), π) = E_θ E_Y[(θ̂_RSS − θ)^2] be the Bayes risk of θ̂_RSS and r(θ̂*_SRS(X), π) = E_θ E_X[(θ̂*_SRS − θ)^2] be the Bayes risk of θ̂*_SRS.

Identity 4.

    r(\hat{\theta}_{RSS}(Y), \pi) = r(\hat{\theta}^{*}_{SRS}(X), \pi) + E[\mathrm{Var}(\hat{\theta}^{*}_{SRS}(X) \mid Y)].

Thus, we conclude that the Bayes estimator based on a RSS of size m cannot be better than the Bayes estimator based on a SRS of size m^2 (the full data).

3. Bayes Estimator of the Normal Mean with Conjugate Prior

Before we seek a characterization of the RSS Bayes estimator θ̂_RSS(y) of the normal mean θ when θ has the conjugate prior N(0, 1), we give some properties of the generalized RSS Bayes estimator θ̂^J_RSS(y). Let Y_1, ..., Y_m be a RSS from N(θ, 1); then the p.d.f. of Y_i is

    g_{Y_i}(y_i \mid \theta) = \frac{m!}{(i-1)!\,(m-i)!}\,[\Phi(y_i - \theta)]^{i-1}\,[1 - \Phi(y_i - \theta)]^{m-i}\,\phi(y_i - \theta),

where Φ and φ are the cumulative distribution function and the density function of a standard normal variable, respectively. Their joint density is

    g_Y(y \mid \theta) = \prod_{i=1}^{m} g_{Y_i}(y_i \mid \theta).
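As a small computational companion to these formulas, the sketch below (an illustration only, assuming SciPy's norm and comb; function names ours) evaluates the density of Y_i and the joint RSS likelihood for the N(θ, 1) parent; it is this product form of the likelihood that makes the exact RSS posterior unwieldy.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import comb

def rss_marginal_pdf(y_i, i, m, theta):
    """Density of Y_i, the i-th judgment order statistic of set size m, N(theta, 1) parent."""
    u = norm.cdf(y_i - theta)
    coef = comb(m, i, exact=True) * i        # equals m! / ((i-1)! (m-i)!)
    return coef * u**(i - 1) * (1.0 - u)**(m - i) * norm.pdf(y_i - theta)

def rss_likelihood(y, theta):
    """Joint density of a balanced RSS y = (y_1, ..., y_m) from N(theta, 1)."""
    m = len(y)
    return np.prod([rss_marginal_pdf(y[i - 1], i, m, theta) for i in range(1, m + 1)])

# Example: likelihood of the RSS (-0.4, 0.1, 0.9) at theta = 0.
print(rss_likelihood([-0.4, 0.1, 0.9], 0.0))
```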


Assuming that π(θ) = 1 (the non-informative Jeffreys prior), the mean of the posterior distribution of θ given y is called the generalized Bayes estimator, denoted by θ̂^J_RSS; θ̂^J_SRS is defined similarly. The following properties are easy to verify.

Property 1. For a scalar a, let a = (a, ..., a) and y = (y_1, ..., y_m); then θ̂^J_RSS(y + a) = θ̂^J_RSS(y) + a.

Property 2. The risk function of θ̂^J_RSS is free of θ.

Property 3. θ̂^J_RSS(−y_1, ..., −y_m) = −θ̂^J_RSS(y_m, ..., y_1).

Property 4. θ̂^J_RSS is an unbiased estimator of θ, i.e. E(θ̂^J_RSS(Y) | θ) = θ.

Now, if X_1, X_2, ..., X_m is a SRS from N(θ, 1), then θ̂_SRS is mX̄/(1 + m), with Bayes risk 1/(1 + m). Also, θ̂^J_SRS is X̄. Therefore, by Identity 2,

    \hat{\theta}_{RSS}(y) = E(\hat{\theta}^{*}_{SRS}(X) \mid y) = \frac{m^2}{m^2 + 1}\, E(\bar{X}^{*} \mid y).

Here X̄* is the average of a SRS of size m^2 (the full data). Hence,

    \hat{\theta}_{RSS}(y) = \frac{m^2}{m^2 + 1}\, E(\bar{X}^{*} \mid y) = \frac{m^2}{m^2 + 1}\, E(\hat{\theta}^{*J}_{SRS}(X) \mid y) = \frac{m^2}{m^2 + 1}\, \hat{\theta}^{J}_{RSS}(y) \quad \text{(by Identity 2)}.

Now, based on the last relation, we state and prove the following lemma.

Lemma 1.

(1) r(\hat{\theta}_{RSS}, \pi) = \frac{1}{(m^2 + 1)^2}\,\big[m^4\, \mathrm{Var}(\hat{\theta}^{J}_{RSS}(Y) \mid \theta) + 1\big].

(2) r(\hat{\theta}_{RSS}, \pi) \ge \frac{0.4805\,m + 0.5195 + m^3}{(m^2 + 1)^2\,(0.4805\,m + 0.5195)}.

(3) 1 \le \text{efficiency} \le \frac{(m^2 + 1)^2\,(0.4805\,m + 0.5195)}{(m + 1)\,(0.4805\,m + 0.5195 + m^3)},

where the efficiency is r(θ̂_SRS, π)/r(θ̂_RSS, π).
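For a single cycle (r = 1), the bounds in (2) and (3) depend only on the set size m, so they can be tabulated directly. The following sketch (an illustration, not part of the paper's computations) evaluates both bounds for m = 2, ..., 5 and reproduces the r = 1 bound entries of Table 1 below.

```python
def bayes_risk_lower_bound(m):
    """Lower bound in Lemma 1(2) for the Bayes risk of the RSS Bayes estimator (one cycle)."""
    c = 0.4805 * m + 0.5195                 # I*(theta) = m * c, from Stokes (1995)
    return (m**3 + c) / ((m**2 + 1)**2 * c)

def efficiency_upper_bound(m):
    """Upper bound in Lemma 1(3) for r(theta_SRS, pi) / r(theta_RSS, pi) (one cycle)."""
    c = 0.4805 * m + 0.5195
    return (m**2 + 1)**2 * c / ((m + 1) * (m**3 + c))

for m in (2, 3, 4, 5):
    print(m, round(bayes_risk_lower_bound(m), 4), round(efficiency_upper_bound(m), 4))
# m = 2 gives 0.2561 and 1.3014, matching the r = 1 bound columns of Table 1.
```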


Proof.

(1) θ̂_RSS(y) = (m^2/(m^2 + 1)) θ̂^J_RSS(y) implies

    E(\hat{\theta}_{RSS}(Y) \mid \theta) = \frac{m^2}{m^2 + 1}\, E(\hat{\theta}^{J}_{RSS}(Y) \mid \theta) = \frac{m^2}{m^2 + 1}\,\theta \quad \text{(by Property 4)}.

Also, Var(θ̂_RSS(Y) | θ) = (m^2/(m^2 + 1))^2 Var(θ̂^J_RSS(Y) | θ), which is free of θ by Property 2. Hence,

    R(\hat{\theta}_{RSS}(Y), \theta) = \mathrm{Var}(\hat{\theta}_{RSS}(Y) \mid \theta) + (\text{bias})^2 = \Big(\frac{m^2}{m^2 + 1}\Big)^2 \mathrm{Var}(\hat{\theta}^{J}_{RSS}(Y) \mid \theta) + \frac{\theta^2}{(m^2 + 1)^2}.

Therefore, the Bayes risk of θ̂_RSS is

    r(\hat{\theta}_{RSS}(Y), \pi) = E_\theta\big(R(\hat{\theta}_{RSS}(Y), \theta)\big) = \frac{1}{(m^2 + 1)^2}\,\big[m^4\, \mathrm{Var}(\hat{\theta}^{J}_{RSS}(Y) \mid \theta) + 1\big].

(2) To find a lower bound for the Bayes risk of θ̂_RSS, note that by the Cramér-Rao inequality Var(θ̂^J_RSS(y) | θ) ≥ 1/I*(θ), where I*(θ) is the information number about θ in the RSS. But I*(θ) = m + m(m − 1)(0.4805) (see Stokes (1995)), hence

    r(\hat{\theta}_{RSS}, \pi) \ge \frac{1}{(m^2 + 1)^2}\Big[\frac{m^4}{m + m(m - 1)(0.4805)} + 1\Big] = \frac{0.4805\,m + 0.5195 + m^3}{(m^2 + 1)^2\,(0.4805\,m + 0.5195)}.

(3) By (2),

    \text{Efficiency} = \frac{r(\hat{\theta}_{SRS}, \pi)}{r(\hat{\theta}_{RSS}, \pi)} = \frac{1}{(m + 1)\, r(\hat{\theta}_{RSS}, \pi)} \le \frac{(m^2 + 1)^2\,(0.4805\,m + 0.5195)}{(m + 1)\,(0.4805\,m + 0.5195 + m^3)}.

To find a lower bound for the efficiency, note that for any estimator θ̂ of θ we have, by the definition of the Bayes estimator, r(θ̂_RSS, π) ≤ r(θ̂, π).


Thus,

    r(\hat{\theta}_{RSS}, \pi) \le r\Big(\frac{m}{m + 1}\bar{Y}, \pi\Big) = E_\pi\Big[\mathrm{Var}\Big(\frac{m}{m + 1}\bar{Y} \,\Big|\, \theta\Big) + (\text{bias})^2\Big]
        = \Big(\frac{m}{m + 1}\Big)^2 \mathrm{Var}(\bar{Y} \mid \theta) + \frac{1}{(m + 1)^2} \le \frac{1}{(m + 1)^2}\,\big[m^2\, \mathrm{Var}(\bar{X} \mid \theta) + 1\big] = \frac{1}{m + 1},

since Var(Ȳ | θ) ≤ Var(X̄ | θ) = 1/m by Takahasi and Wakimoto (1968), where Ȳ and X̄ are based on the same number of quantifications m. Therefore r(θ̂_RSS, π) ≤ 1/(m + 1), and hence efficiency = r(θ̂_SRS, π)/r(θ̂_RSS, π) = 1/[(m + 1) r(θ̂_RSS, π)] ≥ 1, so (3) holds.

Table 1 shows a numerical comparison between the Bayes risk of θ̂_RSS and its efficiency (with respect to θ̂_SRS) obtained by simulation, and the corresponding values obtained using the bounds in (2) and (3) of Lemma 1.

Table 1. Bayes risk and efficiency obtained by simulation and obtained using the bounds in (2) and (3); m = set size, r = number of cycles.

r   m   r(θ̂_RSS, π)        Lower bound for        Upper bound for      Efficiency
        from simulation    r(θ̂_RSS, π) by (2)     efficiency by (3)    from simulation
1   2   0.2502             0.2561                 1.3014               1.3044
1   3   0.1443             0.1477                 1.6928               1.6873
1   4   0.0971             0.0942                 2.1240               2.0782
1   5   0.0644             0.0648                 2.5735               2.5715
2   2   0.1437             0.1458                 1.3720               1.3718
2   3   0.0794             0.0790                 1.8072               1.8122
2   4   0.0490             0.0491                 2.2648               2.2660
2   5   0.0337             0.0333                 2.7318               2.6953
3   2   0.1009             0.1018                 1.4028               1.4028
3   3   0.0538             0.0540                 1.8532               1.8521
3   4   0.0339             0.0332                 2.3191               2.3107
3   5   0.2240             0.0224                 2.7912               2.7554


We can see that the Bayes risk lower bound values from (2) are very close to the approximate values found by simulation, so the lower bound could be used as the value of the Bayes risk without getting involved in simulation work. Also, most of the efficiency values (from simulation) are within the upper bound of the efficiency found from (3); the few cases in which the efficiency values exceed the upper bound are due to sampling variation.

4. Approximations of θ̂_RSS

As a useful application of the relation θ̂_RSS(y) = (m^2/(m^2 + 1)) E(X̄* | y), one can think of different ways of approximating the conditional density of the complete sample given the RSS y; each in turn gives a different approximation to θ̂_RSS. As examples, we discuss two ways of approximating θ̂_RSS.

(A) To find θ̂_1, as an approximation to θ̂_RSS, we reproduce a full data set of size m^2 using the following two steps:

(1) Order the RSS y_1, ..., y_m to get y_(1), ..., y_(m).
(2) Select m values independently from U(y_(i), y_(i+1)), i = 1, ..., m − 1.

This yields m(m − 1) data points. Using these data points along with the original m RSS data points as the full data, we have

    \hat{\theta}_{RSS}(y) = \frac{m^2}{m^2 + 1}\, E(\bar{X}^{*} \mid y) \cong \hat{\theta}_1(y) = \frac{1}{m^2 + 1}\Big[\frac{m + 2}{2}\big(y_{(1)} + y_{(m)}\big) + (1 + m)\sum_{i=2}^{m-1} y_{(i)}\Big].

(B) The other approximation, θ̂_2, is found by again ordering the RSS to get y_(1), ..., y_(m). Since X_i ~ N(0, 2) (unconditionally), we may reproduce a full data set as follows: select m values independently from N(0, 2), restricted to lie between y_(i) and y_(i+1), i = 1, ..., m − 1, i.e. from the density

    \phi\big(x/\sqrt{2}\big)\Big/\Big\{\sqrt{2}\,\big[\Phi\big(y_{(i+1)}/\sqrt{2}\big) - \Phi\big(y_{(i)}/\sqrt{2}\big)\big]\Big\},

which is a truncated N(0, 2). This yields m(m − 1) data points that can be used along with the m RSS data points as the full data. Now, the expected value of each selected observation between y_(i) and y_(i+1) can be shown to be

    \frac{\sqrt{2}\,\big[\phi\big(y_{(i)}/\sqrt{2}\big) - \phi\big(y_{(i+1)}/\sqrt{2}\big)\big]}{\Phi\big(y_{(i+1)}/\sqrt{2}\big) - \Phi\big(y_{(i)}/\sqrt{2}\big)}, \qquad i = 1, \ldots, m - 1.


Thus,

    \hat{\theta}_{RSS}(y) = \frac{m^2}{m^2 + 1}\, E(\bar{X}^{*} \mid y) \cong \hat{\theta}_2 = \frac{m}{m^2 + 1}\Bigg\{\bar{y} + \sqrt{2}\sum_{i=1}^{m-1} \frac{\phi\big(y_{(i)}/\sqrt{2}\big) - \phi\big(y_{(i+1)}/\sqrt{2}\big)}{\Phi\big(y_{(i+1)}/\sqrt{2}\big) - \Phi\big(y_{(i)}/\sqrt{2}\big)}\Bigg\}.

Table 2 shows exact values of θ̂_RSS, obtained using intensive simulation, along with the values of the approximations θ̂_1 and θ̂_2 for different values of θ when m = 2, 3. It can be seen that the two approximations are very close to θ̂_RSS.

Table 2. Simulated values of θ̂_RSS and of the approximations θ̂_1, θ̂_2.

m   θ             RSS sample y                      θ̂_RSS          θ̂_1            θ̂_2
2   -2.0553       -1.7684, -1.8641                  -1.3709         -1.4530        -1.4482
2   -0.4665       -0.1953, -0.3777                  -0.2484         -0.2292        -0.2285
2   -0.1543        0.4420,  0.5198                   0.1977          0.1976         0.1968
2    0.1588        1.3029,  0.4902                   0.7280          0.7172         0.7061
2    0.2298        1.1069, -0.5795                   0.2168          0.2110         0.1988
3   -0.2744        1.3300,  0.7726, -1.8781          0.1690          0.1720         0.2101
3    0.3220        1.0184,  0.3527, -0.0056          0.3897          0.3943         0.3893
3    4.7351×10⁻²   0.7509,  0.0710, -1.0556         -5.8211×10⁻²   -4.7759×10⁻²   -4.2693×10⁻²
3   -0.2162        0.8056,  0.3348, -2.2862         -0.2406         -0.2362        -0.1652
3    0.4424        1.4390,  0.3064,  0.2673          0.5599          0.5491         0.5339
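For completeness, a minimal sketch of the two approximations is given below (an illustration only, assuming SciPy's norm; the function names are ours); the second m = 2 row of Table 2 is used as a check.

```python
import numpy as np
from scipy.stats import norm

def theta_hat_1(y):
    """Approximation (A): impute uniformly between adjacent ordered RSS values."""
    y = np.sort(np.asarray(y, dtype=float))
    m = len(y)
    s = 0.5 * (m + 2) * (y[0] + y[-1]) + (m + 1) * y[1:-1].sum()
    return s / (m**2 + 1)

def theta_hat_2(y):
    """Approximation (B): impute from N(0, 2) truncated between adjacent ordered RSS values."""
    y = np.sort(np.asarray(y, dtype=float))
    m = len(y)
    a, b = y[:-1] / np.sqrt(2), y[1:] / np.sqrt(2)
    trunc_means = np.sqrt(2) * (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))
    return m * (y.mean() + trunc_means.sum()) / (m**2 + 1)

y = [-0.1953, -0.3777]                     # second m = 2 row of Table 2
print(theta_hat_1(y), theta_hat_2(y))      # approximately -0.2292 and -0.2285
```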


The above procedure for studying the properties of RSS Bayes estimators, as well as for approximating them based on their relation to SRS Bayes estimators, can be applied in a similar fashion to other distributions.

5. Concluding Remarks

The ranked set sampling technique is applicable only in rather specific situations, but when it is applicable it gives more efficient estimators than simple random sampling. RSS Bayes estimators can be more efficient than the corresponding SRS Bayes estimators; however, these estimators have very complicated forms. Using the characterization provided in this paper, these estimators can be approximated and their properties can be investigated.

Acknowledgements

We are grateful to the referees for their constructive and helpful comments and suggestions, which greatly improved the final version of the paper.

References

[1] M. Fraiwan Al-Saleh and A. I. Al-Omari, Multistage ranked set sampling, Journal of Statistical Planning and Inference, to appear.
[2] M. Fraiwan Al-Saleh and G. Zheng, Estimation of bivariate characteristics using ranked set sampling, Australian & New Zealand Journal of Statistics, to appear.
[3] M. Fraiwan Al-Saleh, K. Al-Shrafat and H. Muttlak, Bayesian estimation using ranked set sampling, Biometrical Journal, 42(2000), 1-12.
[4] M. Fraiwan Al-Saleh and H. Muttlak, A note on the estimation of the parameter of the exponential distribution using Bayesian RSS, Pakistan Journal of Statistics, 14(1998), 49-56.
[5] M. Fraiwan Al-Saleh and K. Al-Shrafat, Estimation of the average milk yield of sheep using ranked set sampling, Environmetrics, 12(2000), 395-399.
[6] M. Fraiwan Al-Saleh and H. Samawi, On the efficiency of Monte Carlo methods using steady state ranked simulated samples, Communications in Statistics (Simulation and Computation), 29(2000), 941-954.
[7] J. Berger, Statistical Decision Theory, Springer-Verlag, 1980.
[8] M. Evans, Application of ranked set sampling to regeneration surveys in areas direct-seeded to longleaf pine, Master's Thesis, School of Forestry and Wildlife Management, Louisiana State University, Baton Rouge, Louisiana, 1967.
[9] L. Halls and T. Dell, Trial of ranked set sampling for forage yields, Forest Science, 12(1966), 22-26.
[10] A. Kaur, G. Patil, A. Sinha and C. Taillie, Ranked set sampling: an annotated bibliography, Environmental and Ecological Statistics, 2(1995), 25-45.


[11] Y. Kim and B. Arnold, Parameter estimation under generalized ranked set sampling, Statistics and Probability Letters, 42(1999), 353-360.
[12] M. Lavine, The Bayesics of ranked set sampling, Environmental and Ecological Statistics, 6(1999), 47-57.
[13] W. Martin, T. Sharik, R. Oderwald and D. Smith, Evaluation of ranked set sampling for estimating shrub phytomass in Appalachian oak forests, Publication Number FWS-4-80, School of Forestry and Wildlife Resources, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, 1980.
[14] G. McIntyre, A method for unbiased selective sampling, using ranked sets, Australian Journal of Agricultural Research, 3(1952), 385-390.
[15] D. Rubin, Multiple imputations in sample surveys - a phenomenological Bayesian approach to nonresponse, Proceedings of the Survey Research Methods Section, American Statistical Association, Washington, 1978.
[16] D. Rubin, Multiple Imputation for Nonresponse in Surveys, Wiley, New York, 1987.
[17] S. Stokes, An investigation of the consequences of ranked set sampling, Ph.D. thesis, Department of Statistics, University of North Carolina, Chapel Hill, North Carolina, 1976.
[18] S. Stokes, Parametric ranked set sampling, Annals of the Institute of Statistical Mathematics, 47(1995), 465-482.
[19] K. Takahasi and K. Wakimoto, On unbiased estimates of the population mean based on the sample stratified by means of ordering, Annals of the Institute of Statistical Mathematics, 20(1968), 1-31.
[20] G. Zheng and M. Fraiwan Al-Saleh, Modified maximum likelihood estimators based on ranked set sampling, Annals of the Institute of Statistical Mathematics, to appear.

Department of Mathematics and Statistics, Sultan Qaboos University, Sultanate of Oman.
E-mail: malsaleh@squ.edu.om

Mathematics Department, Hashemiah University, Jordan.
