m_β(c_1, c_2, ρ) = m′  ...(1.3.12)

the numerator approximating its limiting value m_β(c_1)/ρ, and hence after a certain point R(c_1, c_2, ρ, α, β) becomes an increasing function of c_2. [Reference Figure 1.3.1]

5. But the rate of increase will itself decrease, so that R(c_1, c_2, ρ, α, β) → m_β(c_1)/m_{1−α}(c_1) as c_2 → ∞.

(The arguments provided above do not constitute a proof of the theorem, but what is being claimed is simple enough, and the supporting arguments help in understanding the main point.)

Thus, for a given c_1 there is a unique c_2 for which R(c_1, c_2, ρ, α, β) is minimum. We call this value c_2^(c_1). Note that

c_2^(c_1) > c_2^(c_1+1)  ...(1.3.10)

and

R(c_1, c_2^(c_1), ρ, α, β) > R(c_1+1, c_2^(c_1+1), ρ, α, β)  ...(1.3.11)

Figure 1.3.1 presents the graph of R(c_1, c_2, ρ, α, β) as a function of c_2 for α = 0.05, β = 0.10, ρ = 0.1 and c_1 = 1, 2, 3, 4.

Since c_1 and c_2 are integers, we must consider the set of (c_1, c_2) values for which R(c_1, c_2, ρ, α, β) is less than p′/p, and choose the c_1 and c_2 for which m_β(c_1, c_2, ρ) is minimum. This ensures the minimum sample size satisfying m_{1−α}(c_1, c_2, ρ) = m.

Construction Algorithm

Precisely, we adopt the following steps.

Step 1. Choose c_1.

Step 2. For c_2 < c_2^(c_1), check whether R(c_1, c_2 − 1, ρ, α, β) ≥ p′/p > R(c_1, c_2, ρ, α, β). If this holds, choose n = m_β(c_1, c_2, ρ)/p′. Otherwise,

Step 3. Increase c_1 by one and go to Step 1.

To facilitate the above steps we may construct a table containing c_1, c_2, m_β(c_1, c_2, ρ) and m_{1−α}(c_1, c_2, ρ), arranged in descending order of R(c_1, c_2, ρ, α, β) for given α, β and ρ. For α = 0.05, β = 0.10 and ρ = 0.1 this table is given below as Table 1.3.1. Similar tables for other values of α, β and ρ can be constructed. With such tables provided, construction of the fixed-risk C kind plan for r = 2 as demonstrated is simple. No attempt is made here to investigate the case r > 2; the approach would be similar in essence, but numerically the problem is sure to become considerably more cumbersome.
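The three-step search above can be sketched in code. This is a minimal illustration, not the author's program: the callables `m_beta(c1, c2)` and `m_alpha(c1, c2)` are hypothetical stand-ins for the fractiles m_β(c_1, c_2, ρ) and m_{1−α}(c_1, c_2, ρ) computed from the OC function earlier in the chapter, the ratio p′/p is passed in as `ratio_target`, and the bound c_2 < c_2^(c_1) of Step 2 is replaced by a plain upper limit on the scan.

```python
def find_plan(m_beta, m_alpha, ratio_target, c1_max=20, c2_max=200):
    """Search for (c1, c2) following Steps 1-3.

    For each c1 (Step 1), scan c2 (Step 2) for the first crossing point
    R(c1, c2 - 1) >= ratio_target > R(c1, c2), where R = m_beta / m_alpha.
    On success return (c1, c2, m_beta(c1, c2)); dividing the last value
    by p' gives the sample size n.  If no crossing occurs, Step 3
    increases c1 and the scan restarts.
    """
    def R(c1, c2):
        return m_beta(c1, c2) / m_alpha(c1, c2)

    for c1 in range(1, c1_max + 1):            # Step 1 (and Step 3 via the loop)
        for c2 in range(1, c2_max + 1):        # Step 2: scan c2 upward
            if R(c1, c2 - 1) >= ratio_target > R(c1, c2):
                return c1, c2, m_beta(c1, c2)
    return None                                # no plan within the search bounds
```

With toy fractile functions chosen so that R decreases in c_2, the crossing point is found exactly where R first falls below the target ratio; in practice the callables would be the tabulated m_β and m_{1−α} values of Table 1.3.1.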