Tackling Large State Spaces in Performance Modelling

William Knottenbelt, Jeremy Bradley, Nicholas Dingle, Peter Harrison, Aleksandar Trifunovic
{wjk,jb,njd200,pgh,at701}@doc.ic.ac.uk
Department of Computing, Imperial College London, United Kingdom

SFM-07, Bertinoro, Italy, 1 June 2007


Trends in Large-scale Systems: The Rise of System Complexity

Systems are no longer controlled by single-flow-of-control programs running on single machines.
There is increasing use of technologies like multithreading and parallel and distributed programming.
Modern systems are complex webs of cooperating subsystems, implying:
- millions or billions of possible system configurations
- massive potential for bugs caused by subtle interactions
Challenge: how can we rigorously guarantee satisfactory system operation in the face of this state explosion?


Trends in Large-scale Systems: The Triumph of Formal Methods over Ad Hoc Testing

Thorough code reviews and ad hoc testing are not enough. For example, you might think file systems are highly reliable and thoroughly tested, right?
Recent work on model checking of file system code by Engler et al. (OSDI '06) uses simple BFS exploration of all possible execution paths and failure points.
It uncovered 18 serious and hitherto undiscovered errors in the 10 widely-used file systems tested.
Formal methods can also be brought to bear on the problem of ensuring adequate system performance.


Trends in Large-scale Systems: Performance Requirements Are Getting More Sophisticated

We are no longer primarily concerned with simple steady-state resource-based measures like mean utilization.
Service Level Agreements featuring more sophisticated performance requirements based on passage times and transients abound.
They are used by hospitals, emergency services, research councils, postal services, industry benchmarks, etc.


Trends in Large-scale Systems: CPU Speed

"The free performance lunch is over" thanks to physical barriers.
Multi-core processors are widely available:
- dual-core processors are common
- the IBM/Sony Cell and Sun Niagara have 8 cores
- Intel has released an 80-core prototype
Concurrent/parallel programming is the next programming revolution.


Trends in Large-scale Systems: Drive Capacity (Seagate projections)

                                2006    2009    2013
    Capacity (GB)                750    2000    8000
    Areal density (GB/sq in)     133     500    1800
    Transfer rate (Mb/s)         930    2000    5000
    RPM                         7200    7200   10000
    Read seek time (ms)            8     7.2     6.5

Source: Dave Anderson, Seagate Technologies, 2007


Talk Overview


Continuous Time Markov Chains: CTMC Introduction

We model systems by identifying all possible states the system can enter (the state space) and the ways in which the system can move between states (the state graph).
State transitions describe a stochastic process χ(t).
We focus on time-homogeneous Markov processes with discrete, finite state spaces that evolve in continuous time.


Continuous Time Markov Chains: CTMC Definition

The evolution of a homogeneous N-state CTMC is described by an N × N generator matrix Q:
- q_ij is the infinitesimal rate of moving from state i to state j (i ≠ j)
- q_ii = -Σ_{j≠i} q_ij
The sojourn time in state i is exponentially distributed with rate parameter μ_i = -q_ii.
The steady-state distribution π satisfies πQ = 0 with Σ_i π_i = 1.
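As a concrete illustration (not from the slides), here is a minimal NumPy sketch that solves πQ = 0 subject to the normalisation constraint, for a small made-up generator matrix:

    import numpy as np

    # Generator matrix of a small 3-state CTMC (rates are illustrative).
    Q = np.array([[-3.0,  2.0,  1.0],
                  [ 1.0, -4.0,  3.0],
                  [ 2.0,  2.0, -4.0]])

    # pi Q = 0 is equivalent to Q^T pi^T = 0; replace one balance
    # equation with the normalisation condition sum_i pi_i = 1.
    A = Q.T.copy()
    A[-1, :] = 1.0
    b = np.zeros(len(Q))
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    print(pi)  # steady-state distribution

Direct solution like this only works for small chains; the rest of the talk is about what to do when Q has millions of states.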


Continuous Time Markov Chains: Generating CTMCs

CTMCs can be automatically derived from several higher-level formalisms, e.g. Stochastic Petri nets and Stochastic Process Algebras.

Example: a Stochastic Petri net with places p1, p2, p3 and transitions t1 (rate 1, p1 → p2), t2 (rate 2, p2 → p3) and t3 (rate 3, p3 → p1), with two tokens initially on p1. Breadth-first exploration (built up step by step over several slides in the original) yields the markings

    1 = (2,0,0), 2 = (1,1,0), 3 = (0,2,0), 4 = (1,0,1), 5 = (0,1,1), 6 = (0,0,2)

and the corresponding 6 × 6 generator matrix

        [ -1   1   0   0   0   0 ]
        [  0  -3   1   2   0   0 ]
    Q = [  0   0  -2   0   2   0 ]
        [  3   0   0  -4   1   0 ]
        [  0   3   0   0  -5   2 ]
        [  0   0   0   3   0  -3 ]
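To make the derivation concrete, here is a hedged sketch (the data structures and names are ours, not from any of the tools in this talk) that breadth-first-explores the markings of this net, assuming marking-independent firing rates:

    from collections import deque

    # t1: p1 -> p2 (rate 1), t2: p2 -> p3 (rate 2), t3: p3 -> p1 (rate 3)
    transitions = [((1, 0, 0), (0, 1, 0), 1.0),
                   ((0, 1, 0), (0, 0, 1), 2.0),
                   ((0, 0, 1), (1, 0, 0), 3.0)]

    def successors(m):
        for consume, produce, rate in transitions:
            if all(mi >= ci for mi, ci in zip(m, consume)):
                yield tuple(mi - ci + pi for mi, ci, pi in
                            zip(m, consume, produce)), rate

    m0 = (2, 0, 0)
    index, frontier, arcs = {m0: 0}, deque([m0]), []
    while frontier:
        m = frontier.popleft()
        for m2, rate in successors(m):
            if m2 not in index:            # new state discovered
                index[m2] = len(index)
                frontier.append(m2)
            arcs.append((index[m], index[m2], rate))

    print(len(index), "states")            # 6 states for 2 tokens
    print(arcs)                            # off-diagonal entries of Q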


Continuous Time Markov Chains: Response Time Analysis with Uniformisation

CTMCs can be "uniformised" for the purposes of first passage time analysis:

    P = Q/q + I

where the rate q > max_i |q_ii| ensures that P is aperiodic.
P is the one-step transition matrix of a DTMC; it has behaviour equivalent to the CTMC when the number of transitions is given by an associated Poisson process of rate q.
We modify P to produce P' by making all target states absorbing.


Continuous Time Markov Chains: Response Time Analysis with Uniformisation

The density of the time between states can be found by convolving the state holding-time densities along all paths between the states. Writing I and J for the sets of source and target states:

    f_IJ(t) = Σ_{n=1}^∞ [ q^n t^{n-1} e^{-qt} / (n-1)! ] Σ_{k∈J} π_k^{(n)}
            ≃ Σ_{n=1}^m [ q^n t^{n-1} e^{-qt} / (n-1)! ] Σ_{k∈J} π_k^{(n)}

where

    π^{(n+1)} = π^{(n)} P'   for n ≥ 0

with

    π_k^{(0)} = 0                   for k ∉ I
    π_k^{(0)} = π_k / Σ_{j∈I} π_j   for k ∈ I
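A hedged NumPy sketch of this truncated sum (function and parameter names are ours; as one implementation choice it zeroes the rows of target states so each first arrival is counted exactly once, and it starts from a uniform rather than steady-state-weighted source vector for simplicity):

    import numpy as np

    def passage_density(Q, sources, targets, ts, m=200):
        n = Q.shape[0]
        q = 1.05 * np.max(np.abs(np.diag(Q)))   # any q > max_i |q_ii|
        P = np.eye(n) + Q / q                   # uniformised DTMC
        Pp = P.copy()
        Pp[targets, :] = 0.0                    # absorb at target states

        pi = np.zeros(n)
        pi[sources] = 1.0 / len(sources)        # simplified initial vector
        w = np.zeros(m + 1)
        for k in range(1, m + 1):
            pi = pi @ Pp
            w[k] = pi[targets].sum()            # mass first hitting targets at hop k

        f = np.zeros(len(ts))
        for i, t in enumerate(ts):
            term = np.exp(-q * t)               # q^k t^{k-1} e^{-qt}/(k-1)! = q * Poisson pmf
            for k in range(1, m + 1):
                f[i] += q * term * w[k]
                term *= q * t / k
        return f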


Semi-Markov Processes: SMP Introduction

SMPs are extensions of Markov processes that allow for generally distributed sojourn times; the Markov property holds at transition instants.
An SMP is characterised by two matrices P and H with elements p_ij and H_ij(t) respectively:
- p_ij = IP(χ_{n+1} = j | χ_n = i) is the state transition probability between states i and j (the embedded DTMC); χ_n is the state at transition n.
- H_ij(t) = IP(T_{n+1} - T_n ≤ t | χ_{n+1} = j, χ_n = i) is the sojourn time distribution in state i when the next state is j; T_n is the time of transition n.


Semi-Markov Processes: Generating SMPs

SMPs can be generated from various higher-level formalisms (e.g. SM-SPNs, SM-PEPA).

Definition: An SM-SPN consists of a 4-tuple (PN, P, W, D), where:
- PN = (P, T, I^-, I^+, M_0) is the underlying Place-Transition net.
- P : T × M → Z^+, denoted p_t(m), is a marking-dependent priority function for a transition.
- W : T × M → IR^+, denoted w_t(m), is a marking-dependent weight function for a transition.
- D : T × M → (IR^+ → [0,1]), denoted d_t(m), is a marking-dependent cumulative distribution function for the firing time of a transition.


Semi-Markov Processes: Steady State Analysis of SMPs

Solve for the steady-state distribution of the embedded DTMC:

    π = πP

Calculate the average sojourn time in state i:

    IE[τ_i] = Σ_j p_ij IE[τ_ij]

where IE[τ_ij] is the expected sojourn time in state i when moving to state j.
Then the steady-state probability of being in state i is:

    φ_i = π_i IE[τ_i] / Σ_{m=1}^N π_m IE[τ_m]
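A minimal sketch of this recipe (the three-state data are made up for illustration):

    import numpy as np

    P = np.array([[0.0, 1.0, 0.0],       # embedded DTMC
                  [0.5, 0.0, 0.5],
                  [1.0, 0.0, 0.0]])
    Etau = np.array([[0.0, 2.0, 0.0],    # E[tau_ij]: conditional mean sojourns
                     [1.0, 0.0, 3.0],
                     [0.5, 0.0, 0.0]])

    # Embedded steady state: pi = pi P with sum(pi) = 1.
    A = P.T - np.eye(3)
    A[-1, :] = 1.0
    pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

    Etau_i = (P * Etau).sum(axis=1)      # E[tau_i] = sum_j p_ij E[tau_ij]
    phi = pi * Etau_i / (pi @ Etau_i)    # time-averaged state probabilities
    print(phi)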


Semi-Markov Processes: Response Time Analysis with Laplace Transforms

We calculate the Laplace transform (LT) of the required response time density and then invert it numerically.
The Laplace transform of f(t) is

    L(s) = ∫_0^∞ e^{-st} f(t) dt

For f(t) = λe^{-λt}:

    L(s) = ∫_0^∞ λe^{-(λ+s)t} dt = [λ/(λ+s)] ∫_0^∞ (λ+s)e^{-(λ+s)t} dt = λ/(λ+s)


Semi-Markov Processes: Why the Laplace Domain?

For a single exponential delay of rate λ:

    L(s) = λ/(λ+s)  ⇔  f(t) = λe^{-λt}

Uniqueness: the transform determines the density, and vice versa.


Semi-Markov Processes: Why the Laplace Domain?

For two exponential delays (rates λ and μ) in sequence:

    L(s) = [λ/(λ+s)] [μ/(μ+s)]

Convolution: multiplication in the Laplace domain is easier than h(t) = f(t) ∗ g(t) = ∫_0^t f(α)g(t-α) dα.


Semi-Markov Processes: Why the Laplace Domain?

For a probabilistic choice between an exponential delay of rate λ (probability p) and one of rate μ (probability 1-p):

    L(s) = p [λ/(λ+s)] + (1-p) [μ/(μ+s)]

Linearity.


Semi-Markov Processes: Why the Laplace Domain?

For a Uniform(a,b) delay, an immediate transition and a deterministic delay Det(d) in sequence:

    L(s) = [(e^{-as} - e^{-bs}) / ((b-a)s)] (1) (e^{-ds})

Extensible to non-exponential time delays (SMPs).


Semi-Markov Processes: Why the Laplace Domain?

Integration in the t domain corresponds to division by s in the Laplace domain.
By inverting L(s)/s instead of L(s) we obtain the cumulative distribution function (cdf) cheaply:

    L_f(s) = ∫_0^∞ e^{-st} f(t) dt = [e^{-st} F(t)]_0^∞ + s ∫_0^∞ e^{-st} F(t) dt = s L_F(s)

Quantiles (percentiles) can be easily calculated from the cdf.


Semi-Markov Processes: Why the Laplace Domain?

Calculating (raw) moments is straightforward:

    L'(s) = ∫_0^∞ -t e^{-st} f(t) dt
    L'(0) = -∫_0^∞ t f(t) dt = -E[T]

Higher moments can be found via L''(0), L'''(0), etc.
That is, from the Laplace transform we can easily find the mean, variance, skewness, kurtosis, etc. of f(t).


Semi-Markov Processes: Response Time Analysis with Laplace Transforms

First-step analysis, with source state i and set of target states J:

    L_iJ(s) = Σ_{k∉J} p_ik h*_ik(s) L_kJ(s) + Σ_{k∈J} p_ik h*_ik(s)

For CTMCs:

    L_iJ(s) = Σ_{k∉J} [q_ik/(s - q_ii)] L_kJ(s) + Σ_{k∈J} q_ik/(s - q_ii)

With a set I of multiple source states:

    L_IJ(s) = Σ_{k∈I} α_k L_kJ(s),  where α_k = π̃_k / Σ_{j∈I} π̃_j

(π̃ denotes the steady-state probabilities used to weight the source states.)


Semi-Markov Processes: Response Time Analysis with Laplace Transforms

As a system of n linear equations in Ax = b form (here with J = {1}):

    [ 1   -r*_12(s)   ...    -r*_1n(s) ] [ L_1J(s) ]   [ r*_11(s) ]
    [ 0   1-r*_22(s)  ...    -r*_2n(s) ] [ L_2J(s) ]   [ r*_21(s) ]
    [ 0   -r*_32(s)   ...    -r*_3n(s) ] [ L_3J(s) ] = [ r*_31(s) ]
    [ .       .       ...        .     ] [    .    ]   [     .    ]
    [ 0   -r*_n2(s)   ...   1-r*_nn(s) ] [ L_nJ(s) ]   [ r*_n1(s) ]

where r*_ij(s) = p_ij h*_ij(s).
A symbolic solution is infeasible for large n.
Instead we solve for particular values of s based on the evaluation demands of the numerical LT inversion algorithm.
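For the CTMC case, here is a hedged NumPy sketch that assembles and solves this system for one complex value of s (the function name and conventions are ours):

    import numpy as np

    def passage_lt(Q, targets, s):
        """L_iJ(s) for every state i at one complex s value."""
        n = Q.shape[0]
        d = np.diag(Q)
        R = Q / (s - d)[:, None]           # r*_ik(s) = q_ik / (s - q_ii)
        np.fill_diagonal(R, 0.0)
        b = R[:, targets].sum(axis=1)      # absorption terms for k in J
        R[:, targets] = 0.0                # unknowns L_k only for k not in J
        A = np.eye(n, dtype=complex) - R
        return np.linalg.solve(A, b.astype(complex))

One such (sparse, complex) solve is needed for every s value that the inverter demands, which is the computational load quantified on the next slide.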


Semi-Markov Processes: Response Time Analysis with Laplace Transforms

General principle:
- input: values of t
- inverter demands: the value of L(s) for several s
- output: values of f(t)
Computational cost (number of L(s) values required):
- Euler: ≈ 30|t|; Laguerre (modified): 400
Euler inversion on a 15,000,000-state model with 50 t-points requires solving 1,500 systems of 15,000,000 × 15,000,000 (complex) sparse linear equations!
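For reference, a compact sketch of Euler-type numerical inversion in the style of Abate and Whitt (the parameter choices A = 18.4, n = 15, m = 11 are common textbook defaults, not necessarily those of the tools in this talk):

    from math import comb, exp, pi

    def euler_invert(L, t, A=18.4, n=15, m=11):
        """Approximate f(t) from its Laplace transform L (a callable)."""
        def partial(N):
            total = 0.5 * L(complex(A / (2 * t), 0)).real
            for k in range(1, N + 1):
                s = complex(A / (2 * t), k * pi / t)
                total += (-1) ** k * L(s).real
            return exp(A / 2) / t * total

        # Euler (binomial) acceleration of the alternating series.
        return sum(comb(m, k) * 2.0 ** -m * partial(n + k)
                   for k in range(m + 1))

    lam = 2.0
    print(euler_invert(lambda s: lam / (lam + s), t=1.0))  # ~ lam * exp(-lam)

This needs n + m + 1 = 27 distinct values of L(s) per t point (the sketch re-evaluates rather than caching), consistent with the ≈ 30|t| figure above.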


Semi-Markov Processes: Numerical Laplace Transform Inversion (Erlang-2, λ = 2)

[Figure: the exact density f(t) plotted against the numerically inverted result for 0 ≤ t ≤ 4.]


Semi-Markov Processes: Numerical Laplace Transform Inversion (Heaviside-2, λ = 2)

[Figure: the exact density f(t) plotted against the numerically inverted result for 0 ≤ t ≤ 4; the y-axis runs from -0.2 to 1.2.]


Issues: The Challenge

Generate a representation of the state space and state graph in the face of:
- a huge number of states
- large state descriptors
Extract performance measures efficiently (in terms of both time and space).
In both cases the approaches should ideally be scalable.


Issues: The Challenge

[Figure: the simple Stochastic Petri net from before, with places p1, p2, p3 and transitions t1 (rate 1), t2 (rate 2), t3 (rate 3).]

Looks innocent, doesn't it!


Issues: The Challenge

[Figure: number of states (log scale, up to 10^12) against number of tokens (up to 10^6) for this net.]


Issues: Courier Model

[Figure: the Courier protocol Stochastic Petri net, spanning sender and receiver application, session and transport tasks (places p1-p46, transitions t1-t34), courier tasks courier1-courier4, and network delay transitions.]


Issues: Courier State Space

    k    n (tangible states)    a (arcs in state graph)
    1             11 700                    48 330
    2             84 600                   410 160
    3            419 400                 2 281 620
    4          1 632 600                 9 732 330
    5          5 358 600                34 424 280
    6         15 410 250               105 345 900
    7         39 836 700               286 938 630
    8         94 322 250               710 223 930
    9        207 498 600             1 623 000 330


Issues: Courier State Space

[Figure: tangible states (n) and arcs in the state graph (a) against window size k, 1 ≤ k ≤ 8, on a log scale.]


Basic BFS State Space Generation: Algorithm

    begin
        E = {s_0}
        F.add(s_0)
        A = ∅
        while (F not empty) do begin
            F.remove(s)
            for each s' ∈ succ(s) do begin
                if s' ∉ E do begin
                    F.add(s')
                    E = E ∪ {s'}
                end
                A = A ∪ {id(s) → id(s')}
            end
        end
    end

Here E is the table of explored states, F is a FIFO queue of unexplored states, and A is the set of arcs of the state graph.


Symbolic Approaches: Binary Decision Diagrams

[Figure: a binary decision diagram over the variables x_1, x_2, x_3, x_4 with terminals 0 and 1.]

Figure caption: A binary decision diagram corresponding to the boolean function (x_1 ∧ x_2) ∨ (x_3 ∧ x_4), or, equivalently, the state space {0011, 0111, 1011, 1100, 1101, 1110, 1111}.


Symbolic Approaches: Symbolic State Space Generation

    begin
        E = F = {s_0}
        repeat
            T = succ(F)
            N = T - E
            F = N
            E = E ∪ N
        until (N = ∅)
    end

Figure caption: Algorithm for symbolic state space generation. E, T, F, N are BDDs representing "explored", "to", "from" and "new" states respectively.


Symbolic Approaches: Multi-Terminal Binary Decision Diagrams

        [ 0  4  0  0 ]
    R = [ 3  0  3  0 ]
        [ 0  2  0  2 ]
        [ 0  0  1  0 ]

    transition    r_1  c_1  r_2  c_2    value
    0 → 1          0    0    0    1       4
    1 → 0          0    0    1    0       3
    1 → 2          0    1    1    0       3
    2 → 1          1    0    0    1       2
    2 → 3          1    1    0    1       2
    3 → 2          1    1    1    0       1
    otherwise                             0

(Row and column indices are encoded in binary with their bits interleaved as r_1 c_1 r_2 c_2.)


Symbolic Approaches: Multi-Terminal Binary Decision Diagrams

[Figure: an MTBDD over the variables r_1, c_1, r_2, c_2 with terminals 4, 3, 2, 1 and 0.]

Figure caption: A multi-terminal binary decision diagram for encoding the matrix.


Symbolic Approaches: Pros and Cons of (MT)BDDs

- Potential for exceeding the bounds of explicit state enumeration methods by several orders of magnitude, given a sufficiently regular state space.
- Sensitive to variable ordering.
- Peak BDD size is often an order of magnitude larger than the final BDD size.
- Completely symbolic numerical solution using MTBDDs has not (yet) met with success.
- Other symbolic techniques (notably MDDs) will be described in this afternoon's lecture; we will focus here on explicit techniques.


Probabilistic State Space Generation: The Case for Probabilistic Algorithms

It can be useful to relax the requirement that the solution produces the correct answer.
Such probabilistic or randomised approaches can lead to dramatic memory and time savings.
The risk of a wrong result must be quantified and kept small.
Algorithms can often be made more reliable than the hardware they are running on!


Probabilistic State Space Generation: The Case for Probabilistic Algorithms

Example: the Miller-Rabin primality test of a number n is based on three facts:
- If n is composite (i.e. not prime) then at least three quarters of the natural numbers less than n are witnesses to the compositeness of n.
- If n is prime then there is no natural number less than n that is a witness to the compositeness of n.
- Given natural numbers m and n with m < n, there is an efficient algorithm which ascertains whether or not m is a witness to the compositeness of n.

We perform k witness tests using randomly chosen natural numbers less than n; if all fail, we assume n is prime. The chance of failing to find a witness for a composite number is at most 2^{-2k} and can be made arbitrarily small at logarithmic runtime cost, as sketched below.
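A self-contained sketch of the test (this is the standard strong-pseudoprime formulation of the witness check):

    import random

    def is_probable_prime(n: int, k: int = 20) -> bool:
        if n < 4:
            return n in (2, 3)
        if n % 2 == 0:
            return False
        d, r = n - 1, 0
        while d % 2 == 0:                  # write n - 1 = d * 2^r with d odd
            d //= 2
            r += 1
        for _ in range(k):                 # k independent witness tests
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue                   # a is not a witness
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False               # a witnesses that n is composite
        return True                        # composite slips through w.p. <= 2^(-2k)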


Probabilistic State Space Generation: Application to State Space Generation

Key idea: use one-way hashing to reduce the memory requirements of the explored state table.
This reduces the amount of memory required to store states dramatically.
But it introduces the risk that two distinct states will have the same hashed representation, leading to misidentification and omission of states in the state graph.
We need to quantify this risk and keep it low.


Probabilistic State Space Generation: Holzmann's Bit-state Hashing

Attempts to maximise state coverage in the face of limited memory.
Based on Bloom filters (devised in 1970!).
The explored state table is a large bit vector T; initially all bits in T are set to zero.
A hash function h(s) maps states onto bit vector positions, so when s is encountered, T[h(s)] is set to 1.
To check whether s is present, examine T[h(s)]: if zero, s has not been encountered previously; if one, assume (possibly erroneously) that s has already been encountered.
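A minimal sketch of the scheme (one hash function; the table size and the choice of SHA-256 are illustrative):

    import hashlib

    T_BITS = 1 << 24                      # bit vector of t = 2^24 bits
    T = bytearray(T_BITS // 8)

    def h(state: bytes) -> int:
        digest = hashlib.sha256(state).digest()
        return int.from_bytes(digest[:8], "big") % T_BITS

    def seen_or_insert(state: bytes) -> bool:
        """True if state is assumed already explored (possibly wrongly)."""
        pos = h(state)
        byte, mask = pos // 8, 1 << (pos % 8)
        if T[byte] & mask:
            return True                   # a collision here means state omission
        T[byte] |= mask
        return False

    print(seen_or_insert(b"(2,0,0)"))     # False: new state
    print(seen_or_insert(b"(2,0,0)"))     # True: already encountered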


Probabilistic State Space Generation: Holzmann's Bit-state Hashing

The probability of no hash collisions when inserting n states into a bit vector of t bits is:

    p = t! / ((t - n)! t^n) = Π_{i=0}^{n-1} (t - i)/t = Π_{i=0}^{n-1} (1 - i/t)

Assuming n ≪ t:

    p ≈ Π_{i=0}^{n-1} e^{-i/t} = e^{-Σ_{i=0}^{n-1} i/t} = e^{-n(n-1)/2t} ≈ e^{-n²/2t}

The corresponding probability of state omission is q = 1 - p.
Problem: to obtain q = 0.1% when inserting n = 10^6 states requires the allocation of a bit vector of 125 TB!


Probabilistic State Space Generation: Improving Holzmann's Method

Why not use two hash functions h_1(s) and h_2(s)?
Then we set T[h_1(s)] = 1 and T[h_2(s)] = 1 when we encounter a state,
and only conclude that a state has been explored if both T[h_1(s)] = 1 and T[h_2(s)] = 1.
Now

    p ≈ e^{-4n³/3t²}

But this still requires an infeasibly large bit vector to achieve low state omission probabilities.
The optimal number of hash functions is actually around 20 (as determined by Wolper and Leroy).


Probabilistic State Space Generation: Interpreting the State Omission Probability

The state omission probability does NOT indicate a fraction of states that are omitted or misidentified.
So if we generate a state space of 10^8 states with an omission probability of 0.1%, this does NOT imply that 100,000 states are incorrect.
Rather, there is a 99.9% chance that the entire state space has been generated correctly.


Probabilistic State Space Generation: Wolper and Leroy's Hash Compaction

Holzmann's method only gives a good probability of complete state coverage when most of the bit vector is empty.
Wolper and Leroy proposed storing only which bit positions are occupied.
This is done by hashing states onto compressed keys of b bits, which can then be stored in a smaller (fixed size) hash table which supports a collision resolution scheme.
Since the approach is like Holzmann's method with a table of size 2^b, we have

    p ≈ e^{-n²/2^{b+1}}
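A sketch of hash compaction (b and the hash are illustrative; the Python set stands in for the fixed-size hash table with collision resolution, so the only remaining omission risk is two distinct states compressing to the same key):

    import hashlib

    B = 40                                # compressed key width in bits

    def compact_key(state: bytes) -> int:
        digest = hashlib.sha256(state).digest()
        return int.from_bytes(digest[:8], "big") >> (64 - B)

    explored = set()                      # stores b-bit keys, not states

    def seen_or_insert(state: bytes) -> bool:
        key = compact_key(state)
        if key in explored:
            return True                   # possibly a different state!
        explored.add(key)
        return False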


Probabilistic State Space Generation: Stern and Dill's Improved Hash Compaction

Uses a large static hash table.
Determines where to store states by using a separate, independent hash function.
Also uses double hashing to keep the number of probes per insertion low.
For a full table with n slots:

    q ≈ n(ln n - 1) / 2^b


Probabilistic State Space Generation: Dynamic Probabilistic State Space Generation

[Figure: a dynamic hash table with r rows. A primary hash key h_1(s) selects a row, and each row stores the secondary hash keys h_2(s) and the identifiers id(s) of the states mapped to it.]


Probabilistic State Space Generation: Dynamic Probabilistic State Space Generation

Effectively we have r independent Wolper and Leroy-type hash tables with 2^b potential entries each:

    p ≈ e^{-n²/(2^{b+1} r)} ≈ 1 - n²/(2^{b+1} r)

This is in fact a lower bound for p (i.e. it provides a conservative estimate of the probability of complete state coverage).


Probabilistic State Space Generation: Comparison of State Omission Probabilities

[Figure: omission probability (log scale, down to 10^-10) against number of states (up to 10^8) for Holzmann, Wolper and Leroy, Stern and Dill, and the dynamic hash scheme.]


Probabilistic State Space Generation: Parallel Dynamic Probabilistic State Space Generation

Use a partitioning hash function to assign states to processors:

    h_0(s) → (0, ..., N-1)

Processor i constructs

    E_i = {s : h_0(s) = i}
    A_i = {(s_1 → s_2) : h_0(s_1) = i}

Processor i has a local FIFO queue F_i and hash table H_i.
Processors send states to their owner processors (and receive the identities of target states).
Now we have:

    p ≈ e^{-n²/(2^{b+1} N r)} ≈ 1 - n²/(2^{b+1} N r)
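A hedged, sequential simulation of the owner-computes idea (per-processor queues and exact explored sets instead of hashed ones; h_0 and N are illustrative choices):

    from collections import deque
    import hashlib

    N = 4                                         # number of "processors"

    def h0(state) -> int:                         # partitioning hash function
        return int(hashlib.md5(repr(state).encode()).hexdigest(), 16) % N

    def parallel_bfs(s0, succ):
        E = [set() for _ in range(N)]             # E_i: states owned by i
        F = [deque() for _ in range(N)]           # F_i: local FIFO queues
        E[h0(s0)].add(s0)
        F[h0(s0)].append(s0)
        arcs = []
        while any(F):
            for i in range(N):                    # round-robin the processors
                if not F[i]:
                    continue
                s = F[i].popleft()
                for s2 in succ(s):
                    j = h0(s2)                    # "send" s2 to its owner j
                    if s2 not in E[j]:
                        E[j].add(s2)
                        F[j].append(s2)
                    arcs.append((s, s2))
        return E, arcs

In a real implementation each processor runs this loop concurrently, cross-partition successors travel as messages, and each E_i is a probabilistic hash table as above.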


Probabilistic State Space Generation: Courier Parallel State Space Generation Time

[Figure: generation time in seconds (0-3500) against tangible states generated (up to 1.6 × 10^7) for 1, 2, 4, 8, 12 and 16 processors.]


Probabilistic State Space Generation: Courier Parallel State Space Generation Speedup

[Figure: speedup against number of processors (1-16) for k = 1 to k = 6.]


Steady-State Solution: The Challenge

We need to solve

    πQ = 0

for very large, sparse Q. In doing so we need to consider how to:
- represent Q (Kronecker, on-the-fly, MDD, etc.)
- access the elements of Q
- store π
- scale the solution


Steady-State Solution: Numerical Methods

Direct vs. iterative methods.
Classical iterative methods: Jacobi, Gauss-Seidel, SOR.
Scalable iterative methods: the Jacobi method and Krylov subspace techniques (e.g. CGS). A Jacobi sketch follows below.
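For orientation, a sketch of the Jacobi iteration for πQ = 0 (dense NumPy for brevity; real solvers use sparse storage, and an irreducible Q with q_ii < 0 is assumed):

    import numpy as np

    def jacobi_steady_state(Q, tol=1e-10, max_iters=100_000):
        n = Q.shape[0]
        pi = np.full(n, 1.0 / n)
        d = -np.diag(Q)                    # -q_ii > 0 for irreducible Q
        R = Q.T.copy()
        np.fill_diagonal(R, 0.0)           # incoming rates q_ji, j != i
        for _ in range(max_iters):
            new = (R @ pi) / d             # pi_i = sum_{j!=i} pi_j q_ji / -q_ii
            new /= new.sum()               # renormalise each sweep
            if np.max(np.abs(new - pi)) < tol:
                return new
            pi = new
        return pi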


Steady-State Solution: Lumpability

Lumpability allows us to aggregate states to form a smaller "quotient" chain.
We can obtain steady-state and transient results for the original chain from the quotient chain without loss of accuracy.
Symmetry is the main source of lumpability.
Given a partition P = {P_1, P_2, ..., P_k} of a CTMC with generator matrix Q, the CTMC is ordinarily lumpable with respect to P iff

    ∀ B, B' ∈ P, s_1, s_2 ∈ B : Q(s_1, B') = Q(s_2, B')

(A checker sketch follows below.)
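A direct transcription of this condition into NumPy (our own helper; Q(s, B') is the aggregate rate from state s into block B'):

    import numpy as np

    def is_ordinarily_lumpable(Q, blocks):
        """blocks: list of lists of state indices partitioning the chain."""
        for B in blocks:
            for Bp in blocks:
                if Bp is B:
                    continue        # rows of Q sum to 0, so B' = B follows
                rates = [Q[s, Bp].sum() for s in B]
                if not np.allclose(rates, rates[0]):
                    return False
        return True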


Steady-State Solution: Lumpability

[Figure: a four-state CTMC with transition rates λ and μ whose two symmetric intermediate states can be lumped, yielding a three-state quotient chain with rates 2λ, μ, 2μ and λ.]


Steady-State Solution: Disk-based Steady-State Solution

Storing Q (and parts of π) on disk and reading elements as needed can yield surprisingly effective solution techniques, especially when combined with clever block-based numerical methods that can re-use read data.
The effective data production rate is often higher than that of storing Q in-core using some compact encoding (e.g. Kronecker, on-the-fly).
It imposes no structural restrictions.
Deavours and Sanders pioneered disk-based CTMC solution.


Steady-State Solution: Disk-based Steady-State Solution (Mehmood and Kwiatkowska)


Steady-State Solution: Parallel Steady-State Disk-based Solution

[Figure: a four-node architecture (P0-P3). Each node runs a disk I/O process and a compute process that communicate through two shared-memory buffers guarded by semaphores; the nodes are connected by a network.]

750 million states achieved by Bell and Haverkort (2001).


Steady-State Solution: Courier Disk-based Parallel Steady-State Solution

    p     metric        k=1      k=2      k=3      k=4      k=5      k=6      k=7      k=8
    1     Jac time    33.647   278.18   1506.4   5550.3
    1     Jac its       4925     4380     4060     3655
    1     CGS time    2.1994   21.622   163.87   934.27    29134
    1     CGS its         60       81      106      129      157
    1     MB/node       20.3     22.1     30.5     60.8    154.0
    2     Jac time    29.655   176.62   1105.7   4313.6
    2     Jac its       4925     4380     4060     3655
    2     CGS time    1.6816   13.119    93.28   509.90   7936.9
    2     CGS its         57       84      107      131      148
    2     MB/node       20.2     21.1    25.45     41.2     89.7
    4     Jac time    25.294   148.45   627.96   3328.3
    4     Jac its       4925     4380     4060     3655
    4     CGS time    1.2647   8.4109   58.302   322.50   1480.5
    4     CGS its         60       80      108      133      159
    4     MB/node       20.1     20.6     22.9     31.4     57.5
    8     Jac time    38.958   140.06   477.02   1780.9   6585.4
    8     Jac its       4925     4380     4060     3655     3235
    8     CGS time    1.4074   6.0976   39.999   204.46   934.76   4258.7
    8     CGS its         61       82      109      132      155      171
    8     MB/node       20.0     20.3     21.7     26.5     41.4     81.6
    16    Jac time    41.831   125.68   506.31   1547.9   5703.4    11683    32329
    16    Jac its       4925     4380     4060     3650     3235     2325     2190
    16    CGS time    3.3505   7.1101   31.322   134.48   577.68   2032.5    13786   141383
    16    CGS its         60       91      104      132      146      173      179      213
    16    MB/node       20.0     20.2     21.0     24.1     33.4     58.5     79.8      161

(Times in seconds; p is the number of processors.)


Steady-State Solution: Courier Steady-State Convergence Behaviour

[Figure: log10(residual norm / vector norm), from 0 down to -10, against the number of matrix-vector multiplications (0-500), for CGS and Jacobi.]


Steady-State Solution: Courier Steady-state Performance Measures

                   k=1       k=2       k=3       k=4       k=5       k=6       k=7       k=8
    λ          74.3467   120.372   150.794   172.011   187.413   198.919   207.690   214.477
    P_send     0.01011   0.01637   0.02051   0.02334   0.02549   0.02705   0.02825   0.02917
    P_recv     0.98141   0.96991   0.96230   0.95700   0.95315   0.95027   0.94808   0.94638
    P_sess1    0.00848   0.01372   0.01719   0.01961   0.02137   0.02268   0.02368   0.02445
    P_sess2    0.92610   0.88029   0.84998   0.82883   0.81345   0.80196   0.79320   0.78642
    P_transp1  0.78558   0.65285   0.56511   0.50392   0.45950   0.42632   0.40102   0.38145
    P_transp2  0.78871   0.65790   0.57138   0.51084   0.46673   0.43365   0.40835   0.38871

Table: Courier Protocol performance measures in terms of the transport window size k.


Hypergraph Partitioning: Parallel Sparse Matrix-Vector Products and Data Partitioning

Parallel sparse matrix-vector product (or similar) operations form the kernel of many parallel numerical algorithms.
In our context this includes steady-state and passage time calculations that use iterative algorithms for solving very large sparse systems of linear equations.
The data partitioning strategy adopted (i.e. the assignment of matrix and vector elements to processors) has a major impact on performance, especially in distributed memory environments.


Hypergraph Partitioning: Partitioning Objectives and Strategies

The aim is to allocate matrix and vector elements across processors such that:
- computational load is balanced
- communication is minimised
Candidate partitioning strategies:
- naive row (or column) striping
- mapping of rows (or columns) and corresponding vector elements to processors using 1D graph- or hypergraph-based data partitioning
- mapping of individual non-zero matrix elements and vector elements to processors using 2D hypergraph-based partitioning


Hypergraph Partitioning: Naive Row-Striping

Assume an n × n sparse matrix A, an n-vector x and p processors.
Simply allocate n/p matrix rows and n/p vector elements to each processor (assuming p divides n exactly).
If p does not divide n exactly, allocate one extra row and one extra vector element to those processors with rank less than n mod p.
What are the advantages and disadvantages of this scheme?


Hypergraph Partitioning: Naive Row-Striping: Example

[Figure: a 16 × 16 sparse matrix and the vector x striped into four consecutive blocks of four rows, one block per processor P1-P4.]


Hypergraph Partitioning: 1D Graph Partitioning

An n × n sparse matrix A can be represented as an undirected graph G = (V, E).
Each row i (1 ≤ i ≤ n) in A corresponds to a vertex v_i ∈ V in the graph.
The (vertex) weight w_i of vertex v_i is the total number of non-zeros in row i.
For the edge set E, edge e_ij connects vertices v_i and v_j with (edge) weight:
- 1 if exactly one of |a_ij| > 0 or |a_ji| > 0
- 2 if both |a_ij| > 0 and |a_ji| > 0
We aim to partition the vertices into p mutually exclusive subsets (parts) {P_1, P_2, ..., P_p} such that the edge-cut is minimised and the load is balanced.


Hypergraph Partitioning: 1D Graph Partitioning

An edge e_ij is cut if the vertices which it contains are assigned to two different processors, i.e. if v_i ∈ P_m and v_j ∈ P_n where m ≠ n.
The edge-cut is the sum of the edge weights of cut edges and is an approximation to the amount of interprocessor communication.
Let

    W_k = Σ_{i∈P_k} w_i   (for 1 ≤ k ≤ p)

denote the weight of part P_k, and let W denote the average part weight.
A partition is said to be balanced if

    W_k ≤ (1 + ε) W   for k = 1, 2, ..., p.
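A literal sketch of these two quantities (dense loops for clarity; part[i] gives the processor assigned to vertex i, and the names are ours):

    import numpy as np

    def edge_cut(A, part):
        n = A.shape[0]
        cut = 0
        for i in range(n):
            for j in range(i + 1, n):
                w = int(A[i, j] != 0) + int(A[j, i] != 0)  # edge weight 1 or 2
                if w and part[i] != part[j]:
                    cut += w                               # edge e_ij is cut
        return cut

    def is_balanced(A, part, p, eps=0.05):
        w = (A != 0).sum(axis=1)          # vertex weights: non-zeros per row
        W = np.bincount(part, weights=w, minlength=p)
        return bool(np.all(W <= (1 + eps) * W.mean()))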


Hypergraph Partitioning: 1D Graph Partitioning

The problem of finding a balanced p-way partition that minimizes the edge-cut is NP-complete.
But heuristics can often be applied to obtain good sub-optimal solutions.
Software tools: CHACO, METIS, ParMETIS.
Once the partition has been computed, assign matrix row i to processor k if v_i ∈ P_k.


Hypergraph Partitioning: 1D Graph Partitioning: Example

Consider the graph corresponding to the sparse matrix A of the previous example.
A graph partitioner recommends four parts as follows:

    P_1 = {v_13, v_7, v_16, v_11}    P_2 = {v_15, v_9, v_2, v_5}
    P_3 = {v_14, v_8, v_10, v_4}     P_4 = {v_3, v_12, v_1, v_6}


Hypergraph Partitioning: 1D Graph Partitioning: Example

[Figure: the 16 × 16 matrix and vector x reordered by the four-part graph partition, with rows and columns in the order 13, 7, 16, 11, 15, 9, 2, 5, 14, 8, 10, 4, 3, 12, 1, 6.]


Hypergraph Partitioning: 1D Hypergraph Partitioning

An n × n sparse matrix A can be represented as a hypergraph H = (V, N).
V is a set of vertices and N is a set of nets (or hyperedges); each net N_j ∈ N is a subset of the vertex set V.
Each row i (1 ≤ i ≤ n) in A corresponds to vertex v_i ∈ V.
Each column j (1 ≤ j ≤ n) in A corresponds to net N_j ∈ N; in particular, v_i ∈ N_j iff a_ij ≠ 0.
The (vertex) weight w_i of vertex v_i is the total number of non-zeros in row i.
Given a partition {P_1, P_2, ..., P_p}, the connectivity λ_j of net N_j denotes the number of different parts spanned by N_j. Net N_j is cut iff λ_j > 1.


Hypergraph Partitioning: 1D Hypergraph Partitioning

The cutsize or hyperedge cut of a partition is defined as:

    Σ_{N_j ∈ N} (λ_j - 1)

The aim is to minimize the hyperedge cut while maintaining the balance criterion (which is the same as for graphs).
Again, the problem of finding a balanced p-way partition that minimizes the hyperedge cut is NP-complete, but heuristics can be used to find sub-optimal solutions.
Software tools: hMETIS, PaToH, Parkway.
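The cutsize metric transcribes directly (column-net model as defined above, dense loop for clarity):

    def hyperedge_cut(A, part):
        n = A.shape[0]
        cut = 0
        for j in range(n):                   # net N_j = {v_i : a_ij != 0}
            spanned = {part[i] for i in range(n) if A[i, j] != 0}
            cut += max(len(spanned) - 1, 0)  # connectivity lambda_j minus one
        return cut

Unlike the edge-cut, this metric captures the actual communication of the expand phase of a parallel matrix-vector product, which is why hypergraph models are preferred (see the comparison slide below).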


Hypergraph Partitioning: 1D Hypergraph Partitioning: Example

Consider the hypergraph corresponding to the sparse matrix A of the previous example.
A hypergraph partitioner recommends four parts as follows:

    P_1 = {v_13, v_7, v_16, v_10}    P_2 = {v_15, v_9, v_1, v_3}
    P_3 = {v_14, v_8, v_11, v_4}     P_4 = {v_2, v_12, v_5, v_6}


Hypergraph Partitioning: 1D Hypergraph Partitioning: Example

[Figure: the 16 × 16 matrix and vector x reordered by the four-part hypergraph partition, with rows and columns in the order 13, 7, 16, 10, 15, 9, 1, 3, 14, 8, 11, 4, 2, 12, 5, 6.]


Hypergraph Partitioning: 2D Hypergraph Partitioning

The most general mapping possible is to allocate individual non-zero matrix elements and vector elements to processors.
The general form of parallel sparse matrix-vector multiplication follows four stages, where each processor:
1. sends its x_j values to the processors that possess a non-zero a_ij in column j,
2. computes the products a_ij x_j for its non-zeros a_ij, yielding a set of contributions b_is, where s is a processor identifier,
3. sends its b_is contributions to the processor that is assigned x_i,
4. adds up the received contributions for its assigned vector elements, so b_i = Σ_{s=0}^{p-1} b_is.


Hypergraph Partitioning: 2D Hypergraph Partitioning

Each non-zero is modelled by a vertex (weight 1) in the hypergraph; if a_ii is 0 then add a "dummy" vertex (weight 0).
Model the Stage 1 communication cost by a net whose constituent vertices are the non-zeros of column j; model the Stage 3 communication cost by a net whose constituent vertices are the non-zeros of row i.
Now partition the hypergraph into p parts such that the k - 1 metric is minimised, subject to the balance constraint.
Assign the non-zero elements to processors according to the partition.
Assign the b_i's to processors appropriately, according to whether the row i and/or column i hyperedge is cut (if any).


Hypergraph Partitioning: 2D Hypergraph Partitioning: Example

[Figure: an 8 × 8 sparse matrix with rows and columns numbered 1-8.]


Hypergraph Partitioning: 2D Hypergraph Partitioning: Example

[Figure: the matrix and vector x reordered with rows and columns in the order 2, 4, 7, 1, 3, 5, 6, 8.]


Hypergraph Partitioning: 2D Hypergraph Partitioning: Example

[Figure: the 2D hypergraph for the example matrix, with one vertex per non-zero (v_1-v_28), row nets r_1-r_8 and column nets c_1-c_8.]


Hypergraph Partitioning: 2D Hypergraph Partitioning: Example

[Figure: the reordered matrix again, showing the resulting assignment of non-zeros to processors.]


Hypergraph Partitioning: Application to PageRank Computation

[Figure: total inter-processor communication volume (number of floating-point elements sent, log scale 10^3-10^7) for different web-graph partitions: Cyclic, GleZhu, 1D hypergraph and 2D hypergraph.]


Hypergraph Partitioning: Application to PageRank Computation

[Figure: breakdown of PageRank time per iteration (0-0.06 s) into computation, communication and residual components for the GleZhu, 1D hypergraph and 2D hypergraph partitions.]


Hypergraph Partitioning: Comparison of Graph and Hypergraph Partitioning Techniques

A graph partition aims to minimise the number of non-zero entries in off-diagonal matrix blocks.
A hypergraph partition aims to minimise actual communication; the partition may have more off-diagonal non-zero entries than a graph partition, but these will tend to be column-aligned.
Either sort of partitioning is preferable to a naive or random partition.
For very large matrices, parallel partitioning tools are required (e.g. ParMeTiS for graphs, and Parkway or Zoltan for hypergraphs).


Parallel Computation of Densities and Quantiles of First Passage Time: The SMARTA Tool

[Figure: the SMARTA pipeline. An enhanced DNAmaca high-level specification feeds a state space generator and a DTMC steady-state solver; a hypergraph partitioner produces partitioned matrix files; a distributed Laplace transform inverter (with master disk and memory caches) farms s-value work queues out to groups of slave processors that evaluate L(s_1), L(s_2), ..., L(s_n).]


Parallel Computation of Densities and Quantiles of First Passage Time: Courier Response Time Density Function

[Figure: probability density against time (0-100). The uniformization result and the Laplace result are plotted against results from 10 simulation runs of 1 billion transitions each.]


Parallel Computation of Densities and Quantiles of First Passage Time: Some Tools Developed at Imperial

- DNAmaca (Data Network Architectures Markov Chain Analyser): steady-state analysis of Markov chains (most suited to GSPNs and related formalisms)
- smca (Semi-Markov Chain Analyser): steady-state analysis of semi-Markov chains (SM-SPN models)
- HYDRA (Hypergraph-based Distributed Response-time Analyser): uniformisation-based passage-time analysis of Markov chains
- SMARTA (Semi-Markov Response-time Analyser): numerical Laplace-transform inversion-based passage-time analysis of semi-Markov chains
- ipc (Imperial PEPA Compiler): specification and analysis of PEPA models via DNAmaca and HYDRA
- Parkway (Parallel k-way Hypergraph Partitioner)


Voting Model: Description


Voting Model: State Space

    System    CC    MM    NN        States
       1      60    25     4       106 540
       2     100    30     4       249 760
       3     125    40     4       541 280
       4     150    40     5       778 850
       5     175    45     5     1 140 050
       6     300    80    10    10 999 140


Voting Model: Response Time Density Function

[Figure: passage time density for the 10.9 million state voting model, 500 ≤ t ≤ 800, with error bars from 10 simulation runs of 1 billion transitions each.]


Voting Model: Response Time Cumulative Distribution Function

[Figure: cumulative passage time distribution for the 10.9 million state voting model, 500 ≤ t ≤ 800.]


Active Badge Model: Description


Active Badge Model: Response Time Density Function

[Figure: probability density against time (0-140) for the passage from 3 people in room R1 to any 1 person in room R6.]


Active Badge Model: Response Time Cumulative Distribution Function

[Figure: cumulative probability against time (0-140) for the same passage.]


Thank you! Any (more) questions?
