graphic applications for military and entertainment purposes. Although these approaches can provide the interactive latencies and frame rates required by multiplayer games, they usually support only a few hundred user-driven entities within a simulation.

III. A Distributed System for Large-Scale Crowd Simulation

In previous works, we proposed an architecture that can simulate large crowds of autonomous agents at interactive rates [6], [7]. In that architecture, the crowd system is composed of many Client Computers, which host agents implemented as threads of a Client Process, and one Action Server (AS), executed on a single computer, which is responsible for checking the actions (e.g. collision detection) sent by the agents [6]. In order to avoid a server bottleneck, the simulation world was partitioned into subregions, each one assigned to a parallel AS [7]. A scheme of this architecture is shown in Figure 1. The figure shows how the 2D virtual world occupied by the agents (black dots) is partitioned into three subregions, each one managed by a parallel AS (labeled in the figure as ASx). Each AS is hosted by a different computer. Agents are execution threads of a Client Process (labeled in the figure as CPx) that is hosted on one Client Computer. The computers hosting client and server processes are interconnected. Each AS process hosts a copy of the Semantic Database, but each AS exclusively manages the part of the database representing its own region. In order to guarantee the consistency of the actions near the borders between regions (see agent k in Figure 1), the ASs can collect information about the surrounding regions by querying the servers managing the adjacent regions; a minimal sketch of this partitioning and border handling is given below. Additionally, the associated Clients are notified about the changes produced by agents located near the adjacent regions by the ASs managing those regions.

Fig. 1. General scheme of the distributed simulation system with a Visual Client Process.

The architecture shown in Figure 1 allows large crowds of autonomous agents to be simulated with good scalability. However, it also needs a scalable visualization method in order to render the simulated crowd. The visualization system is in charge of rendering the simulated world, starting from the information generated by the distributed servers. In order to provide scalability, the visualization system should be designed in a distributed fashion.

A feasible way of implementing a distributed visualization system could be the integration of a rendering module within each Action Server. In this way, each AS could visualize its own region of the virtual world. However, the computational workload resulting from adding a rendering module to each AS could degrade the performance of the whole simulation system [17]. Additionally, with this approach the number of cameras would be limited by the number of servers in the system. Instead, we have followed a different approach, in which the visualization of the simulation is distributed among different processes, each one denoted as a Visual Client Process (VCP).
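As an illustration of the region partitioning and border-consistency mechanism described above, the following C++ sketch maps an agent position to the AS managing its subregion and reports the adjacent ASs that must also be queried when the agent acts close to a border (as agent k does in Figure 1). It is only a minimal sketch under assumed conventions: the strip-shaped regions, the border threshold and all identifiers are illustrative choices, since the text does not specify how the subregions are shaped or how border proximity is detected.

// Minimal sketch (not the paper's implementation): a 2D world split into
// vertical strips, one per Action Server, plus detection of agents close
// enough to a border to require queries to the adjacent servers.
// The strip layout, the border threshold and all names are assumptions.
#include <cstdio>
#include <vector>

struct Position { double x, y; };

struct StripPartition {
    double worldWidth;        // extent of the world along the x axis
    int    numServers;        // one AS per strip
    double borderThreshold;   // distance at which adjacent-server queries are needed

    // Index of the AS managing the strip that contains position p.
    int serverFor(const Position& p) const {
        int idx = static_cast<int>(p.x / (worldWidth / numServers));
        if (idx < 0) idx = 0;
        if (idx >= numServers) idx = numServers - 1;
        return idx;
    }

    // ASs managing adjacent regions that must be queried to keep border
    // actions (e.g. collision detection) consistent.
    std::vector<int> adjacentServersToQuery(const Position& p) const {
        std::vector<int> result;
        double stripWidth  = worldWidth / numServers;
        int    idx         = serverFor(p);
        double leftBorder  = idx * stripWidth;
        double rightBorder = (idx + 1) * stripWidth;
        if (idx > 0 && p.x - leftBorder < borderThreshold)
            result.push_back(idx - 1);
        if (idx < numServers - 1 && rightBorder - p.x < borderThreshold)
            result.push_back(idx + 1);
        return result;
    }
};

int main() {
    StripPartition part{300.0, 3, 2.0};   // three ASs, 2-unit border band (assumed values)
    Position agentK{99.5, 40.0};          // an agent acting near the AS0/AS1 border
    std::printf("agent k is managed by AS%d\n", part.serverFor(agentK));
    for (int s : part.adjacentServersToQuery(agentK))
        std::printf("  AS%d must also be queried for border consistency\n", s);
    return 0;
}

In an actual deployment the adjacent-server lookup would of course be a remote query to the corresponding AS process rather than a local function call; the sketch only shows the selection logic.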
Each VCP manages one camera, and it is hosted on a dedicated computer, different from the ones hosting either CPs or ASs. A VCP can be connected to several different ASs, depending on the area of the virtual world covered by the camera of the VCP. For example, in Figure 1 the VCP is connected to both AS1 and AS2, since the projection of the camera plane (denoted as MBR Frustum) intersects the regions managed by these ASs.

In order to design the rendering module of the VCP efficiently, the first step consists of measuring the workload that the information received from the ASs represents for a single VCP. The amount of information sent by the ASs depends on two factors: the number of simulated agents and the acting period of those agents (the period of time between two successive actions requested by an agent). Table I shows the percentage of CPU utilization in the computer hosting the VCP when increasing both the number of agents in the MBR Frustum and the acting period. The results were obtained using up to four servers, each one managing 3000 agents (12000 agents in total for four servers). In these tests, the VCP was connected to the servers and all the agent requests received by the servers were forwarded to the VCP, i.e. the VCP received updates from 12000 agents when using four servers. Table I shows that the VCP workload exceeds the computational bandwidth of the hosting computer when 6000 agents (2 servers) are connected to the VCP, since the percentage of CPU utilization reaches 100%. The table also shows that the workload generated by the VCP is inversely related to the acting period, as could be expected. The AS selection and these two workload factors are sketched in the example below.

In order to find out how the saturation of the CPU affects the performance of the VCP, we have measured
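The following C++ sketch, under assumed data layouts, illustrates the connection criterion and the two workload factors discussed above: a VCP connects to every AS whose region intersects the MBR of its camera frustum, and the update rate it receives grows with the number of agents managed by the connected ASs and decreases as the acting period grows. The axis-aligned rectangles, the per-server agent count and the acting period value are illustrative assumptions and are not taken from Table I.

// Minimal sketch (assumed, not the paper's implementation): select the ASs
// whose regions intersect the camera's MBR Frustum and estimate the update
// rate the VCP would receive from them. Region shapes and numeric values
// are illustrative assumptions.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Rect { double xMin, yMin, xMax, yMax; };

// Axis-aligned rectangle overlap test.
bool intersects(const Rect& a, const Rect& b) {
    return a.xMin < b.xMax && b.xMin < a.xMax &&
           a.yMin < b.yMax && b.yMin < a.yMax;
}

int main() {
    // Three AS regions (as in Fig. 1) and the MBR of the camera frustum.
    std::vector<Rect> regions = { {0, 0, 100, 100},
                                  {100, 0, 200, 100},
                                  {200, 0, 300, 100} };
    Rect mbrFrustum{60, 20, 150, 80};

    const int    agentsPerServer      = 3000;  // assumed, matching the test setup size
    const double actingPeriodSeconds  = 0.5;   // assumed time between two successive actions

    double totalUpdatesPerSecond = 0.0;
    for (std::size_t i = 0; i < regions.size(); ++i) {
        if (intersects(mbrFrustum, regions[i])) {
            // Each connected AS forwards roughly one update per agent per acting period.
            double rate = agentsPerServer / actingPeriodSeconds;
            totalUpdatesPerSecond += rate;
            std::printf("VCP connects to AS%zu (%.0f updates/s)\n", i + 1, rate);
        }
    }
    std::printf("estimated incoming rate: %.0f updates/s\n", totalUpdatesPerSecond);
    return 0;
}

With these assumed values the VCP connects to AS1 and AS2 only, matching the situation depicted in Figure 1, and the estimated incoming rate doubles with each additional connected server while halving if the acting period is doubled.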
