Foundations of Python Network Programming (978-1-4302-3004-5)

CHAPTER 7 ■ SERVER ARCHITECTURE

        for worker in list(workers):  # iterate over a copy, since we mutate the list
            if not worker.is_alive():
                print worker.name, "died; starting replacement"
                workers.remove(worker)
                workers.append(start_worker(Worker, listen_sock))

First, notice how this server is able to re-use the simple, procedural approach to answering client requests that it imports from the launcelot.py file we introduced in Listing 7–2. Because the operating system keeps our threads or processes separate, they do not have to be written with any awareness that other workers might be operating at the same time.
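The master loop above calls a start_worker() helper whose definition is not reproduced in this excerpt. As a minimal sketch, assuming that Worker is either threading.Thread or multiprocessing.Process (which share a constructor signature), it might look like the following; serve_forever() here is a hypothetical stand-in for the per-connection logic that the real listing imports from launcelot.py:

```python
import socket
import threading
from multiprocessing import Process  # either class can serve as `Worker`

def serve_forever(listen_sock):
    # Hypothetical stand-in for the procedural request handler imported
    # from launcelot.py: accept and serve one connection at a time, with
    # no awareness that sibling workers exist.
    while True:
        conn, address = listen_sock.accept()
        conn.sendall(conn.recv(1024))   # trivial echo, for illustration
        conn.close()

def start_worker(Worker, listen_sock):
    # threading.Thread and multiprocessing.Process take the same
    # target/args keywords, so one helper can launch either kind.
    worker = Worker(target=serve_forever, args=(listen_sock,))
    worker.daemon = True    # do not block interpreter exit on workers
    worker.start()
    return worker
```

Because both classes are interchangeable here, the same server can be switched between a threaded and a multi-process deployment by changing a single argument.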

Second, note how much work the operating system is doing for us! It is letting multiple threads or processes all call accept() on the very same server socket, and instead of raising an error and insisting that only one thread at a time be able to wait for an incoming connection, the operating system patiently queues up all of our waiting workers and then wakes up one worker for each new connection that arrives. The fact that a listening socket can be shared at all between threads and processes, and that the operating system does round-robin balancing among the workers that are waiting on an accept() call, is one of the great glories of the POSIX network stack and execution model; it makes programs like this very simple to write.
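This kernel-level sharing is easy to observe directly. The following small demonstration (my own, not from the book's listings) starts three threads that all block in accept() on one listening socket; each incoming connection wakes exactly one of them, and every client is served:

```python
import socket
import threading

def worker(listen_sock):
    # All three workers block in accept() on the very same socket;
    # the kernel hands each new connection to exactly one of them.
    while True:
        try:
            conn, address = listen_sock.accept()
        except OSError:          # listening socket was closed: shut down
            return
        conn.sendall(conn.recv(32).upper())
        conn.close()

listen_sock = socket.socket()
listen_sock.bind(('127.0.0.1', 0))
listen_sock.listen(5)

for _ in range(3):
    threading.Thread(target=worker, args=(listen_sock,), daemon=True).start()

replies = []
for _ in range(6):               # six sequential client connections
    client = socket.create_connection(listen_sock.getsockname())
    client.sendall(b'ping')
    replies.append(client.recv(32))
    client.close()

listen_sock.close()
```

No locking is needed around accept(): the operating system itself serializes the hand-off of connections to the waiting workers.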

Third, although I chose not to complicate this listing with error-handling or logging code—any exceptions encountered in a thread or process will be printed as tracebacks directly to the screen—I did at least throw in a loop in the master thread that checks the health of the workers every few seconds, and starts up replacement workers for any that have failed.
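The health check at the top of this excerpt can be factored into a reusable helper, and the logging that the listing deliberately omits is easy to add. The names below are my own, not the book's; the sketch performs one supervision pass over the worker list and reports how many workers it replaced:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger('supervisor')

def check_workers(workers, make_worker):
    """One health-check pass: replace any dead workers, return the count."""
    replaced = 0
    for worker in list(workers):     # iterate over a copy; we mutate below
        if not worker.is_alive():
            log.warning('%s died; starting replacement', worker.name)
            workers.remove(worker)
            workers.append(make_worker())
            replaced += 1
    return replaced

def supervise(workers, make_worker, interval=2.0):
    # The master thread repeats the check every few seconds, just as
    # the loop in the listing above does.
    while True:
        time.sleep(interval)
        check_workers(workers, make_worker)
```

Because any object with is_alive() and name attributes satisfies the helper, it works unchanged whether the workers are threads or processes.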

Figure 7–4 shows the result of our efforts: performance that is far above that of the single-threaded server, and that also slightly beats both of the event-driven servers we looked at earlier.

Figure 7–4. Multi-process server benchmark

Again, given the limitations of my small dual-core laptop, the server starts falling away from linear behavior as the load increases from 5 to 10 simultaneous clients, and by the time it reaches 15 concurrent users, the number of 10-question request sequences that it can answer every second has fallen from around 70 per client to less than 50. And then—as will be familiar to anyone who has studied

