Winsock I/O Model - Part II : Implementation


In the previous post[6], I
summarized several scalable network I/O models in theory. In this
article, I will give concrete code showing how to use each model to
build a scalable network server. Building a scalable server is a
challenging task with many considerations; here we focus only on
the network I/O model.

When a client request comes
in, the server does some floating-point calculation, gets the server's
system time, and then responds to the client with this information.
The code can be found in the
ServeTheClient() function in Util.cxx.

1. Multi-threading

The server consists of two parts:
- One Listening Thread: listen and accept client connection
- Many Worker Threads: one for each client

When a new client comes, the Listening Thread accepts it and creates a new
worker thread to serve it. The worker thread reads a string from the
client, does the processing, and sends the output back to the client.
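The accept loop for this model can be sketched as follows. This is a minimal illustration, not the actual code from the linked project: the `ServeTheClient()` signature is assumed from the description above, and error handling is reduced to the essentials.

```cpp
// Thread-per-client sketch: assumes WSAStartup() has already been called
// and listenSock is a bound, listening socket.
#include <winsock2.h>
#include <process.h>

void ServeTheClient(SOCKET s);   // assumed signature; defined in Util.cxx

unsigned __stdcall WorkerThread(void* arg)
{
    SOCKET client = (SOCKET)(ULONG_PTR)arg;
    ServeTheClient(client);      // read request, process, send response
    closesocket(client);
    return 0;
}

void ListeningThread(SOCKET listenSock)
{
    for (;;) {
        SOCKET client = accept(listenSock, NULL, NULL);
        if (client == INVALID_SOCKET)
            break;
        // One dedicated worker thread per client connection.
        _beginthreadex(NULL, 0, WorkerThread,
                       (void*)(ULONG_PTR)client, 0, NULL);
    }
}
```

The simplicity is the appeal; the cost is one thread (stack, context switches) per connection, which is what the later models try to avoid.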

This is the simplest solution; the code can be found here: Multithreading Network Server.

2. I/O Multiplexing - BSD Select

In this model, the main thread only initializes the Winsock environment and
starts listening; other worker threads do the real work. Each worker thread
handles up to N - 1 client connections (N is FD_SETSIZE, which defaults to
64 in Winsock); one slot is reserved for the listening socket.

When using BSD
select to implement a network server, you should put the server
socket (the socket used to listen for and accept client connections) into
non-blocking mode. The reason is that even if select() reports that a
client connection request has arrived, accept() may still block, since it's
possible that many threads get the same notification.

There is a SELECT_CONN_CTX data structure for each client connection. This is essentially a state-machine style server architecture.
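A worker thread's select loop might look like the sketch below. It is illustrative only: `HandleClient()` stands in for the per-connection state machine that the real code drives through SELECT_CONN_CTX, and the listening socket is assumed to already be non-blocking via `ioctlsocket(FIONBIO)`.

```cpp
// One select() worker: one slot for the listening socket,
// up to FD_SETSIZE - 1 slots for clients.
#include <winsock2.h>

void HandleClient(SOCKET s);     // illustrative: advance this connection's state machine

void SelectWorker(SOCKET listenSock, SOCKET clients[], int count)
{
    for (;;) {
        fd_set readSet;
        FD_ZERO(&readSet);
        FD_SET(listenSock, &readSet);
        for (int i = 0; i < count; ++i)
            FD_SET(clients[i], &readSet);

        // First parameter is ignored by Winsock's select().
        if (select(0, &readSet, NULL, NULL, NULL) == SOCKET_ERROR)
            break;

        if (FD_ISSET(listenSock, &readSet)) {
            // Non-blocking accept: fails with WSAEWOULDBLOCK if another
            // thread already took this connection.
            SOCKET c = accept(listenSock, NULL, NULL);
            if (c != INVALID_SOCKET && count < FD_SETSIZE - 1)
                clients[count++] = c;
        }
        for (int i = 0; i < count; ++i)
            if (FD_ISSET(clients[i], &readSet))
                HandleClient(clients[i]);
    }
}
```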

A drawback of the current implementation is that the number of threads only
increases, never decreases. The number of worker threads should shrink when
the number of concurrent client connections drops.

An alternative design
is to use some dedicated thread(s) to listen for and accept client connections
while other threads only serve client requests. Since this introduces much
shared data among different threads, synchronization mechanisms (such
as locks) are needed.

The code - BSD Select based Network Server

3. I/O Multiplexing - Event Select

It's very similar to the BSD select model, with some small differences:
- WSAWaitForMultipleEvents() rather than select() is used to wait for network events. Since the wait function only returns one array index, we should also check all event handles that follow the returned one to ensure fairness
- WSAEnumNetworkEvents() is used to determine the exact network event
- each thread can only serve WSA_MAXIMUM_WAIT_EVENTS - 1 client connections

The drawbacks and possible alternative architecture are the same as for BSD select.

Note: the socket should be in non-blocking mode to avoid blocking, for the same reason as in the BSD select model. But WSAEventSelect() itself puts a socket into non-blocking mode, so we don't need to call another function to do this as in the BSD select model.
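The wait-then-scan pattern described above can be sketched like this. The pairing of `events[i]` with `socks[i]` and the `OnRead`/`OnClose` handlers are illustrative assumptions, not names from the original code.

```cpp
// One event-select worker thread. Each socket was registered with
// WSAEventSelect(sock, event, FD_READ | FD_CLOSE), which also made it
// non-blocking.
#include <winsock2.h>

void OnRead(SOCKET s);    // illustrative handlers
void OnClose(SOCKET s);

void EventSelectWorker(WSAEVENT events[], SOCKET socks[], DWORD count)
{
    for (;;) {
        DWORD idx = WSAWaitForMultipleEvents(count, events, FALSE,
                                             WSA_INFINITE, FALSE);
        if (idx == WSA_WAIT_FAILED)
            break;
        idx -= WSA_WAIT_EVENT_0;

        // The wait only reports the first signaled handle; probe the
        // following ones with a zero timeout so they are not starved.
        for (DWORD i = idx; i < count; ++i) {
            if (WSAWaitForMultipleEvents(1, &events[i], TRUE, 0, FALSE)
                    == WSA_WAIT_TIMEOUT)
                continue;
            WSANETWORKEVENTS ne;
            WSAEnumNetworkEvents(socks[i], events[i], &ne); // also resets the event
            if (ne.lNetworkEvents & FD_READ)  OnRead(socks[i]);
            if (ne.lNetworkEvents & FD_CLOSE) OnClose(socks[i]);
        }
    }
}
```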

The code - Event Select based Network Server

4. Overlapped I/O - Event Waiting

In this model, you associate each Winsock call with an overlapped data
structure, and associate a kernel event with each overlapped data
structure. This event is used to get the async/overlapped call's completion
notification.

Core logic:
- The main thread only deals with init work
- Worker threads do async accepts using AcceptEx()
- When a client connects, it is served in state-machine style
- When WSAWaitForMultipleEvents() returns, we should check all following handles to ensure fairness
- When there is no free slot, a new worker thread is created to serve more clients.

Since it uses kernel events to get completion notifications, it has the same
drawback as the Event Select model: each thread can only serve
WSA_MAXIMUM_WAIT_EVENTS - 1 client connections.
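Posting one overlapped receive with an event for completion might look like the sketch below; the `CONN_CTX` layout is an assumption for illustration, not the structure used in the linked code.

```cpp
// Overlapped receive with event-based completion notification.
#include <winsock2.h>

struct CONN_CTX {
    SOCKET        sock;
    WSAOVERLAPPED ov;        // ov.hEvent is signaled on completion
    char          buf[4096];
};

bool PostRecv(CONN_CTX* ctx)
{
    WSABUF wbuf = { sizeof(ctx->buf), ctx->buf };
    DWORD received = 0, flags = 0;
    ZeroMemory(&ctx->ov, sizeof(ctx->ov));
    ctx->ov.hEvent = WSACreateEvent();
    // Returns immediately; a worker thread later picks up the completion
    // via WSAWaitForMultipleEvents() on ov.hEvent, then calls
    // WSAGetOverlappedResult() to get the byte count.
    int rc = WSARecv(ctx->sock, &wbuf, 1, &received, &flags, &ctx->ov, NULL);
    return rc == 0 || WSAGetLastError() == WSA_IO_PENDING;
}
```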

The code - Overlapped I/O with Event Waiting based Network Server

5. Overlapped I/O - Callback Routine

In this model, you pass an extra callback routine parameter when issuing overlapped I/O calls.

Each client connection is represented by a context data structure,
which can be accessed in the callback routine. This model seems pretty elegant:
- no multithreading
- no synchronization
- callback routines represent state transitions

Core Logic:
- The main thread is an async listen/accept loop
- AcceptEx() sends its completion notification using a kernel event
- The main thread must wait in an alertable state so that overlapped I/O callback routines get executed.
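The callback shape and the alertable wait can be sketched as below. How the real code recovers its per-connection context from the overlapped pointer is an assumption here (embedding the overlapped in the context and using CONTAINING_RECORD is one common way).

```cpp
// Completion-routine model: WSARecv()/WSASend() take a callback, and the
// single thread waits alertably so queued callbacks (APCs) can run.
#include <winsock2.h>

void CALLBACK RecvComplete(DWORD err, DWORD bytes,
                           LPWSAOVERLAPPED ov, DWORD flags)
{
    // ov leads back to the per-connection context structure; the callback
    // advances that connection's state machine and issues the next
    // overlapped call (e.g. a send of the response).
    if (err == 0 && bytes > 0) {
        /* process request, post overlapped send, ... */
    }
}

void MainLoop()
{
    for (;;) {
        // Alertable wait: pending completion routines execute here.
        SleepEx(INFINITE, TRUE);
    }
}
```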

The code - "Overlapped I/O with Callbacking" based Network Server

6. Overlapped I/O - IO Completion Port

This is the most elegant solution for a scalable network server:
- The main thread is a listen/accept thread; when a client connection arrives, it posts a notification to the IOCP and lets a worker thread process it.
- Each worker thread deals with each client connection in state-machine style, using network I/O completion notifications to trigger state transitions and the next operation call.
- The system is left to control thread count and scheduling.
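The worker-thread loop at the heart of this model can be sketched as follows; how the completion key and overlapped pointer map to the per-connection context is left abstract, since that is specific to the linked code.

```cpp
// IOCP worker thread: blocks until some overlapped I/O on a socket
// associated with the port completes, then advances that connection's
// state machine.
#include <winsock2.h>

void IocpWorker(HANDLE iocp)
{
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        LPOVERLAPPED ov = NULL;
        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE)) {
            if (ov == NULL)
                break;          // the port itself failed or was closed
            /* I/O error on one connection: clean up that connection */
            continue;
        }
        // key/ov identify the connection and the operation that finished;
        // advance its state machine and post the next overlapped call
        // (WSARecv, WSASend, or AcceptEx for a fresh accept).
    }
}
```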

You can see from the source code that the logic is very clear and easy to
understand. If you are going to write a network server on the Windows
platform, this model is highly recommended.

The code - "Overlapped I/O with IOCP" based Network Server

The original CreateIoCompletionPort() is heavily overloaded in its semantics. So I created two separate functions, CreateNewIoCompletionPort() and AssociateDeviceWithIoCompletionPort(), so that each does only one simple thing. The definitions and implementations can be found in Util.h and Util.cxx.
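A plausible shape for these two wrappers is sketched below; the real definitions live in Util.h/Util.cxx, so the parameter lists here are guesses at the intent, not the actual signatures.

```cpp
// Splitting the two roles of CreateIoCompletionPort().
#include <windows.h>

// Role 1: create a brand-new, empty I/O completion port.
HANDLE CreateNewIoCompletionPort(DWORD concurrentThreads)
{
    return CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0,
                                  concurrentThreads);
}

// Role 2: associate an open device (e.g. a socket handle) with an
// existing port; completions for it are then queued to that port.
BOOL AssociateDeviceWithIoCompletionPort(HANDLE iocp, HANDLE device,
                                         ULONG_PTR completionKey)
{
    return CreateIoCompletionPort(device, iocp, completionKey, 0) == iocp;
}
```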

In this article, I listed implementations of a scalable network server using
various network I/O models. In the next article, I will do some load testing
on each server and compare the performance/scalability of each model.

1. Write Scalable Winsock Application using I/O Completion Port
2. Writing scalable server applications using I/O Completion Port
3. Book - Network Programming for Microsoft Windows
4. Network I/O system call implementation on Linux
5. Winsock Network I/O Model - Part I : Theory
6. Scalable I/O Model