Testing with Socket Proxies
Load balancing and fail-over have traditionally been implemented with hardware
techniques. A typical load balancer, which also serves as a failover mechanism,
works with a forwarding agent that resides on an IP address separate from
the actual service nodes. Client applications connect to the forwarding agent,
which determines the "best" service node to forward each connection to.
 --------       -----------        --------
|        |     |           |<---->| Node 1 |
| Client |<--->|   Load    |       --------
|        |     | Balancer  |       --------
|        |     |  (proxy)  |<---->| Node 2 |
 --------       -----------        --------
The forwarding agent often contains mechanisms not only to determine the load on the
service nodes but also to monitor their "health". An outage of one of the nodes causes
the forwarding agent to stop forwarding requests to that node. The service thus remains
"available" to the client even if one or more service nodes are down, making the service
"Highly Available" (or HA!). Forwarding agents are, however, very generic in nature: they
are generally not aware of the type of service being offered by the service nodes. Because
of this generic nature, the forwarding agent has no way of statefully failing over the affected
connections to a functional node. An outage of a node is therefore
not really transparent to the clients connected to it. Although the architecture makes
the service robust, client applications need to be able to re-connect to the service
to fully benefit from it.
Building applications that re-establish connectivity to their resources is often not easy. The application
needs to be aware of the types of errors on which such attempts must be made. Even a simple application
needs rather inelegant code to be "Highly Available". A moderately complex
application that holds a transactional context with such a service needs a mechanism to
reconstruct that context after a successful re-connection. The complexity of such an implementation
warrants a good deal of testing. (Of course, an XA application will need to do even more.) But due to the nature
of the problem, testing against a real fail-over can only be random: it is impossible to exercise fail-overs at
specific critical sections of the application routines with the real system.
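To make the shape of such code concrete, a re-connect wrapper might look like the sketch
below. The reconnect() helper and the operation being retried are stand-ins for the real
application's API, not part of any library.
// Hypothetical sketch of reconnect-on-failure logic. reconnect() and the
// Runnable being retried stand in for the application's real resource API.
void runWithReconnect(Runnable operation, int maxAttempts)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            operation.run();
            return;                      // success, we are done
        }
        catch (RuntimeException e)
        {
            if (attempt >= maxAttempts)
                throw e;                 // give up after the last attempt
            reconnect();                 // hypothetical: re-open the connection
        }
    }
}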
A socket proxy is a software mechanism
that can be used to simulate such failures on demand. Socket proxies are typically used in the test suites (JUnit tests) of the applications.
The rest of this article demonstrates the idea using a SocketTunnel implemented in Java.
Socket Tunnel
A very simple reference implementation is built on the Java platform and offers an API that can be used
by Java applications. The API of the socket tunnel is straightforward. The tunnel is constructed with the
socket parameters: the remote host name, the port on which the real service is running, and the local proxy port to be used for
testing.
SocketTunnel socketTunnel = new SocketTunnel(realHost, realPort, localProxyPort);
socketTunnel.start();
As expected, the proxy is started on invocation of start() in a background daemon thread and lives for only
one connection from the client. On accepting a client socket, the proxy simply forwards the data to the real service
through background data channels. To the client, the service appears to be offered by the proxy, much like in the
load-balancer architecture. The difference, however, is that the client application is in control of the socket proxy.
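For reference, a start() consistent with this behaviour could be as small as the following
sketch (assuming SocketTunnel implements Runnable with the run() method shown later in this
article):
// A minimal sketch of start(): run the proxy loop in a daemon thread
// so that the tunnel dies with the test JVM.
public void start()
{
    Thread tunnelThread = new Thread(this, "socket-tunnel");
    tunnelThread.setDaemon(true);
    tunnelThread.start();
}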
To simulate a condition where the service is interrupted by a node outage, the client invokes the
restart API, which takes the outage interval in seconds as a parameter. To simulate an outage
of 2 seconds, the client simply invokes -
// useful code
socketTunnel.restart(2);
// more useful code
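Internally, a plausible restart() tears down the live sockets, records the outage interval
for the sleep() call in run(), and starts a fresh proxy thread. This is only a sketch; the
outageSeconds field is an assumption, not necessarily part of the reference implementation.
// Hedged sketch of restart(): drop both sockets abruptly, then bring
// the proxy back up after the requested outage interval.
public void restart(int outageSeconds) throws IOException
{
    this.outageSeconds = outageSeconds;  // consumed by sleep() in run()
    socketFromClient.close();            // client sees a broken connection
    socketToServer.close();
    serverSocket.close();
    start();                             // new daemon thread re-opens the proxy
}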
The proxy also contains an API to simulate a complete outage; it simply stops the proxy service.
// useful code
socketTunnel.kill();
// more useful code
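A kill() along the same lines simply closes everything and never restarts; again, the
details here are assumed rather than taken from the sample implementation.
// Hedged sketch of kill(): shut the proxy down for good.
public void kill() throws IOException
{
    socketFromClient.close();
    socketToServer.close();
    serverSocket.close();
}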
JDBC Test Example
Let's say a typical bean persistence helper exposes an API to store different types of beans in a database.
public interface BeanPersistence
{
    void store(Bean1 bean);
    void store(Bean2 bean);
    void rollback();
    void commit();
    Bean1 retrieve(BeanParams params);
}
Clearly, the interface contains transactional APIs. Assume that the implementation has re-connect functionality in
all of its methods. To test the re-connect functionality during commit, the test suite simply restarts
the socket tunnel before invoking the commit method.
public class BeanPersistenceImplTest
{
    private SocketTunnel socketTunnel;
    private BeanPersistence beanPersistence;

    public void init()
    {
        socketTunnel = new SocketTunnel(realTestHost,
                                        realPort, proxyLocalPort);
        socketTunnel.start();
        beanPersistence = new BeanPersistenceImpl("localhost",
                                                  proxyLocalPort);
    }

    // Lots of useful tests
    // ...

    public void reconnectOnCommitTest()
    {
        beanPersistence.store(bean1);
        beanPersistence.store(bean2);
        // some more work
        socketTunnel.restart(1);
        beanPersistence.commit();
        assertTrue(checkBeanUpdates(beanPersistence.retrieve(beanParams1)));
    }
}
The test suite initializes every test with a socket tunnel. The tunnel is started on a proxy port to which
the test instance connects. To test the commit method, the test instance stores a couple of
bean instances, after which the proxy service is made temporarily unavailable before the commit is issued.
If commit is implemented correctly and is tolerant to the downtime, the bean instances should have been
re-stored in the database after the connection was re-established, and the retrieval of the data should
match the test's expectations.
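For illustration, a downtime-tolerant commit might buffer its writes and replay them over a
fresh connection when the first attempt fails. The reconnect() helper and the pendingWrites
buffer in the sketch below are assumptions made for this article, not part of the sample
implementation.
// Hedged sketch of a downtime-tolerant commit over JDBC.
public void commit() throws SQLException
{
    try
    {
        connection.commit();
    }
    catch (SQLException e)
    {
        connection = reconnect();           // assumed helper: re-open via the proxy port
        for (String sql : pendingWrites)    // assumed buffer of this transaction's writes
        {
            connection.createStatement().executeUpdate(sql);
        }
        connection.commit();
    }
}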
Implementation
Building the socket proxy with the Java platform API is really simple. The code excerpt below is from a simple
implementation of the concept.
public void run()
{
    try
    {
        dt1 = new DataTunnel("ToReal:");
        dt2 = new DataTunnel("ToClient:");
        // Wait out any outage interval requested through restart()
        sleep();
        // Bind the proxy port and connect through to the real service
        serverSocket = new ServerSocket(localPort);
        socketToServer = new Socket(realHost, realPort);
        socketFromClient = serverSocket.accept();
        // Pipe client -> real server
        InputStream clientInputStream = socketFromClient.getInputStream();
        OutputStream serverOutputStream = socketToServer.getOutputStream();
        dt1.pumpData(clientInputStream, serverOutputStream);
        // Pipe real server -> client
        InputStream serverInputStream = socketToServer.getInputStream();
        OutputStream clientOutputStream = socketFromClient.getOutputStream();
        dt2.pumpData(serverInputStream, clientOutputStream);
        // Only one client connection is served per start
        serverSocket.close();
    }
    catch (Exception e)
    {
        Log.log("Main tunnel crashed!");
        Log.log(e);
    }
}
The code binds the server socket to the proxy port and opens a socket to the real server.
Input and output streams are obtained from the client and the (real) server sockets. DataTunnels
are rudimentary pipes that read data from an input stream and write it to an output stream; each
DataTunnel starts a background daemon thread in which the data is pumped between the sockets.
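A DataTunnel along those lines could be sketched as follows; this is a minimal version
written for this article, and the sample implementation may differ in detail.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Minimal DataTunnel sketch: copy bytes from an InputStream to an
// OutputStream on a background daemon thread until either side closes.
class DataTunnel
{
    private final String name;

    DataTunnel(String name)
    {
        this.name = name;
    }

    void pumpData(InputStream in, OutputStream out)
    {
        Thread pump = new Thread(() ->
        {
            byte[] buffer = new byte[4096];
            try
            {
                int count;
                while ((count = in.read(buffer)) != -1)
                {
                    out.write(buffer, 0, count);
                    out.flush();
                }
            }
            catch (IOException e)
            {
                // Socket closed or outage simulated; let the thread end.
            }
        }, name);
        pump.setDaemon(true);
        pump.start();
    }
}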
Limitations
The socket tunnel works with services that communicate with the client over a single socket. It will not work
with services that use more than one socket, such as FTP servers. (FTP servers use separate control and data channels.)
Conclusion
Socket proxies provide a test harness for determining application behaviour during temporary outages of dependent
resources. Since they can be created and controlled through software, unit testing complex application routines
that provide fault tolerance against resource outages becomes feasible.
Resources
- Archive containing the sample implementation.