Multithreaded Client in Java

This is part 1 of my project in CS6650 - Building Scalable Distributed Systems.

The goal of this part is to implement an efficient multithreaded client that can later be used to load-test a distributed server.

  • ClientES1: all client features implemented.
  • ClientES2: ClientES1 + implementation and visualization of additional latency and performance measurements.
  • ClientES3: ClientES2 + self-directed exploration with benchmarking.

The scenario

The project's goal is to build a lift ticket reader system for a ski resort chain. The system should be able to handle multiple requests concurrently and store all ski lift data as a basis for data analysis.

In this part, I am building a multithreaded client that generates requests and sends lift data to a server hosted on AWS, currently a simple Tomcat setup.

Requests are sent in 3 phases. Each phase spawns a number of threads (ranging from 32 to 256), and each thread is assigned to send multiple POST requests to the server. The detailed requirements for phases and threads can be found here.

The design

  • Phases are signaled to start using CountDownLatches.
  • Each Phase stores
    • an ExecutorService to manage and schedule threads,
    • a CompletionService built from that ExecutorService to collect each thread's stats when it completes,
    • a list of client connections for threads to reuse.
  • Each worker thread implements Callable<Stats> and
    • sends POST requests via an OkHttpClient (OkHttp 3),
    • updates performance stats (total failed/successful requests, latency) and returns the result to the CompletionService as a Future.
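The design above can be sketched as follows. This is a minimal, self-contained illustration: the class names (PhaseSketch, Stats, Worker) are hypothetical stand-ins for the project's actual classes, and the network call is simulated where a real OkHttpClient POST would go.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PhaseSketch {

    // Per-thread result: counts of successful and failed requests.
    static class Stats {
        final int success;
        final int failed;
        Stats(int success, int failed) { this.success = success; this.failed = failed; }
    }

    // Each worker waits on the latch, then sends its share of requests.
    static class Worker implements Callable<Stats> {
        private final CountDownLatch startSignal;
        private final int numRequests;
        Worker(CountDownLatch startSignal, int numRequests) {
            this.startSignal = startSignal;
            this.numRequests = numRequests;
        }
        @Override
        public Stats call() throws InterruptedException {
            startSignal.await();  // block until the phase is signaled to start
            int ok = 0, bad = 0;
            for (int i = 0; i < numRequests; i++) {
                // A real worker would issue a POST here (e.g. via OkHttpClient)
                // and record the per-request latency; we simulate success.
                ok++;
            }
            return new Stats(ok, bad);
        }
    }

    // Run one phase: submit all workers, release the latch, collect results.
    public static int runPhase(int numThreads, int requestsPerThread) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        CompletionService<Stats> completion = new ExecutorCompletionService<>(pool);
        CountDownLatch startSignal = new CountDownLatch(1);

        for (int i = 0; i < numThreads; i++) {
            completion.submit(new Worker(startSignal, requestsPerThread));
        }
        startSignal.countDown();  // signal the phase to start

        int totalSuccess = 0;
        for (int i = 0; i < numThreads; i++) {
            // take() yields each Future as its worker completes.
            totalSuccess += completion.take().get().success;
        }
        pool.shutdown();
        return totalSuccess;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runPhase(4, 10));  // 4 threads x 10 requests -> 40
    }
}
```

Using a CompletionService rather than polling individual Futures lets the phase consume results in completion order, so aggregation starts as soon as the first worker finishes.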

Performance metrics

Program performance:

  • Total wall time
  • Throughput (requests per second)

Response performance:

  • Total number of successful and failed requests
  • Mean, median, 99th percentile, and max latency of POST requests
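As a sketch of how these summary statistics can be computed from recorded latencies, here is a hypothetical helper (not the project's actual Stats class), using the nearest-rank method for percentiles:

```java
import java.util.Arrays;

public class LatencyStats {

    public static double mean(long[] latencies) {
        long sum = 0;
        for (long l : latencies) sum += l;
        return (double) sum / latencies.length;
    }

    // Nearest-rank percentile on a sorted copy; p in (0, 100].
    // percentile(l, 50) gives the median, percentile(l, 100) the max.
    public static long percentile(long[] latencies, double p) {
        long[] sorted = latencies.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        long[] latenciesMs = {12, 7, 30, 9, 15, 8, 22, 11, 10, 14};
        System.out.println("mean   = " + mean(latenciesMs));            // 13.8
        System.out.println("median = " + percentile(latenciesMs, 50));  // 11
        System.out.println("p99    = " + percentile(latenciesMs, 99));  // 30
        System.out.println("max    = " + percentile(latenciesMs, 100)); // 30
    }
}
```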

Performance results

Each run targets 180,000 POST requests in total. Tests are run with thread counts of 32, 64, 128, and 256.

1. Throughput

Expected throughput: (maxNumThreads / latency) * 0.6

  • By Little's Law, a system holding maxNumThreads concurrent requests with mean latency `latency` has throughput = maxNumThreads / latency.
  • Each phase runs a different number of threads, so averaged over the whole run, the program operates at about 60% of maxNumThreads.
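The expectation above reduces to a one-line calculation. The thread count and latency below are illustrative numbers, not measured results from this project:

```java
public class ExpectedThroughput {

    // Little's Law: throughput (req/s) = average concurrency / mean latency (s).
    // The run averages about 60% of maxNumThreads across its phases.
    public static double expected(int maxNumThreads, double meanLatencySeconds) {
        return 0.6 * maxNumThreads / meanLatencySeconds;
    }

    public static void main(String[] args) {
        // Illustrative: 256 threads at peak, 50 ms mean POST latency
        // -> roughly 3,072 requests per second expected.
        System.out.println(expected(256, 0.05));
    }
}
```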

Actual performance:

2. Wall time

3. Response latency

Requests are grouped by the whole second (since program start) in which they occur, and the average latency of each group is plotted.
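The per-second grouping can be sketched as below. This is a hypothetical helper (LatencyBuckets is not a class from the project); it buckets each request by its elapsed second and averages each bucket:

```java
import java.util.Map;
import java.util.TreeMap;

public class LatencyBuckets {

    // timestampsMs[i]: request send time relative to program start (ms)
    // latenciesMs[i]:  that request's measured latency (ms)
    // Returns: second since start -> average latency of requests sent in it.
    public static Map<Long, Double> averageBySecond(long[] timestampsMs, long[] latenciesMs) {
        Map<Long, long[]> acc = new TreeMap<>();  // second -> {sum, count}
        for (int i = 0; i < timestampsMs.length; i++) {
            long second = timestampsMs[i] / 1000;
            long[] entry = acc.computeIfAbsent(second, s -> new long[2]);
            entry[0] += latenciesMs[i];
            entry[1]++;
        }
        Map<Long, Double> averages = new TreeMap<>();
        acc.forEach((sec, e) -> averages.put(sec, (double) e[0] / e[1]));
        return averages;
    }

    public static void main(String[] args) {
        long[] ts  = {100, 900, 1500, 1800, 2100};
        long[] lat = {10,  20,  30,   50,   40};
        // second 0 averages 10 and 20; second 1 averages 30 and 50; second 2 has only 40
        System.out.println(averageBySecond(ts, lat));  // {0=15.0, 1=40.0, 2=40.0}
    }
}
```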