August 1st 2021
Deployment is a big step forward. This project will be deployed on an AWS EC2 t2.micro instance.
I’ve discussed what a proxy is in the first post of this series, but to review briefly: it serves static content as well as content generated by the services behind it. With deployment and scalability in mind, we will be deploying the app on a minimum of three servers: one for the proxy, one for the service, and one for the service’s database.
An EC2 instance is a virtual server. The simplest way to deploy onto the instance is to SSH into the server via the terminal, pull the project repo, `npm install`, and `npm start`. To keep the instance running in case it crashes for any reason, I used pm2 instead.
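The deploy steps above can be sketched as a short shell session. The key path, hostname, and repo URL below are placeholders, not the project's actual values:

```shell
# SSH into the instance (key path and public DNS are placeholders)
ssh -i ~/.ssh/my-key.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

# On the instance: pull the repo and install dependencies
git clone https://github.com/example/project.git
cd project
npm install

# Keep the process alive across crashes with pm2 instead of npm start
sudo npm install -g pm2
pm2 start app.js
```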
These initial stress tests were run from my local machine with k6, and tracked with New Relic.
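As a rough idea of the load-test setup, a k6 script along these lines drives each scenario. It runs under the k6 runtime (`k6 run script.js`), not plain Node, and the URL and VU count here are illustrative placeholders, not the exact test configuration:

```javascript
// Minimal k6 load-test sketch. The target URL and VU count are placeholders.
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  // Virtual users; the achieved RPS also depends on response time.
  vus: 100,
  duration: '30s',
};

export default function () {
  const res = http.get('http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com/instructors/1');
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```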
Instructors GET requests
RPS | % Dropped Requests |
---|---|
1 | 0.00% |
10 | 0.00% |
100 | 2.05% |
1000 | 46.12% |
Instructors POST requests
RPS | % Dropped Requests |
---|---|
1 | 0.00% |
10 | 0.00% |
100 | 0.20% |
1000 | 44.17% |
Currently, as peers work on their own services, the proxy is tested while serving only the instructor service.
Proxy GET requests
RPS | % Dropped Requests |
---|---|
1 | 0.00% |
10 | 0.00% |
100 | 0.07% |
1000 | 20.28% |
At 1000 RPS, the dropped requests rise sharply. It is possible that the server times out under the load and drops many requests at once.
When creating a new EC2 instance, I chose t2.micro. Inbound security rules allow HTTP access from anywhere on port 80 (and HTTPS on port 443). This also allows the proxy to make requests to the service instance.
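For those who prefer the AWS CLI to the console, an equivalent inbound rule can be added like this (the security group ID is a placeholder):

```shell
# Allow HTTP from anywhere on port 80 for the instance's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
```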
No special configuration was needed when deploying the proxy or the instructors service: `npm install`, then `pm2 start app.js` (plain `npm start` works just as well).

Other notes: record the private IPs of the service instances, as we need to give the database servers these IPs to grant access to the data.
Deploying the database is simple, as we only need to set up the database and its schema (and dummy data if needed).
In practice, this meant installing Postgres on the instance and loading the project’s schema (and dummy data) into the database.
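That setup boils down to a few commands. The database and file names below are placeholders (Ubuntu package manager shown; Amazon Linux uses yum):

```shell
# Install Postgres on the instance
sudo apt-get install -y postgresql

# Create the database, then load the schema and dummy data
sudo -u postgres createdb instructors
sudo -u postgres psql -d instructors -f schema.sql
sudo -u postgres psql -d instructors -f seed.sql
```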
The primary configuration work concerns security. The security rules for the DB instance need to allow connections from the service instance, which makes requests directly to the database.
One of the files that needs to be updated is `pg_hba.conf`: update the `# IPv4 local connections` and `# IPv6 local connections` entries. To apply the changes, the Postgres server needs to be restarted: `sudo service postgresql stop`, then `sudo service postgresql start`.
I also opened the database server’s PostgreSQL port (5432 by default) to the service instance.
Somehow, the environment variables from my `.env` file were not being read properly. As a workaround, I moved the variables into `.bash_profile` from root. For the changes to apply, `source .bash_profile` must be run first.
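The workaround amounts to appending exports like these to `.bash_profile` — the variable names and values here are hypothetical; the real ones depend on how `app.js` reads its configuration:

```shell
# Hypothetical DB connection settings; substitute your own
export PGHOST=172.31.20.5
export PGUSER=app_user
export PGDATABASE=instructors
```

Run `source .bash_profile` once (or log in again) before starting the app so the exports are visible to the pm2 process.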
Instructors GET requests
RPS | Latency | % Dropped Requests |
---|---|---|
1 | 29ms | 0.00% |
10 | 29ms | 0.00% |
100 | 27ms | 0.00% |
1000 | 6022ms | 71.14% |
Instructors POST requests
RPS | Latency | % Dropped Requests |
---|---|---|
1 | 16ms | 0.00% |
10 | 15ms | 0.00% |
100 | 14ms | 0.00% |
1000 | 2848ms | 41.04% |
Proxy GET requests
RPS | Latency | % Dropped Requests |
---|---|---|
1 | 28ms | 0.00% |
10 | 26ms | 0.00% |
100 | 24ms | 0.00% |
1000 | 25ms | 0.00% |
As you may note, the app has a lot of trouble receiving 1000 RPS, but generally has no problem up to 100 RPS.
The proxy requests show no drops at all, primarily because the proxy does not need to make any backend requests itself. It only needs to fetch the `bundle.js` served by the service.
When programming with React, the component lifecycle allows an app to first load with static content only, then quickly update that content with backend data via state updates. This benefits the proxy, as it can serve the “dumb” `bundle.js` without needing to wait for data.
Next, I will be incrementally improving the performance of this app through horizontal scaling. Look forward to server-side rendering, caching, and load balancing.