Badger Academy endeavours have slowed down slightly; managing my last year of university while working part-time is becoming a sizeable challenge. Much more so to come once I arrive in the real world!
In reference to Albert's post, I have personally found the same thing: working at a startup while at university is a significant contribution towards my own academic performance! The trials and tribulations faced at Badger Academy have prepared me for kata-like performance at university. There's so much I've learned at Badger Academy that isn't taught at university or put into practice! (Sadly they don't teach git workflows at university.)
I highly recommend working for a startup and gaining experience in preparation for post-grad life. My dissertation already has its foundations laid, thanks to the concepts taught at Red Badger!
The architecture of a single-page application talking to the backend Rails API via AJAX requests puts the emphasis on data flow. Without the option of just installing a Ruby gem that does most of the work for you, we are forced to implement the same methodology and network best practices ourselves (as demonstrated before with Nginx).
The process of authentication leading to API data fetching is similar to a TCP three-way handshake.
In Badger-Time, the process occurs as follows:
- On any route, the clientside router checks whether a generated authentication token is stored in HTML5 LocalStorage (a persisted datastore in the browser with its own API)
- If no token is found, the router redirects the user to the /login route and renders the React.js component
- The user logs in with their pre-registered Badger-Time details.
- The user's credentials are verified by the backend API, and a generated authentication token is sent over (made to expire after a day unless a refresh call is made by the user)
- Once the generated authentication token is received, it is stored in HTML5 LocalStorage.
- Every subsequent request includes the authentication token in its request headers.
- The API checks that the request header carries a valid authentication token and replies after executing the body of the request.
(I take that back, that was more like seven steps rather than three.)
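The server side of the handshake above can be sketched in plain Ruby. This is an illustrative sketch, not the actual Badger-Time code: the class and method names are assumptions, and an in-memory hash stands in for the sessions table.

```ruby
require "securerandom"

# Sketch of the token lifecycle: issue on login, verify on every request,
# refresh to extend the one-day expiry. Names are illustrative only.
class TokenStore
  ONE_DAY = 24 * 60 * 60
  Session = Struct.new(:token, :expires_at)

  def initialize
    @sessions = {} # token => Session (the real app uses a sessions table)
  end

  # After credentials check out, issue a token that expires in a day.
  def issue
    token = SecureRandom.hex(32)
    @sessions[token] = Session.new(token, Time.now + ONE_DAY)
    token
  end

  # Every later request's header token is checked for a live session.
  def valid?(token)
    session = @sessions[token]
    !session.nil? && Time.now < session.expires_at
  end

  # A refresh call from the user pushes the expiry back another day.
  def refresh(token)
    session = @sessions[token]
    session.expires_at = Time.now + ONE_DAY if session
  end
end
```

The client never sees anything but the opaque token string; expiry is enforced entirely on the API side.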
Code-wise, the decisions we made in implementing the above process were:
- NPM's superstore-sync module to provide an API for setting and getting the auth token from HTML5 LocalStorage.
- A modification to the API helper on the frontend to send the token in all request headers when present.
- A before filter/action in the ApplicationController to verify that the request header carries a token matching a row in the sessions table; each token also has an expiry value.
- A login action that verifies the credentials against the BCrypt-encrypted password and generates a token value from a hash.
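The login action's flow can be sketched roughly like this. All names here are hypothetical; the real app uses BCrypt via the bcrypt gem, and Digest::SHA256 with a salt merely stands in so the sketch has no gem dependencies (plain SHA-256 is not a safe password hash for production).

```ruby
require "digest"
require "securerandom"

# Sketch of credential verification plus token generation (hypothetical
# names; BCrypt does the real hashing in Badger-Time).
class Authenticator
  User = Struct.new(:email, :salt, :password_digest)

  def initialize
    @users = {} # stand-in for the users table
  end

  def register(email, password)
    salt = SecureRandom.hex(16)
    @users[email] = User.new(email, salt, digest(salt, password))
  end

  # Returns a fresh auth token when the credentials match, nil otherwise.
  def login(email, password)
    user = @users[email]
    return nil unless user && user.password_digest == digest(user.salt, password)
    SecureRandom.hex(32) # the generated token sent back to the client
  end

  private

  def digest(salt, password)
    Digest::SHA256.hexdigest(salt + password)
  end
end
```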
The FreeAgent tokens are stored in the process environment variables and read directly instead!
So for now, the FreeAgent account is hardcoded.
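Reading a token from the environment is a one-liner; the variable name below is an assumption, not the real Badger-Time configuration key.

```ruby
# Illustrative only: the variable name is assumed. In production the host
# environment sets it; the fallback here just keeps the sketch runnable.
ENV["FREEAGENT_OAUTH_TOKEN"] ||= "example-oauth-token"
freeagent_token = ENV.fetch("FREEAGENT_OAUTH_TOKEN")
```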
FreeAgent OAuth tokens are refreshed, and data is pulled down, on a recurring clockwork task to keep the Rails models updated! Asynchronously too, thanks to the Sidekiq and Redis combination! No interruptions at all: deployment and usage enjoy continuous activity!
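The shape of that recurring job looks roughly like the sketch below. The class name and schedule are assumptions; in the real app the worker would include Sidekiq::Worker and be enqueued onto Redis from a clockwork schedule, whereas here the enqueue is simulated inline so the sketch runs without the sidekiq and clockwork gems.

```ruby
# Hypothetical worker sketch for the recurring FreeAgent sync.
class SyncFreeAgentWorker
  # Sidekiq would push this onto a Redis queue; we run it inline here.
  def self.perform_async(*args)
    new.perform(*args)
  end

  def perform
    refresh_oauth_token! # keep the OAuth token fresh before pulling data
    pull_timeslips!      # then update the Rails models from FreeAgent
    true
  end

  private

  def refresh_oauth_token!
    # POST the stored refresh token to FreeAgent's token endpoint...
  end

  def pull_timeslips!
    # ...then fetch timeslips and upsert them into the local models.
  end
end

# A clockwork schedule (clock.rb) would then look roughly like:
#   every(1.hour, "freeagent.sync") { SyncFreeAgentWorker.perform_async }
```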
There was also the decision to diff our remote Timeslips (FreeAgent populates this model) against our Days model on every sync.
This was actually quite easy, algorithm-wise! We assume that all Timeslips are up to date, so the Days model and its burnt-hours attributes get overwritten. We skip the overwrite if the burnt hours are already up to date, determined by comparing updated-at or burnt-hours values.
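That diff can be sketched as follows, with plain structs standing in for the Rails models (the struct and method names are illustrative, not the real schema). Timeslips are treated as the source of truth, and a Day is only overwritten when it is stale.

```ruby
# Stand-ins for the Rails models; the real ones are ActiveRecord classes.
Timeslip = Struct.new(:date, :hours, :updated_at)
Day      = Struct.new(:date, :burnt_hours, :updated_at)

def sync_days!(timeslips, days_by_date)
  timeslips.each do |slip|
    day = days_by_date[slip.date]
    next unless day
    # Skip the write when the Day is already up to date: either the burnt
    # hours already match, or the Day was touched more recently than the
    # remote timeslip (the updated-at comparison).
    next if day.burnt_hours == slip.hours || day.updated_at >= slip.updated_at

    day.burnt_hours = slip.hours
    day.updated_at  = slip.updated_at
  end
  days_by_date
end
```

Skipping no-op writes keeps the sync idempotent, so running it on every clockwork tick is harmless.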
Our BDD process is finally complete; I'd like to mention that again!
Another trick we set up, DevOps-wise, was to start the PhantomJS debug server in a Docker container and then run the Cucumber tests; we now have console session logs stored! We can view those logs through the PhantomJS web UI!