History & First Architecture
This web application was originally written to run a Predictions Game with colleagues for the 2010 World Cup. The back end was coded in Java using the Struts 1.2 MVC Framework, and the user interface used table-less HTML (i.e. "div" elements for layout instead of "table" components) inside web pages crafted with JSP and JSTL. These JSP pages have numerous custom tags for iterating over data lists. These iterators minimise HTML and enable page lists to be flexible in the number of items each list contains. Some of these iterators even generate names for the page elements that contain lists so that, for example, data entry forms can submit lists of data for processing rather than simply pre-defined static page/form elements.

The pages are laid out using Apache Tiles so that the header, footer and menu sections are never duplicated, and all pages are styled using a single CSS stylesheet, with maximum effort made to include no in-line styles. This ensures a single stylesheet change can completely re-design the whole look and feel of the site with no changes to any HTML required (assuming, of course, that all the correct styling classes are included in the CSS)! The build tool was initially Ant, and the persistence layer was a MySQL database accessed using JDBC from a Java "dao" layer. JUnit 3 was used for unit testing the various calculations.
Initialisation data is input using SQL scripts, and at the start of each competition it is necessary to load data into the teams and fixtures tables. This takes care of the league stage. The fixture data for the knock-out stages can be set up at the start as well, but until the end of the league stage we won't know which teams will play in each of those fixtures, only the date, time and location. However, we will have a reference to help us decide which team should be populated into each of these fixtures; for example, the Winner of Group A may play the Runner-Up of Group C in fixture 1 of the knock-out stage. At this stage in the development, these had to be determined manually and input into the knock-out fixtures by running update scripts by hand.
For those with a keen interest in SQL and/or data, click here to read all about it!
First Business Logic Change
The web app was used again two years later to run a game for the UEFA 2012 European Championship. At this point it was discovered that the UEFA rules are slightly different from FIFA's when it comes to ordering teams on the same points in the group stage leagues. This required code changes to implement a second ordering system, whereby teams on the same points are placed into a "mini-league" and only the games between them are used to determine their positions. The code to order teams in such a way was already present in a function in the codebase, and calling that same section of code to order the sub-league was more sensible than duplicating it, so this part of the code was pulled out into its own function with the intention of refactoring it to call itself recursively whenever a sub-league needed to be ordered.
To ensure a good result from this quite complex refactor, a TDD approach was taken. Suitable tests were crafted with simple data to be ordered, and then more complex tests were built, first with 2 and then with 3 teams on the same number of points. The recursive method was then tested and refactored until all the tests were passing. Once this was achieved, the function could be re-introduced into the original code, which also needed refactoring to fit!
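To illustrate the shape of the recursive idea, here is a minimal sketch. The real class and method names aren't shown in this post, so Team, Result, points() and goalDifference() below are hypothetical stand-ins, and the goal-difference fall-back for a fully tied mini-league is an assumption about the tie-breaking rules.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

interface Team {
    int points(List<Result> results);
    int goalDifference(List<Result> results);
}

interface Result {
    boolean involvesOnly(List<Team> teams);
}

public class LeagueOrderer {

    /** Orders teams by points; ties are resolved by a recursive mini-league. */
    public List<Team> order(List<Team> teams, List<Result> results) {
        // Group the teams by points gained, highest points first
        Map<Integer, List<Team>> byPoints = teams.stream().collect(
                Collectors.groupingBy(t -> t.points(results),
                        () -> new TreeMap<>(Comparator.reverseOrder()),
                        Collectors.toList()));

        List<Team> ordered = new ArrayList<>();
        for (List<Team> tied : byPoints.values()) {
            if (tied.size() == 1) {
                ordered.addAll(tied);
                continue;
            }
            // Only the games between the tied teams count in the mini-league
            List<Result> headToHead = results.stream()
                    .filter(r -> r.involvesOnly(tied))
                    .collect(Collectors.toList());
            if (headToHead.size() == results.size()) {
                // Recursing would change nothing (everyone is still tied),
                // so fall back to goal difference to break the tie
                tied.sort(Comparator.comparingInt(
                        (Team t) -> t.goalDifference(results)).reversed());
                ordered.addAll(tied);
            } else {
                // The recursive call: order the mini-league the same way
                ordered.addAll(order(tied, headToHead));
            }
        }
        return ordered;
    }
}
```

The base case guard is what makes the recursion safe: if the head-to-head results are the whole result set, another recursive call could never separate the teams.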
The Match Generator
The Match Generator was a new automated function, run at the end of each stage, to populate the appropriate teams into the fixtures of the next knock-out round. Once the league stage is complete, the "match generator" is run by an admin, and this processes the first and second placed teams from each league into the first knock-out phase games. This process uses a simple mapping in a DB table to model the 'Winner A' v 'Runner-Up C' formula, for example:

insert into match_generator (stageid, matchid, homeaway, groupid, grouppos) values (2, 37, 1, 1, 1);
insert into match_generator (stageid, matchid, homeaway, groupid, grouppos) values (2, 37, 2, 3, 2);
This example script simply states that in stage 2 (the last 16), for matchid 37, the home team is the team finishing in group 1, position 1, and the away team is the team finishing in group 3, position 2. This data is entered for all the matches of the last-16 stage, and again for each of the following stages, so that the teams that play each other in every stage are populated into the fixture table by a simple admin function at the end of the preceding stage. This means that once all the initial data scripts have been run in, the only work necessary is to enter the match results on a regular basis and click the "Calculate Points" button to keep the players' points and the league tables up to date. Then, at the end of every stage, simply clicking the "Roll Forward" button populates the following knock-out stage fixtures with the appropriate teams.
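In Java terms, the "Roll Forward" step boils down to something like the following sketch. The MatchGeneratorRow class, the dao interfaces and all the names here are hypothetical stand-ins for the real classes, but the logic follows the mapping table above.

```java
import java.util.List;

// Mirrors a row of the match_generator mapping table
class MatchGeneratorRow {
    final int stageId, matchId, homeAway, groupId, groupPos;
    MatchGeneratorRow(int stageId, int matchId, int homeAway,
                      int groupId, int groupPos) {
        this.stageId = stageId;
        this.matchId = matchId;
        this.homeAway = homeAway;
        this.groupId = groupId;
        this.groupPos = groupPos;
    }
}

interface MatchGeneratorDao { List<MatchGeneratorRow> findByStage(int stageId); }
interface LeagueTableDao { int teamAt(int groupId, int groupPos); }
interface FixtureDao {
    void setHomeTeam(int matchId, int teamId);
    void setAwayTeam(int matchId, int teamId);
}

public class MatchGenerator {
    private final MatchGeneratorDao mappingDao;
    private final LeagueTableDao leagueTableDao;
    private final FixtureDao fixtureDao;

    public MatchGenerator(MatchGeneratorDao mappingDao,
                          LeagueTableDao leagueTableDao,
                          FixtureDao fixtureDao) {
        this.mappingDao = mappingDao;
        this.leagueTableDao = leagueTableDao;
        this.fixtureDao = fixtureDao;
    }

    /** "Roll Forward": populate each fixture of a stage from the final group tables. */
    public void rollForward(int stageId) {
        for (MatchGeneratorRow row : mappingDao.findByStage(stageId)) {
            // e.g. groupid=1, grouppos=1 -> the winner of the first group
            int teamId = leagueTableDao.teamAt(row.groupId, row.groupPos);
            if (row.homeAway == 1) {
                fixtureDao.setHomeTeam(row.matchId, teamId);
            } else {
                fixtureDao.setAwayTeam(row.matchId, teamId);
            }
        }
    }
}
```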
Re-factoring the web layer to Spring 4 MVC and adding Maven
During 2013-2015, a major re-work was undertaken to migrate the code-base from Struts 1.2 to the more popular Spring 4 MVC Framework. The existing use of Tiles was maintained but updated to Tiles 3. Once Spring MVC was implemented, the app was then upgraded to use the Maven build and dependency management tool, and also upgraded from JUnit 3 to JUnit 4.

Fully embracing TDD
The next issue to tackle was widening the test coverage; however, writing automated unit tests for some essential "calculation" classes was difficult due to tight coupling with the database layer. So all affected classes were refactored, removing the database dependencies and re-writing using a TDD approach. This meant starting with the tests, creating input and expected output data in the form of objects or lists of objects, then re-writing the calculation classes to satisfy the tests. The final stages were to refactor the data access components to provide and accept data in the required forms, and finally to stitch everything back together. The DB access layer at this stage was based on Spring's @Repository classes and JdbcTemplate, with update, query and queryForObject method calls. Most of the query calls used lambdas to create a List of DB objects from the returned ResultSet, as this very much minimised boilerplate code. The result is an architecture that facilitates TDD and is now easy to enhance, test and maintain.
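As a flavour of that JdbcTemplate style, here is a minimal sketch; the Fixture class and the table/column names are made up for illustration, but the update/query calls and the lambda RowMapper are the standard Spring API.

```java
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

// Hypothetical DB object built from each row of the resultset
class Fixture {
    final int matchId, homeTeamId, awayTeamId;
    Fixture(int matchId, int homeTeamId, int awayTeamId) {
        this.matchId = matchId;
        this.homeTeamId = homeTeamId;
        this.awayTeamId = awayTeamId;
    }
}

@Repository
public class FixtureRepository {

    private final JdbcTemplate jdbcTemplate;

    @Autowired
    public FixtureRepository(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /** A lambda RowMapper builds the List directly from the ResultSet. */
    public List<Fixture> findByStage(int stageId) {
        return jdbcTemplate.query(
                "select matchid, hometeamid, awayteamid from fixture where stageid = ?",
                (rs, rowNum) -> new Fixture(
                        rs.getInt("matchid"),
                        rs.getInt("hometeamid"),
                        rs.getInt("awayteamid")),
                stageId);
    }

    public int updateScore(int matchId, int homeGoals, int awayGoals) {
        return jdbcTemplate.update(
                "update fixture set homegoals = ?, awaygoals = ? where matchid = ?",
                homeGoals, awayGoals, matchId);
    }
}
```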
IDE and Hosting Service
Development at this time was using the Eclipse (Mars) IDE, run locally using the Java EE perspective with Tomcat 7 as the server. Production deployment was into Amazon's AWS service using Elastic Beanstalk.

Extending the Match Generator for UEFA third placed teams
During Euro 2016 it became time to tackle a major limitation of the "match generator" function. With 32 teams in the World Cup, the competition breaks down to 8 leagues of 4, and hence the top two teams in those 8 leagues neatly give us the 16 we need to start the knock-out stages. UEFA European Championships typically have only 24 teams, however, giving only 6 leagues of 4, so the top two teams in each league provide only 12 teams; we need a further 4 to make up the 16 required. So, when the generator was run for Euro 2016, I had to manually update the DB with the best 4 of the 6 third placed teams in order to progress through the knock-out stages.
Upgrading the "match generator" to handle the best 4 of 6 third placed teams was quite tricky as it is in fact a bit of a process. Let me explain. It is easy enough to pick the best 4 of the 6 third placed teams, (using the ubiquitous league ordering function as mentioned earlier), but which fixture these 4 teams gets inserted into is more difficult to determine. Of course Uefa can't say in advance of the matches that the best third place team will play the winner of Group C, because that team may have already played in Group C, so the methodology is necessarily a little more complex!
How to determine who the best third place teams play
First, we need to determine the top 4 of the 6 third placed teams. This is easy enough, using our league ordering function. Next, we make a "code" from the group letters of the groups these teams were in, ordered alphabetically. For example, in 2020 this code was "ACDF", because the best third placed teams were Portugal, Czech Republic, Switzerland and Ukraine, and these 4 teams came from Groups F, D, A and C respectively. Now that we have our code from the third placed teams, "ACDF", we can refer to the look-up table below and find the row that corresponds to it, giving us the opponents for these teams, which in this case, from the table, is F3, D3, C3 and A3. That gives us the missing fixtures: B1 v F3, C1 v D3, E1 v C3 and F1 v A3.
3rd-Place Teams | B1 plays | C1 plays | E1 plays | F1 plays |
A, B, C, D | A3 | D3 | B3 | C3 |
A, B, C, E | A3 | E3 | B3 | C3 |
A, B, C, F | A3 | F3 | B3 | C3 |
A, B, D, E | D3 | E3 | A3 | B3 |
A, B, D, F | D3 | F3 | A3 | B3 |
A, B, E, F | E3 | F3 | B3 | A3 |
A, C, D, E | E3 | D3 | C3 | A3 |
A, C, D, F | F3 | D3 | C3 | A3 |
A, C, E, F | E3 | F3 | C3 | A3 |
A, D, E, F | E3 | F3 | D3 | A3 |
B, C, D, E | E3 | D3 | B3 | C3 |
B, C, D, F | F3 | D3 | C3 | B3 |
B, C, E, F | F3 | E3 | C3 | B3 |
B, D, E, F | F3 | E3 | D3 | B3 |
C, D, E, F | F3 | E3 | D3 | C3 |
Coding this up was quite interesting, once again using TDD to define a number of test cases and then coding up the solution until all the tests were passing.
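The heart of the solution can be sketched as a simple look-up. In the real app the table above lives in the database; here it is inlined as a Map for illustration, with only a few of the 15 rows shown, and the class and method names are hypothetical.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ThirdPlaceResolver {

    // code -> the opponents of B1, C1, E1 and F1, in that order
    private static final Map<String, List<String>> LOOKUP = Map.of(
            "ABCD", List.of("A3", "D3", "B3", "C3"),
            "ACDF", List.of("F3", "D3", "C3", "A3"),
            // ... the remaining 12 rows of the table go here
            "CDEF", List.of("F3", "E3", "D3", "C3"));

    /**
     * @param groupsOfBest4 the group letters of the best 4 third placed
     *                      teams, e.g. ["F", "D", "A", "C"] in 2020
     * @return the opponents for B1, C1, E1 and F1
     */
    public List<String> opponents(List<String> groupsOfBest4) {
        // Build the code by sorting the group letters alphabetically
        String code = groupsOfBest4.stream()
                .sorted()
                .collect(Collectors.joining());
        return LOOKUP.get(code); // e.g. "ACDF" -> [F3, D3, C3, A3]
    }
}
```

The TDD test cases then naturally fall out of the table: one test per row, asserting the four opponents for each code.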
Re-factoring for Spring Boot JDBC
Spring-boot-starter-jdbc brings a lot of help for JDBC, including in-memory databases and automated schema and data loading as standard. This makes running integration-type tests really quick and easy, with scripts setting up data to any desired point, thereby extending testing to a higher level. I could now retro-fit a proper integration test for the Euro 2020 data, in a script using the H2 in-memory database, to confirm that all of the final 16 games were generated correctly, including the third placed team generation described above. In addition, I could also create tests using data from previous competitions, to ensure that the code worked for these as well.
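Such a test might look something like the sketch below. The script names, the service wiring and the expected counts are assumptions for illustration (the MatchGenerator here is the hypothetical roll-forward service from the earlier sketch); the @Sql-driven data loading and H2 auto-configuration are the standard Spring Boot test machinery.

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.test.context.jdbc.Sql;

@SpringBootTest
@Sql({"/euro2020-teams.sql", "/euro2020-group-results.sql"})
class MatchGeneratorEuro2020IT {

    @Autowired
    private MatchGenerator matchGenerator; // the roll-forward service

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Test
    void generatesAllSixteenLast16Teams() {
        matchGenerator.rollForward(2); // stage 2 = the last 16

        // Every last-16 fixture should now have both teams populated
        Integer fixturesPopulated = jdbcTemplate.queryForObject(
                "select count(*) from fixture where stageid = 2 "
                        + "and hometeamid is not null and awayteamid is not null",
                Integer.class);
        assertThat(fixturesPopulated).isEqualTo(8); // 8 fixtures, 16 teams
    }
}
```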
IDE and Hosting Services
Development at this time was using IntelliJ Community Edition 2021.2.2, with deployment onto my.livehostserver.com under Tomcat 9 with a MySQL back end. My hosting provider does offer Tomcat 10, but since Tomcat 10 uses Jakarta EE rather than Java EE, I first need to upgrade the app before it will work with Tomcat 10. Another task for the backlog! :)

Major refactor to switch to the Spring-Data-Jpa Persistence Framework instead of using Jdbc Templates and custom SQL
The previously utilised JDBC framework is by now rather old technology, and the code for this layer contained quite a large amount of code mapping query results to Java objects, something that Spring-Data-Jpa does for us 'automagically'. One of the benefits of using the JPA framework is the ability to use an in-memory database for testing. Not only does this enable integration tests that cover the full end-to-end stack without the problems of using a physical database, but it also enables unit testing of the data 'slice' of the Spring context via the @DataJpaTest annotation. This means data-related services can be tested without having to load up the full Spring application context.
In order to run the full in-memory integration tests, @DataJpaTest slice tests and regular Unit Tests all in the same project (as well as maintain configuration for the live deployments) I decided to start from a working Spring JPA Prototype project, rather than try to retrofit all of this config into the existing codebase. I then copied across the old classes, layer by layer.
The whole process went something like this (a sketch of some of these steps follows the list):
* Create an entity class for each table, annotated with @Entity and @Data, and ensure the primary key field has @Id; for composite PKs use multiple @Id fields
* Ensure the schema.sql and application.properties files exist on the test/resources classpath and that application.properties contains an in-mem db url and hibernate ddl-auto=validate
* Create a DataLoadTest with @DataJpaTest, @Sql("insertDataSQL"). This ensures you validate the schema, insert scripts and find() methods in the Repositories
* Create JoinRepository classes with native queries to handle more complex joins, still using Spring to map the results to new 'View' objects
* As the old Jdbc layer had already been enhanced to use Lambdas in the mapping, it was only necessary to port the SQL strings and method names into the View Repositories
* Use the Service layer to handle complications with data, leaving the Repository classes as pure as possible, i.e. use Java rather than SQL to process data!
* Create Data Transfer/View objects to encapsulate data for Calculations and/or use in the UI
* Use ModelMapper to simplify translations of objects from Entities to/from DTOs
* Add @DataJpaTest persistence layer (slice) tests on every Service class to validate the nativeQueries (typical issues are around camel/snake case and object/SQL mapping!)
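Putting a few of those steps together, a sketch might look like this; the class, table and script names are illustrative assumptions, while the annotations (@Entity, Lombok's @Data, @DataJpaTest, @Sql) are as described in the list. Since the app is still on Java EE (Tomcat 9), the javax.persistence imports apply.

```java
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.Id;

import lombok.Data;

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.test.context.jdbc.Sql;

// An @Entity mapped to a (hypothetical) team table; Lombok's @Data
// generates the getters, setters, equals and hashCode
@Entity
@Data
class Team {
    @Id
    private Integer teamId;
    private String name;
    private Integer groupId;
}

// Spring-Data-Jpa supplies findAll(), findById() etc. for free,
// plus derived finders from the method name
interface TeamRepository extends JpaRepository<Team, Integer> {
    List<Team> findByGroupId(Integer groupId);
}

// The @DataJpaTest slice test: validates schema.sql, the insert
// script and the find() methods against the in-memory database
@DataJpaTest
@Sql("/insert-teams.sql")
class TeamDataLoadTest {

    @Autowired
    private TeamRepository teamRepository;

    @Test
    void schemaInsertScriptAndFindersAllAgree() {
        assertThat(teamRepository.findAll()).isNotEmpty();
        assertThat(teamRepository.findByGroupId(1)).isNotEmpty();
    }
}
```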
Switch to Gradle, re-structure source code into modules and add Checkstyle & SpotBugs
Gradle is a more flexible, modern build tool, but to be honest I like it mainly because I prefer reading Gradle's Groovy build scripts to Maven's XML. Keeping both unit tests and integration tests in the same folder is not great for a number of reasons, not least having to run all the integration tests every time a Gradle build is required; in addition, different test frameworks and config are usually required for unit and integration tests. I ended up having two test folders, one called test and one called integration-test, but switching between them was cumbersome.
I have long been looking for a way to neatly differentiate unit and integration test code, and until now could only think of having separate /test and /integ-test folders. Of course, IntelliJ only supports one src/main and one src/test folder! So I had been running my unit tests until dev was complete and then switching folders using "Mark directory as" to run the integration tests.
So now I have split the source code into modules, enabling the unit tests to sit with the source code (as they should) and the integration tests to be kept separately in a different module that depends on the source module.
It is especially important to separate these test directories when using Spring Boot, in order to utilise different test frameworks and to enable their required config.
For example, in a prod environment we will specify a datasource type and switch off the Spring datasource auto-configuration so that we can define the connection to a real database ourselves, e.g.
spring.datasource.type=org.apache.tomcat.jdbc.pool.DataSource
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
whereas in our test environment it leads to improved running times to use an in-memory database and Spring Boot's auto-configured option. So we do need to separate the integration tests out from our source code so that we can more easily manage the different approaches and Spring config.
Future Plans
The next project is most likely to be upgrading the authentication and authorization to utilise Spring Security. I have already been researching this and the changes should be relatively straightforward. Spring Security looks amazing, and my current idea is to use form-based authentication with custom log-in and log-out forms, and @PreAuthorize annotations on the controller mappings. This will fully encapsulate login/logout, password encryption and role-based authorization (i.e. controlling access to different parts of the site for the regular user and admin roles).

Another project might be to swap out the old JSP & Tiles and replace these with Thymeleaf, although that itself is now getting old, so perhaps something like React and JavaScript? Although the project is already using Spring MVC, this will no doubt be quite a big project, as there are 46 JSP files in the project in total, including regular, admin, error and layout files, and many files use tags, custom tags and logic conditions which are somewhat tied into the controller classes, so all of these would need re-writing in tandem!
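As a taste of that plan, a form-login configuration might look something like this sketch. The URLs, role names and the Spring Security 5 lambda-DSL style are assumptions on my part, not the final design.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true) // enables @PreAuthorize
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeRequests(auth -> auth
                .antMatchers("/admin/**").hasRole("ADMIN") // admin-only pages
                .antMatchers("/css/**", "/login").permitAll()
                .anyRequest().authenticated())
            .formLogin(form -> form
                .loginPage("/login")            // the custom log-in form
                .defaultSuccessUrl("/home"))
            .logout(logout -> logout
                .logoutUrl("/logout")           // the custom log-out handling
                .logoutSuccessUrl("/login?loggedOut"));
        return http.build();
    }

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder(); // password encryption
    }

    // And on a controller mapping, role-based authorization becomes e.g.:
    // @PreAuthorize("hasRole('ADMIN')")
    // @GetMapping("/admin/rollForward")
    // public String rollForward(Model model) { ... }
}
```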
Docker-isation would be another potential task, to enable easy deployment and scalability using AWS and other hosting suppliers.