The template uses Docker Compose to run your application. All parts of the application, including the database, are run inside containers so that the running environment closely resembles the actual production environment.
Start the application with the following commands:
taito develop # Clean start local development environment
# (Runs 'taito env apply --clean', 'taito start --clean', 'taito init')
taito open client # Open application web user interface
taito info # Show info required for signing in to the application
Installation and startup take some time the first time you run the commands, as Docker containers and npm libraries need to be downloaded first. While waiting, browse through the Quick start section of the DEVELOPMENT.md file to get a quick overview of the Taito CLI commands.
The template comes with some strict linting and formatting rules. Make sure that your editor is configured to show compile and lint errors so that you don't need to watch the console output all the time. Your editor should also be able to format code automatically on save. To achieve this, you have to install at least the Prettier plugin for your editor, and perhaps also some additional TypeScript and ESLint plugins.
The template comes with TypeScript by default. Many tutorials, however, are written in JavaScript. If you would like to write TypeScript that closely resembles JavaScript, you can disable the noImplicitAny setting in tsconfig.json files:
{
  "compilerOptions": {
    "noImplicitAny": false,
    ...
  },
}
Alternatively, if you would like to write plain JavaScript (*.js), you can enable JavaScript in tsconfig.json files (TODO: does allowJs support ES6/7?):
{
  "compilerOptions": {
    "allowJs": true,
    ...
  },
}
Make up some simple idea that you would like to implement, and add a new empty page for it. If you don't come up with an idea yourself, just reimplement the posts page that lets you add new posts, but replace posts with articles. Don't worry about the API or database for now. Just implement a dummy user interface that works but doesn't actually store data anywhere permanently.
If you are not yet familiar with React, you should implement the UI state management using only functionality that React provides out-of-the-box. Appendix A: Technology tutorials provides some tips and other resources that might be useful while learning React, HTML and CSS. If you already know React, you may choose to use additional libraries like Redux and redux-saga for managing state and side effects.
The application is built automatically in the background when you make changes. If the build fails for some reason, you should see errors on your command-line console. You should see the same errors in your editor as well, if your editor has been configured properly.
You can debug the implementation with your web browser. Chrome DevTools is a set of web developer tools built directly into the Google Chrome browser. Other web browsers include similar tools as well. These tools let you examine the generated HTML, change CSS styles directly in the browser, and debug the implementation by setting breakpoints and executing code line by line. Note that you can find the source code of your UI implementation under the webpack folder: Chrome DevTools -> Sources tab -> webpack:// -> . -> src. See appendix A for some additional browser extensions that might also be useful.
If web development is new to you and you are interested in it, just take your time learning the basics of web development before continuing the Taito CLI tutorial.
Add an npm library to the dependencies section of client/package.json. Install the new libraries locally by running taito install (or taito env apply). Restart the client container with taito restart:client. Now you should be able to use the npm library in your implementation.
Every once in a while, commit and push your changes to git. You can do this either with a GUI tool of some sort (e.g. your code editor), with git commands, or with the following taito commands.
Committing changes to a local git repository:
taito stage # Add all changed files to staging area
taito commit # Commit all staged changes to the local git repository
Pulling changes from and pushing changes to a remote git repository:
taito pull # Pull changes from remote git repository using rebase and autostash
taito push # Push changes to remote git repository
For now, you should commit all your changes to the dev branch that is checked out by default. You should also write commit messages in the following format: wip(articles): my short lowercase message. Branches and commit message conventions are explained later in chapter 3. Version control.
Your implementation needs to store some data permanently. For this, you create one or more database tables in the PostgreSQL database. You add a new database table by adding a new database migration. You can do this with the following commands:
taito db add article -n 'add article table' # Add migration
EDIT database/deploy/article.sql # Edit deploy script
EDIT database/revert/article.sql # Edit revert script
EDIT database/verify/article.sql # Edit verify script
taito init # Deploy to local db
If you modify deploy/article.sql after you have already deployed it, you have to deploy the changes with the --clean option:
taito init --clean
The deploy.sql script creates a database table, the verify.sql script verifies that the database table exists, and the revert.sql script reverts the changes by dropping the database table. You can find example deploy, revert and verify scripts in the database/ directory. These migration scripts are run automatically by the CI/CD pipeline when the application is deployed to different environments (e.g. local, development, testing, user acceptance testing, staging, canary, production).
Migrations are executed with Sqitch. See Sqitch tutorial for PostgreSQL if you need further instructions on editing the migration scripts. See appendix A for some SQL and relational database tutorials.
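As a rough sketch of what these three scripts might look like (the column names and types below are assumptions for illustration, not the template's actual schema):

```sql
-- database/deploy/article.sql: create the table
BEGIN;

CREATE TABLE article (
  id serial PRIMARY KEY,
  title text NOT NULL,
  content text,
  created_at timestamptz NOT NULL DEFAULT now()
);

COMMIT;

-- database/revert/article.sql: revert by dropping the table
BEGIN;
DROP TABLE article;
COMMIT;

-- database/verify/article.sql: fail if the table or its columns are missing
BEGIN;
SELECT id, title, content, created_at FROM article WHERE FALSE;
ROLLBACK;
```

The verify script selects zero rows (WHERE FALSE); it only fails if a referenced table or column does not exist, which is exactly the check needed.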
Often it's a good idea to add some example data to the database, as it makes development and testing easier. The folder database/data/ contains example data for each environment. Try to add some example data to the newly created database table(s) with the following commands:
EDIT database/data/local.sql # Modify data used for local environment
taito init --clean # Populate all migrations and init data to local database
Note that taito init --clean erases all existing data from your local database. If you don't want that, you can alternatively run taito init and ignore all the 'already exists' error messages.
TODO: note about remote environments and taito init:dev --clean.
Connect to your local database and check that the example data exists there. You can do this with the following commands:
taito db connect # Connect to the local database
\dt # Show all database tables (postgres)
select * from article; # Show all articles (SQL command)
\? # Show help for all backslash commands (postgres)
\q # Quit (postgres)
If you are not yet familiar with SQL, you should also try executing some additional SQL commands just for the fun of it. See appendix A for some SQL tutorials.
TIP: If you have installed a database GUI tool, you can run taito db proxy to display database connection details, and use those details to connect to the local database.
Normally all database changes must be made using database migrations (option a). However, if you are modifying a database table that does not yet exist in the production environment, you can keep the scripts located in database/deploy/ cleaner by modifying them directly (option b). Try both approaches:
Add a new column to your newly created database table as a new database migration. You do this just like you added the database table, but this time you use an ALTER TABLE clause instead of CREATE TABLE:
taito db add article-foobar -n 'add foobar column to article table' # Add migration
EDIT database/deploy/article-foobar.sql # Edit deploy script
EDIT database/revert/article-foobar.sql # Edit revert script
EDIT database/verify/article-foobar.sql # Edit verify script
taito init # Deploy to local db
The deploy.sql script creates the column, the verify.sql script verifies that the column exists, and the revert.sql script reverts the changes by dropping the column. You can find example deploy, revert and verify scripts in the database/ directory. Note that you can also add multiple columns in a single migration script, if necessary. See the Sqitch tutorial for PostgreSQL if you need further instructions.
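As a sketch (the column name and type here are just placeholders), the three migration scripts could look like this:

```sql
-- database/deploy/article-foobar.sql: add the column
BEGIN;
ALTER TABLE article ADD COLUMN foobar text;
COMMIT;

-- database/revert/article-foobar.sql: revert by dropping the column
BEGIN;
ALTER TABLE article DROP COLUMN foobar;
COMMIT;

-- database/verify/article-foobar.sql: fail if the column is missing
BEGIN;
SELECT foobar FROM article WHERE FALSE;
ROLLBACK;
```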
The upside of this approach is that the new column is deployed to all environments automatically. Other developers need to run taito init manually, but taito init --clean is not required, and therefore all data is preserved.
TODO example: posts-images
Add a new column to your newly created database table by modifying the existing deploy script directly:
EDIT database/deploy/article.sql # Edit deploy script
taito init --clean # Deploy to local db
The downside of this approach is that the taito init:ENV --clean command deletes all existing data from the database, and the command must be run manually in every environment that already contains the database table that was modified.
Your UI implementation needs to access the data located in the database. However, accessing the database directly from the UI is a bad approach for many reasons. Therefore you need to implement an API that sits between the UI and the database:
UI (on browser) -> API (on server) -> database
The API should be stateless. That is, the API implementation should not keep any state in memory or on local disk between requests. This is explained in more detail in appendix B.
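To illustrate the statelessness requirement with a small sketch (the names below are hypothetical, not template code): the first function keeps a counter in process memory, which breaks as soon as the server runs as multiple replicas or restarts; the second keeps all state in the database, so any replica can serve any request.

```typescript
// Anti-pattern: per-process in-memory state. Each server replica has its own
// counter, and the value is lost on every restart.
let viewCountInMemory = 0;
function recordViewStateful(): number {
  viewCountInMemory += 1;
  return viewCountInMemory;
}

// Stateless alternative: the state lives in the database. The process itself
// holds nothing between requests.
type Db = { incrementViewCount: () => Promise<number> };
async function recordViewStateless(db: Db): Promise<number> {
  return db.incrementViewCount();
}
```

The same reasoning applies to local disk: anything written there disappears when the container is replaced, which is why persistent files belong in object storage instead.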
TODO: Some tips for debugging.
The template supports GraphQL API code generation. Read the instructions.
1. Run taito code generate article to generate code for the article table.
2. Restart the server with taito restart:server.
3. Run taito init to generate example GraphQL queries.
4. Run taito open graphql to open the GraphQL playground in your browser. Try to execute the posts and articles queries. You can copy them from server/test/graphql/generated/queries.
See appendix A for some GraphQL API tutorials.
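The exact query shape depends on the generated schema, so prefer the generated example queries, but a playground query might look roughly like this (the field names here are guesses):

```graphql
query {
  articles {
    id
    title
  }
}
```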
The template also supports RESTful APIs. There is one example at InfraRouter.ts. You can implement a RESTful API endpoint at server/src/core/routers/ArticleRouter.ts if you wish. In a RESTful API an HTTP URL (e.g. /articles) defines a resource, and HTTP methods (GET, POST, PUT, PATCH, DELETE) operate on that resource. For example:
GET /articles: Fetch all articles from the articles collection
POST /articles: Create a new article in the articles collection
GET /articles/432: Read article 432
PUT /articles/432: Update article 432 (all fields)
PATCH /articles/432: Update article 432 (only the given fields)
DELETE /articles/432: Delete article 432
See appendix A for some RESTful API tutorials.
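To make the resource/method mapping concrete, here is a framework-free, in-memory sketch of the same operations (hypothetical types, not the template's actual router code; a real endpoint would delegate to the database instead of a Map):

```typescript
type Article = { id: number; title: string; content?: string };

class ArticleCollection {
  private articles = new Map<number, Article>();
  private nextId = 1;

  // GET /articles: fetch all articles from the collection
  list(): Article[] {
    return [...this.articles.values()];
  }

  // POST /articles: create a new article in the collection
  create(data: Omit<Article, 'id'>): Article {
    const article = { id: this.nextId++, ...data };
    this.articles.set(article.id, article);
    return article;
  }

  // GET /articles/:id: read a single article
  read(id: number): Article | undefined {
    return this.articles.get(id);
  }

  // PATCH /articles/:id: update only the given fields
  patch(id: number, fields: Partial<Omit<Article, 'id'>>): Article | undefined {
    const existing = this.articles.get(id);
    if (!existing) return undefined;
    const updated = { ...existing, ...fields };
    this.articles.set(id, updated);
    return updated;
  }

  // DELETE /articles/:id: remove the article
  remove(id: number): boolean {
    return this.articles.delete(id);
  }
}
```

PUT would look like patch but replace all fields of the resource instead of merging only the given ones.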
Your implementation will be run in many other environments in addition to your local environment (testing environment and production environment, for example). Some settings, like database settings, change depending on the environment. You can define these settings with environment variables.
1. Add a new environment variable to docker-compose.yaml and restart Docker Compose with CTRL+C and taito start.
2. Read the environment variable in server/src/common/setup/config.ts.
3. Return the value from the /config endpoint in server/src/infra/routers/InfraRouter.ts and see if the /api/config endpoint returns the configured value to your browser.
4. Add the environment variable also to scripts/helm.yaml. The helm.yaml file is used for Kubernetes running in remote environments, but you should add the environment variable right away, so that you don't forget to do it later. You can use TODO as the value, if you don't know the correct value yet.
Note that you should not use environment variables to define passwords or other secrets. Configuring remote environments and secrets is explained in part II of the tutorial.
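For example, the docker-compose.yaml change might look roughly like this (the service name and variable name below are placeholders; check the actual service names in the template's docker-compose.yaml):

```yaml
services:
  server:
    environment:
      MY_SETTING: some-value
```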
You should not worry about 3rd party services and secrets for now. These are explained in part II of the tutorial.
TODO: As noted previously, no local disk.
TODO: https://cloud.google.com/storage/docs/access-control/signing-urls-manually TODO: minio -> S3 compatible (google cloud, etc.)
Data changes made by a service should be atomic to preserve data integrity. That is, if a GraphQL mutation or a RESTful API endpoint modifies data located in multiple database tables, either all of the data updates should be completed or none of them should.
With relational databases you can use transactions to achieve atomicity. The full-stack-template starts a transaction automatically for all GraphQL requests containing mutations and for all RESTful POST, PUT, PATCH and DELETE requests (see server/src/infra/middlewares/dbTransactionMiddleware.ts). This is a good default for most cases. See chapter 10. full-stack-template specific details if you'd like to know how to customize your transactions.
Check that transactions work like they should:
1. Run taito db connect and execute select * from post order by created_at desc to see the latest posts.
2. Edit PostService.ts and add a line that throws an error after the post has been added to the database:
const createdPost = this.postDao.create(state.tx, post);
if (true) throw new Error('error');
return createdPost;
Using a database transaction does not always suffice if an operation makes data changes to multiple systems. However, if only two systems are involved (e.g. database + object storage), you can often mitigate this issue just by executing the updates in a correct order. You should make all database updates first and only then write data to object storage. This way database updates will be rolled back automatically if the object storage write fails. In a more complex scenario, you might need to catch some errors yourself and revert data changes manually.
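A sketch of this ordering with hypothetical dao and storage interfaces (not the template's actual API):

```typescript
type Tx = unknown; // opaque database transaction handle

interface PostDao {
  insertPost(tx: Tx, title: string): Promise<string>; // returns the new post id
}
interface ObjectStorage {
  putObject(key: string, data: string): Promise<void>;
}

// Database first, object storage second: if putObject throws, the surrounding
// transaction middleware rolls back the insert, so no orphan row remains.
// (In the opposite order, a failed insert would leave an orphan file behind.)
async function createPostWithImage(
  dao: PostDao,
  storage: ObjectStorage,
  tx: Tx,
  title: string,
  image: string
): Promise<string> {
  const postId = await dao.insertPost(tx, title);      // 1. DB update inside tx
  await storage.putObject(`images/${postId}`, image);  // 2. storage write last
  return postId;
}
```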
Try this yourself by modifying the implementation that you made in exercise 2.10. Try both orderings, and see how they behave when an error occurs during either the database update or the object storage write:
Some systems support distributed transactions. That is, you can make changes to multiple systems at once, and all of them participate in the same transaction. Distributed transactions come with extra complexity and are rarely needed for simple systems.
Test scripts are run automatically by the CI/CD pipeline when the application is deployed to different environments (e.g. local, development, testing, user acceptance testing, staging, canary, production). You can also run these tests manually with the following commands:
taito unit # Run all unit tests
taito test # Run all UI and API tests against locally running application
You can also run a certain subset of tests:
taito unit:client # Run unit tests of client
taito unit:server # Run unit tests of server
taito test:server # Run all API tests of server against locally running application
taito test:client cypress # Run all cypress UI tests of client against locally running application
You can run UI and API tests also against remote environments, but this is explained in chapter 5. Remote environments.
You should not test the implementation in your test scripts. Instead, you should always find some kind of 'public API' that is designed not to change very often, and test the behaviour of that API. Here the public API can be provided by a class, module, library, service or UI, for example. This way you can make changes to the underlying implementation, and the existing tests protect you from breaking anything.
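As a tiny illustration of the principle (a made-up module, not template code): a test should assert only on the exported function, never on the internal helper, so the helper can be renamed or replaced without breaking any tests.

```typescript
// Internal helper: an implementation detail. Tests should not call this directly.
function capitalizeFirst(s: string): string {
  return s.length === 0 ? s : s[0].toUpperCase() + s.slice(1);
}

// Public API: the stable surface that tests should target.
function formatPostTitle(raw: string): string {
  return capitalizeFirst(raw.trim());
}
```

A behaviour-level test would assert that formatPostTitle('  hello world') returns 'Hello world', and would keep passing even if capitalizeFirst were inlined or rewritten.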
TODO: TDD or not, prototyping at beginning of the project TODO: Running tests in production
full-stack-template uses Cypress for automatic user interface tests.
1. Run taito cypress:client and run all existing Cypress tests by pressing the Run all specs button.
2. Create tests for your UI. See client/test/integration/posts.spec.js as an example.
The following resources provide some useful instructions for writing Cypress tests:
TIP: By default, Cypress tests are end-to-end tests. That is, they test functionality all the way from the UI to the database. This is not always a good thing. Your tests may become fragile if they depend on 3rd party services or on data that you cannot easily control during the test run. Your tests may also perform poorly, and you may easily end up testing the same functionality twice if you already have API tests in place. See Network Requests for more information.
The API test examples use Jest as the testing framework.
1. Run the tests with taito test:server.
2. Create some tests for your API. See the examples at server/test/core.
The following resources provide some useful instructions for writing tests:
TODO
The full-stack-template differentiates unit tests from all other tests by using unit as a filename suffix instead of test. A unit test does not require a running environment. That is, no database or external services are involved, as a unit test typically tests only a small piece of code. You can achieve this by mocking. TODO mock link.
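For example, a unit test can replace the real database access object with a hand-written mock, so no database is needed (the names below are hypothetical, not template code):

```typescript
// Service logic under test: depends on the dao only through a narrow interface.
interface PostDao {
  countByAuthor(author: string): Promise<number>;
}

// Returns true if the author is still under their post limit.
async function canCreatePost(
  dao: PostDao,
  author: string,
  limit = 10
): Promise<boolean> {
  return (await dao.countByAuthor(author)) < limit;
}

// In a unit test, a mock dao returns a canned count instead of querying a database.
const mockDao: PostDao = { countByAuthor: async () => 3 };
```

Because the dependency is injected through an interface, the unit test exercises only the service logic, which is what makes it fast and environment-free.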
The unit test examples use Jest as the testing framework.
1. Run the tests with taito unit.
2. Create unit tests for your TODO. See the TODO as an example.
The following resources provide some useful instructions for writing tests:
TODO
taito open git # Open git repository on browser
taito open project # Open project management on browser
taito open docs # Open project documentation on browser
taito open apidocs # Open generated api documentation on browser
taito open ux # Open UX guides and layouts on browser
taito size check:client # Analyze size of the client
taito dep check:server # Check dependencies of the server
taito code check:server # Check code quality of the server
taito trouble # Display troubleshooting instructions
taito workspace kill # Kill all running processes (e.g. containers)
taito workspace clean # Remove all unused build artifacts (e.g. images)
If you have not already, read Appendix B: Software design for some tips on how to design your application.
Next: 3. Version control