Jump to:
- Benefits of workspaces
- Built With
- Developing
- Structure
- Transpiling/Babel
- Code linting
- Flow, static type checking
- State architecture
- Containers and Components
- Actions
- Selectors/Reselect
- Reducers
- Sagas
- Git flow
- Commit message
- Contributing
- Preparing a good PR
- IDE integration
- Your dependencies can be linked together, which means that your workspaces can depend on one another while always using the most up-to-date code available. This is also a better mechanism than yarn link, since it only affects your workspace tree rather than your whole system.
- All your project dependencies will be installed together, giving Yarn more latitude to better optimize them.
- Yarn will use a single lockfile rather than a different one for each project, which means fewer conflicts and easier reviews.
- Shared eslint config
- Shared flowconfig and flow-typed
- Easy IDE integrations, since we don't have multiple root dirs in the project
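For reference, a minimal sketch of how the root package.json could wire up the workspaces (field values assumed from the core/web structure described below):

```json
{
  "private": true,
  "workspaces": [
    "core",
    "web"
  ]
}
```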
Example web project with yarn workspaces.
- yarn-workspaces - Allows you to set up multiple packages in a single repository
- webpack - A module bundler. Its main purpose is to bundle JavaScript files for usage in a browser, yet it is also capable of transforming, bundling, or packaging just about any resource or asset.
- react - A JavaScript library for building user interfaces
- redux - A predictable state container for JavaScript apps
- redux-saga - An alternative side effect model for Redux apps
- flowtype - A static type checker for JavaScript
- reselect - Selector library for Redux
- styled-components - Visual primitives for the component age
Install dependencies (this installs dependencies from all workspaces):

```shell
yarn
```

Run webpack-dev-server:

```shell
cd web
yarn start
```
Core logic - actions, reducers, sagas, selectors, types and helpers.
Web project - UI components and containers, depends on core
JS code is transpiled with Babel. The babel-loader config can be found in the webpack rule:
https://github.com/web-pal/mono-app-example/blob/master/web/webpack.config.base.js#L26
All files with .js and .jsx extensions are handled by babel-loader, excluding node_modules. The core package is transpiled as well, since webpack (or Babel) supports workspaces.
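A minimal sketch of what such a webpack rule looks like (options abbreviated and assumed; see the linked webpack.config.base.js for the real one):

```javascript
// Sketch of the babel-loader rule for .js/.jsx files.
const babelRule = {
  // handle both .js and .jsx files
  test: /\.jsx?$/,
  // skip third-party code; the core workspace is a symlink that webpack
  // resolves to a real path outside node_modules, so it is still transpiled
  exclude: /node_modules/,
  use: { loader: 'babel-loader' },
};
```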
Presets used:
- @babel/preset-env - A Babel preset that compiles ES2015+ down to ES5 by automatically determining the Babel plugins and polyfills you need based on your targeted browser or runtime environments.
- @babel/stage-0 - Includes several Babel plugins; see the preset's documentation for details
- @babel/react - Babel preset for all React plugins
- @babel/flow - Removes flow types before transpiling
JS code is linted with ESLint, using the babel-eslint parser, which allows linting ALL valid Babel code.
eslint-config-airbnb brings in the rules of the Airbnb style guide.
Config file for ESLint.
We have a global config and an extended config for the web package:
```
"settings": {
  "import/resolver": {
    "webpack": {
      "config": "./webpack.config.base.js"
    }
  }
},
```

This allows ESLint to understand webpack import resolution.
Flow is a static type checker for your JavaScript code. It does a lot of work to make you more productive: helping you code faster, smarter, more confidently, and at a bigger scale. Flow checks your code for errors through static type annotations. These types allow you to tell Flow how you want your code to work, and Flow will make sure it does work that way.
To run a full Flow check and print the results:

```shell
yarn flow
```
Config file for Flow:

```
[ignore]
.*/node_modules/.*
```

We ignore node_modules for static type checking, but this may cause import errors. If you use a third-party library that ships its own Flow definitions, exclude it from the ignore regexp, or use flow-typed for modules without Flow definitions. Read the flow-typed chapter below.
```
[options]
esproposal.export_star_as=enable
```

The esproposal.export_star_as option allows using the export * as syntax.
```
[options]
module.name_mapper='^web\-components\/\(.*\)$' -> '<PROJECT_ROOT>/web/src/components/\1'
module.name_mapper='^web\-containers\/\(.*\)$' -> '<PROJECT_ROOT>/web/src/containers/\1'
```

module.name_mapper allows resolving custom import paths; we use it to resolve our webpack aliases.
flow-typed is a repository of third-party library interface definitions for use with Flow.
When you start a project with Flow, you likely want to use some third-party libraries that were not written with Flow. By default, Flow will just ignore these libraries leaving them untyped. As a result, Flow can't give errors if you accidentally mis-use the library (nor will it be able to auto-complete the library).
To address this, Flow supports library definitions which allow you to describe the interface of a module or library separate from the implementation of that module/library.
The flow-typed repo is a collection of high-quality library definitions, tests to ensure that definitions remain high quality, and tooling to make it as easy as possible to import them into your project.
All you have to do when you add one or more new dependencies to your project is run flow-typed install. This will search the libdef repo and download all the libdefs that are relevant for your project and install them for you. After that, simply check them in and be on your way!
To install a package definition, use the flow-typed command:

```shell
yarn flow-typed install <package>@<version>
```
The state architecture is based on the Redux documentation, so for a deeper understanding just read the documentation.
- Data normalizing is the most important thing to apply.
- Simple reducers
- Memoized selectors
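For instance, normalized data keeps each entity in one place, stored by id (entity names here are hypothetical):

```javascript
// Normalized shape: entities stored by id, ordering kept as a list of ids.
const itemsState = {
  byId: {
    1: { id: 1, name: 'First' },
    2: { id: 2, name: 'Second' },
  },
  allIds: [1, 2],
};

// Lookup by id is O(1), and there is a single source of truth per entity.
const getItem = (state, id) => state.byId[id];
```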
Actions are payloads of information that send data from your application to your store. They are the only source of information for the store. You send them to the store using store.dispatch().
- Dispatch actions from components only using the dispatch function; avoid bindActionCreators. Type definitions of actions provided by bindActionCreators are invisible to the components, so you have to define types in Props. Using dispatch allows seeing the type definitions of actions and separates actions from props in the components. An example can be found here.
- Use general-purpose actions - for instance, if you have a ui reducer that is used like a key-value storage, do not create a new action for each key change; define one action that services the whole ui key-value storage. An example can be found here:
```javascript
const setUiState = (
  key: string,
  value: string | number,
): UiAction => ({
  type: actionTypes.SET_UI_STATE,
  payload: {
    key,
    value,
  },
});
```
- If you handle an action in a saga, mark the type and name of the action with a Request suffix:

```javascript
const getApiListRequest = () => ({
  type: actionTypes.GET_API_LIST_REQUEST,
});
```
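The dispatch-only pattern above can be sketched without react-redux (a plain recording function stands in for store.dispatch to keep the sketch self-contained; Flow annotations are stripped):

```javascript
// setUiState as defined above, plain JS.
const setUiState = (key, value) => ({
  type: 'SET_UI_STATE',
  payload: { key, value },
});

// Stand-in for store.dispatch: records dispatched actions.
const dispatched = [];
const dispatch = (action) => {
  dispatched.push(action);
  return action;
};

// A component handler would call dispatch directly, e.g.:
// onClick={() => dispatch(setUiState('sidebarOpen', true))}
dispatch(setUiState('sidebarOpen', true));
```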
To get values from the store, use the Reselect library.
Reselect is a popular library that provides a convenient way of getting values from the store in a React-Redux application. What makes it so good is its memoization ability.
You can read all about it in the documentation. In short, when you use the createSelector() function, it memoizes the output of every input selector and recalculates the resulting value only if any of the input selectors changes its output. An important thing to note here is that Reselect uses reference equality (===) to determine a value change.
- Use general-purpose selectors - for instance, if you have a ui reducer that is used like a key-value storage: since you don't need any computation in a selector for a key-value storage, you don't need memoization and can provide a general selector that gets a value by key:

```javascript
export const getUiState = (key: string) =>
  ({ ui }: { ui: UiState }) => ui[key];
```
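The memoization idea behind createSelector can be illustrated with a tiny sketch (this is not the real library, just the core mechanism - recompute only when an input selector's output changes by reference):

```javascript
// Minimal createSelector: memoizes on the outputs of the input selectors.
const createSelector = (inputSelectors, compute) => {
  let lastInputs = null;
  let lastResult;
  return (state) => {
    const inputs = inputSelectors.map((sel) => sel(state));
    const changed =
      lastInputs === null || inputs.some((v, i) => v !== lastInputs[i]);
    if (changed) {
      lastResult = compute(...inputs);
      lastInputs = inputs;
    }
    return lastResult;
  };
};

// Usage: a memoized selector over a filtered list.
let computations = 0;
const getVisibleItems = createSelector(
  [(state) => state.items, (state) => state.filter],
  (items, filter) => {
    computations += 1;
    return items.filter((name) => name.includes(filter));
  },
);
```

Because comparison is by reference (===), replacing state.items with a new array recomputes even if the contents are equal - which is why normalized, immutably-updated state matters.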
- Keep reducers pure - free of computation and business logic
- Avoid duplicating data in the store; store only normalized data
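A pure key-value ui reducer handling the SET_UI_STATE action from the Actions section might look like this (the action type string is assumed):

```javascript
const initialState = {};

// Pure reducer: no side effects, no mutation of the previous state.
const ui = (state = initialState, action) => {
  switch (action.type) {
    case 'SET_UI_STATE': {
      const { key, value } = action.payload;
      // return a new object instead of mutating state
      return { ...state, [key]: value };
    }
    default:
      return state;
  }
};
```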
Use redux-saga for side effects (API requests) and long-polling business processes. It's very important to separate this logic from the UI.
redux-saga is a library that aims to make application side effects (i.e. asynchronous things like data fetching and impure things like accessing the browser cache) easier to manage, more efficient to execute, simple to test, and better at handling failures.
The mental model is that a saga is like a separate thread in your application that's solely responsible for side effects. redux-saga is a redux middleware, which means this thread can be started, paused and cancelled from the main application with normal redux actions; it has access to the full redux application state and it can dispatch redux actions as well.
It uses an ES6 feature called Generators to make those asynchronous flows easy to read, write and test. (If you're not familiar with them, here are some introductory links.) By doing so, these asynchronous flows look like your standard synchronous JavaScript code (kind of like async/await, but generators have a few more awesome features we need).
You might've used redux-thunk before to handle your data fetching. Contrary to redux-thunk, you don't end up in callback hell, you can test your asynchronous flows easily, and your actions stay pure.
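As a sketch, here is a saga handling the GET_API_LIST_REQUEST action from the Actions section. In the real project, call and put come from 'redux-saga/effects'; minimal stand-ins are inlined here so the sketch is self-contained, and fetchApiList is a hypothetical api helper:

```javascript
// Stand-ins for redux-saga effect creators: plain effect-description objects.
const call = (fn, ...args) => ({ effect: 'CALL', fn, args });
const put = (action) => ({ effect: 'PUT', action });

const fetchApiList = () => Promise.resolve([]); // stub api helper

function* getApiList() {
  try {
    // the middleware performs the request and resumes the generator with the result
    const list = yield call(fetchApiList);
    yield put({ type: 'GET_API_LIST_SUCCESS', payload: list });
  } catch (err) {
    yield put({ type: 'GET_API_LIST_FAILURE', error: err.message });
  }
}
```

Because the saga only yields plain effect objects, it can be tested by stepping the generator by hand, without hitting the network.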
We use Vincent Driessen's branching model.
Read details here:
- http://nvie.com/posts/a-successful-git-branching-model/
- http://danielkummer.github.io/git-flow-cheatsheet/
To make the git flow experience smoother, you can use custom git commands (regular shell scripts) - git-flow.
Set up a git repository for git-flow usage (this stores the git-flow config in .git/config):

```shell
git flow init -d
```
We use the conventional commits specification for commit messages.
To ensure that all commit messages are formatted correctly, you can use the Commitizen CLI tool. It provides an interactive interface that creates your commit messages for you.

```shell
sudo npm install -g commitizen cz-customizable
```

From now on, instead of git commit you type git cz and let the tool do the work for you.
The following commit types are used on the project:
- feat - A new feature
- fix - A bug fix
- improvement - Improve a current implementation without adding a new feature or fixing a bug
- docs - Documentation only changes
- style - Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
- refactor - A code change that neither fixes a bug nor adds a feature
- perf - A code change that improves performance
- test - Adding missing tests
- chore - Changes to the build process or auxiliary tools and libraries such as documentation generation
- revert - Revert to a commit
- WIP - Work in progress
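For example, a commit message following this convention (scope and subject are hypothetical) could look like:

```
feat(web): add sidebar toggle
```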
You should strive for a clear informative commit message. Read How to Write a Git Commit Message.
Helpful hint: before pushing, you can always edit your last commit message by using:

```shell
git commit --amend
```
After cloning the repo, initialize the local repository with git-flow (if you use it):

```shell
git flow init -d
```
When starting work on a new issue, branch off from the develop branch.
```shell
git checkout -b feature/<feature> develop
# git-flow:
git flow feature start <feature>
```
If your feature/bug has a GitHub issue, use the issue id as the feature name. For instance:

```shell
git checkout -b feature/1 develop
# git-flow:
git flow feature start 1
```

This means you are starting work on issue #1 (/issues/1 in the repo).
Then, do work and commit your changes.
```shell
git push origin feature/<feature>
# git-flow:
git flow feature publish <feature>
```
When done, open a pull request from your feature branch.
If you have permission to finish the feature yourself:

```shell
git checkout develop
# Switched to branch 'develop'
git merge --no-ff feature/<feature>
# Use --no-ff to avoid losing information about the historical existence of a feature branch
git branch -d feature/<feature>
# Deleted branch
git push origin develop
```
Same with git-flow:

```shell
git flow feature finish
```
- A pull request should have a specific goal and have a descriptive title. Do not put multiple unrelated changes in a single pull request
- Do not include any changes that are irrelevant to the goal of the pull request. This includes refactoring or reformatting unrelated code and changing or adding auxiliary files (.gitignore, etc.) in a way that is not related to your main changes.
- Make logical, not historical commits. Before you submit your work for review, you should rebase your branch (git rebase -i) and regroup your changes into logical commits. Logical commits achieve different parts of the pull request goal. Each commit should have a descriptive commit message. Logical commits within a single pull request rarely overlap in the lines of code they touch.
- If you want to amend your pull request, rewrite the branch and force-push it instead of adding new (historical) commits or creating a new pull request.
To respect local indentation rules, use .editorconfig.
The following plug-ins are great for syntax highlighting:

```vim
Plug 'pangloss/vim-javascript'
Plug 'mxw/vim-jsx'
```
It will use the eslint binary and config files from the project.
help wanted...
help wanted...
To handle ESLint, use the Asynchronous Lint Engine (ALE):

```vim
Plug 'w0rp/ale'

" Configure ale (linting)
let g:ale_sign_column_always = 1
let g:ale_linters = {
\'javascript': ['eslint']
\}
```
help wanted...
help wanted...
To analyze the code, use Flow as an LSP server. The Language Server Protocol is used between a tool (the client) and a language smartness provider (the server) to integrate features like autocomplete, go to definition, and find all references into the tool.
You need to install an LSP client for Vim.
At the moment, the LSP mode in Flow is in development, but we can use an LSP wrapper for Flow.
To get completions from the LSP client, use nvim-completion-manager:
```vim
Plug 'autozimu/LanguageClient-neovim', {
\ 'branch': 'next',
\ 'do': 'bash install.sh',
\ }

let g:LanguageClient_serverCommands = {
\ 'javascript': ['flow-language-server', '--stdio', '--try-flow-bin'],
\ 'javascript.jsx': ['flow-language-server', '--stdio', '--try-flow-bin'],
\ }

imap <expr> <CR> (pumvisible() ? "\<c-y>\<Plug>(expand_or_nl)" : "\<CR>")
imap <expr> <Plug>(expand_or_nl) (cm#completed_is_snippet() ? "\<C-M>":"\<CR>")
inoremap <expr> <Tab> pumvisible() ? "\<C-n>" : "\<Tab>"
inoremap <expr> <S-Tab> pumvisible() ? "\<C-p>" : "\<S-Tab>"
```
It gives you autocomplete, go to definition and hover suggestion functionality.