- Intro
- How is work organized?
- Setting up your local environment
- Running tests
- Linting
- Continuous Integration
- Postman
- Accessibility
- Internationalization
- CSS
Low risk, intuitive product experimentation. Inform your product decisions with evidence from your users by trying out multiple variations of your features.
Works with whatever amount of data you have: more data is better, but you can still learn from a small amount.
Traffic flow to variants is adjusted in real time based on their performance. As a result, poorly performing variants are quickly starved of traffic while high-performing variants are rewarded with more.
Specify the value of particular actions within your product, so that Hyp can measure success based on the total value provided by variants, rather than just conversion rates.
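Hyp's exact allocation algorithm isn't spelled out here, but real-time traffic adjustment like the above is the classic multi-armed-bandit setup. A minimal Thompson-sampling sketch (illustrative only; the variant names, priors, and counts are assumptions, not Hyp's actual implementation):

```python
import random

def choose_variant(stats):
    """Pick a variant by Thompson sampling: draw a sample from each
    variant's Beta(successes + 1, failures + 1) posterior and serve the
    variant with the best draw. Poorly performing variants quickly stop
    winning draws and are starved of traffic; strong variants win more."""
    best, best_draw = None, -1.0
    for name, (successes, failures) in stats.items():
        draw = random.betavariate(successes + 1, failures + 1)
        if draw > best_draw:
            best, best_draw = name, draw
    return best

# Hypothetical counts: variant A converts far more often than B,
# so it should receive the overwhelming majority of traffic.
stats = {"A": (90, 10), "B": (10, 90)}
counts = {"A": 0, "B": 0}
for _ in range(1000):
    counts[choose_variant(stats)] += 1
```

Because each request samples from the posteriors rather than always picking the current leader, underperforming variants still get occasional traffic, so the system can recover if early data was misleading.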
We aim for:
- Transparent pricing that's affordable for startups.
- Helping customers leverage the data they actually have, rather than waiting until they have a massive amount of traffic or users to start doing so.
- Offering clear, intuitive explanations of experiment results.
- Providing a great developer experience through thorough documentation, a robust sandbox environment, visibility into logs and events via a dashboard, security best practices, and a fast API.
We do almost everything in GitHub. It's nice to have a single place for Kanban boards (GitHub Projects), continuous integration (GitHub Actions), code review, and high-level product discussions (GitHub Projects, again).
Generally speaking, we decide what to work on and when by writing up proposals, discussing them as a group, and prioritizing them based on how much effort they will take and how much value they will provide, relative to other work we want to do.
We don't have substantive product discussions in instant messaging tools like Slack, preferring to write in long-form and respond asynchronously.
Check out the description of our product proposals board for a detailed explanation.
Right now, Hyp is a Django monolith hosted on Heroku and backed by a PostgreSQL database. It uses `numpy` for fancy math.
Django has good docs and guides and is conceptually similar to Ruby on Rails.
Here's what you need to do to get Hyp running locally on a Mac:
- Make sure you have Xcode installed and up to date: `xcode-select --install`.
- Install Homebrew: `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"`.
- Install Python 3: `brew install python3`. This should also get you `pip3`, Python's package manager, but if not you can try `python3 -m ensurepip --upgrade`.
- Clone the Hyp repo. If you have SSH keys added to GitHub run `git clone git@github.com:Murphydbuffalo/hyp2.git`, otherwise run `git clone https://github.com/Murphydbuffalo/hyp2.git`.
- `cd hyp2` (the original Hyp was a very terrible Ruby gem, thus the name "Hyp2").
- Set up a virtual env for the project: `python3 -m venv .`. A virtual env does two things. First, it allows us to install the dependencies we need for Hyp into a directory specific to this project, which means installing those same dependencies globally or for another project won't overwrite a dependency with a different version. Second, it allows us to use a specific version of Python and pip for the project. After running this command you should see a `./lib/python3.9/site-packages` directory. This is where our Hyp-specific installations of dependencies will live (as opposed to some global `site-packages` directory). You should also see a `./bin` directory including, among other things, executables for `pip` and `python` that symlink to the version of Python you used to run the `python3 -m venv .` command earlier.
- Run `source ./bin/activate`. This tells your computer to use the executables in `./bin` by prepending that directory to your `PATH`. You can check that this worked by running `echo $PATH`, or by running `python -V` and seeing that your `python` executable points to some version of Python 3, not the system default of Python 2.
- Install the dependencies: `python -m pip install -r requirements.txt`. The `requirements.txt` file stores the versions of all dependencies of the app, so if you update or add a dependency be sure to run `python -m pip freeze > requirements.txt`.
- I highly recommend making a shell alias so you don't forget to activate your virtual env. I have something like `alias hyp="cd ~/Code/hyp2 && source ./bin/activate"` in my shell config file.
- `cd ./web`. Most of the work you do will be inside this directory. The top-level directory is mainly for configuration files.
- Install and run PostgreSQL: `brew install postgresql` and then `brew services start postgresql`. If everything is set up correctly you should be able to run the server with `python manage.py runserver`. Congrats! If there are issues with your Postgres setup you can investigate by looking at the log file: `tail -n 500 /usr/local/var/log/postgres.log`.
- Install and run Redis: `brew install redis` and then `brew services start redis`. Redis is the back end for the job processing tool we use, RQ. RQ is very similar to Resque from the Ruby world.
- Create the development database: `createdb hyp2`. `createdb` is a command provided by Postgres.
- Run the database migrations: `./helper_scripts/migrate`. By the way, this `helper_scripts` directory is something I added for convenience. Feel free to make PRs that add new scripts or update existing ones. (If you get an error about the role `postgres` not existing, you may need to run `createuser -s -r postgres` first.)
- Create a superuser for yourself: `./helper_scripts/user`.
- Start the app by running `./helper_scripts/app`. You should be able to log in as the superuser you just created. Under the hood this helper script is actually doing three things: starting the Django server, starting an RQ worker process, and watching all of the files in `web/hyp/static/scss` for changes. If any of the Sass files change they'll get compiled to CSS in `web/hyp/static/css`, which is what we actually serve to clients.
- Run the tests: `./helper_scripts/test`. If you see a warning about a `web/staticfiles` directory it's nothing to worry about. We include that directory in `.gitignore` because it is meant to contain our compiled production static assets. You can make the warning go away by running `mkdir ./web/staticfiles`.
- Fire up a Python REPL with all of the Hyp code available for import: `./helper_scripts/shell`.
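If you're ever unsure whether your virtual env is actually active, Python itself can tell you. A quick stdlib-only check:

```python
import sys

# When a virtual env is active, sys.prefix points into the venv directory,
# while sys.base_prefix still points at the base Python installation.
# If the two differ, you're inside a venv.
in_venv = sys.prefix != sys.base_prefix
print("virtual env active:", in_venv)
```

This is handy in a REPL when `which python` output looks ambiguous.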
If you're on a Linux system things should largely be the same, with some key differences being:
- You'll be using something like `apt` to fetch dependencies.
- You'll need to use something like Ubuntu's `service` command to run Postgres, rather than Homebrew services.
- You'll need to install the Python dev package so that dependencies with native code can compile successfully. To do this you can run `sudo apt-get install python3-dev`.
Django provides a `manage.py` script inside `web` that can be used to do things like start the development server, run tests, open up REPLs, create and run migrations, create superusers, etc. Check out the full list of commands here.

Most of the time you can just use the relevant script in `web/helper_scripts`, which will call these commands with some added conveniences. However, feel free to experiment with running the Django commands yourself, eg:

- Run `python manage.py migrate` to run database migrations.
- Run `python manage.py runserver` to start your development server.
- Run `python manage.py test hyp/tests --settings web.settings.testing` to run the test suite.
- Run `python manage.py shell` to run a Python REPL from which you can `import` any code you want to play around with.
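Conceptually, the helper scripts just wrap invocations like these. A hypothetical Python wrapper (for illustration only; the actual contents of `helper_scripts` may differ):

```python
import subprocess

def manage_command(*args):
    # Build the argv for a manage.py invocation, eg manage_command("migrate").
    return ["python", "manage.py", *args]

def manage(*args):
    # Run the command from the web/ directory; check=True raises on failure.
    return subprocess.run(manage_command(*args), cwd="web", check=True)
```

From the repo root, `manage("test", "hyp/tests", "--settings", "web.settings.testing")` would then run the test suite.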
We use a package called `django-debug-toolbar` that is loaded on any non-superuser HTML page. On those pages you'll see a little `DjDT` icon on the right-hand side of the page. Click on that to see things like database queries, static assets on the page, and errors.
We use Flake8, and will eventually use ESLint, to lint our Python and JS code, respectively. Install the necessary plugins to have these linters run in your text editor.

See `.flake8` and `.eslintrc` for the relevant configurations for those tools.
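For reference, a Flake8 config is a small INI file. A sketch of what ours might look like (assumed values; check the repo's actual `.flake8` for the real settings):

```ini
[flake8]
; Keep lines readable without fighting the default 79-character limit.
max-line-length = 100
; Don't lint the virtual env or compiled static assets.
exclude = lib, bin, web/staticfiles
```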
We use GitHub Actions for continuous integration. See `.github/workflows` for how that is set up.

The test suite and linter will run every time you push to a branch. A script to deploy to Heroku will run whenever changes are merged into `main`.

See `Procfile`, `runtime.txt`, and `heroku.sh` for Heroku-related scripts and config.

We also have GitHub's Dependabot set up to automatically make PRs for fixing security vulnerabilities by upgrading dependencies. See `.github/dependabot.yml` for details.
Postman is great for playing around with the API. We have a team account with some ready-to-go requests against the API.
Every API endpoint should be documented (in Apiary? Postman? It's a TODO) and have tests written for it.
We aim to make all of our UI accessible. This means writing semantic HTML, adding `tabindex` attributes where necessary, and (hopefully rarely) adding ARIA attributes for JS-heavy code.
We aim to make it easy to provide translations for all of our copy. Via I18n and Google Translate? It's a TODO.
We write our CSS according to Maintainable CSS: https://maintainablecss.com/
This means we organize our styles into modules, components, states, and modifiers.
Modules are collections of components; a module breaks, or stops making sense, if you remove one of its constituent components. You might use multiple modules in a page, but removing any one module shouldn't break the others.
Components are independent and reusable.
Some examples: "In a website the header, registration form, shopping basket, article, product list, navigation and homepage promo can all be considered to be modules."
"A logo module might consist of copy, an image and a link, each of which are components. Without the image the logo is broken, without the link the logo is incomplete."
To avoid difficulties overriding selectors with a high specificity we use flat, class-based selectors of the form `.module-component-state`.
Sadly, the `django-sass` library doesn't allow for importing files from a parent directory, so we don't nest our Sass files in directories like `modules/`. Instead we add `_module` or `_component` to the end of our SCSS filenames.
Admittedly it's pretty confusing to figure out which of these things your styles should belong to.
I put most styles in a file for a given module, eg `auth_module.scss`. That file will include the various components that make up that module.

I create separate Sass files for components when those styles are for small but visually sensical pieces of UI that are shared across multiple modules. As such, all of these components are considered to be part of a `.shared` module, eg `.shared-logoIcon`.

I think of mixins (defined in `mixins.scss`) as either helper functions that take arguments (eg `flex-container`) or as a set of styles that doesn't visually make sense on its own (eg `clickable`).
We use scheduled jobs to do things like calculate daily summary metrics for experiments. For this to work there must be a separate scheduler process running (that process polls Redis for jobs and enqueues them at the appropriate time).

In production we have a separate `scheduler` dyno that's always running. If you want to use scheduled jobs in development you can run the `scheduler` script in `helper_scripts`.
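Conceptually, the scheduler's polling loop is simple. A stdlib-only sketch of the idea (the real implementation is RQ's scheduler, which keeps this state in Redis rather than in memory):

```python
import heapq

def run_due_jobs(schedule, queue, now):
    """Pop every job whose run_at time is <= now and push it onto the
    work queue. `schedule` is a heap of (run_at, job_name) pairs; the
    worker process would then pick jobs off `queue` and execute them."""
    while schedule and schedule[0][0] <= now:
        run_at, job = heapq.heappop(schedule)
        queue.append(job)
    return queue

# Hypothetical jobs: the metrics job is due at t=5, cleanup at t=60.
schedule = [(5, "daily_summary_metrics"), (60, "cleanup")]
heapq.heapify(schedule)
print(run_due_jobs(schedule, [], now=10))  # only jobs due by t=10 run
```

The scheduler runs this check on an interval, which is why a stopped `scheduler` process means scheduled jobs silently never fire.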