---
sidebar: auto
---
# Guide to Super Graph
Get an instant high-performance GraphQL API for your Ruby-on-Rails app without writing a line of code. Super Graph will automatically understand your app's database and expose a secure, fast and complete GraphQL API for it. Built-in support for Rails authentication and JWT tokens.
## Features
- Automatically learns Postgres schemas and relationships
- Supports Belongs-To, One-To-Many and Many-To-Many table relationships
- Works with Rails database schemas
- Full text search and aggregations
- Rails Auth supported (Redis, Memcache, Cookie)
- JWT tokens supported (Auth0, etc)
- Join database queries with remote data sources (APIs like Stripe, Twitter, etc)
- Generates highly optimized and fast Postgres SQL queries
- Uses prepared statements for very fast Postgres queries
- Configure with a simple config file
- High-performance Go codebase
- Tiny docker image and low memory requirements
## Try it out
```bash
# download super graph source
git clone https://github.com/dosco/super-graph.git

# setup the demo rails app & database and run it
./demo start

# sign in to the demo app (user1@demo.com / 123456)
open http://localhost:3000

# try the super graph web ui
open http://localhost:8080
```
::: warning DEMO REQUIREMENTS
This demo requires `docker`. You can either install it using `brew` or from the
docker website [https://docs.docker.com/docker-for-mac/install/](https://docs.docker.com/docker-for-mac/install/)
:::
#### Trying out GraphQL
We currently support the `query` action which is used for fetching data. Support for `mutation` and `subscriptions` is a work in progress. For example, the below GraphQL query would fetch two products that belong to the current user where the price is greater than 10.
#### GQL Query
```graphql
query {
  users {
    id
    email
    picture : avatar
    password
    full_name
    products(limit: 2, where: { price: { gt: 10 } }) {
      id
      name
      description
      price
    }
  }
}
```
The above GraphQL query returns the JSON result below. It handles all kinds of complexity without you having to write a line of code.

For example, there is a where clause with a greater-than (`gt`) condition and a limit on a child field, the `avatar` field is renamed to `picture`, the `password` field is blocked and not returned, and the relationship between the `users` table and the `products` table is auto-discovered and used.
#### JSON Result
```json
{
  "data": {
    "users": [
      {
        "id": 1,
        "email": "odilia@west.info",
        "picture": "https://robohash.org/simur.png?size=300x300",
        "full_name": "Edwin Orn",
        "products": [
          {
            "id": 16,
            "name": "Sierra Nevada Style Ale",
            "description": "Belgian Abbey, 92 IBU, 4.7%, 17.4°Blg",
            "price": 16.47
          },
          ...
        ]
      }
    ]
  }
}
```
#### Try with an authenticated user
In development mode you can use the `X-User-ID: 4` header to set a user id so you don't have to worry about cookies etc. This can be set using the *HTTP Headers* tab at the bottom of the web UI you'll see when you visit the above link. You can also run queries directly from the command line like below.
#### Querying the GQL endpoint
```bash
# fetch the response json directly from the endpoint using user id 5
curl 'http://localhost:8080/api/v1/graphql' \
-H 'content-type: application/json' \
-H 'X-User-ID: 5' \
--data-binary '{"query":"{ products { name price users { email }}}"}'
```
## How to GraphQL
GraphQL (GQL) is a simple query syntax that's quickly replacing REST APIs. GQL is great since it allows web developers to fetch the exact data that they need without depending on changes to backend code. Also, if you squint hard enough it looks a little bit like JSON :smiley:

The below query will fetch a user's name, email and avatar image (renamed as picture). If you also need the user's `id` then just add it to the query.
```graphql
query {
  user {
    full_name
    email
    picture : avatar
  }
}
```
### Fetching data
To fetch a specific `product` by its ID you can use the `id` argument. The real id column name is resolved automatically, so this query will work even if your id column is named something like `product_id`.
```graphql
query {
  products(id: 3) {
    name
  }
}
```
Postgres also supports full text search using a TSV index. Super Graph makes it easy to use this full text search capability using the `search` argument.
```graphql
query {
  products(search: "ale") {
    name
  }
}
```
### Complex queries (Where)
Super Graph supports complex queries where you can add filters, ordering, offsets and limits to the query.
#### Logical Operators
Name | Example | Explained |
--- | --- | --- |
and | price : { and : { gt: 10.5, lt: 20 } } | price > 10.5 AND price < 20
or | or : { price : { greater_than : 20 }, quantity: { gt : 0 } } | price > 20 OR quantity > 0
not | not: { or : { quantity : { eq: 0 }, price : { eq: 0 } } } | NOT (quantity = 0 OR price = 0)
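These operators are used inside a query's `where` argument. For example, the `not`/`or` combination from the table above can be dropped straight into a query; a sketch against the demo's `products` table (which the table above assumes has a `quantity` column):

```graphql
query {
  products(
    # NOT (quantity = 0 OR price = 0)
    where: { not: { or: { quantity: { eq: 0 }, price: { eq: 0 } } } }) {
    id
    name
    price
  }
}
```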
#### Other conditions
Name | Example | Explained |
--- | --- | --- |
eq, equals | id: { eq: 100 } | id = 100
neq, not_equals | id: { not_equals: 100 } | id != 100
gt, greater_than | id: { gt: 100 } | id > 100
lt, lesser_than | id: { lt: 100 } | id < 100
gte, greater_or_equals | id: { gte: 100 } | id >= 100
lte, lesser_or_equals | id: { lesser_or_equals: 100 } | id <= 100
in | status: { in: [ "A", "B", "C" ] } | status IN ('A', 'B', 'C')
nin, not_in | status: { nin: [ "A", "B", "C" ] } | status NOT IN ('A', 'B', 'C')
like | name: { like: "phil%" } | Names starting with 'phil'
nlike, not_like | name: { nlike: "v%m" } | Not names starting with 'v' and ending with 'm'
ilike | name: { ilike: "%wOn" } | Names ending with 'won' case-insensitive
nilike, not_ilike | name: { nilike: "%wOn" } | Not names ending with 'won' case-insensitive
similar | name: { similar: "%(b\|d)%" } | [Similar Docs](https://www.postgresql.org/docs/9/functions-matching.html#FUNCTIONS-SIMILARTO-REGEXP)
nsimilar, not_similar | name: { nsimilar: "%(b\|d)%" } | [Not Similar Docs](https://www.postgresql.org/docs/9/functions-matching.html#FUNCTIONS-SIMILARTO-REGEXP)
has_key | column: { has_key: 'b' } | Does JSON column contain this key
has_key_any | column: { has_key_any: [ a, b ] } | Does JSON column contain any of these keys
has_key_all | column: { has_key_all: [ a, b ] } | Does JSON column contain all of these keys
contains | column: { contains: [1, 2, 4] } | Does the array/json column contain these values
contained_in | column: { contained_in: "{'a':1, 'b':2}" } | Is the array/json column contained in this value
is_null | column: { is_null: true } | Is the column value null or not
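These conditions go in the same `where` argument. A small sketch using `ilike` (the pattern is illustrative):

```graphql
query {
  products(
    # names that contain 'ale', case-insensitive
    where: { name: { ilike: "%ale%" } }) {
    id
    name
  }
}
```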
### Aggregation (Max, Count, etc)
You will often find the need to fetch aggregated values from the database such as `count`, `max`, `min`, etc. This is simple to do with GraphQL: just prefix the aggregation name to the field name that you want to aggregate, like `count_id`. The below query will group products by name and find the minimum price for each group. Notice the `min_price` field: we're adding `min_` to price.
```graphql
query {
  products {
    name
    min_price
  }
}
```
Name | Explained |
--- | --- |
avg | Average value
count | Count the values
max | Maximum value
min | Minimum value
stddev | [Standard Deviation](https://en.wikipedia.org/wiki/Standard_deviation)
stddev_pop | Population Standard Deviation
stddev_samp | Sample Standard Deviation
variance | [Variance](https://en.wikipedia.org/wiki/Variance)
var_pop | Population Variance
var_samp | Sample Variance
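The same prefix convention applies to every function in the table; for example `count_id` and `max_price` on the demo's `products` table (a sketch):

```graphql
query {
  products {
    name
    count_id
    max_price
  }
}
```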
All kinds of queries are possible with GraphQL. Below is an example that uses a lot of the features available. Comments `# hello` are also valid within queries.
```graphql
query {
  products(
    # returns only 30 items
    limit: 30,

    # starts from item 10, commented out for now
    # offset: 10,

    # orders the response items by highest price
    order_by: { price: desc },

    # no duplicate prices returned
    distinct: [ price ]

    # only items with an id >= 20 and < 28 are returned
    where: { id: { and: { greater_or_equals: 20, lt: 28 } } }) {
    id
    name
    price
  }
}
```
### Using variables

Variables (`$product_id`) and their values (`"product_id": 5`) can be passed alongside the GraphQL query. Using variables makes for better client-side code as well as improved server-side SQL query caching. The built-in web UI also supports setting variables. Not having to manipulate your GraphQL query string to insert values into it makes for cleaner and better client-side code.
```javascript
// Define the request object keeping the query and the variables separate
var req = {
  query: '{ product(id: $product_id) { name } }' ,
  variables: { "product_id": 5 }
}

// Use the fetch api to make the query
fetch('http://localhost:8080/api/v1/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(req),
})
.then(res => res.json())
.then(res => console.log(res.data));
```
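The same request works from the command line; the variables simply ride along in the JSON body next to the query. A sketch using the endpoint from the earlier curl example:

```bash
curl 'http://localhost:8080/api/v1/graphql' \
-H 'content-type: application/json' \
--data-binary '{"query":"{ product(id: $product_id) { name } }","variables":{"product_id":5}}'
```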
### Full text search
Every app these days needs search, and often this means reaching for something heavy like Solr. While this will work, why add complexity to your infrastructure when Postgres has really great and fast full text search built-in? And since it's part of Postgres it's also available in Super Graph.
```graphql
query {
  products(
    # Search for all products that contain 'ale' or some version of it
    search: "ale"

    # Return only matches where the price is less than 10
    where: { price: { lt: 10 } }

    # Use the search_rank to order from the best match to the worst
    order_by: { search_rank: desc }) {
    id
    name
    search_rank
    search_headline_description
  }
}
```
This query will use the `tsvector` column in your database table to search for products that contain the query phrase or some version of it. Use the `search_rank` field to get the internal relevance ranking for the search results. And to get the highlighted context within any of the table columns you can use the `search_headline_` field prefix. For example `search_headline_name` will return the contents of the product's name column with the matching query marked with `<b></b>` html tags.
```json
{
  "data": {
    "products": [
      {
        "id": 11,
        "name": "Maharaj",
        "search_rank": 0.243171,
        "search_headline_description": "Blue Moon, Vegetable Beer, Willamette, 1007 - German <b>Ale</b>, 48 IBU, 7.9%, 11.8°Blg"
      },
      {
        "id": 12,
        "name": "Schneider Aventinus",
        "search_rank": 0.243171,
        "search_headline_description": "Dos Equis, Wood-aged Beer, Magnum, 1099 - Whitbread <b>Ale</b>, 15 IBU, 9.5%, 13.0°Blg"
      },
      ...
```
#### Adding search to your Rails app
It's really easy to enable Postgres search on any table within your database schema. All it takes is to create the following migration. In the below example we add a full-text search to the `products` table.
```ruby
class AddSearchColumn < ActiveRecord::Migration[5.1]
  def self.up
    add_column :products, :tsv, :tsvector
    add_index :products, :tsv, using: "gin"

    say_with_time("Adding trigger to update the ts_vector column") do
      execute <<-SQL
        CREATE FUNCTION products_tsv_trigger() RETURNS trigger AS $$
        begin
          new.tsv :=
            setweight(to_tsvector('pg_catalog.english', coalesce(new.name,'')), 'A') ||
            setweight(to_tsvector('pg_catalog.english', coalesce(new.description,'')), 'B');
          return new;
        end
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE ON products FOR EACH ROW EXECUTE PROCEDURE products_tsv_trigger();
      SQL
    end
  end

  def self.down
    say_with_time("Removing trigger to update the tsv column") do
      execute <<-SQL
        DROP TRIGGER tsvectorupdate
        ON products
      SQL
    end

    remove_index :products, :tsv
    remove_column :products, :tsv
  end
end
```
## Remote Joins
It often happens that after fetching some data from the DB we need to call another API to fetch some more data, and have all of it combined into a single JSON response. For example, along with a list of users you need their last 5 payments from Stripe. This requires you to query your DB for the users and Stripe for the payments. Super Graph handles all this for you, and only the fields you requested from the Stripe API are returned.

::: tip Is this fast?
Super Graph is able to fetch remote data and merge it with the DB response in an efficient manner. Several optimizations such as parallel HTTP requests and a zero-allocation JSON merge algorithm make this very fast. All of this without you having to write a line of code.
:::

For example, say you need to list the last 3 payments made by a user. You will first need to look up the user in the database and then call the Stripe API to fetch their last 3 payments. For this to work, the users table in your DB needs a `customer_id` column that contains the Stripe customer ID.

Similarly you could also fetch the user's last tweet, lead info from Salesforce or whatever else you need. It's fine to mix several different `remote joins` into a single GraphQL query.
### Stripe API example

The configuration is self-explanatory. A `payments` field has been added under the `customers` table. This field is added to the `remotes` subsection that defines fields associated with `customers` that are remote and not real database columns.

The `id` parameter maps a column from the `customers` table to the `$id` variable. In this case it maps `$id` to the `customer_id` column.
```yaml
tables:
  - name: customers
    remotes:
      - name: payments
        id: stripe_id
        url: http://rails_app:3000/stripe/$id
        path: data
        # debug: true
        # pass_headers:
        #   - cookie
        #   - host
        set_headers:
          - name: Authorization
            value: Bearer <stripe_api_key>
```
#### How do I make use of this?
Just include `payments` like you would any other GraphQL selector under the `customers` selector. Super Graph will call the configured API for you and stitch (merge) the JSON the API sends back with the JSON generated from the database query. GraphQL features like aliases and fields all work.
```graphql
query {
  customers {
    id
    email
    payments {
      customer_id
      amount
      billing_details
    }
  }
}
```
And voilà, here is the result. You get all of this advanced and honestly complex querying capability without writing a single line of code.
```json
"data": {
  "customers": [
    {
      "id": 1,
      "email": "linseymertz@reilly.co",
      "payments": [
        {
          "customer_id": "cus_YCj3ndB5Mz",
          "amount": 100,
          "billing_details": {
            "address": "1 Infinity Drive",
            "zipcode": "94024"
          }
        },
  ...
```
Even tracing data is available in the Super Graph web UI if tracing is enabled in the config. By default it is enabled in development. Additionally, you can set `debug: true` to enable http request / response dumping to help with debugging.
![Query Tracing](/tracing.png "Super Graph Web UI Query Tracing")
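Both settings live in the config file. A minimal sketch, with `debug` placed on the remote definition as in the Stripe example above:

```yaml
# return query latency information with the response
enable_tracing: true

tables:
  - name: customers
    remotes:
      - name: payments
        # dump the remote http request / response for debugging
        debug: true
```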
## Authentication
You can only have one type of auth enabled. You can pick either Rails or JWT.

### Rails Auth (Devise / Warden)

Almost all Rails apps use Devise or Warden for authentication. Once the user is authenticated a session is created with the user's ID. The session can either be stored in the user's browser as a cookie, or in memcache or redis. If memcache or redis is used then a cookie is set in the user's browser with just the session id.

Super Graph can handle all these variations, including the old and new session formats. Just enable the right `auth` config based on how your rails app is configured.
#### Cookie session store
```yaml
auth:
  type: rails
  cookie: _app_session

  rails:
    # Rails version this is used for reading the
    # various cookies formats.
    version: 5.2

    # Found in 'Rails.application.config.secret_key_base'
    secret_key_base: 0a248500a64c01184edb4d7ad3a805488f8097ac761b76aaa6c17c01dcb7af03a2f18ba61b2868134b9c7b79a122bc0dadff4367414a2d173297bfea92be5566
```
#### Memcache session store
```yaml
auth:
  type: rails
  cookie: _app_session

  rails:
    # Memcache remote cookie store.
    url: memcache://127.0.0.1
```
#### Redis session store
```yaml
auth:
  type: rails
  cookie: _app_session

  rails:
    # Redis remote cookie store
    url: redis://127.0.0.1:6379
    password: ""
    max_idle: 80
    max_active: 12000
```
### JWT Token Auth
```yaml
auth:
  type: jwt

  jwt:
    # the two providers are 'auth0' and 'none'
    provider: auth0
    secret: abc335bfcfdb04e50db5bb0a4d67ab9
    public_key_file: /secrets/public_key.pem
    public_key_type: ecdsa #rsa
```
For JWT tokens we currently support tokens from a provider like Auth0, or if you have a custom solution then we look for the `user_id` in the `subject` claim of the `id token`. If you pick Auth0 then we derive two variables from the token, `user_id` and `user_id_provider`, to use in your filters.

We can get the JWT token either from the `authorization` header, where we expect it to be a `bearer` token, or from a cookie if `cookie` is specified.

For validation a `secret` or a public key (ecdsa or rsa) is required. When using public keys they have to be in a PEM format file.
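With JWT auth enabled, the client just sends the token on each request as a bearer token. A sketch (the token value is a placeholder):

```bash
curl 'http://localhost:8080/api/v1/graphql' \
-H 'content-type: application/json' \
-H 'Authorization: Bearer <id_token>' \
--data-binary '{"query":"{ products { name price } }"}'
```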
## Easy to setup
Configuration files can either be in YAML or JSON; their names are derived from the `GO_ENV` variable. For example, `GO_ENV=prod` will cause the `prod.yaml` config file to be used, while `GO_ENV=dev` will use `dev.yaml`. A path to look for the config files in can be specified using the `-path <folder>` command line argument.
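For example, assuming the binary is named `super-graph` and your config files live in `./config` (both assumptions; adjust to your setup):

```bash
# use prod.yaml (or prod.json) from the ./config folder
GO_ENV=prod ./super-graph -path ./config
```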
We've tried to ensure that the config file is self-documenting and easy to work with.
```yaml
app_name: "Super Graph Development"
host_port: 0.0.0.0:8080
web_ui: true
debug_level: 1

# debug, info, warn, error, fatal, panic, disable
log_level: "info"

# Disable this in development to get a list of
# queries used. When enabled super graph
# will only allow queries from this list
# List saved to ./config/allow.list
use_allow_list: true

# Throw a 401 on auth failure for queries that need auth
# valid values: always, per_query, never
auth_fail_block: always

# Latency tracing for database queries and remote joins
# the resulting latency information is returned with the
# response
enable_tracing: true

# Postgres related environment Variables
# SG_DATABASE_HOST
# SG_DATABASE_PORT
# SG_DATABASE_USER
# SG_DATABASE_PASSWORD

# Auth related environment Variables
# SG_AUTH_RAILS_COOKIE_SECRET_KEY_BASE
# SG_AUTH_RAILS_REDIS_URL
# SG_AUTH_RAILS_REDIS_PASSWORD
# SG_AUTH_JWT_PUBLIC_KEY_FILE

# inflections:
#   person: people
#   sheep: sheep

auth:
  # Can be 'rails' or 'jwt'
  type: rails
  cookie: _app_session

  # Comment this out if you want to disable setting
  # the user_id via a header. Good for testing
  header: X-User-ID

  rails:
    # Rails version this is used for reading the
    # various cookies formats.
    version: 5.2

    # Found in 'Rails.application.config.secret_key_base'
    secret_key_base: 0a248500a64c01184edb4d7ad3a805488f8097ac761b76aaa6c17c01dcb7af03a2f18ba61b2868134b9c7b79a122bc0dadff4367414a2d173297bfea92be5566

    # Remote cookie store. (memcache or redis)
    # url: redis://127.0.0.1:6379
    # password: test
    # max_idle: 80,
    # max_active: 12000,

    # In most cases you don't need these
    # salt: "encrypted cookie"
    # sign_salt: "signed encrypted cookie"
    # auth_salt: "authenticated encrypted cookie"

  # jwt:
  #   provider: auth0
  #   secret: abc335bfcfdb04e50db5bb0a4d67ab9
  #   public_key_file: /secrets/public_key.pem
  #   public_key_type: ecdsa #rsa

database:
  type: postgres
  host: db
  port: 5432
  dbname: app_development
  user: postgres
  password: ''
  # pool_size: 10
  # max_retries: 0
  # log_level: "debug"

  # Define variables here that you want to use in filters
  variables:
    account_id: "select account_id from users where id = $user_id"

  # Define defaults for the field keys and values below
  defaults:
    filter: ["{ user_id: { eq: $user_id } }"]

    # Field and table names that you wish to block
    blacklist:
      - ar_internal_metadata
      - schema_migrations
      - secret
      - password
      - encrypted
      - token

  tables:
    - name: users
      # This filter will overwrite defaults.filter
      filter: ["{ id: { eq: $user_id } }"]

    - name: products
      # Multiple filters are AND'd together
      filter: [
        "{ price: { gt: 0 } }",
        "{ price: { lt: 8 } }"
      ]

    - name: customers
      # No filter is used for this field, not
      # even defaults.filter
      filter: none

      remotes:
        - name: payments
          id: stripe_id
          url: http://rails_app:3000/stripe/$id
          path: data
          # pass_headers:
          #   - cookie
          #   - host
          set_headers:
            - name: Authorization
              value: Bearer <stripe_api_key>

    - # You can create new fields that have a
      # real db table backing them
      name: me
      table: users
      filter: ["{ id: { eq: $user_id } }"]

    # - name: posts
    #   filter: ["{ account_id: { _eq: $account_id } }"]
```
If deploying into environments like Kubernetes it's useful to be able to configure things like secrets and hosts through environment variables, therefore we expose the below environment variables. This is especially useful for secrets since they are usually injected via a secrets management framework, i.e. Kubernetes Secrets.

Keep in mind any value can be overwritten using environment variables; for example `auth.jwt.public_key_type` converts to `SG_AUTH_JWT_PUBLIC_KEY_TYPE`. In short: prefix with `SG_`, uppercase, and change all `.` to `_`.
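For example, under that rule the database settings from the config file could be overridden at deploy time (values are placeholders):

```bash
# overrides database.host and database.password
export SG_DATABASE_HOST=prod-db.internal
export SG_DATABASE_PASSWORD=<secret>
```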
#### Postgres environment variables
```bash
SG_DATABASE_HOST
SG_DATABASE_PORT
SG_DATABASE_USER
SG_DATABASE_PASSWORD
```
#### Auth environment variables
```bash
SG_AUTH_RAILS_COOKIE_SECRET_KEY_BASE
SG_AUTH_RAILS_REDIS_URL
SG_AUTH_RAILS_REDIS_PASSWORD
SG_AUTH_JWT_PUBLIC_KEY_FILE
```
## Developing Super Graph

If you want to build and run Super Graph from code then the below commands will build the web ui and launch Super Graph in developer mode with a watcher to rebuild on code changes. The demo rails app is also launched to make it easier to test changes.
```bash
# yarn is needed to build the web ui
brew install yarn

# yarn install dependencies and build the web ui
(cd web && yarn install && yarn build)

# generate some stuff the go code needs
go generate ./...

# do this only the first time to set up the database
docker-compose run rails_app rake db:create db:migrate db:seed

# start super graph in development mode with a change watcher
docker-compose up
```
## MIT License
MIT Licensed | Copyright © 2018-present Vikram Rangnekar