Compare commits
2 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 62fd1eac55 | |
| | 1a3d74e1ce | |
.gitignore (vendored, 4 changes)

@@ -34,4 +34,6 @@ supergraph
*-fuzz.zip
crashers
suppressions
release
release
.gofuzz
*-fuzz.zip
@@ -167,10 +167,13 @@ roles:
        block: false

  - name: deals
    query:
      limit: 3
      columns: ["name", "description" ]
      aggregation: false

  - name: purchases
    query:
      limit: 3
      aggregation: false

  - name: user

@@ -183,12 +186,10 @@ roles:
    query:
      limit: 50
      filters: ["{ user_id: { eq: $user_id } }"]
      columns: ["id", "name", "description", "search_rank", "search_headline_description" ]
      disable_functions: false

    insert:
      filters: ["{ user_id: { eq: $user_id } }"]
      columns: ["id", "name", "description" ]
      presets:
        - user_id: "$user_id"
        - created_at: "now"
docs/guide.md (244 changes)

@@ -4,9 +4,9 @@ sidebar: auto

# Guide to Super Graph

Super Graph is a micro-service that instantly and without code gives you a high performance and secure GraphQL API. Your GraphQL queries are auto translated into a single fast SQL query. No more writing API code as you develop your web frontend just make the query you need and Super Graph will do the rest.
Super Graph is a service that instantly and without code gives you a high performance and secure GraphQL API. Your GraphQL queries are auto translated into a single fast SQL query. No more spending weeks or months writing backend API code. Just make the query you need and Super Graph will do the rest.

Super Graph has a rich feature set like integrating with your existing Ruby on Rails apps, joining your DB with data from remote APIs, Role and Attribute based access control, Supoport for JWT tokens, DB migrations, seeding and a lot more.
Super Graph has a rich feature set like integrating with your existing Ruby on Rails apps, joining your DB with data from remote APIs, Role and Attribute based access control, Support for JWT tokens, DB migrations, seeding and a lot more.

## Features

@@ -47,14 +47,14 @@ open http://localhost:3000
open http://localhost:8080
```

::: warning DEMO REQUIREMENTS
::: tip DEMO REQUIREMENTS
This demo requires `docker`; you can either install it using `brew` or from the
docker website [https://docs.docker.com/docker-for-mac/install/](https://docs.docker.com/docker-for-mac/install/)
:::

#### Trying out GraphQL

We currently fully support queries and mutations. Support for `subscriptions` is work in progress. For example the below GraphQL query would fetch two products that belong to the current user where the price is greater than 10.
We fully support queries and mutations. For example, the below GraphQL query would fetch two products that belong to the current user where the price is greater than 10.

#### GQL Query

@@ -76,32 +76,6 @@ query {
  }
}
```

In another example the below GraphQL mutation would insert a product into the database. The first part of the below example is the variable data and the second half is the GraphQL mutation. For mutations data has to always ben passed as a variable.

```json
{
  "data": {
    "name": "Art of Computer Programming",
    "description": "The Art of Computer Programming (TAOCP) is a comprehensive monograph written by computer scientist Donald Knuth",
    "price": 30.5
  }
}
```

```graphql
mutation {
  product(insert: $data) {
    id
    name
  }
}
```

The above GraphQL query returns the JSON result below. It handles all
kinds of complexity without you having to write a line of code.

For example there is a while greater than `gt` and a limit clause on a child field. And the `avatar` field is renamed to `picture`. The `password` field is blocked and not returned. Finally the relationship between the `users` table and the `products` table is auto discovered and used.

#### JSON Result

```json
@@ -128,19 +102,107 @@ For example there is a while greater than `gt` and a limit clause on a child field
  }
}
```

#### Try with an authenticated user
::: tip Testing with a user
In development mode you can use the `X-User-ID: 4` header to set a user id so you don't have to worry about cookies etc. This can be set using the *HTTP Headers* tab at the bottom of the web UI.
:::

In development mode you can use the `X-User-ID: 4` header to set a user id so you don't have to worries about cookies etc. This can be set using the *HTTP Headers* tab at the bottom of the web UI you'll see when you visit the above link. You can also directly run queries from the commandline like below.

In another example the below GraphQL mutation would insert a product into the database. The first part of the below example is the variable data and the second half is the GraphQL mutation. For mutations, data always has to be passed as a variable.

#### Querying the GQL endpoint

```json
{
  "data": {
    "name": "Art of Computer Programming",
    "description": "The Art of Computer Programming (TAOCP) is a comprehensive monograph written by computer scientist Donald Knuth",
    "price": 30.5
  }
}
```

```bash
```graphql
mutation {
  product(insert: $data) {
    id
    name
  }
}
```

# fetch the response json directly from the endpoint using user id 5
curl 'http://localhost:8080/api/v1/graphql' \
  -H 'content-type: application/json' \
  -H 'X-User-ID: 5' \
  --data-binary '{"query":"{ products { name price users { email }}}"}'

## Why Super Graph

Let's take a simple example: say you want to fetch 5 products priced over 12 dollars along with the photos of the products and the users that own them. Additionally also fetch the last 10 of your own purchases along with the name and ID of the product you purchased. This is a common type of query to render a view in say an ecommerce app. Let's be honest, it's not very exciting to write and maintain. Keep in mind the data needed will only continue to grow and change as your app evolves. Developers might find that most ORMs will not be able to do all of this in a single SQL query and will require n+1 queries to fetch all the data and assemble it into the right JSON response.

What if I told you Super Graph will fetch all this data with a single SQL query and without you having to write a single line of code? Also as your app evolves feel free to evolve the query as you like. In our experience Super Graph saves us hundreds or thousands of man hours that we can put towards the more exciting parts of our app.

#### GraphQL Query

```graphql
query {
  products(limit: 5, where: { price: { gt: 12 } }) {
    id
    name
    description
    price
    photos {
      url
    }
    user {
      id
      email
      picture: avatar
      full_name
    }
  }
  purchases(
    limit: 10,
    order_by: { created_at: desc },
    where: { user_id: { eq: $user_id } }
  ) {
    id
    created_at
    product {
      id
      name
    }
  }
}
```

#### JSON Result

```json
{
  "data": {
    "products": [
      {
        "id": 1,
        "name": "Oaked Arrogant Bastard Ale",
        "description": "Coors lite, European Amber Lager, Perle, 1272 - American Ale II, 38 IBU, 6.4%, 9.7°Blg",
        "price": 20,
        "photos": [{
          "url": "https://www.scienceworld.ca/wp-content/uploads/science-world-beer-flavours.jpg"
        }],
        "user": {
          "id": 1,
          "email": "user0@demo.com",
          "picture": "https://robohash.org/sitaliquamquaerat.png?size=300x300&set=set1",
          "full_name": "Mrs. Wilhemina Hilpert"
        }
      },
      ...
    ]
  },
  "purchases": [
    {
      "id": 5,
      "created_at": "2020-01-24T05:34:39.880599",
      "product": {
        "id": 45,
        "name": "Brooklyn Black"
      }
    },
    ...
  ]
}
```

## Get Started
@@ -1169,18 +1231,20 @@ end

## API Security

One of the the most common questions I get asked if what happens if a user out on the internet issues queries
that we don't want issued. For example how do we stop him from fetching all users or the emails of users. Our answer to this is that it is not an issue as this cannot happen, let me explain.

One of the most common questions I get asked is what happens if a user out on the internet sends queries
that we don't want run. For example, how do we stop them from fetching all users or the emails of users? Our answer to this is that it is not an issue as this cannot happen; let me explain.

Super Graph runs in one of two modes `development` or `production`, this is controlled via the config value `production: false` when it's false it's running in development mode and when true, production. In development mode all the **named** quries (including mutations) you run are saved into the allow list (`./config/allow.list`). I production mode when Super Graph starts only the queries from this allow list file are registered with the database as (prepared statements)[https://stackoverflow.com/questions/8263371/how-can-prepared-statements-protect-from-sql-injection-attacks]. Prepared statements are designed by databases to be fast and secure. They protect against all kinds of sql injection attacks and since they are pre-processed and pre-planned they are much faster to run then raw sql queries. Also there's no GraphQL to SQL compiling happening in production mode which makes your queries lighting fast as they directly goto the database with almost no overhead.

Super Graph runs in one of two modes, `development` or `production`. This is controlled via the config value `production: false`: when it's false it's running in development mode and when true, production. In development mode all the **named** queries (including mutations) are saved to the allow list `./config/allow.list`. In production mode, when Super Graph starts, only the queries from this allow list file are registered with the database as [prepared statements](https://stackoverflow.com/questions/8263371/how-can-prepared-statements-protect-from-sql-injection-attacks).

In short in production only queries listed in the allow list file (`./config/allow.list`) can be used all other queries will be blocked.

Prepared statements are designed by databases to be fast and secure. They protect against all kinds of SQL injection attacks and since they are pre-processed and pre-planned they are much faster to run than raw SQL queries. Also there's no GraphQL to SQL compiling happening in production mode, which makes your queries lightning fast as they are sent directly to the database with almost no overhead.

In short, in production only queries listed in the allow list file `./config/allow.list` can be used; all other queries will be blocked.
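To make the prepared-statement point concrete, here is a generic Go illustration (not Super Graph internals): the statement is parsed and planned once and only parameter values travel with each execution. The driver import, connection string, and table are assumptions for the sketch.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/jackc/pgx/v4/stdlib" // assumed Postgres driver; any database/sql driver works
)

func main() {
	db, err := sql.Open("pgx", "postgres://postgres:postgres@localhost:5432/app_development")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Parsed and planned once by the database; user input can never change
	// the shape of the query, only the parameter values.
	stmt, err := db.Prepare(`SELECT id, name FROM products WHERE user_id = $1 AND price > $2`)
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	rows, err := stmt.Query(5, 10)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id int
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			log.Fatal(err)
		}
		fmt.Println(id, name)
	}
}
```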
::: tip How to think about the allow list?
The allow list file is essentially a list of all your exposed API calls and the data thats passes within them in plain text. It's very easy to build tooling to do things like parsing this file within your tests to ensure fields like `credit_card_no` are not accidently leaked. It's a great way to build compliance tooling and ensure your user data is always safe.
The allow list file is essentially a list of all your exposed API calls and the data that passes within them. It's very easy to build tooling to do things like parsing this file within your tests to ensure fields like `credit_card_no` are not accidentally leaked. It's a great way to build compliance tooling and ensure your user data is always safe.
:::
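As a rough sketch of the kind of compliance tooling the tip above describes, the hypothetical Go test below scans the allow list for sensitive column names and fails if any appear. The file path and field names are assumptions; this is not part of Super Graph itself.

```go
package config_test

import (
	"io/ioutil"
	"strings"
	"testing"
)

// TestAllowListHasNoSensitiveFields fails if any saved query in the allow
// list mentions a column we never want exposed through the API.
func TestAllowListHasNoSensitiveFields(t *testing.T) {
	data, err := ioutil.ReadFile("config/allow.list")
	if err != nil {
		t.Skipf("allow list not found: %v", err)
	}

	sensitive := []string{"credit_card_no", "encrypted_password", "reset_password_token"}

	for i, line := range strings.Split(string(data), "\n") {
		for _, field := range sensitive {
			if strings.Contains(line, field) {
				t.Errorf("allow.list line %d exposes %q", i+1, field)
			}
		}
	}
}
```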
This is an example of a named query `getUserWithProducts` is the name you've given to this query it can be anything you like but should be unique across all you're queries. Only named queries are saved in the allow list in development mode the allow list is not modified in production mode.
This is an example of a named query; `getUserWithProducts` is the name you've given to this query. It can be anything you like but should be unique across all your queries. Only named queries are saved in the allow list in development mode.

```graphql
@@ -1201,7 +1265,7 @@ query getUserWithProducts {

## Authentication

You can only have one type of auth enabled. You can either pick Rails or JWT.
You can only have one type of auth enabled, either Rails or JWT.

### Ruby on Rails
@@ -1255,7 +1319,7 @@ auth:
    max_active: 12000
```

### JWT Token Auth
### JWT Tokens

```yaml
auth:

@@ -1269,14 +1333,67 @@ auth:
    public_key_type: ecdsa #rsa
```

For JWT tokens we currently support tokens from a provider like Auth0
or if you have a custom solution then we look for the `user_id` in the
`subject` claim of of the `id token`. If you pick Auth0 then we derive two variables from the token `user_id` and `user_id_provider` for to use in your filters.

For JWT tokens we currently support tokens from a provider like Auth0, or if you have a custom solution then we look for the `user_id` in the `subject` claim of the `id token`. If you pick Auth0 then we derive two variables from the token, `user_id` and `user_id_provider`, to use in your filters.

We can get the JWT token either from the `authorization` header, where we expect it to be a `bearer` token, or if `cookie` is specified then we look there.

For validation a `secret` or a public key (ecdsa or rsa) is required. When using public keys they have to be in a PEM format file.
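For reference, here is a minimal Go sketch (not Super Graph's own code) of what this validation amounts to: load a PEM public key, verify the token, and read the user id from the `sub` claim, using the jwt-go library already listed in go.mod. The key path and token string are placeholders.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	jwt "github.com/dgrijalva/jwt-go"
)

func main() {
	// Path is illustrative; it matches the public_key_file config option.
	pem, err := ioutil.ReadFile("/secrets/public_key.pem")
	if err != nil {
		log.Fatal(err)
	}
	key, err := jwt.ParseECPublicKeyFromPEM(pem)
	if err != nil {
		log.Fatal(err)
	}

	tokenString := "eyJhbGciOiJFUzI1NiIs..." // placeholder bearer token

	claims := jwt.StandardClaims{}
	if _, err = jwt.ParseWithClaims(tokenString, &claims, func(t *jwt.Token) (interface{}, error) {
		return key, nil
	}); err != nil {
		log.Fatal(err)
	}

	// The subject claim is what Super Graph treats as the user id ($user_id).
	fmt.Println("user_id:", claims.Subject)
}
```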
### HTTP Headers

```yaml
header:
  name: X-AppEngine-QueueName
  exists: true
  #value: default
```

Header auth is usually the best option to authenticate requests to the action endpoints. For example you
might want to use an action to refresh a materialized view every hour and only want a cron service like the Google AppEngine Cron service to make that request; in this case a config similar to the one above will do.

The `exists: true` parameter ensures that only the existence of the header is checked, not its value. The `value` parameter lets you confirm that the value matches the one assigned to the parameter. This helps in the case you are using a shared secret to protect the endpoint.

### Named Auth

```yaml
# You can add additional named auths to use with actions
# In this example actions using this auth can only be
# called from the Google Appengine Cron service that
# sets a special header to all its requests
auths:
  - name: from_taskqueue
    type: header
    header:
      name: X-Appengine-Cron
      exists: true
```

In addition to the default auth configuration you can create additional named auth configurations to be used
with features like `actions`. For example, while your main GraphQL endpoint uses JWT for authentication you may want to use a header value to ensure your actions can only be called by clients having access to a shared secret
or security header.

## Actions

Actions is a very useful feature that is currently a work in progress. For now the best use case for actions is to
refresh database tables like materialized views or call a database procedure to refresh a cache table, etc. An action creates an HTTP endpoint that anyone can call to have the SQL query executed. The below example will create an endpoint `/api/v1/actions/refresh_leaderboard_users`; any request sent to that endpoint will cause the SQL query to be executed. The `auth_name` points to a named auth that should be used to secure this endpoint. In the future we have big plans to allow your own custom code to run using actions.

```yaml
actions:
  - name: refresh_leaderboard_users
    sql: REFRESH MATERIALIZED VIEW CONCURRENTLY "leaderboard_users"
    auth_name: from_taskqueue
```
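As a quick way to exercise such an endpoint, the hypothetical Go client below calls the action defined above and sets the `X-Appengine-Cron` header that its named auth checks for. It is only a sketch for local testing, not part of the project.

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET",
		"http://localhost:8080/api/v1/actions/refresh_leaderboard_users", nil)
	if err != nil {
		log.Fatal(err)
	}
	// The from_taskqueue auth only checks that this header exists.
	req.Header.Set("X-Appengine-Cron", "true")

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()

	log.Println("action returned:", res.Status)
}
```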
#### Using CURL to test a query

```bash
# fetch the response json directly from the endpoint using user id 5
curl 'http://localhost:8080/api/v1/graphql' \
  -H 'content-type: application/json' \
  -H 'X-User-ID: 5' \
  --data-binary '{"query":"{ products { name price users { email }}}"}'
```

## Access Control

It's common for APIs to control what information they return or insert based on the role of the user. In Super Graph we have two primary roles, `user` and `anon`: the first for users where a `user_id` is available, the latter for users where it's not.
@@ -1521,6 +1638,22 @@ auth:
  # public_key_file: /secrets/public_key.pem
  # public_key_type: ecdsa #rsa

  # header:
  #   name: dnt
  #   exists: true
  #   value: localhost:8080

# You can add additional named auths to use with actions
# In this example actions using this auth can only be
# called from the Google Appengine Cron service that
# sets a special header to all its requests
auths:
  - name: from_taskqueue
    type: header
    header:
      name: X-Appengine-Cron
      exists: true

database:
  type: postgres
  host: db

@@ -1551,6 +1684,17 @@ database:
    - encrypted
    - token

# Create custom actions with their own api endpoints
# For example the below action will be available at /api/v1/actions/refresh_leaderboard_users
# A request to this url will execute the configured SQL query
# which in this case refreshes a materialized view in the database.
# The auth_name is from one of the configured auths
actions:
  - name: refresh_leaderboard_users
    sql: REFRESH MATERIALIZED VIEW CONCURRENTLY "leaderboard_users"
    auth_name: from_taskqueue


tables:
  - name: customers
    remotes:
go.mod (1 change)

@@ -12,6 +12,7 @@ require (
	github.com/dgrijalva/jwt-go v3.2.0+incompatible
	github.com/dlclark/regexp2 v1.2.0 // indirect
	github.com/dop251/goja v0.0.0-20190912223329-aa89e6a4c733
	github.com/dvyukov/go-fuzz v0.0.0-20191206100749-a378175e205c // indirect
	github.com/fsnotify/fsnotify v1.4.7
	github.com/garyburd/redigo v1.6.0
	github.com/go-sourcemap/sourcemap v2.1.2+incompatible // indirect
go.sum (2 changes)

@@ -54,6 +54,8 @@ github.com/dlclark/regexp2 v1.2.0 h1:8sAhBGEM0dRWogWqWyQeIJnxjWO6oIjl8FKqREDsGfk
github.com/dlclark/regexp2 v1.2.0/go.mod h1:2pZnwuY/m+8K6iRw6wQdMtk+rH5tNGR1i55kozfMjCc=
github.com/dop251/goja v0.0.0-20190912223329-aa89e6a4c733 h1:cyNc40Dx5YNEO94idePU8rhVd3dn+sd04Arh0kDBAaw=
github.com/dop251/goja v0.0.0-20190912223329-aa89e6a4c733/go.mod h1:Mw6PkjjMXWbTj+nnj4s3QPXq1jaT0s5pC0iFD4+BOAA=
github.com/dvyukov/go-fuzz v0.0.0-20191206100749-a378175e205c h1:/bXaeEuNG6V0HeyEGw11DYLW5BGsOPlcVRIXbHNUWSo=
github.com/dvyukov/go-fuzz v0.0.0-20191206100749-a378175e205c/go.mod h1:11Gm+ccJnvAhCNLlf5+cS9KjtbaD5I5zaZpFMsTHWTw=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/garyburd/redigo v1.6.0 h1:0VruCpn7yAIIu7pWVClQC8wxCJEcG3nyzpMSHKi1PQc=
psql/fuzz.go (new file, 54 lines)

// +build gofuzz

package psql

import (
	"encoding/json"

	"github.com/dosco/super-graph/qcode"
)

var (
	qcompileTest, _ = qcode.NewCompiler(qcode.Config{})

	schema = getTestSchema()

	vars = NewVariables(map[string]string{
		"admin_account_id": "5",
	})

	pcompileTest = NewCompiler(Config{
		Schema: schema,
		Vars:   vars,
	})
)

// FuzzerEntrypoint for Fuzzbuzz
func Fuzz(data []byte) int {
	gql := `mutation {
		product(insert: $data) {
			id
			name
			user {
				id
				full_name
				email
			}
		}
	}`

	qc, err := qcompileTest.Compile([]byte(gql), "user")
	if err != nil {
		panic("qcompile can't fail")
	}

	vars := map[string]json.RawMessage{
		"data": json.RawMessage(data),
	}

	_, _, err = pcompileTest.CompileEx(qc, vars)
	if err != nil {
		return 0
	}

	return 1
}
@@ -15,7 +15,10 @@ func (c *compilerContext) renderInsert(qc *qcode.QCode, w io.Writer,

	insert, ok := vars[qc.ActionVar]
	if !ok {
		return 0, fmt.Errorf("Variable '%s' not !defined", qc.ActionVar)
		return 0, fmt.Errorf("variable '%s' not defined", qc.ActionVar)
	}
	if len(insert) == 0 {
		return 0, fmt.Errorf("variable '%s' is empty", qc.ActionVar)
	}

	io.WriteString(c.w, `WITH "_sg_input" AS (SELECT '{{`)
@@ -446,7 +446,10 @@ func (c *compilerContext) renderUpsert(qc *qcode.QCode, w io.Writer,

	upsert, ok := vars[qc.ActionVar]
	if !ok {
		return 0, fmt.Errorf("Variable '%s' not defined", qc.ActionVar)
		return 0, fmt.Errorf("variable '%s' not defined", qc.ActionVar)
	}
	if len(upsert) == 0 {
		return 0, fmt.Errorf("variable '%s' is empty", qc.ActionVar)
	}

	if ti.PrimaryCol == nil {
@ -3,7 +3,6 @@ package psql
|
||||
import (
|
||||
"log"
|
||||
"os"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/dosco/super-graph/qcode"
|
||||
@ -128,97 +127,7 @@ func TestMain(m *testing.M) {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
tables := []DBTable{
|
||||
DBTable{Name: "customers", Type: "table"},
|
||||
DBTable{Name: "users", Type: "table"},
|
||||
DBTable{Name: "products", Type: "table"},
|
||||
DBTable{Name: "purchases", Type: "table"},
|
||||
DBTable{Name: "tags", Type: "table"},
|
||||
DBTable{Name: "tag_count", Type: "json"},
|
||||
}
|
||||
|
||||
columns := [][]DBColumn{
|
||||
[]DBColumn{
|
||||
DBColumn{ID: 1, Name: "id", Type: "bigint", NotNull: true, PrimaryKey: true, UniqueKey: true},
|
||||
DBColumn{ID: 2, Name: "full_name", Type: "character varying", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 3, Name: "phone", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 4, Name: "email", Type: "character varying", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 5, Name: "encrypted_password", Type: "character varying", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 6, Name: "reset_password_token", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 7, Name: "reset_password_sent_at", Type: "timestamp without time zone", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 8, Name: "remember_created_at", Type: "timestamp without time zone", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 9, Name: "created_at", Type: "timestamp without time zone", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 10, Name: "updated_at", Type: "timestamp without time zone", NotNull: true, PrimaryKey: false, UniqueKey: false}},
|
||||
[]DBColumn{
|
||||
DBColumn{ID: 1, Name: "id", Type: "bigint", NotNull: true, PrimaryKey: true, UniqueKey: true},
|
||||
DBColumn{ID: 2, Name: "full_name", Type: "character varying", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 3, Name: "phone", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 4, Name: "avatar", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 5, Name: "email", Type: "character varying", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 6, Name: "encrypted_password", Type: "character varying", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 7, Name: "reset_password_token", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 8, Name: "reset_password_sent_at", Type: "timestamp without time zone", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 9, Name: "remember_created_at", Type: "timestamp without time zone", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 10, Name: "created_at", Type: "timestamp without time zone", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 11, Name: "updated_at", Type: "timestamp without time zone", NotNull: true, PrimaryKey: false, UniqueKey: false}},
|
||||
[]DBColumn{
|
||||
DBColumn{ID: 1, Name: "id", Type: "bigint", NotNull: true, PrimaryKey: true, UniqueKey: true},
|
||||
DBColumn{ID: 2, Name: "name", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 3, Name: "description", Type: "text", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 4, Name: "price", Type: "numeric(7,2)", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 5, Name: "user_id", Type: "bigint", NotNull: false, PrimaryKey: false, UniqueKey: false, FKeyTable: "users", FKeyColID: []int16{1}},
|
||||
DBColumn{ID: 6, Name: "created_at", Type: "timestamp without time zone", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 7, Name: "updated_at", Type: "timestamp without time zone", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 8, Name: "tsv", Type: "tsvector", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 9, Name: "tags", Type: "text[]", NotNull: false, PrimaryKey: false, UniqueKey: false, FKeyTable: "tags", FKeyColID: []int16{3}, Array: true},
|
||||
DBColumn{ID: 9, Name: "tag_count", Type: "json", NotNull: false, PrimaryKey: false, UniqueKey: false, FKeyTable: "tag_count", FKeyColID: []int16{}}},
|
||||
[]DBColumn{
|
||||
DBColumn{ID: 1, Name: "id", Type: "bigint", NotNull: true, PrimaryKey: true, UniqueKey: true},
|
||||
DBColumn{ID: 2, Name: "customer_id", Type: "bigint", NotNull: false, PrimaryKey: false, UniqueKey: false, FKeyTable: "customers", FKeyColID: []int16{1}},
|
||||
DBColumn{ID: 3, Name: "product_id", Type: "bigint", NotNull: false, PrimaryKey: false, UniqueKey: false, FKeyTable: "products", FKeyColID: []int16{1}},
|
||||
DBColumn{ID: 4, Name: "sale_type", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 5, Name: "quantity", Type: "integer", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 6, Name: "due_date", Type: "timestamp without time zone", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 7, Name: "returned", Type: "timestamp without time zone", NotNull: false, PrimaryKey: false, UniqueKey: false}},
|
||||
[]DBColumn{
|
||||
DBColumn{ID: 1, Name: "id", Type: "bigint", NotNull: true, PrimaryKey: true, UniqueKey: true},
|
||||
DBColumn{ID: 2, Name: "name", Type: "text", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 3, Name: "slug", Type: "text", NotNull: false, PrimaryKey: false, UniqueKey: false}},
|
||||
[]DBColumn{
|
||||
DBColumn{ID: 1, Name: "tag_id", Type: "bigint", NotNull: false, PrimaryKey: false, UniqueKey: false, FKeyTable: "tags", FKeyColID: []int16{1}},
|
||||
DBColumn{ID: 2, Name: "count", Type: "int", NotNull: false, PrimaryKey: false, UniqueKey: false}},
|
||||
}
|
||||
|
||||
for i := range tables {
|
||||
tables[i].Key = strings.ToLower(tables[i].Name)
|
||||
for n := range columns[i] {
|
||||
columns[i][n].Key = strings.ToLower(columns[i][n].Name)
|
||||
}
|
||||
}
|
||||
|
||||
schema := &DBSchema{
|
||||
ver: 110000,
|
||||
t: make(map[string]*DBTableInfo),
|
||||
rm: make(map[string]map[string]*DBRel),
|
||||
}
|
||||
|
||||
aliases := map[string][]string{
|
||||
"users": []string{"mes"},
|
||||
}
|
||||
|
||||
for i, t := range tables {
|
||||
err := schema.addTable(t, columns[i], aliases)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
for i, t := range tables {
|
||||
err := schema.updateRelationships(t, columns[i])
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
}
|
||||
schema := getTestSchema()
|
||||
|
||||
vars := NewVariables(map[string]string{
|
||||
"admin_account_id": "5",
|
||||
|
@@ -825,13 +825,11 @@ func (c *compilerContext) renderFrom(sel *qcode.Select, ti *DBTableInfo, rel *DB
}

func (c *compilerContext) renderOrderByColumns(sel *qcode.Select, ti *DBTableInfo) {
	colsRendered := len(sel.Cols) != 0
	//colsRendered := len(sel.Cols) != 0

	for i := range sel.OrderBy {
		if colsRendered {
			//io.WriteString(w, ", ")
			io.WriteString(c.w, `, `)
		}
		//io.WriteString(w, ", ")
		io.WriteString(c.w, `, `)

		col := sel.OrderBy[i].Col
		//fmt.Fprintf(w, `"%s_%d"."%s" AS "%s_%d_%s_ob"`,
@@ -151,6 +151,7 @@ SELECT
	pg_catalog.format_type(f.atttypid,f.atttypmod) AS type,
	CASE
		WHEN f.attndims != 0 THEN true
		WHEN right(pg_catalog.format_type(f.atttypid,f.atttypmod), 2) = '[]' THEN true
		ELSE false
	END AS array,
	CASE

@@ -175,7 +176,7 @@ FROM pg_attribute f
	LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
	LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey)
	LEFT JOIN pg_class AS g ON p.confrelid = g.oid
	WHERE c.relkind = ('r'::char)
	WHERE c.relkind IN ('r', 'v', 'm', 'f')
	AND n.nspname = $1 -- Replace with Schema name
	AND c.relname = $2 -- Replace with table name
	AND f.attnum > 0
102
psql/test_schema.go
Normal file
102
psql/test_schema.go
Normal file
@ -0,0 +1,102 @@
|
||||
package psql
|
||||
|
||||
import (
|
||||
"log"
|
||||
"strings"
|
||||
)
|
||||
|
||||
func getTestSchema() *DBSchema {
|
||||
tables := []DBTable{
|
||||
DBTable{Name: "customers", Type: "table"},
|
||||
DBTable{Name: "users", Type: "table"},
|
||||
DBTable{Name: "products", Type: "table"},
|
||||
DBTable{Name: "purchases", Type: "table"},
|
||||
DBTable{Name: "tags", Type: "table"},
|
||||
DBTable{Name: "tag_count", Type: "json"},
|
||||
}
|
||||
|
||||
columns := [][]DBColumn{
|
||||
[]DBColumn{
|
||||
DBColumn{ID: 1, Name: "id", Type: "bigint", NotNull: true, PrimaryKey: true, UniqueKey: true},
|
||||
DBColumn{ID: 2, Name: "full_name", Type: "character varying", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 3, Name: "phone", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 4, Name: "email", Type: "character varying", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 5, Name: "encrypted_password", Type: "character varying", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 6, Name: "reset_password_token", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 7, Name: "reset_password_sent_at", Type: "timestamp without time zone", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 8, Name: "remember_created_at", Type: "timestamp without time zone", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 9, Name: "created_at", Type: "timestamp without time zone", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 10, Name: "updated_at", Type: "timestamp without time zone", NotNull: true, PrimaryKey: false, UniqueKey: false}},
|
||||
[]DBColumn{
|
||||
DBColumn{ID: 1, Name: "id", Type: "bigint", NotNull: true, PrimaryKey: true, UniqueKey: true},
|
||||
DBColumn{ID: 2, Name: "full_name", Type: "character varying", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 3, Name: "phone", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 4, Name: "avatar", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 5, Name: "email", Type: "character varying", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 6, Name: "encrypted_password", Type: "character varying", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 7, Name: "reset_password_token", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 8, Name: "reset_password_sent_at", Type: "timestamp without time zone", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 9, Name: "remember_created_at", Type: "timestamp without time zone", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 10, Name: "created_at", Type: "timestamp without time zone", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 11, Name: "updated_at", Type: "timestamp without time zone", NotNull: true, PrimaryKey: false, UniqueKey: false}},
|
||||
[]DBColumn{
|
||||
DBColumn{ID: 1, Name: "id", Type: "bigint", NotNull: true, PrimaryKey: true, UniqueKey: true},
|
||||
DBColumn{ID: 2, Name: "name", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 3, Name: "description", Type: "text", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 4, Name: "price", Type: "numeric(7,2)", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 5, Name: "user_id", Type: "bigint", NotNull: false, PrimaryKey: false, UniqueKey: false, FKeyTable: "users", FKeyColID: []int16{1}},
|
||||
DBColumn{ID: 6, Name: "created_at", Type: "timestamp without time zone", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 7, Name: "updated_at", Type: "timestamp without time zone", NotNull: true, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 8, Name: "tsv", Type: "tsvector", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 9, Name: "tags", Type: "text[]", NotNull: false, PrimaryKey: false, UniqueKey: false, FKeyTable: "tags", FKeyColID: []int16{3}, Array: true},
|
||||
DBColumn{ID: 9, Name: "tag_count", Type: "json", NotNull: false, PrimaryKey: false, UniqueKey: false, FKeyTable: "tag_count", FKeyColID: []int16{}}},
|
||||
[]DBColumn{
|
||||
DBColumn{ID: 1, Name: "id", Type: "bigint", NotNull: true, PrimaryKey: true, UniqueKey: true},
|
||||
DBColumn{ID: 2, Name: "customer_id", Type: "bigint", NotNull: false, PrimaryKey: false, UniqueKey: false, FKeyTable: "customers", FKeyColID: []int16{1}},
|
||||
DBColumn{ID: 3, Name: "product_id", Type: "bigint", NotNull: false, PrimaryKey: false, UniqueKey: false, FKeyTable: "products", FKeyColID: []int16{1}},
|
||||
DBColumn{ID: 4, Name: "sale_type", Type: "character varying", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 5, Name: "quantity", Type: "integer", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 6, Name: "due_date", Type: "timestamp without time zone", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 7, Name: "returned", Type: "timestamp without time zone", NotNull: false, PrimaryKey: false, UniqueKey: false}},
|
||||
[]DBColumn{
|
||||
DBColumn{ID: 1, Name: "id", Type: "bigint", NotNull: true, PrimaryKey: true, UniqueKey: true},
|
||||
DBColumn{ID: 2, Name: "name", Type: "text", NotNull: false, PrimaryKey: false, UniqueKey: false},
|
||||
DBColumn{ID: 3, Name: "slug", Type: "text", NotNull: false, PrimaryKey: false, UniqueKey: false}},
|
||||
[]DBColumn{
|
||||
DBColumn{ID: 1, Name: "tag_id", Type: "bigint", NotNull: false, PrimaryKey: false, UniqueKey: false, FKeyTable: "tags", FKeyColID: []int16{1}},
|
||||
DBColumn{ID: 2, Name: "count", Type: "int", NotNull: false, PrimaryKey: false, UniqueKey: false}},
|
||||
}
|
||||
|
||||
for i := range tables {
|
||||
tables[i].Key = strings.ToLower(tables[i].Name)
|
||||
for n := range columns[i] {
|
||||
columns[i][n].Key = strings.ToLower(columns[i][n].Name)
|
||||
}
|
||||
}
|
||||
|
||||
schema := &DBSchema{
|
||||
ver: 110000,
|
||||
t: make(map[string]*DBTableInfo),
|
||||
rm: make(map[string]map[string]*DBRel),
|
||||
}
|
||||
|
||||
aliases := map[string][]string{
|
||||
"users": []string{"mes"},
|
||||
}
|
||||
|
||||
for i, t := range tables {
|
||||
err := schema.addTable(t, columns[i], aliases)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
for i, t := range tables {
|
||||
err := schema.updateRelationships(t, columns[i])
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
return schema
|
||||
}
|
@@ -15,7 +15,10 @@ func (c *compilerContext) renderUpdate(qc *qcode.QCode, w io.Writer,

	update, ok := vars[qc.ActionVar]
	if !ok {
		return 0, fmt.Errorf("Variable '%s' not !defined", qc.ActionVar)
		return 0, fmt.Errorf("variable '%s' not !defined", qc.ActionVar)
	}
	if len(update) == 0 {
		return 0, fmt.Errorf("variable '%s' is empty", qc.ActionVar)
	}

	io.WriteString(c.w, `WITH "_sg_input" AS (SELECT '{{`)
@@ -4,7 +4,11 @@ package qcode

// FuzzerEntrypoint for Fuzzbuzz
func Fuzz(data []byte) int {
	GetQType(string(data))
	qt := GetQType(string(data))

	if qt > QTUpsert {
		panic("qt > QTUpsert")
	}

	qcompile, _ := NewCompiler(Config{})
	_, err := qcompile.Compile(data, "user")
serv/actions.go (new file, 41 lines)

package serv

import (
	"fmt"
	"net/http"
)

type actionFn func(w http.ResponseWriter, r *http.Request) error

func newAction(a configAction) (http.Handler, error) {
	var fn actionFn
	var err error

	if len(a.SQL) != 0 {
		fn, err = newSQLAction(a)
	} else {
		return nil, fmt.Errorf("invalid config for action '%s'", a.Name)
	}

	if err != nil {
		return nil, err
	}

	httpFn := func(w http.ResponseWriter, r *http.Request) {
		if err := fn(w, r); err != nil {
			errlog.Error().Err(err).Send()
			errorResp(w, err)
		}
	}

	return http.HandlerFunc(httpFn), nil
}

func newSQLAction(a configAction) (actionFn, error) {
	fn := func(w http.ResponseWriter, r *http.Request) error {
		_, err := db.Exec(r.Context(), a.SQL)
		return err
	}

	return fn, nil
}
60
serv/auth.go
60
serv/auth.go
@ -3,7 +3,6 @@ package serv
|
||||
import (
|
||||
"context"
|
||||
"net/http"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type ctxkey int
|
||||
@ -14,7 +13,7 @@ const (
|
||||
userRoleKey
|
||||
)
|
||||
|
||||
func headerAuth(next http.Handler) http.HandlerFunc {
|
||||
func headerAuth(authc configAuth, next http.Handler) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, r *http.Request) {
|
||||
ctx := r.Context()
|
||||
|
||||
@ -37,28 +36,53 @@ func headerAuth(next http.Handler) http.HandlerFunc {
|
||||
}
|
||||
}
|
||||
|
||||
func withAuth(next http.Handler) http.Handler {
|
||||
at := conf.Auth.Type
|
||||
ru := conf.Auth.Rails.URL
|
||||
func headerHandler(authc configAuth, next http.Handler) http.HandlerFunc {
|
||||
hdr := authc.Header
|
||||
|
||||
if conf.Auth.CredsInHeader {
|
||||
next = headerAuth(next)
|
||||
if len(hdr.Name) == 0 {
|
||||
errlog.Fatal().Str("auth", authc.Name).Msg("no header.name defined")
|
||||
}
|
||||
|
||||
switch at {
|
||||
if !hdr.Exists && len(hdr.Value) == 0 {
|
||||
errlog.Fatal().Str("auth", authc.Name).Msg("no header.value defined")
|
||||
}
|
||||
|
||||
return func(w http.ResponseWriter, r *http.Request) {
|
||||
var fo1 bool
|
||||
value := r.Header.Get(hdr.Name)
|
||||
|
||||
switch {
|
||||
case hdr.Exists:
|
||||
fo1 = (len(value) == 0)
|
||||
|
||||
default:
|
||||
fo1 = (value != hdr.Value)
|
||||
}
|
||||
|
||||
if fo1 {
|
||||
http.Error(w, "401 unauthorized", http.StatusUnauthorized)
|
||||
return
|
||||
}
|
||||
|
||||
next.ServeHTTP(w, r)
|
||||
}
|
||||
}
|
||||
|
||||
func withAuth(next http.Handler, authc configAuth) http.Handler {
|
||||
if authc.CredsInHeader {
|
||||
next = headerAuth(authc, next)
|
||||
}
|
||||
|
||||
switch authc.Type {
|
||||
case "rails":
|
||||
if strings.HasPrefix(ru, "memcache:") {
|
||||
return railsMemcacheHandler(next)
|
||||
}
|
||||
|
||||
if strings.HasPrefix(ru, "redis:") {
|
||||
return railsRedisHandler(next)
|
||||
}
|
||||
|
||||
return railsCookieHandler(next)
|
||||
return railsHandler(authc, next)
|
||||
|
||||
case "jwt":
|
||||
return jwtHandler(next)
|
||||
return jwtHandler(authc, next)
|
||||
|
||||
case "header":
|
||||
return headerHandler(authc, next)
|
||||
|
||||
}
|
||||
|
||||
return next
|
||||
|
@ -14,18 +14,18 @@ const (
|
||||
jwtAuth0 int = iota + 1
|
||||
)
|
||||
|
||||
func jwtHandler(next http.Handler) http.HandlerFunc {
|
||||
func jwtHandler(authc configAuth, next http.Handler) http.HandlerFunc {
|
||||
var key interface{}
|
||||
var jwtProvider int
|
||||
|
||||
cookie := conf.Auth.Cookie
|
||||
cookie := authc.Cookie
|
||||
|
||||
if conf.Auth.JWT.Provider == "auth0" {
|
||||
if authc.JWT.Provider == "auth0" {
|
||||
jwtProvider = jwtAuth0
|
||||
}
|
||||
|
||||
secret := conf.Auth.JWT.Secret
|
||||
publicKeyFile := conf.Auth.JWT.PubKeyFile
|
||||
secret := authc.JWT.Secret
|
||||
publicKeyFile := authc.JWT.PubKeyFile
|
||||
|
||||
switch {
|
||||
case len(secret) != 0:
|
||||
@ -37,7 +37,7 @@ func jwtHandler(next http.Handler) http.HandlerFunc {
|
||||
errlog.Fatal().Err(err).Send()
|
||||
}
|
||||
|
||||
switch conf.Auth.JWT.PubKeyType {
|
||||
switch authc.JWT.PubKeyType {
|
||||
case "ecdsa":
|
||||
key, err = jwt.ParseECPublicKeyFromPEM(kd)
|
||||
|
||||
|
@ -6,32 +6,47 @@ import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strings"
|
||||
|
||||
"github.com/bradfitz/gomemcache/memcache"
|
||||
"github.com/dosco/super-graph/rails"
|
||||
"github.com/garyburd/redigo/redis"
|
||||
)
|
||||
|
||||
func railsRedisHandler(next http.Handler) http.HandlerFunc {
|
||||
cookie := conf.Auth.Cookie
|
||||
func railsHandler(authc configAuth, next http.Handler) http.HandlerFunc {
|
||||
ru := authc.Rails.URL
|
||||
|
||||
if strings.HasPrefix(ru, "memcache:") {
|
||||
return railsMemcacheHandler(authc, next)
|
||||
}
|
||||
|
||||
if strings.HasPrefix(ru, "redis:") {
|
||||
return railsRedisHandler(authc, next)
|
||||
}
|
||||
|
||||
return railsCookieHandler(authc, next)
|
||||
}
|
||||
|
||||
func railsRedisHandler(authc configAuth, next http.Handler) http.HandlerFunc {
|
||||
cookie := authc.Cookie
|
||||
if len(cookie) == 0 {
|
||||
errlog.Fatal().Msg("no auth.cookie defined")
|
||||
}
|
||||
|
||||
if len(conf.Auth.Rails.URL) == 0 {
|
||||
if len(authc.Rails.URL) == 0 {
|
||||
errlog.Fatal().Msg("no auth.rails.url defined")
|
||||
}
|
||||
|
||||
rp := &redis.Pool{
|
||||
MaxIdle: conf.Auth.Rails.MaxIdle,
|
||||
MaxActive: conf.Auth.Rails.MaxActive,
|
||||
MaxIdle: authc.Rails.MaxIdle,
|
||||
MaxActive: authc.Rails.MaxActive,
|
||||
Dial: func() (redis.Conn, error) {
|
||||
c, err := redis.DialURL(conf.Auth.Rails.URL)
|
||||
c, err := redis.DialURL(authc.Rails.URL)
|
||||
if err != nil {
|
||||
errlog.Fatal().Err(err).Send()
|
||||
}
|
||||
|
||||
pwd := conf.Auth.Rails.Password
|
||||
pwd := authc.Rails.Password
|
||||
if len(pwd) != 0 {
|
||||
if _, err := c.Do("AUTH", pwd); err != nil {
|
||||
errlog.Fatal().Err(err).Send()
|
||||
@ -66,17 +81,17 @@ func railsRedisHandler(next http.Handler) http.HandlerFunc {
|
||||
}
|
||||
}
|
||||
|
||||
func railsMemcacheHandler(next http.Handler) http.HandlerFunc {
|
||||
cookie := conf.Auth.Cookie
|
||||
func railsMemcacheHandler(authc configAuth, next http.Handler) http.HandlerFunc {
|
||||
cookie := authc.Cookie
|
||||
if len(cookie) == 0 {
|
||||
errlog.Fatal().Msg("no auth.cookie defined")
|
||||
}
|
||||
|
||||
if len(conf.Auth.Rails.URL) == 0 {
|
||||
if len(authc.Rails.URL) == 0 {
|
||||
errlog.Fatal().Msg("no auth.rails.url defined")
|
||||
}
|
||||
|
||||
rURL, err := url.Parse(conf.Auth.Rails.URL)
|
||||
rURL, err := url.Parse(authc.Rails.URL)
|
||||
if err != nil {
|
||||
errlog.Fatal().Err(err).Send()
|
||||
}
|
||||
@ -108,13 +123,13 @@ func railsMemcacheHandler(next http.Handler) http.HandlerFunc {
|
||||
}
|
||||
}
|
||||
|
||||
func railsCookieHandler(next http.Handler) http.HandlerFunc {
|
||||
cookie := conf.Auth.Cookie
|
||||
func railsCookieHandler(authc configAuth, next http.Handler) http.HandlerFunc {
|
||||
cookie := authc.Cookie
|
||||
if len(cookie) == 0 {
|
||||
errlog.Fatal().Msg("no auth.cookie defined")
|
||||
}
|
||||
|
||||
ra, err := railsAuth(conf)
|
||||
ra, err := railsAuth(authc)
|
||||
if err != nil {
|
||||
errlog.Fatal().Err(err).Send()
|
||||
}
|
||||
@ -139,13 +154,13 @@ func railsCookieHandler(next http.Handler) http.HandlerFunc {
|
||||
}
|
||||
}
|
||||
|
||||
func railsAuth(c *config) (*rails.Auth, error) {
|
||||
secret := c.Auth.Rails.SecretKeyBase
|
||||
func railsAuth(authc configAuth) (*rails.Auth, error) {
|
||||
secret := authc.Rails.SecretKeyBase
|
||||
if len(secret) == 0 {
|
||||
return nil, errors.New("no auth.rails.secret_key_base defined")
|
||||
}
|
||||
|
||||
version := c.Auth.Rails.Version
|
||||
version := authc.Rails.Version
|
||||
if len(version) == 0 {
|
||||
return nil, errors.New("no auth.rails.version defined")
|
||||
}
|
||||
@ -155,16 +170,16 @@ func railsAuth(c *config) (*rails.Auth, error) {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if len(c.Auth.Rails.Salt) != 0 {
|
||||
ra.Salt = c.Auth.Rails.Salt
|
||||
if len(authc.Rails.Salt) != 0 {
|
||||
ra.Salt = authc.Rails.Salt
|
||||
}
|
||||
|
||||
if len(conf.Auth.Rails.SignSalt) != 0 {
|
||||
ra.SignSalt = c.Auth.Rails.SignSalt
|
||||
if len(authc.Rails.SignSalt) != 0 {
|
||||
ra.SignSalt = authc.Rails.SignSalt
|
||||
}
|
||||
|
||||
if len(conf.Auth.Rails.AuthSalt) != 0 {
|
||||
ra.AuthSalt = c.Auth.Rails.AuthSalt
|
||||
if len(authc.Rails.AuthSalt) != 0 {
|
||||
ra.AuthSalt = authc.Rails.AuthSalt
|
||||
}
|
||||
|
||||
return ra, nil
|
||||
|
@@ -311,3 +311,13 @@ func getMigrationVars() map[string]interface{} {
		"env": strings.ToLower(os.Getenv("GO_ENV")),
	}
}

func initConfOnce() {
	var err error

	if conf == nil {
		if conf, err = initConf(); err != nil {
			errlog.Fatal().Err(err).Msg("failed to read config")
		}
	}
}
@@ -7,23 +7,22 @@ import (

func cmdServ(cmd *cobra.Command, args []string) {
	var err error

	initWatcher(confPath)

	if conf, err = initConf(); err != nil {
		fatalInProd(err, "failed to read config")
	}

	if conf != nil {
		db, err = initDBPool(conf)
	db, err = initDBPool(conf)

		if err == nil {
			initCompiler()
			initAllowList(confPath)
			initPreparedList(confPath)
		} else {
			fatalInProd(err, "failed to connect to database")
		}
	if err != nil {
		fatalInProd(err, "failed to connect to database")
	}

	initWatcher(confPath)
	initCompiler()
	initResolvers()
	initAllowList(confPath)
	initPreparedList(confPath)

	startHTTP()
}
128
serv/config.go
128
serv/config.go
@ -33,30 +33,8 @@ type config struct {
|
||||
|
||||
Inflections map[string]string
|
||||
|
||||
Auth struct {
|
||||
Type string
|
||||
Cookie string
|
||||
CredsInHeader bool `mapstructure:"creds_in_header"`
|
||||
|
||||
Rails struct {
|
||||
Version string
|
||||
SecretKeyBase string `mapstructure:"secret_key_base"`
|
||||
URL string
|
||||
Password string
|
||||
MaxIdle int `mapstructure:"max_idle"`
|
||||
MaxActive int `mapstructure:"max_active"`
|
||||
Salt string
|
||||
SignSalt string `mapstructure:"sign_salt"`
|
||||
AuthSalt string `mapstructure:"auth_salt"`
|
||||
}
|
||||
|
||||
JWT struct {
|
||||
Provider string
|
||||
Secret string
|
||||
PubKeyFile string `mapstructure:"public_key_file"`
|
||||
PubKeyType string `mapstructure:"public_key_type"`
|
||||
}
|
||||
}
|
||||
Auth configAuth
|
||||
Auths []configAuth
|
||||
|
||||
DB struct {
|
||||
Type string
|
||||
@ -77,6 +55,8 @@ type config struct {
|
||||
Tables []configTable
|
||||
} `mapstructure:"database"`
|
||||
|
||||
Actions []configAction
|
||||
|
||||
Tables []configTable
|
||||
|
||||
RolesQuery string `mapstructure:"roles_query"`
|
||||
@ -85,6 +65,38 @@ type config struct {
|
||||
abacEnabled bool
|
||||
}
|
||||
|
||||
type configAuth struct {
|
||||
Name string
|
||||
Type string
|
||||
Cookie string
|
||||
CredsInHeader bool `mapstructure:"creds_in_header"`
|
||||
|
||||
Rails struct {
|
||||
Version string
|
||||
SecretKeyBase string `mapstructure:"secret_key_base"`
|
||||
URL string
|
||||
Password string
|
||||
MaxIdle int `mapstructure:"max_idle"`
|
||||
MaxActive int `mapstructure:"max_active"`
|
||||
Salt string
|
||||
SignSalt string `mapstructure:"sign_salt"`
|
||||
AuthSalt string `mapstructure:"auth_salt"`
|
||||
}
|
||||
|
||||
JWT struct {
|
||||
Provider string
|
||||
Secret string
|
||||
PubKeyFile string `mapstructure:"public_key_file"`
|
||||
PubKeyType string `mapstructure:"public_key_type"`
|
||||
}
|
||||
|
||||
Header struct {
|
||||
Name string
|
||||
Value string
|
||||
Exists bool
|
||||
}
|
||||
}
|
||||
|
||||
type configColumn struct {
|
||||
Name string
|
||||
Type string
|
||||
@ -156,6 +168,12 @@ type configRole struct {
|
||||
tablesMap map[string]*configRoleTable
|
||||
}
|
||||
|
||||
type configAction struct {
|
||||
Name string
|
||||
SQL string
|
||||
AuthName string `mapstructure:"auth_name"`
|
||||
}
|
||||
|
||||
func newConfig(name string) *viper.Viper {
|
||||
vi := viper.New()
|
||||
|
||||
@ -283,26 +301,48 @@ func (c *config) Init(vi *viper.Viper) error {
|
||||
func (c *config) validate() {
|
||||
rm := make(map[string]struct{})
|
||||
|
||||
for i := range c.Roles {
|
||||
name := c.Roles[i].Name
|
||||
for _, v := range c.Roles {
|
||||
name := strings.ToLower(v.Name)
|
||||
|
||||
if _, ok := rm[name]; ok {
|
||||
errlog.Fatal().Msgf("duplicate config for role '%s'", c.Roles[i].Name)
|
||||
errlog.Fatal().Msgf("duplicate config for role '%s'", v.Name)
|
||||
}
|
||||
rm[name] = struct{}{}
|
||||
}
|
||||
|
||||
tm := make(map[string]struct{})
|
||||
|
||||
for i := range c.Tables {
|
||||
name := c.Tables[i].Name
|
||||
for _, v := range c.Tables {
|
||||
name := strings.ToLower(v.Name)
|
||||
|
||||
if _, ok := tm[name]; ok {
|
||||
errlog.Fatal().Msgf("duplicate config for table '%s'", c.Tables[i].Name)
|
||||
errlog.Fatal().Msgf("duplicate config for table '%s'", v.Name)
|
||||
}
|
||||
tm[name] = struct{}{}
|
||||
}
|
||||
|
||||
am := make(map[string]struct{})
|
||||
|
||||
for _, v := range c.Auths {
|
||||
name := strings.ToLower(v.Name)
|
||||
|
||||
if _, ok := am[name]; ok {
|
||||
errlog.Fatal().Msgf("duplicate config for auth '%s'", v.Name)
|
||||
}
|
||||
am[name] = struct{}{}
|
||||
}
|
||||
|
||||
for _, v := range c.Actions {
|
||||
if len(v.AuthName) == 0 {
|
||||
continue
|
||||
}
|
||||
authName := strings.ToLower(v.AuthName)
|
||||
|
||||
if _, ok := am[authName]; !ok {
|
||||
errlog.Fatal().Msgf("invalid auth_name for action '%s'", v.Name)
|
||||
}
|
||||
}
|
||||
|
||||
if len(c.RolesQuery) == 0 {
|
||||
logger.Warn().Msgf("no 'roles_query' defined.")
|
||||
}
|
||||
@ -349,3 +389,31 @@ func sanitize(s string) string {
|
||||
return strings.ToLower(m)
|
||||
})
|
||||
}
|
||||
|
||||
func getConfigName() string {
|
||||
if len(os.Getenv("GO_ENV")) == 0 {
|
||||
return "dev"
|
||||
}
|
||||
|
||||
ge := strings.ToLower(os.Getenv("GO_ENV"))
|
||||
|
||||
switch {
|
||||
case strings.HasPrefix(ge, "pro"):
|
||||
return "prod"
|
||||
|
||||
case strings.HasPrefix(ge, "sta"):
|
||||
return "stage"
|
||||
|
||||
case strings.HasPrefix(ge, "tes"):
|
||||
return "test"
|
||||
|
||||
case strings.HasPrefix(ge, "dev"):
|
||||
return "dev"
|
||||
}
|
||||
|
||||
return ge
|
||||
}
|
||||
|
||||
func isDev() bool {
|
||||
return strings.HasPrefix(os.Getenv("GO_ENV"), "dev")
|
||||
}
|
||||
|
serv/init.go (14 changes)

@@ -148,20 +148,6 @@ func initCompiler() {
	if err != nil {
		errlog.Fatal().Err(err).Msg("failed to initialize compilers")
	}

	if err := initResolvers(); err != nil {
		errlog.Fatal().Err(err).Msg("failed to initialized resolvers")
	}
}

func initConfOnce() {
	var err error

	if conf == nil {
		if conf, err = initConf(); err != nil {
			errlog.Fatal().Err(err).Msg("failed to read config")
		}
	}
}

func initAllowList(cpath string) {
@@ -22,16 +22,20 @@ type resolvFn struct {
	Fn func(h http.Header, id []byte) ([]byte, error)
}

func initResolvers() error {
func initResolvers() {
	var err error
	rmap = make(map[uint64]*resolvFn)

	for _, t := range conf.Tables {
		err := initRemotes(t)
		err = initRemotes(t)
		if err != nil {
			return err
			break
		}
	}
	return nil

	if err != nil {
		errlog.Fatal().Err(err).Msg("failed to initialize resolvers")
	}
}

func initRemotes(t configTable) error {
98
serv/serv.go
98
serv/serv.go
@ -101,9 +101,14 @@ func startHTTP() {
|
||||
hostPort = defaultHP
|
||||
}
|
||||
|
||||
routes, err := routeHandler()
|
||||
if err != nil {
|
||||
errlog.Fatal().Err(err).Send()
|
||||
}
|
||||
|
||||
srv := &http.Server{
|
||||
Addr: hostPort,
|
||||
Handler: routeHandler(),
|
||||
Handler: routes,
|
||||
ReadTimeout: 5 * time.Second,
|
||||
WriteTimeout: 10 * time.Second,
|
||||
MaxHeaderBytes: 1 << 20,
|
||||
@ -140,59 +145,74 @@ func startHTTP() {
|
||||
<-idleConnsClosed
|
||||
}
|
||||
|
||||
func routeHandler() http.Handler {
|
||||
var apiH http.Handler
|
||||
|
||||
if conf != nil && conf.HTTPGZip {
|
||||
gzipH := gziphandler.MustNewGzipLevelHandler(6)
|
||||
apiH = gzipH(http.HandlerFunc(apiV1))
|
||||
} else {
|
||||
apiH = http.HandlerFunc(apiV1)
|
||||
}
|
||||
|
||||
func routeHandler() (http.Handler, error) {
|
||||
mux := http.NewServeMux()
|
||||
|
||||
if conf != nil {
|
||||
mux.HandleFunc("/health", health)
|
||||
mux.Handle("/api/v1/graphql", withAuth(apiH))
|
||||
if conf == nil {
|
||||
return mux, nil
|
||||
}
|
||||
|
||||
if conf.WebUI {
|
||||
mux.Handle("/", http.FileServer(rice.MustFindBox("../web/build").HTTPBox()))
|
||||
routes := map[string]http.Handler{
|
||||
"/health": http.HandlerFunc(health),
|
||||
"/api/v1/graphql": withAuth(http.HandlerFunc(apiV1), conf.Auth),
|
||||
}
|
||||
|
||||
if err := setActionRoutes(routes); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if conf.WebUI {
|
||||
routes["/"] = http.FileServer(rice.MustFindBox("../web/build").HTTPBox())
|
||||
}
|
||||
|
||||
if conf.HTTPGZip {
|
||||
gz := gziphandler.MustNewGzipLevelHandler(6)
|
||||
for k, v := range routes {
|
||||
routes[k] = gz(v)
|
||||
}
|
||||
}
|
||||
|
||||
for k, v := range routes {
|
||||
mux.Handle(k, v)
|
||||
}
|
||||
|
||||
fn := func(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Server", serverName)
|
||||
mux.ServeHTTP(w, r)
|
||||
}
|
||||
|
||||
return http.HandlerFunc(fn)
|
||||
return http.HandlerFunc(fn), nil
|
||||
}
|
||||
|
||||
func getConfigName() string {
|
||||
if len(os.Getenv("GO_ENV")) == 0 {
|
||||
return "dev"
|
||||
func setActionRoutes(routes map[string]http.Handler) error {
|
||||
var err error
|
||||
|
||||
for _, a := range conf.Actions {
|
||||
var fn http.Handler
|
||||
|
||||
fn, err = newAction(a)
|
||||
if err != nil {
|
||||
break
|
||||
}
|
||||
|
||||
p := fmt.Sprintf("/api/v1/actions/%s", strings.ToLower(a.Name))
|
||||
|
||||
if authc, ok := findAuth(a.AuthName); ok {
|
||||
routes[p] = withAuth(fn, authc)
|
||||
} else {
|
||||
routes[p] = fn
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
ge := strings.ToLower(os.Getenv("GO_ENV"))
|
||||
func findAuth(name string) (configAuth, bool) {
|
||||
var authc configAuth
|
||||
|
||||
switch {
|
||||
case strings.HasPrefix(ge, "pro"):
|
||||
return "prod"
|
||||
|
||||
case strings.HasPrefix(ge, "sta"):
|
||||
return "stage"
|
||||
|
||||
case strings.HasPrefix(ge, "tes"):
|
||||
return "test"
|
||||
|
||||
case strings.HasPrefix(ge, "dev"):
|
||||
return "dev"
|
||||
for _, a := range conf.Auths {
|
||||
if strings.EqualFold(a.Name, name) {
|
||||
return a, true
|
||||
}
|
||||
}
|
||||
|
||||
return ge
|
||||
}
|
||||
|
||||
func isDev() bool {
|
||||
return strings.HasPrefix(os.Getenv("GO_ENV"), "dev")
|
||||
return authc, false
|
||||
}
|
||||
|
@@ -7,6 +7,7 @@ import (
	"io"
	"sort"
	"strings"
	"sync"

	"github.com/cespare/xxhash/v2"
	"github.com/dosco/super-graph/jsn"

@@ -127,9 +128,14 @@ func findStmt(role string, stmts []stmt) *stmt {
}

func fatalInProd(err error, msg string) {
	if isDev() {
		errlog.Error().Err(err).Msg(msg)
	} else {
	var wg sync.WaitGroup

	if !isDev() {
		errlog.Fatal().Err(err).Msg(msg)
	}

	errlog.Error().Err(err).Msg(msg)

	wg.Add(1)
	wg.Wait()
}
35
tmpl/dev.yml
35
tmpl/dev.yml
@ -49,7 +49,7 @@ migrations_path: ./config/migrations
|
||||
# sheep: sheep
|
||||
|
||||
auth:
|
||||
# Can be 'rails' or 'jwt'
|
||||
# Can be 'rails', 'jwt' or 'header'
|
||||
type: rails
|
||||
cookie: _{% app_name_slug %}_session
|
||||
|
||||
@ -83,6 +83,22 @@ auth:
|
||||
# public_key_file: /secrets/public_key.pem
|
||||
# public_key_type: ecdsa #rsa
|
||||
|
||||
# header:
|
||||
# name: dnt
|
||||
# exists: true
|
||||
# value: localhost:8080
|
||||
|
||||
# You can add additional named auths to use with actions
|
||||
# In this example actions using this auth can only be
|
||||
# called from the Google Appengine Cron service that
|
||||
# sets a special header to all it's requests
|
||||
auths:
|
||||
- name: from_taskqueue
|
||||
type: header
|
||||
header:
|
||||
name: X-Appengine-Cron
|
||||
exists: true
|
||||
|
||||
database:
|
||||
type: postgres
|
||||
host: db
|
||||
@ -116,6 +132,16 @@ database:
|
||||
- encrypted
|
||||
- token
|
||||
|
||||
# Create custom actions with their own api endpoints
|
||||
# For example the below action will be available at /api/v1/actions/refresh_leaderboard_users
|
||||
# A request to this url will execute the configured SQL query
|
||||
# which in this case refreshes a materialized view in the database.
|
||||
# The auth_name is from one of the configured auths
|
||||
actions:
|
||||
- name: refresh_leaderboard_users
|
||||
sql: REFRESH MATERIALIZED VIEW CONCURRENTLY "leaderboard_users"
|
||||
auth_name: from_taskqueue
|
||||
|
||||
tables:
|
||||
- name: customers
|
||||
remotes:
|
||||
@ -137,6 +163,7 @@ tables:
|
||||
name: me
|
||||
table: users
|
||||
|
||||
|
||||
roles_query: "SELECT * FROM users WHERE id = $user_id"
|
||||
|
||||
roles:
|
||||
@ -168,20 +195,16 @@ roles:
|
||||
query:
|
||||
limit: 50
|
||||
filters: ["{ user_id: { eq: $user_id } }"]
|
||||
columns: ["id", "name", "description" ]
|
||||
disable_functions: false
|
||||
|
||||
insert:
|
||||
filters: ["{ user_id: { eq: $user_id } }"]
|
||||
columns: ["id", "name", "description" ]
|
||||
presets:
|
||||
- user_id: "$user_id"
|
||||
- created_at: "now"
|
||||
|
||||
update:
|
||||
filters: ["{ user_id: { eq: $user_id } }"]
|
||||
columns:
|
||||
- id
|
||||
- name
|
||||
presets:
|
||||
- updated_at: "now"
|
||||
|
||||
|