Compare commits

...

41 Commits

SHA1 Message Date
d3e32f944a fix: json content type breaks web ui 2020-05-11 21:09:12 -04:00
3bf9f02a9f fix: bug with reading config file by name 2020-05-10 11:26:48 -04:00
533c767e1d fix: benchmark was failing. Also added a benchmark for the chirino/graphql version gql parser to compare results. (#62) 2020-05-07 10:48:01 -04:00
84d55dbc8a feat: remove data from variables saved to allow.list 2020-05-07 10:27:40 -04:00
5aafff6310 chore: add InteliJ editor project files to the .gitignore list. (#61) 2020-05-07 10:24:29 -04:00
840aaf64ff fix: return response as application/json (#59) 2020-05-07 10:24:12 -04:00
7bbb56a328 fix get functions parameters without name (#60) 2020-05-07 03:04:37 -04:00
394b08b2fe chore: update changelog 2020-05-03 21:01:16 -04:00
842252f9e2 fix: fix issue with skipping prepared statements for some roles on error 2020-05-03 20:52:26 -04:00
279f5616d1 fix: fix for issues reported by deepsource 2020-05-03 16:08:34 -04:00
04bb88f74b Add .deepsource.toml 2020-05-03 19:57:42 +00:00
38ed6dbc5f fix: bug with single quote escape in production mode 2020-05-01 02:20:45 -04:00
ec2f8d0c58 chore: pickup latest version of chirino/graphql module for its schema api simplifications. (#58) 2020-05-01 02:03:35 -04:00
9b51065414 fix: grammatical errors (#57) 2020-04-25 09:57:59 -04:00
1a70603b1a feat: add option to set the cache-control header 2020-04-24 20:45:03 -04:00
505335d872 feat: add config to set api endpoint prefix 2020-04-24 01:23:35 -04:00
bdc8c65a09 fix: fix issues with code examples 2020-04-23 21:25:09 -04:00
03fe29b088 fix: improve documentation of the config object 2020-04-23 21:25:09 -04:00
5857efdd70 fix: correct spellings and language in README.md (#55)
* Update README.md

* Code review: fix go get again
2020-04-23 21:01:00 -04:00
bdffe7b14e fix: add a benchmark around the GraphQL api function 2020-04-23 01:42:16 -04:00
ae7cde0433 feat: add support for single argument Postgres functions 2020-04-22 20:51:14 -04:00
6293d37e73 fix: upgrade packages in the web ui 2020-04-21 21:05:14 -04:00
7a3fe5a1df fix: Only include the bulk update arguments on the plur… (#54)
* introspection fix: Only include the bulk update arguments on the plural versions of the fields.

* Fixes error graphql: Unknown type "String!"
2020-04-21 10:41:28 -04:00
2a32c179ba feat : improve the generated introspection schema and avoid the chirino/graphql api leaking through the core api. (#53) 2020-04-21 10:03:05 -04:00
0a02bde219 fix: block introspection queries in production mode 2020-04-20 02:06:58 -04:00
966aa9ce8c feat: add some initial introspection support. (#52) 2020-04-19 23:48:49 -04:00
6f18d56ca0 fix: update queries generate invalid sql 2020-04-19 13:40:14 -04:00
c400461835 fix: prepared statements not working in prod mode 2020-04-19 12:54:37 -04:00
a6691de1b7 fix: remove multi-line graphql query in log 2020-04-19 02:50:09 -04:00
e6934cda02 fix: vars not sanitized in roles_query 2020-04-18 17:46:40 -04:00
4cf7956ff5 feat: add cockroachdb support. (#50)
This PR changes the generated SQL so that it's also compatible with CockroachDB.
Notable changes:
* use `SELECT to_jsonb("__sr_0".*)`  instead of `SELECT to_jsonb("__sr_0")`
* don't use `json_populate_record`, use the `CAST` and `->>` instead.  For example:

  instead of: `SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t`

  do: `CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i`

This PR also adds some integration tests against an actual database instance.  If you have the cockroachdb binary installed on your PATH,
the test suite will startup a temporary cockroachdb instance on a random port to test against.  It is stopped and the tmp data files are deleted once the test ends.  It will also run the integration tests against database
pointed at by your `SG_POSTGRESQL_TEST_URL` environment variable if it’s set.

Also includes some small formatting changes introduced by `gofmt -w .`
2020-04-18 17:42:17 -04:00
5356455904 Fix issue with relative paths and config files 2020-04-17 10:56:26 -04:00
074aded5c0 Upgrade UI and app templates 2020-04-16 10:27:10 -04:00
c7557f761f Fix broken build 2020-04-16 01:28:55 -04:00
09d6460a13 Make go get to install work. 2020-04-16 00:26:32 -04:00
40c99e9ef3 Fix issue with missing build variables 2020-04-13 00:50:54 -04:00
75ff5510d4 Fix issue with failing db cmds 2020-04-13 00:43:18 -04:00
1370d24985 Fix issue with make install 2020-04-12 20:35:31 -04:00
ef50c1957b Fix CloudRun connection issue 2020-04-12 10:09:37 -04:00
41ea6ef6f5 Fix readme add library usage 2020-04-11 16:41:10 -04:00
a266517d17 Remove config package 2020-04-11 02:45:06 -04:00
159 changed files with 8509 additions and 4769 deletions
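A note on the CockroachDB change in commit 4cf7956ff5 above: the `CAST`/`->>` form it describes is what keeps the generated SQL portable across both databases. The sketch below is hypothetical test code, not Super Graph's own suite; it only borrows the `SG_POSTGRESQL_TEST_URL` variable and the `"_sg_input"` query shape quoted in that PR to show how the portable projection can be exercised against whichever database the URL points at:

```golang
package db_test

import (
	"database/sql"
	"os"
	"testing"

	_ "github.com/jackc/pgx/v4/stdlib" // the database/sql driver used elsewhere in this repo
)

// TestPortableInsertShape runs the CAST/->> projection quoted in PR #50.
// Unlike json_populate_record, this form is accepted by both PostgreSQL
// and CockroachDB, so the same test can point at either database.
func TestPortableInsertShape(t *testing.T) {
	dsn := os.Getenv("SG_POSTGRESQL_TEST_URL")
	if dsn == "" {
		t.Skip("SG_POSTGRESQL_TEST_URL not set; skipping integration test")
	}

	db, err := sql.Open("pgx", dsn)
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()

	// A table-free stand-in for the "_sg_input" CTE: extract typed columns
	// from a JSON document with ->> and CAST instead of json_populate_record.
	row := db.QueryRow(`
		WITH "_sg_input" AS (
			SELECT '{"full_name": "Ann", "email": "ann@example.com"}'::jsonb AS j
		)
		SELECT CAST(i.j ->> 'full_name' AS character varying),
		       CAST(i.j ->> 'email' AS character varying)
		FROM "_sg_input" i`)

	var fullName, email string
	if err := row.Scan(&fullName, &email); err != nil {
		t.Fatal(err)
	}
	t.Logf("portable projection returned: %s <%s>", fullName, email)
}
```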

.chglog/config.yml

@@ -5,18 +5,18 @@ info:
   repository_url: https://github.com/dosco/super-graph
 options:
   commits:
-    # filters:
-    #   Type:
-    #     - feat
-    #     - fix
-    #     - perf
-    #     - refactor
+    filters:
+      Type:
+        - feat
+        - fix
+        - perf
+        - refactor
   commit_groups:
-    # title_maps:
-    #   feat: Features
-    #   fix: Bug Fixes
-    #   perf: Performance Improvements
-    #   refactor: Code Refactoring
+    title_maps:
+      feat: Features
+      fix: Bug Fixes
+      perf: Performance Improvements
+      refactor: Code Refactoring
   header:
     pattern: "^((\\w+)\\s.*)$"
     pattern_maps:

.deepsource.toml Normal file

@@ -0,0 +1,8 @@
+version = 1
+
+[[analyzers]]
+name = "go"
+enabled = true
+
+[analyzers.meta]
+import_path = "github.com/dosco/super-graph"

.gitignore vendored

@@ -23,16 +23,19 @@
 /tmp/runner-build
 /demo/tmp
+.idea
+*.iml
 .vscode
 .DS_Store
 .swp
 .release
 main
 super-graph
+supergraph
 *-fuzz.zip
 crashers
 suppressions
 release
 .gofuzz
 *-fuzz.zip
+*.test


@ -7,7 +7,7 @@ rules:
- name: run - name: run
match: \.go$ match: \.go$
ignore: web|examples|docs|_test\.go$ ignore: web|examples|docs|_test\.go$
command: go run cmd/main.go serv command: go run main.go serv
- name: test - name: test
match: _test\.go$ match: _test\.go$
command: go test -cover {PKG} command: go test -cover {PKG}

CHANGELOG.md

@@ -1,401 +1,371 @@
 <a name="unreleased"></a>
 ## [Unreleased]
-### Add
-- Add config driven custom table relationships
-- Add support for `websearch_to_tsquery` in PG 11
-### Create
-- Create CODE_OF_CONDUCT.md
-### Fix
-- Fix bug with remote join example
-- Fix grammer / syntax
-### Update
-- Update issue templates
-- Update CONTRIBUTING.md
-- Update issue templates
-- Update feature_request.md
+<a name="v0.13.22"></a>
+## [v0.13.22] - 2020-05-01
+<a name="v0.13.21"></a>
+## [v0.13.21] - 2020-04-24
+<a name="v0.13.20"></a>
+## [v0.13.20] - 2020-04-24
+<a name="v0.13.19"></a>
+## [v0.13.19] - 2020-04-23
+<a name="v0.13.18"></a>
+## [v0.13.18] - 2020-04-23
+<a name="v0.13.17"></a>
+## [v0.13.17] - 2020-04-22
+<a name="v0.13.16"></a>
+## [v0.13.16] - 2020-04-21
+### Features
+- feat : improve the generated introspection schema and avoid the chirino/graphql api leaking through the core api. ([#53](https://github.com/dosco/super-graph/issues/53))
+<a name="v0.13.15"></a>
+## [v0.13.15] - 2020-04-20
+<a name="v0.13.14"></a>
+## [v0.13.14] - 2020-04-19
+<a name="v0.13.13"></a>
+## [v0.13.13] - 2020-04-19
+<a name="v0.13.12"></a>
+## [v0.13.12] - 2020-04-19
+<a name="v0.13.11"></a>
+## [v0.13.11] - 2020-04-18
+<a name="v0.13.10"></a>
+## [v0.13.10] - 2020-04-17
+<a name="v0.13.9"></a>
+## [v0.13.9] - 2020-04-16
+<a name="v0.13.8"></a>
+## [v0.13.8] - 2020-04-16
+<a name="v0.13.7"></a>
+## [v0.13.7] - 2020-04-16
+<a name="v0.13.6"></a>
+## [v0.13.6] - 2020-04-13
+<a name="v0.13.5"></a>
+## [v0.13.5] - 2020-04-13
+<a name="v0.13.4"></a>
+## [v0.13.4] - 2020-04-12
+<a name="v0.13.3"></a>
+## [v0.13.3] - 2020-04-12
+<a name="v0.13.2"></a>
+## [v0.13.2] - 2020-04-11
+<a name="v0.13.1"></a>
+## [v0.13.1] - 2020-04-11
+<a name="v0.13.0"></a>
+## [v0.13.0] - 2020-04-10
+<a name="v0.12.49"></a>
+## [v0.12.49] - 2020-04-01
+<a name="v0.12.48"></a>
+## [v0.12.48] - 2020-03-31
+<a name="v0.12.47"></a>
+## [v0.12.47] - 2020-03-30
+<a name="v0.12.46"></a>
+## [v0.12.46] - 2020-03-21
+<a name="v0.12.45"></a>
+## [v0.12.45] - 2020-03-18
+<a name="v0.12.44"></a>
+## [v0.12.44] - 2020-03-16
+<a name="v0.12.43"></a>
+## [v0.12.43] - 2020-03-16
+<a name="v0.12.42"></a>
+## [v0.12.42] - 2020-03-14
+<a name="v0.12.41"></a>
+## [v0.12.41] - 2020-03-06
+<a name="v0.12.40"></a>
+## [v0.12.40] - 2020-03-06
+<a name="v0.12.39"></a>
+## [v0.12.39] - 2020-03-06
+<a name="v0.12.38"></a>
+## [v0.12.38] - 2020-03-05
+<a name="v0.12.37"></a>
+## [v0.12.37] - 2020-03-04
+<a name="v0.12.36"></a>
+## [v0.12.36] - 2020-03-04
+<a name="v0.12.35"></a>
+## [v0.12.35] - 2020-03-03
+<a name="v0.12.34"></a>
+## [v0.12.34] - 2020-03-03
+<a name="v0.12.33"></a>
+## [v0.12.33] - 2020-02-29
+<a name="v0.12.32"></a>
+## [v0.12.32] - 2020-02-24
+### Bug Fixes
+- fix "Try the demo app" in docs ([#38](https://github.com/dosco/super-graph/issues/38))
+<a name="v0.12.31"></a>
+## [v0.12.31] - 2020-02-23
+<a name="v0.12.30"></a>
+## [v0.12.30] - 2020-02-23
+<a name="v0.12.29"></a>
+## [v0.12.29] - 2020-02-21
+<a name="v0.12.28"></a>
+## [v0.12.28] - 2020-02-20
+<a name="v0.12.27"></a>
+## [v0.12.27] - 2020-02-19
+<a name="v0.12.26"></a>
+## [v0.12.26] - 2020-02-11
+<a name="v0.12.25"></a>
+## [v0.12.25] - 2020-02-10
+<a name="v0.12.24"></a>
+## [v0.12.24] - 2020-02-03
+<a name="v0.12.23"></a>
+## [v0.12.23] - 2020-02-02
+<a name="v0.12.22"></a>
+## [v0.12.22] - 2020-02-01
+<a name="v0.12.21"></a>
+## [v0.12.21] - 2020-01-31
+<a name="v0.12.20"></a>
+## [v0.12.20] - 2020-01-28
+<a name="v0.12.19"></a>
+## [v0.12.19] - 2020-01-26
+<a name="v0.12.18"></a>
+## [v0.12.18] - 2020-01-20
+<a name="v0.12.17"></a>
+## [v0.12.17] - 2020-01-20
+<a name="v0.12.16"></a>
+## [v0.12.16] - 2020-01-19
+<a name="v0.12.15"></a>
+## [v0.12.15] - 2020-01-17
+<a name="v0.12.14"></a>
+## [v0.12.14] - 2020-01-17
+<a name="v0.12.13"></a>
+## [v0.12.13] - 2020-01-16
+<a name="v0.12.12"></a>
+## [v0.12.12] - 2020-01-15
+<a name="v0.12.11"></a>
+## [v0.12.11] - 2020-01-14
+<a name="v0.12.10"></a>
+## [v0.12.10] - 2020-01-14
+<a name="v0.12.9"></a>
+## [v0.12.9] - 2020-01-14
+<a name="v0.12.8"></a>
+## [v0.12.8] - 2020-01-13
+<a name="v0.12.7"></a>
+## [v0.12.7] - 2020-01-11
+### Pull Requests
+- Merge pull request [#22](https://github.com/dosco/super-graph/issues/22) from bhaskarmurthy/fix-grammer-syntax
 <a name="v0.12.6"></a>
 ## [v0.12.6] - 2019-12-02
-### Add
-- Add support for `websearch_to_tsquery` in PG 11
 <a name="v0.12.5"></a>
 ## [v0.12.5] - 2019-11-30
-### Add
-- Add a guide to the internals of the codebase
-- Add a CONTRIBUTING.md guide for contributors
-- Add a CHANGLOG.md
-- Add issue templates
-### Fix
-- Fix for missing filters on nested selectors
-### Refactor
-- Refactor rename 'Select.Table` to `Select.Name`
 <a name="v0.12.4"></a>
 ## [v0.12.4] - 2019-11-28
-### Move
-- Move license from MIT to Apache 2.0. Add Makefile
 <a name="v0.12.3"></a>
 ## [v0.12.3] - 2019-11-26
-### Added
-- Added support for query names to the allow.list
 <a name="v0.12.2"></a>
 ## [v0.12.2] - 2019-11-25
-### Fix
-- Fix bug with compiling anon queries
 <a name="v0.12.1"></a>
 ## [v0.12.1] - 2019-11-22
-### Move
-- Move sql query logging from info to debug
 <a name="v0.12.0"></a>
 ## [v0.12.0] - 2019-11-22
-### Use
-- Use logger error instead of panic in goja handlers
 <a name="v0.11.9"></a>
 ## [v0.11.9] - 2019-11-22
-### Add
-- Add a db:reset command only for dev mode
 <a name="v0.11.8"></a>
 ## [v0.11.8] - 2019-11-21
-### Optimize
-- Optimize db queries limit use of transactions
 <a name="v0.11.7"></a>
 ## [v0.11.7] - 2019-11-19
-### Added
-- Added support for multi-root queries
 <a name="v0.11.6"></a>
 ## [v0.11.6] - 2019-11-15
-### Fix
-- Fix issues with JWT auth
-- Fix bug with migration filename generation
-- Fix bug with migration file name
 <a name="v0.11.5"></a>
 ## [v0.11.5] - 2019-11-10
-### Fix
-- Fix bug with migration template name
 <a name="v0.11.4"></a>
 ## [v0.11.4] - 2019-11-10
-### Fix
-- Fix bug with creating new migrations
 <a name="v0.11.3"></a>
 ## [v0.11.3] - 2019-11-09
-### Fix
-- Fix macro syntax bug in app templates
 <a name="v0.11.2"></a>
 ## [v0.11.2] - 2019-11-07
-### Fix
-- Fix bugs and add new production mode
 <a name="v0.11.1"></a>
 ## [v0.11.1] - 2019-11-05
-### Add
-- Add nested where clause to filter based on related tables
-### Block
-- Block unauthorized requests when 'anon' role is not defined
-### Update
-- Update docs and website with new features
 <a name="v0.11"></a>
 ## [v0.11] - 2019-11-01
-### Add
-- Add config driven presets for insert, update and upsert
-- Add config driven presets for insert, update and upserta
-- Add RBAC option to disable functions eg. count
-- Add fuzz testing to 'serv' for the GQL hash parser
-- Add fuzz testing to 'jsn' and 'qcode'
-- Add ability to block queries and mutations by role
-- Add built in 'anon' and 'user' roles
-- Add role based access control
-### Allow
-- Allow config files to inherit from other config files
-### Change
-- Change config key inherit to inherits
-### Get
-- Get RBAC working for queries and mutations
-### Optimize
-- Optimize prepared statement flow for RBAC
-### Preserve
-- Preserve allow.list ordering on save
-### Update
-- Update filters section in guide
 ### Pull Requests
 - Merge pull request [#11](https://github.com/dosco/super-graph/issues/11) from dosco/rbac
 <a name="v0.10.1"></a>
 ## [v0.10.1] - 2019-10-06
-### Add
-- Add ability to set filters per operation / action
-- Add upsert mutation
 ### Pull Requests
 - Merge pull request [#10](https://github.com/dosco/super-graph/issues/10) from FourSigma/sm-examples-folder
 <a name="v0.10"></a>
 ## [v0.10] - 2019-10-04
-### Fix
-- Fix return values for bulk mutations and delete
-- Fix issues with mutation SQL
-- Fix broken demo app
-- Fix typo in 'across'
-### Remove
-- Remove extra link from README
-### Update
-- Update docs, getting started guide and mutations
 ### Pull Requests
 - Merge pull request [#6](https://github.com/dosco/super-graph/issues/6) from muesli/typo-fixes
 <a name="v0.9"></a>
 ## [v0.9] - 2019-10-01
-### Fix
-- Fix demo rails app broken build
 <a name="v0.8"></a>
 ## [v0.8] - 2019-09-30
-### Fix
-- Fix invalid import bug
-### Update
-- Update documentation site
 <a name="v0.7"></a>
 ## [v0.7] - 2019-09-29
-### Failure
-- Failure to prepare statements should be a warning
-### Fix
-- Fix duplicte column bug
 <a name="v0.6"></a>
 ## [v0.6] - 2019-09-29
-### Add
-- Add database setup commands
-- Add binary compression back to Dockerfile
-- Add initialization command to setup new apps
-- Add migrate command
-- Add database seeding capability
-- Add session variable for user id
-- Add delete mutation
-- Add update mutation
-- Add insert mutation with bulk insert
-- Add GoTO Aug, 19 presentation
-- Add support for prepared statements
-- Add end-to-end benchmaking
-- Add object pooling for parser expressions
-- Add request / response debugging for remote joins
-- Add a presentation about GraphQL
-- Add validation for remote JSON
-- Add tracing for API stitching
-- Add REST API stitching
-- Add SQL query cacheing
-- Add support for GraphQL variables
-- Add fuzz testing to qcode
-- Add test for Rails Redis cookie store integration
-- Add an install guide
-### Change
-- Change fuzz test name to qcode
-- Change logo from PNG to SVG
-### Enabke
-- Enabke reload on config change
-### Fix
-- Fix missing config name bug
-- Fix new app templates
-- Fix help message for migrate
-- Fix session variable bug
-- Fix test failures in `psql` and `serv`
-- Fix demo docker services startup order
-- Fix wrong value for false token bug. Reported by [@ThisIsMissEm](https://github.com/ThisIsMissEm)
-- Fix allow.list file discovery bug
-- Fix bug with allow list path
-- Fix wrong value for use_allow_list in dev config
-- Fix startup bug in demo script
-- Fix url bug in allow list
-- Fix bug [#676](https://github.com/dosco/super-graph/issues/676) found by fuzzer
-- Fix race-condition in remote joins
-- Fix cookie passing in web ui
-- Fix bug with passing cookies in web ui
-- Fix null pointer with invalid argument values
-- Fix infinite loop bug in lexer
-- Fix null pointer issue found by fuzz test
-- Fix issue with fuzzbuzz config
-- Fix demo to run as memory only
-- Fix auth documentation
-- Fix issue with web ui sizing
-- Fix issue preventing docker-compose deploy
-- Fix try demo documentation
-### Futher
-- Futher reduce allocations across hot paths
-- Futher reduce allocations on the compiler hot path
-- Futher optimize json parsing and editing performance
-### Highlight
-- Highlight top features better on the site
-### Improve
-- Improve readability of json parser code
-- Improve the motivation section in the readme
-- Improve the demo experience
-### Make
-- Make remote joins use parallel http requests
-### Merge
-- Merge branch 'master' into optimize-psql
-### New
-- New low allocation fast json parsing and editing library
-### Optimize
-- Optimize lexer and fix bugs
-- Optimize the sql generator hot path
-### Reduce
-- Reduce alllocations done by the stack
-- Reduce steps to run the demo
-- Reduce allocations and improve perf over 50%
-### Remove
-- Remove unused packages
-- Remove the 'hello' test app folder
-- Remove other allocations in psql
-### Use
-- Use hash's as ids for table relationships
-### Watch
-- Watch and reload on config changes
 <a name="v0.5"></a>
 ## [v0.5] - 2019-04-10
-### Add
-- Add supprt for new Rails 5.2 aes-256-gcm cookies
-- Add query support for ts_rank and ts_headline
-- Add full text search support using TSV indexes
-- Add missing assets folder
-- Add fetch by ID feature
-- Add documentation
-### Cleanup
-- Cleanup and redesign config files
-### Fix
-- Fix bug with auth config parsing
-### Redesign
-- Redesign config file architecture
-### Reduce
-- Reduce realloc of maps and slices
-### Update
-- Update docs with full-text search information
 <a name="v0.4"></a>
 ## [v0.4] - 2019-04-01
 <a name="v0.3"></a>
 ## [v0.3] - 2019-04-01
-### Add
-- Add SQL execution timing and tracing
-- Add support for HAVING with aggregate queries
-- Add aggregrate functions to GQL queries
-- Add Auth0 JWT support
-- Add React UI building to the docker build flow
-- Add compiler profiling
-- Add bechmarks for GQL to SQL compile
-- Add tests for gql to sql compile
-### Cleanup
-- Cleanup Dockerfile
-### Fix
-- Fix recurring packer issue docker hub builds
-- Fix issue with asset packer breaking Docker builds
-- Fix missing git package in Dockerfile
-- Fix docker ignore values
-- Fix image build failure on docker hub
-- Fix build issue in Dockerfile
-- Fix bugs and document the 'where' clause
-- Fix perf issue with inflections
-### Optimize
-- Optimize docker image
-### Pack
-- Pack web UI with app into a single binary
-### Upgrade
-- Upgrade web UI packages
 <a name="0.3"></a>
 ## 0.3 - 2019-03-24
-### First
-- First commit
-### Fix
-- Fix license to MIT
-[Unreleased]: https://github.com/dosco/super-graph/compare/v0.12.6...HEAD
+[Unreleased]: https://github.com/dosco/super-graph/compare/v0.13.22...HEAD
+[v0.13.22]: https://github.com/dosco/super-graph/compare/v0.13.21...v0.13.22
+[v0.13.21]: https://github.com/dosco/super-graph/compare/v0.13.20...v0.13.21
+[v0.13.20]: https://github.com/dosco/super-graph/compare/v0.13.19...v0.13.20
+[v0.13.19]: https://github.com/dosco/super-graph/compare/v0.13.18...v0.13.19
+[v0.13.18]: https://github.com/dosco/super-graph/compare/v0.13.17...v0.13.18
+[v0.13.17]: https://github.com/dosco/super-graph/compare/v0.13.16...v0.13.17
+[v0.13.16]: https://github.com/dosco/super-graph/compare/v0.13.15...v0.13.16
+[v0.13.15]: https://github.com/dosco/super-graph/compare/v0.13.14...v0.13.15
+[v0.13.14]: https://github.com/dosco/super-graph/compare/v0.13.13...v0.13.14
+[v0.13.13]: https://github.com/dosco/super-graph/compare/v0.13.12...v0.13.13
+[v0.13.12]: https://github.com/dosco/super-graph/compare/v0.13.11...v0.13.12
+[v0.13.11]: https://github.com/dosco/super-graph/compare/v0.13.10...v0.13.11
+[v0.13.10]: https://github.com/dosco/super-graph/compare/v0.13.9...v0.13.10
+[v0.13.9]: https://github.com/dosco/super-graph/compare/v0.13.8...v0.13.9
+[v0.13.8]: https://github.com/dosco/super-graph/compare/v0.13.7...v0.13.8
+[v0.13.7]: https://github.com/dosco/super-graph/compare/v0.13.6...v0.13.7
+[v0.13.6]: https://github.com/dosco/super-graph/compare/v0.13.5...v0.13.6
+[v0.13.5]: https://github.com/dosco/super-graph/compare/v0.13.4...v0.13.5
+[v0.13.4]: https://github.com/dosco/super-graph/compare/v0.13.3...v0.13.4
+[v0.13.3]: https://github.com/dosco/super-graph/compare/v0.13.2...v0.13.3
+[v0.13.2]: https://github.com/dosco/super-graph/compare/v0.13.1...v0.13.2
+[v0.13.1]: https://github.com/dosco/super-graph/compare/v0.13.0...v0.13.1
+[v0.13.0]: https://github.com/dosco/super-graph/compare/v0.12.49...v0.13.0
+[v0.12.49]: https://github.com/dosco/super-graph/compare/v0.12.48...v0.12.49
+[v0.12.48]: https://github.com/dosco/super-graph/compare/v0.12.47...v0.12.48
+[v0.12.47]: https://github.com/dosco/super-graph/compare/v0.12.46...v0.12.47
+[v0.12.46]: https://github.com/dosco/super-graph/compare/v0.12.45...v0.12.46
+[v0.12.45]: https://github.com/dosco/super-graph/compare/v0.12.44...v0.12.45
+[v0.12.44]: https://github.com/dosco/super-graph/compare/v0.12.43...v0.12.44
+[v0.12.43]: https://github.com/dosco/super-graph/compare/v0.12.42...v0.12.43
+[v0.12.42]: https://github.com/dosco/super-graph/compare/v0.12.41...v0.12.42
+[v0.12.41]: https://github.com/dosco/super-graph/compare/v0.12.40...v0.12.41
+[v0.12.40]: https://github.com/dosco/super-graph/compare/v0.12.39...v0.12.40
+[v0.12.39]: https://github.com/dosco/super-graph/compare/v0.12.38...v0.12.39
+[v0.12.38]: https://github.com/dosco/super-graph/compare/v0.12.37...v0.12.38
+[v0.12.37]: https://github.com/dosco/super-graph/compare/v0.12.36...v0.12.37
+[v0.12.36]: https://github.com/dosco/super-graph/compare/v0.12.35...v0.12.36
+[v0.12.35]: https://github.com/dosco/super-graph/compare/v0.12.34...v0.12.35
+[v0.12.34]: https://github.com/dosco/super-graph/compare/v0.12.33...v0.12.34
+[v0.12.33]: https://github.com/dosco/super-graph/compare/v0.12.32...v0.12.33
+[v0.12.32]: https://github.com/dosco/super-graph/compare/v0.12.31...v0.12.32
+[v0.12.31]: https://github.com/dosco/super-graph/compare/v0.12.30...v0.12.31
+[v0.12.30]: https://github.com/dosco/super-graph/compare/v0.12.29...v0.12.30
+[v0.12.29]: https://github.com/dosco/super-graph/compare/v0.12.28...v0.12.29
+[v0.12.28]: https://github.com/dosco/super-graph/compare/v0.12.27...v0.12.28
+[v0.12.27]: https://github.com/dosco/super-graph/compare/v0.12.26...v0.12.27
+[v0.12.26]: https://github.com/dosco/super-graph/compare/v0.12.25...v0.12.26
+[v0.12.25]: https://github.com/dosco/super-graph/compare/v0.12.24...v0.12.25
+[v0.12.24]: https://github.com/dosco/super-graph/compare/v0.12.23...v0.12.24
+[v0.12.23]: https://github.com/dosco/super-graph/compare/v0.12.22...v0.12.23
+[v0.12.22]: https://github.com/dosco/super-graph/compare/v0.12.21...v0.12.22
+[v0.12.21]: https://github.com/dosco/super-graph/compare/v0.12.20...v0.12.21
+[v0.12.20]: https://github.com/dosco/super-graph/compare/v0.12.19...v0.12.20
+[v0.12.19]: https://github.com/dosco/super-graph/compare/v0.12.18...v0.12.19
+[v0.12.18]: https://github.com/dosco/super-graph/compare/v0.12.17...v0.12.18
+[v0.12.17]: https://github.com/dosco/super-graph/compare/v0.12.16...v0.12.17
+[v0.12.16]: https://github.com/dosco/super-graph/compare/v0.12.15...v0.12.16
+[v0.12.15]: https://github.com/dosco/super-graph/compare/v0.12.14...v0.12.15
+[v0.12.14]: https://github.com/dosco/super-graph/compare/v0.12.13...v0.12.14
+[v0.12.13]: https://github.com/dosco/super-graph/compare/v0.12.12...v0.12.13
+[v0.12.12]: https://github.com/dosco/super-graph/compare/v0.12.11...v0.12.12
+[v0.12.11]: https://github.com/dosco/super-graph/compare/v0.12.10...v0.12.11
+[v0.12.10]: https://github.com/dosco/super-graph/compare/v0.12.9...v0.12.10
+[v0.12.9]: https://github.com/dosco/super-graph/compare/v0.12.8...v0.12.9
+[v0.12.8]: https://github.com/dosco/super-graph/compare/v0.12.7...v0.12.8
+[v0.12.7]: https://github.com/dosco/super-graph/compare/v0.12.6...v0.12.7
 [v0.12.6]: https://github.com/dosco/super-graph/compare/v0.12.5...v0.12.6
 [v0.12.5]: https://github.com/dosco/super-graph/compare/v0.12.4...v0.12.5
 [v0.12.4]: https://github.com/dosco/super-graph/compare/v0.12.3...v0.12.4

Dockerfile

@@ -1,10 +1,12 @@
 # stage: 1
 FROM node:10 as react-build
 WORKDIR /web
-COPY /cmd/internal/serv/web/ ./
+COPY /internal/serv/web/ ./
 RUN yarn
 RUN yarn build

 # stage: 2
 FROM golang:1.14-alpine as go-build
 RUN apk update && \
@@ -22,8 +24,8 @@ RUN chmod 755 /usr/local/bin/sops
 WORKDIR /app
 COPY . /app

-RUN mkdir -p /app/cmd/internal/serv/web/build
-COPY --from=react-build /web/build/ ./cmd/internal/serv/web/build
+RUN mkdir -p /app/internal/serv/web/build
+COPY --from=react-build /web/build/ ./internal/serv/web/build

 RUN go mod vendor
 RUN make build
@@ -31,6 +33,8 @@ RUN echo "Compressing binary, will take a bit of time..." && \
     upx --ultra-brute -qq super-graph && \
     upx -t super-graph

 # stage: 3
 FROM alpine:latest
 WORKDIR /
@@ -41,7 +45,7 @@ RUN mkdir -p /config
 COPY --from=go-build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
 COPY --from=go-build /app/config/* /config/
 COPY --from=go-build /app/super-graph .
-COPY --from=go-build /app/cmd/scripts/start.sh .
+COPY --from=go-build /app/internal/scripts/start.sh .
 COPY --from=go-build /usr/local/bin/sops .
 RUN chmod +x /super-graph

Makefile

@@ -12,30 +12,30 @@ endif
 export GO111MODULE := on

 # Build-time Go variables
-version = github.com/dosco/super-graph/serv.version
-gitBranch = github.com/dosco/super-graph/serv.gitBranch
-lastCommitSHA = github.com/dosco/super-graph/serv.lastCommitSHA
-lastCommitTime = github.com/dosco/super-graph/serv.lastCommitTime
+version = github.com/dosco/super-graph/internal/serv.version
+gitBranch = github.com/dosco/super-graph/internal/serv.gitBranch
+lastCommitSHA = github.com/dosco/super-graph/internal/serv.lastCommitSHA
+lastCommitTime = github.com/dosco/super-graph/internal/serv.lastCommitTime

 BUILD_FLAGS ?= -ldflags '-s -w -X ${lastCommitSHA}=${BUILD} -X "${lastCommitTime}=${BUILD_DATE}" -X "${version}=${BUILD_VERSION}" -X ${gitBranch}=${BUILD_BRANCH}'

 .PHONY: all build gen clean test run lint changlog release version help $(PLATFORMS)

 test:
-	@go test -v ./...
+	@go test -v -short -race ./...

 BIN_DIR := $(GOPATH)/bin
 GORICE := $(BIN_DIR)/rice
 GOLANGCILINT := $(BIN_DIR)/golangci-lint
 GITCHGLOG := $(BIN_DIR)/git-chglog
-WEB_BUILD_DIR := ./cmd/internal/serv/web/build/manifest.json
+WEB_BUILD_DIR := ./internal/serv/web/build/manifest.json

 $(GORICE):
	@GO111MODULE=off go get -u github.com/GeertJohan/go.rice/rice

 $(WEB_BUILD_DIR):
	@echo "First install Yarn and create a build of the web UI then re-run make install"
-	@echo "Run this command: yarn --cwd cmd/internal/serv/web/ build"
+	@echo "Run this command: yarn --cwd internal/serv/web/ build"
	@exit 1

 $(GITCHGLOG):
@@ -45,7 +45,7 @@ changelog: $(GITCHGLOG)
	@git-chglog $(ARGS)

 $(GOLANGCILINT):
-	@GO111MODULE=off curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh| sh -s -- -b $(GOPATH)/bin v1.21.0
+	@GO111MODULE=off curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh| sh -s -- -b $(GOPATH)/bin v1.25.1

 lint: $(GOLANGCILINT)
	@golangci-lint run ./... --skip-dirs-use-default
@@ -57,7 +57,7 @@ os = $(word 1, $@)
 $(PLATFORMS): lint test
	@mkdir -p release
-	@GOOS=$(os) GOARCH=amd64 go build $(BUILD_FLAGS) -o release/$(BINARY)-$(BUILD_VERSION)-$(os)-amd64 cmd/main.go
+	@GOOS=$(os) GOARCH=amd64 go build $(BUILD_FLAGS) -o release/$(BINARY)-$(BUILD_VERSION)-$(os)-amd64 main.go

 release: windows linux darwin
@@ -69,7 +69,7 @@ gen: $(GORICE) $(WEB_BUILD_DIR)
	@go generate ./...

 $(BINARY): clean
-	@go build $(BUILD_FLAGS) -o $(BINARY) cmd/main.go
+	@go build $(BUILD_FLAGS) -o $(BINARY) main.go

 clean:
	@rm -f $(BINARY)
@@ -77,11 +77,10 @@ clean:
 run: clean
	@go run $(BUILD_FLAGS) main.go $(ARGS)

-install: gen
-	@echo $(GOPATH)
+install: clean build
	@echo "Commit Hash: `git rev-parse HEAD`"
	@echo "Old Hash: `shasum $(GOPATH)/bin/$(BINARY) 2>/dev/null | cut -c -32`"
-	@go install $(BUILD_FLAGS) cmd
+	@mv $(BINARY) $(GOPATH)/bin/$(BINARY)
	@echo "New Hash:" `shasum $(GOPATH)/bin/$(BINARY) 2>/dev/null | cut -c -32`

 uninstall: clean

README.md

@@ -1,28 +1,68 @@
-<!-- <a href="https://supergraph.dev"><img src="https://supergraph.dev/hologram.svg" width="100" height="100" align="right" /></a> -->
-<img src="docs/.vuepress/public/super-graph.png" width="250" />
+<img src="docs/guide/.vuepress/public/super-graph.png" width="250" />

-### Build web products faster. Secure high performance GraphQL
+### Build web products faster. Secure high-performance GraphQL

-![Apache Public License 2.0](https://img.shields.io/github/license/dosco/super-graph.svg)
-![Docker build](https://img.shields.io/docker/cloud/build/dosco/super-graph.svg)
-![Cloud native](https://img.shields.io/badge/cloud--native-enabled-blue.svg)
+[![GoDoc](https://img.shields.io/badge/godoc-reference-5272B4.svg)](https://pkg.go.dev/github.com/dosco/super-graph/core?tab=doc)
+![Apache 2.0](https://img.shields.io/github/license/dosco/super-graph.svg?style=flat-square)
+![Docker build](https://img.shields.io/docker/cloud/build/dosco/super-graph.svg?style=flat-square)
+![Cloud native](https://img.shields.io/badge/cloud--native-enabled-blue.svg?style=flat-square)
 [![Discord Chat](https://img.shields.io/discord/628796009539043348.svg)](https://discord.gg/6pSWCTZ)

-## What is Super Graph
+## What's Super Graph?

-Is designed to 100x your developer productivity. Super Graph will instantly and without you writing code provide you a high performance and secure GraphQL API for Postgres DB. GraphQL queries are translated into a single fast SQL query. No more writing API code as you develop
-your web frontend just make the query you need and Super Graph will do the rest.
-Super Graph has a rich feature set like integrating with your existing Ruby on Rails apps, joining your DB with data from remote APIs, role and attribute based access control, support for JWT tokens, built-in DB mutations and seeding, and a lot more.
+Designed to 100x your developer productivity. Super Graph will instantly, and without you writing any code, provide a high performance GraphQL API for your PostgreSQL DB. GraphQL queries are compiled into a single fast SQL query. Super Graph is a Go library and a service; use it in your own code or run it as a separate service.

-![GraphQL](docs/.vuepress/public/graphql.png?raw=true "")
+## Using it as a service

-## The story of Super Graph?
+```console
+go get github.com/dosco/super-graph
+super-graph new <app_name>
+```

-After working on several products through my career I find that we spend way too much time on building API backends. Most APIs also require constant updating, this costs real time and money.
+## Using it in your own code

+```golang
+package main
+
+import (
+	"context"
+	"database/sql"
+	"fmt"
+	"log"
+
+	"github.com/dosco/super-graph/core"
+	_ "github.com/jackc/pgx/v4/stdlib"
+)
+
+func main() {
+	db, err := sql.Open("pgx", "postgres://postgres:@localhost:5432/example_db")
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	sg, err := core.NewSuperGraph(nil, db)
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	query := `
+		query {
+			posts {
+				id
+				title
+			}
+		}`
+
+	res, err := sg.GraphQL(context.Background(), query, nil)
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	fmt.Println(string(res.Data))
+}
+```
+
+## About Super Graph
+
+After working on several products through my career I found that we spend way too much time on building API backends. Most APIs also need constant updating, and this costs time and money.

 It's always the same thing, figure out what the UI needs then build an endpoint for it. Most API code involves struggling with an ORM to query a database and mangle the data into a shape that the UI expects to see.
@@ -30,36 +70,27 @@ I didn't want to write this code anymore, I wanted the computer to do it. Enter
 Having worked with compilers before I saw this as a compiler problem. Why not build a compiler that converts GraphQL to highly efficient SQL.

-This compiler is what sits at the heart of Super Graph with layers of useful functionality around it like authentication, remote joins, rails integration, database migrations and everything else needed for you to build production ready apps with it.
+This compiler is what sits at the heart of Super Graph, with layers of useful functionality around it like authentication, remote joins, rails integration, database migrations, and everything else needed for you to build production-ready apps with it.

 ## Features

 - Complex nested queries and mutations
 - Auto learns database tables and relationships
-- Role and Attribute based access control
-- Full text search and aggregations
+- Role and Attribute-based access control
+- Opaque cursor-based efficient pagination
+- Full-text search and aggregations
 - JWT tokens supported (Auth0, etc)
 - Join database queries with remote REST APIs
 - Also works with existing Ruby-On-Rails apps
 - Rails authentication supported (Redis, Memcache, Cookie)
 - A simple config file
-- High performance GO codebase
+- High performance Go codebase
 - Tiny docker image and low memory requirements
 - Fuzz tested for security
 - Database migrations tool
 - Database seeding tool
 - Works with Postgres and YugabyteDB

-## Get started
-
-```
-git clone https://github.com/dosco/super-graph
-cd ./super-graph
-make install
-super-graph new <app_name>
-```

 ## Documentation

 [supergraph.dev](https://supergraph.dev)
@@ -79,4 +110,3 @@ Twitter or Discord.

 Copyright (c) 2019-present Vikram Rangnekar
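The third argument to `sg.GraphQL` in the README example above carries the GraphQL variables; the example passes `nil`. A hedged sketch of passing variables follows, assuming the core package accepts them as a JSON document via `json.RawMessage` (check the core package docs); the query and variables are borrowed from the `config/allow.list` entries further down in this diff:

```golang
package main

import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"log"

	"github.com/dosco/super-graph/core"
	_ "github.com/jackc/pgx/v4/stdlib"
)

func main() {
	db, err := sql.Open("pgx", "postgres://postgres:@localhost:5432/example_db")
	if err != nil {
		log.Fatal(err)
	}

	sg, err := core.NewSuperGraph(nil, db)
	if err != nil {
		log.Fatal(err)
	}

	// Variables are passed as a JSON document; $beer below is resolved from
	// it (query and variables taken from config/allow.list in this PR).
	vars := json.RawMessage(`{"beer": "smoke"}`)

	query := `
		query beerSearch {
			products(search: $beer) {
				id
				name
			}
		}`

	res, err := sg.GraphQL(context.Background(), query, vars)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(string(res.Data))
}
```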


@@ -1,29 +0,0 @@
package serv
import (
"fmt"
"os"
"github.com/dosco/super-graph/config"
"github.com/spf13/cobra"
)
func cmdConfDump(cmd *cobra.Command, args []string) {
if len(args) != 1 {
cmd.Help() //nolint: errcheck
os.Exit(1)
}
fname := fmt.Sprintf("%s.%s", config.GetConfigName(), args[0])
conf, err := initConf()
if err != nil {
log.Fatalf("ERR failed to read config: %s", err)
}
if err := conf.WriteConfigAs(fname); err != nil {
log.Fatalf("ERR failed to write config: %s", err)
}
log.Printf("INF config dumped to ./%s", fname)
}


@@ -1,88 +0,0 @@
package serv
import (
"database/sql"
"fmt"
"time"
"github.com/dosco/super-graph/config"
_ "github.com/jackc/pgx/v4/stdlib"
)
func initConf() (*config.Config, error) {
return config.NewConfigWithLogger(confPath, log)
}
func initDB(c *config.Config) (*sql.DB, error) {
var db *sql.DB
var err error
cs := fmt.Sprintf("postgres://%s:%s@%s:%d/%s",
c.DB.User, c.DB.Password,
c.DB.Host, c.DB.Port, c.DB.DBName)
for i := 1; i < 10; i++ {
db, err = sql.Open("pgx", cs)
if err == nil {
break
}
time.Sleep(time.Duration(i*100) * time.Millisecond)
}
if err != nil {
return nil, err
}
return db, nil
// config, _ := pgxpool.ParseConfig("")
// config.ConnConfig.Host = c.DB.Host
// config.ConnConfig.Port = c.DB.Port
// config.ConnConfig.Database = c.DB.DBName
// config.ConnConfig.User = c.DB.User
// config.ConnConfig.Password = c.DB.Password
// config.ConnConfig.RuntimeParams = map[string]string{
// "application_name": c.AppName,
// "search_path": c.DB.Schema,
// }
// switch c.LogLevel {
// case "debug":
// config.ConnConfig.LogLevel = pgx.LogLevelDebug
// case "info":
// config.ConnConfig.LogLevel = pgx.LogLevelInfo
// case "warn":
// config.ConnConfig.LogLevel = pgx.LogLevelWarn
// case "error":
// config.ConnConfig.LogLevel = pgx.LogLevelError
// default:
// config.ConnConfig.LogLevel = pgx.LogLevelNone
// }
// config.ConnConfig.Logger = NewSQLLogger(logger)
// // if c.DB.MaxRetries != 0 {
// // opt.MaxRetries = c.DB.MaxRetries
// // }
// if c.DB.PoolSize != 0 {
// config.MaxConns = conf.DB.PoolSize
// }
// var db *pgxpool.Pool
// var err error
// for i := 1; i < 10; i++ {
// db, err = pgxpool.ConnectConfig(context.Background(), config)
// if err == nil {
// break
// }
// time.Sleep(time.Duration(i*100) * time.Millisecond)
// }
// if err != nil {
// return nil, err
// }
// return db, nil
}


@@ -1,37 +0,0 @@
package serv
import "net/http"
//nolint: errcheck
func introspect(w http.ResponseWriter) {
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{
"data": {
"__schema": {
"queryType": {
"name": "Query"
},
"mutationType": null,
"subscriptionType": null
}
},
"extensions":{
"tracing":{
"version":1,
"startTime":"2019-06-04T19:53:31.093Z",
"endTime":"2019-06-04T19:53:31.108Z",
"duration":15219720,
"execution": {
"resolvers": [{
"path": ["__schema"],
"parentType": "Query",
"fieldName": "__schema",
"returnType": "__Schema!",
"startOffset": 50950,
"duration": 17187
}]
}
}
}
}`))
}


@@ -1,7 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 841.9 595.3">
<g fill="#61DAFB">
<path d="M666.3 296.5c0-32.5-40.7-63.3-103.1-82.4 14.4-63.6 8-114.2-20.2-130.4-6.5-3.8-14.1-5.6-22.4-5.6v22.3c4.6 0 8.3.9 11.4 2.6 13.6 7.8 19.5 37.5 14.9 75.7-1.1 9.4-2.9 19.3-5.1 29.4-19.6-4.8-41-8.5-63.5-10.9-13.5-18.5-27.5-35.3-41.6-50 32.6-30.3 63.2-46.9 84-46.9V78c-27.5 0-63.5 19.6-99.9 53.6-36.4-33.8-72.4-53.2-99.9-53.2v22.3c20.7 0 51.4 16.5 84 46.6-14 14.7-28 31.4-41.3 49.9-22.6 2.4-44 6.1-63.6 11-2.3-10-4-19.7-5.2-29-4.7-38.2 1.1-67.9 14.6-75.8 3-1.8 6.9-2.6 11.5-2.6V78.5c-8.4 0-16 1.8-22.6 5.6-28.1 16.2-34.4 66.7-19.9 130.1-62.2 19.2-102.7 49.9-102.7 82.3 0 32.5 40.7 63.3 103.1 82.4-14.4 63.6-8 114.2 20.2 130.4 6.5 3.8 14.1 5.6 22.5 5.6 27.5 0 63.5-19.6 99.9-53.6 36.4 33.8 72.4 53.2 99.9 53.2 8.4 0 16-1.8 22.6-5.6 28.1-16.2 34.4-66.7 19.9-130.1 62-19.1 102.5-49.9 102.5-82.3zm-130.2-66.7c-3.7 12.9-8.3 26.2-13.5 39.5-4.1-8-8.4-16-13.1-24-4.6-8-9.5-15.8-14.4-23.4 14.2 2.1 27.9 4.7 41 7.9zm-45.8 106.5c-7.8 13.5-15.8 26.3-24.1 38.2-14.9 1.3-30 2-45.2 2-15.1 0-30.2-.7-45-1.9-8.3-11.9-16.4-24.6-24.2-38-7.6-13.1-14.5-26.4-20.8-39.8 6.2-13.4 13.2-26.8 20.7-39.9 7.8-13.5 15.8-26.3 24.1-38.2 14.9-1.3 30-2 45.2-2 15.1 0 30.2.7 45 1.9 8.3 11.9 16.4 24.6 24.2 38 7.6 13.1 14.5 26.4 20.8 39.8-6.3 13.4-13.2 26.8-20.7 39.9zm32.3-13c5.4 13.4 10 26.8 13.8 39.8-13.1 3.2-26.9 5.9-41.2 8 4.9-7.7 9.8-15.6 14.4-23.7 4.6-8 8.9-16.1 13-24.1zM421.2 430c-9.3-9.6-18.6-20.3-27.8-32 9 .4 18.2.7 27.5.7 9.4 0 18.7-.2 27.8-.7-9 11.7-18.3 22.4-27.5 32zm-74.4-58.9c-14.2-2.1-27.9-4.7-41-7.9 3.7-12.9 8.3-26.2 13.5-39.5 4.1 8 8.4 16 13.1 24 4.7 8 9.5 15.8 14.4 23.4zM420.7 163c9.3 9.6 18.6 20.3 27.8 32-9-.4-18.2-.7-27.5-.7-9.4 0-18.7.2-27.8.7 9-11.7 18.3-22.4 27.5-32zm-74 58.9c-4.9 7.7-9.8 15.6-14.4 23.7-4.6 8-8.9 16-13 24-5.4-13.4-10-26.8-13.8-39.8 13.1-3.1 26.9-5.8 41.2-7.9zm-90.5 125.2c-35.4-15.1-58.3-34.9-58.3-50.6 0-15.7 22.9-35.6 58.3-50.6 8.6-3.7 18-7 27.7-10.1 5.7 19.6 13.2 40 22.5 60.9-9.2 20.8-16.6 41.1-22.2 60.6-9.9-3.1-19.3-6.5-28-10.2zM310 490c-13.6-7.8-19.5-37.5-14.9-75.7 1.1-9.4 2.9-19.3 5.1-29.4 19.6 4.8 41 8.5 63.5 10.9 13.5 18.5 27.5 35.3 41.6 50-32.6 30.3-63.2 46.9-84 46.9-4.5-.1-8.3-1-11.3-2.7zm237.2-76.2c4.7 38.2-1.1 67.9-14.6 75.8-3 1.8-6.9 2.6-11.5 2.6-20.7 0-51.4-16.5-84-46.6 14-14.7 28-31.4 41.3-49.9 22.6-2.4 44-6.1 63.6-11 2.3 10.1 4.1 19.8 5.2 29.1zm38.5-66.7c-8.6 3.7-18 7-27.7 10.1-5.7-19.6-13.2-40-22.5-60.9 9.2-20.8 16.6-41.1 22.2-60.6 9.9 3.1 19.3 6.5 28.1 10.2 35.4 15.1 58.3 34.9 58.3 50.6-.1 15.7-23 35.6-58.4 50.6zM320.8 78.4z"/>
<circle cx="420.9" cy="296.5" r="45.7"/>
<path d="M520.5 78.1z"/>
</g>
</svg>


config/allow.list Normal file

@@ -0,0 +1,755 @@
# http://localhost:8080/
variables {
"data": [
{
"name": "Protect Ya Neck",
"created_at": "now",
"updated_at": "now"
},
{
"name": "Enter the Wu-Tang",
"created_at": "now",
"updated_at": "now"
}
]
}
mutation {
products(insert: $data) {
id
name
}
}
variables {
"update": {
"name": "Wu-Tang",
"description": "No description needed"
},
"product_id": 1
}
mutation {
products(id: $product_id, update: $update) {
id
name
description
}
}
query {
users {
id
email
picture: avatar
products(limit: 2, where: {price: {gt: 10}}) {
id
name
description
}
}
}
variables {
"data": [
{
"name": "Gumbo1",
"created_at": "now",
"updated_at": "now"
},
{
"name": "Gumbo2",
"created_at": "now",
"updated_at": "now"
}
]
}
mutation {
products(id: 199, delete: true) {
id
name
}
}
query {
products {
id
name
user {
email
}
}
}
variables {
"data": {
"product_id": 5
}
}
mutation {
products(id: $product_id, delete: true) {
id
name
}
}
query {
products {
id
name
price
users {
email
}
}
}
variables {
"data": {
"email": "gfk@myspace.com",
"full_name": "Ghostface Killah",
"created_at": "now",
"updated_at": "now"
}
}
mutation {
user(insert: $data) {
id
}
}
variables {
"update": {
"name": "Helloo",
"description": "World \u003c\u003e"
},
"user": 123
}
mutation {
products(id: 5, update: $update) {
id
name
description
}
}
variables {
"data": {
"name": "WOOO",
"price": 50.5
}
}
mutation {
products(insert: $data) {
id
name
}
}
query getProducts {
products {
id
name
price
description
}
}
query {
deals {
id
name
price
}
}
variables {
"beer": "smoke"
}
query beerSearch {
products(search: $beer) {
id
name
search_rank
search_headline_description
}
}
query {
user {
id
full_name
}
}
variables {
"data": {
"email": "goo1@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now"
}
}
}
mutation {
user(insert: $data) {
id
full_name
email
product {
id
name
price
}
}
}
variables {
"data": {
"email": "goo12@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": [
{
"name": "Banana 1",
"price": 1.1,
"created_at": "now",
"updated_at": "now"
},
{
"name": "Banana 2",
"price": 2.2,
"created_at": "now",
"updated_at": "now"
}
]
}
}
mutation {
user(insert: $data) {
id
full_name
email
products {
id
name
price
}
}
}
variables {
"data": {
"name": "Banana 3",
"price": 1.1,
"created_at": "now",
"updated_at": "now",
"user": {
"email": "a2@a.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now"
}
}
}
mutation {
products(insert: $data) {
id
name
price
user {
id
full_name
email
}
}
}
variables {
"update": {
"name": "my_name",
"description": "my_desc"
}
}
mutation {
product(id: 15, update: $update, where: {id: {eq: 1}}) {
id
name
}
}
variables {
"update": {
"name": "my_name",
"description": "my_desc"
}
}
mutation {
product(update: $update, where: {id: {eq: 1}}) {
id
name
}
}
variables {
"update": {
"name": "my_name 2",
"description": "my_desc 2"
}
}
mutation {
product(update: $update, where: {id: {eq: 1}}) {
id
name
description
}
}
variables {
"data": {
"sale_type": "tuutuu",
"quantity": 5,
"due_date": "now",
"customer": {
"email": "thedude1@rug.com",
"full_name": "The Dude"
},
"product": {
"name": "Apple",
"price": 1.25
}
}
}
mutation {
purchase(update: $data, id: 5) {
sale_type
quantity
due_date
customer {
id
full_name
email
}
product {
id
name
price
}
}
}
variables {
"data": {
"email": "thedude@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": {
"where": {
"id": 2
},
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now"
}
}
}
mutation {
user(update: $data, where: {id: {eq: 8}}) {
id
full_name
email
product {
id
name
price
}
}
}
variables {
"data": {
"email": "thedude@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": {
"where": {
"id": 2
},
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now"
}
}
}
query {
user(where: {id: {eq: 8}}) {
id
product {
id
name
price
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"email": "thedude@rug.com"
}
}
}
query {
user {
email
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"email": "booboo@demo.com"
}
}
}
mutation {
product(update: $data, id: 6) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"email": "booboo@demo.com"
}
}
}
query {
product(id: 6) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": {
"email": "thedude123@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": {
"connect": {
"id": 7
},
"disconnect": {
"id": 8
}
}
}
}
mutation {
user(update: $data, id: 6) {
id
full_name
email
product {
id
name
price
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"connect": {
"id": 5,
"email": "test@test.com"
}
}
}
}
mutation {
product(update: $data, id: 9) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": {
"email": "thed44ude@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": {
"connect": {
"id": 5
}
}
}
}
mutation {
user(insert: $data) {
id
full_name
email
product {
id
name
price
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"connect": {
"id": 5
}
}
}
}
mutation {
product(insert: $data) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": [
{
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"connect": {
"id": 6
}
}
},
{
"name": "Coconut",
"price": 2.25,
"created_at": "now",
"updated_at": "now",
"user": {
"connect": {
"id": 3
}
}
}
]
}
mutation {
products(insert: $data) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": [
{
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now"
},
{
"name": "Coconut",
"price": 2.25,
"created_at": "now",
"updated_at": "now"
}
]
}
mutation {
products(insert: $data) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"user": {
"connect": {
"id": 5,
"email": "test@test.com"
}
}
}
}
mutation {
product(update: $data, id: 9) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"user": {
"connect": {
"id": 5
}
}
}
}
mutation {
product(update: $data, id: 9) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"user": {
"disconnect": {
"id": 5
}
}
}
}
mutation {
product(update: $data, id: 9) {
id
name
user_id
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"user": {
"disconnect": {
"id": 5
}
}
}
}
mutation {
product(update: $data, id: 2) {
id
name
user_id
}
}


@@ -1,505 +0,0 @@
// Package config provides the config values needed for Super Graph
// For detailed documentation visit https://supergraph.dev
package config
import (
"fmt"
"log"
"os"
"path"
"strings"
"time"
"github.com/gobuffalo/flect"
"github.com/spf13/viper"
)
const (
LogLevelNone int = iota
LogLevelInfo
LogLevelWarn
LogLevelError
LogLevelDebug
)
// Config struct holds the Super Graph config values
type Config struct {
Core `mapstructure:",squash"`
Serv `mapstructure:",squash"`
vi *viper.Viper
log *log.Logger
logLevel int
roles map[string]*Role
abacEnabled bool
valid bool
}
// Core struct contains core specific config value
type Core struct {
Env string
Production bool
LogLevel string `mapstructure:"log_level"`
SecretKey string `mapstructure:"secret_key"`
SetUserID bool `mapstructure:"set_user_id"`
Vars map[string]string `mapstructure:"variables"`
Blocklist []string
Tables []Table
RolesQuery string `mapstructure:"roles_query"`
Roles []Role
}
// Serv struct contains config values used by the Super Graph service
type Serv struct {
AppName string `mapstructure:"app_name"`
HostPort string `mapstructure:"host_port"`
Host string
Port string
HTTPGZip bool `mapstructure:"http_compress"`
WebUI bool `mapstructure:"web_ui"`
EnableTracing bool `mapstructure:"enable_tracing"`
UseAllowList bool `mapstructure:"use_allow_list"`
WatchAndReload bool `mapstructure:"reload_on_config_change"`
AuthFailBlock bool `mapstructure:"auth_fail_block"`
SeedFile string `mapstructure:"seed_file"`
MigrationsPath string `mapstructure:"migrations_path"`
AllowedOrigins []string `mapstructure:"cors_allowed_origins"`
DebugCORS bool `mapstructure:"cors_debug"`
Inflections map[string]string
Auth Auth
Auths []Auth
DB struct {
Type string
Host string
Port uint16
DBName string
User string
Password string
Schema string
PoolSize int32 `mapstructure:"pool_size"`
MaxRetries int `mapstructure:"max_retries"`
PingTimeout time.Duration `mapstructure:"ping_timeout"`
} `mapstructure:"database"`
Actions []Action
}
// Auth struct contains authentication related config values used by the Super Graph service
type Auth struct {
Name string
Type string
Cookie string
CredsInHeader bool `mapstructure:"creds_in_header"`
Rails struct {
Version string
SecretKeyBase string `mapstructure:"secret_key_base"`
URL string
Password string
MaxIdle int `mapstructure:"max_idle"`
MaxActive int `mapstructure:"max_active"`
Salt string
SignSalt string `mapstructure:"sign_salt"`
AuthSalt string `mapstructure:"auth_salt"`
}
JWT struct {
Provider string
Secret string
PubKeyFile string `mapstructure:"public_key_file"`
PubKeyType string `mapstructure:"public_key_type"`
}
Header struct {
Name string
Value string
Exists bool
}
}
// Column struct defines a database column
type Column struct {
Name string
Type string
ForeignKey string `mapstructure:"related_to"`
}
// Table struct defines a database table
type Table struct {
Name string
Table string
Blocklist []string
Remotes []Remote
Columns []Column
}
// Remote struct defines a remote API endpoint
type Remote struct {
Name string
ID string
Path string
URL string
Debug bool
PassHeaders []string `mapstructure:"pass_headers"`
SetHeaders []struct {
Name string
Value string
} `mapstructure:"set_headers"`
}
// Query struct contains access control values for query operations
type Query struct {
Limit int
Filters []string
Columns []string
DisableFunctions bool `mapstructure:"disable_functions"`
Block bool
}
// Insert struct contains access control values for insert operations
type Insert struct {
Filters []string
Columns []string
Presets map[string]string
Block bool
}
// Insert struct contains access control values for update operations
type Update struct {
Filters []string
Columns []string
Presets map[string]string
Block bool
}
// Delete struct contains access control values for delete operations
type Delete struct {
Filters []string
Columns []string
Block bool
}
// RoleTable struct contains role specific access control values for a database table
type RoleTable struct {
Name string
Query Query
Insert Insert
Update Update
Delete Delete
}
// Role struct contains role specific access control values for for all database tables
type Role struct {
Name string
Match string
Tables []RoleTable
tablesMap map[string]*RoleTable
}
// Action struct contains config values for a Super Graph service action
type Action struct {
Name string
SQL string
AuthName string `mapstructure:"auth_name"`
}
// NewConfig function reads in the config file for the environment specified in the GO_ENV
// environment variable. This is the best way to create a new Super Graph config.
func NewConfig(path string) (*Config, error) {
return NewConfigWithLogger(path, log.New(os.Stdout, "", 0))
}
// NewConfigWithLogger function reads in the config file for the environment specified in the GO_ENV
// environment variable. This is the best way to create a new Super Graph config.
func NewConfigWithLogger(path string, logger *log.Logger) (*Config, error) {
vi := newViper(path, GetConfigName())
if err := vi.ReadInConfig(); err != nil {
return nil, err
}
inherits := vi.GetString("inherits")
if len(inherits) != 0 {
vi = newViper(path, inherits)
if err := vi.ReadInConfig(); err != nil {
return nil, err
}
if vi.IsSet("inherits") {
return nil, fmt.Errorf("inherited config (%s) cannot itself inherit (%s)",
inherits,
vi.GetString("inherits"))
}
vi.SetConfigName(GetConfigName())
if err := vi.MergeInConfig(); err != nil {
return nil, err
}
}
c := &Config{log: logger, vi: vi}
if err := vi.Unmarshal(&c); err != nil {
return nil, fmt.Errorf("failed to decode config, %v", err)
}
if err := c.init(); err != nil {
return nil, fmt.Errorf("failed to initialize config: %w", err)
}
return c, nil
}
// NewConfigFrom function initializes a Config struct that you manually created
// so it can be used by Super Graph
func NewConfigFrom(c *Config, configPath string, logger *log.Logger) (*Config, error) {
c.vi = newViper(configPath, GetConfigName())
c.log = logger
if err := c.init(); err != nil {
return nil, err
}
return c, nil
}
func newViper(configPath, filename string) *viper.Viper {
vi := viper.New()
vi.SetEnvPrefix("SG")
vi.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
vi.AutomaticEnv()
vi.SetConfigName(filename)
vi.AddConfigPath(configPath)
vi.AddConfigPath("./config")
vi.SetDefault("host_port", "0.0.0.0:8080")
vi.SetDefault("web_ui", false)
vi.SetDefault("enable_tracing", false)
vi.SetDefault("auth_fail_block", "always")
vi.SetDefault("seed_file", "seed.js")
vi.SetDefault("database.type", "postgres")
vi.SetDefault("database.host", "localhost")
vi.SetDefault("database.port", 5432)
vi.SetDefault("database.user", "postgres")
vi.SetDefault("database.schema", "public")
vi.SetDefault("env", "development")
vi.BindEnv("env", "GO_ENV") //nolint: errcheck
vi.BindEnv("host", "HOST") //nolint: errcheck
vi.BindEnv("port", "PORT") //nolint: errcheck
vi.SetDefault("auth.rails.max_idle", 80)
vi.SetDefault("auth.rails.max_active", 12000)
return vi
}
func (c *Config) init() error {
switch c.Core.LogLevel {
case "debug":
c.logLevel = LogLevelDebug
case "error":
c.logLevel = LogLevelError
case "warn":
c.logLevel = LogLevelWarn
case "info":
c.logLevel = LogLevelInfo
default:
c.logLevel = LogLevelNone
}
if c.UseAllowList {
c.Production = true
}
for k, v := range c.Inflections {
flect.AddPlural(k, v)
}
// Tables: Validate and sanitize
tm := make(map[string]struct{})
for i := 0; i < len(c.Tables); i++ {
t := &c.Tables[i]
t.Name = flect.Pluralize(strings.ToLower(t.Name))
if _, ok := tm[t.Name]; ok {
c.Tables = append(c.Tables[:i], c.Tables[i+1:]...)
c.log.Printf("WRN duplicate table found: %s", t.Name)
}
tm[t.Name] = struct{}{}
t.Table = flect.Pluralize(strings.ToLower(t.Table))
}
// Variables: Validate and sanitize
for k, v := range c.Vars {
c.Vars[k] = sanitize(v)
}
// Roles: validate and sanitize
c.RolesQuery = sanitize(c.RolesQuery)
c.roles = make(map[string]*Role)
for i := 0; i < len(c.Roles); i++ {
r := &c.Roles[i]
r.Name = strings.ToLower(r.Name)
if _, ok := c.roles[r.Name]; ok {
c.Roles = append(c.Roles[:i], c.Roles[i+1:]...)
c.log.Printf("WRN duplicate role found: %s", r.Name)
}
r.Match = sanitize(r.Match)
r.tablesMap = make(map[string]*RoleTable)
for n, table := range r.Tables {
r.tablesMap[table.Name] = &r.Tables[n]
}
c.roles[r.Name] = r
}
if _, ok := c.roles["user"]; !ok {
u := Role{Name: "user"}
c.Roles = append(c.Roles, u)
c.roles["user"] = &u
}
if _, ok := c.roles["anon"]; !ok {
c.log.Printf("WRN unauthenticated requests will be blocked. no role 'anon' defined")
c.AuthFailBlock = true
}
if len(c.RolesQuery) == 0 {
c.log.Printf("WRN roles_query not defined: attribute based access control disabled")
}
if len(c.RolesQuery) == 0 {
c.abacEnabled = false
} else {
switch len(c.Roles) {
case 0, 1:
c.abacEnabled = false
case 2:
_, ok1 := c.roles["anon"]
_, ok2 := c.roles["user"]
c.abacEnabled = !(ok1 && ok2)
default:
c.abacEnabled = true
}
}
// Auths: validate and sanitize
am := make(map[string]struct{})
for i := 0; i < len(c.Auths); i++ {
a := &c.Auths[i]
a.Name = strings.ToLower(a.Name)
if _, ok := am[a.Name]; ok {
c.Auths = append(c.Auths[:i], c.Auths[i+1:]...)
c.log.Printf("WRN duplicate auth found: %s", a.Name)
}
am[a.Name] = struct{}{}
}
// Actions: validate and sanitize
axm := make(map[string]struct{})
for i := 0; i < len(c.Actions); i++ {
a := &c.Actions[i]
a.Name = strings.ToLower(a.Name)
a.AuthName = strings.ToLower(a.AuthName)
if _, ok := axm[a.Name]; ok {
c.Actions = append(c.Actions[:i], c.Actions[i+1:]...)
c.log.Printf("WRN duplicate action found: %s", a.Name)
}
if _, ok := am[a.AuthName]; !ok {
c.Actions = append(c.Actions[:i], c.Actions[i+1:]...)
c.log.Printf("WRN invalid auth_name '%s' for auth: %s", a.AuthName, a.Name)
}
axm[a.Name] = struct{}{}
}
c.valid = true
return nil
}
// GetDBTableAliases function returns a map with database tables as keys
// and a list of aliases as values
func (c *Config) GetDBTableAliases() map[string][]string {
m := make(map[string][]string, len(c.Tables))
for i := range c.Tables {
t := c.Tables[i]
if len(t.Table) == 0 || len(t.Columns) != 0 {
continue
}
m[t.Table] = append(m[t.Table], t.Name)
}
return m
}
// IsABACEnabled function returns true if attribute based access control is enabled
func (c *Config) IsABACEnabled() bool {
return c.abacEnabled
}
// IsAnonRoleDefined function returns true if the config has configuration for the `anon` role
func (c *Config) IsAnonRoleDefined() bool {
_, ok := c.roles["anon"]
return ok
}
// GetRole function returns the Role struct by name
func (c *Config) GetRole(name string) *Role {
role := c.roles[name]
return role
}
// ConfigPathUsed function returns the path to the current config file (excluding filename)
func (c *Config) ConfigPathUsed() string {
return path.Dir(c.vi.ConfigFileUsed())
}
// WriteConfigAs function writes the config to a file
// Format defined by extension (eg: .yml, .json)
func (c *Config) WriteConfigAs(fname string) error {
return c.vi.WriteConfigAs(fname)
}
// Log function returns the logger
func (c *Config) Log() *log.Logger {
return c.log
}
// LogLevel function returns the log level
func (c *Config) LogLevel() int {
return c.logLevel
}
// IsValid function returns true if the Config struct is initialized and valid
func (c *Config) IsValid() bool {
return c.valid
}
// GetTable function returns the RoleTable struct for a Role by table name
func (r *Role) GetTable(name string) *RoleTable {
table := r.tablesMap[name]
return table
}
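For reference, a minimal sketch of how the config package above is meant to be consumed; the ./config path and the "user" role name are illustrative placeholders, not taken from this changeset:
package main

import (
	"log"

	"github.com/dosco/super-graph/config"
)

func main() {
	// NewConfig picks dev.yml, prod.yml, etc. based on GO_ENV (see GetConfigName).
	c, err := config.NewConfig("./config")
	if err != nil {
		log.Fatal(err)
	}
	// Role access rules are parsed from the roles: section of the file.
	if r := c.GetRole("user"); r != nil {
		log.Println("user role tables:", len(r.Tables))
	}
}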


@ -1,13 +0,0 @@
package config
import (
"testing"
)
func TestInitConf(t *testing.T) {
_, err := NewConfig("../examples/rails-app/config/supergraph")
if err != nil {
t.Fatal(err.Error())
}
}

config/dev.yml Normal file

@ -0,0 +1,233 @@
app_name: "Super Graph Development"
host_port: 0.0.0.0:8080
web_ui: true
# debug, error, warn, info, none
log_level: "debug"
# enable or disable http compression (uses gzip)
http_compress: true
# When production mode is 'true' only queries
# from the allow list are permitted.
# When it's 'false' all queries are saved to
# the allow list in ./config/allow.list
production: false
# Throw a 401 on auth failure for queries that need auth
auth_fail_block: false
# Latency tracing for database queries and remote joins
# the resulting latency information is returned with the
# response
enable_tracing: true
# Watch the config folder and reload Super Graph
# with the new configs when a change is detected
reload_on_config_change: true
# File that points to the database seeding script
# seed_file: seed.js
# Path pointing to where the migrations can be found
migrations_path: ./migrations
# Secret key for general encryption operations like
# encrypting the cursor data
secret_key: supercalifajalistics
# CORS: A list of origins a cross-domain request can be executed from.
# If the special * value is present in the list, all origins will be allowed.
# An origin may contain a wildcard (*) to replace 0 or more
# characters (i.e.: http://*.domain.com).
cors_allowed_origins: ["*"]
# Debug Cross Origin Resource Sharing requests
cors_debug: true
# Default API path prefix is /api you can change it if you like
# api_path: "/data"
# Cache-Control header can help cache queries if your CDN supports cache-control
# on POST requests (does not work with mutations)
# cache_control: "public, max-age=300, s-maxage=600"
# Postgres related environment Variables
# SG_DATABASE_HOST
# SG_DATABASE_PORT
# SG_DATABASE_USER
# SG_DATABASE_PASSWORD
# Auth related environment Variables
# SG_AUTH_RAILS_COOKIE_SECRET_KEY_BASE
# SG_AUTH_RAILS_REDIS_URL
# SG_AUTH_RAILS_REDIS_PASSWORD
# SG_AUTH_JWT_PUBLIC_KEY_FILE
# inflections:
# person: people
# sheep: sheep
auth:
# Can be 'rails' or 'jwt'
type: rails
cookie: _app_session
# Comment this out if you want to disable setting
# the user_id via a header for testing.
# Disable in production
creds_in_header: true
rails:
# Rails version, used for reading the
# various cookie formats.
version: 5.2
# Found in 'Rails.application.config.secret_key_base'
secret_key_base: 0a248500a64c01184edb4d7ad3a805488f8097ac761b76aaa6c17c01dcb7af03a2f18ba61b2868134b9c7b79a122bc0dadff4367414a2d173297bfea92be5566
# Remote cookie store. (memcache or redis)
# url: redis://redis:6379
# password: ""
# max_idle: 80
# max_active: 12000
# In most cases you don't need these
# salt: "encrypted cookie"
# sign_salt: "signed encrypted cookie"
# auth_salt: "authenticated encrypted cookie"
# jwt:
# provider: auth0
# secret: abc335bfcfdb04e50db5bb0a4d67ab9
# public_key_file: /secrets/public_key.pem
# public_key_type: ecdsa #rsa
database:
type: postgres
host: db
port: 5432
dbname: app_development
user: postgres
password: postgres
#schema: "public"
#pool_size: 10
#max_retries: 0
#log_level: "debug"
# Set session variable "user.id" to the user id
# Enable this if you need the user id in triggers, etc
set_user_id: false
# database ping timeout is used for db health checking
ping_timeout: 1m
# Define additional variables here to be used with filters
variables:
admin_account_id: "5"
# Field and table names that you wish to block
blocklist:
- ar_internal_metadata
- schema_migrations
- secret
- password
- encrypted
- token
tables:
- name: customers
remotes:
- name: payments
id: stripe_id
url: http://rails_app:3000/stripe/$id
path: data
# debug: true
pass_headers:
- cookie
set_headers:
- name: Host
value: 0.0.0.0
# - name: Authorization
# value: Bearer <stripe_api_key>
- # You can create new fields that have a
# real db table backing them
name: me
table: users
- name: deals
table: products
- name: users
columns:
- name: email
related_to: products.name
roles_query: "SELECT * FROM users WHERE id = $user_id"
roles:
- name: anon
tables:
- name: products
query:
limit: 10
columns: ["id", "name", "description" ]
aggregation: false
insert:
block: false
update:
block: false
delete:
block: false
- name: deals
query:
limit: 3
aggregation: false
- name: purchases
query:
limit: 3
aggregation: false
- name: user
tables:
- name: users
query:
filters: ["{ id: { _eq: $user_id } }"]
- name: products
query:
limit: 50
filters: ["{ user_id: { eq: $user_id } }"]
disable_functions: false
insert:
filters: ["{ user_id: { eq: $user_id } }"]
presets:
- user_id: "$user_id"
- created_at: "now"
- updated_at: "now"
update:
filters: ["{ user_id: { eq: $user_id } }"]
columns:
- id
- name
presets:
- updated_at: "now"
delete:
block: true
- name: admin
match: id = 1000
tables:
- name: users
filters: []

config/prod.yml Normal file

@ -0,0 +1,67 @@
# Inherit config from this other config file
# so I only need to overwrite some values
inherits: dev
app_name: "Super Graph Production"
host_port: 0.0.0.0:8080
web_ui: false
# debug, error, warn, info, none
log_level: "info"
# enable or disable http compression (uses gzip)
http_compress: true
# When production mode is 'true' only queries
# from the allow list are permitted.
# When it's 'false' all queries are saved to
# the allow list in ./config/allow.list
production: true
# Throw a 401 on auth failure for queries that need auth
auth_fail_block: true
# Latency tracing for database queries and remote joins
# the resulting latency information is returned with the
# response
enable_tracing: true
# File that points to the database seeding script
# seed_file: seed.js
# Path pointing to where the migrations can be found
# migrations_path: ./migrations
# Secret key for general encryption operations like
# encrypting the cursor data
# secret_key: supercalifajalistics
# Postgres related environment Variables
# SG_DATABASE_HOST
# SG_DATABASE_PORT
# SG_DATABASE_USER
# SG_DATABASE_PASSWORD
# Auth related environment Variables
# SG_AUTH_RAILS_COOKIE_SECRET_KEY_BASE
# SG_AUTH_RAILS_REDIS_URL
# SG_AUTH_RAILS_REDIS_PASSWORD
# SG_AUTH_JWT_PUBLIC_KEY_FILE
database:
type: postgres
host: db
port: 5432
dbname: app_production
user: postgres
password: postgres
#pool_size: 10
#max_retries: 0
#log_level: "debug"
# Set session variable "user.id" to the user id
# Enable this if you need the user id in triggers, etc
set_user_id: false
# database ping timeout is used for db health checking
ping_timeout: 5m
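
Because of the inherits: dev line at the top, this file is merged over dev.yml: any key set here wins, everything else falls through to the development values. A minimal sketch of loading it, assuming GO_ENV is set to production (so the prod config is selected) and that the LogLevel* constants shown in the config package earlier are exported:
package main

import (
	"fmt"
	"log"

	"github.com/dosco/super-graph/config"
)

func main() {
	// Reads prod.yml (GO_ENV=production), then merges it over its parent dev.yml.
	c, err := config.NewConfig("./config")
	if err != nil {
		log.Fatal(err)
	}
	// prod.yml sets log_level: "info", overriding dev.yml's "debug".
	fmt.Println(c.LogLevel() == config.LogLevelInfo)
}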

config/seed.js Normal file

@ -0,0 +1,116 @@
var user_count = 10
var customer_count = 100
var product_count = 50
var purchase_count = 100
var users = []
var customers = []
var products = []
for (i = 0; i < user_count; i++) {
var pwd = fake.password()
var data = {
full_name: fake.name(),
avatar: fake.avatar_url(200),
phone: fake.phone(),
email: fake.email(),
password: pwd,
password_confirmation: pwd,
created_at: "now",
updated_at: "now"
}
var res = graphql(" \
mutation { \
user(insert: $data) { \
id \
} \
}", { data: data })
users.push(res.user)
}
for (i = 0; i < product_count; i++) {
var n = Math.floor(Math.random() * users.length)
var user = users[n]
var desc = [
fake.beer_style(),
fake.beer_hop(),
fake.beer_yeast(),
fake.beer_ibu(),
fake.beer_alcohol(),
fake.beer_blg(),
].join(", ")
var data = {
name: fake.beer_name(),
description: desc,
price: fake.price()
//user_id: user.id,
//created_at: "now",
//updated_at: "now"
}
var res = graphql(" \
mutation { \
product(insert: $data) { \
id \
} \
}", { data: data }, {
user_id: 5
})
products.push(res.product)
}
for (i = 0; i < customer_count; i++) {
var pwd = fake.password()
var data = {
stripe_id: "CUS-" + fake.uuid(),
full_name: fake.name(),
phone: fake.phone(),
email: fake.email(),
password: pwd,
password_confirmation: pwd,
created_at: "now",
updated_at: "now"
}
var res = graphql(" \
mutation { \
customer(insert: $data) { \
id \
} \
}", { data: data })
customers.push(res.customer)
}
for (i = 0; i < purchase_count; i++) {
var sale_type = fake.rand_string(["rented", "bought"])
if (sale_type === "rented") {
var due_date = fake.date()
var returned = fake.date()
}
var data = {
customer_id: customers[Math.floor(Math.random() * customer_count)].id,
product_id: products[Math.floor(Math.random() * product_count)].id,
sale_type: sale_type,
quantity: Math.floor(Math.random() * 10),
due_date: due_date,
returned: returned,
created_at: "now",
updated_at: "now"
}
var res = graphql(" \
mutation { \
purchase(insert: $data) { \
id \
} \
}", { data: data })
console.log(res)
}


@ -1,52 +0,0 @@
package config
import (
"os"
"regexp"
"strings"
"unicode"
)
var (
varRe1 = regexp.MustCompile(`(?mi)\$([a-zA-Z0-9_.]+)`)
varRe2 = regexp.MustCompile(`\{\{([a-zA-Z0-9_.]+)\}\}`)
)
func sanitize(s string) string {
s0 := varRe1.ReplaceAllString(s, `{{$1}}`)
s1 := strings.Map(func(r rune) rune {
if unicode.IsSpace(r) {
return ' '
}
return r
}, s0)
return varRe2.ReplaceAllStringFunc(s1, func(m string) string {
return strings.ToLower(m)
})
}
func GetConfigName() string {
if len(os.Getenv("GO_ENV")) == 0 {
return "dev"
}
ge := strings.ToLower(os.Getenv("GO_ENV"))
switch {
case strings.HasPrefix(ge, "pro"):
return "prod"
case strings.HasPrefix(ge, "sta"):
return "stage"
case strings.HasPrefix(ge, "tes"):
return "test"
case strings.HasPrefix(ge, "dev"):
return "dev"
}
return ge
}
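The prefix matching above means any GO_ENV value starting with "pro" maps to the prod config, "sta" to stage, and so on. A small sketch, assuming it lives in a test file for this config package with os and fmt imported:
func ExampleGetConfigName() {
	os.Setenv("GO_ENV", "production") // prefix "pro" -> "prod"
	fmt.Println(GetConfigName())
	os.Setenv("GO_ENV", "staging") // prefix "sta" -> "stage"
	fmt.Println(GetConfigName())
	// Output:
	// prod
	// stage
}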


@ -9,7 +9,6 @@
 "database/sql"
 "fmt"
 "time"
-"github.com/dosco/super-graph/config"
 "github.com/dosco/super-graph/core"
 _ "github.com/jackc/pgx/v4/stdlib"
 )
@ -17,17 +16,12 @@
 func main() {
 db, err := sql.Open("pgx", "postgres://postgrs:@localhost:5432/example_db")
 if err != nil {
-log.Fatalf(err)
+log.Fatal(err)
 }
-conf, err := config.NewConfig("./config")
-if err != nil {
-log.Fatalf(err)
-}
-sg, err = core.NewSuperGraph(conf, db)
+sg, err := core.NewSuperGraph(nil, db)
 if err != nil {
-log.Fatalf(err)
+log.Fatal(err)
 }
 query := `
@ -40,7 +34,7 @@
 res, err := sg.GraphQL(context.Background(), query, nil)
 if err != nil {
-log.Fatalf(err)
+log.Fatal(err)
 }
 fmt.Println(string(res.Data))
@ -53,10 +47,10 @@ import (
 "crypto/sha256"
 "database/sql"
 "encoding/json"
-"fmt"
-"log"
-"github.com/dosco/super-graph/config"
+_log "log"
+"os"
+"github.com/chirino/graphql"
 "github.com/dosco/super-graph/core/internal/allow"
 "github.com/dosco/super-graph/core/internal/crypto"
 "github.com/dosco/super-graph/core/internal/psql"
@ -80,36 +74,45 @@ const (
 // SuperGraph struct is an instance of the Super Graph engine it holds all the required information like
 // database schemas, relationships, etc that the GraphQL to SQL compiler would need to do its job.
 type SuperGraph struct {
-conf *config.Config
+conf *Config
 db *sql.DB
+log *_log.Logger
+dbinfo *psql.DBInfo
 schema *psql.DBSchema
 allowList *allow.List
 encKey [32]byte
 prepared map[string]*preparedItem
+roles map[string]*Role
 getRole *sql.Stmt
+rmap map[uint64]*resolvFn
+abacEnabled bool
+anonExists bool
 qc *qcode.Compiler
 pc *psql.Compiler
+ge *graphql.Engine
 }
-// NewConfig functions initializes config using a config.Core struct
-func NewConfig(core config.Core, configPath string, logger *log.Logger) (*config.Config, error) {
-c, err := config.NewConfigFrom(&config.Config{Core: core}, configPath, logger)
-if err != nil {
-return nil, err
-}
-return c, nil
-}
 // NewSuperGraph creates the SuperGraph struct, this involves querying the database to learn its
 // schemas and relationships
-func NewSuperGraph(conf *config.Config, db *sql.DB) (*SuperGraph, error) {
-if !conf.IsValid() {
-return nil, fmt.Errorf("invalid config")
+func NewSuperGraph(conf *Config, db *sql.DB) (*SuperGraph, error) {
+return newSuperGraph(conf, db, nil)
+}
+// newSuperGraph helps with writing tests and benchmarks
+func newSuperGraph(conf *Config, db *sql.DB, dbinfo *psql.DBInfo) (*SuperGraph, error) {
+if conf == nil {
+conf = &Config{}
 }
 sg := &SuperGraph{
 conf: conf,
 db: db,
+dbinfo: dbinfo,
+log: _log.New(os.Stdout, "", 0),
+}
+if err := sg.initConfig(); err != nil {
+return nil, err
 }
 if err := sg.initCompilers(); err != nil {
@ -124,6 +127,14 @@ func NewSuperGraph(conf *config.Config, db *sql.DB) (*SuperGraph, error) {
 return nil, err
 }
+if err := sg.initResolvers(); err != nil {
+return nil, err
+}
+if err := sg.initGraphQLEgine(); err != nil {
+return nil, err
+}
 if len(conf.SecretKey) != 0 {
 sk := sha256.Sum256([]byte(conf.SecretKey))
 conf.SecretKey = ""
@ -155,7 +166,24 @@ type Result struct {
 // In developer mode all named queries are saved into a file `allow.list` and in production mode only
 // queries from this file can be run.
 func (sg *SuperGraph) GraphQL(c context.Context, query string, vars json.RawMessage) (*Result, error) {
-ct := scontext{Context: c, sg: sg, query: query, vars: vars}
+var res Result
+res.op = qcode.GetQType(query)
+res.name = allow.QueryName(query)
+// use the chirino/graphql library for introspection queries
+// disabled when allow list is enforced
+if !sg.conf.UseAllowList && res.name == "IntrospectionQuery" {
+r := sg.ge.ServeGraphQL(&graphql.Request{Query: query})
+res.Data = r.Data
+if r.Error() != nil {
+res.Error = r.Error().Error()
+}
+return &res, r.Error()
+}
+ct := scontext{Context: c, sg: sg, query: query, vars: vars, res: res}
 if len(vars) <= 2 {
 ct.vars = nil
@ -167,9 +195,6 @@ func (sg *SuperGraph) GraphQL(c context.Context, query string, vars json.RawMess
 ct.role = "anon"
 }
-ct.res.op = qcode.GetQType(query)
-ct.res.name = allow.QueryName(query)
 data, err := ct.execQuery()
 if err != nil {
 return &ct.res, err
@ -179,3 +204,9 @@ func (sg *SuperGraph) GraphQL(c context.Context, query string, vars json.RawMess
 return &ct.res, nil
 }
+// GraphQLSchema function returns the GraphQL schema for the underlying database connected
+// to this instance of Super Graph
+func (sg *SuperGraph) GraphQLSchema() (string, error) {
+return sg.ge.Schema.String(), nil
+}
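
Putting the new core API from this diff together, a sketch of an end-to-end call; the connection string and the query are placeholders, not part of the changeset:
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	"github.com/dosco/super-graph/core"
	_ "github.com/jackc/pgx/v4/stdlib"
)

func main() {
	db, err := sql.Open("pgx", "postgres://postgres:@localhost:5432/example_db")
	if err != nil {
		log.Fatal(err)
	}
	// A nil config is now allowed: newSuperGraph falls back to an empty Config.
	sg, err := core.NewSuperGraph(nil, db)
	if err != nil {
		log.Fatal(err)
	}
	res, err := sg.GraphQL(context.Background(), `query { users { id } }`, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(res.Data))
	// New in this change: the generated schema as SDL.
	schema, err := sg.GraphQLSchema()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(schema)
}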

core/api_test.go Normal file

@ -0,0 +1,62 @@
package core
import (
"context"
"fmt"
"testing"
"github.com/DATA-DOG/go-sqlmock"
"github.com/dosco/super-graph/core/internal/psql"
)
func BenchmarkGraphQL(b *testing.B) {
ct := context.WithValue(context.Background(), UserIDKey, "1")
db, _, err := sqlmock.New()
if err != nil {
b.Fatal(err)
}
defer db.Close()
// mock.ExpectQuery(`^SELECT jsonb_build_object`).WithArgs()
c := &Config{DefaultBlock: true}
sg, err := newSuperGraph(c, db, psql.GetTestDBInfo())
if err != nil {
b.Fatal(err)
}
query := `
query {
products {
id
name
user {
full_name
phone
email
}
customers {
id
email
}
}
users {
id
name
}
}`
b.ResetTimer()
b.ReportAllocs()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
_, err = sg.GraphQL(ct, query, nil)
}
})
fmt.Println(err)
//fmt.Println(mock.ExpectationsWereMet())
}


@ -9,6 +9,8 @@ import (
 "github.com/dosco/super-graph/jsn"
 )
+// argMap function is used to string replace variables with values by
+// the fasttemplate code
 func (c *scontext) argMap() func(w io.Writer, tag string) (int, error) {
 return func(w io.Writer, tag string) (int, error) {
 switch tag {
@ -56,10 +58,13 @@
 return w.Write(v1)
 }
-return w.Write(escQuote(fields[0].Value))
+return w.Write(escSQuote(fields[0].Value))
 }
 }
+// argList function is used to create a list of arguments to pass
+// to a prepared statement. FYI no escaping of single quotes is
+// needed here
 func (c *scontext) argList(args [][]byte) ([]interface{}, error) {
 vars := make([]interface{}, len(args))
@ -113,7 +118,7 @@
 if v, ok := fields[string(av)]; ok {
 switch v[0] {
 case '[', '{':
-vars[i] = escQuote(v)
+vars[i] = v
 default:
 var val interface{}
 if err := json.Unmarshal(v, &val); err != nil {
@ -132,27 +137,25 @@
 return vars, nil
 }
-func escQuote(b []byte) []byte {
-f := false
-for i := range b {
-if b[i] == '\'' {
-f = true
-break
-}
-}
-if !f {
-return b
-}
-buf := &bytes.Buffer{}
+//
+func escSQuote(b []byte) []byte {
+var buf *bytes.Buffer
 s := 0
 for i := range b {
 if b[i] == '\'' {
+if buf == nil {
+buf = &bytes.Buffer{}
+}
 buf.Write(b[s:i])
 buf.WriteString(`''`)
 s = i + 1
 }
 }
+if buf == nil {
+return b
+}
 l := len(b)
 if s < (l - 1) {
 buf.Write(b[s:l])

core/args_test.go Normal file

@ -0,0 +1,13 @@
package core
import "testing"
func TestEscQuote(t *testing.T) {
val := "That's the worst, don''t be calling me's again"
exp := "That''s the worst, don''''t be calling me''s again"
ret := escSQuote([]byte(val))
if exp != string(ret) {
t.Errorf("escSQuote failed: %s", string(ret))
}
}


@ -7,13 +7,12 @@ import (
 "fmt"
 "io"
-"github.com/dosco/super-graph/config"
 "github.com/dosco/super-graph/core/internal/psql"
 "github.com/dosco/super-graph/core/internal/qcode"
 )
 type stmt struct {
-role *config.Role
+role *Role
 qc *qcode.QCode
 skipped uint32
 sql string
@ -29,7 +28,7 @@
 return sg.buildRoleStmt(query, vars, "anon")
 }
-if sg.conf.IsABACEnabled() {
+if sg.abacEnabled {
 return sg.buildMultiStmt(query, vars)
 }
@ -41,8 +40,8 @@
 }
 func (sg *SuperGraph) buildRoleStmt(query, vars []byte, role string) ([]stmt, error) {
-ro := sg.conf.GetRole(role)
-if ro == nil {
+ro, ok := sg.roles[role]
+if !ok {
 return nil, fmt.Errorf(`roles '%s' not defined in c.sg.config`, role)
 }
@ -168,16 +167,16 @@ func (sg *SuperGraph) renderUserQuery(stmts []stmt) (string, error) {
 return w.String(), nil
 }
-func (sg *SuperGraph) hasTablesWithConfig(qc *qcode.QCode, role *config.Role) bool {
-for _, id := range qc.Roots {
-t, err := sg.schema.GetTable(qc.Selects[id].Name)
-if err != nil {
-return false
-}
-if r := role.GetTable(t.Name); r == nil {
-return false
-}
-}
-return true
-}
+// func (sg *SuperGraph) hasTablesWithConfig(qc *qcode.QCode, role *Role) bool {
+// for _, id := range qc.Roots {
+// t, err := sg.schema.GetTable(qc.Selects[id].Name)
+// if err != nil {
+// return false
+// }
+// if r := role.GetTable(t.Name); r == nil {
+// return false
+// }
+// }
+// return true
+// }


@ -2,164 +2,207 @@ package core
 import (
 "fmt"
+"path"
+"path/filepath"
 "strings"
-"github.com/dosco/super-graph/config"
-"github.com/dosco/super-graph/core/internal/psql"
-"github.com/dosco/super-graph/core/internal/qcode"
+"github.com/spf13/viper"
 )
-func addTables(c *config.Config, di *psql.DBInfo) error {
-for _, t := range c.Tables {
-if len(t.Table) == 0 || len(t.Columns) == 0 {
-continue
-}
-if err := addTable(di, t.Columns, t); err != nil {
-return err
-}
-}
-return nil
-}
-func addTable(di *psql.DBInfo, cols []config.Column, t config.Table) error {
-bc, ok := di.GetColumn(t.Table, t.Name)
-if !ok {
-return fmt.Errorf(
-"Column '%s' not found on table '%s'",
-t.Name, t.Table)
-}
-if bc.Type != "json" && bc.Type != "jsonb" {
-return fmt.Errorf(
-"Column '%s' in table '%s' is of type '%s'. Only JSON or JSONB is valid",
-t.Name, t.Table, bc.Type)
-}
-table := psql.DBTable{
-Name: t.Name,
-Key: strings.ToLower(t.Name),
-Type: bc.Type,
-}
-columns := make([]psql.DBColumn, 0, len(cols))
-for i := range cols {
-c := cols[i]
-columns = append(columns, psql.DBColumn{
-Name: c.Name,
-Key: strings.ToLower(c.Name),
-Type: c.Type,
-})
-}
-di.AddTable(table, columns)
-bc.FKeyTable = t.Name
-return nil
-}
-func addForeignKeys(c *config.Config, di *psql.DBInfo) error {
-for _, t := range c.Tables {
-for _, c := range t.Columns {
-if len(c.ForeignKey) == 0 {
-continue
-}
-if err := addForeignKey(di, c, t); err != nil {
-return err
-}
-}
-}
-return nil
-}
-func addForeignKey(di *psql.DBInfo, c config.Column, t config.Table) error {
-c1, ok := di.GetColumn(t.Name, c.Name)
-if !ok {
-return fmt.Errorf(
-"Invalid table '%s' or column '%s' in config.Config",
-t.Name, c.Name)
-}
-v := strings.SplitN(c.ForeignKey, ".", 2)
-if len(v) != 2 {
-return fmt.Errorf(
-"Invalid foreign_key in config.Config for table '%s' and column '%s",
-t.Name, c.Name)
-}
-fkt, fkc := v[0], v[1]
-c2, ok := di.GetColumn(fkt, fkc)
-if !ok {
-return fmt.Errorf(
-"Invalid foreign_key in config.Config for table '%s' and column '%s",
-t.Name, c.Name)
-}
-c1.FKeyTable = fkt
-c1.FKeyColID = []int16{c2.ID}
-return nil
-}
-func addRoles(c *config.Config, qc *qcode.Compiler) error {
-for _, r := range c.Roles {
-for _, t := range r.Tables {
-if err := addRole(qc, r, t); err != nil {
-return err
-}
-}
-}
-return nil
-}
-func addRole(qc *qcode.Compiler, r config.Role, t config.RoleTable) error {
-blockFilter := []string{"false"}
-query := qcode.QueryConfig{
-Limit: t.Query.Limit,
-Filters: t.Query.Filters,
-Columns: t.Query.Columns,
-DisableFunctions: t.Query.DisableFunctions,
-}
-if t.Query.Block {
-query.Filters = blockFilter
-}
-insert := qcode.InsertConfig{
-Filters: t.Insert.Filters,
-Columns: t.Insert.Columns,
-Presets: t.Insert.Presets,
-}
-if t.Insert.Block {
-insert.Filters = blockFilter
-}
-update := qcode.UpdateConfig{
-Filters: t.Update.Filters,
-Columns: t.Update.Columns,
-Presets: t.Update.Presets,
-}
-if t.Update.Block {
-update.Filters = blockFilter
-}
-delete := qcode.DeleteConfig{
-Filters: t.Delete.Filters,
-Columns: t.Delete.Columns,
-}
-if t.Delete.Block {
-delete.Filters = blockFilter
-}
-return qc.AddRole(r.Name, t.Name, qcode.TRConfig{
-Query: query,
-Insert: insert,
-Update: update,
-Delete: delete,
-})
-}
+// Config struct contains core specific config values
+type Config struct {
+// SecretKey is used to encrypt opaque values such as
+// the cursor. Auto-generated if not set
+SecretKey string `mapstructure:"secret_key"`
+// UseAllowList (aka production mode) when set to true ensures
+// only queries listed in the allow.list file can be used. All
+// queries are pre-prepared so no compiling happens and things are
+// very fast.
+UseAllowList bool `mapstructure:"use_allow_list"`
+// AllowListFile is the path to the allow list file. If not set the
+// path is assumed to be the same as the config path (allow.list)
+AllowListFile string `mapstructure:"allow_list_file"`
+// SetUserID forces the database session variable `user.id` to
+// be set to the user id. This variable can be used by triggers
+// or other database functions
+SetUserID bool `mapstructure:"set_user_id"`
+// DefaultBlock ensures only tables configured under the `anon` role
+// config can be queried by the `anon` role. For example if the table
+// `users` is not listed under the anon role then it will be filtered
+// out of any unauthenticated queries that mention it.
+DefaultBlock bool `mapstructure:"default_block"`
+// Vars is a map of hardcoded variables that can be leveraged in your
+// queries (eg variable admin_id will be $admin_id in the query)
+Vars map[string]string `mapstructure:"variables"`
+// Blocklist is a list of tables and columns that should be filtered
+// out from any and all queries
+Blocklist []string
+// Tables contains all table specific configuration such as aliased tables,
+// creating relationships between tables, etc
+Tables []Table
+// RolesQuery, if set, enables attribute based access control. This query
+// is used to fetch the user attributes that then dynamically define the
+// user's role.
+RolesQuery string `mapstructure:"roles_query"`
+// Roles contains all the configuration for all the roles you want to support
+// `user` and `anon` are two default roles. User role is for when a user ID is
+// available and Anon when it's not.
+Roles []Role
+// Inflections is to add additional singular to plural mappings
+// to the engine (eg. sheep: sheep)
+Inflections map[string]string `mapstructure:"inflections"`
+}
+// Table struct defines a database table
+type Table struct {
+Name string
+Table string
+Blocklist []string
+Remotes []Remote
+Columns []Column
+}
+// Column struct defines a database column
+type Column struct {
+Name string
+Type string
+ForeignKey string `mapstructure:"related_to"`
+}
+// Remote struct defines a remote API endpoint
+type Remote struct {
+Name string
+ID string
+Path string
+URL string
+Debug bool
+PassHeaders []string `mapstructure:"pass_headers"`
+SetHeaders []struct {
+Name string
+Value string
+} `mapstructure:"set_headers"`
+}
+// Role struct contains role specific access control values for all database tables
+type Role struct {
+Name string
+Match string
+Tables []RoleTable
+tm map[string]*RoleTable
+}
+// RoleTable struct contains role specific access control values for a database table
+type RoleTable struct {
+Name string
+Query Query
+Insert Insert
+Update Update
+Delete Delete
+}
+// Query struct contains access control values for query operations
+type Query struct {
+Limit int
+Filters []string
+Columns []string
+DisableFunctions bool `mapstructure:"disable_functions"`
+Block bool
+}
+// Insert struct contains access control values for insert operations
+type Insert struct {
+Filters []string
+Columns []string
+Presets map[string]string
+Block bool
+}
+// Update struct contains access control values for update operations
+type Update struct {
+Filters []string
+Columns []string
+Presets map[string]string
+Block bool
+}
+// Delete struct contains access control values for delete operations
+type Delete struct {
+Filters []string
+Columns []string
+Block bool
+}
+// ReadInConfig function reads in the config file for the environment specified in the GO_ENV
+// environment variable. This is the best way to create a new Super Graph config.
+func ReadInConfig(configFile string) (*Config, error) {
+cpath := path.Dir(configFile)
+cfile := path.Base(configFile)
+vi := newViper(cpath, cfile)
+if err := vi.ReadInConfig(); err != nil {
+return nil, err
+}
+inherits := vi.GetString("inherits")
+if len(inherits) != 0 {
+vi = newViper(cpath, inherits)
+if err := vi.ReadInConfig(); err != nil {
+return nil, err
+}
+if vi.IsSet("inherits") {
+return nil, fmt.Errorf("inherited config (%s) cannot itself inherit (%s)",
+inherits,
+vi.GetString("inherits"))
+}
+vi.SetConfigName(cfile)
+if err := vi.MergeInConfig(); err != nil {
+return nil, err
+}
+}
+c := &Config{}
+if err := vi.Unmarshal(&c); err != nil {
+return nil, fmt.Errorf("failed to decode config, %v", err)
+}
+if len(c.AllowListFile) == 0 {
+c.AllowListFile = path.Join(cpath, "allow.list")
+}
+return c, nil
+}
+func newViper(configPath, configFile string) *viper.Viper {
+vi := viper.New()
+vi.SetEnvPrefix("SG")
+vi.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
+vi.AutomaticEnv()
+if len(filepath.Ext(configFile)) != 0 {
+vi.SetConfigFile(configFile)
+} else {
+vi.SetConfigName(configFile)
+vi.AddConfigPath(configPath)
+vi.AddConfigPath("./config")
+}
+return vi
+}


@ -14,6 +14,11 @@ import (
 "github.com/valyala/fasttemplate"
 )
+const (
+OpQuery int = iota
+OpMutation
+)
 type extensions struct {
 Tracing *trace `json:"tracing,omitempty"`
 }
@ -50,25 +55,32 @@ type scontext struct {
 }
 func (sg *SuperGraph) initCompilers() error {
-di, err := psql.GetDBInfo(sg.db)
+var err error
+// If sg.dbinfo is not nil then it's probably set
+// for tests
+if sg.dbinfo == nil {
+sg.dbinfo, err = psql.GetDBInfo(sg.db)
 if err != nil {
 return err
 }
+}
-if err = addTables(sg.conf, di); err != nil {
+if err = addTables(sg.conf, sg.dbinfo); err != nil {
 return err
 }
-if err = addForeignKeys(sg.conf, di); err != nil {
+if err = addForeignKeys(sg.conf, sg.dbinfo); err != nil {
 return err
 }
-sg.schema, err = psql.NewDBSchema(di, sg.conf.GetDBTableAliases())
+sg.schema, err = psql.NewDBSchema(sg.dbinfo, getDBTableAliases(sg.conf))
 if err != nil {
 return err
 }
 sg.qc, err = qcode.NewCompiler(qcode.Config{
+DefaultBlock: sg.conf.DefaultBlock,
 Blocklist: sg.conf.Blocklist,
 })
 if err != nil {
@ -89,25 +101,25 @@ func (sg *SuperGraph) initCompilers() error {
 func (c *scontext) execQuery() ([]byte, error) {
 var data []byte
-// var st *stmt
+var st *stmt
 var err error
-if c.sg.conf.Production {
-data, _, err = c.resolvePreparedSQL()
-if err != nil {
-return nil, err
-}
+if c.sg.conf.UseAllowList {
+data, st, err = c.resolvePreparedSQL()
 } else {
-data, _, err = c.resolveSQL()
+data, st, err = c.resolveSQL()
+}
 if err != nil {
 return nil, err
 }
-}
-return data, nil
-//return execRemoteJoin(st, data, c.req.hdr)
+if len(data) == 0 || st.skipped == 0 {
+return data, nil
+}
+// return c.sg.execRemoteJoin(st, data, c.req.hdr)
+return c.sg.execRemoteJoin(st, data, nil)
 }
 func (c *scontext) resolvePreparedSQL() ([]byte, *stmt, error) {
@ -115,7 +127,7 @@
 var err error
 mutation := (c.res.op == qcode.QTMutation)
-useRoleQuery := c.sg.conf.IsABACEnabled() && mutation
+useRoleQuery := c.sg.abacEnabled && mutation
 useTx := useRoleQuery || c.sg.conf.SetUserID
 if useTx {
@ -148,7 +160,7 @@
 c.res.role = role
-ps, ok := prepared[stmtHash(c.res.name, role)]
+ps, ok := c.sg.prepared[stmtHash(c.res.name, role)]
 if !ok {
 return nil, nil, errNotFound
 }
@ -198,7 +210,7 @@
 var err error
 mutation := (c.res.op == qcode.QTMutation)
-useRoleQuery := c.sg.conf.IsABACEnabled() && mutation
+useRoleQuery := c.sg.abacEnabled && mutation
 useTx := useRoleQuery || c.sg.conf.SetUserID
 if useTx {
@ -322,7 +334,20 @@ func (c *scontext) executeRoleQuery(tx *sql.Tx) (string, error) {
 return role, nil
 }
-func (r *Result) Operation() string {
+func (r *Result) Operation() int {
+switch r.op {
+case qcode.QTQuery:
+return OpQuery
+case qcode.QTMutation, qcode.QTInsert, qcode.QTUpdate, qcode.QTUpsert, qcode.QTDelete:
+return OpMutation
+default:
+return -1
+}
+}
+func (r *Result) OperationName() string {
 return r.op.String()
 }
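The exported OpQuery/OpMutation constants let callers branch on the operation type without importing the internal qcode package. A fragment-style sketch, assuming sg, ctx, query and vars are in scope and that the two counters are hypothetical metrics, not part of this changeset:
res, err := sg.GraphQL(ctx, query, vars)
if err != nil {
	log.Fatal(err)
}
switch res.Operation() {
case core.OpQuery:
	queryCounter.Inc() // hypothetical metric
case core.OpMutation:
	mutationCounter.Inc() // hypothetical metric
}
fmt.Println(res.OperationName()) // e.g. "query" or "mutation"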


@ -5,8 +5,8 @@ import (
 "encoding/base64"
 "github.com/dosco/super-graph/core/internal/crypto"
-"github.com/dosco/super-graph/jsn"
 "github.com/dosco/super-graph/core/internal/qcode"
+"github.com/dosco/super-graph/jsn"
 )
 func (sg *SuperGraph) encryptCursor(qc *qcode.QCode, data []byte) ([]byte, error) {

core/init.go Normal file

@ -0,0 +1,295 @@
package core
import (
"fmt"
"regexp"
"strings"
"unicode"
"github.com/dosco/super-graph/core/internal/psql"
"github.com/dosco/super-graph/core/internal/qcode"
"github.com/gobuffalo/flect"
)
func (sg *SuperGraph) initConfig() error {
c := sg.conf
for k, v := range c.Inflections {
flect.AddPlural(k, v)
}
// Variables: Validate and sanitize
for k, v := range c.Vars {
c.Vars[k] = sanitizeVars(v)
}
// Tables: Validate and sanitize
tm := make(map[string]struct{})
for i := 0; i < len(c.Tables); i++ {
t := &c.Tables[i]
t.Name = flect.Pluralize(strings.ToLower(t.Name))
if _, ok := tm[t.Name]; ok {
sg.conf.Tables = append(c.Tables[:i], c.Tables[i+1:]...)
sg.log.Printf("WRN duplicate table found: %s", t.Name)
}
tm[t.Name] = struct{}{}
t.Table = flect.Pluralize(strings.ToLower(t.Table))
}
sg.roles = make(map[string]*Role)
for i := 0; i < len(c.Roles); i++ {
role := &c.Roles[i]
role.Name = sanitize(role.Name)
if _, ok := sg.roles[role.Name]; ok {
c.Roles = append(c.Roles[:i], c.Roles[i+1:]...)
sg.log.Printf("WRN duplicate role found: %s", role.Name)
}
role.Match = sanitize(role.Match)
role.tm = make(map[string]*RoleTable)
for n, table := range role.Tables {
role.tm[table.Name] = &role.Tables[n]
}
sg.roles[role.Name] = role
}
// If user role not defined then create it
if _, ok := sg.roles["user"]; !ok {
ur := Role{
Name: "user",
tm: make(map[string]*RoleTable),
}
c.Roles = append(c.Roles, ur)
sg.roles["user"] = &ur
}
// If the anon role is not defined and DefaultBlock is not set then create it
if _, ok := sg.roles["anon"]; !ok && !c.DefaultBlock {
ur := Role{
Name: "anon",
tm: make(map[string]*RoleTable),
}
c.Roles = append(c.Roles, ur)
sg.roles["anon"] = &ur
}
// Roles: validate and sanitize
c.RolesQuery = sanitizeVars(c.RolesQuery)
if len(c.RolesQuery) == 0 {
sg.log.Printf("WRN roles_query not defined: attribute based access control disabled")
}
_, userExists := sg.roles["user"]
_, sg.anonExists = sg.roles["anon"]
sg.abacEnabled = userExists && len(c.RolesQuery) != 0
return nil
}
func getDBTableAliases(c *Config) map[string][]string {
m := make(map[string][]string, len(c.Tables))
for i := range c.Tables {
t := c.Tables[i]
if len(t.Table) == 0 || len(t.Columns) != 0 {
continue
}
m[t.Table] = append(m[t.Table], t.Name)
}
return m
}
func addTables(c *Config, di *psql.DBInfo) error {
for _, t := range c.Tables {
if len(t.Table) == 0 || len(t.Columns) == 0 {
continue
}
if err := addTable(di, t.Columns, t); err != nil {
return err
}
}
return nil
}
func addTable(di *psql.DBInfo, cols []Column, t Table) error {
bc, ok := di.GetColumn(t.Table, t.Name)
if !ok {
return fmt.Errorf(
"Column '%s' not found on table '%s'",
t.Name, t.Table)
}
if bc.Type != "json" && bc.Type != "jsonb" {
return fmt.Errorf(
"Column '%s' in table '%s' is of type '%s'. Only JSON or JSONB is valid",
t.Name, t.Table, bc.Type)
}
table := psql.DBTable{
Name: t.Name,
Key: strings.ToLower(t.Name),
Type: bc.Type,
}
columns := make([]psql.DBColumn, 0, len(cols))
for i := range cols {
c := cols[i]
columns = append(columns, psql.DBColumn{
Name: c.Name,
Key: strings.ToLower(c.Name),
Type: c.Type,
})
}
di.AddTable(table, columns)
bc.FKeyTable = t.Name
return nil
}
func addForeignKeys(c *Config, di *psql.DBInfo) error {
for _, t := range c.Tables {
for _, c := range t.Columns {
if len(c.ForeignKey) == 0 {
continue
}
if err := addForeignKey(di, c, t); err != nil {
return err
}
}
}
return nil
}
func addForeignKey(di *psql.DBInfo, c Column, t Table) error {
c1, ok := di.GetColumn(t.Name, c.Name)
if !ok {
return fmt.Errorf(
"Invalid table '%s' or column '%s' in Config",
t.Name, c.Name)
}
v := strings.SplitN(c.ForeignKey, ".", 2)
if len(v) != 2 {
return fmt.Errorf(
"Invalid foreign_key in Config for table '%s' and column '%s",
t.Name, c.Name)
}
fkt, fkc := v[0], v[1]
c2, ok := di.GetColumn(fkt, fkc)
if !ok {
return fmt.Errorf(
"Invalid foreign_key in Config for table '%s' and column '%s",
t.Name, c.Name)
}
c1.FKeyTable = fkt
c1.FKeyColID = []int16{c2.ID}
return nil
}
func addRoles(c *Config, qc *qcode.Compiler) error {
for _, r := range c.Roles {
for _, t := range r.Tables {
if err := addRole(qc, r, t); err != nil {
return err
}
}
}
return nil
}
func addRole(qc *qcode.Compiler, r Role, t RoleTable) error {
blockFilter := []string{"false"}
query := qcode.QueryConfig{
Limit: t.Query.Limit,
Filters: t.Query.Filters,
Columns: t.Query.Columns,
DisableFunctions: t.Query.DisableFunctions,
}
if t.Query.Block {
query.Filters = blockFilter
}
insert := qcode.InsertConfig{
Filters: t.Insert.Filters,
Columns: t.Insert.Columns,
Presets: t.Insert.Presets,
}
if t.Insert.Block {
insert.Filters = blockFilter
}
update := qcode.UpdateConfig{
Filters: t.Update.Filters,
Columns: t.Update.Columns,
Presets: t.Update.Presets,
}
if t.Update.Block {
update.Filters = blockFilter
}
delete := qcode.DeleteConfig{
Filters: t.Delete.Filters,
Columns: t.Delete.Columns,
}
if t.Delete.Block {
delete.Filters = blockFilter
}
return qc.AddRole(r.Name, t.Name, qcode.TRConfig{
Query: query,
Insert: insert,
Update: update,
Delete: delete,
})
}
func (r *Role) GetTable(name string) *RoleTable {
return r.tm[name]
}
func sanitize(value string) string {
return strings.ToLower(strings.TrimSpace(value))
}
var (
varRe1 = regexp.MustCompile(`(?mi)\$([a-zA-Z0-9_.]+)`)
varRe2 = regexp.MustCompile(`\{\{([a-zA-Z0-9_.]+)\}\}`)
)
func sanitizeVars(s string) string {
s0 := varRe1.ReplaceAllString(s, `{{$1}}`)
s1 := strings.Map(func(r rune) rune {
if unicode.IsSpace(r) {
return ' '
}
return r
}, s0)
return varRe2.ReplaceAllStringFunc(s1, func(m string) string {
return strings.ToLower(m)
})
}
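A quick illustration of what sanitizeVars does to a roles_query string; this is a sketch that assumes it runs inside the core package, since the function is unexported:
// $-style variables become lower-cased {{...}} templates and any
// whitespace (newlines, tabs) collapses to single spaces.
fmt.Println(sanitizeVars("SELECT * FROM users\nWHERE id = $User_ID"))
// prints: SELECT * FROM users WHERE id = {{user_id}}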


@ -7,9 +7,10 @@ import (
 "fmt"
 "io/ioutil"
 "os"
-"path"
 "sort"
 "strings"
+"github.com/dosco/super-graph/jsn"
 )
 const (
@ -35,11 +36,11 @@
 Persist bool
 }
-func New(cpath string, conf Config) (*List, error) {
+func New(filename string, conf Config) (*List, error) {
 al := List{}
-if len(cpath) != 0 {
-fp := path.Join(cpath, "allow.list")
+if len(filename) != 0 {
+fp := filename
 if _, err := os.Stat(fp); err == nil {
 al.filepath = fp
@ -73,10 +74,10 @@
 return nil, errors.New("allow.list not found")
 }
-if len(cpath) == 0 {
+if len(filename) == 0 {
 al.filepath = "./config/allow.list"
 } else {
-al.filepath = path.Join(cpath, "allow.list")
+al.filepath = filename
 }
 }
@ -231,6 +232,8 @@ func (al *List) Load() ([]Item, error) {
 }
 func (al *List) save(item Item) error {
+var buf bytes.Buffer
 item.Name = QueryName(item.Query)
 item.key = strings.ToLower(item.Name)
@ -299,9 +302,16 @@ func (al *List) save(item Item) error {
 }
 if len(v.Vars) != 0 && !bytes.Equal(v.Vars, []byte("{}")) {
-vj, err := json.MarshalIndent(v.Vars, "", " ")
+buf.Reset()
+if err := jsn.Clear(&buf, v.Vars); err != nil {
+return fmt.Errorf("failed to clean vars: %w", err)
+}
+vj := json.RawMessage(buf.Bytes())
+vj, err = json.MarshalIndent(vj, "", " ")
 if err != nil {
-return fmt.Errorf("failed to marshal vars: %v", err)
+return fmt.Errorf("failed to marshal vars: %w", err)
 }
 _, err = f.WriteString(fmt.Sprintf("variables %s\n\n", vj))


@ -0,0 +1,88 @@
package cockraochdb_test
import (
"database/sql"
"fmt"
"io/ioutil"
"log"
"os"
"os/exec"
"regexp"
"sync/atomic"
"testing"
integration_tests "github.com/dosco/super-graph/core/internal/integration_tests"
_ "github.com/jackc/pgx/v4/stdlib"
"github.com/stretchr/testify/require"
)
func TestCockroachDB(t *testing.T) {
dir, err := ioutil.TempDir("", "temp-cockraochdb-")
if err != nil {
log.Fatal(err)
}
cmd := exec.Command("cockroach", "start", "--insecure", "--listen-addr", ":0", "--http-addr", ":0", "--store=path="+dir)
finder := &urlFinder{
c: make(chan bool),
}
cmd.Stdout = finder
cmd.Stderr = ioutil.Discard
err = cmd.Start()
if err != nil {
t.Skip("is CockroachDB installed?: " + err.Error())
}
fmt.Println("started temporary cockroach db")
stopped := int32(0)
stopDatabase := func() {
fmt.Println("stopping temporary cockroach db")
if atomic.CompareAndSwapInt32(&stopped, 0, 1) {
if err := cmd.Process.Kill(); err != nil {
log.Fatal(err)
}
if _, err := cmd.Process.Wait(); err != nil {
log.Fatal(err)
}
os.RemoveAll(dir)
}
}
defer stopDatabase()
// Wait till we figure out the URL we should connect to...
<-finder.c
db, err := sql.Open("pgx", finder.URL)
if err != nil {
stopDatabase()
require.NoError(t, err)
}
integration_tests.SetupSchema(t, db)
integration_tests.TestSuperGraph(t, db, func(t *testing.T) {
if t.Name() == "TestCockroachDB/nested_insert" {
t.Skip("nested inserts currently not working yet on cockroach db")
}
})
}
type urlFinder struct {
c chan bool
done bool
URL string
}
func (finder *urlFinder) Write(p []byte) (n int, err error) {
s := string(p)
urlRegex := regexp.MustCompile(`\nsql:\s+(postgresql:[^\s]+)\n`)
if !finder.done {
submatch := urlRegex.FindAllStringSubmatch(s, -1)
if submatch != nil {
finder.URL = submatch[0][1]
finder.done = true
close(finder.c)
}
}
return len(p), nil
}


@ -0,0 +1,260 @@
package integration_tests
import (
"context"
"database/sql"
"encoding/json"
"io/ioutil"
"testing"
"github.com/dosco/super-graph/core"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func SetupSchema(t *testing.T, db *sql.DB) {
_, err := db.Exec(`
CREATE TABLE users (
id integer PRIMARY KEY,
full_name text
)`)
require.NoError(t, err)
_, err = db.Exec(`CREATE TABLE product (
id integer PRIMARY KEY,
name text,
weight float
)`)
require.NoError(t, err)
_, err = db.Exec(`CREATE TABLE line_item (
id integer PRIMARY KEY,
product integer REFERENCES product(id),
quantity integer,
price float
)`)
require.NoError(t, err)
}
func DropSchema(t *testing.T, db *sql.DB) {
_, err := db.Exec(`DROP TABLE IF EXISTS line_item`)
require.NoError(t, err)
_, err = db.Exec(`DROP TABLE IF EXISTS product`)
require.NoError(t, err)
_, err = db.Exec(`DROP TABLE IF EXISTS users`)
require.NoError(t, err)
}
func TestSuperGraph(t *testing.T, db *sql.DB, before func(t *testing.T)) {
config := core.Config{DefaultBlock: true}
config.UseAllowList = false
config.AllowListFile = "./allow.list"
config.RolesQuery = `SELECT * FROM users WHERE id = $user_id`
config.Roles = []core.Role{
core.Role{
Name: "anon",
Tables: []core.RoleTable{
core.RoleTable{Name: "users", Query: core.Query{Limit: 100}},
core.RoleTable{Name: "product", Query: core.Query{Limit: 100}},
core.RoleTable{Name: "line_item", Query: core.Query{Limit: 100}},
},
},
}
sg, err := core.NewSuperGraph(&config, db)
require.NoError(t, err)
ctx := context.Background()
t.Run("seed fixtures", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`mutation { products (insert: $products) { id } }`,
json.RawMessage(`{"products":[
{"id":1, "name":"Charmin Ultra Soft", "weight": 0.5},
{"id":2, "name":"Hand Sanitizer", "weight": 0.2},
{"id":3, "name":"Case of Corona", "weight": 1.2}
]}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"products": [{"id": 1}, {"id": 2}, {"id": 3}]}`, string(res.Data))
res, err = sg.GraphQL(ctx,
`mutation { line_items (insert: $line_items) { id } }`,
json.RawMessage(`{"line_items":[
{"id":5001, "product":1, "price":6.95, "quantity":10},
{"id":5002, "product":2, "price":10.99, "quantity":2}
]}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_items": [{"id": 5001}, {"id": 5002}]}`, string(res.Data))
})
t.Run("get line item", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`query { line_item(id:$id) { id, price, quantity } }`,
json.RawMessage(`{"id":5001}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_item": {"id": 5001, "price": 6.95, "quantity": 10}}`, string(res.Data))
})
t.Run("get line items", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`query { line_items { id, price, quantity } }`,
json.RawMessage(`{}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_items": [{"id": 5001, "price": 6.95, "quantity": 10}, {"id": 5002, "price": 10.99, "quantity": 2}]}`, string(res.Data))
})
t.Run("update line item", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`mutation { line_item(update:$update, id:$id) { id } }`,
json.RawMessage(`{"id":5001, "update":{"quantity":20}}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_item": {"id": 5001}}`, string(res.Data))
res, err = sg.GraphQL(ctx,
`query { line_item(id:$id) { id, price, quantity } }`,
json.RawMessage(`{"id":5001}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_item": {"id": 5001, "price": 6.95, "quantity": 20}}`, string(res.Data))
})
t.Run("delete line item", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`mutation { line_item(delete:true, id:$id) { id } }`,
json.RawMessage(`{"id":5002}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_item": {"id": 5002}}`, string(res.Data))
res, err = sg.GraphQL(ctx,
`query { line_items { id, price, quantity } }`,
json.RawMessage(`{}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_items": [{"id": 5001, "price": 6.95, "quantity": 20}]}`, string(res.Data))
})
t.Run("nested insert", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`mutation { line_items (insert: $line_item) { id, product { name } } }`,
json.RawMessage(`{"line_item":
{"id":5003, "product": { "connect": { "id": 1} }, "price":10.95, "quantity":15}
}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_items": [{"id": 5003, "product": {"name": "Charmin Ultra Soft"}}]}`, string(res.Data))
})
t.Run("schema introspection", func(t *testing.T) {
before(t)
schema, err := sg.GraphQLSchema()
require.NoError(t, err)
// Uncomment the following line if you need to regenerate the expected schema.
//ioutil.WriteFile("../introspection.graphql", []byte(schema), 0644)
expected, err := ioutil.ReadFile("../introspection.graphql")
require.NoError(t, err)
assert.Equal(t, string(expected), schema)
})
res, err := sg.GraphQL(ctx, introspectionQuery, json.RawMessage(``))
assert.NoError(t, err)
assert.Contains(t, string(res.Data),
`{"queryType":{"name":"Query"},"mutationType":{"name":"Mutation"},"subscriptionType":null,"types":`)
}
const introspectionQuery = `
query IntrospectionQuery {
__schema {
queryType { name }
mutationType { name }
subscriptionType { name }
types {
...FullType
}
directives {
name
description
locations
args {
...InputValue
}
}
}
}
fragment FullType on __Type {
kind
name
description
fields(includeDeprecated: true) {
name
description
args {
...InputValue
}
type {
...TypeRef
}
isDeprecated
deprecationReason
}
inputFields {
...InputValue
}
interfaces {
...TypeRef
}
enumValues(includeDeprecated: true) {
name
description
isDeprecated
deprecationReason
}
possibleTypes {
...TypeRef
}
}
fragment InputValue on __InputValue {
name
description
type { ...TypeRef }
defaultValue
}
fragment TypeRef on __Type {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
}
}
}
}
}
}
}
}
`


@ -0,0 +1,319 @@
input FloatExpression {
contained_in:String!
contains:[Float!]!
eq:Float!
equals:Float!
greater_or_equals:Float!
greater_than:Float!
gt:Float!
gte:Float!
has_key:Float!
has_key_all:[Float!]!
has_key_any:[Float!]!
ilike:String!
in:[Float!]!
is_null:Boolean!
lesser_or_equals:Float!
lesser_than:Float!
like:String!
lt:Float!
lte:Float!
neq:Float!
nilike:String!
nin:[Float!]!
nlike:String!
not_equals:Float!
not_ilike:String!
not_in:[Float!]!
not_like:String!
not_similar:String!
nsimilar:String!
similar:String!
}
input IntExpression {
contained_in:String!
contains:[Int!]!
eq:Int!
equals:Int!
greater_or_equals:Int!
greater_than:Int!
gt:Int!
gte:Int!
has_key:Int!
has_key_all:[Int!]!
has_key_any:[Int!]!
ilike:String!
in:[Int!]!
is_null:Boolean!
lesser_or_equals:Int!
lesser_than:Int!
like:String!
lt:Int!
lte:Int!
neq:Int!
nilike:String!
nin:[Int!]!
nlike:String!
not_equals:Int!
not_ilike:String!
not_in:[Int!]!
not_like:String!
not_similar:String!
nsimilar:String!
similar:String!
}
type Mutation {
line_item(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:line_itemOrderBy!, where:line_itemExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:line_itemInput, update:line_itemInput, upsert:line_itemInput
):line_itemOutput
line_items(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:line_itemOrderBy!, where:line_itemExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:line_itemInput, update:line_itemInput, upsert:line_itemInput, inserts:[line_itemInput!]!, updates:[line_itemInput!]!, upserts:[line_itemInput!]!
):line_itemOutput
product(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:productOrderBy!, where:productExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:productInput, update:productInput, upsert:productInput
):productOutput
products(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:productOrderBy!, where:productExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:productInput, update:productInput, upsert:productInput, inserts:[productInput!]!, updates:[productInput!]!, upserts:[productInput!]!
):productOutput
user(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:userOrderBy!, where:userExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:userInput, update:userInput, upsert:userInput
):userOutput
users(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:userOrderBy!, where:userExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:userInput, update:userInput, upsert:userInput, inserts:[userInput!]!, updates:[userInput!]!, upserts:[userInput!]!
):userOutput
}
enum OrderDirection {
asc
desc
}
type Query {
line_item(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:line_itemOrderBy!, where:line_itemExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):line_itemOutput
line_items(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:line_itemOrderBy!, where:line_itemExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):[line_itemOutput!]!
product(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:productOrderBy!, where:productExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):productOutput
products(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:productOrderBy!, where:productExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):[productOutput!]!
user(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:userOrderBy!, where:userExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):userOutput
users(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:userOrderBy!, where:userExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):[userOutput!]!
}
input StringExpression {
contained_in:String!
contains:[String!]!
eq:String!
equals:String!
greater_or_equals:String!
greater_than:String!
gt:String!
gte:String!
has_key:String!
has_key_all:[String!]!
has_key_any:[String!]!
ilike:String!
in:[String!]!
is_null:Boolean!
lesser_or_equals:String!
lesser_than:String!
like:String!
lt:String!
lte:String!
neq:String!
nilike:String!
nin:[String!]!
nlike:String!
not_equals:String!
not_ilike:String!
not_in:[String!]!
not_like:String!
not_similar:String!
nsimilar:String!
similar:String!
}
input line_itemExpression {
and:line_itemExpression!
id:IntExpression!
not:line_itemExpression!
or:line_itemExpression!
price:FloatExpression!
product:IntExpression!
quantity:IntExpression!
}
input line_itemInput {
id:Int!
price:Float
product:Int
quantity:Int
}
input line_itemOrderBy {
id:OrderDirection!
price:OrderDirection!
product:OrderDirection!
quantity:OrderDirection!
}
type line_itemOutput {
avg_id:Int!
avg_price:Float
avg_product:Int
avg_quantity:Int
count_id:Int!
count_price:Float
count_product:Int
count_quantity:Int
id:Int!
max_id:Int!
max_price:Float
max_product:Int
max_quantity:Int
min_id:Int!
min_price:Float
min_product:Int
min_quantity:Int
price:Float
product:Int
quantity:Int
stddev_id:Int!
stddev_pop_id:Int!
stddev_pop_price:Float
stddev_pop_product:Int
stddev_pop_quantity:Int
stddev_price:Float
stddev_product:Int
stddev_quantity:Int
stddev_samp_id:Int!
stddev_samp_price:Float
stddev_samp_product:Int
stddev_samp_quantity:Int
var_pop_id:Int!
var_pop_price:Float
var_pop_product:Int
var_pop_quantity:Int
var_samp_id:Int!
var_samp_price:Float
var_samp_product:Int
var_samp_quantity:Int
variance_id:Int!
variance_price:Float
variance_product:Int
variance_quantity:Int
}
input productExpression {
and:productExpression!
id:IntExpression!
name:StringExpression!
not:productExpression!
or:productExpression!
weight:FloatExpression!
}
input productInput {
id:Int!
name:String
weight:Float
}
input productOrderBy {
id:OrderDirection!
name:OrderDirection!
weight:OrderDirection!
}
type productOutput {
avg_id:Int!
avg_weight:Float
count_id:Int!
count_weight:Float
id:Int!
max_id:Int!
max_weight:Float
min_id:Int!
min_weight:Float
name:String
stddev_id:Int!
stddev_pop_id:Int!
stddev_pop_weight:Float
stddev_samp_id:Int!
stddev_samp_weight:Float
stddev_weight:Float
var_pop_id:Int!
var_pop_weight:Float
var_samp_id:Int!
var_samp_weight:Float
variance_id:Int!
variance_weight:Float
weight:Float
}
input userExpression {
and:userExpression!
full_name:StringExpression!
id:IntExpression!
not:userExpression!
or:userExpression!
}
input userInput {
full_name:String
id:Int!
}
input userOrderBy {
full_name:OrderDirection!
id:OrderDirection!
}
type userOutput {
avg_id:Int!
count_id:Int!
full_name:String
id:Int!
max_id:Int!
min_id:Int!
stddev_id:Int!
stddev_pop_id:Int!
stddev_samp_id:Int!
var_pop_id:Int!
var_samp_id:Int!
variance_id:Int!
}
schema {
mutation: Mutation
query: Query
}
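The schema above is machine-generated from the database tables, so client queries are written directly against these types. As a rough, illustrative sketch (not part of this changeset; the connection string is a placeholder and the nil config is assumed to fall back to defaults), such a query can be issued through the core API along these lines:

package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	"github.com/dosco/super-graph/core"
	_ "github.com/jackc/pgx/v4/stdlib"
)

func main() {
	// Placeholder DSN; point this at your own database.
	db, err := sql.Open("pgx", "postgres://postgres:@localhost:5432/example_db")
	if err != nil {
		log.Fatal(err)
	}

	sg, err := core.NewSuperGraph(nil, db)
	if err != nil {
		log.Fatal(err)
	}

	// Uses the order_by and limit arguments declared in the schema above.
	query := `query { products(order_by: { price: desc }, limit: 5) { id name price } }`

	res, err := sg.GraphQL(context.Background(), query, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(res.Data))
}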

View File

@ -0,0 +1,27 @@
package cockroachdb_test
import (
"database/sql"
"os"
"testing"
integration_tests "github.com/dosco/super-graph/core/internal/integration_tests"
_ "github.com/jackc/pgx/v4/stdlib"
"github.com/stretchr/testify/require"
)
func TestCockroachDB(t *testing.T) {
url, found := os.LookupEnv("SG_POSTGRESQL_TEST_URL")
if !found {
t.Skip("set the SG_POSTGRESQL_TEST_URL env variable if you want to run integration tests against a PostgreSQL database")
}
db, err := sql.Open("pgx", url)
require.NoError(t, err)
integration_tests.DropSchema(t, db)
integration_tests.SetupSchema(t, db)
integration_tests.TestSuperGraph(t, db, func(t *testing.T) {
})
}
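The helpers used here (DropSchema, SetupSchema, TestSuperGraph) come from the shared integration_tests package, so wiring the same suite up against plain PostgreSQL is mostly a copy of this file. A minimal sketch, assuming a hypothetical postgresql package alongside this one (run with e.g. `SG_POSTGRESQL_TEST_URL=postgres://... go test ./core/internal/integration_tests/...`):

package postgresql_test

import (
	"database/sql"
	"os"
	"testing"

	integration_tests "github.com/dosco/super-graph/core/internal/integration_tests"
	_ "github.com/jackc/pgx/v4/stdlib"
	"github.com/stretchr/testify/require"
)

func TestPostgreSQL(t *testing.T) {
	url, found := os.LookupEnv("SG_POSTGRESQL_TEST_URL")
	if !found {
		t.Skip("set the SG_POSTGRESQL_TEST_URL env variable to run this test")
	}

	// Open a connection and run the shared schema setup plus the full suite.
	db, err := sql.Open("pgx", url)
	require.NoError(t, err)

	integration_tests.DropSchema(t, db)
	integration_tests.SetupSchema(t, db)
	integration_tests.TestSuperGraph(t, db, func(t *testing.T) {})
}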

View File

@ -167,7 +167,7 @@ func (c *compilerContext) renderColumnTypename(sel *qcode.Select, ti *DBTableInf
 }
 func (c *compilerContext) renderColumnFunction(sel *qcode.Select, ti *DBTableInfo, col qcode.Column, columnsRendered int) error {
-	pl := funcPrefixLen(col.Name)
+	pl := funcPrefixLen(c.schema.fm, col.Name)
 	// if pl == 0 {
 	// //fmt.Fprintf(w, `'%s not defined' AS %s`, cn, col.Name)
 	// io.WriteString(c.w, `'`)

View File

@ -10,7 +10,7 @@ import (
 var (
 	qcompileTest, _ = qcode.NewCompiler(qcode.Config{})
-	schema          = getTestSchema()
+	schema          = GetTestSchema()
 	vars            = NewVariables(map[string]string{
 		"admin_account_id": "5",

View File

@ -21,9 +21,17 @@ func (c *compilerContext) renderInsert(qc *qcode.QCode, w io.Writer,
return 0, fmt.Errorf("variable '%s' is empty", qc.ActionVar) return 0, fmt.Errorf("variable '%s' is empty", qc.ActionVar)
} }
io.WriteString(c.w, `WITH "_sg_input" AS (SELECT '{{`) io.WriteString(c.w, `WITH "_sg_input" AS (SELECT `)
if insert[0] == '[' {
io.WriteString(c.w, `json_array_elements(`)
}
io.WriteString(c.w, `'{{`)
io.WriteString(c.w, qc.ActionVar) io.WriteString(c.w, qc.ActionVar)
io.WriteString(c.w, `}}' :: json AS j)`) io.WriteString(c.w, `}}' :: json`)
if insert[0] == '[' {
io.WriteString(c.w, `)`)
}
io.WriteString(c.w, ` AS j)`)
st := util.NewStack() st := util.NewStack()
st.Push(kvitem{_type: itemInsert, key: ti.Name, val: insert, ti: ti}) st.Push(kvitem{_type: itemInsert, key: ti.Name, val: insert, ti: ti})
@ -90,26 +98,9 @@ func (c *compilerContext) renderInsertStmt(qc *qcode.QCode, w io.Writer, item re
renderInsertUpdateColumns(w, qc, jt, ti, sk, true) renderInsertUpdateColumns(w, qc, jt, ti, sk, true)
renderNestedInsertRelColumns(w, item.kvitem, true) renderNestedInsertRelColumns(w, item.kvitem, true)
io.WriteString(w, ` FROM "_sg_input" i, `) io.WriteString(w, ` FROM "_sg_input" i`)
renderNestedInsertRelTables(w, item.kvitem) renderNestedInsertRelTables(w, item.kvitem)
io.WriteString(w, ` RETURNING *)`)
if item.array {
io.WriteString(w, `json_populate_recordset`)
} else {
io.WriteString(w, `json_populate_record`)
}
io.WriteString(w, `(NULL::`)
io.WriteString(w, ti.Name)
if len(item.path) == 0 {
io.WriteString(w, `, i.j) t RETURNING *)`)
} else {
io.WriteString(w, `, i.j->`)
joinPath(w, item.path)
io.WriteString(w, `) t RETURNING *)`)
}
return nil return nil
} }
@ -172,21 +163,21 @@ func renderNestedInsertRelColumns(w io.Writer, item kvitem, values bool) error {
func renderNestedInsertRelTables(w io.Writer, item kvitem) error { func renderNestedInsertRelTables(w io.Writer, item kvitem) error {
if len(item.items) == 0 { if len(item.items) == 0 {
if item.relPC != nil && item.relPC.Type == RelOneToMany { if item.relPC != nil && item.relPC.Type == RelOneToMany {
quoted(w, item.relPC.Left.Table)
io.WriteString(w, `, `) io.WriteString(w, `, `)
quoted(w, item.relPC.Left.Table)
} }
} else { } else {
// Render tables needed to set values if child-to-parent // Render tables needed to set values if child-to-parent
// relationship is one-to-many // relationship is one-to-many
for _, v := range item.items { for _, v := range item.items {
if v.relCP.Type == RelOneToMany { if v.relCP.Type == RelOneToMany {
io.WriteString(w, `, `)
if v._ctype > 0 { if v._ctype > 0 {
io.WriteString(w, `"_x_`) io.WriteString(w, `"_x_`)
io.WriteString(w, v.relCP.Left.Table) io.WriteString(w, v.relCP.Left.Table)
io.WriteString(w, `", `) io.WriteString(w, `"`)
} else { } else {
quoted(w, v.relCP.Left.Table) quoted(w, v.relCP.Left.Table)
io.WriteString(w, `, `)
} }
} }
} }
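The two insert[0] == '[' branches above decide whether the incoming JSON is unwrapped row-by-row before being used as input. A minimal standalone sketch of just that string-building logic (the render helper below is illustrative, not part of the patch):

package main

import (
	"fmt"
	"strings"
)

// render mirrors the branch logic in renderInsert: a JSON array input is
// unwrapped row-by-row with json_array_elements, a single object is used as-is.
func render(actionVar string, insert []byte) string {
	var b strings.Builder
	b.WriteString(`WITH "_sg_input" AS (SELECT `)
	if insert[0] == '[' {
		b.WriteString(`json_array_elements(`)
	}
	b.WriteString(`'{{` + actionVar + `}}' :: json`)
	if insert[0] == '[' {
		b.WriteString(`)`)
	}
	b.WriteString(` AS j)`)
	return b.String()
}

func main() {
	fmt.Println(render("data", []byte(`{"name":"fan"}`)))   // single object
	fmt.Println(render("data", []byte(`[{"name":"fan"}]`))) // bulk array
}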

View File

@ -1,4 +1,4 @@
-package psql
+package psql_test
 import (
 	"encoding/json"

View File

@ -7,9 +7,9 @@ import (
"fmt" "fmt"
"io" "io"
"github.com/dosco/super-graph/jsn"
"github.com/dosco/super-graph/core/internal/qcode" "github.com/dosco/super-graph/core/internal/qcode"
"github.com/dosco/super-graph/core/internal/util" "github.com/dosco/super-graph/core/internal/util"
"github.com/dosco/super-graph/jsn"
) )
type itemType int type itemType int
@ -396,7 +396,12 @@ func renderInsertUpdateColumns(w io.Writer,
} }
if values { if values {
colWithTable(w, "t", cn.Name) io.WriteString(w, `CAST( i.j ->>`)
io.WriteString(w, `'`)
io.WriteString(w, cn.Name)
io.WriteString(w, `' AS `)
io.WriteString(w, cn.Type)
io.WriteString(w, `)`)
} else { } else {
quoted(w, cn.Name) quoted(w, cn.Name)
} }
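For a column such as full_name of type character varying, the new branch emits CAST( i.j ->>'full_name' AS character varying) instead of referencing the json_populate_record output as "t"."full_name", which is what makes the statement CockroachDB-compatible. A tiny sketch of that rendering in isolation (column name and type are example values):

package main

import (
	"fmt"
	"io"
	"os"
)

// writeCast mirrors the CAST path in renderInsertUpdateColumns: it pulls the
// value out of the input JSON as text and casts it to the column's SQL type.
func writeCast(w io.Writer, name, sqlType string) {
	io.WriteString(w, `CAST( i.j ->>'`)
	io.WriteString(w, name)
	io.WriteString(w, `' AS `)
	io.WriteString(w, sqlType)
	io.WriteString(w, `)`)
}

func main() {
	writeCast(os.Stdout, "full_name", "character varying")
	fmt.Println()
}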

View File

@ -1,4 +1,4 @@
-package psql
+package psql_test
 import (
 	"encoding/json"

View File

@ -1,4 +1,4 @@
-package psql
+package psql_test
 import (
 	"fmt"
@ -8,6 +8,7 @@ import (
 	"strings"
 	"testing"
+	"github.com/dosco/super-graph/core/internal/psql"
 	"github.com/dosco/super-graph/core/internal/qcode"
 )
@ -19,7 +20,7 @@ const (
 var (
 	qcompile *qcode.Compiler
-	pcompile *Compiler
+	pcompile *psql.Compiler
 	expected map[string][]string
 )
@ -133,13 +134,16 @@ func TestMain(m *testing.M) {
 		log.Fatal(err)
 	}
-	schema := getTestSchema()
+	schema, err := psql.GetTestSchema()
+	if err != nil {
+		log.Fatal(err)
+	}
-	vars := NewVariables(map[string]string{
+	vars := psql.NewVariables(map[string]string{
 		"admin_account_id": "5",
 	})
-	pcompile = NewCompiler(Config{
+	pcompile = psql.NewCompiler(psql.Config{
 		Schema: schema,
 		Vars:   vars,
 	})
@ -173,7 +177,7 @@ func TestMain(m *testing.M) {
 	os.Exit(m.Run())
 }
-func compileGQLToPSQL(t *testing.T, gql string, vars Variables, role string) {
+func compileGQLToPSQL(t *testing.T, gql string, vars psql.Variables, role string) {
 	generateTestFile := false
 	if generateTestFile {

View File

@ -141,7 +141,7 @@ func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (
 		c.renderLateralJoin(sel)
 	}
-	if !ti.Singular {
+	if !ti.IsSingular {
 		c.renderPluralSelect(sel, ti)
 	}
@ -178,7 +178,7 @@ func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (
 	io.WriteString(c.w, `)`)
 	aliasWithID(c.w, "__sj", sel.ID)
-	if !ti.Singular {
+	if !ti.IsSingular {
 		io.WriteString(c.w, `)`)
 		aliasWithID(c.w, "__sj", sel.ID)
 	}
@ -438,7 +438,7 @@ func (c *compilerContext) renderSelect(sel *qcode.Select, ti *DBTableInfo, vars
 	io.WriteString(c.w, `SELECT to_jsonb("__sr_`)
 	int2string(c.w, sel.ID)
-	io.WriteString(c.w, `") `)
+	io.WriteString(c.w, `".*) `)
 	if sel.Paging.Type != qcode.PtOffset {
 		for i := range sel.OrderBy {
@ -543,7 +543,7 @@ func (c *compilerContext) renderColumns(sel *qcode.Select, ti *DBTableInfo, skip
 	var cn string
 	for _, col := range sel.Cols {
-		if n := funcPrefixLen(col.Name); n != 0 {
+		if n := funcPrefixLen(c.schema.fm, col.Name); n != 0 {
 			if !sel.Functions {
 				continue
 			}
@ -706,7 +706,7 @@ func (c *compilerContext) renderBaseSelect(sel *qcode.Select, ti *DBTableInfo, r
 	}
 	switch {
-	case ti.Singular:
+	case ti.IsSingular:
 		io.WriteString(c.w, ` LIMIT ('1') :: integer`)
 	case len(sel.Paging.Limit) != 0:
@ -921,8 +921,6 @@ func (c *compilerContext) renderExp(ex *qcode.Exp, ti *DBTableInfo, skipNested b
 		st.Push('(')
 	case qcode.OpNot:
-		//fmt.Printf("1> %s %d %s %s\n", val.Op, len(val.Children), val.Children[0].Op, val.Children[1].Op)
 		st.Push(val.Children[0])
 		st.Push(qcode.OpNot)
@ -1193,7 +1191,7 @@ func (c *compilerContext) renderVal(ex *qcode.Exp, vars map[string]string, col *
 	io.WriteString(c.w, col.Type)
 }
-func funcPrefixLen(fn string) int {
+func funcPrefixLen(fm map[string]*DBFunction, fn string) int {
 	switch {
 	case strings.HasPrefix(fn, "avg_"):
 		return 4
@ -1218,6 +1216,14 @@ func funcPrefixLen(fn string) int {
 	case strings.HasPrefix(fn, "var_samp_"):
 		return 9
 	}
+	fnLen := len(fn)
+	for k := range fm {
+		kLen := len(k)
+		if kLen < fnLen && k[0] == fn[0] && strings.HasPrefix(fn, k) && fn[kLen] == '_' {
+			return kLen + 1
+		}
+	}
 	return 0
 }
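With the fallback loop in place, funcPrefixLen can resolve not only the built-in aggregate prefixes but also a column name such as lower_email against a single-argument function named lower registered in the schema's function map. A self-contained sketch of just the new lookup (the map contents are illustrative):

package main

import (
	"fmt"
	"strings"
)

type DBFunction struct{ Name string }

// customFuncPrefixLen is the fallback added to funcPrefixLen: if a column name
// starts with a known function name followed by '_', report the prefix length
// including the underscore.
func customFuncPrefixLen(fm map[string]*DBFunction, fn string) int {
	fnLen := len(fn)
	for k := range fm {
		kLen := len(k)
		if kLen < fnLen && k[0] == fn[0] && strings.HasPrefix(fn, k) && fn[kLen] == '_' {
			return kLen + 1
		}
	}
	return 0
}

func main() {
	fm := map[string]*DBFunction{"lower": {Name: "lower"}}
	fmt.Println(customFuncPrefixLen(fm, "lower_email")) // 6: strip "lower_"
	fmt.Println(customFuncPrefixLen(fm, "email"))       // 0: no match
}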

View File

@ -1,4 +1,4 @@
-package psql
+package psql_test
 import (
 	"bytes"

View File

@ -11,17 +11,20 @@ type DBSchema struct {
 	ver int
 	t   map[string]*DBTableInfo
 	rm  map[string]map[string]*DBRel
+	fm  map[string]*DBFunction
 }
 type DBTableInfo struct {
 	Name       string
 	Type       string
-	Singular   bool
+	IsSingular bool
 	Columns    []DBColumn
 	PrimaryCol *DBColumn
 	TSVCol     *DBColumn
 	ColMap     map[string]*DBColumn
 	ColIDMap   map[int16]*DBColumn
+	Singular   string
+	Plural     string
 }
 type RelType int
@ -54,8 +57,10 @@ type DBRel struct {
 func NewDBSchema(info *DBInfo, aliases map[string][]string) (*DBSchema, error) {
 	schema := &DBSchema{
+		ver: info.Version,
 		t:   make(map[string]*DBTableInfo),
 		rm:  make(map[string]map[string]*DBRel),
+		fm:  make(map[string]*DBFunction, len(info.Functions)),
 	}
 	for i, t := range info.Tables {
@ -79,6 +84,12 @@ func NewDBSchema(info *DBInfo, aliases map[string][]string) (*DBSchema, error) {
 		}
 	}
+	for k, f := range info.Functions {
+		if len(f.Params) == 1 {
+			schema.fm[strings.ToLower(f.Name)] = &info.Functions[k]
+		}
+	}
 	return schema, nil
 }
@ -89,23 +100,28 @@ func (s *DBSchema) addTable(
 	colidmap := make(map[int16]*DBColumn, len(cols))
 	singular := flect.Singularize(t.Key)
+	plural := flect.Pluralize(t.Key)
 	s.t[singular] = &DBTableInfo{
 		Name:       t.Name,
 		Type:       t.Type,
-		Singular:   true,
+		IsSingular: true,
 		Columns:    cols,
 		ColMap:     colmap,
 		ColIDMap:   colidmap,
+		Singular:   singular,
+		Plural:     plural,
 	}
-	plural := flect.Pluralize(t.Key)
 	s.t[plural] = &DBTableInfo{
 		Name:       t.Name,
 		Type:       t.Type,
-		Singular:   false,
+		IsSingular: false,
 		Columns:    cols,
 		ColMap:     colmap,
 		ColIDMap:   colidmap,
+		Singular:   singular,
+		Plural:     plural,
 	}
 	if al, ok := aliases[t.Key]; ok {
@ -364,6 +380,14 @@ func (s *DBSchema) updateSchemaOTMT(
 	return nil
 }
+func (s *DBSchema) GetTableNames() []string {
+	var names []string
+	for name := range s.t {
+		names = append(names, name)
+	}
+	return names
+}
 func (s *DBSchema) GetTable(table string) (*DBTableInfo, error) {
 	t, ok := s.t[table]
 	if !ok {
@ -424,3 +448,11 @@ func (s *DBSchema) GetRel(child, parent string) (*DBRel, error) {
 	}
 	return rel, nil
 }
+func (s *DBSchema) GetFunctions() []*DBFunction {
+	var funcs []*DBFunction
+	for _, f := range s.fm {
+		funcs = append(funcs, f)
+	}
+	return funcs
+}

View File

@ -13,7 +13,8 @@ type DBInfo struct {
 	Version int
 	Tables  []DBTable
 	Columns [][]DBColumn
-	colmap  map[string]map[string]*DBColumn
+	Functions []DBFunction
+	colMap    map[string]map[string]*DBColumn
 }
 func GetDBInfo(db *sql.DB) (*DBInfo, error) {
@ -35,41 +36,56 @@ func GetDBInfo(db *sql.DB) (*DBInfo, error) {
 		return nil, err
 	}
-	di.colmap = make(map[string]map[string]*DBColumn, len(di.Tables))
-	for i, t := range di.Tables {
+	for _, t := range di.Tables {
 		cols, err := GetColumns(db, "public", t.Name)
 		if err != nil {
 			return nil, err
 		}
 		di.Columns = append(di.Columns, cols)
-		di.colmap[t.Key] = make(map[string]*DBColumn, len(cols))
-		for n, c := range di.Columns[i] {
-			di.colmap[t.Key][c.Key] = &di.Columns[i][n]
-		}
 	}
+	di.colMap = newColMap(di.Tables, di.Columns)
+	di.Functions, err = GetFunctions(db)
+	if err != nil {
+		return nil, err
+	}
 	return di, nil
 }
+func newColMap(tables []DBTable, columns [][]DBColumn) map[string]map[string]*DBColumn {
+	cm := make(map[string]map[string]*DBColumn, len(tables))
+	for i, t := range tables {
+		cols := columns[i]
+		cm[t.Key] = make(map[string]*DBColumn, len(cols))
+		for n, c := range cols {
+			cm[t.Key][c.Key] = &columns[i][n]
+		}
+	}
+	return cm
+}
 func (di *DBInfo) AddTable(t DBTable, cols []DBColumn) {
 	t.ID = di.Tables[len(di.Tables)-1].ID
 	di.Tables = append(di.Tables, t)
-	di.colmap[t.Key] = make(map[string]*DBColumn, len(cols))
+	di.colMap[t.Key] = make(map[string]*DBColumn, len(cols))
 	for i := range cols {
 		cols[i].ID = int16(i)
 		c := &cols[i]
-		di.colmap[t.Key][c.Key] = c
+		di.colMap[t.Key][c.Key] = c
 	}
 	di.Columns = append(di.Columns, cols)
 }
 func (di *DBInfo) GetColumn(table, column string) (*DBColumn, bool) {
-	v, ok := di.colmap[strings.ToLower(table)][strings.ToLower(column)]
+	v, ok := di.colMap[strings.ToLower(table)][strings.ToLower(column)]
 	return v, ok
 }
@ -237,6 +253,71 @@ ORDER BY id;`
 	return cols, nil
 }
+type DBFunction struct {
+	Name   string
+	Params []DBFuncParam
+}
+type DBFuncParam struct {
+	ID   int
+	Name sql.NullString
+	Type string
+}
+func GetFunctions(db *sql.DB) ([]DBFunction, error) {
+	sqlStmt := `
+SELECT
+	routines.routine_name,
+	parameters.specific_name,
+	parameters.data_type,
+	parameters.parameter_name,
+	parameters.ordinal_position
+FROM
+	information_schema.routines
+RIGHT JOIN
+	information_schema.parameters
+	ON (routines.specific_name = parameters.specific_name and parameters.ordinal_position IS NOT NULL)
+WHERE
+	routines.specific_schema = 'public'
+ORDER BY
+	routines.routine_name, parameters.ordinal_position;`
+	rows, err := db.Query(sqlStmt)
+	if err != nil {
+		return nil, fmt.Errorf("Error fetching functions: %s", err)
+	}
+	defer rows.Close()
+	var funcs []DBFunction
+	fm := make(map[string]int)
+	parameterIndex := 1
+	for rows.Next() {
+		var fn, fid string
+		fp := DBFuncParam{}
+		err = rows.Scan(&fn, &fid, &fp.Type, &fp.Name, &fp.ID)
+		if err != nil {
+			return nil, err
+		}
+		if !fp.Name.Valid {
+			fp.Name.String = string(parameterIndex)
+			fp.Name.Valid = true
+		}
+		if i, ok := fm[fid]; ok {
+			funcs[i].Params = append(funcs[i].Params, fp)
+		} else {
+			funcs = append(funcs, DBFunction{Name: fn, Params: []DBFuncParam{fp}})
+			fm[fid] = len(funcs) - 1
+		}
+		parameterIndex++
+	}
+	return funcs, nil
+}
 // func GetValType(type string) qcode.ValType {
 // switch {
 // case "bigint", "integer", "smallint", "numeric", "bigserial":

View File

@ -1,11 +1,10 @@
 package psql
 import (
-	"log"
 	"strings"
 )
-func getTestSchema() *DBSchema {
+func GetTestDBInfo() *DBInfo {
 	tables := []DBTable{
 		DBTable{Name: "customers", Type: "table"},
 		DBTable{Name: "users", Type: "table"},
@ -74,36 +73,19 @@ func getTestSchema() *DBSchema {
 		}
 	}
-	schema := &DBSchema{
-		ver: 110000,
-		t:   make(map[string]*DBTableInfo),
-		rm:  make(map[string]map[string]*DBRel),
-	}
+	return &DBInfo{
+		Version:   110000,
+		Tables:    tables,
+		Columns:   columns,
+		Functions: []DBFunction{},
+		colMap:    newColMap(tables, columns),
+	}
+}
+func GetTestSchema() (*DBSchema, error) {
 	aliases := map[string][]string{
 		"users": []string{"mes"},
 	}
-	for i, t := range tables {
-		err := schema.addTable(t, columns[i], aliases)
-		if err != nil {
-			log.Fatal(err)
-		}
-	}
-	for i, t := range tables {
-		err := schema.firstDegreeRels(t, columns[i])
-		if err != nil {
-			log.Fatal(err)
-		}
-	}
-	for i, t := range tables {
-		err := schema.secondDegreeRels(t, columns[i])
-		if err != nil {
-			log.Fatal(err)
-		}
-	}
-	return schema
+	return NewDBSchema(GetTestDBInfo(), aliases)
 }

View File

@ -1,25 +1,25 @@
=== RUN TestCompileInsert === RUN TestCompileInsert
=== RUN TestCompileInsert/simpleInsert === RUN TestCompileInsert/simpleInsert
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email") SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t RETURNING *) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id" FROM (SELECT "users"."id" FROM "users" LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i RETURNING *) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id" FROM (SELECT "users"."id" FROM "users" LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/singleInsert === RUN TestCompileInsert/singleInsert
WITH "_sg_input" AS (SELECT '{{insert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description", "price", "user_id") SELECT "t"."name", "t"."description", "t"."price", "t"."user_id" FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{insert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description", "price", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'user_id' AS bigint) FROM "_sg_input" i RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/bulkInsert === RUN TestCompileInsert/bulkInsert
WITH "_sg_input" AS (SELECT '{{insert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT "t"."name", "t"."description" FROM "_sg_input" i, json_populate_recordset(NULL::products, i.j) t RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{insert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/simpleInsertWithPresets === RUN TestCompileInsert/simpleInsertWithPresets
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT "t"."name", "t"."price", 'now' :: timestamp without time zone, 'now' :: timestamp without time zone, '{{user_id}}' :: bigint FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id" FROM (SELECT "products"."id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), 'now' :: timestamp without time zone, 'now' :: timestamp without time zone, '{{user_id}}' :: bigint FROM "_sg_input" i RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id" FROM (SELECT "products"."id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertManyToMany === RUN TestCompileInsert/nestedInsertManyToMany
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "price") SELECT "t"."name", "t"."price" FROM "_sg_input" i, json_populate_record(NULL::products, i.j->'product') t RETURNING *), "customers" AS (INSERT INTO "customers" ("full_name", "email") SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::customers, i.j->'customer') t RETURNING *), "purchases" AS (INSERT INTO "purchases" ("sale_type", "quantity", "due_date", "customer_id", "product_id") SELECT "t"."sale_type", "t"."quantity", "t"."due_date", "customers"."id", "products"."id" FROM "_sg_input" i, "customers", "products", json_populate_record(NULL::purchases, i.j) t RETURNING *) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "price") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)) FROM "_sg_input" i RETURNING *), "customers" AS (INSERT INTO "customers" ("full_name", "email") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i RETURNING *), "purchases" AS (INSERT INTO "purchases" ("sale_type", "quantity", "due_date", "customer_id", "product_id") SELECT CAST( i.j ->>'sale_type' AS character varying), CAST( i.j ->>'quantity' AS integer), CAST( i.j ->>'due_date' AS timestamp without time zone), "customers"."id", "products"."id" FROM "_sg_input" i, "customers", "products" RETURNING *) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", 
"customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "customers" AS (INSERT INTO "customers" ("full_name", "email") SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::customers, i.j->'customer') t RETURNING *), "products" AS (INSERT INTO "products" ("name", "price") SELECT "t"."name", "t"."price" FROM "_sg_input" i, json_populate_record(NULL::products, i.j->'product') t RETURNING *), "purchases" AS (INSERT INTO "purchases" ("sale_type", "quantity", "due_date", "product_id", "customer_id") SELECT "t"."sale_type", "t"."quantity", "t"."due_date", "products"."id", "customers"."id" FROM "_sg_input" i, "products", "customers", json_populate_record(NULL::purchases, i.j) t RETURNING *) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "customers" AS (INSERT INTO "customers" ("full_name", "email") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i RETURNING *), "products" AS (INSERT INTO "products" ("name", "price") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)) FROM "_sg_input" i RETURNING *), "purchases" AS (INSERT INTO "purchases" ("sale_type", "quantity", "due_date", "product_id", "customer_id") SELECT CAST( i.j ->>'sale_type' AS character varying), CAST( i.j ->>'quantity' AS integer), CAST( i.j ->>'due_date' AS timestamp without time zone), "products"."id", "customers"."id" FROM "_sg_input" i, "products", "customers" RETURNING *) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", 
"customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToMany === RUN TestCompileInsert/nestedInsertOneToMany
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT "t"."full_name", "t"."email", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t RETURNING *), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT "t"."name", "t"."price", "t"."created_at", "t"."updated_at", "users"."id" FROM "_sg_input" i, "users", json_populate_record(NULL::products, i.j->'product') t RETURNING *) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i RETURNING *), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone), "users"."id" FROM "_sg_input" i, "users" RETURNING *) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToOne === RUN TestCompileInsert/nestedInsertOneToOne
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT "t"."full_name", "t"."email", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::users, i.j->'user') t RETURNING *), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT "t"."name", "t"."price", "t"."created_at", "t"."updated_at", "users"."id" FROM "_sg_input" i, "users", json_populate_record(NULL::products, i.j) t RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i RETURNING *), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone), "users"."id" FROM "_sg_input" i, "users" RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToManyWithConnect === RUN TestCompileInsert/nestedInsertOneToManyWithConnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT "t"."full_name", "t"."email", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t RETURNING *), "products" AS ( UPDATE "products" SET "user_id" = "users"."id" FROM "users" WHERE ("products"."id"= ((i.j->'product'->'connect'->>'id'))::bigint) RETURNING "products".*) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i RETURNING *), "products" AS ( UPDATE "products" SET "user_id" = "users"."id" FROM "users" WHERE ("products"."id"= ((i.j->'product'->'connect'->>'id'))::bigint) RETURNING "products".*) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToOneWithConnect === RUN TestCompileInsert/nestedInsertOneToOneWithConnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint LIMIT 1), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT "t"."name", "t"."price", "t"."created_at", "t"."updated_at", "_x_users"."id" FROM "_sg_input" i, "_x_users", json_populate_record(NULL::products, i.j) t RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user", "__sj_2"."json" AS "tags" FROM (SELECT "products"."id", "products"."name", "products"."user_id", "products"."tags" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_2"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "tags_2"."id" AS "id", "tags_2"."name" AS "name" FROM (SELECT "tags"."id", "tags"."name" FROM "tags" WHERE ((("tags"."slug") = any ("products_0"."tags"))) LIMIT ('20') :: integer) AS "tags_2") AS "__sr_2") AS "__sj_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint LIMIT 1), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone), "_x_users"."id" FROM "_sg_input" i, "_x_users" RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user", "__sj_2"."json" AS "tags" FROM (SELECT "products"."id", "products"."name", "products"."user_id", "products"."tags" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_2"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "tags_2"."id" AS "id", "tags_2"."name" AS "name" FROM (SELECT "tags"."id", "tags"."name" FROM "tags" WHERE ((("tags"."slug") = any ("products_0"."tags"))) LIMIT ('20') :: integer) AS "tags_2") AS "__sr_2") AS "__sj_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToOneWithConnectArray === RUN TestCompileInsert/nestedInsertOneToOneWithConnectArray
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id" = ANY((select a::bigint AS list from json_array_elements_text((i.j->'user'->'connect'->>'id')::json) AS a)) LIMIT 1), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT "t"."name", "t"."price", "t"."created_at", "t"."updated_at", "_x_users"."id" FROM "_sg_input" i, "_x_users", json_populate_record(NULL::products, i.j) t RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id" = ANY((select a::bigint AS list from json_array_elements_text((i.j->'user'->'connect'->>'id')::json) AS a)) LIMIT 1), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone), "_x_users"."id" FROM "_sg_input" i, "_x_users" RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
--- PASS: TestCompileInsert (0.02s) --- PASS: TestCompileInsert (0.02s)
--- PASS: TestCompileInsert/simpleInsert (0.00s) --- PASS: TestCompileInsert/simpleInsert (0.00s)
--- PASS: TestCompileInsert/singleInsert (0.00s) --- PASS: TestCompileInsert/singleInsert (0.00s)
@ -33,67 +33,67 @@ WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id"
--- PASS: TestCompileInsert/nestedInsertOneToOneWithConnectArray (0.00s) --- PASS: TestCompileInsert/nestedInsertOneToOneWithConnectArray (0.00s)
=== RUN TestCompileMutate === RUN TestCompileMutate
=== RUN TestCompileMutate/singleUpsert === RUN TestCompileMutate/singleUpsert
WITH "_sg_input" AS (SELECT '{{upsert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT "t"."name", "t"."description" FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t RETURNING *) ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{upsert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i RETURNING *) ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileMutate/singleUpsertWhere === RUN TestCompileMutate/singleUpsertWhere
WITH "_sg_input" AS (SELECT '{{upsert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT "t"."name", "t"."description" FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t RETURNING *) ON CONFLICT (id) WHERE (("products"."price") > '3' :: numeric(7,2)) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{upsert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i RETURNING *) ON CONFLICT (id) WHERE (("products"."price") > '3' :: numeric(7,2)) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileMutate/bulkUpsert === RUN TestCompileMutate/bulkUpsert
WITH "_sg_input" AS (SELECT '{{upsert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT "t"."name", "t"."description" FROM "_sg_input" i, json_populate_recordset(NULL::products, i.j) t RETURNING *) ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{upsert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i RETURNING *) ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileMutate/delete === RUN TestCompileMutate/delete
WITH "products" AS (DELETE FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") = '1' :: bigint)) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0" WITH "products" AS (DELETE FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") = '1' :: bigint)) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
--- PASS: TestCompileMutate (0.01s) --- PASS: TestCompileMutate (0.00s)
--- PASS: TestCompileMutate/singleUpsert (0.00s) --- PASS: TestCompileMutate/singleUpsert (0.00s)
--- PASS: TestCompileMutate/singleUpsertWhere (0.00s) --- PASS: TestCompileMutate/singleUpsertWhere (0.00s)
--- PASS: TestCompileMutate/bulkUpsert (0.00s) --- PASS: TestCompileMutate/bulkUpsert (0.00s)
--- PASS: TestCompileMutate/delete (0.00s) --- PASS: TestCompileMutate/delete (0.00s)
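Every mutate hunk above makes the same two substitutions in the expected SQL: values are pulled from the JSON input with ->> plus an explicit cast instead of going through json_populate_record / json_populate_recordset, and subquery rows are serialized as to_jsonb(alias.*). A minimal sketch of the two projection styles, reusing the "_sg_input" CTE and products table names from these tests (the literal JSON value is illustrative only):

  -- old style: populate a typed record from the JSON input
  -- (needs the products table's row type to exist)
  WITH "_sg_input" AS (SELECT '{"name": "x", "description": "y"}' :: json AS j)
  SELECT "t"."name", "t"."description"
  FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t;

  -- new style: extract each key as text and cast it to the column type
  -- (needs only the JSON itself)
  WITH "_sg_input" AS (SELECT '{"name": "x", "description": "y"}' :: json AS j)
  SELECT CAST(i.j ->> 'name' AS character varying), CAST(i.j ->> 'description' AS text)
  FROM "_sg_input" i;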
 === RUN TestCompileQuery
 === RUN TestCompileQuery/withComplexArgs
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT DISTINCT ON ("products"."price") "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."id") < '28' :: bigint) AND (("products"."id") >= '20' :: bigint) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) ORDER BY "products"."price" DESC LIMIT ('30') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT DISTINCT ON ("products"."price") "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."id") < '28' :: bigint) AND (("products"."id") >= '20' :: bigint) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) ORDER BY "products"."price" DESC LIMIT ('30') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/withWhereAndList
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."price") > '10' :: numeric(7,2)) AND NOT (("products"."id") IS NULL) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."price") > '10' :: numeric(7,2)) AND NOT (("products"."id") IS NULL) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/withWhereIsNull
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."price") > '10' :: numeric(7,2)) AND NOT (("products"."id") IS NULL) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."price") > '10' :: numeric(7,2)) AND NOT (("products"."id") IS NULL) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/withWhereMultiOr
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND ((("products"."price") < '20' :: numeric(7,2)) OR (("products"."price") > '10' :: numeric(7,2)) OR NOT (("products"."id") IS NULL)))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND ((("products"."price") < '20' :: numeric(7,2)) OR (("products"."price") > '10' :: numeric(7,2)) OR NOT (("products"."id") IS NULL)))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/fetchByID
-SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") = '{{id}}' :: bigint))) LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
+SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") = '{{id}}' :: bigint))) LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
 === RUN TestCompileQuery/searchQuery
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."search_rank" AS "search_rank", "products_0"."search_headline_description" AS "search_headline_description" FROM (SELECT "products"."id", "products"."name", ts_rank("products"."tsv", websearch_to_tsquery('{{query}}')) AS "search_rank", ts_headline("products"."description", websearch_to_tsquery('{{query}}')) AS "search_headline_description" FROM "products" WHERE ((("products"."tsv") @@ websearch_to_tsquery('{{query}}'))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."search_rank" AS "search_rank", "products_0"."search_headline_description" AS "search_headline_description" FROM (SELECT "products"."id", "products"."name", ts_rank("products"."tsv", websearch_to_tsquery('{{query}}')) AS "search_rank", ts_headline("products"."description", websearch_to_tsquery('{{query}}')) AS "search_headline_description" FROM "products" WHERE ((("products"."tsv") @@ websearch_to_tsquery('{{query}}'))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/oneToMany
-SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."email" AS "email", "__sj_1"."json" AS "products" FROM (SELECT "users"."email", "users"."id" FROM "users" LIMIT ('20') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id")) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."email" AS "email", "__sj_1"."json" AS "products" FROM (SELECT "users"."email", "users"."id" FROM "users" LIMIT ('20') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id")) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/oneToManyReverse
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."name" AS "name", "products_0"."price" AS "price", "__sj_1"."json" AS "users" FROM (SELECT "products"."name", "products"."price", "products"."user_id" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."email" AS "email" FROM (SELECT "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('20') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name", "products_0"."price" AS "price", "__sj_1"."json" AS "users" FROM (SELECT "products"."name", "products"."price", "products"."user_id" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."email" AS "email" FROM (SELECT "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('20') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/oneToManyArray
-SELECT jsonb_build_object('tags', "__sj_0"."json", 'product', "__sj_2"."json") as "__root" FROM (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "products_2"."name" AS "name", "products_2"."price" AS "price", "__sj_3"."json" AS "tags" FROM (SELECT "products"."name", "products"."price", "products"."tags" FROM "products" LIMIT ('1') :: integer) AS "products_2" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_3"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_3") AS "json"FROM (SELECT "tags_3"."id" AS "id", "tags_3"."name" AS "name" FROM (SELECT "tags"."id", "tags"."name" FROM "tags" WHERE ((("tags"."slug") = any ("products_2"."tags"))) LIMIT ('20') :: integer) AS "tags_3") AS "__sr_3") AS "__sj_3") AS "__sj_3" ON ('true')) AS "__sr_2") AS "__sj_2", (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "tags_0"."name" AS "name", "__sj_1"."json" AS "product" FROM (SELECT "tags"."name", "tags"."slug" FROM "tags" LIMIT ('20') :: integer) AS "tags_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."name" AS "name" FROM (SELECT "products"."name" FROM "products" WHERE ((("tags_0"."slug") = any ("products"."tags"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('tags', "__sj_0"."json", 'product', "__sj_2"."json") as "__root" FROM (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "products_2"."name" AS "name", "products_2"."price" AS "price", "__sj_3"."json" AS "tags" FROM (SELECT "products"."name", "products"."price", "products"."tags" FROM "products" LIMIT ('1') :: integer) AS "products_2" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_3"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_3".*) AS "json"FROM (SELECT "tags_3"."id" AS "id", "tags_3"."name" AS "name" FROM (SELECT "tags"."id", "tags"."name" FROM "tags" WHERE ((("tags"."slug") = any ("products_2"."tags"))) LIMIT ('20') :: integer) AS "tags_3") AS "__sr_3") AS "__sj_3") AS "__sj_3" ON ('true')) AS "__sr_2") AS "__sj_2", (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "tags_0"."name" AS "name", "__sj_1"."json" AS "product" FROM (SELECT "tags"."name", "tags"."slug" FROM "tags" LIMIT ('20') :: integer) AS "tags_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."name" AS "name" FROM (SELECT "products"."name" FROM "products" WHERE ((("tags_0"."slug") = any ("products"."tags"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/manyToMany
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."name" AS "name", "__sj_1"."json" AS "customers" FROM (SELECT "products"."name", "products"."id" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "customers_1"."email" AS "email", "customers_1"."full_name" AS "full_name" FROM (SELECT "customers"."email", "customers"."full_name" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products_0"."id")) WHERE ((("customers"."id") = ("purchases"."customer_id"))) LIMIT ('20') :: integer) AS "customers_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name", "__sj_1"."json" AS "customers" FROM (SELECT "products"."name", "products"."id" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "customers_1"."email" AS "email", "customers_1"."full_name" AS "full_name" FROM (SELECT "customers"."email", "customers"."full_name" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products_0"."id")) WHERE ((("customers"."id") = ("purchases"."customer_id"))) LIMIT ('20') :: integer) AS "customers_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/manyToManyReverse
-SELECT jsonb_build_object('customers', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "customers_0"."email" AS "email", "customers_0"."full_name" AS "full_name", "__sj_1"."json" AS "products" FROM (SELECT "customers"."email", "customers"."full_name", "customers"."id" FROM "customers" LIMIT ('20') :: integer) AS "customers_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."name" AS "name" FROM (SELECT "products"."name" FROM "products" LEFT OUTER JOIN "purchases" ON (("purchases"."customer_id") = ("customers_0"."id")) WHERE ((("products"."id") = ("purchases"."product_id")) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('customers', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "customers_0"."email" AS "email", "customers_0"."full_name" AS "full_name", "__sj_1"."json" AS "products" FROM (SELECT "customers"."email", "customers"."full_name", "customers"."id" FROM "customers" LIMIT ('20') :: integer) AS "customers_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."name" AS "name" FROM (SELECT "products"."name" FROM "products" LEFT OUTER JOIN "purchases" ON (("purchases"."customer_id") = ("customers_0"."id")) WHERE ((("products"."id") = ("purchases"."product_id")) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/aggFunction
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."name" AS "name", "products_0"."count_price" AS "count_price" FROM (SELECT "products"."name", count("products"."price") AS "count_price" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name", "products_0"."count_price" AS "count_price" FROM (SELECT "products"."name", count("products"."price") AS "count_price" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/aggFunctionBlockedByCol
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."name" AS "name" FROM (SELECT "products"."name" FROM "products" GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name" FROM (SELECT "products"."name" FROM "products" GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/aggFunctionDisabled
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."name" AS "name" FROM (SELECT "products"."name" FROM "products" GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name" FROM (SELECT "products"."name" FROM "products" GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/aggFunctionWithFilter
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."max_price" AS "max_price" FROM (SELECT "products"."id", max("products"."price") AS "max_price" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") > '10' :: bigint))) GROUP BY "products"."id" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."max_price" AS "max_price" FROM (SELECT "products"."id", max("products"."price") AS "max_price" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") > '10' :: bigint))) GROUP BY "products"."id" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/syntheticTables
-SELECT jsonb_build_object('me', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT FROM (SELECT "users"."email" FROM "users" WHERE ((("users"."id") = '{{user_id}}' :: bigint)) LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
+SELECT jsonb_build_object('me', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT FROM (SELECT "users"."email" FROM "users" WHERE ((("users"."id") = '{{user_id}}' :: bigint)) LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
 === RUN TestCompileQuery/queryWithVariables
-SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE (((("products"."price") = '{{product_price}}' :: numeric(7,2)) AND (("products"."id") = '{{product_id}}' :: bigint) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
+SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE (((("products"."price") = '{{product_price}}' :: numeric(7,2)) AND (("products"."id") = '{{product_id}}' :: bigint) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
 === RUN TestCompileQuery/withWhereOnRelations
-SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."email" AS "email" FROM (SELECT "users"."id", "users"."email" FROM "users" WHERE (NOT EXISTS (SELECT 1 FROM products WHERE (("products"."user_id") = ("users"."id")) AND ((("products"."price") > '3' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."email" AS "email" FROM (SELECT "users"."id", "users"."email" FROM "users" WHERE (NOT EXISTS (SELECT 1 FROM products WHERE (("products"."user_id") = ("users"."id")) AND ((("products"."price") > '3' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/multiRoot
-SELECT jsonb_build_object('customer', "__sj_0"."json", 'user', "__sj_1"."json", 'product', "__sj_2"."json") as "__root" FROM (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "products_2"."id" AS "id", "products_2"."name" AS "name", "__sj_3"."json" AS "customers", "__sj_4"."json" AS "customer" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('1') :: integer) AS "products_2" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_4") AS "json"FROM (SELECT "customers_4"."email" AS "email" FROM (SELECT "customers"."email" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products_2"."id")) WHERE ((("customers"."id") = ("purchases"."customer_id"))) LIMIT ('1') :: integer) AS "customers_4") AS "__sr_4") AS "__sj_4" ON ('true') LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_3"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_3") AS "json"FROM (SELECT "customers_3"."email" AS "email" FROM (SELECT "customers"."email" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products_2"."id")) WHERE ((("customers"."id") = ("purchases"."customer_id"))) LIMIT ('20') :: integer) AS "customers_3") AS "__sr_3") AS "__sj_3") AS "__sj_3" ON ('true')) AS "__sr_2") AS "__sj_2", (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1", (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "customers_0"."id" AS "id" FROM (SELECT "customers"."id" FROM "customers" LIMIT ('1') :: integer) AS "customers_0") AS "__sr_0") AS "__sj_0"
+SELECT jsonb_build_object('customer', "__sj_0"."json", 'user', "__sj_1"."json", 'product', "__sj_2"."json") as "__root" FROM (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "products_2"."id" AS "id", "products_2"."name" AS "name", "__sj_3"."json" AS "customers", "__sj_4"."json" AS "customer" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('1') :: integer) AS "products_2" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_4".*) AS "json"FROM (SELECT "customers_4"."email" AS "email" FROM (SELECT "customers"."email" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products_2"."id")) WHERE ((("customers"."id") = ("purchases"."customer_id"))) LIMIT ('1') :: integer) AS "customers_4") AS "__sr_4") AS "__sj_4" ON ('true') LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_3"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_3".*) AS "json"FROM (SELECT "customers_3"."email" AS "email" FROM (SELECT "customers"."email" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products_2"."id")) WHERE ((("customers"."id") = ("purchases"."customer_id"))) LIMIT ('20') :: integer) AS "customers_3") AS "__sr_3") AS "__sj_3") AS "__sj_3" ON ('true')) AS "__sr_2") AS "__sj_2", (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1", (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "customers_0"."id" AS "id" FROM (SELECT "customers"."id" FROM "customers" LIMIT ('1') :: integer) AS "customers_0") AS "__sr_0") AS "__sj_0"
 === RUN TestCompileQuery/jsonColumnAsTable
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "tag_count" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "tag_count_1"."count" AS "count", "__sj_2"."json" AS "tags" FROM (SELECT "tag_count"."count", "tag_count"."tag_id" FROM "products", json_to_recordset("products"."tag_count") AS "tag_count"(tag_id bigint, count int) WHERE ((("products"."id") = ("products_0"."id"))) LIMIT ('1') :: integer) AS "tag_count_1" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_2"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "tags_2"."name" AS "name" FROM (SELECT "tags"."name" FROM "tags" WHERE ((("tags"."id") = ("tag_count_1"."tag_id"))) LIMIT ('20') :: integer) AS "tags_2") AS "__sr_2") AS "__sj_2") AS "__sj_2" ON ('true')) AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "tag_count" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "tag_count_1"."count" AS "count", "__sj_2"."json" AS "tags" FROM (SELECT "tag_count"."count", "tag_count"."tag_id" FROM "products", json_to_recordset("products"."tag_count") AS "tag_count"(tag_id bigint, count int) WHERE ((("products"."id") = ("products_0"."id"))) LIMIT ('1') :: integer) AS "tag_count_1" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_2"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "tags_2"."name" AS "name" FROM (SELECT "tags"."name" FROM "tags" WHERE ((("tags"."id") = ("tag_count_1"."tag_id"))) LIMIT ('20') :: integer) AS "tags_2") AS "__sr_2") AS "__sj_2") AS "__sj_2" ON ('true')) AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/withCursor
-SELECT jsonb_build_object('products', "__sj_0"."json", 'products_cursor', "__sj_0"."cursor") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json", CONCAT_WS(',', max("__cur_0"), max("__cur_1")) as "cursor" FROM (SELECT to_jsonb("__sr_0") - '__cur_0' - '__cur_1' AS "json", "__cur_0", "__cur_1"FROM (SELECT "products_0"."name" AS "name", LAST_VALUE("products_0"."price") OVER() AS "__cur_0", LAST_VALUE("products_0"."id") OVER() AS "__cur_1" FROM (WITH "__cur" AS (SELECT a[1] as "price", a[2] as "id" FROM string_to_array('{{cursor}}', ',') as a) SELECT "products"."name", "products"."id", "products"."price" FROM "products", "__cur" WHERE (((("products"."price") < "__cur"."price" :: numeric(7,2)) OR ((("products"."price") = "__cur"."price" :: numeric(7,2)) AND (("products"."id") > "__cur"."id" :: bigint)))) ORDER BY "products"."price" DESC, "products"."id" ASC LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json", 'products_cursor', "__sj_0"."cursor") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json", CONCAT_WS(',', max("__cur_0"), max("__cur_1")) as "cursor" FROM (SELECT to_jsonb("__sr_0".*) - '__cur_0' - '__cur_1' AS "json", "__cur_0", "__cur_1"FROM (SELECT "products_0"."name" AS "name", LAST_VALUE("products_0"."price") OVER() AS "__cur_0", LAST_VALUE("products_0"."id") OVER() AS "__cur_1" FROM (WITH "__cur" AS (SELECT a[1] as "price", a[2] as "id" FROM string_to_array('{{cursor}}', ',') as a) SELECT "products"."name", "products"."id", "products"."price" FROM "products", "__cur" WHERE (((("products"."price") < "__cur"."price" :: numeric(7,2)) OR ((("products"."price") = "__cur"."price" :: numeric(7,2)) AND (("products"."id") > "__cur"."id" :: bigint)))) ORDER BY "products"."price" DESC, "products"."id" ASC LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/nullForAuthRequiredInAnon
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", NULL AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", NULL AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 === RUN TestCompileQuery/blockedQuery
-SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE (false) LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
+SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE (false) LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
 === RUN TestCompileQuery/blockedFunctions
-SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."email" AS "email" FROM (SELECT , "users"."email" FROM "users" WHERE (false) GROUP BY "users"."email" LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."email" AS "email" FROM (SELECT , "users"."email" FROM "users" WHERE (false) GROUP BY "users"."email" LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
 --- PASS: TestCompileQuery (0.02s)
 --- PASS: TestCompileQuery/withComplexArgs (0.00s)
 --- PASS: TestCompileQuery/withWhereAndList (0.00s)
@@ -121,23 +121,23 @@ SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coa
 --- PASS: TestCompileQuery/blockedFunctions (0.00s)
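Every query hunk above is the same one-token change: to_jsonb("__sr_0") becomes to_jsonb("__sr_0".*), and likewise for the nested "__sr_1", "__sr_2", ... aliases. On Postgres both spellings serialize the subquery row to the same jsonb value; the ".*" form just names the row expansion explicitly. A standalone illustration (the one-column subquery is hypothetical):

  -- both statements return {"id": 1} on Postgres
  SELECT to_jsonb("__sr_0") AS "json" FROM (SELECT 1 AS "id") AS "__sr_0";
  SELECT to_jsonb("__sr_0".*) AS "json" FROM (SELECT 1 AS "id") AS "__sr_0";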
 === RUN TestCompileUpdate
 === RUN TestCompileUpdate/singleUpdate
-WITH "_sg_input" AS (SELECT '{{update}}' :: json AS j), "products" AS (UPDATE "products" SET ("name", "description") = (SELECT "t"."name", "t"."description" FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t) WHERE ((("products"."id") = '1' :: bigint) AND (("products"."id") = '{{id}}' :: bigint)) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT '{{update}}' :: json AS j), "products" AS (UPDATE "products" SET ("name", "description") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i) WHERE ((("products"."id") = '1' :: bigint) AND (("products"."id") = '{{id}}' :: bigint)) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
 === RUN TestCompileUpdate/simpleUpdateWithPresets
-WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (UPDATE "products" SET ("name", "price", "updated_at") = (SELECT "t"."name", "t"."price", 'now' :: timestamp without time zone FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t) WHERE (("products"."user_id") = '{{user_id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id" FROM (SELECT "products"."id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (UPDATE "products" SET ("name", "price", "updated_at") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), 'now' :: timestamp without time zone FROM "_sg_input" i) WHERE (("products"."user_id") = '{{user_id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id" FROM (SELECT "products"."id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
 === RUN TestCompileUpdate/nestedUpdateManyToMany
-WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "purchases" AS (UPDATE "purchases" SET ("sale_type", "quantity", "due_date") = (SELECT "t"."sale_type", "t"."quantity", "t"."due_date" FROM "_sg_input" i, json_populate_record(NULL::purchases, i.j) t) WHERE (("purchases"."id") = '{{id}}' :: bigint) RETURNING "purchases".*), "products" AS (UPDATE "products" SET ("name", "price") = (SELECT "t"."name", "t"."price" FROM "_sg_input" i, json_populate_record(NULL::products, i.j->'product') t) FROM "purchases" WHERE (("products"."id") = ("purchases"."product_id")) RETURNING "products".*), "customers" AS (UPDATE "customers" SET ("full_name", "email") = (SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::customers, i.j->'customer') t) FROM "purchases" WHERE (("customers"."id") = ("purchases"."customer_id")) RETURNING "customers".*) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "purchases" AS (UPDATE "purchases" SET ("sale_type", "quantity", "due_date") = (SELECT CAST( i.j ->>'sale_type' AS character varying), CAST( i.j ->>'quantity' AS integer), CAST( i.j ->>'due_date' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("purchases"."id") = '{{id}}' :: bigint) RETURNING "purchases".*), "products" AS (UPDATE "products" SET ("name", "price") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)) FROM "_sg_input" i) FROM "purchases" WHERE (("products"."id") = ("purchases"."product_id")) RETURNING "products".*), "customers" AS (UPDATE "customers" SET ("full_name", "email") = (SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i) FROM "purchases" WHERE (("customers"."id") = ("purchases"."customer_id")) RETURNING "customers".*) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
-WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "purchases" AS (UPDATE "purchases" SET ("sale_type", "quantity", "due_date") = (SELECT "t"."sale_type", "t"."quantity", "t"."due_date" FROM "_sg_input" i, json_populate_record(NULL::purchases, i.j) t) WHERE (("purchases"."id") = '{{id}}' :: bigint) RETURNING "purchases".*), "customers" AS (UPDATE "customers" SET ("full_name", "email") = (SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::customers, i.j->'customer') t) FROM "purchases" WHERE (("customers"."id") = ("purchases"."customer_id")) RETURNING "customers".*), "products" AS (UPDATE "products" SET ("name", "price") = (SELECT "t"."name", "t"."price" FROM "_sg_input" i, json_populate_record(NULL::products, i.j->'product') t) FROM "purchases" WHERE (("products"."id") = ("purchases"."product_id")) RETURNING "products".*) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "purchases" AS (UPDATE "purchases" SET ("sale_type", "quantity", "due_date") = (SELECT CAST( i.j ->>'sale_type' AS character varying), CAST( i.j ->>'quantity' AS integer), CAST( i.j ->>'due_date' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("purchases"."id") = '{{id}}' :: bigint) RETURNING "purchases".*), "customers" AS (UPDATE "customers" SET ("full_name", "email") = (SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i) FROM "purchases" WHERE (("customers"."id") = ("purchases"."customer_id")) RETURNING "customers".*), "products" AS (UPDATE "products" SET ("name", "price") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)) FROM "_sg_input" i) FROM "purchases" WHERE (("products"."id") = ("purchases"."product_id")) RETURNING "products".*) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
 === RUN TestCompileUpdate/nestedUpdateOneToMany
-WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (UPDATE "users" SET ("full_name", "email", "created_at", "updated_at") = (SELECT "t"."full_name", "t"."email", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t) WHERE (("users"."id") = '8' :: bigint) RETURNING "users".*), "products" AS (UPDATE "products" SET ("name", "price", "created_at", "updated_at") = (SELECT "t"."name", "t"."price", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::products, i.j->'product') t) FROM "users" WHERE (("products"."user_id") = ("users"."id") AND "products"."id"= ((i.j->'product'->'where'->>'id'))::bigint) RETURNING "products".*) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (UPDATE "users" SET ("full_name", "email", "created_at", "updated_at") = (SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("users"."id") = '8' :: bigint) RETURNING "users".*), "products" AS (UPDATE "products" SET ("name", "price", "created_at", "updated_at") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i) FROM "users" WHERE (("products"."user_id") = ("users"."id") AND "products"."id"= ((i.j->'product'->'where'->>'id'))::bigint) RETURNING "products".*) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
 === RUN TestCompileUpdate/nestedUpdateOneToOne
-WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (UPDATE "products" SET ("name", "price", "created_at", "updated_at") = (SELECT "t"."name", "t"."price", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t) WHERE (("products"."id") = '{{id}}' :: bigint) RETURNING "products".*), "users" AS (UPDATE "users" SET ("email") = (SELECT "t"."email" FROM "_sg_input" i, json_populate_record(NULL::users, i.j->'user') t) FROM "products" WHERE (("users"."id") = ("products"."user_id")) RETURNING "users".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (UPDATE "products" SET ("name", "price", "created_at", "updated_at") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("products"."id") = '{{id}}' :: bigint) RETURNING "products".*), "users" AS (UPDATE "users" SET ("email") = (SELECT CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i) FROM "products" WHERE (("users"."id") = ("products"."user_id")) RETURNING "users".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
 === RUN TestCompileUpdate/nestedUpdateOneToManyWithConnect
-WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (UPDATE "users" SET ("full_name", "email", "created_at", "updated_at") = (SELECT "t"."full_name", "t"."email", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t) WHERE (("users"."id") = '{{id}}' :: bigint) RETURNING "users".*), "products_c" AS ( UPDATE "products" SET "user_id" = "users"."id" FROM "users" WHERE ("products"."id"= ((i.j->'product'->'connect'->>'id'))::bigint) RETURNING "products".*), "products_d" AS ( UPDATE "products" SET "user_id" = NULL FROM "users" WHERE ("products"."id"= ((i.j->'product'->'disconnect'->>'id'))::bigint) RETURNING "products".*), "products" AS (SELECT * FROM "products_c" UNION ALL SELECT * FROM "products_d") SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (UPDATE "users" SET ("full_name", "email", "created_at", "updated_at") = (SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("users"."id") = '{{id}}' :: bigint) RETURNING "users".*), "products_c" AS ( UPDATE "products" SET "user_id" = "users"."id" FROM "users" WHERE ("products"."id"= ((i.j->'product'->'connect'->>'id'))::bigint) RETURNING "products".*), "products_d" AS ( UPDATE "products" SET "user_id" = NULL FROM "users" WHERE ("products"."id"= ((i.j->'product'->'disconnect'->>'id'))::bigint) RETURNING "products".*), "products" AS (SELECT * FROM "products_c" UNION ALL SELECT * FROM "products_d") SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
 === RUN TestCompileUpdate/nestedUpdateOneToOneWithConnect
-WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint AND "users"."email"= ((i.j->'user'->'connect'->>'email'))::character varying LIMIT 1), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT "t"."name", "t"."price", "_x_users"."id" FROM "_sg_input" i, "_x_users", json_populate_record(NULL::products, i.j) t) WHERE (("products"."id") = '{{product_id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint AND "users"."email"= ((i.j->'user'->'connect'->>'email'))::character varying LIMIT 1), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), "_x_users"."id" FROM "_sg_input" i, "_x_users") WHERE (("products"."id") = '{{product_id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
-WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."email"= ((i.j->'user'->'connect'->>'email'))::character varying AND "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint LIMIT 1), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT "t"."name", "t"."price", "_x_users"."id" FROM "_sg_input" i, "_x_users", json_populate_record(NULL::products, i.j) t) WHERE (("products"."id") = '{{product_id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."email"= ((i.j->'user'->'connect'->>'email'))::character varying AND "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint LIMIT 1), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), "_x_users"."id" FROM "_sg_input" i, "_x_users") WHERE (("products"."id") = '{{product_id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateOneToOneWithDisconnect === RUN TestCompileUpdate/nestedUpdateOneToOneWithDisconnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT * FROM (VALUES(NULL::bigint)) AS LOOKUP("id")), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT "t"."name", "t"."price", "_x_users"."id" FROM "_sg_input" i, "_x_users", json_populate_record(NULL::products, i.j) t) WHERE (("products"."id") = '{{id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."user_id" AS "user_id" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT * FROM (VALUES(NULL::bigint)) AS LOOKUP("id")), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), "_x_users"."id" FROM "_sg_input" i, "_x_users") WHERE (("products"."id") = '{{id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."user_id" AS "user_id" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
--- PASS: TestCompileUpdate (0.02s) --- PASS: TestCompileUpdate (0.02s)
--- PASS: TestCompileUpdate/singleUpdate (0.00s) --- PASS: TestCompileUpdate/singleUpdate (0.00s)
--- PASS: TestCompileUpdate/simpleUpdateWithPresets (0.00s) --- PASS: TestCompileUpdate/simpleUpdateWithPresets (0.00s)
@ -148,4 +148,4 @@ WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT * FR
--- PASS: TestCompileUpdate/nestedUpdateOneToOneWithConnect (0.00s) --- PASS: TestCompileUpdate/nestedUpdateOneToOneWithConnect (0.00s)
--- PASS: TestCompileUpdate/nestedUpdateOneToOneWithDisconnect (0.00s) --- PASS: TestCompileUpdate/nestedUpdateOneToOneWithDisconnect (0.00s)
PASS PASS
ok github.com/dosco/super-graph/core/internal/psql 0.320s ok github.com/dosco/super-graph/core/internal/psql 0.306s

View File

@ -91,25 +91,9 @@ func (c *compilerContext) renderUpdateStmt(w io.Writer, qc *qcode.QCode, item re
renderInsertUpdateColumns(w, qc, jt, ti, sk, true) renderInsertUpdateColumns(w, qc, jt, ti, sk, true)
renderNestedUpdateRelColumns(w, item.kvitem, true) renderNestedUpdateRelColumns(w, item.kvitem, true)
io.WriteString(w, ` FROM "_sg_input" i, `) io.WriteString(w, ` FROM "_sg_input" i`)
renderNestedUpdateRelTables(w, item.kvitem) renderNestedUpdateRelTables(w, item.kvitem)
io.WriteString(w, `) `)
if item.array {
io.WriteString(w, `json_populate_recordset`)
} else {
io.WriteString(w, `json_populate_record`)
}
io.WriteString(w, `(NULL::`)
io.WriteString(w, ti.Name)
if len(item.path) == 0 {
io.WriteString(w, `, i.j) t)`)
} else {
io.WriteString(w, `, i.j->`)
joinPath(w, item.path)
io.WriteString(w, `) t) `)
}
if item.id != 0 { if item.id != 0 {
// Render sql to set id values if child-to-parent // Render sql to set id values if child-to-parent
@ -137,11 +121,13 @@ func (c *compilerContext) renderUpdateStmt(w io.Writer, qc *qcode.QCode, item re
io.WriteString(w, `)`) io.WriteString(w, `)`)
} else { } else {
if qc.Selects[0].Where != nil {
io.WriteString(w, ` WHERE `) io.WriteString(w, ` WHERE `)
if err := c.renderWhere(&qc.Selects[0], ti); err != nil { if err := c.renderWhere(&qc.Selects[0], ti); err != nil {
return err return err
} }
} }
}
io.WriteString(w, ` RETURNING `) io.WriteString(w, ` RETURNING `)
quoted(w, ti.Name) quoted(w, ti.Name)
@ -202,9 +188,9 @@ func renderNestedUpdateRelTables(w io.Writer, item kvitem) error {
// relationship is one-to-many // relationship is one-to-many
for _, v := range item.items { for _, v := range item.items {
if v._ctype > 0 && v.relCP.Type == RelOneToMany { if v._ctype > 0 && v.relCP.Type == RelOneToMany {
io.WriteString(w, `"_x_`) io.WriteString(w, `, "_x_`)
io.WriteString(w, v.relCP.Left.Table) io.WriteString(w, v.relCP.Left.Table)
io.WriteString(w, `", `) io.WriteString(w, `"`)
} }
} }

View File

@ -1,4 +1,4 @@
package psql package psql_test
import ( import (
"encoding/json" "encoding/json"

View File

@ -0,0 +1,17 @@
goos: darwin
goarch: amd64
pkg: github.com/dosco/super-graph/core/internal/qcode
BenchmarkQCompile
BenchmarkQCompile-16 129614 8649 ns/op 3756 B/op 28 allocs/op
BenchmarkQCompileP
BenchmarkQCompileP-16 487488 2525 ns/op 3792 B/op 28 allocs/op
BenchmarkParse
BenchmarkParse-16 127582 8731 ns/op 3902 B/op 18 allocs/op
BenchmarkParseP
BenchmarkParseP-16 561373 2223 ns/op 3903 B/op 18 allocs/op
BenchmarkSchemaParse
BenchmarkSchemaParse-16 209142 5523 ns/op 3968 B/op 57 allocs/op
BenchmarkSchemaParseP
BenchmarkSchemaParseP-16 716437 1734 ns/op 3968 B/op 57 allocs/op
PASS
ok github.com/dosco/super-graph/core/internal/qcode 8.483s

View File

@ -8,6 +8,7 @@ import (
type Config struct { type Config struct {
Blocklist []string Blocklist []string
DefaultBlock bool
} }
type QueryConfig struct { type QueryConfig struct {

View File

@ -602,7 +602,7 @@ func (t parserType) String() string {
// nodePool.Put(n) // nodePool.Put(n)
// freeList = append(freeList, Frees{n, loc}) // freeList = append(freeList, Frees{n, loc})
// } else { // } else {
// fmt.Printf(">>>>(%d) RE_FREE %d %p %s %s\n", loc, freeList[j].loc, freeList[j].n, n.Name, n.Type) // fmt.Printf("(%d) RE_FREE %d %p %s %s\n", loc, freeList[j].loc, freeList[j].n, n.Name, n.Type)
// } // }
// } // }

View File

@ -2,6 +2,7 @@ package qcode
import ( import (
"errors" "errors"
"github.com/chirino/graphql/schema"
"testing" "testing"
) )
@ -130,7 +131,7 @@ updateThread {
} }
var gql = []byte(` var gql = []byte(`
products( {products(
# returns only 30 items # returns only 30 items
limit: 30, limit: 30,
@ -148,7 +149,7 @@ var gql = []byte(`
id id
name name
price price
}`) }}`)
func BenchmarkQCompile(b *testing.B) { func BenchmarkQCompile(b *testing.B) {
qcompile, _ := NewCompiler(Config{}) qcompile, _ := NewCompiler(Config{})
@ -181,3 +182,59 @@ func BenchmarkQCompileP(b *testing.B) {
} }
}) })
} }
func BenchmarkParse(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for n := 0; n < b.N; n++ {
_, err := Parse(gql)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkParseP(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
_, err := Parse(gql)
if err != nil {
b.Fatal(err)
}
}
})
}
func BenchmarkSchemaParse(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for n := 0; n < b.N; n++ {
doc := schema.QueryDocument{}
err := doc.Parse(string(gql))
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkSchemaParseP(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
doc := schema.QueryDocument{}
err := doc.Parse(string(gql))
if err != nil {
b.Fatal(err)
}
}
})
}

View File

@ -170,6 +170,7 @@ const (
) )
type Compiler struct { type Compiler struct {
db bool // default block tables if not defined in anon role
tr map[string]map[string]*trval tr map[string]map[string]*trval
bl map[string]struct{} bl map[string]struct{}
} }
@ -179,7 +180,7 @@ var expPool = sync.Pool{
} }
func NewCompiler(c Config) (*Compiler, error) { func NewCompiler(c Config) (*Compiler, error) {
co := &Compiler{} co := &Compiler{db: c.DefaultBlock}
co.tr = make(map[string]map[string]*trval) co.tr = make(map[string]map[string]*trval)
co.bl = make(map[string]struct{}, len(c.Blocklist)) co.bl = make(map[string]struct{}, len(c.Blocklist))
@ -413,12 +414,12 @@ func (com *Compiler) compileQuery(qc *QCode, op *Operation, role string) error {
func (com *Compiler) AddFilters(qc *QCode, sel *Select, role string) { func (com *Compiler) AddFilters(qc *QCode, sel *Select, role string) {
var fil *Exp var fil *Exp
var nu bool var nu bool // user required (or not) in this filter
if trv, ok := com.tr[role][sel.Name]; ok { if trv, ok := com.tr[role][sel.Name]; ok {
fil, nu = trv.filter(qc.Type) fil, nu = trv.filter(qc.Type)
} else if role == "anon" { } else if com.db && role == "anon" {
// Tables not defined under the anon role will not be rendered // Tables not defined under the anon role will not be rendered
sel.SkipRender = true sel.SkipRender = true
} }
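For context, a minimal sketch of how the new DefaultBlock flag is meant to be set (hypothetical values; qcode lives under internal/, so this is illustrative rather than importable from outside the module):

package main

import (
	"log"

	"github.com/dosco/super-graph/core/internal/qcode"
)

func main() {
	// With DefaultBlock set, anon-role queries skip (SkipRender) any table
	// that was never configured for the anon role, instead of rendering it.
	qc, err := qcode.NewCompiler(qcode.Config{
		DefaultBlock: true,
		Blocklist:    []string{"ar_internal_metadata"}, // assumed example entry
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = qc
}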

View File

@ -1,12 +1,17 @@
package qcode package qcode
func GetQType(gql string) QType { func GetQType(gql string) QType {
ic := false
for i := range gql { for i := range gql {
b := gql[i] b := gql[i]
if b == '{' { switch {
case b == '#':
ic = true
case b == '\n':
ic = false
case !ic && b == '{':
return QTQuery return QTQuery
} case !ic && al(b):
if al(b) {
switch b { switch b {
case 'm', 'M': case 'm', 'M':
return QTMutation return QTMutation
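Pieced together, the comment-aware GetQType reads roughly as follows (a sketch reconstructed from this hunk and the tests below; the 'q'/'Q' branch and the -1 fallthrough are inferred from the tests, and al is assumed to be the package's existing letter check):

func GetQType(gql string) QType {
	ic := false // true while inside a '#' line comment
	for i := range gql {
		b := gql[i]
		switch {
		case b == '#':
			ic = true
		case b == '\n':
			ic = false
		case !ic && b == '{':
			return QTQuery
		case !ic && al(b):
			switch b {
			case 'm', 'M':
				return QTMutation
			case 'q', 'Q':
				return QTQuery
			default:
				return -1
			}
		}
	}
	return -1
}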

View File

@ -0,0 +1,50 @@
package qcode
import "testing"
func TestGetQType(t *testing.T) {
type args struct {
gql string
}
type ts struct {
name string
args args
want QType
}
tests := []ts{
ts{
name: "query",
args: args{gql: " query {"},
want: QTQuery,
},
ts{
name: "mutation",
args: args{gql: " mutation {"},
want: QTMutation,
},
ts{
name: "default query",
args: args{gql: " {"},
want: QTQuery,
},
ts{
name: "default query with comment",
args: args{gql: `# query is good
{`},
want: QTQuery,
},
ts{
name: "failed query with comment",
args: args{gql: `# query is good query {`},
want: -1,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := GetQType(tt.args.gql); got != tt.want {
t.Errorf("GetQType() = %v, want %v", got, tt.want)
}
})
}
}

490
core/introspec.go Normal file
View File

@ -0,0 +1,490 @@
package core
import (
"strings"
"github.com/chirino/graphql"
"github.com/chirino/graphql/resolvers"
"github.com/chirino/graphql/schema"
"github.com/dosco/super-graph/core/internal/psql"
)
var typeMap map[string]string = map[string]string{
"smallint": "Int",
"integer": "Int",
"bigint": "Int",
"smallserial": "Int",
"serial": "Int",
"bigserial": "Int",
"decimal": "Float",
"numeric": "Float",
"real": "Float",
"double precision": "Float",
"money": "Float",
"boolean": "Boolean",
}
func (sg *SuperGraph) initGraphQLEgine() error {
engine := graphql.New()
engineSchema := engine.Schema
dbSchema := sg.schema
if err := engineSchema.Parse(`enum OrderDirection { asc desc }`); err != nil {
return err
}
gqltype := func(col psql.DBColumn) schema.Type {
typeName := typeMap[strings.ToLower(col.Type)]
if typeName == "" {
typeName = "String"
}
var t schema.Type = &schema.TypeName{Name: typeName}
if col.NotNull {
t = &schema.NonNull{OfType: t}
}
return t
}
query := &schema.Object{
Name: "Query",
Fields: schema.FieldList{},
}
mutation := &schema.Object{
Name: "Mutation",
Fields: schema.FieldList{},
}
engineSchema.Types[query.Name] = query
engineSchema.Types[mutation.Name] = mutation
engineSchema.EntryPoints[schema.Query] = query
engineSchema.EntryPoints[schema.Mutation] = mutation
//validGraphQLIdentifierRegex := regexp.MustCompile(`^[A-Za-z_][A-Za-z_0-9]*$`)
scalarExpressionTypesNeeded := map[string]bool{}
tableNames := dbSchema.GetTableNames()
funcs := dbSchema.GetFunctions()
for _, table := range tableNames {
ti, err := dbSchema.GetTable(table)
if err != nil {
return err
}
if !ti.IsSingular {
continue
}
singularName := ti.Singular
// if !validGraphQLIdentifierRegex.MatchString(singularName) {
// return errors.New("table name is not a valid GraphQL identifier: " + singularName)
// }
pluralName := ti.Plural
// if !validGraphQLIdentifierRegex.MatchString(pluralName) {
// return errors.New("table name is not a valid GraphQL identifier: " + pluralName)
// }
outputType := &schema.Object{
Name: singularName + "Output",
Fields: schema.FieldList{},
}
engineSchema.Types[outputType.Name] = outputType
inputType := &schema.InputObject{
Name: singularName + "Input",
Fields: schema.InputValueList{},
}
engineSchema.Types[inputType.Name] = inputType
orderByType := &schema.InputObject{
Name: singularName + "OrderBy",
Fields: schema.InputValueList{},
}
engineSchema.Types[orderByType.Name] = orderByType
expressionTypeName := singularName + "Expression"
expressionType := &schema.InputObject{
Name: expressionTypeName,
Fields: schema.InputValueList{
&schema.InputValue{
Name: "and",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: expressionTypeName}},
},
&schema.InputValue{
Name: "or",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: expressionTypeName}},
},
&schema.InputValue{
Name: "not",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: expressionTypeName}},
},
},
}
engineSchema.Types[expressionType.Name] = expressionType
for _, col := range ti.Columns {
colName := col.Name
// if !validGraphQLIdentifierRegex.MatchString(colName) {
// return errors.New("column name is not a valid GraphQL identifier: " + colName)
// }
colType := gqltype(col)
nullableColType := ""
if x, ok := colType.(*schema.NonNull); ok {
nullableColType = x.OfType.(*schema.TypeName).Name
} else {
nullableColType = colType.(*schema.TypeName).Name
}
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: colName,
Type: colType,
})
for _, f := range funcs {
if col.Type != f.Params[0].Type {
continue
}
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: f.Name + "_" + colName,
Type: colType,
})
}
// If it's a numeric type...
if nullableColType == "Float" || nullableColType == "Int" {
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "avg_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "count_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "max_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "min_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "stddev_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "stddev_pop_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "stddev_samp_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "variance_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "var_pop_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "var_samp_" + colName,
Type: colType,
})
}
inputType.Fields = append(inputType.Fields, &schema.InputValue{
Name: colName,
Type: colType,
})
orderByType.Fields = append(orderByType.Fields, &schema.InputValue{
Name: colName,
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "OrderDirection"}},
})
scalarExpressionTypesNeeded[nullableColType] = true
expressionType.Fields = append(expressionType.Fields, &schema.InputValue{
Name: colName,
Type: &schema.NonNull{OfType: &schema.TypeName{Name: nullableColType + "Expression"}},
})
}
outputTypeName := &schema.TypeName{Name: outputType.Name}
inputTypeName := &schema.TypeName{Name: inputType.Name}
pluralOutputTypeName := &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: outputType.Name}}}}
pluralInputTypeName := &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: inputType.Name}}}}
args := schema.InputValueList{
&schema.InputValue{
Desc: schema.Description{Text: "To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."},
Name: "order_by",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: orderByType.Name}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "where",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: expressionType.Name}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "limit",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Int"}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "offset",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Int"}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "first",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Int"}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "last",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Int"}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "before",
Type: &schema.TypeName{Name: "String"},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "after",
Type: &schema.TypeName{Name: "String"},
},
}
if ti.PrimaryCol != nil {
t := gqltype(*ti.PrimaryCol)
if _, ok := t.(*schema.NonNull); !ok {
t = &schema.NonNull{OfType: t}
}
args = append(args, &schema.InputValue{
Desc: schema.Description{Text: "Finds the record by the primary key"},
Name: "id",
Type: t,
})
}
if ti.TSVCol != nil {
args = append(args, &schema.InputValue{
Desc: schema.Description{Text: "Performs full text search using a TSV index"},
Name: "search",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
})
}
query.Fields = append(query.Fields, &schema.Field{
Desc: schema.Description{Text: ""},
Name: singularName,
Type: outputTypeName,
Args: args,
})
query.Fields = append(query.Fields, &schema.Field{
Desc: schema.Description{Text: ""},
Name: pluralName,
Type: pluralOutputTypeName,
Args: args,
})
mutationArgs := append(args, schema.InputValueList{
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "insert",
Type: inputTypeName,
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "update",
Type: inputTypeName,
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "upsert",
Type: inputTypeName,
},
}...)
mutation.Fields = append(mutation.Fields, &schema.Field{
Name: singularName,
Args: mutationArgs,
Type: outputType,
})
mutation.Fields = append(mutation.Fields, &schema.Field{
Name: pluralName,
Args: append(mutationArgs, schema.InputValueList{
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "inserts",
Type: pluralInputTypeName,
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "updates",
Type: pluralInputTypeName,
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "upserts",
Type: pluralInputTypeName,
},
}...),
Type: outputType,
})
}
for typeName := range scalarExpressionTypesNeeded {
expressionType := &schema.InputObject{
Name: typeName + "Expression",
Fields: schema.InputValueList{
&schema.InputValue{
Name: "eq",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "equals",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "neq",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "not_equals",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "gt",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "greater_than",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "lt",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "lesser_than",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "gte",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "greater_or_equals",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "lte",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "lesser_or_equals",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "in",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "nin",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "not_in",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "like",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "nlike",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "not_like",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "ilike",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "nilike",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "not_ilike",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "similar",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "nsimilar",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "not_similar",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "has_key",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "has_key_any",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "has_key_all",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "contains",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "contained_in",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "is_null",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Boolean"}},
},
},
}
engineSchema.Types[expressionType.Name] = expressionType
}
if err := engineSchema.ResolveTypes(); err != nil {
return err
}
engine.Resolver = resolvers.Func(func(request *resolvers.ResolveRequest, next resolvers.Resolution) resolvers.Resolution {
resolver := resolvers.MetadataResolver.Resolve(request, next)
if resolver != nil {
return resolver
}
resolver = resolvers.MethodResolver.Resolve(request, next) // needed by the MetadataResolver
if resolver != nil {
return resolver
}
return nil
})
sg.ge = engine
return nil
}
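As a quick illustration of the gqltype helper defined above (a sketch that assumes the imports already present in introspec.go; the column values are hypothetical, and unmapped Postgres types fall back to String):

// A NOT NULL Postgres bigint column maps to the non-null GraphQL Int.
col := psql.DBColumn{Name: "id", Type: "bigint", NotNull: true}
var t schema.Type = &schema.TypeName{Name: typeMap[strings.ToLower(col.Type)]} // "Int"
if col.NotNull {
	t = &schema.NonNull{OfType: t} // rendered as Int! in the introspected schema
}
_ = t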

View File

@ -3,7 +3,7 @@ package core
import ( import (
"bytes" "bytes"
"context" "context"
"crypto/sha1" "crypto/sha256"
"database/sql" "database/sql"
"encoding/hex" "encoding/hex"
"fmt" "fmt"
@ -23,17 +23,13 @@ type preparedItem struct {
roleArg bool roleArg bool
} }
var (
prepared map[string]*preparedItem
)
func (sg *SuperGraph) initPrepared() error { func (sg *SuperGraph) initPrepared() error {
ct := context.Background() ct := context.Background()
if sg.allowList.IsPersist() { if sg.allowList.IsPersist() {
return nil return nil
} }
prepared = make(map[string]*preparedItem) sg.prepared = make(map[string]*preparedItem)
tx, err := sg.db.BeginTx(ct, nil) tx, err := sg.db.BeginTx(ct, nil)
if err != nil { if err != nil {
@ -62,21 +58,14 @@ func (sg *SuperGraph) initPrepared() error {
} }
err := sg.prepareStmt(v) err := sg.prepareStmt(v)
if err == nil { if err != nil {
sg.log.Printf("WRN %s: %v", v.Name, err)
} else {
success++ success++
continue }
} }
// if len(v.Vars) == 0 { sg.log.Printf("INF allow list: prepared %d / %d queries", success, len(list))
// logger.Warn().Err(err).Msg(v.Query)
// } else {
// logger.Warn().Err(err).Msgf("%s %s", v.Vars, v.Query)
// }
}
// logger.Info().
// Msgf("Registered %d of %d queries from allow.list as prepared statements",
// success, len(list))
return nil return nil
} }
@ -88,19 +77,12 @@ func (sg *SuperGraph) prepareStmt(item allow.Item) error {
qt := qcode.GetQType(query) qt := qcode.GetQType(query)
ct := context.Background() ct := context.Background()
tx, err := sg.db.BeginTx(ct, nil)
if err != nil {
return err
}
defer tx.Rollback() //nolint: errcheck
switch qt { switch qt {
case qcode.QTQuery: case qcode.QTQuery:
var stmts1 []stmt var stmts1 []stmt
var err error var err error
if sg.conf.IsABACEnabled() { if sg.abacEnabled {
stmts1, err = sg.buildMultiStmt(qb, vars) stmts1, err = sg.buildMultiStmt(qb, vars)
} else { } else {
stmts1, err = sg.buildRoleStmt(qb, vars, "user") stmts1, err = sg.buildRoleStmt(qb, vars, "user")
@ -112,12 +94,12 @@ func (sg *SuperGraph) prepareStmt(item allow.Item) error {
//logger.Debug().Msgf("Prepared statement 'query %s' (user)", item.Name) //logger.Debug().Msgf("Prepared statement 'query %s' (user)", item.Name)
err = sg.prepare(ct, tx, stmts1, stmtHash(item.Name, "user")) err = sg.prepare(ct, stmts1, stmtHash(item.Name, "user"))
if err != nil { if err != nil {
return err return err
} }
if sg.conf.IsAnonRoleDefined() { if sg.anonExists {
// logger.Debug().Msgf("Prepared statement 'query %s' (anon)", item.Name) // logger.Debug().Msgf("Prepared statement 'query %s' (anon)", item.Name)
stmts2, err := sg.buildRoleStmt(qb, vars, "anon") stmts2, err := sg.buildRoleStmt(qb, vars, "anon")
@ -128,7 +110,7 @@ func (sg *SuperGraph) prepareStmt(item allow.Item) error {
return err return err
} }
err = sg.prepare(ct, tx, stmts2, stmtHash(item.Name, "anon")) err = sg.prepare(ct, stmts2, stmtHash(item.Name, "anon"))
if err != nil { if err != nil {
return err return err
} }
@ -139,36 +121,29 @@ func (sg *SuperGraph) prepareStmt(item allow.Item) error {
// logger.Debug().Msgf("Prepared statement 'mutation %s' (%s)", item.Name, role.Name) // logger.Debug().Msgf("Prepared statement 'mutation %s' (%s)", item.Name, role.Name)
stmts, err := sg.buildRoleStmt(qb, vars, role.Name) stmts, err := sg.buildRoleStmt(qb, vars, role.Name)
if err == psql.ErrAllTablesSkipped {
if err != nil {
// if len(item.Vars) == 0 {
// logger.Warn().Err(err).Msg(item.Query)
// } else {
// logger.Warn().Err(err).Msgf("%s %s", item.Vars, item.Query)
// }
continue continue
} }
if err != nil {
return err
}
err = sg.prepare(ct, tx, stmts, stmtHash(item.Name, role.Name)) err = sg.prepare(ct, stmts, stmtHash(item.Name, role.Name))
if err != nil { if err != nil {
return err return err
} }
} }
} }
if err := tx.Commit(); err != nil {
return err
}
return nil return nil
} }
func (sg *SuperGraph) prepare(ct context.Context, tx *sql.Tx, st []stmt, key string) error { func (sg *SuperGraph) prepare(ct context.Context, st []stmt, key string) error {
finalSQL, am := processTemplate(st[0].sql) finalSQL, am := processTemplate(st[0].sql)
sd, err := tx.Prepare(finalSQL) sd, err := sg.db.Prepare(finalSQL)
if err != nil { if err != nil {
return err return fmt.Errorf("prepare failed: %v: %s", err, finalSQL)
} }
sg.prepared[key] = &preparedItem{ sg.prepared[key] = &preparedItem{
@ -184,7 +159,7 @@ func (sg *SuperGraph) prepare(ct context.Context, tx *sql.Tx, st []stmt, key str
func (sg *SuperGraph) prepareRoleStmt(tx *sql.Tx) error { func (sg *SuperGraph) prepareRoleStmt(tx *sql.Tx) error {
var err error var err error
if !sg.conf.IsABACEnabled() { if !sg.abacEnabled {
return nil return nil
} }
@ -255,11 +230,18 @@ func (sg *SuperGraph) initAllowList() error {
var ac allow.Config var ac allow.Config
var err error var err error
if !sg.conf.Production { if len(sg.conf.AllowListFile) == 0 {
sg.conf.UseAllowList = false
sg.log.Printf("WRN allow list disabled no file specified")
}
// When the list is not enabled it is still created and
// new queries are saved to it.
if !sg.conf.UseAllowList {
ac = allow.Config{CreateIfNotExists: true, Persist: true} ac = allow.Config{CreateIfNotExists: true, Persist: true}
} }
sg.allowList, err = allow.New(sg.conf.ConfigPathUsed(), ac) sg.allowList, err = allow.New(sg.conf.AllowListFile, ac)
if err != nil { if err != nil {
return fmt.Errorf("failed to initialize allow list: %w", err) return fmt.Errorf("failed to initialize allow list: %w", err)
} }
@ -269,7 +251,7 @@ func (sg *SuperGraph) initAllowList() error {
// nolint: errcheck // nolint: errcheck
func stmtHash(name string, role string) string { func stmtHash(name string, role string) string {
h := sha1.New() h := sha256.New()
io.WriteString(h, strings.ToLower(name)) io.WriteString(h, strings.ToLower(name))
io.WriteString(h, role) io.WriteString(h, role)
return hex.EncodeToString(h.Sum(nil)) return hex.EncodeToString(h.Sum(nil))
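From the caller's side, the new allow-list wiring looks roughly like this (a sketch; it assumes AllowListFile and UseAllowList are exported fields on a core.Config type, as this diff suggests):

package main

import "github.com/dosco/super-graph/core"

func main() {
	conf := &core.Config{
		AllowListFile: "./config/allow.list", // empty logs a warning and disables the list
		UseAllowList:  true,                  // enforce saved queries (production)
	}
	_ = conf
}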

View File

@ -1,253 +1,249 @@
package core package core
// import ( import (
// "bytes" "bytes"
// "errors" "errors"
// "fmt" "fmt"
// "net/http" "net/http"
// "sync" "sync"
// "github.com/cespare/xxhash/v2" "github.com/cespare/xxhash/v2"
// "github.com/dosco/super-graph/jsn" "github.com/dosco/super-graph/core/internal/qcode"
// "github.com/dosco/super-graph/core/internal/qcode" "github.com/dosco/super-graph/jsn"
// ) )
// func execRemoteJoin(st *stmt, data []byte, hdr http.Header) ([]byte, error) { func (sg *SuperGraph) execRemoteJoin(st *stmt, data []byte, hdr http.Header) ([]byte, error) {
// var err error var err error
// if len(data) == 0 || st.skipped == 0 { sel := st.qc.Selects
// return data, nil h := xxhash.New()
// }
// sel := st.qc.Selects // fetch the field names used within the db response json
// h := xxhash.New() // that are used to mark insertion points and the mapping between
// those field names and their select objects
fids, sfmap := sg.parentFieldIds(h, sel, st.skipped)
// // fetch the field names used within the db response json // fetch the field values of the marked insertion points
// // that are used to mark insertion points and the mapping between // these values contain the id to be used with fetching remote data
// // those field names and their select objects from := jsn.Get(data, fids)
// fids, sfmap := parentFieldIds(h, sel, st.skipped) var to []jsn.Field
// // fetch the field values of the marked insertion points switch {
// // these values contain the id to be used with fetching remote data case len(from) == 1:
// from := jsn.Get(data, fids) to, err = sg.resolveRemote(hdr, h, from[0], sel, sfmap)
// var to []jsn.Field
// switch { case len(from) > 1:
// case len(from) == 1: to, err = sg.resolveRemotes(hdr, h, from, sel, sfmap)
// to, err = resolveRemote(hdr, h, from[0], sel, sfmap)
// case len(from) > 1: default:
// to, err = resolveRemotes(hdr, h, from, sel, sfmap) return nil, errors.New("something wrong no remote ids found in db response")
}
// default: if err != nil {
// return nil, errors.New("something wrong no remote ids found in db response") return nil, err
// } }
// if err != nil { var ob bytes.Buffer
// return nil, err
// }
// var ob bytes.Buffer err = jsn.Replace(&ob, data, from, to)
if err != nil {
return nil, err
}
// err = jsn.Replace(&ob, data, from, to) return ob.Bytes(), nil
// if err != nil { }
// return nil, err
// }
// return ob.Bytes(), nil func (sg *SuperGraph) resolveRemote(
// } hdr http.Header,
h *xxhash.Digest,
field jsn.Field,
sel []qcode.Select,
sfmap map[uint64]*qcode.Select) ([]jsn.Field, error) {
// func resolveRemote( // replacement data for the marked insertion points
// hdr http.Header, // key and value will be replaced by what's below
// h *xxhash.Digest, toA := [1]jsn.Field{}
// field jsn.Field, to := toA[:1]
// sel []qcode.Select,
// sfmap map[uint64]*qcode.Select) ([]jsn.Field, error) {
// // replacement data for the marked insertion points // use the json key to find the related Select object
// // key and value will be replaced by whats below k1 := xxhash.Sum64(field.Key)
// toA := [1]jsn.Field{}
// to := toA[:1]
// // use the json key to find the related Select object s, ok := sfmap[k1]
// k1 := xxhash.Sum64(field.Key) if !ok {
return nil, nil
}
p := sel[s.ParentID]
// s, ok := sfmap[k1] // then use the Table name in the Select and its parent
// if !ok { // to find the resolver to use for this relationship
// return nil, nil k2 := mkkey(h, s.Name, p.Name)
// }
// p := sel[s.ParentID]
// // then use the Table name in the Select and its parent r, ok := sg.rmap[k2]
// // to find the resolver to use for this relationship if !ok {
// k2 := mkkey(h, s.Name, p.Name) return nil, nil
}
// r, ok := rmap[k2] id := jsn.Value(field.Value)
// if !ok { if len(id) == 0 {
// return nil, nil return nil, nil
// } }
// id := jsn.Value(field.Value) //st := time.Now()
// if len(id) == 0 {
// return nil, nil
// }
// //st := time.Now() b, err := r.Fn(hdr, id)
if err != nil {
return nil, err
}
// b, err := r.Fn(hdr, id) if len(r.Path) != 0 {
// if err != nil { b = jsn.Strip(b, r.Path)
// return nil, err }
// }
// if len(r.Path) != 0 { var ob bytes.Buffer
// b = jsn.Strip(b, r.Path)
// }
// var ob bytes.Buffer if len(s.Cols) != 0 {
err = jsn.Filter(&ob, b, colsToList(s.Cols))
if err != nil {
return nil, err
}
// if len(s.Cols) != 0 { } else {
// err = jsn.Filter(&ob, b, colsToList(s.Cols)) ob.WriteString("null")
// if err != nil { }
// return nil, err
// }
// } else { to[0] = jsn.Field{Key: []byte(s.FieldName), Value: ob.Bytes()}
// ob.WriteString("null") return to, nil
// } }
// to[0] = jsn.Field{Key: []byte(s.FieldName), Value: ob.Bytes()} func (sg *SuperGraph) resolveRemotes(
// return to, nil hdr http.Header,
// } h *xxhash.Digest,
from []jsn.Field,
sel []qcode.Select,
sfmap map[uint64]*qcode.Select) ([]jsn.Field, error) {
// func resolveRemotes( // replacement data for the marked insertion points
// hdr http.Header, // key and value will be replaced by what's below
// h *xxhash.Digest, to := make([]jsn.Field, len(from))
// from []jsn.Field,
// sel []qcode.Select,
// sfmap map[uint64]*qcode.Select) ([]jsn.Field, error) {
// // replacement data for the marked insertion points var wg sync.WaitGroup
// // key and value will be replaced by whats below wg.Add(len(from))
// to := make([]jsn.Field, len(from))
// var wg sync.WaitGroup var cerr error
// wg.Add(len(from))
// var cerr error for i, id := range from {
// for i, id := range from { // use the json key to find the related Select object
k1 := xxhash.Sum64(id.Key)
// // use the json key to find the related Select object s, ok := sfmap[k1]
// k1 := xxhash.Sum64(id.Key) if !ok {
return nil, nil
}
p := sel[s.ParentID]
// s, ok := sfmap[k1] // then use the Table name in the Select and its parent
// if !ok { // to find the resolver to use for this relationship
// return nil, nil k2 := mkkey(h, s.Name, p.Name)
// }
// p := sel[s.ParentID]
// // then use the Table name in the Select and its parent r, ok := sg.rmap[k2]
// // to find the resolver to use for this relationship if !ok {
// k2 := mkkey(h, s.Name, p.Name) return nil, nil
}
// r, ok := rmap[k2] id := jsn.Value(id.Value)
// if !ok { if len(id) == 0 {
// return nil, nil return nil, nil
// } }
// id := jsn.Value(id.Value) go func(n int, id []byte, s *qcode.Select) {
// if len(id) == 0 { defer wg.Done()
// return nil, nil
// }
// go func(n int, id []byte, s *qcode.Select) { //st := time.Now()
// defer wg.Done()
// //st := time.Now() b, err := r.Fn(hdr, id)
if err != nil {
cerr = fmt.Errorf("%s: %s", s.Name, err)
return
}
// b, err := r.Fn(hdr, id) if len(r.Path) != 0 {
// if err != nil { b = jsn.Strip(b, r.Path)
// cerr = fmt.Errorf("%s: %s", s.Name, err) }
// return
// }
// if len(r.Path) != 0 { var ob bytes.Buffer
// b = jsn.Strip(b, r.Path)
// }
// var ob bytes.Buffer if len(s.Cols) != 0 {
err = jsn.Filter(&ob, b, colsToList(s.Cols))
if err != nil {
cerr = fmt.Errorf("%s: %s", s.Name, err)
return
}
// if len(s.Cols) != 0 { } else {
// err = jsn.Filter(&ob, b, colsToList(s.Cols)) ob.WriteString("null")
// if err != nil { }
// cerr = fmt.Errorf("%s: %s", s.Name, err)
// return
// }
// } else { to[n] = jsn.Field{Key: []byte(s.FieldName), Value: ob.Bytes()}
// ob.WriteString("null") }(i, id, s)
// } }
wg.Wait()
// to[n] = jsn.Field{Key: []byte(s.FieldName), Value: ob.Bytes()} return to, cerr
// }(i, id, s) }
// }
// wg.Wait()
// return to, cerr func (sg *SuperGraph) parentFieldIds(h *xxhash.Digest, sel []qcode.Select, skipped uint32) (
// } [][]byte,
map[uint64]*qcode.Select) {
// func parentFieldIds(h *xxhash.Digest, sel []qcode.Select, skipped uint32) ( c := 0
// [][]byte, for i := range sel {
// map[uint64]*qcode.Select) { s := &sel[i]
if isSkipped(skipped, uint32(s.ID)) {
c++
}
}
// c := 0 // list of keys (and its related value) to extract from
// for i := range sel { // the db json response
// s := &sel[i] fm := make([][]byte, c)
// if isSkipped(skipped, uint32(s.ID)) {
// c++
// }
// }
// // list of keys (and its related value) to extract from // mapping between the above extracted key and a Select
// // the db json response // object
// fm := make([][]byte, c) sm := make(map[uint64]*qcode.Select, c)
n := 0
// // mapping between the above extracted key and a Select for i := range sel {
// // object s := &sel[i]
// sm := make(map[uint64]*qcode.Select, c)
// n := 0
// for i := range sel { if !isSkipped(skipped, uint32(s.ID)) {
// s := &sel[i] continue
}
// if !isSkipped(skipped, uint32(s.ID)) { p := sel[s.ParentID]
// continue k := mkkey(h, s.Name, p.Name)
// }
// p := sel[s.ParentID] if r, ok := sg.rmap[k]; ok {
// k := mkkey(h, s.Name, p.Name) fm[n] = r.IDField
n++
// if r, ok := rmap[k]; ok { k := xxhash.Sum64(r.IDField)
// fm[n] = r.IDField sm[k] = s
// n++ }
}
// k := xxhash.Sum64(r.IDField) return fm, sm
// sm[k] = s }
// }
// }
// return fm, sm func isSkipped(n uint32, pos uint32) bool {
// } return ((n & (1 << pos)) != 0)
}
// func isSkipped(n uint32, pos uint32) bool { func colsToList(cols []qcode.Column) []string {
// return ((n & (1 << pos)) != 0) var f []string
// }
// func colsToList(cols []qcode.Column) []string { for i := range cols {
// var f []string f = append(f, cols[i].Name)
}
// for i := range cols { return f
// f = append(f, cols[i].Name) }
// }
// return f
// }
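For reference, the skipped bitmask consumed by parentFieldIds and isSkipped above marks which selects the SQL layer deferred to remote resolvers; a tiny hedged example with illustrative values:

skipped := uint32(1<<0 | 1<<2)   // selects 0 and 2 were skipped by the db query
remote0 := isSkipped(skipped, 0) // true: resolve select 0 via its remote
remote1 := isSkipped(skipped, 1) // false: select 1 came back from the db
_, _ = remote0, remote1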

View File

@ -6,93 +6,92 @@ import (
"net/http" "net/http"
"strings" "strings"
"github.com/dosco/super-graph/config" "github.com/cespare/xxhash/v2"
"github.com/dosco/super-graph/core/internal/psql"
"github.com/dosco/super-graph/jsn" "github.com/dosco/super-graph/jsn"
) )
var (
rmap map[uint64]*resolvFn
)
type resolvFn struct { type resolvFn struct {
IDField []byte IDField []byte
Path [][]byte Path [][]byte
Fn func(h http.Header, id []byte) ([]byte, error) Fn func(h http.Header, id []byte) ([]byte, error)
} }
// func initResolvers() { func (sg *SuperGraph) initResolvers() error {
// var err error var err error
// rmap = make(map[uint64]*resolvFn) sg.rmap = make(map[uint64]*resolvFn)
// for _, t := range conf.Tables { for _, t := range sg.conf.Tables {
// err = initRemotes(t) err = sg.initRemotes(t)
// if err != nil { if err != nil {
// break break
// } }
// } }
// if err != nil { if err != nil {
// errlog.Fatal().Err(err).Msg("failed to initialize resolvers") return fmt.Errorf("failed to initialize resolvers: %v", err)
// } }
// }
// func initRemotes(t config.Table) error { return nil
// h := xxhash.New() }
// for _, r := range t.Remotes { func (sg *SuperGraph) initRemotes(t Table) error {
// // defines the table column to be used as an id in the h := xxhash.New()
// // remote request
// idcol := r.ID
// // if no table column specified in the config then for _, r := range t.Remotes {
// // use the primary key of the table as the id // defines the table column to be used as an id in the
// if len(idcol) == 0 { // remote request
// pcol, err := pcompile.IDColumn(t.Name) idcol := r.ID
// if err != nil {
// return err
// }
// idcol = pcol.Key
// }
// idk := fmt.Sprintf("__%s_%s", t.Name, idcol)
// // register a relationship between the remote data // if no table column specified in the config then
// // and the database table // use the primary key of the table as the id
if len(idcol) == 0 {
pcol, err := sg.pc.IDColumn(t.Name)
if err != nil {
return err
}
idcol = pcol.Key
}
idk := fmt.Sprintf("__%s_%s", t.Name, idcol)
// val := &psql.DBRel{Type: psql.RelRemote} // register a relationship between the remote data
// val.Left.Col = idcol // and the database table
// val.Right.Col = idk
// err := pcompile.AddRelationship(strings.ToLower(r.Name), t.Name, val) val := &psql.DBRel{Type: psql.RelRemote}
// if err != nil { val.Left.Col = idcol
// return err val.Right.Col = idk
// }
// // the function that's called to resolve this remote err := sg.pc.AddRelationship(sanitize(r.Name), t.Name, val)
// // data request if err != nil {
// fn := buildFn(r) return err
}
// path := [][]byte{} // the function that's called to resolve this remote
// for _, p := range strings.Split(r.Path, ".") { // data request
// path = append(path, []byte(p)) fn := buildFn(r)
// }
// rf := &resolvFn{ path := [][]byte{}
// IDField: []byte(idk), for _, p := range strings.Split(r.Path, ".") {
// Path: path, path = append(path, []byte(p))
// Fn: fn, }
// }
// // index resolver obj by parent and child names rf := &resolvFn{
// rmap[mkkey(h, r.Name, t.Name)] = rf IDField: []byte(idk),
Path: path,
Fn: fn,
}
// // index resolver obj by IDField // index resolver obj by parent and child names
// rmap[xxhash.Sum64(rf.IDField)] = rf sg.rmap[mkkey(h, r.Name, t.Name)] = rf
// }
// return nil // index resolver obj by IDField
// } sg.rmap[xxhash.Sum64(rf.IDField)] = rf
}
func buildFn(r config.Remote) func(http.Header, []byte) ([]byte, error) { return nil
}
func buildFn(r Remote) func(http.Header, []byte) ([]byte, error) {
reqURL := strings.Replace(r.URL, "$id", "%s", 1) reqURL := strings.Replace(r.URL, "$id", "%s", 1)
client := &http.Client{} client := &http.Client{}
@ -115,16 +114,13 @@ func buildFn(r config.Remote) func(http.Header, []byte) ([]byte, error) {
req.Header.Set(v, hdr.Get(v)) req.Header.Set(v, hdr.Get(v))
} }
// logger.Debug().Str("uri", uri).Msg("Remote Join")
res, err := client.Do(req) res, err := client.Do(req)
if err != nil { if err != nil {
// errlog.Error().Err(err).Msgf("Failed to connect to: %s", uri) return nil, fmt.Errorf("failed to connect to '%s': %v", uri, err)
return nil, err
} }
defer res.Body.Close() defer res.Body.Close()
if r.Debug { // if r.Debug {
// reqDump, err := httputil.DumpRequestOut(req, true) // reqDump, err := httputil.DumpRequestOut(req, true)
// if err != nil { // if err != nil {
// return nil, err // return nil, err
@ -137,7 +133,7 @@ func buildFn(r config.Remote) func(http.Header, []byte) ([]byte, error) {
// logger.Debug().Msgf("Remote Request Debug:\n%s\n%s", // logger.Debug().Msgf("Remote Request Debug:\n%s\n%s",
// reqDump, resDump) // reqDump, resDump)
} // }
if res.StatusCode != 200 { if res.StatusCode != 200 {
return nil, return nil,

15
core/utils.go Normal file
View File

@ -0,0 +1,15 @@
package core
import (
"github.com/cespare/xxhash/v2"
)
// nolint: errcheck
func mkkey(h *xxhash.Digest, k1 string, k2 string) uint64 {
h.WriteString(k1)
h.WriteString(k2)
v := h.Sum64()
h.Reset()
return v
}
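A usage sketch matching remote.go above (table names hypothetical): because mkkey resets the digest after summing, one digest can be reused across lookups.

h := xxhash.New()
k := mkkey(h, "payments", "customers") // index a resolver by (child, parent) names
_ = k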

View File

@ -1,5 +1,5 @@
<template> <template>
<div class="shadow bg-white p-4 flex items-start" :class="className"> <div class="shadow p-4 flex items-start" :class="className">
<slot name="image"></slot> <slot name="image"></slot>
<div class="pl-4"> <div class="pl-4">
<h2 class="p-0"> <h2 class="p-0">

View File

@ -2,33 +2,33 @@
<div> <div>
<main aria-labelledby="main-title" > <main aria-labelledby="main-title" >
<Navbar /> <Navbar />
<div style="height: 3.6rem"></div>
<div class="container mx-auto mt-24"> <div class="container mx-auto pt-4">
<div class="text-center"> <div class="text-center">
<div class="text-center text-4xl text-gray-800 leading-tight"> <div class="text-center text-3xl md:text-4xl text-black leading-tight font-semibold">
Fetch data without code Fetch data without code
</div> </div>
<NavLink <NavLink
class="inline-block px-4 py-3 my-8 bg-blue-600 text-blue-100 font-bold rounded" class="inline-block px-4 py-3 my-8 bg-blue-600 text-white font-bold rounded"
:item="actionLink" :item="actionLink"
/> />
<a <a
class="px-4 py-3 my-8 border-2 border-gray-500 text-gray-600 font-bold rounded" class="px-4 py-3 my-8 border-2 border-blue-600 text-blue-600 font-bold rounded"
href="https://github.com/dosco/super-graph" href="https://github.com/dosco/super-graph"
target="_blank" target="_blank"
>Github</a> >Github</a>
</div> </div>
</div> </div>
<div class="container mx-auto mb-8">
<div class="container mx-auto mb-8 mt-0 md:mt-20 bg-green-100">
<div class="flex flex-wrap"> <div class="flex flex-wrap">
<div class="w-100 md:w-1/2 bg-indigo-300 text-indigo-800 text-lg p-4"> <div class="w-100 md:w-1/2 border border-green-500 text-gray-6 00 text-sm md:text-lg p-6">
<div class="text-center text-2xl font-bold pb-2">Before, struggle with SQL</div> <div class="text-xl font-bold pb-4">Before, struggle with SQL</div>
<pre> <pre>
type User struct { type User struct {
gorm.Model gorm.Model
Profile Profile Profile Profile
@ -50,8 +50,9 @@ db.Model(&user).
and more ... and more ...
</pre> </pre>
</div> </div>
<div class="w-100 md:w-1/2 bg-green-300 text-black text-lg p-4">
<div class="text-center text-2xl font-bold pb-2">With Super Graph, just ask.</div> <div class="w-100 md:w-1/2 border border-l md:border-l-0 border-green-500 text-blue-900 text-sm md:text-lg p-6">
<div class="text-xl font-bold pb-4">With Super Graph, just ask.</div>
<pre> <pre>
query { query {
user(id: 5) { user(id: 5) {
@ -59,26 +60,24 @@ query {
first_name first_name
last_name last_name
picture_url picture_url
}
posts(first: 20, order_by: { score: desc }) { posts(first: 20, order_by: { score: desc }) {
slug slug
title title
created_at created_at
cached_votes_total votes_total
vote(where: { user_id: { eq: $user_id } }) { votes { created_at }
id
}
author { id name } author { id name }
tags { id name } tags { id name }
} }
posts_cursor posts_cursor
}
} }
</pre> </pre>
</div> </div>
</div> </div>
</div> </div>
<div> <div class="mt-0 md:mt-20">
<div <div
class="flex flex-wrap mx-2 md:mx-20" class="flex flex-wrap mx-2 md:mx-20"
v-if="data.features && data.features.length" v-if="data.features && data.features.length"
@ -89,42 +88,29 @@ query {
:key="index" :key="index"
> >
<div class="p-8"> <div class="p-8">
<h2 class="md:text-xl text-blue-800 font-medium border-0 mb-1">{{ feature.title }}</h2> <h2 class="text-lg uppercase border-0">{{ feature.title }}</h2>
<p class="md:text-xl text-gray-700 leading-snug">{{ feature.details }}</p> <div class="text-xl text-gray-900 leading-snug">{{ feature.details }}</div>
</div> </div>
</div> </div>
</div> </div>
</div> </div>
<div class="bg-gray-100 mt-10"> <div class="pt-0 md:pt-20">
<div class="container mx-auto px-10 md:px-0 py-32"> <div class="container mx-auto p-10">
<div class="pb-8 hidden md:flex justify-center"> <div class="flex justify-center pb-20">
<img src="arch-basic.svg"> <img src="arch-basic.svg">
</div> </div>
<h1 class="uppercase font-semibold text-xl text-blue-800 text-center mb-4">
What is Super Graph?
</h1>
<div class="text-2xl md:text-3xl"> <div class="text-2xl md:text-3xl">
Super Graph is a library and service that fetches data from any Postgres database using just GraphQL. No more struggling with ORMs and SQL to wrangle data out of the database. No more having to figure out the right joins or making ineffiient queries. However complex the GraphQL, Super Graph will always generate just one single efficient SQL query. The goal is to save you time and money so you can focus on you're apps core value. Super Graph is a library and service that fetches data from any Postgres database using just GraphQL. No more struggling with ORMs and SQL to wrangle data out of the database. No more having to figure out the right joins or making inefficient queries. However complex the GraphQL, Super Graph will always generate just one single efficient SQL query. The goal is to save you time and money so you can focus on your app's core value.
</div> </div>
</div> </div>
</div> </div>
<div class="container mx-auto flex flex-wrap"> <div class="pt-20">
<div class="md:w-1/2">
<img src="/graphql.png">
</div>
<div class="md:w-1/2">
<img src="/json.png">
</div>
</div>
<div class="mt-10 py-10 md:py-20">
<div class="container mx-auto px-10 md:px-0"> <div class="container mx-auto px-10 md:px-0">
<h1 class="uppercase font-semibold text-2xl text-blue-800 text-center"> <h1 class="uppercase font-semibold text-2xl text-blue-800 text-center">
Try Super Graph Try Super Graph
@ -133,20 +119,18 @@ query {
<h1 class="uppercase font-semibold text-lg text-gray-800"> <h1 class="uppercase font-semibold text-lg text-gray-800">
Deploy as a service using docker Deploy as a service using docker
</h1> </h1>
<div class="bg-gray-800 text-indigo-300 p-4 rounded"> <div class="p-4 rounded bg-black text-white">
<pre>$ git clone https://github.com/dosco/super-graph && cd super-graph && make install</pre> <pre>$ git clone https://github.com/dosco/super-graph && cd super-graph && make install</pre>
<pre>$ super-graph new blog; cd blog</pre> <pre>$ super-graph new blog; cd blog</pre>
<pre>$ docker-compose run blog_api ./super-graph db:setup</pre> <pre>$ docker-compose run blog_api ./super-graph db:setup</pre>
<pre>$ docker-compose up</pre> <pre>$ docker-compose up</pre>
</div> </div>
<div class="border-t mt-4 pb-4"></div>
<h1 class="uppercase font-semibold text-lg text-gray-800"> <h1 class="uppercase font-semibold text-lg text-gray-800">
Or use it with your own code Or use it with your own code
</h1> </h1>
<div class="text-md"> <div class="text-md">
<pre class="bg-gray-800 text-indigo-300 p-4 rounded"> <pre class="p-4 rounded bg-black text-white">
package main package main
import ( import (
@ -161,17 +145,12 @@ import (
func main() { func main() {
db, err := sql.Open("pgx", "postgres://postgrs:@localhost:5432/example_db") db, err := sql.Open("pgx", "postgres://postgrs:@localhost:5432/example_db")
if err != nil { if err != nil {
log.Fatalf(err) log.Fatal(err)
} }
conf, err := config.NewConfig("./config") sg, err := core.NewSuperGraph(nil, db)
if err != nil { if err != nil {
log.Fatalf(err) log.Fatal(err)
}
sg, err = core.NewSuperGraph(conf, db)
if err != nil {
log.Fatalf(err)
} }
graphqlQuery := ` graphqlQuery := `
@ -184,7 +163,7 @@ func main() {
res, err := sg.GraphQL(context.Background(), graphqlQuery, nil) res, err := sg.GraphQL(context.Background(), graphqlQuery, nil)
if err != nil { if err != nil {
log.Fatalf(err) log.Fatal(err)
} }
fmt.Println(string(res.Data)) fmt.Println(string(res.Data))
@ -194,7 +173,7 @@ func main() {
</div> </div>
</div> </div>
<div class="bg-gray-100 mt-10"> <div class="pt-0 md:pt-20">
<div class="container mx-auto px-10 md:px-0 py-32"> <div class="container mx-auto px-10 md:px-0 py-32">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4"> <h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
The story of {{ data.heroText }} The story of {{ data.heroText }}
@ -245,7 +224,7 @@ func main() {
</div> </div>
--> -->
<div class="border-t py-10"> <div class="pt-0 md:pt-20">
<div class="block md:hidden w-100"> <div class="block md:hidden w-100">
<iframe src='https://www.youtube.com/embed/MfPL2A-DAJk' frameborder='0' allowfullscreen style="width: 100%; height: 250px;"> <iframe src='https://www.youtube.com/embed/MfPL2A-DAJk' frameborder='0' allowfullscreen style="width: 100%; height: 250px;">
</iframe> </iframe>
@ -267,7 +246,101 @@ func main() {
</div> </div>
</div> </div>
<div class="bg-gray-200 mt-10"> <div class="container mx-auto pt-0 md:pt-20">
<div class="flex flex-wrap bg-green-100">
<div class="w-100 md:w-1/2 border border-green-500 text-gray-6 00 text-sm md:text-lg p-6">
<div class="text-xl font-bold pb-4">No more joins joins, json, orms, just use GraphQL. Fetch all the data want in the structure you need.</div>
<pre>
query {
thread {
slug
title
published
createdAt : created_at
totalVotes : cached_votes_total
totalPosts : cached_posts_total
vote : thread_vote(where: { user_id: { eq: $user_id } }) {
created_at
}
topics {
slug
name
}
author : me {
slug
}
posts(first: 1, order_by: { score: desc }) {
slug
body
published
createdAt : created_at
totalVotes : cached_votes_total
totalComments : cached_comments_total
vote {
created_at
}
author : user {
slug
firstName : first_name
lastName : last_name
}
}
posts_cursor
}
}
</pre>
</div>
<div class="w-100 md:w-1/2 border border-l md:border-l-0 border-green-500 text-blue-900 text-sm md:text-lg p-6">
<div class="text-xl font-bold pb-4">Instant results using a single highly optimized SQL. It's just that simple.</div>
<pre>
{
"data": {
"thread": {
"slug": "eveniet-ex-24",
"vote": null,
"posts": [
{
"body": "Dolor laborum harum sed sit est ducimus temporibus velit non nobis repudiandae nobis suscipit commodi voluptatem debitis sed voluptas sequi officia.",
"slug": "illum-in-voluptas-1418",
"vote": null,
"author": {
"slug": "sigurd-kemmer",
"lastName": "Effertz",
"firstName": "Brandt"
},
"createdAt": "2020-04-07T04:22:42.115874+00:00",
"published": true,
"totalVotes": 0,
"totalComments": 2
}
],
"title": "In aut qui deleniti quia dolore quasi porro tenetur voluptatem ut adita alias fugit explicabo.",
"author": null,
"topics": [
{
"name": "CloudRun",
"slug": "cloud-run"
},
{
"name": "Postgres",
"slug": "postgres"
}
],
"createdAt": "2020-04-07T04:22:38.099482+00:00",
"published": true,
"totalPosts": 24,
"totalVotes": 0,
"posts_cursor": "mpeBl6L+QfJHc3cmLkLDj9pOdEZYTt5KQtLsazG3TLITB3hJhg=="
}
}
}
</pre>
</div>
</div>
</div>
<div class="pt-0 md:pt-20">
<div class="container mx-auto px-10 md:px-0 py-32"> <div class="container mx-auto px-10 md:px-0 py-32">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4"> <h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
Build Secure Apps Build Secure Apps
@ -292,8 +365,8 @@ func main() {
</div> </div>
</div> </div>
<div class=""> <div class="pt-0 md:py -20">
<div class="container mx-auto px-10 md:px-0 py-32"> <div class="container mx-auto">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4"> <h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
More Features More Features
</h1> </h1>


@ -1,6 +1,9 @@
@tailwind base; @tailwind base;
@css { @css {
body, .navbar, .navbar .links {
@apply bg-white text-black border-0 !important;
}
h1 { h1 {
@apply font-semibold text-3xl border-0 py-4 @apply font-semibold text-3xl border-0 py-4
} }


@ -10,7 +10,7 @@ longTagline: Get an instant high performance GraphQL API for Postgres. No code n
actionText: Get Started, Free, Open Source → actionText: Get Started, Free, Open Source →
actionLink: /guide actionLink: /guide
description: Super Graph can automatically learn a Postgres database and instantly serve it as a fast and secured GraphQL API. It comes with tools to create a new app and manage it's database. You get it all, a very productive developer and a highly scalable app backend. It's designed to work well on serverless platforms by Google, AWS, Microsoft, etc. The goal is to save you a ton of time and money so you can focus on you're apps core value. description: Super Graph can automatically learn a Postgres database and instantly serve it as a fast and secured GraphQL API. It comes with tools to create a new app and manage its database. You get it all, a very productive developer and a highly scalable app backend. It's designed to work well on serverless platforms by Google, AWS, Microsoft, etc. The goal is to save you a ton of time and money so you can focus on your app's core value.
features: features:
- title: Simple - title: Simple


@ -32,7 +32,7 @@ For this to work you have to ensure that the option `:domain => :all` is added t
### With an NGINX loadbalancer ### With an NGINX loadbalancer
If you're infrastructure is fronted by NGINX then it should be configured so that all requests to your GraphQL API path are proxyed to Super Graph. In the example NGINX config below all requests to the path `/api/v1/graphql` are routed to wherever you have Super Graph installed within your architecture. This example is derived from the config file example at [/microservices-nginx-gateway/nginx.conf](https://github.com/launchany/microservices-nginx-gateway/blob/master/nginx.conf) If your infrastructure is fronted by NGINX then it should be configured so that all requests to your GraphQL API path are proxied to Super Graph. In the example NGINX config below all requests to the path `/api/v1/graphql` are routed to wherever you have Super Graph installed within your architecture. This example is derived from the config file example at [/microservices-nginx-gateway/nginx.conf](https://github.com/launchany/microservices-nginx-gateway/blob/master/nginx.conf)
::: tip NGINX with sub-domain ::: tip NGINX with sub-domain
Yes, NGINX is very flexible and you can configure it to keep Super Graph on a subdomain instead of on the same top-level domain. I'm sure a little Googling will get you some great example configs for that. Yes, NGINX is very flexible and you can configure it to keep Super Graph on a subdomain instead of on the same top-level domain. I'm sure a little Googling will get you some great example configs for that.


@ -347,12 +347,10 @@ beer_style
beer_yeast beer_yeast
// Cars // Cars
vehicle car
vehicle_type car_type
car_maker car_maker
car_model car_model
fuel_type
transmission_gear_type
// Text // Text
word word
@ -438,8 +436,8 @@ hipster_paragraph
hipster_sentence hipster_sentence
// File // File
extension file_extension
mine_type file_mine_type
// Numbers // Numbers
number number
@ -463,11 +461,18 @@ mac_address
digit digit
letter letter
lexify lexify
rand_string
shuffle_strings shuffle_strings
numerify numerify
``` ```
Other utility functions
```
shuffle_strings(string_array)
make_slug(text)
make_slug_lang(text, lang)
```
### Migrations ### Migrations
Easy database migrations are the most important thing when building products backed by a relational database. We make it super easy to manage and migrate your database. Easy database migrations are the most important thing when building products backed by a relational database. We make it super easy to manage and migrate your database.
@ -725,6 +730,32 @@ query {
} }
``` ```
### Custom Functions
Any function defined in the database, like the `add_five` function below that adds 5 to any number given to it, can be used
within your query. The one limitation is that it must be a function that accepts only a single argument. The function is used within your GraphQL query in a similar way to how aggregations are used above. Example below
```graphql
query {
thread(id: 5) {
id
total_votes
add_five_total_votes
}
}
```
Postgres user-defined function `add_five`
```
CREATE OR REPLACE FUNCTION add_five(a integer) RETURNS integer AS $$
BEGIN
RETURN a + 5;
END;
$$ LANGUAGE plpgsql;
```
In GraphQL, mutations are the operation type for when you need to modify data. Super Graph supports `insert`, `update`, `upsert` and `delete` mutations. You can also do complex nested inserts and updates. In GraphQL, mutations are the operation type for when you need to modify data. Super Graph supports `insert`, `update`, `upsert` and `delete` mutations. You can also do complex nested inserts and updates.
When using mutations the data must be passed as variables since Super Graph compiles the query into a prepared statement in the database for maximum speed. Prepared statements are like functions in your code: when called they accept arguments and your variables are passed in as those arguments. When using mutations the data must be passed as variables since Super Graph compiles the query into a prepared statement in the database for maximum speed. Prepared statements are like functions in your code: when called they accept arguments and your variables are passed in as those arguments.
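For example, here's a minimal sketch of an insert mutation using the core API from the Go example above. The `user` table and its columns are assumptions, and so is passing the variables as raw JSON in the third argument to `GraphQL`:
```go
package main

import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"log"

	"github.com/dosco/super-graph/core"
	_ "github.com/jackc/pgx/v4/stdlib"
)

func main() {
	db, err := sql.Open("pgx", "postgres://postgres:@localhost:5432/example_db")
	if err != nil {
		log.Fatal(err)
	}

	sg, err := core.NewSuperGraph(nil, db)
	if err != nil {
		log.Fatal(err)
	}

	// The mutation only references $data; the actual values travel
	// separately as variables, so the compiled prepared statement can
	// be reused for every insert. (The "user" table is an assumption.)
	mutation := `
	mutation {
		user(insert: $data) {
			id
			email
		}
	}`

	vars := json.RawMessage(`{ "data": { "email": "user@example.com", "full_name": "Test User" } }`)

	res, err := sg.GraphQL(context.Background(), mutation, vars)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(res.Data))
}
```
Only the variables change between calls; the query text, and therefore the prepared statement, stays constant.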
@ -1038,7 +1069,7 @@ mutation {
### Pagination ### Pagination
This is a must have feature of any API. When you want your users to go thought a list page by page or implement some fancy infinite scroll you're going to need pagination. There are two ways to paginate in Super Graph. This is a must-have feature of any API. When you want your users to go through a list page by page or implement some fancy infinite scroll you're going to need pagination. There are two ways to paginate in Super Graph.
Limit-Offset Limit-Offset
This is simple enough but also inefficient when working with a large number of total items. Limit limits the number of items fetched, and offset is the point you want to fetch from. The below query will fetch 10 results at a time starting with the 100th item. You will have to keep updating offset (110, 120, 130, etc.) to walk through the results, so make offset a variable. This is simple enough but also inefficient when working with a large number of total items. Limit limits the number of items fetched, and offset is the point you want to fetch from. The below query will fetch 10 results at a time starting with the 100th item. You will have to keep updating offset (110, 120, 130, etc.) to walk through the results, so make offset a variable.
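Here's a sketch of what that looks like with the core API, reusing the `sg` instance from the Go example above; the `products` table is an assumption:
```go
// Fetch one page of 10 items starting at the given offset. Only the
// $offset variable changes between calls, so the compiled statement
// is reused for every page.
const pageQuery = `
query {
	products(limit: 10, offset: $offset) {
		id
		name
	}
}`

func fetchPage(sg *core.SuperGraph, offset int) (json.RawMessage, error) {
	vars := json.RawMessage(fmt.Sprintf(`{ "offset": %d }`, offset))
	res, err := sg.GraphQL(context.Background(), pageQuery, vars)
	if err != nil {
		return nil, err
	}
	return res.Data, nil
}
```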
@ -1054,7 +1085,7 @@ query {
``` ```
#### Cursor #### Cursor
This is a powerful and highly efficient way to paginate though a large number of results. Infact it does not matter how many total results there are this will always be lighting fast. You can use a cursor to walk forward of backward though the results. If you plan to implement infinite scroll this is the option you should choose. This is a powerful and highly efficient way to paginate through a large number of results. In fact, it does not matter how many total results there are; this will always be lightning fast. You can use a cursor to walk forward or backward through the results. If you plan to implement infinite scroll this is the option you should choose.
When going this route the results will contain a cursor value. This is an encrypted string that you don't have to worry about; just pass it back in the next API call and you'll receive the next set of results. The cursor value is encrypted since its contents should only matter to Super Graph and not the client. Also, since the primary key is used for this feature, it's possible you might not want to leak its value to clients. When going this route the results will contain a cursor value. This is an encrypted string that you don't have to worry about; just pass it back in the next API call and you'll receive the next set of results. The cursor value is encrypted since its contents should only matter to Super Graph and not the client. Also, since the primary key is used for this feature, it's possible you might not want to leak its value to clients.
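Here's a sketch in the same style; the `first`/`after` arguments and the `products_cursor` field are assumptions modeled on the landing-page query above:
```go
// Fetch the next page of results. Pass an empty cursor for the first
// page; each response carries an encrypted cursor that you feed back
// in verbatim to get the page after it.
const cursorQuery = `
query {
	products(first: 10, after: $cursor) {
		id
		name
	}
	products_cursor
}`

func nextPage(sg *core.SuperGraph, cursor string) (items json.RawMessage, next string, err error) {
	vars, err := json.Marshal(map[string]string{"cursor": cursor})
	if err != nil {
		return nil, "", err
	}
	res, err := sg.GraphQL(context.Background(), cursorQuery, vars)
	if err != nil {
		return nil, "", err
	}
	var page struct {
		Products       json.RawMessage `json:"products"`
		ProductsCursor string          `json:"products_cursor"`
	}
	if err := json.Unmarshal(res.Data, &page); err != nil {
		return nil, "", err
	}
	return page.Products, page.ProductsCursor, nil
}
```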
@ -1704,7 +1735,7 @@ reload_on_config_change: true
# seed_file: seed.js # seed_file: seed.js
# Path pointing to where the migrations can be found # Path pointing to where the migrations can be found
migrations_path: ./config/migrations migrations_path: ./migrations
# Postgres related environment Variables # Postgres related environment Variables
# SG_DATABASE_HOST # SG_DATABASE_HOST
@ -1790,6 +1821,25 @@ database:
# Enable this if you need the user id in triggers, etc # Enable this if you need the user id in triggers, etc
set_user_id: false set_user_id: false
# database ping timeout is used for db health checking
ping_timeout: 1m
# Set up a secure TLS-encrypted db connection
enable_tls: false
# Required for tls. For example with Google Cloud SQL it's
# <gcp-project-id>:<cloud-sql-instance>
# server_name: blah
# Required for tls. Can be a file path or the contents of the pem file
# server_cert: ./server-ca.pem
# Required for tls. Can be a file path or the contents of the pem file
# client_cert: ./client-cert.pem
# Required for tls. Can be a file path or the contents of the pem file
# client_key: ./client-key.pem
# Define additional variables here to be used with filters # Define additional variables here to be used with filters
variables: variables:
admin_account_id: "5" admin_account_id: "5"


@ -3,6 +3,11 @@ services:
db: db:
image: postgres image: postgres
tmpfs: /var/lib/postgresql/data tmpfs: /var/lib/postgresql/data
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
ports:
- "5432:5432"
rails_app: rails_app:
image: dosco/super-graph-demo:latest image: dosco/super-graph-demo:latest

go.mod

@ -1,35 +1,43 @@
module github.com/dosco/super-graph module github.com/dosco/super-graph
require ( require (
github.com/DATA-DOG/go-sqlmock v1.4.1
github.com/GeertJohan/go.rice v1.0.0 github.com/GeertJohan/go.rice v1.0.0
github.com/NYTimes/gziphandler v1.1.1 github.com/NYTimes/gziphandler v1.1.1
github.com/adjust/gorails v0.0.0-20171013043634-2786ed0c03d3 github.com/adjust/gorails v0.0.0-20171013043634-2786ed0c03d3
github.com/bradfitz/gomemcache v0.0.0-20190913173617-a41fca850d0b github.com/bradfitz/gomemcache v0.0.0-20190913173617-a41fca850d0b
github.com/brianvoe/gofakeit v3.18.0+incompatible github.com/brianvoe/gofakeit/v5 v5.2.0
github.com/cespare/xxhash/v2 v2.1.0 github.com/cespare/xxhash/v2 v2.1.1
github.com/chirino/graphql v0.0.0-20200430165312-293648399b1a
github.com/daaku/go.zipexe v1.0.1 // indirect github.com/daaku/go.zipexe v1.0.1 // indirect
github.com/dgrijalva/jwt-go v3.2.0+incompatible github.com/dgrijalva/jwt-go v3.2.0+incompatible
github.com/dlclark/regexp2 v1.2.0 // indirect github.com/dlclark/regexp2 v1.2.0 // indirect
github.com/dop251/goja v0.0.0-20190912223329-aa89e6a4c733 github.com/dop251/goja v0.0.0-20200424152103-d0b8fda54cd0
github.com/fsnotify/fsnotify v1.4.7 github.com/fsnotify/fsnotify v1.4.9
github.com/garyburd/redigo v1.6.0 github.com/garyburd/redigo v1.6.0
github.com/go-sourcemap/sourcemap v2.1.2+incompatible // indirect github.com/go-sourcemap/sourcemap v2.1.3+incompatible // indirect
github.com/gobuffalo/flect v0.1.6 github.com/gobuffalo/flect v0.2.1
github.com/jackc/pgtype v1.0.1 github.com/gosimple/slug v1.9.0
github.com/jackc/pgx/v4 v4.0.1 github.com/jackc/pgtype v1.3.0
github.com/magiconair/properties v1.8.1 // indirect github.com/jackc/pgx/v4 v4.6.0
github.com/pelletier/go-toml v1.4.0 // indirect github.com/mitchellh/mapstructure v1.2.2 // indirect
github.com/pkg/errors v0.8.1 github.com/pelletier/go-toml v1.7.0 // indirect
github.com/pkg/errors v0.9.1
github.com/rs/cors v1.7.0 github.com/rs/cors v1.7.0
github.com/spf13/afero v1.2.2 // indirect github.com/spf13/afero v1.2.2 // indirect
github.com/spf13/cobra v0.0.5 github.com/spf13/cast v1.3.1 // indirect
github.com/spf13/cobra v1.0.0
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect github.com/spf13/pflag v1.0.5 // indirect
github.com/spf13/viper v1.4.0 github.com/spf13/viper v1.6.3
github.com/valyala/fasttemplate v1.0.1 github.com/stretchr/testify v1.5.1
github.com/valyala/fasttemplate v1.1.0
go.uber.org/zap v1.14.1 go.uber.org/zap v1.14.1
golang.org/x/crypto v0.0.0-20190927123631-a832865fa7ad golang.org/x/crypto v0.0.0-20200414173820-0848c9571904
golang.org/x/sys v0.0.0-20191128015809-6d18c012aee9 // indirect golang.org/x/sys v0.0.0-20200413165638-669c56c373c4 // indirect
gopkg.in/yaml.v2 v2.2.7 // indirect golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 // indirect
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 // indirect
gopkg.in/ini.v1 v1.55.0 // indirect
) )
go 1.13 go 1.13

go.sum

@ -1,6 +1,8 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/DATA-DOG/go-sqlmock v1.4.1 h1:ThlnYciV1iM/V0OSF/dtkqWb6xo5qITT1TJBG1MRDJM=
github.com/DATA-DOG/go-sqlmock v1.4.1/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM=
github.com/GeertJohan/go.incremental v1.0.0 h1:7AH+pY1XUgQE4Y1HcXYaMqAI0m9yrFqo/jt0CW30vsg= github.com/GeertJohan/go.incremental v1.0.0 h1:7AH+pY1XUgQE4Y1HcXYaMqAI0m9yrFqo/jt0CW30vsg=
github.com/GeertJohan/go.incremental v1.0.0/go.mod h1:6fAjUhbVuX1KcMD3c8TEgVUqmo4seqhv0i0kdATSkM0= github.com/GeertJohan/go.incremental v1.0.0/go.mod h1:6fAjUhbVuX1KcMD3c8TEgVUqmo4seqhv0i0kdATSkM0=
github.com/GeertJohan/go.rice v1.0.0 h1:KkI6O9uMaQU3VEKaj01ulavtF7o1fWT7+pk/4voiMLQ= github.com/GeertJohan/go.rice v1.0.0 h1:KkI6O9uMaQU3VEKaj01ulavtF7o1fWT7+pk/4voiMLQ=
@ -19,24 +21,27 @@ github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/bradfitz/gomemcache v0.0.0-20190913173617-a41fca850d0b h1:L/QXpzIa3pOvUGt1D1lA5KjYhPBAN/3iWdP7xeFS9F0= github.com/bradfitz/gomemcache v0.0.0-20190913173617-a41fca850d0b h1:L/QXpzIa3pOvUGt1D1lA5KjYhPBAN/3iWdP7xeFS9F0=
github.com/bradfitz/gomemcache v0.0.0-20190913173617-a41fca850d0b/go.mod h1:H0wQNHz2YrLsuXOZozoeDmnHXkNCRmMW0gwFWDfEZDA= github.com/bradfitz/gomemcache v0.0.0-20190913173617-a41fca850d0b/go.mod h1:H0wQNHz2YrLsuXOZozoeDmnHXkNCRmMW0gwFWDfEZDA=
github.com/brianvoe/gofakeit v3.18.0+incompatible h1:wDOmHc9DLG4nRjUVVaxA+CEglKOW72Y5+4WNxUIkjM8= github.com/brianvoe/gofakeit/v5 v5.2.0 h1:De9X+2PQum9U2zCaIDxLV7wx0YBL6c7RN2sFBImzHGI=
github.com/brianvoe/gofakeit v3.18.0+incompatible/go.mod h1:kfwdRA90vvNhPutZWfH7WPaDzUjz+CZFqG+rPkOjGOc= github.com/brianvoe/gofakeit/v5 v5.2.0/go.mod h1:/ZENnKqX+XrN8SORLe/fu5lZDIo1tuPncWuRD+eyhSI=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko= github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.0 h1:yTUvW7Vhb89inJ+8irsUqiWjh8iT6sQPZiQzI6ReGkA= github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.0/go.mod h1:dgIUBU3pDso/gPgZ1osOZ0iQf77oPR28Tjxl5dIMyVM= github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chirino/graphql v0.0.0-20200430165312-293648399b1a h1:WVu7r2vwlrBVmunbSSU+9/3M3AgsQyhE49CKDjHiFq4=
github.com/chirino/graphql v0.0.0-20200430165312-293648399b1a/go.mod h1:wQjjxFMFyMlsWh4Z3nMuHQtevD4Ul9UVQSnz1JOLuP8=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cockroachdb/apd v1.1.0 h1:3LFP3629v+1aKXU5Q37mxmRxX/pIu1nijXydLShEq5I= github.com/cockroachdb/apd v1.1.0 h1:3LFP3629v+1aKXU5Q37mxmRxX/pIu1nijXydLShEq5I=
github.com/cockroachdb/apd v1.1.0/go.mod h1:8Sl8LxpKi29FqWXR16WEFZRNSz3SoPzUzeMeY4+DwBQ= github.com/cockroachdb/apd v1.1.0/go.mod h1:8Sl8LxpKi29FqWXR16WEFZRNSz3SoPzUzeMeY4+DwBQ=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd h1:qMd81Ts1T2OTKmB4acZcyKaMtRnY5Y44NuXGX2GFJ1w=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA= github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10 h1:BSKMNlYxDvnunlTymqtgONjNnaRV1sTpcovwwjF22jk= github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY= github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/daaku/go.zipexe v1.0.0 h1:VSOgZtH418pH9L16hC/JrgSNJbbAL26pj7lmD1+CGdY= github.com/daaku/go.zipexe v1.0.0 h1:VSOgZtH418pH9L16hC/JrgSNJbbAL26pj7lmD1+CGdY=
github.com/daaku/go.zipexe v1.0.0/go.mod h1:z8IiR6TsVLEYKwXAoE/I+8ys/sDkgTzSL0CLnGVd57E= github.com/daaku/go.zipexe v1.0.0/go.mod h1:z8IiR6TsVLEYKwXAoE/I+8ys/sDkgTzSL0CLnGVd57E=
@ -50,21 +55,24 @@ github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZm
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no= github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/dlclark/regexp2 v1.2.0 h1:8sAhBGEM0dRWogWqWyQeIJnxjWO6oIjl8FKqREDsGfk= github.com/dlclark/regexp2 v1.2.0 h1:8sAhBGEM0dRWogWqWyQeIJnxjWO6oIjl8FKqREDsGfk=
github.com/dlclark/regexp2 v1.2.0/go.mod h1:2pZnwuY/m+8K6iRw6wQdMtk+rH5tNGR1i55kozfMjCc= github.com/dlclark/regexp2 v1.2.0/go.mod h1:2pZnwuY/m+8K6iRw6wQdMtk+rH5tNGR1i55kozfMjCc=
github.com/dop251/goja v0.0.0-20190912223329-aa89e6a4c733 h1:cyNc40Dx5YNEO94idePU8rhVd3dn+sd04Arh0kDBAaw= github.com/dop251/goja v0.0.0-20200424152103-d0b8fda54cd0 h1:EfFAcaAwGai/wlDCWwIObHBm3T2C2CCPX/SaS0fpOJ4=
github.com/dop251/goja v0.0.0-20190912223329-aa89e6a4c733/go.mod h1:Mw6PkjjMXWbTj+nnj4s3QPXq1jaT0s5pC0iFD4+BOAA= github.com/dop251/goja v0.0.0-20200424152103-d0b8fda54cd0/go.mod h1:Mw6PkjjMXWbTj+nnj4s3QPXq1jaT0s5pC0iFD4+BOAA=
github.com/friendsofgo/graphiql v0.2.2/go.mod h1:8Y2kZ36AoTGWs78+VRpvATyt3LJBx0SZXmay80ZTRWo=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I= github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/garyburd/redigo v1.6.0 h1:0VruCpn7yAIIu7pWVClQC8wxCJEcG3nyzpMSHKi1PQc= github.com/garyburd/redigo v1.6.0 h1:0VruCpn7yAIIu7pWVClQC8wxCJEcG3nyzpMSHKi1PQc=
github.com/garyburd/redigo v1.6.0/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY= github.com/garyburd/redigo v1.6.0/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-sourcemap/sourcemap v2.1.2+incompatible h1:0b/xya7BKGhXuqFESKM4oIiRo9WOt2ebz7KxfreD6ug= github.com/go-sourcemap/sourcemap v2.1.3+incompatible h1:W1iEw64niKVGogNgBN3ePyLFfuisuzeidWPMPWmECqU=
github.com/go-sourcemap/sourcemap v2.1.2+incompatible/go.mod h1:F8jJfvm2KbVjc5NqelyYJmf/v5J0dwNLS2mL4sNA1Jg= github.com/go-sourcemap/sourcemap v2.1.3+incompatible/go.mod h1:F8jJfvm2KbVjc5NqelyYJmf/v5J0dwNLS2mL4sNA1Jg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gobuffalo/flect v0.1.6 h1:D7KWNRFiCknJKA495/e1BO7oxqf8tbieaLv/ehoZ/+g= github.com/gobuffalo/flect v0.2.1 h1:GPoRjEN0QObosV4XwuoWvSd5uSiL0N3e91/xqyY4crQ=
github.com/gobuffalo/flect v0.1.6/go.mod h1:W3K3X9ksuZfir8f/LrfVtWmCDQFfayuylOJ7sz/Fj80= github.com/gobuffalo/flect v0.2.1/go.mod h1:vmkQwuZYhN5Pc4ljYQZzP+1sq+NEkK+lh20jmEmX3jc=
github.com/gofrs/uuid v3.2.0+incompatible h1:y12jRkkFxsd7GpqdSZ+/KCs/fJbqpEXSGd4+jfEaewE= github.com/gofrs/uuid v3.2.0+incompatible h1:y12jRkkFxsd7GpqdSZ+/KCs/fJbqpEXSGd4+jfEaewE=
github.com/gofrs/uuid v3.2.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM= github.com/gofrs/uuid v3.2.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
@ -76,9 +84,15 @@ github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5y
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.4.0 h1:WDFjx/TMzVgy9VdMMQi2K2Emtwi2QcUQsztZ/zLaH/Q= github.com/gorilla/websocket v1.4.0 h1:WDFjx/TMzVgy9VdMMQi2K2Emtwi2QcUQsztZ/zLaH/Q=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gosimple/slug v1.9.0 h1:r5vDcYrFz9BmfIAMC829un9hq7hKM4cHUrsv36LbEqs=
github.com/gosimple/slug v1.9.0/go.mod h1:AMZ+sOVe65uByN3kgEyf9WEBKBCSS+dJjMX9x4vDJbg=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs= github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
@ -90,11 +104,13 @@ github.com/jackc/chunkreader v1.0.0 h1:4s39bBR8ByfqH+DKm8rQA3E1LHZWB9XWcrz8fqaZb
github.com/jackc/chunkreader v1.0.0/go.mod h1:RT6O25fNZIuasFJRyZ4R/Y2BbhasbmZXF9QQ7T3kePo= github.com/jackc/chunkreader v1.0.0/go.mod h1:RT6O25fNZIuasFJRyZ4R/Y2BbhasbmZXF9QQ7T3kePo=
github.com/jackc/chunkreader/v2 v2.0.0 h1:DUwgMQuuPnS0rhMXenUtZpqZqrR/30NWY+qQvTpSvEs= github.com/jackc/chunkreader/v2 v2.0.0 h1:DUwgMQuuPnS0rhMXenUtZpqZqrR/30NWY+qQvTpSvEs=
github.com/jackc/chunkreader/v2 v2.0.0/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk= github.com/jackc/chunkreader/v2 v2.0.0/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
github.com/jackc/chunkreader/v2 v2.0.1 h1:i+RDz65UE+mmpjTfyz0MoVTnzeYxroil2G82ki7MGG8=
github.com/jackc/chunkreader/v2 v2.0.1/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
github.com/jackc/pgconn v0.0.0-20190420214824-7e0022ef6ba3/go.mod h1:jkELnwuX+w9qN5YIfX0fl88Ehu4XC3keFuOJJk9pcnA= github.com/jackc/pgconn v0.0.0-20190420214824-7e0022ef6ba3/go.mod h1:jkELnwuX+w9qN5YIfX0fl88Ehu4XC3keFuOJJk9pcnA=
github.com/jackc/pgconn v0.0.0-20190824142844-760dd75542eb/go.mod h1:lLjNuW/+OfW9/pnVKPazfWOgNfH2aPem8YQ7ilXGvJE= github.com/jackc/pgconn v0.0.0-20190824142844-760dd75542eb/go.mod h1:lLjNuW/+OfW9/pnVKPazfWOgNfH2aPem8YQ7ilXGvJE=
github.com/jackc/pgconn v0.0.0-20190831204454-2fabfa3c18b7/go.mod h1:ZJKsE/KZfsUgOEh9hBm+xYTstcNHg7UPMVJqRfQxq4s= github.com/jackc/pgconn v0.0.0-20190831204454-2fabfa3c18b7/go.mod h1:ZJKsE/KZfsUgOEh9hBm+xYTstcNHg7UPMVJqRfQxq4s=
github.com/jackc/pgconn v1.0.1 h1:ZANo4pIkeHKIVD1cQMcxu8fwrwIICLblzi9HCjooZeQ= github.com/jackc/pgconn v1.5.0 h1:oFSOilzIZkyg787M1fEmyMfOUUvwj0daqYMfaWwNL4o=
github.com/jackc/pgconn v1.0.1/go.mod h1:GgY/Lbj1VonNaVdNUHs9AwWom3yP2eymFQ1C8z9r/Lk= github.com/jackc/pgconn v1.5.0/go.mod h1:QeD3lBfpTFe8WUnPZWN5KY/mB8FGMIYRdd8P8Jr0fAI=
github.com/jackc/pgio v1.0.0 h1:g12B9UwVnzGhueNavwioyEEpAmqMe1E/BN9ES+8ovkE= github.com/jackc/pgio v1.0.0 h1:g12B9UwVnzGhueNavwioyEEpAmqMe1E/BN9ES+8ovkE=
github.com/jackc/pgio v1.0.0/go.mod h1:oP+2QK2wFfUWgr+gxjoBH9KGBb31Eio69xUb0w5bYf8= github.com/jackc/pgio v1.0.0/go.mod h1:oP+2QK2wFfUWgr+gxjoBH9KGBb31Eio69xUb0w5bYf8=
github.com/jackc/pgmock v0.0.0-20190831213851-13a1b77aafa2 h1:JVX6jT/XfzNqIjye4717ITLaNwV9mWbJx0dLCpcRzdA= github.com/jackc/pgmock v0.0.0-20190831213851-13a1b77aafa2 h1:JVX6jT/XfzNqIjye4717ITLaNwV9mWbJx0dLCpcRzdA=
@ -107,25 +123,31 @@ github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190420180111-c116219b62db/go.mod
github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190609003834-432c2951c711/go.mod h1:uH0AWtUmuShn0bcesswc4aBTWGvw0cAxIJp+6OB//Wg= github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190609003834-432c2951c711/go.mod h1:uH0AWtUmuShn0bcesswc4aBTWGvw0cAxIJp+6OB//Wg=
github.com/jackc/pgproto3/v2 v2.0.0-rc3/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM= github.com/jackc/pgproto3/v2 v2.0.0-rc3/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM=
github.com/jackc/pgproto3/v2 v2.0.0-rc3.0.20190831210041-4c03ce451f29/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM= github.com/jackc/pgproto3/v2 v2.0.0-rc3.0.20190831210041-4c03ce451f29/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM=
github.com/jackc/pgproto3/v2 v2.0.0 h1:FApgMJ/GtaXfI0s8Lvd0kaLaRwMOhs4VH92pwkwQQvU= github.com/jackc/pgproto3/v2 v2.0.1 h1:Rdjp4NFjwHnEslx2b66FfCI2S0LhO4itac3hXz6WX9M=
github.com/jackc/pgproto3/v2 v2.0.0/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM= github.com/jackc/pgproto3/v2 v2.0.1/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgservicefile v0.0.0-20200307190119-3430c5407db8 h1:Q3tB+ExeflWUW7AFcAhXqk40s9mnNYLk1nOkKNZ5GnU=
github.com/jackc/pgservicefile v0.0.0-20200307190119-3430c5407db8/go.mod h1:vsD4gTJCa9TptPL8sPkXrLZ+hDuNrZCnj29CQpr4X1E=
github.com/jackc/pgtype v0.0.0-20190421001408-4ed0de4755e0/go.mod h1:hdSHsc1V01CGwFsrv11mJRHWJ6aifDLfdV3aVjFF0zg= github.com/jackc/pgtype v0.0.0-20190421001408-4ed0de4755e0/go.mod h1:hdSHsc1V01CGwFsrv11mJRHWJ6aifDLfdV3aVjFF0zg=
github.com/jackc/pgtype v0.0.0-20190824184912-ab885b375b90/go.mod h1:KcahbBH1nCMSo2DXpzsoWOAfFkdEtEJpPbVLq8eE+mc= github.com/jackc/pgtype v0.0.0-20190824184912-ab885b375b90/go.mod h1:KcahbBH1nCMSo2DXpzsoWOAfFkdEtEJpPbVLq8eE+mc=
github.com/jackc/pgtype v0.0.0-20190828014616-a8802b16cc59/go.mod h1:MWlu30kVJrUS8lot6TQqcg7mtthZ9T0EoIBFiJcmcyw= github.com/jackc/pgtype v0.0.0-20190828014616-a8802b16cc59/go.mod h1:MWlu30kVJrUS8lot6TQqcg7mtthZ9T0EoIBFiJcmcyw=
github.com/jackc/pgtype v1.0.1 h1:7GWB9n3DdnO3TIbj59wMAE9QcHPL4cy/Bbtk5P1Noow= github.com/jackc/pgtype v1.3.0 h1:l8JvKrby3RI7Kg3bYEeU9TA4vqC38QDpFCfcrC7KuN0=
github.com/jackc/pgtype v1.0.1/go.mod h1:5m2OfMh1wTK7x+Fk952IDmI4nw3nPrvtQdM0ZT4WpC0= github.com/jackc/pgtype v1.3.0/go.mod h1:b0JqxHvPmljG+HQ5IsvQ0yqeSi4nGcDTVjFoiLDb0Ik=
github.com/jackc/pgx v3.6.2+incompatible h1:2zP5OD7kiyR3xzRYMhOcXVvkDZsImVXfj+yIyTQf3/o=
github.com/jackc/pgx v3.6.2+incompatible/go.mod h1:0ZGrqGqkRlliWnWB4zKnWtjbSWbGkVEFm4TeybAXq+I=
github.com/jackc/pgx/v4 v4.0.0-20190420224344-cc3461e65d96/go.mod h1:mdxmSJJuR08CZQyj1PVQBHy9XOp5p8/SHH6a0psbY9Y= github.com/jackc/pgx/v4 v4.0.0-20190420224344-cc3461e65d96/go.mod h1:mdxmSJJuR08CZQyj1PVQBHy9XOp5p8/SHH6a0psbY9Y=
github.com/jackc/pgx/v4 v4.0.0-20190421002000-1b8f0016e912/go.mod h1:no/Y67Jkk/9WuGR0JG/JseM9irFbnEPbuWV2EELPNuM= github.com/jackc/pgx/v4 v4.0.0-20190421002000-1b8f0016e912/go.mod h1:no/Y67Jkk/9WuGR0JG/JseM9irFbnEPbuWV2EELPNuM=
github.com/jackc/pgx/v4 v4.0.0-pre1.0.20190824185557-6972a5742186/go.mod h1:X+GQnOEnf1dqHGpw7JmHqHc1NxDoalibchSk9/RWuDc= github.com/jackc/pgx/v4 v4.0.0-pre1.0.20190824185557-6972a5742186/go.mod h1:X+GQnOEnf1dqHGpw7JmHqHc1NxDoalibchSk9/RWuDc=
github.com/jackc/pgx/v4 v4.0.1 h1:NNrG0MX2AVEJw1NNDYg+ixSXycCfWWKeqMuQHQkAngc= github.com/jackc/pgx/v4 v4.6.0 h1:Fh0O9GdlG4gYpjpwOqjdEodJUQM9jzN3Hdv7PN0xmm0=
github.com/jackc/pgx/v4 v4.0.1/go.mod h1:NeQ64VJooukJGFLX2r01sJL/gRbKlpvsO2giBvjfgrY= github.com/jackc/pgx/v4 v4.6.0/go.mod h1:vPh43ZzxijXUVJ+t/EmXBtFmbFVO72cuneCT9oAlxAg=
github.com/jackc/puddle v0.0.0-20190413234325-e4ced69a3a2b/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk= github.com/jackc/puddle v0.0.0-20190413234325-e4ced69a3a2b/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v0.0.0-20190608224051-11cab39313c9/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk= github.com/jackc/puddle v0.0.0-20190608224051-11cab39313c9/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v1.0.0 h1:rbjAshlgKscNa7j0jAM0uNQflis5o2XUogPMVAwtcsM= github.com/jackc/puddle v1.1.0/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v1.0.0/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jessevdk/go-flags v1.4.0 h1:4IU2WS7AumrZ/40jfhf4QVDMsQwqA7VEHozFRrGARJA= github.com/jessevdk/go-flags v1.4.0 h1:4IU2WS7AumrZ/40jfhf4QVDMsQwqA7VEHozFRrGARJA=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI= github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo= github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q= github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
@ -158,17 +180,26 @@ github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE= github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.2.2 h1:dxe5oCinTXiTIcfgmZecdCzPmAJKd46KsCWc35r0TV4=
github.com/mitchellh/mapstructure v1.2.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nkovacs/streamquote v0.0.0-20170412213628-49af9bddb229 h1:E2B8qYyeSgv5MXpmzZXRNp8IAQ4vjxIjhpAf5hv/tAg= github.com/nkovacs/streamquote v0.0.0-20170412213628-49af9bddb229 h1:E2B8qYyeSgv5MXpmzZXRNp8IAQ4vjxIjhpAf5hv/tAg=
github.com/nkovacs/streamquote v0.0.0-20170412213628-49af9bddb229/go.mod h1:0aYXnNPJ8l7uZxf45rWW1a/uME32OF0rhiYGNQ2oF2E= github.com/nkovacs/streamquote v0.0.0-20170412213628-49af9bddb229/go.mod h1:0aYXnNPJ8l7uZxf45rWW1a/uME32OF0rhiYGNQ2oF2E=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/opentracing/opentracing-go v1.1.0 h1:pWlfV3Bxv7k65HYwkikxat0+s3pV4bsqf19k25Ur8rU=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc= github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.4.0 h1:u3Z1r+oOXJIkxqw34zVhyPgjBsm6X2wn21NWs/HfSeg= github.com/pelletier/go-toml v1.7.0 h1:7utD74fnzVc/cpcyy8sjrlFr5vYpypUixARcHIMIGuI=
github.com/pelletier/go-toml v1.4.0/go.mod h1:PN7xzY2wHTK0K9p34ErDQMlFxa51Fk0OUruD3k1mMwo= github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I= github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
@ -180,6 +211,8 @@ github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y8
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rainycape/unidecode v0.0.0-20150907023854-cb7f23ec59be h1:ta7tUOvsPHVHGom5hKW5VXNc2xZIkfCKP8iaqOyYtUQ=
github.com/rainycape/unidecode v0.0.0-20150907023854-cb7f23ec59be/go.mod h1:MIDFMn7db1kT65GmV94GzpX9Qdi7N/pQlwb+AN8wh+Q=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg= github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik= github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik=
@ -188,14 +221,22 @@ github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/zerolog v1.13.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU= github.com/rs/zerolog v1.13.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU=
github.com/rs/zerolog v1.15.0 h1:uPRuwkWF4J6fGsJ2R0Gn2jB1EQiav9k3S6CSdygQJXY= github.com/rs/zerolog v1.15.0 h1:uPRuwkWF4J6fGsJ2R0Gn2jB1EQiav9k3S6CSdygQJXY=
github.com/rs/zerolog v1.15.0/go.mod h1:xYTKnLHcpfU2225ny5qZjxnj9NvkumZYjJHlAThCjNc= github.com/rs/zerolog v1.15.0/go.mod h1:xYTKnLHcpfU2225ny5qZjxnj9NvkumZYjJHlAThCjNc=
github.com/russross/blackfriday v1.5.2 h1:HyvC0ARfnZBqnXwABFeSZHpKvJHJJfPz81GNueLj0oo= github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0= github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/segmentio/ksuid v1.0.2 h1:9yBfKyw4ECGTdALaF09Snw3sLJmYIX6AbPJrAy6MrDc=
github.com/segmentio/ksuid v1.0.2/go.mod h1:BXuJDr2byAiHuQaQtSKoXh1J0YmUDurywOXgB2w+OSU=
github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24 h1:pntxY8Ary0t43dCZ5dqY4YTJCObLY1kIXl0uzMv+7DE= github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24 h1:pntxY8Ary0t43dCZ5dqY4YTJCObLY1kIXl0uzMv+7DE=
github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24/go.mod h1:M+9NzErvs504Cn4c5DxATwIqPbtswREoFCre64PpcG4= github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24/go.mod h1:M+9NzErvs504Cn4c5DxATwIqPbtswREoFCre64PpcG4=
github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/shurcooL/vfsgen v0.0.0-20181202132449-6a9ea43bcacd/go.mod h1:TrYk7fJVaAttu97ZZKrO9UbRa8izdowaMIZcxYMbVaw=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q= github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM= github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA= github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI= github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI=
@ -204,18 +245,22 @@ github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk= github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0 h1:oget//CVOEoFewqQxwr0Ej5yjygnqGkvggSE/gB35Q8= github.com/spf13/cast v1.3.0 h1:oget//CVOEoFewqQxwr0Ej5yjygnqGkvggSE/gB35Q8=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s= github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v1.0.0 h1:6m/oheQuQ13N9ks4hubMG6BnvwOeaJrqSPLahSnczz8=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk= github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg= github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2 h1:VUFqw5KcqRf7i70GOzW7N+Q7+gxVBkSSqiXB12+JQ4M=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.4.0 h1:yXHLWeravcrgGyFSyCgdYpXQ9dR9c/WED3pg1RhxqEU= github.com/spf13/viper v1.4.0 h1:yXHLWeravcrgGyFSyCgdYpXQ9dR9c/WED3pg1RhxqEU=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE= github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/viper v1.6.3 h1:pDDu1OyEDTKzpJwdq4TiuLyMsUgRa/BT5cn5O62NoHs=
github.com/spf13/viper v1.6.3/go.mod h1:jUMtyi0/lB5yZH/FjyGAoH7IMNrIhlBf6pXZmbMDvzw=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
@ -224,13 +269,24 @@ github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk= github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U= github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/uber-go/atomic v1.3.2 h1:Azu9lPBWRNKzYXSIwRfgRuDuS0YKsK4NFhiQv98gkxo=
github.com/uber-go/atomic v1.3.2/go.mod h1:/Ct5t2lcmbJ4OSe/waGBoaVvVqtO0bmtfVNex1PFV8g=
github.com/uber/jaeger-client-go v2.14.1-0.20180928181052-40fb3b2c4120+incompatible h1:Dw0AFQs6RGO8RxMPGP2LknN/VtHolVH82P9PP0Ni+9w=
github.com/uber/jaeger-client-go v2.14.1-0.20180928181052-40fb3b2c4120+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
github.com/uber/jaeger-lib v1.5.0 h1:OHbgr8l656Ub3Fw5k9SWnBfIEwvoHQ+W2y+Aa9D1Uyo=
github.com/uber/jaeger-lib v1.5.0/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc= github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw= github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc= github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasttemplate v1.0.1 h1:tY9CJiPnMXf1ERmG2EyK7gNUd+c6RKGD0IfU8WdUSz8= github.com/valyala/fasttemplate v1.0.1 h1:tY9CJiPnMXf1ERmG2EyK7gNUd+c6RKGD0IfU8WdUSz8=
github.com/valyala/fasttemplate v1.0.1/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPUpymEIMZ47gx8= github.com/valyala/fasttemplate v1.0.1/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPUpymEIMZ47gx8=
github.com/valyala/fasttemplate v1.1.0 h1:RZqt0yGBsps8NGvLSGW804QQqCUYYLsaOjTVHy1Ocw4=
github.com/valyala/fasttemplate v1.1.0/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPUpymEIMZ47gx8=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/zenazn/goji v0.9.0/go.mod h1:7S9M489iMyHBNxwZnk9/EHS098H4/F6TATF2mIxtB1Q= github.com/zenazn/goji v0.9.0/go.mod h1:7S9M489iMyHBNxwZnk9/EHS098H4/F6TATF2mIxtB1Q=
@ -249,22 +305,21 @@ go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.14.1 h1:nYDKopTbvAPq/NrUVZwT15y2lpROBiLLyoRTbXOYWOo= go.uber.org/zap v1.14.1 h1:nYDKopTbvAPq/NrUVZwT15y2lpROBiLLyoRTbXOYWOo=
go.uber.org/zap v1.14.1/go.mod h1:Mb2vm2krFEG5DV0W9qcHBYFtp/Wku1cvYaqPsS/WYfc= go.uber.org/zap v1.14.1/go.mod h1:Mb2vm2krFEG5DV0W9qcHBYFtp/Wku1cvYaqPsS/WYfc=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9 h1:mKdxBk7AujPs8kU4m80U72y/zjbZ3UcXC7dClwKbUI0=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190411191339-88737f569e3a/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7 h1:0hQKqeLdqlt5iIwVOBErRisrHJAN57yOiPRQItI20fU=
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190927123631-a832865fa7ad h1:5E5raQxcv+6CZ11RrBYQe5WRbUIWpScjh0kvHZkZIrQ=
golang.org/x/crypto v0.0.0-20190927123631-a832865fa7ad/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200323165209-0ec3e9974c59/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200414173820-0848c9571904 h1:bXoxMPcSLOq08zI3/c5dEBT6lE4eh+jOh886GHrn6V8=
golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -285,8 +340,6 @@ golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a h1:1n5lsVfiQW3yfsRGu98756EH1YthsFqr/5mxHduZW2A=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223 h1:DH4skfRX4EBpamg7iV4ZlCpblAHI6s6TDM39bFZumv8=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -296,8 +349,9 @@ golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456 h1:ng0gs1AKnRRuEMZoTLLlbOd+C17zUDepwGQBb/n+JVg=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191128015809-6d18c012aee9 h1:ZBzSG/7F4eNKz2L3GE9o300RX0Az1Bw5HF7PDraD+qU=
golang.org/x/sys v0.0.0-20191128015809-6d18c012aee9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200413165638-669c56c373c4 h1:opSr2sbRXk5X5/givKrrKj9HXxFpW2sdCiP8MJSKLQY=
golang.org/x/sys v0.0.0-20200413165638-669c56c373c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
@@ -307,16 +361,22 @@ golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGm
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425163242-31fd60d6bfdc/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190823170909-c4a336ef6a2f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5 h1:hKsoRgsbwY1NafxrwTs+k64bikrLBkAgPir1TNCj3Zs=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200128220307-520188d60f50 h1:0qnG0gwzB6QPiLDow10WJDdB38c+hQ7ArxO26Qc1boM=
golang.org/x/tools v0.0.0-20200128220307-520188d60f50/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7 h1:9zdDQZ7Thm29KFXgAX/+yaf3eVbP7djjWp/dXAppNCc=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
@@ -326,15 +386,22 @@ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/inconshreveable/log15.v2 v2.0.0-20180818164646-67afb5ed74ec/go.mod h1:aPpfJ7XW+gOuirDoZ8gHhLh3kZ1B08FtV2bbmy7Jv3s=
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/ini.v1 v1.55.0 h1:E8yzL5unfpW3M6fz/eB7Cb5MQAYSZ7GKo4Qth+N2sgQ=
gopkg.in/ini.v1 v1.55.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo=
gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099 h1:XJP7lxbSxWLOMNdBE4B/STaqVy6L73o0knwj2vIlxnw=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=

(modified file)

@@ -3,13 +3,11 @@ package serv
import (
	"fmt"
	"net/http"
-	"github.com/dosco/super-graph/config"
)

type actionFn func(w http.ResponseWriter, r *http.Request) error

-func newAction(a *config.Action) (http.Handler, error) {
+func newAction(a *Action) (http.Handler, error) {
	var fn actionFn
	var err error

@@ -25,14 +23,14 @@ func newAction(a *config.Action) (http.Handler, error) {
	httpFn := func(w http.ResponseWriter, r *http.Request) {
		if err := fn(w, r); err != nil {
-			renderErr(w, err, nil)
+			renderErr(w, err)
		}
	}

	return http.HandlerFunc(httpFn), nil
}

-func newSQLAction(a *config.Action) (actionFn, error) {
+func newSQLAction(a *Action) (actionFn, error) {
	fn := func(w http.ResponseWriter, r *http.Request) error {
		_, err := db.ExecContext(r.Context(), a.SQL)
		return err
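For orientation: an Action pairs a name with a SQL statement, and newAction turns it into a plain http.Handler. A hedged sketch of how such handlers might be mounted (the route prefix is an assumption, not taken from this diff; newAction is unexported, so this mirrors what the service would do internally):

	package serv

	import "net/http"

	// mountActions registers one handler per configured action,
	// roughly what the service does for each entry in conf.Actions.
	func mountActions(mux *http.ServeMux, actions []Action) error {
		for _, a := range actions {
			a := a // capture a fresh copy for the handler
			h, err := newAction(&a)
			if err != nil {
				return err
			}
			mux.Handle("/api/v1/actions/"+a.Name, h) // path is illustrative
		}
		return nil
	}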

internal/serv/api.go (new file, 113 lines)

@@ -0,0 +1,113 @@
package serv

import (
	"time"

	"github.com/dosco/super-graph/core"
	"github.com/dosco/super-graph/internal/serv/internal/auth"
	"github.com/spf13/viper"
)

const (
	LogLevelNone int = iota
	LogLevelInfo
	LogLevelWarn
	LogLevelError
	LogLevelDebug
)

type Core = core.Config

// Config struct holds the Super Graph config values
type Config struct {
	Core `mapstructure:",squash"`
	Serv `mapstructure:",squash"`

	cpath string
	vi    *viper.Viper
}

// Serv struct contains config values used by the Super Graph service
type Serv struct {
	AppName        string `mapstructure:"app_name"`
	Production     bool
	LogLevel       string `mapstructure:"log_level"`
	HostPort       string `mapstructure:"host_port"`
	Host           string
	Port           string
	HTTPGZip       bool     `mapstructure:"http_compress"`
	WebUI          bool     `mapstructure:"web_ui"`
	EnableTracing  bool     `mapstructure:"enable_tracing"`
	WatchAndReload bool     `mapstructure:"reload_on_config_change"`
	AuthFailBlock  bool     `mapstructure:"auth_fail_block"`
	SeedFile       string   `mapstructure:"seed_file"`
	MigrationsPath string   `mapstructure:"migrations_path"`
	AllowedOrigins []string `mapstructure:"cors_allowed_origins"`
	DebugCORS      bool     `mapstructure:"cors_debug"`
	APIPath        string   `mapstructure:"api_path"`
	CacheControl   string   `mapstructure:"cache_control"`

	Auth  auth.Auth
	Auths []auth.Auth

	DB struct {
		Type        string
		Host        string
		Port        uint16
		DBName      string
		User        string
		Password    string
		Schema      string
		PoolSize    int32         `mapstructure:"pool_size"`
		MaxRetries  int           `mapstructure:"max_retries"`
		PingTimeout time.Duration `mapstructure:"ping_timeout"`
		EnableTLS   bool          `mapstructure:"enable_tls"`
		ServerName  string        `mapstructure:"server_name"`
		ServerCert  string        `mapstructure:"server_cert"`
		ClientCert  string        `mapstructure:"client_cert"`
		ClientKey   string        `mapstructure:"client_key"`
	} `mapstructure:"database"`

	Actions []Action
}

// Auth struct contains authentication related config values used by the Super Graph service
type Auth struct {
	Name          string
	Type          string
	Cookie        string
	CredsInHeader bool `mapstructure:"creds_in_header"`

	Rails struct {
		Version       string
		SecretKeyBase string `mapstructure:"secret_key_base"`
		URL           string
		Password      string
		MaxIdle       int `mapstructure:"max_idle"`
		MaxActive     int `mapstructure:"max_active"`
		Salt          string
		SignSalt      string `mapstructure:"sign_salt"`
		AuthSalt      string `mapstructure:"auth_salt"`
	}

	JWT struct {
		Provider   string
		Secret     string
		PubKeyFile string `mapstructure:"public_key_file"`
		PubKeyType string `mapstructure:"public_key_type"`
	}

	Header struct {
		Name   string
		Value  string
		Exists bool
	}
}

// Action struct contains config values for a Super Graph service action
type Action struct {
	Name     string
	SQL      string
	AuthName string `mapstructure:"auth_name"`
}

(modified file)

@@ -6,11 +6,8 @@ import (
	_log "log"
	"os"
	"runtime"
-	"strings"
-	"github.com/dosco/super-graph/config"
	"github.com/spf13/cobra"
-	"github.com/spf13/viper"
	"go.uber.org/zap"
)

@@ -31,10 +28,10 @@ var (
var (
	log  *_log.Logger // logger
	zlog *zap.Logger  // fast logger
-	conf *config.Config // parsed config
+	logLevel int     // log level
+	conf     *Config // parsed config
	confPath string  // path to the config file
	db       *sql.DB // database connection pool
-	secretKey [32]byte // encryption key
)

func Cmd() {
@@ -132,12 +129,12 @@ e.g. db:migrate -+1
		Run: cmdNew,
	})

-	rootCmd.AddCommand(&cobra.Command{
-		Use:   fmt.Sprintf("conf:dump [%s]", strings.Join(viper.SupportedExts, "|")),
-		Short: "Dump config to file",
-		Long:  "Dump current config to a file in the selected format",
-		Run:   cmdConfDump,
-	})
+	// rootCmd.AddCommand(&cobra.Command{
+	// 	Use:   fmt.Sprintf("conf:dump [%s]", strings.Join(viper.SupportedExts, "|")),
+	// 	Short: "Dump config to file",
+	// 	Long:  "Dump current config to a file in the selected format",
+	// 	Run:   cmdConfDump,
+	// })

	rootCmd.AddCommand(&cobra.Command{
		Use: "version",
@@ -158,6 +155,20 @@ func cmdVersion(cmd *cobra.Command, args []string) {
}

func BuildDetails() string {
+	if len(version) == 0 {
+		return fmt.Sprintf(`
+Super Graph (unknown version)
+For documentation, visit https://supergraph.dev
+
+To build with version information please use the Makefile
+> git clone https://github.com/dosco/super-graph
+> cd super-graph && make install
+
+Licensed under the Apache Public License 2.0
+Copyright 2020, Vikram Rangnekar
+`)
+	}
+
	return fmt.Sprintf(`
Super Graph %v
For documentation, visit https://supergraph.dev
@@ -168,7 +179,7 @@ Branch : %v
Go version : %v

Licensed under the Apache Public License 2.0
-Copyright 2020, Vikram Rangnekar.
+Copyright 2020, Vikram Rangnekar
`,
	version,
	lastCommitSHA,
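The new unknown-version branch exists because `version`, `lastCommitSHA`, and friends are ordinary package variables that stay empty unless the linker fills them in at build time. A hedged sketch of the mechanism (variable and package names assumed for illustration):

	package main

	import "fmt"

	// Populated at build time, e.g.:
	//   go build -ldflags "-X main.version=v0.14.0 -X main.lastCommitSHA=d3e32f9"
	var (
		version       string
		lastCommitSHA string
	)

	func main() {
		if version == "" {
			fmt.Println("Super Graph (unknown version)")
			return
		}
		fmt.Printf("Super Graph %s (%s)\n", version, lastCommitSHA)
	}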

internal/serv/cmd_conf.go (new file, 21 lines)

@@ -0,0 +1,21 @@
package serv

// func cmdConfDump(cmd *cobra.Command, args []string) {
// 	if len(args) != 1 {
// 		cmd.Help() //nolint: errcheck
// 		os.Exit(1)
// 	}
//
// 	fname := fmt.Sprintf("%s.%s", config.GetConfigName(), args[0])
//
// 	conf, err := initConf()
// 	if err != nil {
// 		log.Fatalf("ERR failed to read config: %s", err)
// 	}
//
// 	if err := conf.WriteConfigAs(fname); err != nil {
// 		log.Fatalf("ERR failed to write config: %s", err)
// 	}
//
// 	log.Printf("INF config dumped to ./%s", fname)
// }
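The commented-out dump relied on viper's ability to serialize its merged state. If the command is ever revived, the core call is viper's WriteConfigAs, sketched here standalone (keys and file name are illustrative only):

	package main

	import (
		"log"

		"github.com/spf13/viper"
	)

	func main() {
		vi := viper.New()
		vi.Set("app_name", "demo")
		vi.Set("database.host", "localhost")

		// The output format is inferred from the extension (yml, json, toml, ...).
		if err := vi.WriteConfigAs("dev.yml"); err != nil {
			log.Fatalf("ERR failed to write config: %s", err)
		}
	}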

(modified file)

@@ -9,7 +9,7 @@ import (
	"strings"
	"time"

-	"github.com/dosco/super-graph/cmd/internal/serv/internal/migrate"
+	"github.com/dosco/super-graph/internal/serv/internal/migrate"
	"github.com/spf13/cobra"
)

@@ -26,7 +26,7 @@ func cmdDBSetup(cmd *cobra.Command, args []string) {
	cmdDBCreate(cmd, []string{})
	cmdDBMigrate(cmd, []string{"up"})

-	sfile := path.Join(conf.ConfigPathUsed(), conf.SeedFile)
+	sfile := path.Join(conf.cpath, conf.SeedFile)
	_, err := os.Stat(sfile)

	if err == nil {
@@ -55,7 +55,7 @@ func cmdDBReset(cmd *cobra.Command, args []string) {
func cmdDBCreate(cmd *cobra.Command, args []string) {
	initConfOnce()

-	db, err := initDB(conf)
+	db, err := initDB(conf, false)
	if err != nil {
		log.Fatalf("ERR failed to connect to database: %s", err)
	}
@@ -74,7 +74,7 @@ func cmdDBCreate(cmd *cobra.Command, args []string) {
func cmdDBDrop(cmd *cobra.Command, args []string) {
	initConfOnce()

-	db, err := initDB(conf)
+	db, err := initDB(conf, false)
	if err != nil {
		log.Fatalf("ERR failed to connect to database: %s", err)
	}
@@ -98,8 +98,9 @@ func cmdDBNew(cmd *cobra.Command, args []string) {
	initConfOnce()
	name := args[0]
+	migrationsPath := conf.relPath(conf.MigrationsPath)

-	m, err := migrate.FindMigrations(conf.MigrationsPath)
+	m, err := migrate.FindMigrations(migrationsPath)
	if err != nil {
		log.Fatalf("ERR error loading migrations: %s", err)
	}
@@ -107,8 +108,8 @@ func cmdDBNew(cmd *cobra.Command, args []string) {
	mname := fmt.Sprintf("%d_%s.sql", len(m), name)

	// Write new migration
-	mpath := filepath.Join(conf.MigrationsPath, mname)
-	mfile, err := os.OpenFile(mpath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0666)
+	mpath := filepath.Join(migrationsPath, mname)
+	mfile, err := os.OpenFile(mpath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
	if err != nil {
		log.Fatalf("ERR %s", err)
	}
@@ -131,7 +132,7 @@ func cmdDBMigrate(cmd *cobra.Command, args []string) {
	initConfOnce()
	dest := args[0]

-	conn, err := initDB(conf)
+	conn, err := initDB(conf, true)
	if err != nil {
		log.Fatalf("ERR failed to connect to database: %s", err)
	}
@@ -144,7 +145,7 @@ func cmdDBMigrate(cmd *cobra.Command, args []string) {
	m.Data = getMigrationVars()

-	err = m.LoadMigrations(path.Join(conf.ConfigPathUsed(), conf.MigrationsPath))
+	err = m.LoadMigrations(conf.relPath(conf.MigrationsPath))
	if err != nil {
		log.Fatalf("ERR failed to load migrations: %s", err)
	}
@@ -223,7 +224,7 @@ func cmdDBMigrate(cmd *cobra.Command, args []string) {
func cmdDBStatus(cmd *cobra.Command, args []string) {
	initConfOnce()

-	db, err := initDB(conf)
+	db, err := initDB(conf, true)
	if err != nil {
		log.Fatalf("ERR failed to connect to database: %s", err)
	}
@@ -236,7 +237,7 @@ func cmdDBStatus(cmd *cobra.Command, args []string) {
	m.Data = getMigrationVars()

-	err = m.LoadMigrations(conf.MigrationsPath)
+	err = m.LoadMigrations(conf.relPath(conf.MigrationsPath))
	if err != nil {
		log.Fatalf("ERR failed to load migrations: %s", err)
	}
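Two details worth noting in the migration changes above: O_EXCL means an existing migration file is never overwritten, and the mode change from 0666 to 0600 keeps new migrations owner-readable only. A standalone sketch (the file name is invented):

	package main

	import (
		"fmt"
		"log"
		"os"
	)

	func main() {
		// O_EXCL makes the create fail if the file already exists,
		// which protects an existing migration from being clobbered.
		f, err := os.OpenFile("1_create_users.sql", os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
		if err != nil {
			log.Fatal(err) // e.g. "file exists" on a second run
		}
		defer f.Close()

		fmt.Fprintln(f, "-- Write your migrate up statements here")
	}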

(modified file)

@@ -13,9 +13,10 @@ import (
	"strconv"
	"strings"

-	"github.com/brianvoe/gofakeit"
+	"github.com/brianvoe/gofakeit/v5"
	"github.com/dop251/goja"
	"github.com/dosco/super-graph/core"
+	"github.com/gosimple/slug"
	"github.com/spf13/cobra"
)

@@ -28,19 +29,19 @@ func cmdDBSeed(cmd *cobra.Command, args []string) {
	conf.Production = false

-	db, err = initDB(conf)
+	db, err = initDB(conf, true)
	if err != nil {
		log.Fatalf("ERR failed to connect to database: %s", err)
	}

-	sfile := path.Join(conf.ConfigPathUsed(), conf.SeedFile)
+	sfile := path.Join(conf.cpath, conf.SeedFile)

	b, err := ioutil.ReadFile(sfile)
	if err != nil {
		log.Fatalf("ERR failed to read seed file %s: %s", sfile, err)
	}

-	sg, err = core.NewSuperGraph(conf, db)
+	sg, err = core.NewSuperGraph(&conf.Core, db)
	if err != nil {
		log.Fatalf("ERR failed to initialize Super Graph: %s", err)
	}
@@ -61,6 +62,10 @@ func cmdDBSeed(cmd *cobra.Command, args []string) {
	setFakeFuncs(fake)
	vm.Set("fake", fake)

+	util := vm.NewObject()
+	setUtilFuncs(util)
+	vm.Set("util", util)

	_, err = vm.RunScript("seed.js", string(b))
	if err != nil {
		log.Fatalf("ERR failed to execute script: %s", err)
@@ -232,6 +237,10 @@ func imageURL(width int, height int) string {
	return fmt.Sprintf("https://picsum.photos/%d/%d?%d", width, height, rand.Intn(5000))
}

+func getRandValue(values []string) string {
+	return values[rand.Intn(len(values))]
+}

//nolint: errcheck
func setFakeFuncs(f *goja.Object) {
	gofakeit.Seed(0)
@@ -259,7 +268,6 @@ func setFakeFuncs(f *goja.Object) {
	f.Set("country_abr", gofakeit.CountryAbr)
	f.Set("state", gofakeit.State)
	f.Set("state_abr", gofakeit.StateAbr)
-	f.Set("status_code", gofakeit.StatusCode)
	f.Set("street", gofakeit.Street)
	f.Set("street_name", gofakeit.StreetName)
	f.Set("street_number", gofakeit.StreetNumber)
@@ -282,12 +290,10 @@ func setFakeFuncs(f *goja.Object) {
	f.Set("beer_yeast", gofakeit.BeerYeast)

	// Cars
-	f.Set("vehicle", gofakeit.Vehicle)
-	f.Set("vehicle_type", gofakeit.VehicleType)
+	f.Set("car", gofakeit.Car)
+	f.Set("car_type", gofakeit.CarType)
	f.Set("car_maker", gofakeit.CarMaker)
	f.Set("car_model", gofakeit.CarModel)
-	f.Set("fuel_type", gofakeit.FuelType)
-	f.Set("transmission_gear_type", gofakeit.TransmissionGearType)

	// Text
	f.Set("word", gofakeit.Word)
@@ -315,7 +321,6 @@ func setFakeFuncs(f *goja.Object) {
	f.Set("domain_suffix", gofakeit.DomainSuffix)
	f.Set("ipv4_address", gofakeit.IPv4Address)
	f.Set("ipv6_address", gofakeit.IPv6Address)
-	f.Set("simple_status_code", gofakeit.SimpleStatusCode)
	f.Set("http_method", gofakeit.HTTPMethod)
	f.Set("user_agent", gofakeit.UserAgent)
	f.Set("user_agent_firefox", gofakeit.FirefoxUserAgent)
@@ -379,8 +384,8 @@ func setFakeFuncs(f *goja.Object) {
	//f.Set("language_abbreviation", gofakeit.LanguageAbbreviation)

	// File
-	f.Set("extension", gofakeit.Extension)
-	f.Set("mine_type", gofakeit.MimeType)
+	f.Set("file_extension", gofakeit.FileExtension)
+	f.Set("file_mine_type", gofakeit.FileMimeType)

	// Numbers
	f.Set("number", gofakeit.Number)
@@ -404,10 +409,16 @@ func setFakeFuncs(f *goja.Object) {
	f.Set("digit", gofakeit.Digit)
	f.Set("letter", gofakeit.Letter)
	f.Set("lexify", gofakeit.Lexify)
-	f.Set("rand_string", gofakeit.RandString)
-	f.Set("shuffle_strings", gofakeit.ShuffleStrings)
+	f.Set("rand_string", getRandValue)
	f.Set("numerify", gofakeit.Numerify)
	//f.Set("programming_language", gofakeit.ProgrammingLanguage)
+}
+
+//nolint: errcheck
+func setUtilFuncs(f *goja.Object) {
+	// Slugs
+	f.Set("make_slug", slug.Make)
+	f.Set("make_slug_lang", slug.MakeLang)
+	f.Set("shuffle_strings", gofakeit.ShuffleStrings)
}
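The plumbing above works because goja lets Go functions be exposed directly to the JavaScript seed script. A minimal, self-contained sketch of the same pattern (the script source here is inline and illustrative; the real service runs seed.js):

	package main

	import (
		"fmt"

		"github.com/dop251/goja"
		"github.com/gosimple/slug"
	)

	func main() {
		vm := goja.New()

		// Expose a Go function to the script as util.make_slug,
		// mirroring setUtilFuncs above.
		util := vm.NewObject()
		util.Set("make_slug", slug.Make) //nolint: errcheck
		vm.Set("util", util)             //nolint: errcheck

		v, err := vm.RunString(`util.make_slug("Héllo Wörld!")`)
		if err != nil {
			panic(err)
		}
		fmt.Println(v.String()) // e.g. "hello-world"
	}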

(modified file)

@@ -19,16 +19,12 @@ func cmdServ(cmd *cobra.Command, args []string) {
	initWatcher()

-	db, err = initDB(conf)
+	db, err = initDB(conf, true)
	if err != nil {
		fatalInProd(err, "failed to connect to database")
	}

-	// if conf != nil && db != nil {
-	// 	initResolvers()
-	// }
-	sg, err = core.NewSuperGraph(conf, db)
+	sg, err = core.NewSuperGraph(&conf.Core, db)
	if err != nil {
		fatalInProd(err, "failed to initialize Super Graph")
	}
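Passing &conf.Core here is the point of the refactor: the service layer now hands the embeddable core package only the core.Config it needs. A hedged sketch of using that core API directly (connection string and query are placeholders; field and method names follow the usage visible elsewhere in this diff):

	package main

	import (
		"context"
		"database/sql"
		"fmt"

		"github.com/dosco/super-graph/core"
		_ "github.com/jackc/pgx/v4/stdlib"
	)

	func main() {
		db, err := sql.Open("pgx", "postgres://postgres:@localhost:5432/example_db")
		if err != nil {
			panic(err)
		}

		sg, err := core.NewSuperGraph(&core.Config{}, db)
		if err != nil {
			panic(err)
		}

		res, err := sg.GraphQL(context.Background(), `query { users { id } }`, nil)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(res.Data)) // raw JSON result
	}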

internal/serv/config.go (new file, 120 lines)

@@ -0,0 +1,120 @@
package serv

import (
	"fmt"
	"os"
	"path"
	"path/filepath"
	"strings"

	"github.com/spf13/viper"
)

// ReadInConfig function reads in the config file for the environment specified in the GO_ENV
// environment variable. This is the best way to create a new Super Graph config.
func ReadInConfig(configFile string) (*Config, error) {
	cpath := path.Dir(configFile)
	cfile := path.Base(configFile)
	vi := newViper(cpath, cfile)

	if err := vi.ReadInConfig(); err != nil {
		return nil, err
	}

	inherits := vi.GetString("inherits")

	if len(inherits) != 0 {
		vi = newViper(cpath, inherits)

		if err := vi.ReadInConfig(); err != nil {
			return nil, err
		}

		if vi.IsSet("inherits") {
			return nil, fmt.Errorf("inherited config (%s) cannot itself inherit (%s)",
				inherits,
				vi.GetString("inherits"))
		}

		vi.SetConfigName(cfile)

		if err := vi.MergeInConfig(); err != nil {
			return nil, err
		}
	}

	c := &Config{cpath: cpath, vi: vi}

	if err := vi.Unmarshal(&c); err != nil {
		return nil, fmt.Errorf("failed to decode config, %v", err)
	}

	return c, nil
}

func newViper(configPath, configFile string) *viper.Viper {
	vi := viper.New()

	vi.SetEnvPrefix("SG")
	vi.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
	vi.AutomaticEnv()

	vi.AddConfigPath(configPath)
	vi.SetConfigName(configFile)
	vi.AddConfigPath("./config")

	vi.SetDefault("host_port", "0.0.0.0:8080")
	vi.SetDefault("web_ui", false)
	vi.SetDefault("enable_tracing", false)
	vi.SetDefault("auth_fail_block", "always")
	vi.SetDefault("seed_file", "seed.js")

	vi.SetDefault("database.type", "postgres")
	vi.SetDefault("database.host", "localhost")
	vi.SetDefault("database.port", 5432)
	vi.SetDefault("database.user", "postgres")
	vi.SetDefault("database.schema", "public")

	vi.SetDefault("env", "development")

	vi.BindEnv("env", "GO_ENV") //nolint: errcheck
	vi.BindEnv("host", "HOST")  //nolint: errcheck
	vi.BindEnv("port", "PORT")  //nolint: errcheck

	vi.SetDefault("auth.rails.max_idle", 80)
	vi.SetDefault("auth.rails.max_active", 12000)

	return vi
}

func GetConfigName() string {
	if len(os.Getenv("GO_ENV")) == 0 {
		return "dev"
	}

	ge := strings.ToLower(os.Getenv("GO_ENV"))

	switch {
	case strings.HasPrefix(ge, "pro"):
		return "prod"
	case strings.HasPrefix(ge, "sta"):
		return "stage"
	case strings.HasPrefix(ge, "tes"):
		return "test"
	case strings.HasPrefix(ge, "dev"):
		return "dev"
	}

	return ge
}

func (c *Config) relPath(p string) string {
	if filepath.IsAbs(p) {
		return p
	}

	return path.Join(c.cpath, p)
}
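A usage sketch of the config loader above, under assumed file names: with GO_ENV=production and a prod config that sets `inherits: dev`, the dev file is read first and the prod file is merged on top. Note the package is internal, so this only compiles inside the super-graph module:

	package main

	import (
		"log"
		"path"

		"github.com/dosco/super-graph/internal/serv" // internal: module-local only
	)

	func main() {
		// e.g. GO_ENV=production resolves to "./config/prod" and viper
		// picks the extension (yml, json, toml, ...).
		conf, err := serv.ReadInConfig(path.Join("./config", serv.GetConfigName()))
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("app: %s, production: %t", conf.AppName, conf.Production)
	}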

(modified file)

@@ -6,11 +6,9 @@ import (
	"io"
	"io/ioutil"
	"net/http"
-	"strings"

-	"github.com/dosco/super-graph/cmd/internal/serv/internal/auth"
-	"github.com/dosco/super-graph/config"
	"github.com/dosco/super-graph/core"
+	"github.com/dosco/super-graph/internal/serv/internal/auth"
	"github.com/rs/cors"
	"go.uber.org/zap"
)

@@ -31,7 +29,7 @@ type gqlReq struct {
}

type errorResp struct {
-	Error error `json:"error"`
+	Error string `json:"error"`
}

func apiV1Handler() http.Handler {
@@ -54,16 +52,17 @@ func apiV1Handler() http.Handler {
func apiV1(w http.ResponseWriter, r *http.Request) {
	ct := r.Context()

+	w.Header().Set("Content-Type", "application/json")

	//nolint: errcheck
	if conf.AuthFailBlock && !auth.IsAuth(ct) {
-		renderErr(w, errUnauthorized, nil)
+		renderErr(w, errUnauthorized)
		return
	}

	b, err := ioutil.ReadAll(io.LimitReader(r.Body, maxReadBytes))
	if err != nil {
-		renderErr(w, err, nil)
+		renderErr(w, err)
		return
	}
	defer r.Body.Close()

@@ -72,54 +71,57 @@ func apiV1(w http.ResponseWriter, r *http.Request) {
	err = json.Unmarshal(b, &req)

	if err != nil {
-		renderErr(w, err, nil)
-		return
-	}
-
-	if strings.EqualFold(req.OpName, introspectionQuery) {
-		introspect(w)
+		renderErr(w, err)
		return
	}

+	doLog := true
	res, err := sg.GraphQL(ct, req.Query, req.Vars)

-	if conf.LogLevel() >= config.LogLevelDebug {
-		log.Printf("DBG query:\n%s\nsql:\n%s", req.Query, res.SQL())
+	if !conf.Production && res.QueryName() == introspectionQuery {
+		doLog = false
	}

-	if err != nil {
-		renderErr(w, err, res)
-		return
+	if doLog && logLevel >= LogLevelDebug {
+		log.Printf("DBG query %s: %s", res.QueryName(), res.SQL())
	}

+	if err == nil {
+		if len(conf.CacheControl) != 0 && res.Operation() == core.OpQuery {
+			w.Header().Set("Cache-Control", conf.CacheControl)
+		}
+		//nolint: errcheck
		json.NewEncoder(w).Encode(res)

-		if conf.LogLevel() >= config.LogLevelInfo {
+		if doLog && logLevel >= LogLevelInfo {
			zlog.Info("success",
-				zap.String("op", res.Operation()),
+				zap.String("op", res.OperationName()),
				zap.String("name", res.QueryName()),
				zap.String("role", res.Role()),
			)
		}
+	} else {
+		renderErr(w, err)
+
+		if doLog && logLevel >= LogLevelInfo {
+			zlog.Error("error",
+				zap.String("op", res.OperationName()),
+				zap.String("name", res.QueryName()),
+				zap.String("role", res.Role()),
+				zap.Error(err),
+			)
+		}
+	}
}

//nolint: errcheck
-func renderErr(w http.ResponseWriter, err error, res *core.Result) {
+func renderErr(w http.ResponseWriter, err error) {
	if err == errUnauthorized {
		w.WriteHeader(http.StatusUnauthorized)
	}

-	json.NewEncoder(w).Encode(&errorResp{err})
-
-	if conf.LogLevel() >= config.LogLevelError {
-		if res != nil {
-			zlog.Error(err.Error(),
-				zap.String("op", res.Operation()),
-				zap.String("name", res.QueryName()),
-				zap.String("role", res.Role()),
-			)
-		} else {
-			zlog.Error(err.Error())
-		}
-	}
+	json.NewEncoder(w).Encode(errorResp{err.Error()})
}
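The errorResp change from `Error error` to `Error string` matters because most error values marshal to an empty JSON object: the error interface exposes no exported fields. A standalone sketch of the difference:

	package main

	import (
		"encoding/json"
		"errors"
		"fmt"
	)

	func main() {
		err := errors.New("record not found")

		bad, _ := json.Marshal(struct {
			Error error `json:"error"`
		}{err})
		good, _ := json.Marshal(struct {
			Error string `json:"error"`
		}{err.Error()})

		fmt.Println(string(bad))  // {"error":{}}
		fmt.Println(string(good)) // {"error":"record not found"}
	}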

internal/serv/init.go (new file, 216 lines)

@@ -0,0 +1,216 @@
package serv

import (
	"crypto/tls"
	"crypto/x509"
	"database/sql"
	"errors"
	"fmt"
	"io/ioutil"
	"path"
	"path/filepath"
	"strings"
	"time"

	"github.com/jackc/pgx/v4"
	"github.com/jackc/pgx/v4/stdlib"
	//_ "github.com/jackc/pgx/v4/stdlib"
)

const (
	PEM_SIG = "--BEGIN "
)

func initConf() (*Config, error) {
	cp, err := filepath.Abs(confPath)
	if err != nil {
		return nil, err
	}

	c, err := ReadInConfig(path.Join(cp, GetConfigName()))
	if err != nil {
		return nil, err
	}

	switch c.LogLevel {
	case "debug":
		logLevel = LogLevelDebug
	case "error":
		logLevel = LogLevelError
	case "warn":
		logLevel = LogLevelWarn
	case "info":
		logLevel = LogLevelInfo
	default:
		logLevel = LogLevelNone
	}

	// Auths: validate and sanitize
	am := make(map[string]struct{})

	for i := 0; i < len(c.Auths); i++ {
		a := &c.Auths[i]
		a.Name = sanitize(a.Name)

		if _, ok := am[a.Name]; ok {
			c.Auths = append(c.Auths[:i], c.Auths[i+1:]...)
			log.Printf("WRN duplicate auth found: %s", a.Name)
		}
		am[a.Name] = struct{}{}
	}

	// Actions: validate and sanitize
	axm := make(map[string]struct{})

	for i := 0; i < len(c.Actions); i++ {
		a := &c.Actions[i]
		a.Name = sanitize(a.Name)
		a.AuthName = sanitize(a.AuthName)

		if _, ok := axm[a.Name]; ok {
			c.Actions = append(c.Actions[:i], c.Actions[i+1:]...)
			log.Printf("WRN duplicate action found: %s", a.Name)
		}

		if _, ok := am[a.AuthName]; !ok {
			c.Actions = append(c.Actions[:i], c.Actions[i+1:]...)
			log.Printf("WRN invalid auth_name '%s' for auth: %s", a.AuthName, a.Name)
		}
		axm[a.Name] = struct{}{}
	}

	var anonFound bool

	for _, r := range c.Roles {
		if sanitize(r.Name) == "anon" {
			anonFound = true
		}
	}

	if !anonFound {
		log.Printf("WRN unauthenticated requests will be blocked. no role 'anon' defined")
		c.AuthFailBlock = false
	}

	if len(c.AllowListFile) == 0 {
		c.AllowListFile = c.relPath("./allow.list")
	}

	if c.Production {
		c.UseAllowList = true
	}

	// In anon role block all tables that are not defined in the role
	c.DefaultBlock = true

	return c, nil
}

func initDB(c *Config, useDB bool) (*sql.DB, error) {
	var db *sql.DB
	var err error

	config, _ := pgx.ParseConfig("")
	config.Host = c.DB.Host
	config.Port = c.DB.Port
	config.User = c.DB.User
	config.Password = c.DB.Password
	config.RuntimeParams = map[string]string{
		"application_name": c.AppName,
		"search_path":      c.DB.Schema,
	}

	if useDB {
		config.Database = c.DB.DBName
	}

	if c.DB.EnableTLS {
		if len(c.DB.ServerName) == 0 {
			return nil, errors.New("server_name is required")
		}
		if len(c.DB.ServerCert) == 0 {
			return nil, errors.New("server_cert is required")
		}
		if len(c.DB.ClientCert) == 0 {
			return nil, errors.New("client_cert is required")
		}
		if len(c.DB.ClientKey) == 0 {
			return nil, errors.New("client_key is required")
		}

		rootCertPool := x509.NewCertPool()
		var pem []byte
		var err error

		if strings.Contains(c.DB.ServerCert, PEM_SIG) {
			pem = []byte(c.DB.ServerCert)
		} else {
			pem, err = ioutil.ReadFile(c.relPath(c.DB.ServerCert))
		}

		if err != nil {
			return nil, fmt.Errorf("db tls: %w", err)
		}

		if ok := rootCertPool.AppendCertsFromPEM(pem); !ok {
			return nil, errors.New("db tls: failed to append pem")
		}

		clientCert := make([]tls.Certificate, 0, 1)
		var certs tls.Certificate

		if strings.Contains(c.DB.ClientCert, PEM_SIG) {
			certs, err = tls.X509KeyPair([]byte(c.DB.ClientCert), []byte(c.DB.ClientKey))
		} else {
			certs, err = tls.LoadX509KeyPair(c.relPath(c.DB.ClientCert), c.relPath(c.DB.ClientKey))
		}

		if err != nil {
			return nil, fmt.Errorf("db tls: %w", err)
		}

		clientCert = append(clientCert, certs)
		config.TLSConfig = &tls.Config{
			RootCAs:      rootCertPool,
			Certificates: clientCert,
			ServerName:   c.DB.ServerName,
		}
	}

	// switch c.LogLevel {
	// case "debug":
	// 	config.LogLevel = pgx.LogLevelDebug
	// case "info":
	// 	config.LogLevel = pgx.LogLevelInfo
	// case "warn":
	// 	config.LogLevel = pgx.LogLevelWarn
	// case "error":
	// 	config.LogLevel = pgx.LogLevelError
	// default:
	// 	config.LogLevel = pgx.LogLevelNone
	// }

	//config.Logger = NewSQLLogger(logger)

	// if c.DB.MaxRetries != 0 {
	// 	opt.MaxRetries = c.DB.MaxRetries
	// }

	// if c.DB.PoolSize != 0 {
	// 	config.MaxConns = conf.DB.PoolSize
	// }

	for i := 1; i < 10; i++ {
		db = stdlib.OpenDB(*config)
		if db == nil {
			break
		}

		time.Sleep(time.Duration(i*100) * time.Millisecond)
	}

	if err != nil {
		return nil, err
	}

	return db, nil
}
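Note that stdlib.OpenDB only builds a pool handle; it does not dial. A common alternative to the sleep loop above, sketched here under assumed names and not taken from this diff, is to verify connectivity with Ping and back off between attempts:

	package main

	import (
		"database/sql"
		"fmt"
		"time"

		_ "github.com/jackc/pgx/v4/stdlib"
	)

	// openWithRetry pings the database with a linear backoff, which is one
	// way to wait out a database that is still starting up.
	func openWithRetry(dsn string, attempts int) (*sql.DB, error) {
		db, err := sql.Open("pgx", dsn)
		if err != nil {
			return nil, err
		}

		for i := 1; i <= attempts; i++ {
			if err = db.Ping(); err == nil {
				return db, nil
			}
			time.Sleep(time.Duration(i*100) * time.Millisecond)
		}
		return nil, fmt.Errorf("database not reachable: %w", err)
	}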

(modified file)

@@ -5,11 +5,43 @@ import (
	"fmt"
	"net/http"

-	"github.com/dosco/super-graph/config"
	"github.com/dosco/super-graph/core"
)

+// Auth struct contains authentication related config values used by the Super Graph service
+type Auth struct {
+	Name          string
+	Type          string
+	Cookie        string
+	CredsInHeader bool `mapstructure:"creds_in_header"`
+
+	Rails struct {
+		Version       string
+		SecretKeyBase string `mapstructure:"secret_key_base"`
+		URL           string
+		Password      string
+		MaxIdle       int `mapstructure:"max_idle"`
+		MaxActive     int `mapstructure:"max_active"`
+		Salt          string
+		SignSalt      string `mapstructure:"sign_salt"`
+		AuthSalt      string `mapstructure:"auth_salt"`
+	}
+
+	JWT struct {
+		Provider   string
+		Secret     string
+		PubKeyFile string `mapstructure:"public_key_file"`
+		PubKeyType string `mapstructure:"public_key_type"`
+	}
+
+	Header struct {
+		Name   string
+		Value  string
+		Exists bool
+	}
+}
+
-func SimpleHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc, error) {
+func SimpleHandler(ac *Auth, next http.Handler) (http.HandlerFunc, error) {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx := r.Context()

@@ -32,7 +64,7 @@ func SimpleHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc, error)
	}, nil
}

-func HeaderHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc, error) {
+func HeaderHandler(ac *Auth, next http.Handler) (http.HandlerFunc, error) {
	hdr := ac.Header

	if len(hdr.Name) == 0 {
@@ -64,7 +96,7 @@ func HeaderHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc, error)
	}, nil
}

-func WithAuth(next http.Handler, ac *config.Auth) (http.Handler, error) {
+func WithAuth(next http.Handler, ac *Auth) (http.Handler, error) {
	var err error

	if ac.CredsInHeader {
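A sketch of wiring WithAuth around an API handler. This is hedged: the "header" Type value and the dispatch inside WithAuth are assumptions (the full body is not shown in this hunk), the header name and value are invented, and the auth package is internal to the module:

	package main

	import (
		"log"
		"net/http"

		"github.com/dosco/super-graph/internal/serv/internal/auth"
	)

	func main() {
		api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte(`{"ok":true}`)) //nolint: errcheck
		})

		ac := auth.Auth{Name: "api_key", Type: "header"} // Type value assumed
		ac.Header.Name = "X-API-Key"
		ac.Header.Value = "super-secret" // requests must send this exact value

		h, err := auth.WithAuth(api, &ac)
		if err != nil {
			log.Fatal(err)
		}
		log.Fatal(http.ListenAndServe(":8080", h))
	}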

(modified file)

@@ -7,7 +7,6 @@ import (
	"strings"

	jwt "github.com/dgrijalva/jwt-go"
-	"github.com/dosco/super-graph/config"
	"github.com/dosco/super-graph/core"
)

@@ -16,7 +15,7 @@ const (
	jwtAuth0 int = iota + 1
)

-func JwtHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc, error) {
+func JwtHandler(ac *Auth, next http.Handler) (http.HandlerFunc, error) {
	var key interface{}
	var jwtProvider int

(modified file)

@@ -9,13 +9,12 @@ import (
	"strings"

	"github.com/bradfitz/gomemcache/memcache"
-	"github.com/dosco/super-graph/config"
	"github.com/dosco/super-graph/core"
-	"github.com/dosco/super-graph/cmd/internal/serv/internal/rails"
+	"github.com/dosco/super-graph/internal/serv/internal/rails"
	"github.com/garyburd/redigo/redis"
)

-func RailsHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc, error) {
+func RailsHandler(ac *Auth, next http.Handler) (http.HandlerFunc, error) {
	ru := ac.Rails.URL

	if strings.HasPrefix(ru, "memcache:") {
@@ -29,7 +28,7 @@ func RailsHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc, error)
	return RailsCookieHandler(ac, next)
}

-func RailsRedisHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc, error) {
+func RailsRedisHandler(ac *Auth, next http.Handler) (http.HandlerFunc, error) {
	cookie := ac.Cookie

	if len(cookie) == 0 {
@@ -85,7 +84,7 @@ func RailsRedisHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc, er
	}, nil
}

-func RailsMemcacheHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc, error) {
+func RailsMemcacheHandler(ac *Auth, next http.Handler) (http.HandlerFunc, error) {
	cookie := ac.Cookie

	if len(cookie) == 0 {
@@ -128,7 +127,7 @@ func RailsMemcacheHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc,
	}, nil
}

-func RailsCookieHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc, error) {
+func RailsCookieHandler(ac *Auth, next http.Handler) (http.HandlerFunc, error) {
	cookie := ac.Cookie
	if len(cookie) == 0 {
		return nil, fmt.Errorf("no auth.cookie defined")
@@ -159,7 +158,7 @@ func RailsCookieHandler(ac *config.Auth, next http.Handler) (http.HandlerFunc, e
	}, nil
}

-func railsAuth(ac *config.Auth) (*rails.Auth, error) {
+func railsAuth(ac *Auth) (*rails.Auth, error) {
	secret := ac.Rails.SecretKeyBase
	if len(secret) == 0 {
		return nil, errors.New("no auth.rails.secret_key_base defined")

Some files were not shown because too many files have changed in this diff.