Compare commits


85 Commits

Author SHA1 Message Date
5cbb1ab048 Basic debian packaging 2020-06-15 18:50:47 +02:00
7557a4c29c fix: issue with fragments related crash 2020-06-15 10:46:52 -04:00
dd4accfdd2 fix: implement various deepsource suggestions 2020-06-15 10:16:47 -04:00
06214a3850 feat: add support for polymorphic database relationships 2020-06-15 03:09:18 -04:00
7b5548a2c6 fix: jit failing on anon queries 2020-06-10 00:38:46 -04:00
00cfa251a2 fix: issue with jit performance 2020-06-09 19:06:16 -04:00
9f35f85857 feat: add support for inline fragments 2020-06-09 02:13:51 -04:00
f4f6420a30 fix: race in node pool object cleanup 2020-06-08 19:28:22 -04:00
6716b97a39 fix: duplicate fragment crash issue 2020-06-07 17:03:09 -04:00
7169dd65f5 docs: add firebase auth and fragments 2020-06-07 15:58:00 -04:00
b26cdbf960 fix: update allow list parser to support fragments 2020-06-07 13:02:57 -04:00
33f3fefbf3 feat: add support for graphql fragments 2020-06-06 17:52:29 -04:00
a775f9475b feat: Add firebase auth support and JWT audience check (#71) 2020-06-05 18:48:17 -04:00
bd157290f6 fix: bug with parsing variables in roles_query 2020-06-04 21:55:52 -04:00
82cc712a93 fix: bug with shared pointer in new jit mode 2020-06-03 18:19:07 -04:00
0ce129de14 fix: remove upx install from dockerfile 2020-06-03 01:42:10 -04:00
1a15e433ba feat: make query preparation a JIT operation to improve startup time 2020-06-03 01:03:12 -04:00
816121fbcf fix: issue with early return in prepare statement function 2020-05-31 18:10:50 -04:00
e82e97a9d7 fix: issues caught by fuzzer 2020-05-31 14:11:28 -07:00
6102f1d66e fix: infinite loop on missing allow.list issue 2020-05-30 23:36:44 -07:00
701b2f3bfd fix: remove left-over debug prints 2020-05-29 02:27:53 -04:00
bac89d8301 fix: I will not prematurely optimize 2020-05-29 02:23:54 -04:00
b3dfb2bc7b fix: improve fuzzing coverage for jsn package 2020-05-29 00:08:37 -04:00
1fb7f0e6c8 BREAKING CHANGE: super-graph/core now defaults to allow all in anon role 2020-05-28 00:07:01 -04:00
2241364d00 fix: rewrite the sql args and variables codebase to use expression values 2020-05-26 19:41:28 -04:00
f63e270c73 Merge branch 'master' of github.com:dosco/super-graph 2020-05-24 17:44:34 -04:00
ccab367351 fix: make array variables work again 2020-05-24 17:43:54 -04:00
67ddc148a9 fix: go get in start doc (#68) 2020-05-24 16:44:17 -04:00
31afdac3af docs: add telemetry docs 2020-05-24 10:44:00 -04:00
1344246287 fix: add http tracing so end-to-end tracing is possible 2020-05-24 02:24:24 -04:00
7b25873438 fix: update embedded static assets 2020-05-23 16:53:39 -04:00
d572b4f753 fix: allow unauthenticated operations in seed script 2020-05-23 16:37:27 -04:00
cd69b5a78f docs: add opencensus support to the README 2020-05-23 11:50:38 -04:00
01ad9b71ba feat: add opencensus tracing and metrics support 2020-05-23 11:43:57 -04:00
b64daaf034 fix: issue with breaking changes in gofakeit 2020-05-22 16:52:42 -04:00
c7837bf758 feat: add open opencensus telemetry support 2020-05-22 16:49:54 -04:00
448e6bb72a fix: add config for per role operation blocking by type 2020-05-22 02:24:22 -04:00
f7d3760af7 feat: re-format graphql queries saved in allow.list 2020-05-22 02:24:22 -04:00
2acb05741e fix: few typos (#67) 2020-05-21 01:07:54 -04:00
8104ee9df2 fix: update description in the README 2020-05-20 09:42:10 -04:00
ab8566df03 fix: postgres schema name config value is not used 2020-05-20 00:03:05 -04:00
94fa51ffb2 fix: add color to logo for dark mode 2020-05-18 00:32:53 -04:00
1c823e4353 feat: add cloudbuild.yaml generation for new apps 2020-05-17 19:16:40 -04:00
f6ce0c102b docs: new website 2020-05-17 03:12:09 -04:00
a1a47c905d fix: update discord link (#66) 2020-05-16 02:05:59 -04:00
d3e32f944a fix: json content type breaks web ui 2020-05-11 21:09:12 -04:00
3bf9f02a9f fix: bug with reading config file by name 2020-05-10 11:26:48 -04:00
533c767e1d fix: benchmark was failing. Also added a benchmark for the chirino/graphql version gql parser to compare results. (#62) 2020-05-07 10:48:01 -04:00
84d55dbc8a feat: remove data from variables saved to allow.list 2020-05-07 10:27:40 -04:00
5aafff6310 chore: add IntelliJ editor project files to the .gitignore list. (#61) 2020-05-07 10:24:29 -04:00
840aaf64ff fix: return response as application/json (#59) 2020-05-07 10:24:12 -04:00
7bbb56a328 fix get functions parameters without name (#60) 2020-05-07 03:04:37 -04:00
394b08b2fe chore: update changelog 2020-05-03 21:01:16 -04:00
842252f9e2 fix: fix issue with skipping prepared statements for some roles on error 2020-05-03 20:52:26 -04:00
279f5616d1 fix: fix for issues reported by deepsource 2020-05-03 16:08:34 -04:00
04bb88f74b Add .deepsource.toml 2020-05-03 19:57:42 +00:00
38ed6dbc5f fix: bug with single quote escape in production mode 2020-05-01 02:20:45 -04:00
ec2f8d0c58 chore: pick up latest version of chirino/graphql module for its schema api simplifications. (#58) 2020-05-01 02:03:35 -04:00
9b51065414 fix: grammatical errors (#57) 2020-04-25 09:57:59 -04:00
1a70603b1a feat: add option to set the cache-control header 2020-04-24 20:45:03 -04:00
505335d872 feat: add config to set api endpoint prefix 2020-04-24 01:23:35 -04:00
bdc8c65a09 fix: fix issues with code examples 2020-04-23 21:25:09 -04:00
03fe29b088 fix: improve documentation of the config object 2020-04-23 21:25:09 -04:00
5857efdd70 fix: correct spellings and language in README.md (#55)
* Update README.md

* Code review: fix go get again
2020-04-23 21:01:00 -04:00
bdffe7b14e fix: add a benchmark around the GraphQL api function 2020-04-23 01:42:16 -04:00
ae7cde0433 feat: add support for single argument Postgres functions 2020-04-22 20:51:14 -04:00
6293d37e73 fix: upgrade packages in the web ui 2020-04-21 21:05:14 -04:00
7a3fe5a1df fix: Only include the bulk update arguments on the plur… (#54)
* introspection fix: Only include the bulk update arguments on the plural versions of the fields.

* Fixes error graphql: Unknown type "String!"
2020-04-21 10:41:28 -04:00
2a32c179ba feat: improve the generated introspection schema and avoid the chirino/graphql api leaking through the core api. (#53) 2020-04-21 10:03:05 -04:00
0a02bde219 fix: block introspection queries in production mode 2020-04-20 02:06:58 -04:00
966aa9ce8c feat: add some initial introspection support. (#52) 2020-04-19 23:48:49 -04:00
6f18d56ca0 fix: update queries generate invalid sql 2020-04-19 13:40:14 -04:00
c400461835 fix: prepared statements not working in prod mode 2020-04-19 12:54:37 -04:00
a6691de1b7 fix: remove multi-line graphql query in log 2020-04-19 02:50:09 -04:00
e6934cda02 fix: vars not sanitized in roles_query 2020-04-18 17:46:40 -04:00
4cf7956ff5 feat: add cockroachdb support. (#50)
This PR changes the generated SQL so that it's also compatible with CockroachDB.
Notable changes:
* use `SELECT to_jsonb("__sr_0".*)`  instead of `SELECT to_jsonb("__sr_0")`
* don't use `json_populate_record`, use the `CAST` and `->>` instead.  For example:

  instead of: `SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t`

  do: `CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i`

This PR also adds some integration tests against an actual database instance.  If you have the cockroachdb binary installed on your PATH,
the test suite will startup a temporary cockroachdb instance on a random port to test against.  It is stopped and the tmp data files are deleted once the test ends.  It will also run the integration tests against database
pointed at by your `SG_POSTGRESQL_TEST_URL` environment variable if it’s set.

Also includes some small formatting changes introduced by `gofmt -w .`
2020-04-18 17:42:17 -04:00
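The integration-test gating described above can be pictured with a short sketch. This is an illustration, not the PR's actual test code: the `SG_POSTGRESQL_TEST_URL` variable and the cockroach-on-PATH check come from the PR text; everything else is assumed.

```go
package integration_test

import (
	"database/sql"
	"os"
	"os/exec"
	"testing"

	_ "github.com/jackc/pgx/v4/stdlib"
)

func TestIntegration(t *testing.T) {
	// Run against an explicit database when SG_POSTGRESQL_TEST_URL is set.
	if url := os.Getenv("SG_POSTGRESQL_TEST_URL"); url != "" {
		db, err := sql.Open("pgx", url)
		if err != nil {
			t.Fatal(err)
		}
		defer db.Close()
		// ... run the shared suite against db ...
	}

	// Skip the CockroachDB leg if the binary is not on PATH.
	if _, err := exec.LookPath("cockroach"); err != nil {
		t.Skip("cockroach binary not found on PATH")
	}
	// ... start a temporary single-node instance on a random port,
	// run the suite, then stop it and remove its tmp data dir ...
}
```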
5356455904 Fix issue with relative paths and config files 2020-04-17 10:56:26 -04:00
074aded5c0 Upgrade UI and app templates 2020-04-16 10:27:10 -04:00
c7557f761f Fix broken build 2020-04-16 01:28:55 -04:00
09d6460a13 Make go get to install work. 2020-04-16 00:26:32 -04:00
40c99e9ef3 Fix issue with missing build variables 2020-04-13 00:50:54 -04:00
75ff5510d4 Fix issue with failing db cmds 2020-04-13 00:43:18 -04:00
1370d24985 Fix issue with make install 2020-04-12 20:35:31 -04:00
ef50c1957b Fix CloudRun connection issue 2020-04-12 10:09:37 -04:00
41ea6ef6f5 Fix readme add library usage 2020-04-11 16:41:10 -04:00
223 changed files with 24117 additions and 7764 deletions


@ -5,18 +5,18 @@ info:
repository_url: https://github.com/dosco/super-graph repository_url: https://github.com/dosco/super-graph
options: options:
commits: commits:
# filters: filters:
# Type: Type:
# - feat - feat
# - fix - fix
# - perf - perf
# - refactor - refactor
commit_groups: commit_groups:
# title_maps: title_maps:
# feat: Features feat: Features
# fix: Bug Fixes fix: Bug Fixes
# perf: Performance Improvements perf: Performance Improvements
# refactor: Code Refactoring refactor: Code Refactoring
header: header:
pattern: "^((\\w+)\\s.*)$" pattern: "^((\\w+)\\s.*)$"
pattern_maps: pattern_maps:

.deepsource.toml Normal file

@ -0,0 +1,8 @@
version = 1
[[analyzers]]
name = "go"
enabled = true
[analyzers.meta]
import_path = "github.com/dosco/super-graph"

.gitignore vendored

@ -23,16 +23,20 @@
/tmp/runner-build /tmp/runner-build
/demo/tmp /demo/tmp
.idea
*.iml
.vscode .vscode
.DS_Store .DS_Store
.swp .swp
.release .release
main main
super-graph super-graph
supergraph
*-fuzz.zip *-fuzz.zip
crashers crashers
suppressions suppressions
release release
.gofuzz .gofuzz
*-fuzz.zip *-fuzz.zip
*.test
.firebase


@ -7,7 +7,7 @@ rules:
- name: run - name: run
match: \.go$ match: \.go$
ignore: web|examples|docs|_test\.go$ ignore: web|examples|docs|_test\.go$
command: go run cmd/main.go serv command: go run main.go serv
- name: test - name: test
match: _test\.go$ match: _test\.go$
command: go test -cover {PKG} command: go test -cover {PKG}


@ -1,401 +1,371 @@
<a name="unreleased"></a> <a name="unreleased"></a>
## [Unreleased] ## [Unreleased]
### Add
- Add config driven custom table relationships
- Add support for `websearch_to_tsquery` in PG 11
### Create <a name="v0.13.22"></a>
- Create CODE_OF_CONDUCT.md ## [v0.13.22] - 2020-05-01
### Fix <a name="v0.13.21"></a>
- Fix bug with remote join example ## [v0.13.21] - 2020-04-24
- Fix grammar / syntax
### Update <a name="v0.13.20"></a>
- Update issue templates ## [v0.13.20] - 2020-04-24
- Update CONTRIBUTING.md
- Update issue templates <a name="v0.13.19"></a>
- Update feature_request.md ## [v0.13.19] - 2020-04-23
<a name="v0.13.18"></a>
## [v0.13.18] - 2020-04-23
<a name="v0.13.17"></a>
## [v0.13.17] - 2020-04-22
<a name="v0.13.16"></a>
## [v0.13.16] - 2020-04-21
### Features
- feat: improve the generated introspection schema and avoid the chirino/graphql api leaking through the core api. ([#53](https://github.com/dosco/super-graph/issues/53))
<a name="v0.13.15"></a>
## [v0.13.15] - 2020-04-20
<a name="v0.13.14"></a>
## [v0.13.14] - 2020-04-19
<a name="v0.13.13"></a>
## [v0.13.13] - 2020-04-19
<a name="v0.13.12"></a>
## [v0.13.12] - 2020-04-19
<a name="v0.13.11"></a>
## [v0.13.11] - 2020-04-18
<a name="v0.13.10"></a>
## [v0.13.10] - 2020-04-17
<a name="v0.13.9"></a>
## [v0.13.9] - 2020-04-16
<a name="v0.13.8"></a>
## [v0.13.8] - 2020-04-16
<a name="v0.13.7"></a>
## [v0.13.7] - 2020-04-16
<a name="v0.13.6"></a>
## [v0.13.6] - 2020-04-13
<a name="v0.13.5"></a>
## [v0.13.5] - 2020-04-13
<a name="v0.13.4"></a>
## [v0.13.4] - 2020-04-12
<a name="v0.13.3"></a>
## [v0.13.3] - 2020-04-12
<a name="v0.13.2"></a>
## [v0.13.2] - 2020-04-11
<a name="v0.13.1"></a>
## [v0.13.1] - 2020-04-11
<a name="v0.13.0"></a>
## [v0.13.0] - 2020-04-10
<a name="v0.12.49"></a>
## [v0.12.49] - 2020-04-01
<a name="v0.12.48"></a>
## [v0.12.48] - 2020-03-31
<a name="v0.12.47"></a>
## [v0.12.47] - 2020-03-30
<a name="v0.12.46"></a>
## [v0.12.46] - 2020-03-21
<a name="v0.12.45"></a>
## [v0.12.45] - 2020-03-18
<a name="v0.12.44"></a>
## [v0.12.44] - 2020-03-16
<a name="v0.12.43"></a>
## [v0.12.43] - 2020-03-16
<a name="v0.12.42"></a>
## [v0.12.42] - 2020-03-14
<a name="v0.12.41"></a>
## [v0.12.41] - 2020-03-06
<a name="v0.12.40"></a>
## [v0.12.40] - 2020-03-06
<a name="v0.12.39"></a>
## [v0.12.39] - 2020-03-06
<a name="v0.12.38"></a>
## [v0.12.38] - 2020-03-05
<a name="v0.12.37"></a>
## [v0.12.37] - 2020-03-04
<a name="v0.12.36"></a>
## [v0.12.36] - 2020-03-04
<a name="v0.12.35"></a>
## [v0.12.35] - 2020-03-03
<a name="v0.12.34"></a>
## [v0.12.34] - 2020-03-03
<a name="v0.12.33"></a>
## [v0.12.33] - 2020-02-29
<a name="v0.12.32"></a>
## [v0.12.32] - 2020-02-24
### Bug Fixes
- fix "Try the demo app" in docs ([#38](https://github.com/dosco/super-graph/issues/38))
<a name="v0.12.31"></a>
## [v0.12.31] - 2020-02-23
<a name="v0.12.30"></a>
## [v0.12.30] - 2020-02-23
<a name="v0.12.29"></a>
## [v0.12.29] - 2020-02-21
<a name="v0.12.28"></a>
## [v0.12.28] - 2020-02-20
<a name="v0.12.27"></a>
## [v0.12.27] - 2020-02-19
<a name="v0.12.26"></a>
## [v0.12.26] - 2020-02-11
<a name="v0.12.25"></a>
## [v0.12.25] - 2020-02-10
<a name="v0.12.24"></a>
## [v0.12.24] - 2020-02-03
<a name="v0.12.23"></a>
## [v0.12.23] - 2020-02-02
<a name="v0.12.22"></a>
## [v0.12.22] - 2020-02-01
<a name="v0.12.21"></a>
## [v0.12.21] - 2020-01-31
<a name="v0.12.20"></a>
## [v0.12.20] - 2020-01-28
<a name="v0.12.19"></a>
## [v0.12.19] - 2020-01-26
<a name="v0.12.18"></a>
## [v0.12.18] - 2020-01-20
<a name="v0.12.17"></a>
## [v0.12.17] - 2020-01-20
<a name="v0.12.16"></a>
## [v0.12.16] - 2020-01-19
<a name="v0.12.15"></a>
## [v0.12.15] - 2020-01-17
<a name="v0.12.14"></a>
## [v0.12.14] - 2020-01-17
<a name="v0.12.13"></a>
## [v0.12.13] - 2020-01-16
<a name="v0.12.12"></a>
## [v0.12.12] - 2020-01-15
<a name="v0.12.11"></a>
## [v0.12.11] - 2020-01-14
<a name="v0.12.10"></a>
## [v0.12.10] - 2020-01-14
<a name="v0.12.9"></a>
## [v0.12.9] - 2020-01-14
<a name="v0.12.8"></a>
## [v0.12.8] - 2020-01-13
<a name="v0.12.7"></a>
## [v0.12.7] - 2020-01-11
### Pull Requests
- Merge pull request [#22](https://github.com/dosco/super-graph/issues/22) from bhaskarmurthy/fix-grammer-syntax
<a name="v0.12.6"></a> <a name="v0.12.6"></a>
## [v0.12.6] - 2019-12-02 ## [v0.12.6] - 2019-12-02
### Add
- Add support for `websearch_to_tsquery` in PG 11
<a name="v0.12.5"></a> <a name="v0.12.5"></a>
## [v0.12.5] - 2019-11-30 ## [v0.12.5] - 2019-11-30
### Add
- Add a guide to the internals of the codebase
- Add a CONTRIBUTING.md guide for contributors
- Add a CHANGELOG.md
- Add issue templates
### Fix
- Fix for missing filters on nested selectors
### Refactor
- Refactor rename `Select.Table` to `Select.Name`
<a name="v0.12.4"></a> <a name="v0.12.4"></a>
## [v0.12.4] - 2019-11-28 ## [v0.12.4] - 2019-11-28
### Move
- Move license from MIT to Apache 2.0. Add Makefile
<a name="v0.12.3"></a> <a name="v0.12.3"></a>
## [v0.12.3] - 2019-11-26 ## [v0.12.3] - 2019-11-26
### Added
- Added support for query names to the allow.list
<a name="v0.12.2"></a> <a name="v0.12.2"></a>
## [v0.12.2] - 2019-11-25 ## [v0.12.2] - 2019-11-25
### Fix
- Fix bug with compiling anon queries
<a name="v0.12.1"></a> <a name="v0.12.1"></a>
## [v0.12.1] - 2019-11-22 ## [v0.12.1] - 2019-11-22
### Move
- Move sql query logging from info to debug
<a name="v0.12.0"></a> <a name="v0.12.0"></a>
## [v0.12.0] - 2019-11-22 ## [v0.12.0] - 2019-11-22
### Use
- Use logger error instead of panic in goja handlers
<a name="v0.11.9"></a> <a name="v0.11.9"></a>
## [v0.11.9] - 2019-11-22 ## [v0.11.9] - 2019-11-22
### Add
- Add a db:reset command only for dev mode
<a name="v0.11.8"></a> <a name="v0.11.8"></a>
## [v0.11.8] - 2019-11-21 ## [v0.11.8] - 2019-11-21
### Optimize
- Optimize db queries limit use of transactions
<a name="v0.11.7"></a> <a name="v0.11.7"></a>
## [v0.11.7] - 2019-11-19 ## [v0.11.7] - 2019-11-19
### Added
- Added support for multi-root queries
<a name="v0.11.6"></a> <a name="v0.11.6"></a>
## [v0.11.6] - 2019-11-15 ## [v0.11.6] - 2019-11-15
### Fix
- Fix issues with JWT auth
- Fix bug with migration filename generation
- Fix bug with migration file name
<a name="v0.11.5"></a> <a name="v0.11.5"></a>
## [v0.11.5] - 2019-11-10 ## [v0.11.5] - 2019-11-10
### Fix
- Fix bug with migration template name
<a name="v0.11.4"></a> <a name="v0.11.4"></a>
## [v0.11.4] - 2019-11-10 ## [v0.11.4] - 2019-11-10
### Fix
- Fix bug with creating new migrations
<a name="v0.11.3"></a> <a name="v0.11.3"></a>
## [v0.11.3] - 2019-11-09 ## [v0.11.3] - 2019-11-09
### Fix
- Fix macro syntax bug in app templates
<a name="v0.11.2"></a> <a name="v0.11.2"></a>
## [v0.11.2] - 2019-11-07 ## [v0.11.2] - 2019-11-07
### Fix
- Fix bugs and add new production mode
<a name="v0.11.1"></a> <a name="v0.11.1"></a>
## [v0.11.1] - 2019-11-05 ## [v0.11.1] - 2019-11-05
### Add
- Add nested where clause to filter based on related tables
### Block
- Block unauthorized requests when 'anon' role is not defined
### Update
- Update docs and website with new features
<a name="v0.11"></a> <a name="v0.11"></a>
## [v0.11] - 2019-11-01 ## [v0.11] - 2019-11-01
### Add
- Add config driven presets for insert, update and upsert
- Add RBAC option to disable functions eg. count
- Add fuzz testing to 'serv' for the GQL hash parser
- Add fuzz testing to 'jsn' and 'qcode'
- Add ability to block queries and mutations by role
- Add built in 'anon' and 'user' roles
- Add role based access control
### Allow
- Allow config files to inherit from other config files
### Change
- Change config key inherit to inherits
### Get
- Get RBAC working for queries and mutations
### Optimize
- Optimize prepared statement flow for RBAC
### Preserve
- Preserve allow.list ordering on save
### Update
- Update filters section in guide
### Pull Requests ### Pull Requests
- Merge pull request [#11](https://github.com/dosco/super-graph/issues/11) from dosco/rbac - Merge pull request [#11](https://github.com/dosco/super-graph/issues/11) from dosco/rbac
<a name="v0.10.1"></a> <a name="v0.10.1"></a>
## [v0.10.1] - 2019-10-06 ## [v0.10.1] - 2019-10-06
### Add
- Add ability to set filters per operation / action
- Add upsert mutation
### Pull Requests ### Pull Requests
- Merge pull request [#10](https://github.com/dosco/super-graph/issues/10) from FourSigma/sm-examples-folder - Merge pull request [#10](https://github.com/dosco/super-graph/issues/10) from FourSigma/sm-examples-folder
<a name="v0.10"></a> <a name="v0.10"></a>
## [v0.10] - 2019-10-04 ## [v0.10] - 2019-10-04
### Fix
- Fix return values for bulk mutations and delete
- Fix issues with mutation SQL
- Fix broken demo app
- Fix typo in 'across'
### Remove
- Remove extra link from README
### Update
- Update docs, getting started guide and mutations
### Pull Requests ### Pull Requests
- Merge pull request [#6](https://github.com/dosco/super-graph/issues/6) from muesli/typo-fixes - Merge pull request [#6](https://github.com/dosco/super-graph/issues/6) from muesli/typo-fixes
<a name="v0.9"></a> <a name="v0.9"></a>
## [v0.9] - 2019-10-01 ## [v0.9] - 2019-10-01
### Fix
- Fix demo rails app broken build
<a name="v0.8"></a> <a name="v0.8"></a>
## [v0.8] - 2019-09-30 ## [v0.8] - 2019-09-30
### Fix
- Fix invalid import bug
### Update
- Update documentation site
<a name="v0.7"></a> <a name="v0.7"></a>
## [v0.7] - 2019-09-29 ## [v0.7] - 2019-09-29
### Failure
- Failure to prepare statements should be a warning
### Fix
- Fix duplicate column bug
<a name="v0.6"></a> <a name="v0.6"></a>
## [v0.6] - 2019-09-29 ## [v0.6] - 2019-09-29
### Add
- Add database setup commands
- Add binary compression back to Dockerfile
- Add initialization command to setup new apps
- Add migrate command
- Add database seeding capability
- Add session variable for user id
- Add delete mutation
- Add update mutation
- Add insert mutation with bulk insert
- Add GoTO Aug, 19 presentation
- Add support for prepared statements
- Add end-to-end benchmarking
- Add object pooling for parser expressions
- Add request / response debugging for remote joins
- Add a presentation about GraphQL
- Add validation for remote JSON
- Add tracing for API stitching
- Add REST API stitching
- Add SQL query cacheing
- Add support for GraphQL variables
- Add fuzz testing to qcode
- Add test for Rails Redis cookie store integration
- Add an install guide
### Change
- Change fuzz test name to qcode
- Change logo from PNG to SVG
### Enable
- Enable reload on config change
### Fix
- Fix missing config name bug
- Fix new app templates
- Fix help message for migrate
- Fix session variable bug
- Fix test failures in `psql` and `serv`
- Fix demo docker services startup order
- Fix wrong value for false token bug. Reported by [@ThisIsMissEm](https://github.com/ThisIsMissEm)
- Fix allow.list file discovery bug
- Fix bug with allow list path
- Fix wrong value for use_allow_list in dev config
- Fix startup bug in demo script
- Fix url bug in allow list
- Fix bug [#676](https://github.com/dosco/super-graph/issues/676) found by fuzzer
- Fix race-condition in remote joins
- Fix cookie passing in web ui
- Fix bug with passing cookies in web ui
- Fix null pointer with invalid argument values
- Fix infinite loop bug in lexer
- Fix null pointer issue found by fuzz test
- Fix issue with fuzzbuzz config
- Fix demo to run as memory only
- Fix auth documentation
- Fix issue with web ui sizing
- Fix issue preventing docker-compose deploy
- Fix try demo documentation
### Further
- Further reduce allocations across hot paths
- Further reduce allocations on the compiler hot path
- Further optimize json parsing and editing performance
### Highlight
- Highlight top features better on the site
### Improve
- Improve readability of json parser code
- Improve the motivation section in the readme
- Improve the demo experience
### Make
- Make remote joins use parallel http requests
### Merge
- Merge branch 'master' into optimize-psql
### New
- New low allocation fast json parsing and editing library
### Optimize
- Optimize lexer and fix bugs
- Optimize the sql generator hot path
### Reduce
- Reduce allocations done by the stack
- Reduce steps to run the demo
- Reduce allocations and improve perf over 50%
### Remove
- Remove unused packages
- Remove the 'hello' test app folder
- Remove other allocations in psql
### Use
- Use hash's as ids for table relationships
### Watch
- Watch and reload on config changes
<a name="v0.5"></a> <a name="v0.5"></a>
## [v0.5] - 2019-04-10 ## [v0.5] - 2019-04-10
### Add
- Add support for new Rails 5.2 aes-256-gcm cookies
- Add query support for ts_rank and ts_headline
- Add full text search support using TSV indexes
- Add missing assets folder
- Add fetch by ID feature
- Add documentation
### Cleanup
- Cleanup and redesign config files
### Fix
- Fix bug with auth config parsing
### Redesign
- Redesign config file architecture
### Reduce
- Reduce realloc of maps and slices
### Update
- Update docs with full-text search information
<a name="v0.4"></a> <a name="v0.4"></a>
## [v0.4] - 2019-04-01 ## [v0.4] - 2019-04-01
<a name="v0.3"></a> <a name="v0.3"></a>
## [v0.3] - 2019-04-01 ## [v0.3] - 2019-04-01
### Add
- Add SQL execution timing and tracing
- Add support for HAVING with aggregate queries
- Add aggregate functions to GQL queries
- Add Auth0 JWT support
- Add React UI building to the docker build flow
- Add compiler profiling
- Add benchmarks for GQL to SQL compile
- Add tests for gql to sql compile
### Cleanup
- Cleanup Dockerfile
### Fix
- Fix recurring packer issue docker hub builds
- Fix issue with asset packer breaking Docker builds
- Fix missing git package in Dockerfile
- Fix docker ignore values
- Fix image build failure on docker hub
- Fix build issue in Dockerfile
- Fix bugs and document the 'where' clause
- Fix perf issue with inflections
### Optimize
- Optimize docker image
### Pack
- Pack web UI with app into a single binary
### Upgrade
- Upgrade web UI packages
<a name="0.3"></a> <a name="0.3"></a>
## 0.3 - 2019-03-24 ## 0.3 - 2019-03-24
### First
- First commit
### Fix [Unreleased]: https://github.com/dosco/super-graph/compare/v0.13.22...HEAD
- Fix license to MIT [v0.13.22]: https://github.com/dosco/super-graph/compare/v0.13.21...v0.13.22
[v0.13.21]: https://github.com/dosco/super-graph/compare/v0.13.20...v0.13.21
[v0.13.20]: https://github.com/dosco/super-graph/compare/v0.13.19...v0.13.20
[Unreleased]: https://github.com/dosco/super-graph/compare/v0.12.6...HEAD [v0.13.19]: https://github.com/dosco/super-graph/compare/v0.13.18...v0.13.19
[v0.13.18]: https://github.com/dosco/super-graph/compare/v0.13.17...v0.13.18
[v0.13.17]: https://github.com/dosco/super-graph/compare/v0.13.16...v0.13.17
[v0.13.16]: https://github.com/dosco/super-graph/compare/v0.13.15...v0.13.16
[v0.13.15]: https://github.com/dosco/super-graph/compare/v0.13.14...v0.13.15
[v0.13.14]: https://github.com/dosco/super-graph/compare/v0.13.13...v0.13.14
[v0.13.13]: https://github.com/dosco/super-graph/compare/v0.13.12...v0.13.13
[v0.13.12]: https://github.com/dosco/super-graph/compare/v0.13.11...v0.13.12
[v0.13.11]: https://github.com/dosco/super-graph/compare/v0.13.10...v0.13.11
[v0.13.10]: https://github.com/dosco/super-graph/compare/v0.13.9...v0.13.10
[v0.13.9]: https://github.com/dosco/super-graph/compare/v0.13.8...v0.13.9
[v0.13.8]: https://github.com/dosco/super-graph/compare/v0.13.7...v0.13.8
[v0.13.7]: https://github.com/dosco/super-graph/compare/v0.13.6...v0.13.7
[v0.13.6]: https://github.com/dosco/super-graph/compare/v0.13.5...v0.13.6
[v0.13.5]: https://github.com/dosco/super-graph/compare/v0.13.4...v0.13.5
[v0.13.4]: https://github.com/dosco/super-graph/compare/v0.13.3...v0.13.4
[v0.13.3]: https://github.com/dosco/super-graph/compare/v0.13.2...v0.13.3
[v0.13.2]: https://github.com/dosco/super-graph/compare/v0.13.1...v0.13.2
[v0.13.1]: https://github.com/dosco/super-graph/compare/v0.13.0...v0.13.1
[v0.13.0]: https://github.com/dosco/super-graph/compare/v0.12.49...v0.13.0
[v0.12.49]: https://github.com/dosco/super-graph/compare/v0.12.48...v0.12.49
[v0.12.48]: https://github.com/dosco/super-graph/compare/v0.12.47...v0.12.48
[v0.12.47]: https://github.com/dosco/super-graph/compare/v0.12.46...v0.12.47
[v0.12.46]: https://github.com/dosco/super-graph/compare/v0.12.45...v0.12.46
[v0.12.45]: https://github.com/dosco/super-graph/compare/v0.12.44...v0.12.45
[v0.12.44]: https://github.com/dosco/super-graph/compare/v0.12.43...v0.12.44
[v0.12.43]: https://github.com/dosco/super-graph/compare/v0.12.42...v0.12.43
[v0.12.42]: https://github.com/dosco/super-graph/compare/v0.12.41...v0.12.42
[v0.12.41]: https://github.com/dosco/super-graph/compare/v0.12.40...v0.12.41
[v0.12.40]: https://github.com/dosco/super-graph/compare/v0.12.39...v0.12.40
[v0.12.39]: https://github.com/dosco/super-graph/compare/v0.12.38...v0.12.39
[v0.12.38]: https://github.com/dosco/super-graph/compare/v0.12.37...v0.12.38
[v0.12.37]: https://github.com/dosco/super-graph/compare/v0.12.36...v0.12.37
[v0.12.36]: https://github.com/dosco/super-graph/compare/v0.12.35...v0.12.36
[v0.12.35]: https://github.com/dosco/super-graph/compare/v0.12.34...v0.12.35
[v0.12.34]: https://github.com/dosco/super-graph/compare/v0.12.33...v0.12.34
[v0.12.33]: https://github.com/dosco/super-graph/compare/v0.12.32...v0.12.33
[v0.12.32]: https://github.com/dosco/super-graph/compare/v0.12.31...v0.12.32
[v0.12.31]: https://github.com/dosco/super-graph/compare/v0.12.30...v0.12.31
[v0.12.30]: https://github.com/dosco/super-graph/compare/v0.12.29...v0.12.30
[v0.12.29]: https://github.com/dosco/super-graph/compare/v0.12.28...v0.12.29
[v0.12.28]: https://github.com/dosco/super-graph/compare/v0.12.27...v0.12.28
[v0.12.27]: https://github.com/dosco/super-graph/compare/v0.12.26...v0.12.27
[v0.12.26]: https://github.com/dosco/super-graph/compare/v0.12.25...v0.12.26
[v0.12.25]: https://github.com/dosco/super-graph/compare/v0.12.24...v0.12.25
[v0.12.24]: https://github.com/dosco/super-graph/compare/v0.12.23...v0.12.24
[v0.12.23]: https://github.com/dosco/super-graph/compare/v0.12.22...v0.12.23
[v0.12.22]: https://github.com/dosco/super-graph/compare/v0.12.21...v0.12.22
[v0.12.21]: https://github.com/dosco/super-graph/compare/v0.12.20...v0.12.21
[v0.12.20]: https://github.com/dosco/super-graph/compare/v0.12.19...v0.12.20
[v0.12.19]: https://github.com/dosco/super-graph/compare/v0.12.18...v0.12.19
[v0.12.18]: https://github.com/dosco/super-graph/compare/v0.12.17...v0.12.18
[v0.12.17]: https://github.com/dosco/super-graph/compare/v0.12.16...v0.12.17
[v0.12.16]: https://github.com/dosco/super-graph/compare/v0.12.15...v0.12.16
[v0.12.15]: https://github.com/dosco/super-graph/compare/v0.12.14...v0.12.15
[v0.12.14]: https://github.com/dosco/super-graph/compare/v0.12.13...v0.12.14
[v0.12.13]: https://github.com/dosco/super-graph/compare/v0.12.12...v0.12.13
[v0.12.12]: https://github.com/dosco/super-graph/compare/v0.12.11...v0.12.12
[v0.12.11]: https://github.com/dosco/super-graph/compare/v0.12.10...v0.12.11
[v0.12.10]: https://github.com/dosco/super-graph/compare/v0.12.9...v0.12.10
[v0.12.9]: https://github.com/dosco/super-graph/compare/v0.12.8...v0.12.9
[v0.12.8]: https://github.com/dosco/super-graph/compare/v0.12.7...v0.12.8
[v0.12.7]: https://github.com/dosco/super-graph/compare/v0.12.6...v0.12.7
[v0.12.6]: https://github.com/dosco/super-graph/compare/v0.12.5...v0.12.6 [v0.12.6]: https://github.com/dosco/super-graph/compare/v0.12.5...v0.12.6
[v0.12.5]: https://github.com/dosco/super-graph/compare/v0.12.4...v0.12.5 [v0.12.5]: https://github.com/dosco/super-graph/compare/v0.12.4...v0.12.5
[v0.12.4]: https://github.com/dosco/super-graph/compare/v0.12.3...v0.12.4 [v0.12.4]: https://github.com/dosco/super-graph/compare/v0.12.3...v0.12.4


@ -1,17 +1,18 @@
# stage: 1 # stage: 1
FROM node:10 as react-build FROM node:10 as react-build
WORKDIR /web WORKDIR /web
COPY /cmd/internal/serv/web/ ./ COPY /internal/serv/web/ ./
RUN yarn RUN yarn
RUN yarn build RUN yarn build
# stage: 2 # stage: 2
FROM golang:1.14-alpine as go-build FROM golang:1.14-alpine as go-build
RUN apk update && \ RUN apk update && \
apk add --no-cache make && \ apk add --no-cache make && \
apk add --no-cache git && \ apk add --no-cache git && \
apk add --no-cache jq && \ apk add --no-cache jq
apk add --no-cache upx=3.95-r2
RUN GO111MODULE=off go get -u github.com/rafaelsq/wtc RUN GO111MODULE=off go get -u github.com/rafaelsq/wtc
@ -22,14 +23,16 @@ RUN chmod 755 /usr/local/bin/sops
WORKDIR /app WORKDIR /app
COPY . /app COPY . /app
RUN mkdir -p /app/cmd/internal/serv/web/build RUN mkdir -p /app/internal/serv/web/build
COPY --from=react-build /web/build/ ./cmd/internal/serv/web/build COPY --from=react-build /web/build/ ./internal/serv/web/build
RUN go mod vendor RUN go mod vendor
RUN make build RUN make build
RUN echo "Compressing binary, will take a bit of time..." && \ # RUN echo "Compressing binary, will take a bit of time..." && \
upx --ultra-brute -qq super-graph && \ # upx --ultra-brute -qq super-graph && \
upx -t super-graph # upx -t super-graph
# stage: 3 # stage: 3
FROM alpine:latest FROM alpine:latest
@ -41,7 +44,7 @@ RUN mkdir -p /config
COPY --from=go-build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ COPY --from=go-build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=go-build /app/config/* /config/ COPY --from=go-build /app/config/* /config/
COPY --from=go-build /app/super-graph . COPY --from=go-build /app/super-graph .
COPY --from=go-build /app/cmd/scripts/start.sh . COPY --from=go-build /app/internal/scripts/start.sh .
COPY --from=go-build /usr/local/bin/sops . COPY --from=go-build /usr/local/bin/sops .
RUN chmod +x /super-graph RUN chmod +x /super-graph


@ -12,30 +12,30 @@ endif
export GO111MODULE := on export GO111MODULE := on
# Build-time Go variables # Build-time Go variables
version = github.com/dosco/super-graph/serv.version version = github.com/dosco/super-graph/internal/serv.version
gitBranch = github.com/dosco/super-graph/serv.gitBranch gitBranch = github.com/dosco/super-graph/internal/serv.gitBranch
lastCommitSHA = github.com/dosco/super-graph/serv.lastCommitSHA lastCommitSHA = github.com/dosco/super-graph/internal/serv.lastCommitSHA
lastCommitTime = github.com/dosco/super-graph/serv.lastCommitTime lastCommitTime = github.com/dosco/super-graph/internal/serv.lastCommitTime
BUILD_FLAGS ?= -ldflags '-s -w -X ${lastCommitSHA}=${BUILD} -X "${lastCommitTime}=${BUILD_DATE}" -X "${version}=${BUILD_VERSION}" -X ${gitBranch}=${BUILD_BRANCH}' BUILD_FLAGS ?= -ldflags '-s -w -X ${lastCommitSHA}=${BUILD} -X "${lastCommitTime}=${BUILD_DATE}" -X "${version}=${BUILD_VERSION}" -X ${gitBranch}=${BUILD_BRANCH}'
.PHONY: all build gen clean test run lint changelog release version help $(PLATFORMS) .PHONY: all build gen clean test run lint changelog release version help $(PLATFORMS)
test: test:
@go test -v ./... @go test -v -short -race ./...
BIN_DIR := $(GOPATH)/bin BIN_DIR := $(GOPATH)/bin
GORICE := $(BIN_DIR)/rice GORICE := $(BIN_DIR)/rice
GOLANGCILINT := $(BIN_DIR)/golangci-lint GOLANGCILINT := $(BIN_DIR)/golangci-lint
GITCHGLOG := $(BIN_DIR)/git-chglog GITCHGLOG := $(BIN_DIR)/git-chglog
WEB_BUILD_DIR := ./cmd/internal/serv/web/build/manifest.json WEB_BUILD_DIR := ./internal/serv/web/build/manifest.json
$(GORICE): $(GORICE):
@GO111MODULE=off go get -u github.com/GeertJohan/go.rice/rice @GO111MODULE=off go get -u github.com/GeertJohan/go.rice/rice
$(WEB_BUILD_DIR): $(WEB_BUILD_DIR):
@echo "First install Yarn and create a build of the web UI then re-run make install" @echo "First install Yarn and create a build of the web UI then re-run make install"
@echo "Run this command: yarn --cwd cmd/internal/serv/web/ build" @echo "Run this command: yarn --cwd internal/serv/web/ build"
@exit 1 @exit 1
$(GITCHGLOG): $(GITCHGLOG):
@ -45,7 +45,7 @@ changelog: $(GITCHGLOG)
@git-chglog $(ARGS) @git-chglog $(ARGS)
$(GOLANGCILINT): $(GOLANGCILINT):
@GO111MODULE=off curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh| sh -s -- -b $(GOPATH)/bin v1.21.0 @GO111MODULE=off curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh| sh -s -- -b $(GOPATH)/bin v1.25.1
lint: $(GOLANGCILINT) lint: $(GOLANGCILINT)
@golangci-lint run ./... --skip-dirs-use-default @golangci-lint run ./... --skip-dirs-use-default
@ -57,7 +57,7 @@ os = $(word 1, $@)
$(PLATFORMS): lint test $(PLATFORMS): lint test
@mkdir -p release @mkdir -p release
@GOOS=$(os) GOARCH=amd64 go build $(BUILD_FLAGS) -o release/$(BINARY)-$(BUILD_VERSION)-$(os)-amd64 cmd/main.go @GOOS=$(os) GOARCH=amd64 go build $(BUILD_FLAGS) -o release/$(BINARY)-$(BUILD_VERSION)-$(os)-amd64 main.go
release: windows linux darwin release: windows linux darwin
@ -69,7 +69,7 @@ gen: $(GORICE) $(WEB_BUILD_DIR)
@go generate ./... @go generate ./...
$(BINARY): clean $(BINARY): clean
@go build $(BUILD_FLAGS) -o $(BINARY) cmd/main.go @go build $(BUILD_FLAGS) -o $(BINARY) main.go
clean: clean:
@rm -f $(BINARY) @rm -f $(BINARY)
@ -77,11 +77,10 @@ clean:
run: clean run: clean
@go run $(BUILD_FLAGS) main.go $(ARGS) @go run $(BUILD_FLAGS) main.go $(ARGS)
install: install: clean build
@echo $(GOPATH)
@echo "Commit Hash: `git rev-parse HEAD`" @echo "Commit Hash: `git rev-parse HEAD`"
@echo "Old Hash: `shasum $(GOPATH)/bin/$(BINARY) 2>/dev/null | cut -c -32`" @echo "Old Hash: `shasum $(GOPATH)/bin/$(BINARY) 2>/dev/null | cut -c -32`"
@go install $(BUILD_FLAGS) cmd @mv $(BINARY) $(GOPATH)/bin/$(BINARY)
@echo "New Hash:" `shasum $(GOPATH)/bin/$(BINARY) 2>/dev/null | cut -c -32` @echo "New Hash:" `shasum $(GOPATH)/bin/$(BINARY) 2>/dev/null | cut -c -32`
uninstall: clean uninstall: clean
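For context, the `-X` flags in `BUILD_FLAGS` above write build metadata into package-level string variables in `internal/serv`. A minimal sketch of the receiving side (the variable names come from the Makefile; the file layout is an assumption):

```go
package serv

// Populated at build time, e.g.:
//   go build -ldflags '-X github.com/dosco/super-graph/internal/serv.version=v0.14.0'
var (
	version        string
	gitBranch      string
	lastCommitSHA  string
	lastCommitTime string
)
```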

README.md

@ -1,28 +1,71 @@
<!-- <a href="https://supergraph.dev"><img src="https://supergraph.dev/hologram.svg" width="100" height="100" align="right" /></a> --> <img src="docs/website/static/img/super-graph-logo.svg" width="80" />
<img src="docs/.vuepress/public/super-graph.png" width="250" /> # Super Graph - Fetch data without code!
### Build web products faster. Secure high performance GraphQL [![GoDoc](https://img.shields.io/badge/godoc-reference-5272B4.svg)](https://pkg.go.dev/github.com/dosco/super-graph/core?tab=doc)
![Apache 2.0](https://img.shields.io/github/license/dosco/super-graph.svg?style=flat-square)
![Apache Public License 2.0](https://img.shields.io/github/license/dosco/super-graph.svg) ![Docker build](https://img.shields.io/docker/cloud/build/dosco/super-graph.svg?style=flat-square)
![Docker build](https://img.shields.io/docker/cloud/build/dosco/super-graph.svg)
![Cloud native](https://img.shields.io/badge/cloud--native-enabled-blue.svg)
[![Discord Chat](https://img.shields.io/discord/628796009539043348.svg)](https://discord.gg/6pSWCTZ) [![Discord Chat](https://img.shields.io/discord/628796009539043348.svg)](https://discord.gg/6pSWCTZ)
Super Graph gives you a high performance GraphQL API without you having to write any code. GraphQL is automagically compiled into an efficient SQL query. Use it either as a library or a standalone service.
## What is Super Graph ## Using it as a service
Is designed to 100x your developer productivity. Super Graph will instantly and without you writing code provide you a high performance and secure GraphQL API for Postgres DB. GraphQL queries are translated into a single fast SQL query. No more writing API code as you develop ```console
your web frontend just make the query you need and Super Graph will do the rest. go get github.com/dosco/super-graph
super-graph new <app_name>
```
Super Graph has a rich feature set like integrating with your existing Ruby on Rails apps, joining your DB with data from remote APIs, role and attribute based access control, support for JWT tokens, built-in DB mutations and seeding, and a lot more. ## Using it in your own code
![GraphQL](docs/.vuepress/public/graphql.png?raw=true "") ```console
go get github.com/dosco/super-graph/core
```
```golang
package main
## The story of Super Graph? import (
"database/sql"
"fmt"
"time"
"github.com/dosco/super-graph/core"
_ "github.com/jackc/pgx/v4/stdlib"
)
After working on several products through my career I find that we spend way too much time on building API backends. Most APIs also require constant updating, this costs real time and money. func main() {
db, err := sql.Open("pgx", "postgres://postgres:@localhost:5432/example_db")
if err != nil {
log.Fatal(err)
}
sg, err := core.NewSuperGraph(nil, db)
if err != nil {
log.Fatal(err)
}
query := `
query {
posts {
id
title
}
}`
ctx := context.WithValue(context.Background(), core.UserIDKey, 1)
res, err := sg.GraphQL(ctx, query, nil)
if err != nil {
log.Fatal(err)
}
fmt.Println(string(res.Data))
}
```
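The third argument to `sg.GraphQL` carries GraphQL variables as raw JSON; it is `nil` in the example above. A minimal sketch, assuming `encoding/json` is imported and that the schema exposes an `id` argument on `posts`:

```golang
// Hypothetical variables example; posts(id: ...) is an assumption.
vars := json.RawMessage(`{"id": 3}`)
res, err := sg.GraphQL(ctx, `query { posts(id: $id) { id title } }`, vars)
```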
## About Super Graph
After working on several products through my career I found that we spend way too much time on building API backends. Most APIs also need constant updating, and this costs time and money.
It's always the same thing, figure out what the UI needs then build an endpoint for it. Most API code involves struggling with an ORM to query a database and mangle the data into a shape that the UI expects to see. It's always the same thing, figure out what the UI needs then build an endpoint for it. Most API code involves struggling with an ORM to query a database and mangle the data into a shape that the UI expects to see.
@ -30,44 +73,35 @@ I didn't want to write this code anymore, I wanted the computer to do it. Enter
Having worked with compilers before I saw this as a compiler problem. Why not build a compiler that converts GraphQL to highly efficient SQL. Having worked with compilers before I saw this as a compiler problem. Why not build a compiler that converts GraphQL to highly efficient SQL.
This compiler is what sits at the heart of Super Graph with layers of useful functionality around it like authentication, remote joins, rails integration, database migrations and everything else needed for you to build production ready apps with it. This compiler is what sits at the heart of Super Graph, with layers of useful functionality around it like authentication, remote joins, rails integration, database migrations, and everything else needed for you to build production-ready apps with it.
## Features ## Features
- Complex nested queries and mutations - Complex nested queries and mutations
- Auto learns database tables and relationships - Auto learns database tables and relationships
- Role and Attribute based access control - Role and Attribute-based access control
- Full text search and aggregations - Opaque cursor-based efficient pagination
- Full-text search and aggregations
- JWT tokens supported (Auth0, etc) - JWT tokens supported (Auth0, etc)
- Join database queries with remote REST APIs - Join database queries with remote REST APIs
- Also works with existing Ruby-On-Rails apps - Also works with existing Ruby-On-Rails apps
- Rails authentication supported (Redis, Memcache, Cookie) - Rails authentication supported (Redis, Memcache, Cookie)
- A simple config file - A simple config file
- High performance GO codebase - High performance Go codebase
- Tiny docker image and low memory requirements - Tiny docker image and low memory requirements
- Fuzz tested for security - Fuzz tested for security
- Database migrations tool - Database migrations tool
- Database seeding tool - Database seeding tool
- Works with Postgres and YugabyteDB - Works with Postgres and Yugabyte DB
- OpenCensus Support: Zipkin, Prometheus, X-Ray, Stackdriver
## Get started
```
git clone https://github.com/dosco/super-graph
cd ./super-graph
make install
super-graph new <app_name>
```
## Documentation ## Documentation
[supergraph.dev](https://supergraph.dev) [supergraph.dev](https://supergraph.dev)
## Contact me ## Reach out
I'm happy to help you deploy Super Graph so feel free to reach out over We're happy to help you leverage Super Graph; reach out if you have questions
Twitter or Discord.
[twitter/dosco](https://twitter.com/dosco) [twitter/dosco](https://twitter.com/dosco)
@ -78,5 +112,3 @@ Twitter or Discord.
[Apache Public License 2.0](https://opensource.org/licenses/Apache-2.0) [Apache Public License 2.0](https://opensource.org/licenses/Apache-2.0)
Copyright (c) 2019-present Vikram Rangnekar Copyright (c) 2019-present Vikram Rangnekar


@ -1,7 +0,0 @@
package serv
// func (c *coreContext) handleReq(w io.Writer, req *http.Request) error {
// return nil
// }


@ -1,124 +0,0 @@
package serv
import (
"encoding/json"
"errors"
"io"
"io/ioutil"
"net/http"
"strings"
"github.com/dosco/super-graph/cmd/internal/serv/internal/auth"
"github.com/dosco/super-graph/core"
"github.com/rs/cors"
"go.uber.org/zap"
)
const (
maxReadBytes = 100000 // 100Kb
introspectionQuery = "IntrospectionQuery"
)
var (
errUnauthorized = errors.New("not authorized")
)
type gqlReq struct {
OpName string `json:"operationName"`
Query string `json:"query"`
Vars json.RawMessage `json:"variables"`
}
type errorResp struct {
Error error `json:"error"`
}
func apiV1Handler() http.Handler {
h, err := auth.WithAuth(http.HandlerFunc(apiV1), &conf.Auth)
if err != nil {
log.Fatalf("ERR %s", err)
}
if len(conf.AllowedOrigins) != 0 {
c := cors.New(cors.Options{
AllowedOrigins: conf.AllowedOrigins,
AllowCredentials: true,
Debug: conf.DebugCORS,
})
h = c.Handler(h)
}
return h
}
func apiV1(w http.ResponseWriter, r *http.Request) {
ct := r.Context()
//nolint: errcheck
if conf.AuthFailBlock && !auth.IsAuth(ct) {
renderErr(w, errUnauthorized, nil)
return
}
b, err := ioutil.ReadAll(io.LimitReader(r.Body, maxReadBytes))
if err != nil {
renderErr(w, err, nil)
return
}
defer r.Body.Close()
req := gqlReq{}
err = json.Unmarshal(b, &req)
if err != nil {
renderErr(w, err, nil)
return
}
if strings.EqualFold(req.OpName, introspectionQuery) {
introspect(w)
return
}
res, err := sg.GraphQL(ct, req.Query, req.Vars)
if logLevel >= LogLevelDebug {
log.Printf("DBG query:\n%s\nsql:\n%s", req.Query, res.SQL())
}
if err != nil {
renderErr(w, err, res)
return
}
json.NewEncoder(w).Encode(res)
if logLevel >= LogLevelInfo {
zlog.Info("success",
zap.String("op", res.Operation()),
zap.String("name", res.QueryName()),
zap.String("role", res.Role()),
)
}
}
//nolint: errcheck
func renderErr(w http.ResponseWriter, err error, res *core.Result) {
if err == errUnauthorized {
w.WriteHeader(http.StatusUnauthorized)
}
json.NewEncoder(w).Encode(&errorResp{err})
if logLevel >= LogLevelError {
if res != nil {
zlog.Error(err.Error(),
zap.String("op", res.Operation()),
zap.String("name", res.QueryName()),
zap.String("role", res.Role()),
)
} else {
zlog.Error(err.Error())
}
}
}
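A minimal client sketch against this handler; the endpoint path matches the one the bundled web UI points at, and the body fields mirror `gqlReq` above (this is an illustration, not code from the repo):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]interface{}{
		"operationName": "",
		"query":         "query { posts { id title } }",
	})
	resp, err := http.Post("http://localhost:8080/api/v1/graphql",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	b, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(b))
}
```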


@ -1,153 +0,0 @@
package serv
import (
"database/sql"
"fmt"
"path"
"time"
_ "github.com/jackc/pgx/v4/stdlib"
)
func initConf() (*Config, error) {
c, err := ReadInConfig(path.Join(confPath, GetConfigName()))
if err != nil {
return nil, err
}
switch c.LogLevel {
case "debug":
logLevel = LogLevelDebug
case "error":
logLevel = LogLevelError
case "warn":
logLevel = LogLevelWarn
case "info":
logLevel = LogLevelInfo
default:
logLevel = LogLevelNone
}
// Auths: validate and sanitize
am := make(map[string]struct{})
for i := 0; i < len(c.Auths); i++ {
a := &c.Auths[i]
a.Name = sanitize(a.Name)
if _, ok := am[a.Name]; ok {
c.Auths = append(c.Auths[:i], c.Auths[i+1:]...)
log.Printf("WRN duplicate auth found: %s", a.Name)
}
am[a.Name] = struct{}{}
}
// Actions: validate and sanitize
axm := make(map[string]struct{})
for i := 0; i < len(c.Actions); i++ {
a := &c.Actions[i]
a.Name = sanitize(a.Name)
a.AuthName = sanitize(a.AuthName)
if _, ok := axm[a.Name]; ok {
c.Actions = append(c.Actions[:i], c.Actions[i+1:]...)
log.Printf("WRN duplicate action found: %s", a.Name)
}
if _, ok := am[a.AuthName]; !ok {
c.Actions = append(c.Actions[:i], c.Actions[i+1:]...)
log.Printf("WRN invalid auth_name '%s' for auth: %s", a.AuthName, a.Name)
}
axm[a.Name] = struct{}{}
}
var anonFound bool
for _, r := range c.Roles {
if sanitize(r.Name) == "anon" {
anonFound = true
}
}
if !anonFound {
log.Printf("WRN unauthenticated requests will be blocked. no role 'anon' defined")
c.AuthFailBlock = false
}
return c, nil
}
func initDB(c *Config) (*sql.DB, error) {
var db *sql.DB
var err error
cs := fmt.Sprintf("postgres://%s:%s@%s:%d/%s",
c.DB.User, c.DB.Password,
c.DB.Host, c.DB.Port, c.DB.DBName)
for i := 1; i < 10; i++ {
db, err = sql.Open("pgx", cs)
if err == nil {
break
}
time.Sleep(time.Duration(i*100) * time.Millisecond)
}
if err != nil {
return nil, err
}
return db, nil
// config, _ := pgxpool.ParseConfig("")
// config.ConnConfig.Host = c.DB.Host
// config.ConnConfig.Port = c.DB.Port
// config.ConnConfig.Database = c.DB.DBName
// config.ConnConfig.User = c.DB.User
// config.ConnConfig.Password = c.DB.Password
// config.ConnConfig.RuntimeParams = map[string]string{
// "application_name": c.AppName,
// "search_path": c.DB.Schema,
// }
// switch c.LogLevel {
// case "debug":
// config.ConnConfig.LogLevel = pgx.LogLevelDebug
// case "info":
// config.ConnConfig.LogLevel = pgx.LogLevelInfo
// case "warn":
// config.ConnConfig.LogLevel = pgx.LogLevelWarn
// case "error":
// config.ConnConfig.LogLevel = pgx.LogLevelError
// default:
// config.ConnConfig.LogLevel = pgx.LogLevelNone
// }
// config.ConnConfig.Logger = NewSQLLogger(logger)
// // if c.DB.MaxRetries != 0 {
// // opt.MaxRetries = c.DB.MaxRetries
// // }
// if c.DB.PoolSize != 0 {
// config.MaxConns = conf.DB.PoolSize
// }
// var db *pgxpool.Pool
// var err error
// for i := 1; i < 10; i++ {
// db, err = pgxpool.ConnectConfig(context.Background(), config)
// if err == nil {
// break
// }
// time.Sleep(time.Duration(i*100) * time.Millisecond)
// }
// if err != nil {
// return nil, err
// }
// return db, nil
}
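One caveat with the retry loop above: with the `pgx` stdlib driver, `sql.Open` only validates its arguments and does not dial the database, so the loop almost never retries a real connection failure. A connectivity retry would ping instead; a sketch under that assumption:

```go
db, err := sql.Open("pgx", cs)
if err != nil {
	return nil, err
}
// sql.Open is lazy; Ping forces an actual connection attempt.
for i := 1; i < 10; i++ {
	if err = db.Ping(); err == nil {
		break
	}
	time.Sleep(time.Duration(i*100) * time.Millisecond)
}
if err != nil {
	return nil, err
}
return db, nil
```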


@ -1,105 +0,0 @@
package auth
import (
"context"
"io/ioutil"
"net/http"
"strings"
jwt "github.com/dgrijalva/jwt-go"
"github.com/dosco/super-graph/core"
)
const (
authHeader = "Authorization"
jwtAuth0 int = iota + 1
)
func JwtHandler(ac *Auth, next http.Handler) (http.HandlerFunc, error) {
var key interface{}
var jwtProvider int
cookie := ac.Cookie
if ac.JWT.Provider == "auth0" {
jwtProvider = jwtAuth0
}
secret := ac.JWT.Secret
publicKeyFile := ac.JWT.PubKeyFile
switch {
case len(secret) != 0:
key = []byte(secret)
case len(publicKeyFile) != 0:
kd, err := ioutil.ReadFile(publicKeyFile)
if err != nil {
return nil, err
}
switch ac.JWT.PubKeyType {
case "ecdsa":
key, err = jwt.ParseECPublicKeyFromPEM(kd)
case "rsa":
key, err = jwt.ParseRSAPublicKeyFromPEM(kd)
default:
key, err = jwt.ParseECPublicKeyFromPEM(kd)
}
if err != nil {
return nil, err
}
}
return func(w http.ResponseWriter, r *http.Request) {
var tok string
if len(cookie) != 0 {
ck, err := r.Cookie(cookie)
if err != nil {
next.ServeHTTP(w, r)
return
}
tok = ck.Value
} else {
ah := r.Header.Get(authHeader)
if len(ah) < 10 {
next.ServeHTTP(w, r)
return
}
tok = ah[7:]
}
token, err := jwt.ParseWithClaims(tok, &jwt.StandardClaims{}, func(token *jwt.Token) (interface{}, error) {
return key, nil
})
if err != nil {
next.ServeHTTP(w, r)
return
}
if claims, ok := token.Claims.(*jwt.StandardClaims); ok {
ctx := r.Context()
if jwtProvider == jwtAuth0 {
sub := strings.Split(claims.Subject, "|")
if len(sub) == 2 {
ctx = context.WithValue(ctx, core.UserIDProviderKey, sub[0])
ctx = context.WithValue(ctx, core.UserIDKey, sub[1])
}
} else {
ctx = context.WithValue(ctx, core.UserIDKey, claims.Subject)
}
next.ServeHTTP(w, r.WithContext(ctx))
return
}
next.ServeHTTP(w, r)
}, nil
}
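Downstream handlers read the values set here back out of the request context. A minimal sketch (the `whoami` handler is hypothetical; `core.UserIDKey` is the key used above):

```go
func whoami(w http.ResponseWriter, r *http.Request) {
	if v := r.Context().Value(core.UserIDKey); v != nil {
		fmt.Fprintf(w, "user: %v\n", v) // assumes "fmt" is imported
		return
	}
	http.Error(w, "not authenticated", http.StatusUnauthorized)
}
```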


@ -1,37 +0,0 @@
package serv
import "net/http"
//nolint: errcheck
func introspect(w http.ResponseWriter) {
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{
"data": {
"__schema": {
"queryType": {
"name": "Query"
},
"mutationType": null,
"subscriptionType": null
}
},
"extensions":{
"tracing":{
"version":1,
"startTime":"2019-06-04T19:53:31.093Z",
"endTime":"2019-06-04T19:53:31.108Z",
"duration":15219720,
"execution": {
"resolvers": [{
"path": ["__schema"],
"parentType": "Query",
"fieldName": "__schema",
"returnType": "__Schema!",
"startOffset": 50950,
"duration": 17187
}]
}
}
}
}`))
}


@ -1,59 +0,0 @@
version: '3.4'
services:
# Postgres DB
db:
image: postgres:12
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
ports:
- "5432:5432"
# Yugabyte DB
# yb-master:
# image: yugabytedb/yugabyte:latest
# container_name: yb-master-n1
# command: [ "/home/yugabyte/bin/yb-master",
# "--fs_data_dirs=/mnt/disk0,/mnt/disk1",
# "--master_addresses=yb-master-n1:7100",
# "--replication_factor=1",
# "--enable_ysql=true"]
# ports:
# - "7000:7000"
# environment:
# SERVICE_7000_NAME: yb-master
# db:
# image: yugabytedb/yugabyte:latest
# container_name: yb-tserver-n1
# command: [ "/home/yugabyte/bin/yb-tserver",
# "--fs_data_dirs=/mnt/disk0,/mnt/disk1",
# "--start_pgsql_proxy",
# "--tserver_master_addrs=yb-master-n1:7100"]
# ports:
# - "9042:9042"
# - "6379:6379"
# - "5433:5433"
# - "9000:9000"
# environment:
# SERVICE_5433_NAME: ysql
# SERVICE_9042_NAME: ycql
# SERVICE_6379_NAME: yedis
# SERVICE_9000_NAME: yb-tserver
# depends_on:
# - yb-master
{% app_name_slug %}_api:
image: dosco/super-graph:latest
environment:
GO_ENV: "development"
# Uncomment below for Yugabyte DB
# SG_DATABASE_PORT: 5433
# SG_DATABASE_USER: yugabyte
# SG_DATABASE_PASSWORD: yugabyte
volumes:
- ./config:/config
ports:
- "8080:8080"
depends_on:
- db


@ -1,30 +0,0 @@
{
"files": {
"main.css": "/static/css/main.c6b5c55c.chunk.css",
"main.js": "/static/js/main.04d74040.chunk.js",
"main.js.map": "/static/js/main.04d74040.chunk.js.map",
"runtime-main.js": "/static/js/runtime-main.4aea9da3.js",
"runtime-main.js.map": "/static/js/runtime-main.4aea9da3.js.map",
"static/js/2.03370bd3.chunk.js": "/static/js/2.03370bd3.chunk.js",
"static/js/2.03370bd3.chunk.js.map": "/static/js/2.03370bd3.chunk.js.map",
"index.html": "/index.html",
"precache-manifest.e33bc3c7c6774d7032c490820c96901d.js": "/precache-manifest.e33bc3c7c6774d7032c490820c96901d.js",
"service-worker.js": "/service-worker.js",
"static/css/main.c6b5c55c.chunk.css.map": "/static/css/main.c6b5c55c.chunk.css.map",
"static/media/GraphQLLanguageService.js.flow": "/static/media/GraphQLLanguageService.js.5ab204b9.flow",
"static/media/autocompleteUtils.js.flow": "/static/media/autocompleteUtils.js.4ce7ba19.flow",
"static/media/getAutocompleteSuggestions.js.flow": "/static/media/getAutocompleteSuggestions.js.7f98f032.flow",
"static/media/getDefinition.js.flow": "/static/media/getDefinition.js.4dbec62f.flow",
"static/media/getDiagnostics.js.flow": "/static/media/getDiagnostics.js.65b0979a.flow",
"static/media/getHoverInformation.js.flow": "/static/media/getHoverInformation.js.d9411837.flow",
"static/media/getOutline.js.flow": "/static/media/getOutline.js.c04e3998.flow",
"static/media/index.js.flow": "/static/media/index.js.02c24280.flow",
"static/media/logo.png": "/static/media/logo.57ee3b60.png"
},
"entrypoints": [
"static/js/runtime-main.4aea9da3.js",
"static/js/2.03370bd3.chunk.js",
"static/css/main.c6b5c55c.chunk.css",
"static/js/main.04d74040.chunk.js"
]
}


@ -1 +0,0 @@
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="shortcut icon" href="/favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1,shrink-to-fit=no"/><meta name="theme-color" content="#000000"/><link rel="manifest" href="/manifest.json"/><link href="https://fonts.googleapis.com/css?family=Open+Sans:300,400,600,700|Source+Code+Pro:400,700" rel="stylesheet"><title>Super Graph - GraphQL API for Rails</title><link href="/static/css/main.c6b5c55c.chunk.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div><script>!function(i){function e(e){for(var r,t,n=e[0],o=e[1],u=e[2],l=0,f=[];l<n.length;l++)t=n[l],Object.prototype.hasOwnProperty.call(p,t)&&p[t]&&f.push(p[t][0]),p[t]=0;for(r in o)Object.prototype.hasOwnProperty.call(o,r)&&(i[r]=o[r]);for(s&&s(e);f.length;)f.shift()();return c.push.apply(c,u||[]),a()}function a(){for(var e,r=0;r<c.length;r++){for(var t=c[r],n=!0,o=1;o<t.length;o++){var u=t[o];0!==p[u]&&(n=!1)}n&&(c.splice(r--,1),e=l(l.s=t[0]))}return e}var t={},p={1:0},c=[];function l(e){if(t[e])return t[e].exports;var r=t[e]={i:e,l:!1,exports:{}};return i[e].call(r.exports,r,r.exports,l),r.l=!0,r.exports}l.m=i,l.c=t,l.d=function(e,r,t){l.o(e,r)||Object.defineProperty(e,r,{enumerable:!0,get:t})},l.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},l.t=function(r,e){if(1&e&&(r=l(r)),8&e)return r;if(4&e&&"object"==typeof r&&r&&r.__esModule)return r;var t=Object.create(null);if(l.r(t),Object.defineProperty(t,"default",{enumerable:!0,value:r}),2&e&&"string"!=typeof r)for(var n in r)l.d(t,n,function(e){return r[e]}.bind(null,n));return t},l.n=function(e){var r=e&&e.__esModule?function(){return e.default}:function(){return e};return l.d(r,"a",r),r},l.o=function(e,r){return Object.prototype.hasOwnProperty.call(e,r)},l.p="/";var r=this.webpackJsonpweb=this.webpackJsonpweb||[],n=r.push.bind(r);r.push=e,r=r.slice();for(var o=0;o<r.length;o++)e(r[o]);var s=n;a()}([])</script><script src="/static/js/2.03370bd3.chunk.js"></script><script src="/static/js/main.04d74040.chunk.js"></script></body></html>


@ -1,58 +0,0 @@
self.__precacheManifest = (self.__precacheManifest || []).concat([
{
"revision": "ecdae64182d05c64e7f7f200ed03a4ed",
"url": "/index.html"
},
{
"revision": "6e9467dc213a3e2b84ea",
"url": "/static/css/main.c6b5c55c.chunk.css"
},
{
"revision": "c156a125990ddf5dcc51",
"url": "/static/js/2.03370bd3.chunk.js"
},
{
"revision": "6e9467dc213a3e2b84ea",
"url": "/static/js/main.04d74040.chunk.js"
},
{
"revision": "427262b6771d3f49a7c5",
"url": "/static/js/runtime-main.4aea9da3.js"
},
{
"revision": "5ab204b9b95c06640dbefae9a65b1db2",
"url": "/static/media/GraphQLLanguageService.js.5ab204b9.flow"
},
{
"revision": "4ce7ba191f7ebee4426768f246b2f0e0",
"url": "/static/media/autocompleteUtils.js.4ce7ba19.flow"
},
{
"revision": "7f98f032085704c8943ec2d1925c7c84",
"url": "/static/media/getAutocompleteSuggestions.js.7f98f032.flow"
},
{
"revision": "4dbec62f1d8e8417afb9cbd19f1268c3",
"url": "/static/media/getDefinition.js.4dbec62f.flow"
},
{
"revision": "65b0979ac23feca49e4411883fd8eaab",
"url": "/static/media/getDiagnostics.js.65b0979a.flow"
},
{
"revision": "d94118379d362fc161aa1246bcc14d43",
"url": "/static/media/getHoverInformation.js.d9411837.flow"
},
{
"revision": "c04e3998712b37a96f0bfd283fa06b52",
"url": "/static/media/getOutline.js.c04e3998.flow"
},
{
"revision": "02c24280c5e4a7eb3c6cfcb079a8f1e3",
"url": "/static/media/index.js.02c24280.flow"
},
{
"revision": "57ee3b6084cb9d3c754cc12d25a98035",
"url": "/static/media/logo.57ee3b60.png"
}
]);


@ -1,2 +0,0 @@
body{margin:0;padding:0;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;background-color:#0f202d}code{font-family:source-code-pro,Menlo,Monaco,Consolas,Courier New,monospace}.playground>div:nth-child(2){height:calc(100vh - 131px)}
/*# sourceMappingURL=main.c6b5c55c.chunk.css.map */

File diff suppressed because one or more lines are too long


@@ -1,2 +0,0 @@
(this.webpackJsonpweb=this.webpackJsonpweb||[]).push([[0],{163:function(e,t,n){var r={".":61,"./":61,"./GraphQLLanguageService":117,"./GraphQLLanguageService.js":117,"./GraphQLLanguageService.js.flow":315,"./autocompleteUtils":91,"./autocompleteUtils.js":91,"./autocompleteUtils.js.flow":316,"./getAutocompleteSuggestions":77,"./getAutocompleteSuggestions.js":77,"./getAutocompleteSuggestions.js.flow":317,"./getDefinition":92,"./getDefinition.js":92,"./getDefinition.js.flow":318,"./getDiagnostics":94,"./getDiagnostics.js":94,"./getDiagnostics.js.flow":319,"./getHoverInformation":95,"./getHoverInformation.js":95,"./getHoverInformation.js.flow":320,"./getOutline":116,"./getOutline.js":116,"./getOutline.js.flow":321,"./index":61,"./index.js":61,"./index.js.flow":322};function o(e){var t=a(e);return n(t)}function a(e){if(!n.o(r,e)){var t=new Error("Cannot find module '"+e+"'");throw t.code="MODULE_NOT_FOUND",t}return r[e]}o.keys=function(){return Object.keys(r)},o.resolve=a,e.exports=o,o.id=163},190:function(e,t,n){"use strict";(function(e){var r=n(100),o=n(101),a=n(201),i=n(191),s=n(202),l=n(5),c=n.n(l),u=n(20),g=n(130),f=(n(441),window.fetch);window.fetch=function(){return arguments[1].credentials="include",Promise.resolve(f.apply(e,arguments))};var p=function(e){function t(){return Object(r.a)(this,t),Object(a.a)(this,Object(i.a)(t).apply(this,arguments))}return Object(s.a)(t,e),Object(o.a)(t,[{key:"render",value:function(){return c.a.createElement("div",null,c.a.createElement("header",{style:{background:"#09141b",color:"#03a9f4",letterSpacing:"0.15rem",height:"65px",display:"flex",alignItems:"center"}},c.a.createElement("h3",{style:{textDecoration:"none",margin:"0px",fontSize:"18px"}},c.a.createElement("span",{style:{textTransform:"uppercase",marginLeft:"20px",paddingRight:"10px",borderRight:"1px solid #fff"}},"Super Graph"),c.a.createElement("span",{style:{fontSize:"16px",marginLeft:"10px",color:"#fff"}},"Instant GraphQL"))),c.a.createElement(u.Provider,{store:g.store},c.a.createElement(g.Playground,{endpoint:"/api/v1/graphql",settings:"{ 'schema.polling.enable': false, 'request.credentials': 'include', 'general.betaUpdates': true, 'editor.reuseHeaders': true, 'editor.theme': 'dark' }"})))}}]),t}(l.Component);t.a=p}).call(this,n(32))},205:function(e,t,n){e.exports=n(206)},206:function(e,t,n){"use strict";n.r(t);var r=n(5),o=n.n(r),a=n(52),i=n.n(a),s=n(190);i.a.render(o.a.createElement(s.a,null),document.getElementById("root"))},441:function(e,t,n){}},[[205,1,2]]]);
//# sourceMappingURL=main.04d74040.chunk.js.map

File diff suppressed because one or more lines are too long


@@ -1,328 +0,0 @@
/**
* Copyright (c) Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*
* @flow
*/
import type {
DocumentNode,
FragmentSpreadNode,
FragmentDefinitionNode,
OperationDefinitionNode,
TypeDefinitionNode,
NamedTypeNode,
} from 'graphql';
import type {
CompletionItem,
DefinitionQueryResult,
Diagnostic,
GraphQLCache,
GraphQLConfig,
GraphQLProjectConfig,
Uri,
} from 'graphql-language-service-types';
import type {Position} from 'graphql-language-service-utils';
import type {Hover} from 'vscode-languageserver-types';
import {Kind, parse, print} from 'graphql';
import {getAutocompleteSuggestions} from './getAutocompleteSuggestions';
import {getHoverInformation} from './getHoverInformation';
import {validateQuery, getRange, SEVERITY} from './getDiagnostics';
import {
getDefinitionQueryResultForFragmentSpread,
getDefinitionQueryResultForDefinitionNode,
getDefinitionQueryResultForNamedType,
} from './getDefinition';
import {getASTNodeAtPosition} from 'graphql-language-service-utils';
const {
FRAGMENT_DEFINITION,
OBJECT_TYPE_DEFINITION,
INTERFACE_TYPE_DEFINITION,
ENUM_TYPE_DEFINITION,
UNION_TYPE_DEFINITION,
SCALAR_TYPE_DEFINITION,
INPUT_OBJECT_TYPE_DEFINITION,
SCALAR_TYPE_EXTENSION,
OBJECT_TYPE_EXTENSION,
INTERFACE_TYPE_EXTENSION,
UNION_TYPE_EXTENSION,
ENUM_TYPE_EXTENSION,
INPUT_OBJECT_TYPE_EXTENSION,
DIRECTIVE_DEFINITION,
FRAGMENT_SPREAD,
OPERATION_DEFINITION,
NAMED_TYPE,
} = Kind;
export class GraphQLLanguageService {
_graphQLCache: GraphQLCache;
_graphQLConfig: GraphQLConfig;
constructor(cache: GraphQLCache) {
this._graphQLCache = cache;
this._graphQLConfig = cache.getGraphQLConfig();
}
async getDiagnostics(
query: string,
uri: Uri,
isRelayCompatMode?: boolean,
): Promise<Array<Diagnostic>> {
// Perform syntax diagnostics first, as this doesn't require
// schema/fragment definitions or even the project configuration.
let queryHasExtensions = false;
const projectConfig = this._graphQLConfig.getConfigForFile(uri);
const schemaPath = projectConfig.schemaPath;
try {
const queryAST = parse(query);
if (!schemaPath || uri !== schemaPath) {
queryHasExtensions = queryAST.definitions.some(definition => {
switch (definition.kind) {
case OBJECT_TYPE_DEFINITION:
case INTERFACE_TYPE_DEFINITION:
case ENUM_TYPE_DEFINITION:
case UNION_TYPE_DEFINITION:
case SCALAR_TYPE_DEFINITION:
case INPUT_OBJECT_TYPE_DEFINITION:
case SCALAR_TYPE_EXTENSION:
case OBJECT_TYPE_EXTENSION:
case INTERFACE_TYPE_EXTENSION:
case UNION_TYPE_EXTENSION:
case ENUM_TYPE_EXTENSION:
case INPUT_OBJECT_TYPE_EXTENSION:
case DIRECTIVE_DEFINITION:
return true;
}
return false;
});
}
} catch (error) {
const range = getRange(error.locations[0], query);
return [
{
severity: SEVERITY.ERROR,
message: error.message,
source: 'GraphQL: Syntax',
range,
},
];
}
// If there's a matching config, prepare to run validation.
let source = query;
const fragmentDefinitions = await this._graphQLCache.getFragmentDefinitions(
projectConfig,
);
const fragmentDependencies = await this._graphQLCache.getFragmentDependencies(
query,
fragmentDefinitions,
);
const dependenciesSource = fragmentDependencies.reduce(
(prev, cur) => `${prev} ${print(cur.definition)}`,
'',
);
source = `${source} ${dependenciesSource}`;
let validationAst = null;
try {
validationAst = parse(source);
} catch (error) {
// The query string was already checked to parse properly, so errors
// from this parse must come from corrupted fragment dependencies.
// For IDEs we don't care about errors outside of the currently edited
// query, so we return an empty array here.
return [];
}
// Check if there are custom validation rules to be used
let customRules;
const customRulesModulePath =
projectConfig.extensions.customValidationRules;
if (customRulesModulePath) {
/* eslint-disable no-implicit-coercion */
const rulesPath = require.resolve(`${customRulesModulePath}`);
if (rulesPath) {
customRules = require(`${rulesPath}`)(this._graphQLConfig);
}
/* eslint-enable no-implicit-coercion */
}
const schema = await this._graphQLCache
.getSchema(projectConfig.projectName, queryHasExtensions)
.catch(() => null);
if (!schema) {
return [];
}
return validateQuery(validationAst, schema, customRules, isRelayCompatMode);
}
async getAutocompleteSuggestions(
query: string,
position: Position,
filePath: Uri,
): Promise<Array<CompletionItem>> {
const projectConfig = this._graphQLConfig.getConfigForFile(filePath);
const schema = await this._graphQLCache
.getSchema(projectConfig.projectName)
.catch(() => null);
if (schema) {
return getAutocompleteSuggestions(schema, query, position);
}
return [];
}
async getHoverInformation(
query: string,
position: Position,
filePath: Uri,
): Promise<Hover.contents> {
const projectConfig = this._graphQLConfig.getConfigForFile(filePath);
const schema = await this._graphQLCache
.getSchema(projectConfig.projectName)
.catch(() => null);
if (schema) {
return getHoverInformation(schema, query, position);
}
return '';
}
async getDefinition(
query: string,
position: Position,
filePath: Uri,
): Promise<?DefinitionQueryResult> {
const projectConfig = this._graphQLConfig.getConfigForFile(filePath);
let ast;
try {
ast = parse(query);
} catch (error) {
return null;
}
const node = getASTNodeAtPosition(query, ast, position);
if (node) {
switch (node.kind) {
case FRAGMENT_SPREAD:
return this._getDefinitionForFragmentSpread(
query,
ast,
node,
filePath,
projectConfig,
);
case FRAGMENT_DEFINITION:
case OPERATION_DEFINITION:
return getDefinitionQueryResultForDefinitionNode(
filePath,
query,
(node: FragmentDefinitionNode | OperationDefinitionNode),
);
case NAMED_TYPE:
return this._getDefinitionForNamedType(
query,
ast,
node,
filePath,
projectConfig,
);
}
}
return null;
}
async _getDefinitionForNamedType(
query: string,
ast: DocumentNode,
node: NamedTypeNode,
filePath: Uri,
projectConfig: GraphQLProjectConfig,
): Promise<?DefinitionQueryResult> {
const objectTypeDefinitions = await this._graphQLCache.getObjectTypeDefinitions(
projectConfig,
);
const dependencies = await this._graphQLCache.getObjectTypeDependenciesForAST(
ast,
objectTypeDefinitions,
);
const localObjectTypeDefinitions = ast.definitions.filter(
definition =>
definition.kind === OBJECT_TYPE_DEFINITION ||
definition.kind === INPUT_OBJECT_TYPE_DEFINITION ||
definition.kind === ENUM_TYPE_DEFINITION,
);
const typeCastedDefs = ((localObjectTypeDefinitions: any): Array<
TypeDefinitionNode,
>);
const localOperationDefinitionInfos = typeCastedDefs.map(
(definition: TypeDefinitionNode) => ({
filePath,
content: query,
definition,
}),
);
const result = await getDefinitionQueryResultForNamedType(
query,
node,
dependencies.concat(localOperationDefinitionInfos),
);
return result;
}
async _getDefinitionForFragmentSpread(
query: string,
ast: DocumentNode,
node: FragmentSpreadNode,
filePath: Uri,
projectConfig: GraphQLProjectConfig,
): Promise<?DefinitionQueryResult> {
const fragmentDefinitions = await this._graphQLCache.getFragmentDefinitions(
projectConfig,
);
const dependencies = await this._graphQLCache.getFragmentDependenciesForAST(
ast,
fragmentDefinitions,
);
const localFragDefinitions = ast.definitions.filter(
definition => definition.kind === FRAGMENT_DEFINITION,
);
const typeCastedDefs = ((localFragDefinitions: any): Array<
FragmentDefinitionNode,
>);
const localFragInfos = typeCastedDefs.map(
(definition: FragmentDefinitionNode) => ({
filePath,
content: query,
definition,
}),
);
const result = await getDefinitionQueryResultForFragmentSpread(
query,
node,
dependencies.concat(localFragInfos),
);
return result;
}
}

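A minimal usage sketch of the service above. The `cache` value is assumed to be a GraphQLCache instance built elsewhere, and the file URI is hypothetical:

// Hypothetical setup, inside an async function:
const service = new GraphQLLanguageService(cache);
// Each method takes the raw query text plus the URI of the file being edited.
const diagnostics = await service.getDiagnostics(
  '{ hello }',
  'file:///app/query.graphql',
);
const completions = await service.getAutocompleteSuggestions(
  '{ he',
  {line: 0, character: 4},
  'file:///app/query.graphql',
);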

@@ -1,204 +0,0 @@
/**
* Copyright (c) Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*
* @flow
*/
import type {GraphQLField, GraphQLSchema, GraphQLType} from 'graphql';
import {isCompositeType} from 'graphql';
import {
SchemaMetaFieldDef,
TypeMetaFieldDef,
TypeNameMetaFieldDef,
} from 'graphql/type/introspection';
import type {
CompletionItem,
ContextToken,
State,
TypeInfo,
} from 'graphql-language-service-types';
// Utility for returning the state representing the Definition this token state
// is within, if any.
export function getDefinitionState(tokenState: State): ?State {
let definitionState;
forEachState(tokenState, state => {
switch (state.kind) {
case 'Query':
case 'ShortQuery':
case 'Mutation':
case 'Subscription':
case 'FragmentDefinition':
definitionState = state;
break;
}
});
return definitionState;
}
// Gets the field definition given a type and field name
export function getFieldDef(
schema: GraphQLSchema,
type: GraphQLType,
fieldName: string,
): ?GraphQLField<*, *> {
if (fieldName === SchemaMetaFieldDef.name && schema.getQueryType() === type) {
return SchemaMetaFieldDef;
}
if (fieldName === TypeMetaFieldDef.name && schema.getQueryType() === type) {
return TypeMetaFieldDef;
}
if (fieldName === TypeNameMetaFieldDef.name && isCompositeType(type)) {
return TypeNameMetaFieldDef;
}
if (type.getFields && typeof type.getFields === 'function') {
return (type.getFields()[fieldName]: any);
}
return null;
}
// Utility for iterating through a CodeMirror parse state stack bottom-up.
export function forEachState(
stack: State,
fn: (state: State) => ?TypeInfo,
): void {
const reverseStateStack = [];
let state = stack;
while (state && state.kind) {
reverseStateStack.push(state);
state = state.prevState;
}
for (let i = reverseStateStack.length - 1; i >= 0; i--) {
fn(reverseStateStack[i]);
}
}
export function objectValues(object: Object): Array<any> {
const keys = Object.keys(object);
const len = keys.length;
const values = new Array(len);
for (let i = 0; i < len; ++i) {
values[i] = object[keys[i]];
}
return values;
}
// Create the expected hint response given a possible list and a token
export function hintList(
token: ContextToken,
list: Array<CompletionItem>,
): Array<CompletionItem> {
return filterAndSortList(list, normalizeText(token.string));
}
// Given a list of hint entries and currently typed text, sort and filter to
// provide a concise list.
function filterAndSortList(
list: Array<CompletionItem>,
text: string,
): Array<CompletionItem> {
if (!text) {
return filterNonEmpty(list, entry => !entry.isDeprecated);
}
const byProximity = list.map(entry => ({
proximity: getProximity(normalizeText(entry.label), text),
entry,
}));
const conciseMatches = filterNonEmpty(
filterNonEmpty(byProximity, pair => pair.proximity <= 2),
pair => !pair.entry.isDeprecated,
);
const sortedMatches = conciseMatches.sort(
(a, b) =>
(a.entry.isDeprecated ? 1 : 0) - (b.entry.isDeprecated ? 1 : 0) ||
a.proximity - b.proximity ||
a.entry.label.length - b.entry.label.length,
);
return sortedMatches.map(pair => pair.entry);
}
// Filters the array by the predicate, unless it results in an empty array,
// in which case return the original array.
function filterNonEmpty(
array: Array<Object>,
predicate: (entry: Object) => boolean,
): Array<Object> {
const filtered = array.filter(predicate);
return filtered.length === 0 ? array : filtered;
}
function normalizeText(text: string): string {
return text.toLowerCase().replace(/\W/g, '');
}
// Determine a numeric proximity for a suggestion based on current text.
function getProximity(suggestion: string, text: string): number {
// start with lexical distance
let proximity = lexicalDistance(text, suggestion);
if (suggestion.length > text.length) {
// do not penalize long suggestions.
proximity -= suggestion.length - text.length - 1;
// penalize suggestions not starting with this phrase
proximity += suggestion.indexOf(text) === 0 ? 0 : 0.5;
}
return proximity;
}
/**
* Computes the lexical distance between strings A and B.
*
* The "distance" between two strings is given by counting the minimum number
* of edits needed to transform string A into string B. An edit can be an
* insertion, deletion, or substitution of a single character, or a swap of two
* adjacent characters.
*
* This distance can be useful for detecting typos in input or for sorting
* suggestions by similarity.
*
* @param {string} a
* @param {string} b
* @return {int} distance in number of edits
*/
function lexicalDistance(a: string, b: string): number {
let i;
let j;
const d = [];
const aLength = a.length;
const bLength = b.length;
for (i = 0; i <= aLength; i++) {
d[i] = [i];
}
for (j = 1; j <= bLength; j++) {
d[0][j] = j;
}
for (i = 1; i <= aLength; i++) {
for (j = 1; j <= bLength; j++) {
const cost = a[i - 1] === b[j - 1] ? 0 : 1;
d[i][j] = Math.min(
d[i - 1][j] + 1,
d[i][j - 1] + 1,
d[i - 1][j - 1] + cost,
);
if (i > 1 && j > 1 && a[i - 1] === b[j - 2] && a[i - 2] === b[j - 1]) {
d[i][j] = Math.min(d[i][j], d[i - 2][j - 2] + cost);
}
}
}
return d[aLength][bLength];
}

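For intuition, lexicalDistance above is a Damerau-Levenshtein edit distance: insertions, deletions, substitutions, and swaps of adjacent characters each count as one edit. A few hand-computed values (the function is module-private, so this is illustrative only):

lexicalDistance('qeury', 'query');    // 1: one swap of the adjacent 'e'/'u'
lexicalDistance('quer', 'query');     // 1: one insertion
lexicalDistance('mutation', 'query'); // 7: mostly substitutions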

@@ -1,665 +0,0 @@
/**
* Copyright (c) Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*
* @flow
*/
import type {
FragmentDefinitionNode,
GraphQLDirective,
GraphQLSchema,
} from 'graphql';
import type {
CompletionItem,
ContextToken,
State,
TypeInfo,
} from 'graphql-language-service-types';
import type {Position} from 'graphql-language-service-utils';
import {
GraphQLBoolean,
GraphQLEnumType,
GraphQLInputObjectType,
GraphQLList,
SchemaMetaFieldDef,
TypeMetaFieldDef,
TypeNameMetaFieldDef,
assertAbstractType,
doTypesOverlap,
getNamedType,
getNullableType,
isAbstractType,
isCompositeType,
isInputType,
} from 'graphql';
import {CharacterStream, onlineParser} from 'graphql-language-service-parser';
import {
forEachState,
getDefinitionState,
getFieldDef,
hintList,
objectValues,
} from './autocompleteUtils';
/**
* Given GraphQLSchema, queryText, and context of the current position within
* the source text, provide a list of typeahead entries.
*/
export function getAutocompleteSuggestions(
schema: GraphQLSchema,
queryText: string,
cursor: Position,
contextToken?: ContextToken,
): Array<CompletionItem> {
const token = contextToken || getTokenAtPosition(queryText, cursor);
const state =
token.state.kind === 'Invalid' ? token.state.prevState : token.state;
// relieve flow errors by checking if `state` exists
if (!state) {
return [];
}
const kind = state.kind;
const step = state.step;
const typeInfo = getTypeInfo(schema, token.state);
// Definition kinds
if (kind === 'Document') {
return hintList(token, [
{label: 'query'},
{label: 'mutation'},
{label: 'subscription'},
{label: 'fragment'},
{label: '{'},
]);
}
// Field names
if (kind === 'SelectionSet' || kind === 'Field' || kind === 'AliasedField') {
return getSuggestionsForFieldNames(token, typeInfo, schema);
}
// Argument names
if (kind === 'Arguments' || (kind === 'Argument' && step === 0)) {
const argDefs = typeInfo.argDefs;
if (argDefs) {
return hintList(
token,
argDefs.map(argDef => ({
label: argDef.name,
detail: String(argDef.type),
documentation: argDef.description,
})),
);
}
}
// Input Object fields
if (kind === 'ObjectValue' || (kind === 'ObjectField' && step === 0)) {
if (typeInfo.objectFieldDefs) {
const objectFields = objectValues(typeInfo.objectFieldDefs);
return hintList(
token,
objectFields.map(field => ({
label: field.name,
detail: String(field.type),
documentation: field.description,
})),
);
}
}
// Input values: Enum and Boolean
if (
kind === 'EnumValue' ||
(kind === 'ListValue' && step === 1) ||
(kind === 'ObjectField' && step === 2) ||
(kind === 'Argument' && step === 2)
) {
return getSuggestionsForInputValues(token, typeInfo);
}
// Fragment type conditions
if (
(kind === 'TypeCondition' && step === 1) ||
(kind === 'NamedType' &&
state.prevState != null &&
state.prevState.kind === 'TypeCondition')
) {
return getSuggestionsForFragmentTypeConditions(token, typeInfo, schema);
}
// Fragment spread names
if (kind === 'FragmentSpread' && step === 1) {
return getSuggestionsForFragmentSpread(token, typeInfo, schema, queryText);
}
// Variable definition types
if (
(kind === 'VariableDefinition' && step === 2) ||
(kind === 'ListType' && step === 1) ||
(kind === 'NamedType' &&
state.prevState &&
(state.prevState.kind === 'VariableDefinition' ||
state.prevState.kind === 'ListType'))
) {
return getSuggestionsForVariableDefinition(token, schema);
}
// Directive names
if (kind === 'Directive') {
return getSuggestionsForDirective(token, state, schema);
}
return [];
}
// Helper functions to get suggestions for each kind
function getSuggestionsForFieldNames(
token: ContextToken,
typeInfo: TypeInfo,
schema: GraphQLSchema,
): Array<CompletionItem> {
if (typeInfo.parentType) {
const parentType = typeInfo.parentType;
const fields =
parentType.getFields instanceof Function
? objectValues(parentType.getFields())
: [];
if (isAbstractType(parentType)) {
fields.push(TypeNameMetaFieldDef);
}
if (parentType === schema.getQueryType()) {
fields.push(SchemaMetaFieldDef, TypeMetaFieldDef);
}
return hintList(
token,
fields.map(field => ({
label: field.name,
detail: String(field.type),
documentation: field.description,
isDeprecated: field.isDeprecated,
deprecationReason: field.deprecationReason,
})),
);
}
return [];
}
function getSuggestionsForInputValues(
token: ContextToken,
typeInfo: TypeInfo,
): Array<CompletionItem> {
const namedInputType = getNamedType(typeInfo.inputType);
if (namedInputType instanceof GraphQLEnumType) {
const values = namedInputType.getValues();
return hintList(
token,
values.map(value => ({
label: value.name,
detail: String(namedInputType),
documentation: value.description,
isDeprecated: value.isDeprecated,
deprecationReason: value.deprecationReason,
})),
);
} else if (namedInputType === GraphQLBoolean) {
return hintList(token, [
{
label: 'true',
detail: String(GraphQLBoolean),
documentation: 'Not false.',
},
{
label: 'false',
detail: String(GraphQLBoolean),
documentation: 'Not true.',
},
]);
}
return [];
}
function getSuggestionsForFragmentTypeConditions(
token: ContextToken,
typeInfo: TypeInfo,
schema: GraphQLSchema,
): Array<CompletionItem> {
let possibleTypes;
if (typeInfo.parentType) {
if (isAbstractType(typeInfo.parentType)) {
const abstractType = assertAbstractType(typeInfo.parentType);
// Collect both the possible Object types as well as the interfaces
// they implement.
const possibleObjTypes = schema.getPossibleTypes(abstractType);
const possibleIfaceMap = Object.create(null);
possibleObjTypes.forEach(type => {
type.getInterfaces().forEach(iface => {
possibleIfaceMap[iface.name] = iface;
});
});
possibleTypes = possibleObjTypes.concat(objectValues(possibleIfaceMap));
} else {
// The parent type is a non-abstract Object type, so the only possible
// type that can be used is that same type.
possibleTypes = [typeInfo.parentType];
}
} else {
const typeMap = schema.getTypeMap();
possibleTypes = objectValues(typeMap).filter(isCompositeType);
}
return hintList(
token,
possibleTypes.map(type => {
const namedType = getNamedType(type);
return {
label: String(type),
documentation: (namedType && namedType.description) || '',
};
}),
);
}
function getSuggestionsForFragmentSpread(
token: ContextToken,
typeInfo: TypeInfo,
schema: GraphQLSchema,
queryText: string,
): Array<CompletionItem> {
const typeMap = schema.getTypeMap();
const defState = getDefinitionState(token.state);
const fragments = getFragmentDefinitions(queryText);
// Filter down to only the fragments which may exist here.
const relevantFrags = fragments.filter(
frag =>
// Only include fragments with known types.
typeMap[frag.typeCondition.name.value] &&
// Only include fragments which are not cyclic.
!(
defState &&
defState.kind === 'FragmentDefinition' &&
defState.name === frag.name.value
) &&
// Only include fragments which could possibly be spread here.
isCompositeType(typeInfo.parentType) &&
isCompositeType(typeMap[frag.typeCondition.name.value]) &&
doTypesOverlap(
schema,
typeInfo.parentType,
typeMap[frag.typeCondition.name.value],
),
);
return hintList(
token,
relevantFrags.map(frag => ({
label: frag.name.value,
detail: String(typeMap[frag.typeCondition.name.value]),
documentation: `fragment ${frag.name.value} on ${
frag.typeCondition.name.value
}`,
})),
);
}
function getFragmentDefinitions(
queryText: string,
): Array<FragmentDefinitionNode> {
const fragmentDefs = [];
runOnlineParser(queryText, (_, state) => {
if (state.kind === 'FragmentDefinition' && state.name && state.type) {
fragmentDefs.push({
kind: 'FragmentDefinition',
name: {
kind: 'Name',
value: state.name,
},
selectionSet: {
kind: 'SelectionSet',
selections: [],
},
typeCondition: {
kind: 'NamedType',
name: {
kind: 'Name',
value: state.type,
},
},
});
}
});
return fragmentDefs;
}
function getSuggestionsForVariableDefinition(
token: ContextToken,
schema: GraphQLSchema,
): Array<CompletionItem> {
const inputTypeMap = schema.getTypeMap();
const inputTypes = objectValues(inputTypeMap).filter(isInputType);
return hintList(
token,
inputTypes.map(type => ({
label: type.name,
documentation: type.description,
})),
);
}
function getSuggestionsForDirective(
token: ContextToken,
state: State,
schema: GraphQLSchema,
): Array<CompletionItem> {
if (state.prevState && state.prevState.kind) {
const directives = schema
.getDirectives()
.filter(directive => canUseDirective(state.prevState, directive));
return hintList(
token,
directives.map(directive => ({
label: directive.name,
documentation: directive.description || '',
})),
);
}
return [];
}
export function getTokenAtPosition(
queryText: string,
cursor: Position,
): ContextToken {
let styleAtCursor = null;
let stateAtCursor = null;
let stringAtCursor = null;
const token = runOnlineParser(queryText, (stream, state, style, index) => {
if (index === cursor.line) {
if (stream.getCurrentPosition() >= cursor.character) {
styleAtCursor = style;
stateAtCursor = {...state};
stringAtCursor = stream.current();
return 'BREAK';
}
}
});
// Return the state/style of the parsed token in case those at the cursor
// aren't available.
return {
start: token.start,
end: token.end,
string: stringAtCursor || token.string,
state: stateAtCursor || token.state,
style: styleAtCursor || token.style,
};
}
/**
* Provides a utility function to parse a given query text and construct a
* `token` context object.
* A token context provides useful information about the token/style that
* CharacterStream currently possesses, as well as the end state and style
* of the token.
*/
type callbackFnType = (
stream: CharacterStream,
state: State,
style: string,
index: number,
) => void | 'BREAK';
function runOnlineParser(
queryText: string,
callback: callbackFnType,
): ContextToken {
const lines = queryText.split('\n');
const parser = onlineParser();
let state = parser.startState();
let style = '';
let stream: CharacterStream = new CharacterStream('');
for (let i = 0; i < lines.length; i++) {
stream = new CharacterStream(lines[i]);
while (!stream.eol()) {
style = parser.token(stream, state);
const code = callback(stream, state, style, i);
if (code === 'BREAK') {
break;
}
}
// The while loop above won't run if the line is empty.
// Run the callback one more time to catch this case.
callback(stream, state, style, i);
if (!state.kind) {
state = parser.startState();
}
}
return {
start: stream.getStartOfToken(),
end: stream.getCurrentPosition(),
string: stream.current(),
state,
style,
};
}
function canUseDirective(
state: $PropertyType<State, 'prevState'>,
directive: GraphQLDirective,
): boolean {
if (!state || !state.kind) {
return false;
}
const kind = state.kind;
const locations = directive.locations;
switch (kind) {
case 'Query':
return locations.indexOf('QUERY') !== -1;
case 'Mutation':
return locations.indexOf('MUTATION') !== -1;
case 'Subscription':
return locations.indexOf('SUBSCRIPTION') !== -1;
case 'Field':
case 'AliasedField':
return locations.indexOf('FIELD') !== -1;
case 'FragmentDefinition':
return locations.indexOf('FRAGMENT_DEFINITION') !== -1;
case 'FragmentSpread':
return locations.indexOf('FRAGMENT_SPREAD') !== -1;
case 'InlineFragment':
return locations.indexOf('INLINE_FRAGMENT') !== -1;
// Schema Definitions
case 'SchemaDef':
return locations.indexOf('SCHEMA') !== -1;
case 'ScalarDef':
return locations.indexOf('SCALAR') !== -1;
case 'ObjectTypeDef':
return locations.indexOf('OBJECT') !== -1;
case 'FieldDef':
return locations.indexOf('FIELD_DEFINITION') !== -1;
case 'InterfaceDef':
return locations.indexOf('INTERFACE') !== -1;
case 'UnionDef':
return locations.indexOf('UNION') !== -1;
case 'EnumDef':
return locations.indexOf('ENUM') !== -1;
case 'EnumValue':
return locations.indexOf('ENUM_VALUE') !== -1;
case 'InputDef':
return locations.indexOf('INPUT_OBJECT') !== -1;
case 'InputValueDef':
const prevStateKind = state.prevState && state.prevState.kind;
switch (prevStateKind) {
case 'ArgumentsDef':
return locations.indexOf('ARGUMENT_DEFINITION') !== -1;
case 'InputDef':
return locations.indexOf('INPUT_FIELD_DEFINITION') !== -1;
}
}
return false;
}
// Utility for collecting rich type information given any token's state
// from the graphql-mode parser.
export function getTypeInfo(
schema: GraphQLSchema,
tokenState: State,
): TypeInfo {
let argDef;
let argDefs;
let directiveDef;
let enumValue;
let fieldDef;
let inputType;
let objectFieldDefs;
let parentType;
let type;
forEachState(tokenState, state => {
switch (state.kind) {
case 'Query':
case 'ShortQuery':
type = schema.getQueryType();
break;
case 'Mutation':
type = schema.getMutationType();
break;
case 'Subscription':
type = schema.getSubscriptionType();
break;
case 'InlineFragment':
case 'FragmentDefinition':
if (state.type) {
type = schema.getType(state.type);
}
break;
case 'Field':
case 'AliasedField':
if (!type || !state.name) {
fieldDef = null;
} else {
fieldDef = parentType
? getFieldDef(schema, parentType, state.name)
: null;
type = fieldDef ? fieldDef.type : null;
}
break;
case 'SelectionSet':
parentType = getNamedType(type);
break;
case 'Directive':
directiveDef = state.name ? schema.getDirective(state.name) : null;
break;
case 'Arguments':
if (!state.prevState) {
argDefs = null;
} else {
switch (state.prevState.kind) {
case 'Field':
argDefs = fieldDef && fieldDef.args;
break;
case 'Directive':
argDefs = directiveDef && directiveDef.args;
break;
case 'AliasedField':
const name = state.prevState && state.prevState.name;
if (!name) {
argDefs = null;
break;
}
const field = parentType
? getFieldDef(schema, parentType, name)
: null;
if (!field) {
argDefs = null;
break;
}
argDefs = field.args;
break;
default:
argDefs = null;
break;
}
}
break;
case 'Argument':
if (argDefs) {
for (let i = 0; i < argDefs.length; i++) {
if (argDefs[i].name === state.name) {
argDef = argDefs[i];
break;
}
}
}
inputType = argDef && argDef.type;
break;
case 'EnumValue':
const enumType = getNamedType(inputType);
enumValue =
enumType instanceof GraphQLEnumType
? find(enumType.getValues(), val => val.value === state.name)
: null;
break;
case 'ListValue':
const nullableType = getNullableType(inputType);
inputType =
nullableType instanceof GraphQLList ? nullableType.ofType : null;
break;
case 'ObjectValue':
const objectType = getNamedType(inputType);
objectFieldDefs =
objectType instanceof GraphQLInputObjectType
? objectType.getFields()
: null;
break;
case 'ObjectField':
const objectField =
state.name && objectFieldDefs ? objectFieldDefs[state.name] : null;
inputType = objectField && objectField.type;
break;
case 'NamedType':
if (state.name) {
type = schema.getType(state.name);
}
break;
}
});
return {
argDef,
argDefs,
directiveDef,
enumValue,
fieldDef,
inputType,
objectFieldDefs,
parentType,
type,
};
}
// Returns the first item in the array which causes predicate to return truthy.
function find(array, predicate) {
for (let i = 0; i < array.length; i++) {
if (predicate(array[i])) {
return array[i];
}
}
return null;
}

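A minimal usage sketch for getAutocompleteSuggestions, assuming a graphql-js schema. A plain {line, character} object stands in for Position here, since getTokenAtPosition only reads those two fields:

import {buildSchema} from 'graphql';
import {getAutocompleteSuggestions} from './getAutocompleteSuggestions';

const schema = buildSchema('type Query { hello: String world: String }');
// The cursor sits just after the partially typed field name "he".
const items = getAutocompleteSuggestions(schema, '{ he', {line: 0, character: 4});
// hintList filters and sorts by proximity, so 'hello' should be the top
// entry, shaped like {label: 'hello', detail: 'String', ...}.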

@@ -1,136 +0,0 @@
/**
* Copyright (c) Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*
* @flow
*/
import type {
ASTNode,
FragmentSpreadNode,
FragmentDefinitionNode,
OperationDefinitionNode,
NamedTypeNode,
TypeDefinitionNode,
} from 'graphql';
import type {
Definition,
DefinitionQueryResult,
FragmentInfo,
Position,
Range,
Uri,
ObjectTypeInfo,
} from 'graphql-language-service-types';
import {locToRange, offsetToPosition} from 'graphql-language-service-utils';
import invariant from 'assert';
export const LANGUAGE = 'GraphQL';
function getRange(text: string, node: ASTNode): Range {
const location = node.loc;
invariant(location, 'Expected ASTNode to have a location.');
return locToRange(text, location);
}
function getPosition(text: string, node: ASTNode): Position {
const location = node.loc;
invariant(location, 'Expected ASTNode to have a location.');
return offsetToPosition(text, location.start);
}
export async function getDefinitionQueryResultForNamedType(
text: string,
node: NamedTypeNode,
dependencies: Array<ObjectTypeInfo>,
): Promise<DefinitionQueryResult> {
const name = node.name.value;
const defNodes = dependencies.filter(
({definition}) => definition.name && definition.name.value === name,
);
if (defNodes.length === 0) {
process.stderr.write(`Definition not found for GraphQL type ${name}`);
return {queryRange: [], definitions: []};
}
const definitions: Array<Definition> = defNodes.map(
({filePath, content, definition}) =>
getDefinitionForNodeDefinition(filePath || '', content, definition),
);
return {
definitions,
queryRange: definitions.map(_ => getRange(text, node)),
};
}
export async function getDefinitionQueryResultForFragmentSpread(
text: string,
fragment: FragmentSpreadNode,
dependencies: Array<FragmentInfo>,
): Promise<DefinitionQueryResult> {
const name = fragment.name.value;
const defNodes = dependencies.filter(
({definition}) => definition.name.value === name,
);
if (defNodes.length === 0) {
process.stderr.write(`Definition not found for GraphQL fragment ${name}`);
return {queryRange: [], definitions: []};
}
const definitions: Array<Definition> = defNodes.map(
({filePath, content, definition}) =>
getDefinitionForFragmentDefinition(filePath || '', content, definition),
);
return {
definitions,
queryRange: definitions.map(_ => getRange(text, fragment)),
};
}
export function getDefinitionQueryResultForDefinitionNode(
path: Uri,
text: string,
definition: FragmentDefinitionNode | OperationDefinitionNode,
): DefinitionQueryResult {
return {
definitions: [getDefinitionForFragmentDefinition(path, text, definition)],
queryRange: definition.name ? [getRange(text, definition.name)] : [],
};
}
function getDefinitionForFragmentDefinition(
path: Uri,
text: string,
definition: FragmentDefinitionNode | OperationDefinitionNode,
): Definition {
const name = definition.name;
invariant(name, 'Expected ASTNode to have a Name.');
return {
path,
position: getPosition(text, definition),
range: getRange(text, definition),
name: name.value || '',
language: LANGUAGE,
// This is a file inside the project root, good enough for now
projectRoot: path,
};
}
function getDefinitionForNodeDefinition(
path: Uri,
text: string,
definition: TypeDefinitionNode,
): Definition {
const name = definition.name;
invariant(name, 'Expected ASTNode to have a Name.');
return {
path,
position: getPosition(text, definition),
range: getRange(text, definition),
name: name.value || '',
language: LANGUAGE,
// This is a file inside the project root, good enough for now
projectRoot: path,
};
}


@@ -1,172 +0,0 @@
/**
* Copyright (c) Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*
* @flow
*/
import type {
ASTNode,
DocumentNode,
GraphQLError,
GraphQLSchema,
Location,
SourceLocation,
} from 'graphql';
import type {
Diagnostic,
CustomValidationRule,
} from 'graphql-language-service-types';
import invariant from 'assert';
import {findDeprecatedUsages, parse} from 'graphql';
import {CharacterStream, onlineParser} from 'graphql-language-service-parser';
import {
Position,
Range,
validateWithCustomRules,
} from 'graphql-language-service-utils';
export const SEVERITY = {
ERROR: 1,
WARNING: 2,
INFORMATION: 3,
HINT: 4,
};
export function getDiagnostics(
query: string,
schema: ?GraphQLSchema = null,
customRules?: Array<CustomValidationRule>,
isRelayCompatMode?: boolean,
): Array<Diagnostic> {
let ast = null;
try {
ast = parse(query);
} catch (error) {
const range = getRange(error.locations[0], query);
return [
{
severity: SEVERITY.ERROR,
message: error.message,
source: 'GraphQL: Syntax',
range,
},
];
}
return validateQuery(ast, schema, customRules, isRelayCompatMode);
}
export function validateQuery(
ast: DocumentNode,
schema: ?GraphQLSchema = null,
customRules?: Array<CustomValidationRule>,
isRelayCompatMode?: boolean,
): Array<Diagnostic> {
// We cannot validate the query unless a schema is provided.
if (!schema) {
return [];
}
const validationErrorAnnotations = mapCat(
validateWithCustomRules(schema, ast, customRules, isRelayCompatMode),
error => annotations(error, SEVERITY.ERROR, 'Validation'),
);
// Note: findDeprecatedUsages was added in graphql@0.9.0, but we want to
// support older versions of graphql-js.
const deprecationWarningAnnotations = !findDeprecatedUsages
? []
: mapCat(findDeprecatedUsages(schema, ast), error =>
annotations(error, SEVERITY.WARNING, 'Deprecation'),
);
return validationErrorAnnotations.concat(deprecationWarningAnnotations);
}
// General utility for map-cating (aka flat-mapping).
function mapCat<T>(
array: Array<T>,
mapper: (item: T) => Array<any>,
): Array<any> {
return Array.prototype.concat.apply([], array.map(mapper));
}
function annotations(
error: GraphQLError,
severity: number,
type: string,
): Array<Diagnostic> {
if (!error.nodes) {
return [];
}
return error.nodes.map(node => {
const highlightNode =
node.kind !== 'Variable' && node.name
? node.name
: node.variable
? node.variable
: node;
invariant(error.locations, 'GraphQL validation error requires locations.');
const loc = error.locations[0];
const highlightLoc = getLocation(highlightNode);
const end = loc.column + (highlightLoc.end - highlightLoc.start);
return {
source: `GraphQL: ${type}`,
message: error.message,
severity,
range: new Range(
new Position(loc.line - 1, loc.column - 1),
new Position(loc.line - 1, end),
),
};
});
}
export function getRange(location: SourceLocation, queryText: string) {
const parser = onlineParser();
const state = parser.startState();
const lines = queryText.split('\n');
invariant(
lines.length >= location.line,
'Query text must have more lines than where the error happened',
);
let stream = null;
for (let i = 0; i < location.line; i++) {
stream = new CharacterStream(lines[i]);
while (!stream.eol()) {
const style = parser.token(stream, state);
if (style === 'invalidchar') {
break;
}
}
}
invariant(stream, 'Expected Parser stream to be available.');
const line = location.line - 1;
const start = stream.getStartOfToken();
const end = stream.getCurrentPosition();
return new Range(new Position(line, start), new Position(line, end));
}
/**
* Get location info from a node in a type-safe way.
*
* The only way a node could not have a location is if we initialized the parser
* (and therefore the lexer) with the `noLocation` option, but we always
* call `parse` without options above.
*/
function getLocation(node: any): Location {
const typeCastedNode = (node: ASTNode);
const location = typeCastedNode.loc;
invariant(location, 'Expected ASTNode to have a location.');
return location;
}

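A minimal usage sketch for getDiagnostics, assuming a graphql-js schema. Without a schema only syntax errors are reported, since validateQuery returns an empty array when schema is null:

import {buildSchema} from 'graphql';
import {getDiagnostics} from './getDiagnostics';

const schema = buildSchema('type Query { hello: String }');
// A misspelled field should produce one 'GraphQL: Validation' diagnostic
// with severity SEVERITY.ERROR:
getDiagnostics('{ helo }', schema);
// A parse failure produces a 'GraphQL: Syntax' diagnostic, schema or not:
getDiagnostics('{ hello ');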

@@ -1,186 +0,0 @@
/**
* Copyright (c) Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*
* @flow
*/
/**
* Ported from codemirror-graphql
* https://github.com/graphql/codemirror-graphql/blob/master/src/info.js
*/
import type {GraphQLSchema} from 'graphql';
import type {ContextToken} from 'graphql-language-service-types';
import type {Hover} from 'vscode-languageserver-types';
import type {Position} from 'graphql-language-service-utils';
import {getTokenAtPosition, getTypeInfo} from './getAutocompleteSuggestions';
import {GraphQLNonNull, GraphQLList} from 'graphql';
export function getHoverInformation(
schema: GraphQLSchema,
queryText: string,
cursor: Position,
contextToken?: ContextToken,
): Hover.contents {
const token = contextToken || getTokenAtPosition(queryText, cursor);
if (!schema || !token || !token.state) {
return [];
}
const state = token.state;
const kind = state.kind;
const step = state.step;
const typeInfo = getTypeInfo(schema, token.state);
const options = {schema};
// Given a Schema and a Token, produce the contents of an info tooltip.
// To do this, build up an array of strings that the render functions below
// write "into", then join and return it.
if (
(kind === 'Field' && step === 0 && typeInfo.fieldDef) ||
(kind === 'AliasedField' && step === 2 && typeInfo.fieldDef)
) {
const into = [];
renderField(into, typeInfo, options);
renderDescription(into, options, typeInfo.fieldDef);
return into.join('').trim();
} else if (kind === 'Directive' && step === 1 && typeInfo.directiveDef) {
const into = [];
renderDirective(into, typeInfo, options);
renderDescription(into, options, typeInfo.directiveDef);
return into.join('').trim();
} else if (kind === 'Argument' && step === 0 && typeInfo.argDef) {
const into = [];
renderArg(into, typeInfo, options);
renderDescription(into, options, typeInfo.argDef);
return into.join('').trim();
} else if (
kind === 'EnumValue' &&
typeInfo.enumValue &&
typeInfo.enumValue.description
) {
const into = [];
renderEnumValue(into, typeInfo, options);
renderDescription(into, options, typeInfo.enumValue);
return into.join('').trim();
} else if (
kind === 'NamedType' &&
typeInfo.type &&
typeInfo.type.description
) {
const into = [];
renderType(into, typeInfo, options, typeInfo.type);
renderDescription(into, options, typeInfo.type);
return into.join('').trim();
}
}
function renderField(into, typeInfo, options) {
renderQualifiedField(into, typeInfo, options);
renderTypeAnnotation(into, typeInfo, options, typeInfo.type);
}
function renderQualifiedField(into, typeInfo, options) {
if (!typeInfo.fieldDef) {
return;
}
const fieldName = (typeInfo.fieldDef.name: string);
if (fieldName.slice(0, 2) !== '__') {
renderType(into, typeInfo, options, typeInfo.parentType);
text(into, '.');
}
text(into, fieldName);
}
function renderDirective(into, typeInfo, options) {
if (!typeInfo.directiveDef) {
return;
}
const name = '@' + typeInfo.directiveDef.name;
text(into, name);
}
function renderArg(into, typeInfo, options) {
if (typeInfo.directiveDef) {
renderDirective(into, typeInfo, options);
} else if (typeInfo.fieldDef) {
renderQualifiedField(into, typeInfo, options);
}
if (!typeInfo.argDef) {
return;
}
const name = typeInfo.argDef.name;
text(into, '(');
text(into, name);
renderTypeAnnotation(into, typeInfo, options, typeInfo.inputType);
text(into, ')');
}
function renderTypeAnnotation(into, typeInfo, options, t) {
text(into, ': ');
renderType(into, typeInfo, options, t);
}
function renderEnumValue(into, typeInfo, options) {
if (!typeInfo.enumValue) {
return;
}
const name = typeInfo.enumValue.name;
renderType(into, typeInfo, options, typeInfo.inputType);
text(into, '.');
text(into, name);
}
function renderType(into, typeInfo, options, t) {
if (!t) {
return;
}
if (t instanceof GraphQLNonNull) {
renderType(into, typeInfo, options, t.ofType);
text(into, '!');
} else if (t instanceof GraphQLList) {
text(into, '[');
renderType(into, typeInfo, options, t.ofType);
text(into, ']');
} else {
text(into, t.name);
}
}
function renderDescription(into, options, def) {
if (!def) {
return;
}
const description =
typeof def.description === 'string' ? def.description : null;
if (description) {
text(into, '\n\n');
text(into, description);
}
renderDeprecation(into, options, def);
}
function renderDeprecation(into, options, def) {
if (!def) {
return;
}
const reason =
typeof def.deprecationReason === 'string' ? def.deprecationReason : null;
if (!reason) {
return;
}
text(into, '\n\n');
text(into, 'Deprecated: ');
text(into, reason);
}
function text(into: string[], content: string) {
into.push(content);
}

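A minimal usage sketch for getHoverInformation, under the same assumptions as the earlier sketches (graphql-js schema, plain {line, character} cursor object):

import {buildSchema} from 'graphql';
import {getHoverInformation} from './getHoverInformation';

const schema = buildSchema('type Query { hello: String }');
// Hovering over the field name should render roughly 'Query.hello: String'.
const hover = getHoverInformation(schema, '{ hello }', {line: 0, character: 3});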

@@ -1,121 +0,0 @@
/**
* Copyright (c) Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*
* @flow
*/
import type {
Outline,
TextToken,
TokenKind,
} from 'graphql-language-service-types';
import {Kind, parse, visit} from 'graphql';
import {offsetToPosition} from 'graphql-language-service-utils';
const {INLINE_FRAGMENT} = Kind;
const OUTLINEABLE_KINDS = {
Field: true,
OperationDefinition: true,
Document: true,
SelectionSet: true,
Name: true,
FragmentDefinition: true,
FragmentSpread: true,
InlineFragment: true,
};
type OutlineTreeConverterType = {[name: string]: Function};
export function getOutline(queryText: string): ?Outline {
let ast;
try {
ast = parse(queryText);
} catch (error) {
return null;
}
const visitorFns = outlineTreeConverter(queryText);
const outlineTrees = visit(ast, {
leave(node) {
if (
OUTLINEABLE_KINDS.hasOwnProperty(node.kind) &&
visitorFns[node.kind]
) {
return visitorFns[node.kind](node);
}
return null;
},
});
return {outlineTrees};
}
function outlineTreeConverter(docText: string): OutlineTreeConverterType {
const meta = node => ({
representativeName: node.name,
startPosition: offsetToPosition(docText, node.loc.start),
endPosition: offsetToPosition(docText, node.loc.end),
children: node.selectionSet || [],
});
return {
Field: node => {
const tokenizedText = node.alias
? [buildToken('plain', node.alias), buildToken('plain', ': ')]
: [];
tokenizedText.push(buildToken('plain', node.name));
return {tokenizedText, ...meta(node)};
},
OperationDefinition: node => ({
tokenizedText: [
buildToken('keyword', node.operation),
buildToken('whitespace', ' '),
buildToken('class-name', node.name),
],
...meta(node),
}),
Document: node => node.definitions,
SelectionSet: node =>
concatMap(node.selections, child => {
return child.kind === INLINE_FRAGMENT ? child.selectionSet : child;
}),
Name: node => node.value,
FragmentDefinition: node => ({
tokenizedText: [
buildToken('keyword', 'fragment'),
buildToken('whitespace', ' '),
buildToken('class-name', node.name),
],
...meta(node),
}),
FragmentSpread: node => ({
tokenizedText: [
buildToken('plain', '...'),
buildToken('class-name', node.name),
],
...meta(node),
}),
InlineFragment: node => node.selectionSet,
};
}
function buildToken(kind: TokenKind, value: string): TextToken {
return {kind, value};
}
function concatMap(arr: Array<any>, fn: Function): Array<any> {
const res = [];
for (let i = 0; i < arr.length; i++) {
const x = fn(arr[i], i);
if (Array.isArray(x)) {
res.push(...x);
} else {
res.push(x);
}
}
return res;
}

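A minimal usage sketch for getOutline; it needs no schema and returns null when the document does not parse:

import {getOutline} from './getOutline';

const outline = getOutline('query getProducts { products { id name } }');
// outline.outlineTrees[0].tokenizedText should be:
//   [{kind: 'keyword', value: 'query'},
//    {kind: 'whitespace', value: ' '},
//    {kind: 'class-name', value: 'getProducts'}]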

@@ -1,31 +0,0 @@
/**
* Copyright (c) Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the license found in the
* LICENSE file in the root directory of this source tree.
*
* @flow
*/
export {
getDefinitionState,
getFieldDef,
forEachState,
objectValues,
hintList,
} from './autocompleteUtils';
export {getAutocompleteSuggestions} from './getAutocompleteSuggestions';
export {
LANGUAGE,
getDefinitionQueryResultForFragmentSpread,
getDefinitionQueryResultForDefinitionNode,
} from './getDefinition';
export {getDiagnostics, validateQuery} from './getDiagnostics';
export {getOutline} from './getOutline';
export {getHoverInformation} from './getHoverInformation';
export {GraphQLLanguageService} from './GraphQLLanguageService';


@@ -1,7 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 841.9 595.3">
<g fill="#61DAFB">
<path d="M666.3 296.5c0-32.5-40.7-63.3-103.1-82.4 14.4-63.6 8-114.2-20.2-130.4-6.5-3.8-14.1-5.6-22.4-5.6v22.3c4.6 0 8.3.9 11.4 2.6 13.6 7.8 19.5 37.5 14.9 75.7-1.1 9.4-2.9 19.3-5.1 29.4-19.6-4.8-41-8.5-63.5-10.9-13.5-18.5-27.5-35.3-41.6-50 32.6-30.3 63.2-46.9 84-46.9V78c-27.5 0-63.5 19.6-99.9 53.6-36.4-33.8-72.4-53.2-99.9-53.2v22.3c20.7 0 51.4 16.5 84 46.6-14 14.7-28 31.4-41.3 49.9-22.6 2.4-44 6.1-63.6 11-2.3-10-4-19.7-5.2-29-4.7-38.2 1.1-67.9 14.6-75.8 3-1.8 6.9-2.6 11.5-2.6V78.5c-8.4 0-16 1.8-22.6 5.6-28.1 16.2-34.4 66.7-19.9 130.1-62.2 19.2-102.7 49.9-102.7 82.3 0 32.5 40.7 63.3 103.1 82.4-14.4 63.6-8 114.2 20.2 130.4 6.5 3.8 14.1 5.6 22.5 5.6 27.5 0 63.5-19.6 99.9-53.6 36.4 33.8 72.4 53.2 99.9 53.2 8.4 0 16-1.8 22.6-5.6 28.1-16.2 34.4-66.7 19.9-130.1 62-19.1 102.5-49.9 102.5-82.3zm-130.2-66.7c-3.7 12.9-8.3 26.2-13.5 39.5-4.1-8-8.4-16-13.1-24-4.6-8-9.5-15.8-14.4-23.4 14.2 2.1 27.9 4.7 41 7.9zm-45.8 106.5c-7.8 13.5-15.8 26.3-24.1 38.2-14.9 1.3-30 2-45.2 2-15.1 0-30.2-.7-45-1.9-8.3-11.9-16.4-24.6-24.2-38-7.6-13.1-14.5-26.4-20.8-39.8 6.2-13.4 13.2-26.8 20.7-39.9 7.8-13.5 15.8-26.3 24.1-38.2 14.9-1.3 30-2 45.2-2 15.1 0 30.2.7 45 1.9 8.3 11.9 16.4 24.6 24.2 38 7.6 13.1 14.5 26.4 20.8 39.8-6.3 13.4-13.2 26.8-20.7 39.9zm32.3-13c5.4 13.4 10 26.8 13.8 39.8-13.1 3.2-26.9 5.9-41.2 8 4.9-7.7 9.8-15.6 14.4-23.7 4.6-8 8.9-16.1 13-24.1zM421.2 430c-9.3-9.6-18.6-20.3-27.8-32 9 .4 18.2.7 27.5.7 9.4 0 18.7-.2 27.8-.7-9 11.7-18.3 22.4-27.5 32zm-74.4-58.9c-14.2-2.1-27.9-4.7-41-7.9 3.7-12.9 8.3-26.2 13.5-39.5 4.1 8 8.4 16 13.1 24 4.7 8 9.5 15.8 14.4 23.4zM420.7 163c9.3 9.6 18.6 20.3 27.8 32-9-.4-18.2-.7-27.5-.7-9.4 0-18.7.2-27.8.7 9-11.7 18.3-22.4 27.5-32zm-74 58.9c-4.9 7.7-9.8 15.6-14.4 23.7-4.6 8-8.9 16-13 24-5.4-13.4-10-26.8-13.8-39.8 13.1-3.1 26.9-5.8 41.2-7.9zm-90.5 125.2c-35.4-15.1-58.3-34.9-58.3-50.6 0-15.7 22.9-35.6 58.3-50.6 8.6-3.7 18-7 27.7-10.1 5.7 19.6 13.2 40 22.5 60.9-9.2 20.8-16.6 41.1-22.2 60.6-9.9-3.1-19.3-6.5-28-10.2zM310 490c-13.6-7.8-19.5-37.5-14.9-75.7 1.1-9.4 2.9-19.3 5.1-29.4 19.6 4.8 41 8.5 63.5 10.9 13.5 18.5 27.5 35.3 41.6 50-32.6 30.3-63.2 46.9-84 46.9-4.5-.1-8.3-1-11.3-2.7zm237.2-76.2c4.7 38.2-1.1 67.9-14.6 75.8-3 1.8-6.9 2.6-11.5 2.6-20.7 0-51.4-16.5-84-46.6 14-14.7 28-31.4 41.3-49.9 22.6-2.4 44-6.1 63.6-11 2.3 10.1 4.1 19.8 5.2 29.1zm38.5-66.7c-8.6 3.7-18 7-27.7 10.1-5.7-19.6-13.2-40-22.5-60.9 9.2-20.8 16.6-41.1 22.2-60.6 9.9 3.1 19.3 6.5 28.1 10.2 35.4 15.1 58.3 34.9 58.3 50.6-.1 15.7-23 35.6-58.4 50.6zM320.8 78.4z"/>
<circle cx="420.9" cy="296.5" r="45.7"/>
<path d="M520.5 78.1z"/>
</g>
</svg>


755
config/allow.list Normal file

@@ -0,0 +1,755 @@
# http://localhost:8080/
variables {
"data": [
{
"name": "Protect Ya Neck",
"created_at": "now",
"updated_at": "now"
},
{
"name": "Enter the Wu-Tang",
"created_at": "now",
"updated_at": "now"
}
]
}
mutation {
products(insert: $data) {
id
name
}
}
variables {
"update": {
"name": "Wu-Tang",
"description": "No description needed"
},
"product_id": 1
}
mutation {
products(id: $product_id, update: $update) {
id
name
description
}
}
query {
users {
id
email
picture: avatar
products(limit: 2, where: {price: {gt: 10}}) {
id
name
description
}
}
}
variables {
"data": [
{
"name": "Gumbo1",
"created_at": "now",
"updated_at": "now"
},
{
"name": "Gumbo2",
"created_at": "now",
"updated_at": "now"
}
]
}
mutation {
products(id: 199, delete: true) {
id
name
}
}
query {
products {
id
name
user {
email
}
}
}
variables {
"data": {
"product_id": 5
}
}
mutation {
products(id: $product_id, delete: true) {
id
name
}
}
query {
products {
id
name
price
users {
email
}
}
}
variables {
"data": {
"email": "gfk@myspace.com",
"full_name": "Ghostface Killah",
"created_at": "now",
"updated_at": "now"
}
}
mutation {
user(insert: $data) {
id
}
}
variables {
"update": {
"name": "Helloo",
"description": "World \u003c\u003e"
},
"user": 123
}
mutation {
products(id: 5, update: $update) {
id
name
description
}
}
variables {
"data": {
"name": "WOOO",
"price": 50.5
}
}
mutation {
products(insert: $data) {
id
name
}
}
query getProducts {
products {
id
name
price
description
}
}
query {
deals {
id
name
price
}
}
variables {
"beer": "smoke"
}
query beerSearch {
products(search: $beer) {
id
name
search_rank
search_headline_description
}
}
query {
user {
id
full_name
}
}
variables {
"data": {
"email": "goo1@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now"
}
}
}
mutation {
user(insert: $data) {
id
full_name
email
product {
id
name
price
}
}
}
variables {
"data": {
"email": "goo12@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": [
{
"name": "Banana 1",
"price": 1.1,
"created_at": "now",
"updated_at": "now"
},
{
"name": "Banana 2",
"price": 2.2,
"created_at": "now",
"updated_at": "now"
}
]
}
}
mutation {
user(insert: $data) {
id
full_name
email
products {
id
name
price
}
}
}
variables {
"data": {
"name": "Banana 3",
"price": 1.1,
"created_at": "now",
"updated_at": "now",
"user": {
"email": "a2@a.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now"
}
}
}
mutation {
products(insert: $data) {
id
name
price
user {
id
full_name
email
}
}
}
variables {
"update": {
"name": "my_name",
"description": "my_desc"
}
}
mutation {
product(id: 15, update: $update, where: {id: {eq: 1}}) {
id
name
}
}
variables {
"update": {
"name": "my_name",
"description": "my_desc"
}
}
mutation {
product(update: $update, where: {id: {eq: 1}}) {
id
name
}
}
variables {
"update": {
"name": "my_name 2",
"description": "my_desc 2"
}
}
mutation {
product(update: $update, where: {id: {eq: 1}}) {
id
name
description
}
}
variables {
"data": {
"sale_type": "tuutuu",
"quantity": 5,
"due_date": "now",
"customer": {
"email": "thedude1@rug.com",
"full_name": "The Dude"
},
"product": {
"name": "Apple",
"price": 1.25
}
}
}
mutation {
purchase(update: $data, id: 5) {
sale_type
quantity
due_date
customer {
id
full_name
email
}
product {
id
name
price
}
}
}
variables {
"data": {
"email": "thedude@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": {
"where": {
"id": 2
},
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now"
}
}
}
mutation {
user(update: $data, where: {id: {eq: 8}}) {
id
full_name
email
product {
id
name
price
}
}
}
variables {
"data": {
"email": "thedude@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": {
"where": {
"id": 2
},
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now"
}
}
}
query {
user(where: {id: {eq: 8}}) {
id
product {
id
name
price
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"email": "thedude@rug.com"
}
}
}
query {
user {
email
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"email": "booboo@demo.com"
}
}
}
mutation {
product(update: $data, id: 6) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"email": "booboo@demo.com"
}
}
}
query {
product(id: 6) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": {
"email": "thedude123@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": {
"connect": {
"id": 7
},
"disconnect": {
"id": 8
}
}
}
}
mutation {
user(update: $data, id: 6) {
id
full_name
email
product {
id
name
price
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"connect": {
"id": 5,
"email": "test@test.com"
}
}
}
}
mutation {
product(update: $data, id: 9) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": {
"email": "thed44ude@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": {
"connect": {
"id": 5
}
}
}
}
mutation {
user(insert: $data) {
id
full_name
email
product {
id
name
price
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"connect": {
"id": 5
}
}
}
}
mutation {
product(insert: $data) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": [
{
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"connect": {
"id": 6
}
}
},
{
"name": "Coconut",
"price": 2.25,
"created_at": "now",
"updated_at": "now",
"user": {
"connect": {
"id": 3
}
}
}
]
}
mutation {
products(insert: $data) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": [
{
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now"
},
{
"name": "Coconut",
"price": 2.25,
"created_at": "now",
"updated_at": "now"
}
]
}
mutation {
products(insert: $data) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"user": {
"connect": {
"id": 5,
"email": "test@test.com"
}
}
}
}
mutation {
product(update: $data, id: 9) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"user": {
"connect": {
"id": 5
}
}
}
}
mutation {
product(update: $data, id: 9) {
id
name
user {
id
full_name
email
}
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"user": {
"disconnect": {
"id": 5
}
}
}
}
mutation {
product(update: $data, id: 9) {
id
name
user_id
}
}
variables {
"data": {
"name": "Apple",
"price": 1.25,
"user": {
"disconnect": {
"id": 5
}
}
}
}
mutation {
product(update: $data, id: 2) {
id
name
user_id
}
}

243
config/dev.yml Normal file

@@ -0,0 +1,243 @@
app_name: "Super Graph Development"
host_port: 0.0.0.0:8080
web_ui: true
# debug, error, warn, info, none
log_level: "debug"
# enable or disable http compression (uses gzip)
http_compress: true
# When production mode is 'true' only queries
# from the allow list are permitted.
# When it's 'false' all queries are saved to
# the allow list in ./config/allow.list
production: false
# Throw a 401 on auth failure for queries that need auth
auth_fail_block: false
# Latency tracing for database queries and remote joins;
# the resulting latency information is returned with
# the response
enable_tracing: true
# Watch the config folder and reload Super Graph
# with the new configs when a change is detected
reload_on_config_change: true
# File that points to the database seeding script
# seed_file: seed.js
# Path pointing to where the migrations can be found
migrations_path: ./migrations
# Secret key for general encryption operations like
# encrypting the cursor data
secret_key: supercalifajalistics
# CORS: A list of origins a cross-domain request can be executed from.
# If the special * value is present in the list, all origins will be allowed.
# An origin may contain a wildcard (*) to replace 0 or more
# characters (i.e.: http://*.domain.com).
cors_allowed_origins: ["*"]
# Debug Cross Origin Resource Sharing requests
cors_debug: true
# Default API path prefix is /api you can change it if you like
# api_path: "/data"
# Cache-Control header can help cache queries if your CDN supports cache-control
# on POST requests (does not work with not mutations)
# cache_control: "public, max-age=300, s-maxage=600"
# Postgres related environment Variables
# SG_DATABASE_HOST
# SG_DATABASE_PORT
# SG_DATABASE_USER
# SG_DATABASE_PASSWORD
# Auth related environment Variables
# SG_AUTH_RAILS_COOKIE_SECRET_KEY_BASE
# SG_AUTH_RAILS_REDIS_URL
# SG_AUTH_RAILS_REDIS_PASSWORD
# SG_AUTH_JWT_PUBLIC_KEY_FILE
# inflections:
# person: people
# sheep: sheep
# open opencensus tracing and metrics
# telemetry:
# debug: true
# metrics:
# exporter: "prometheus"
# tracing:
# exporter: "zipkin"
# endpoint: "http://zipkin:9411/api/v2/spans"
# sample: 0.2
# include_query: false
# include_params: false
auth:
# Can be 'rails' or 'jwt'
type: rails
cookie: _app_session
# Comment this out if you want to disable setting
# the user_id via a header for testing.
# Disable in production
creds_in_header: true
rails:
# Rails version this is used for reading the
# various cookies formats.
version: 5.2
# Found in 'Rails.application.config.secret_key_base'
secret_key_base: 0a248500a64c01184edb4d7ad3a805488f8097ac761b76aaa6c17c01dcb7af03a2f18ba61b2868134b9c7b79a122bc0dadff4367414a2d173297bfea92be5566
# Remote cookie store. (memcache or redis)
# url: redis://redis:6379
# password: ""
# max_idle: 80
# max_active: 12000
# In most cases you don't need these
# salt: "encrypted cookie"
# sign_salt: "signed encrypted cookie"
# auth_salt: "authenticated encrypted cookie"
# jwt:
# provider: auth0
# secret: abc335bfcfdb04e50db5bb0a4d67ab9
# public_key_file: /secrets/public_key.pem
# public_key_type: ecdsa #rsa
database:
type: postgres
host: db
port: 5432
dbname: app_development
user: postgres
password: postgres
#schema: "public"
#pool_size: 10
#max_retries: 0
#log_level: "debug"
# Set session variable "user.id" to the user id
# Enable this if you need the user id in triggers, etc
set_user_id: false
# database ping timeout is used for db health checking
ping_timeout: 1m
# Define additional variables here to be used with filters
variables:
admin_account_id: "5"
# Field and table names that you wish to block
blocklist:
- ar_internal_metadata
- schema_migrations
- secret
- password
- encrypted
- token
tables:
- name: customers
remotes:
- name: payments
id: stripe_id
url: http://rails_app:3000/stripe/$id
path: data
# debug: true
pass_headers:
- cookie
set_headers:
- name: Host
value: 0.0.0.0
# - name: Authorization
# value: Bearer <stripe_api_key>
- # You can create new fields that have a
# real db table backing them
name: me
table: users
- name: deals
table: products
- name: users
columns:
- name: email
related_to: products.name
roles_query: "SELECT * FROM users WHERE id = $user_id"
roles:
- name: anon
tables:
- name: products
query:
limit: 10
columns: ["id", "name", "description"]
aggregation: false
insert:
block: false
update:
block: false
delete:
block: false
- name: deals
query:
limit: 3
aggregation: false
- name: purchases
query:
limit: 3
aggregation: false
- name: user
tables:
- name: users
query:
filters: ["{ id: { _eq: $user_id } }"]
- name: products
query:
limit: 50
filters: ["{ user_id: { eq: $user_id } }"]
disable_functions: false
insert:
filters: ["{ user_id: { eq: $user_id } }"]
presets:
- user_id: "$user_id"
- created_at: "now"
- updated_at: "now"
update:
filters: ["{ user_id: { eq: $user_id } }"]
columns:
- id
- name
presets:
- updated_at: "now"
delete:
block: true
- name: admin
match: id = 1000
tables:
- name: users
filters: []
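Two access-control knobs introduced elsewhere in this change set do not appear in the sample above: a core-level default_block flag and a per-role-table read_only flag (see the core/config.go diff further down this page). A hedged YAML sketch, assuming the mapstructure names from that diff; where exactly the keys nest in this file is an assumption:

# Sketch only: keys taken from the mapstructure tags in core/config.go
# (default_block, read_only); their placement here is an assumption.
default_block: true

roles:
  - name: anon
    tables:
      - name: products
        read_only: true
        query:
          limit: 10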

config/prod.yml (new file, 76 lines)

# Inherit config from this other config file
# so I only need to overwrite some values
inherits: dev

app_name: "Super Graph Production"
host_port: 0.0.0.0:8080
web_ui: false

# debug, error, warn, info, none
log_level: "info"

# enable or disable http compression (uses gzip)
http_compress: true

# When production mode is 'true' only queries
# from the allow list are permitted.
# When it's 'false' all queries are saved to the
# allow list in ./config/allow.list
production: true

# Throw a 401 on auth failure for queries that need auth
auth_fail_block: true

# Latency tracing for database queries and remote joins
# the resulting latency information is returned with the
# response
enable_tracing: true

# File that points to the database seeding script
# seed_file: seed.js

# Path pointing to where the migrations can be found
# migrations_path: ./migrations

# Secret key for general encryption operations like
# encrypting the cursor data
# secret_key: supercalifajalistics

# Postgres related environment Variables
# SG_DATABASE_HOST
# SG_DATABASE_PORT
# SG_DATABASE_USER
# SG_DATABASE_PASSWORD

# Auth related environment Variables
# SG_AUTH_RAILS_COOKIE_SECRET_KEY_BASE
# SG_AUTH_RAILS_REDIS_URL
# SG_AUTH_RAILS_REDIS_PASSWORD
# SG_AUTH_JWT_PUBLIC_KEY_FILE

database:
  type: postgres
  host: db
  port: 5432
  dbname: app_production
  user: postgres
  password: postgres
  #pool_size: 10
  #max_retries: 0
  #log_level: "debug"

  # Set session variable "user.id" to the user id
  # Enable this if you need the user id in triggers, etc
  set_user_id: false

  # database ping timeout is used for db health checking
  ping_timeout: 5m

# OpenCensus tracing and metrics
# telemetry:
#   debug: false
#   metrics:
#     exporter: "prometheus"
#   tracing:
#     exporter: "zipkin"
#     endpoint: "http://zipkin:9411/api/v2/spans"
#     sample: 0.6

config/seed.js (new file, 116 lines)

var user_count = 10
customer_count = 100
product_count = 50
purchase_count = 100

var users = []
customers = []
products = []

for (i = 0; i < user_count; i++) {
  var pwd = fake.password()
  var data = {
    full_name: fake.name(),
    avatar: fake.avatar_url(200),
    phone: fake.phone(),
    email: fake.email(),
    password: pwd,
    password_confirmation: pwd,
    created_at: "now",
    updated_at: "now"
  }

  var res = graphql(" \
  mutation { \
    user(insert: $data) { \
      id \
    } \
  }", { data: data })

  users.push(res.user)
}

for (i = 0; i < product_count; i++) {
  var n = Math.floor(Math.random() * users.length)
  var user = users[n]

  var desc = [
    fake.beer_style(),
    fake.beer_hop(),
    fake.beer_yeast(),
    fake.beer_ibu(),
    fake.beer_alcohol(),
    fake.beer_blg(),
  ].join(", ")

  var data = {
    name: fake.beer_name(),
    description: desc,
    price: fake.price()
    //user_id: user.id,
    //created_at: "now",
    //updated_at: "now"
  }

  var res = graphql(" \
  mutation { \
    product(insert: $data) { \
      id \
    } \
  }", { data: data }, {
    user_id: 5
  })

  products.push(res.product)
}

for (i = 0; i < customer_count; i++) {
  var pwd = fake.password()

  var data = {
    stripe_id: "CUS-" + fake.uuid(),
    full_name: fake.name(),
    phone: fake.phone(),
    email: fake.email(),
    password: pwd,
    password_confirmation: pwd,
    created_at: "now",
    updated_at: "now"
  }

  var res = graphql(" \
  mutation { \
    customer(insert: $data) { \
      id \
    } \
  }", { data: data })
  customers.push(res.customer)
}

for (i = 0; i < purchase_count; i++) {
  var sale_type = fake.rand_string(["rented", "bought"])

  if (sale_type === "rented") {
    var due_date = fake.date()
    var returned = fake.date()
  }

  var data = {
    customer_id: customers[Math.floor(Math.random() * customer_count)].id,
    product_id: products[Math.floor(Math.random() * product_count)].id,
    sale_type: sale_type,
    quantity: Math.floor(Math.random() * 10),
    due_date: due_date,
    returned: returned,
    created_at: "now",
    updated_at: "now"
  }

  var res = graphql(" \
  mutation { \
    purchase(insert: $data) { \
      id \
    } \
  }", { data: data })

  console.log(res)
}
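Note the third argument passed to graphql() in the product loop: alongside the query and its variables, the seed script appears to supply request-scoped session values (here user_id: 5), mirroring how core.UserIDKey is attached to the context in the Go API shown later on this page.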

core/api.go (modified)

@@ -16,17 +16,12 @@
 func main() {
 	db, err := sql.Open("pgx", "postgres://postgrs:@localhost:5432/example_db")
 	if err != nil {
-		log.Fatalf(err)
+		log.Fatal(err)
 	}
 
-	conf, err := core.ReadInConfig("./config/dev.yml")
-	if err != nil {
-		log.Fatalf(err)
-	}
-
-	sg, err = core.NewSuperGraph(conf, db)
+	sg, err := core.NewSuperGraph(nil, db)
 	if err != nil {
-		log.Fatalf(err)
+		log.Fatal(err)
 	}
 
 	query := `
@@ -37,9 +32,11 @@
 	}
 	}`
 
-	res, err := sg.GraphQL(context.Background(), query, nil)
+	ctx = context.WithValue(ctx, core.UserIDKey, 1)
+
+	res, err := sg.GraphQL(ctx, query, nil)
 	if err != nil {
-		log.Fatalf(err)
+		log.Fatal(err)
 	}
 
 	fmt.Println(string(res.Data))
@@ -52,9 +49,11 @@ import (
 	"crypto/sha256"
 	"database/sql"
 	"encoding/json"
+	"hash/maphash"
 	_log "log"
 	"os"
 
+	"github.com/chirino/graphql"
+
 	"github.com/dosco/super-graph/core/internal/allow"
 	"github.com/dosco/super-graph/core/internal/crypto"
 	"github.com/dosco/super-graph/core/internal/psql"
@@ -81,25 +80,39 @@ type SuperGraph struct {
 	conf        *Config
 	db          *sql.DB
 	log         *_log.Logger
+	dbinfo      *psql.DBInfo
 	schema      *psql.DBSchema
 	allowList   *allow.List
 	encKey      [32]byte
-	prepared    map[string]*preparedItem
+	hashSeed    maphash.Seed
+	queries     map[uint64]*query
 	roles       map[string]*Role
 	getRole     *sql.Stmt
+	rmap        map[uint64]resolvFn
 	abacEnabled bool
-	anonExists  bool
 	qc          *qcode.Compiler
 	pc          *psql.Compiler
+	ge          *graphql.Engine
 }
 
 // NewSuperGraph creates the SuperGraph struct, this involves querying the database to learn its
 // schemas and relationships
 func NewSuperGraph(conf *Config, db *sql.DB) (*SuperGraph, error) {
+	return newSuperGraph(conf, db, nil)
+}
+
+// newSuperGraph helps with writing tests and benchmarks
+func newSuperGraph(conf *Config, db *sql.DB, dbinfo *psql.DBInfo) (*SuperGraph, error) {
+	if conf == nil {
+		conf = &Config{}
+	}
+
 	sg := &SuperGraph{
 		conf: conf,
 		db:   db,
+		dbinfo: dbinfo,
 		log:  _log.New(os.Stdout, "", 0),
+		hashSeed: maphash.MakeSeed(),
 	}
 
 	if err := sg.initConfig(); err != nil {
@@ -118,7 +131,15 @@ func NewSuperGraph(conf *Config, db *sql.DB) (*SuperGraph, error) {
 		return nil, err
 	}
 
-	if len(conf.SecretKey) != 0 {
+	if err := sg.initResolvers(); err != nil {
+		return nil, err
+	}
+
+	if err := sg.initGraphQLEgine(); err != nil {
+		return nil, err
+	}
+
+	if conf.SecretKey != "" {
 		sk := sha256.Sum256([]byte(conf.SecretKey))
 		conf.SecretKey = ""
 		sg.encKey = sk
@@ -149,7 +170,24 @@ type Result struct {
 // In developer mode all names queries are saved into a file `allow.list` and in production mode only
 // queries from this file can be run.
 func (sg *SuperGraph) GraphQL(c context.Context, query string, vars json.RawMessage) (*Result, error) {
-	ct := scontext{Context: c, sg: sg, query: query, vars: vars}
+	var res Result
+
+	res.op = qcode.GetQType(query)
+	res.name = allow.QueryName(query)
+
+	// use the chirino/graphql library for introspection queries
+	// disabled when allow list is enforced
+	if !sg.conf.UseAllowList && res.name == "IntrospectionQuery" {
+		r := sg.ge.ServeGraphQL(&graphql.Request{Query: query})
+		res.Data = r.Data
+
+		if r.Error() != nil {
+			res.Error = r.Error().Error()
+		}
+		return &res, r.Error()
+	}
+
+	ct := scontext{Context: c, sg: sg, query: query, vars: vars, res: res}
 
 	if len(vars) <= 2 {
 		ct.vars = nil
@@ -161,9 +199,6 @@ func (sg *SuperGraph) GraphQL(c context.Context, query string, vars json.RawMess
 		ct.role = "anon"
 	}
 
-	ct.res.op = qcode.GetQType(query)
-	ct.res.name = allow.QueryName(query)
-
 	data, err := ct.execQuery()
 	if err != nil {
 		return &ct.res, err
@@ -173,3 +208,21 @@ func (sg *SuperGraph) GraphQL(c context.Context, query string, vars json.RawMess
 	return &ct.res, nil
 }
+
+// GraphQLSchema function return the GraphQL schema for the underlying database connected
+// to this instance of Super Graph
+func (sg *SuperGraph) GraphQLSchema() (string, error) {
+	return sg.ge.Schema.String(), nil
+}
+
+// Operation function return the operation type from the query. It uses a very fast algorithm to
+// extract the operation without having to parse the query.
+func Operation(query string) OpType {
+	return OpType(qcode.GetQType(query))
+}
+
+// Name function return the operation name from the query. It uses a very fast algorithm to
+// extract the operation name without having to parse the query.
+func Name(query string) string {
+	return allow.QueryName(query)
+}
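The new package-level helpers let a caller route a request without compiling or even fully parsing the query. A minimal sketch (the query string is illustrative):

query := `mutation { product(update: $data, id: 9) { id } }`

if core.Operation(query) == core.OpMutation {
	log.Printf("mutation %q requires an authenticated user", core.Name(query))
}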

core/api_test.go (new file, 62 lines)

package core

import (
	"context"
	"fmt"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/dosco/super-graph/core/internal/psql"
)

func BenchmarkGraphQL(b *testing.B) {
	ct := context.WithValue(context.Background(), UserIDKey, "1")

	db, _, err := sqlmock.New()
	if err != nil {
		b.Fatal(err)
	}
	defer db.Close()

	// mock.ExpectQuery(`^SELECT jsonb_build_object`).WithArgs()

	c := &Config{}
	sg, err := newSuperGraph(c, db, psql.GetTestDBInfo())
	if err != nil {
		b.Fatal(err)
	}

	query := `
	query {
		products {
			id
			name
			user {
				full_name
				phone
				email
			}
			customers {
				id
				email
			}
		}
		users {
			id
			name
		}
	}`

	b.ResetTimer()
	b.ReportAllocs()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			_, err = sg.GraphQL(ct, query, nil)
		}
	})

	fmt.Println(err)

	//fmt.Println(mock.ExpectationsWereMet())
}
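The benchmark drives GraphQL() in parallel against a sqlmock connection and the canned schema from psql.GetTestDBInfo(), so it measures query compilation and serialization rather than Postgres itself; an invocation along the lines of `go test -bench=GraphQL -benchmem ./core` should exercise it.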

core/args.go (modified)

@@ -1,67 +1,19 @@
 package core
 
 import (
-	"bytes"
 	"encoding/json"
 	"fmt"
-	"io"
 
+	"github.com/dosco/super-graph/core/internal/psql"
 	"github.com/dosco/super-graph/jsn"
 )
 
-func (c *scontext) argMap() func(w io.Writer, tag string) (int, error) {
-	return func(w io.Writer, tag string) (int, error) {
-		switch tag {
-		case "user_id_provider":
-			if v := c.Value(UserIDProviderKey); v != nil {
-				return io.WriteString(w, v.(string))
-			}
-			return 0, argErr("user_id_provider")
-
-		case "user_id":
-			if v := c.Value(UserIDKey); v != nil {
-				return io.WriteString(w, v.(string))
-			}
-			return 0, argErr("user_id")
-
-		case "user_role":
-			if v := c.Value(UserRoleKey); v != nil {
-				return io.WriteString(w, v.(string))
-			}
-			return 0, argErr("user_role")
-		}
-
-		fields := jsn.Get(c.vars, [][]byte{[]byte(tag)})
-
-		if len(fields) == 0 {
-			return 0, argErr(tag)
-		}
-		v := fields[0].Value
-
-		// Open and close quotes
-		if len(v) >= 2 && v[0] == '"' && v[len(v)-1] == '"' {
-			fields[0].Value = v[1 : len(v)-1]
-		}
-
-		if tag == "cursor" {
-			if bytes.EqualFold(v, []byte("null")) {
-				return io.WriteString(w, ``)
-			}
-			v1, err := c.sg.decrypt(string(fields[0].Value))
-			if err != nil {
-				return 0, err
-			}
-			return w.Write(v1)
-		}
-
-		return w.Write(escQuote(fields[0].Value))
-	}
-}
-
-func (c *scontext) argList(args [][]byte) ([]interface{}, error) {
-	vars := make([]interface{}, len(args))
+// argList function is used to create a list of arguments to pass
+// to a prepared statement.
+func (c *scontext) argList(md psql.Metadata) ([]interface{}, error) {
+	params := md.Params()
+	vars := make([]interface{}, len(params))
 
 	var fields map[string]json.RawMessage
 	var err error
@@ -74,31 +26,30 @@ func (c *scontext) argList(args [][]byte) ([]interface{}, error) {
 		}
 	}
 
-	for i := range args {
-		av := args[i]
-		switch {
-		case bytes.Equal(av, []byte("user_id")):
+	for i, p := range params {
+		switch p.Name {
+		case "user_id":
 			if v := c.Value(UserIDKey); v != nil {
 				vars[i] = v.(string)
 			} else {
-				return nil, argErr("user_id")
+				return nil, argErr(p)
 			}
 
-		case bytes.Equal(av, []byte("user_id_provider")):
+		case "user_id_provider":
 			if v := c.Value(UserIDProviderKey); v != nil {
 				vars[i] = v.(string)
 			} else {
-				return nil, argErr("user_id_provider")
+				return nil, argErr(p)
 			}
 
-		case bytes.Equal(av, []byte("user_role")):
+		case "user_role":
 			if v := c.Value(UserRoleKey); v != nil {
 				vars[i] = v.(string)
 			} else {
-				return nil, argErr("user_role")
+				return nil, argErr(p)
 			}
 
-		case bytes.Equal(av, []byte("cursor")):
+		case "cursor":
 			if v, ok := fields["cursor"]; ok && v[0] == '"' {
 				v1, err := c.sg.decrypt(string(v[1 : len(v)-1]))
 				if err != nil {
@@ -106,25 +57,33 @@ func (c *scontext) argList(args [][]byte) ([]interface{}, error) {
 				}
 				vars[i] = v1
 			} else {
-				return nil, argErr("cursor")
+				return nil, argErr(p)
 			}
 
 		default:
-			if v, ok := fields[string(av)]; ok {
+			if v, ok := fields[p.Name]; ok {
+				switch {
+				case p.IsArray && v[0] != '[':
+					return nil, fmt.Errorf("variable '%s' should be an array of type '%s'", p.Name, p.Type)
+
+				case p.Type == "json" && v[0] != '[' && v[0] != '{':
+					return nil, fmt.Errorf("variable '%s' should be an array or object", p.Name)
+				}
+
 				switch v[0] {
 				case '[', '{':
-					vars[i] = escQuote(v)
+					vars[i] = v
 				default:
 					var val interface{}
 					if err := json.Unmarshal(v, &val); err != nil {
 						return nil, err
 					}
 					vars[i] = val
 				}
 			} else {
-				return nil, argErr(string(av))
+				return nil, argErr(p)
 			}
 		}
 	}
@@ -132,34 +91,6 @@ func (c *scontext) argList(args [][]byte) ([]interface{}, error) {
 	return vars, nil
 }
 
-func escQuote(b []byte) []byte {
-	f := false
-	for i := range b {
-		if b[i] == '\'' {
-			f = true
-			break
-		}
-	}
-	if !f {
-		return b
-	}
-
-	buf := &bytes.Buffer{}
-	s := 0
-	for i := range b {
-		if b[i] == '\'' {
-			buf.Write(b[s:i])
-			buf.WriteString(`''`)
-			s = i + 1
-		}
-	}
-	l := len(b)
-	if s < (l - 1) {
-		buf.Write(b[s:l])
-	}
-	return buf.Bytes()
-}
-
-func argErr(name string) error {
-	return fmt.Errorf("query requires variable '%s' to be set", name)
+func argErr(p psql.Param) error {
+	return fmt.Errorf("required variable '%s' of type '%s' must be set", p.Name, p.Type)
 }
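Because each parameter now carries its name and type in the compiler metadata, a missing variable fails fast with a descriptive error instead of a template-expansion failure. A sketch of the caller-visible behavior (names assumed):

// If the compiled statement expects $data and the request omits it,
// GraphQL() now returns an error like:
//   required variable 'data' of type 'json' must be set
if _, err := sg.GraphQL(ctx, query, nil); err != nil {
	log.Println(err)
}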

core/build.go (modified)

@@ -14,7 +14,7 @@ import (
 type stmt struct {
 	role    *Role
 	qc      *qcode.QCode
-	skipped uint32
+	md      psql.Metadata
 	sql     string
 }
 
@@ -62,12 +62,11 @@ func (sg *SuperGraph) buildRoleStmt(query, vars []byte, role string) ([]stmt, er
 	stmts := []stmt{stmt{role: ro, qc: qc}}
 	w := &bytes.Buffer{}
 
-	skipped, err := sg.pc.Compile(qc, w, psql.Variables(vm))
+	stmts[0].md, err = sg.pc.Compile(w, qc, psql.Variables(vm))
 	if err != nil {
 		return nil, err
 	}
 
-	stmts[0].skipped = skipped
 	stmts[0].sql = w.String()
 
 	return stmts, nil
@@ -83,12 +82,13 @@ func (sg *SuperGraph) buildMultiStmt(query, vars []byte) ([]stmt, error) {
 		}
 	}
 
-	if len(sg.conf.RolesQuery) == 0 {
+	if sg.conf.RolesQuery == "" {
 		return nil, errors.New("roles_query not defined")
 	}
 
 	stmts := make([]stmt, 0, len(sg.conf.Roles))
 	w := &bytes.Buffer{}
+	md := psql.Metadata{}
 
 	for i := 0; i < len(sg.conf.Roles); i++ {
 		role := &sg.conf.Roles[i]
@@ -104,19 +104,20 @@ func (sg *SuperGraph) buildMultiStmt(query, vars []byte) ([]stmt, error) {
 		}
 
 		stmts = append(stmts, stmt{role: role, qc: qc})
+		s := &stmts[len(stmts)-1]
 
-		skipped, err := sg.pc.Compile(qc, w, psql.Variables(vm))
+		md, err = sg.pc.CompileWithMetadata(w, qc, psql.Variables(vm), md)
 		if err != nil {
 			return nil, err
 		}
 
-		s := &stmts[len(stmts)-1]
-		s.skipped = skipped
 		s.sql = w.String()
+		s.md = md
 
 		w.Reset()
 	}
 
-	sql, err := sg.renderUserQuery(stmts)
+	sql, err := sg.renderUserQuery(md, stmts)
 	if err != nil {
 		return nil, err
 	}
@@ -126,13 +127,13 @@ func (sg *SuperGraph) buildMultiStmt(query, vars []byte) ([]stmt, error) {
 }
 
 //nolint: errcheck
-func (sg *SuperGraph) renderUserQuery(stmts []stmt) (string, error) {
+func (sg *SuperGraph) renderUserQuery(md psql.Metadata, stmts []stmt) (string, error) {
 	w := &bytes.Buffer{}
 
 	io.WriteString(w, `SELECT "_sg_auth_info"."role", (CASE "_sg_auth_info"."role" `)
 
 	for _, s := range stmts {
-		if len(s.role.Match) == 0 &&
+		if s.role.Match == "" &&
 			s.role.Name != "user" && s.role.Name != "anon" {
 			continue
 		}
@@ -144,12 +145,12 @@ func (sg *SuperGraph) renderUserQuery(md psql.Metadata, stmts []stmt) (string, e
 	}
 
 	io.WriteString(w, `END) FROM (SELECT (CASE WHEN EXISTS (`)
-	io.WriteString(w, sg.conf.RolesQuery)
+	md.RenderVar(w, sg.conf.RolesQuery)
 	io.WriteString(w, `) THEN `)
 
 	io.WriteString(w, `(SELECT (CASE`)
 	for _, s := range stmts {
-		if len(s.role.Match) == 0 {
+		if s.role.Match == "" {
 			continue
 		}
 		io.WriteString(w, ` WHEN `)
@@ -160,23 +161,23 @@ func (sg *SuperGraph) renderUserQuery(md psql.Metadata, stmts []stmt) (string, e
 	}
 
 	io.WriteString(w, ` ELSE 'user' END) FROM (`)
-	io.WriteString(w, sg.conf.RolesQuery)
+	md.RenderVar(w, sg.conf.RolesQuery)
 	io.WriteString(w, `) AS "_sg_auth_roles_query" LIMIT 1) `)
 	io.WriteString(w, `ELSE 'anon' END) FROM (VALUES (1)) AS "_sg_auth_filler") AS "_sg_auth_info"(role) LIMIT 1; `)
 
 	return w.String(), nil
 }
 
-func (sg *SuperGraph) hasTablesWithConfig(qc *qcode.QCode, role *Role) bool {
-	for _, id := range qc.Roots {
-		t, err := sg.schema.GetTable(qc.Selects[id].Name)
-		if err != nil {
-			return false
-		}
-
-		if r := role.GetTable(t.Name); r == nil {
-			return false
-		}
-	}
-
-	return true
-}
+// func (sg *SuperGraph) hasTablesWithConfig(qc *qcode.QCode, role *Role) bool {
+// 	for _, id := range qc.Roots {
+// 		t, err := sg.schema.GetTable(qc.Selects[id].Name)
+// 		if err != nil {
+// 			return false
+// 		}
+
+// 		if r := role.GetTable(t.Name); r == nil {
+// 			return false
+// 		}
+// 	}

+// 	return true
+// }

core/config.go (modified)

@@ -3,6 +3,7 @@ package core
 import (
 	"fmt"
 	"path"
+	"path/filepath"
 	"strings"
 
 	"github.com/spf13/viper"
@@ -10,22 +11,68 @@ import (
 // Core struct contains core specific config value
 type Config struct {
+	// SecretKey is used to encrypt opaque values such as
+	// the cursor. Auto-generated if not set
 	SecretKey string `mapstructure:"secret_key"`
+
+	// UseAllowList (aka production mode) when set to true ensures
+	// only queries lists in the allow.list file can be used. All
+	// queries are pre-prepared so no compiling happens and things are
+	// very fast.
 	UseAllowList bool `mapstructure:"use_allow_list"`
+
+	// AllowListFile if the path to allow list file if not set the
+	// path is assumed to tbe the same as the config path (allow.list)
 	AllowListFile string `mapstructure:"allow_list_file"`
+
+	// SetUserID forces the database session variable `user.id` to
+	// be set to the user id. This variables can be used by triggers
+	// or other database functions
 	SetUserID bool `mapstructure:"set_user_id"`
+
+	// DefaultBlock ensures that in anonymous mode (role 'anon') all tables
+	// are blocked from queries and mutations. To open access to tables in
+	// anonymous mode they have to be added to the 'anon' role config.
+	DefaultBlock bool `mapstructure:"default_block"`
+
+	// Vars is a map of hardcoded variables that can be leveraged in your
+	// queries (eg variable admin_id will be $admin_id in the query)
 	Vars map[string]string `mapstructure:"variables"`
+
+	// Blocklist is a list of tables and columns that should be filtered
+	// out from any and all queries
 	Blocklist []string
+
+	// Tables contains all table specific configuration such as aliased tables
+	// creating relationships between tables, etc
 	Tables []Table
+
+	// RolesQuery if set enabled attributed based access control. This query
+	// is use to fetch the user attributes that then dynamically define the users
+	// role.
 	RolesQuery string `mapstructure:"roles_query"`
+
+	// Roles contains all the configuration for all the roles you want to support
+	// `user` and `anon` are two default roles. User role is for when a user ID is
+	// available and Anon when it's not.
+	//
+	// If you're using the RolesQuery config to enable atribute based acess control then
+	// you can add more custom roles.
 	Roles []Role
-	Inflections map[string]string
+
+	// Inflections is to add additionally singular to plural mappings
+	// to the engine (eg. sheep: sheep)
+	Inflections map[string]string `mapstructure:"inflections"`
+
+	// Database schema name. Defaults to 'public'
+	DBSchema string `mapstructure:"db_schema"`
 }
 
 // Table struct defines a database table
 type Table struct {
 	Name      string
 	Table     string
+	Type      string
 	Blocklist []string
 	Remotes   []Remote
 	Columns   []Column
@@ -63,11 +110,12 @@ type Role struct {
 // RoleTable struct contains role specific access control values for a database table
 type RoleTable struct {
 	Name     string
+	ReadOnly bool `mapstructure:"read_only"`
 
-	Query  Query
-	Insert Insert
-	Update Update
-	Delete Delete
+	Query  *Query
+	Insert *Insert
+	Update *Update
+	Delete *Delete
 }
 
 // Query struct contains access control values for query operations
@@ -102,33 +150,74 @@ type Delete struct {
 	Block bool
 }
 
+// AddRoleTable function is a helper function to make it easy to add per-table
+// row-level config
+func (c *Config) AddRoleTable(role, table string, conf interface{}) error {
+	var r *Role
+
+	for i := range c.Roles {
+		if strings.EqualFold(c.Roles[i].Name, role) {
+			r = &c.Roles[i]
+			break
+		}
+	}
+
+	if r == nil {
+		nr := Role{Name: role}
+		c.Roles = append(c.Roles, nr)
+		r = &nr
+	}
+
+	var t *RoleTable
+
+	for i := range r.Tables {
+		if strings.EqualFold(r.Tables[i].Name, table) {
+			t = &r.Tables[i]
+			break
+		}
+	}
+
+	if t == nil {
+		nt := RoleTable{Name: table}
+		r.Tables = append(r.Tables, nt)
+		t = &nt
+	}
+
+	switch v := conf.(type) {
+	case Query:
+		t.Query = &v
+	case Insert:
+		t.Insert = &v
+	case Update:
+		t.Update = &v
+	case Delete:
+		t.Delete = &v
+	default:
+		return fmt.Errorf("unsupported object type: %t", v)
+	}
+	return nil
+}
+
 // ReadInConfig function reads in the config file for the environment specified in the GO_ENV
 // environment variable. This is the best way to create a new Super Graph config.
 func ReadInConfig(configFile string) (*Config, error) {
-	cpath := path.Dir(configFile)
-	cfile := path.Base(configFile)
-	vi := newViper(cpath, cfile)
+	cp := path.Dir(configFile)
+	vi := newViper(cp, path.Base(configFile))
 
 	if err := vi.ReadInConfig(); err != nil {
 		return nil, err
 	}
 
-	inherits := vi.GetString("inherits")
+	if pcf := vi.GetString("inherits"); pcf != "" {
+		cf := vi.ConfigFileUsed()
+		vi = newViper(cp, pcf)
 
-	if len(inherits) != 0 {
-		vi = newViper(cpath, inherits)
 		if err := vi.ReadInConfig(); err != nil {
 			return nil, err
 		}
 
-		if vi.IsSet("inherits") {
-			return nil, fmt.Errorf("inherited config (%s) cannot itself inherit (%s)",
-				inherits,
-				vi.GetString("inherits"))
+		if v := vi.GetString("inherits"); v != "" {
+			return nil, fmt.Errorf("inherited config (%s) cannot itself inherit (%s)", pcf, v)
 		}
 
-		vi.SetConfigName(cfile)
+		vi.SetConfigFile(cf)
 
 		if err := vi.MergeInConfig(); err != nil {
 			return nil, err
@@ -141,8 +230,8 @@ func ReadInConfig(configFile string) (*Config, error) {
 		return nil, fmt.Errorf("failed to decode config, %v", err)
 	}
 
-	if len(c.AllowListFile) == 0 {
-		c.AllowListFile = path.Join(cpath, "allow.list")
+	if c.AllowListFile == "" {
+		c.AllowListFile = path.Join(cp, "allow.list")
 	}
 
 	return c, nil
@@ -155,9 +244,13 @@ func newViper(configPath, configFile string) *viper.Viper {
 	vi.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
 	vi.AutomaticEnv()
 
+	if filepath.Ext(configFile) != "" {
+		vi.SetConfigFile(path.Join(configPath, configFile))
+	} else {
 		vi.SetConfigName(configFile)
 		vi.AddConfigPath(configPath)
 		vi.AddConfigPath("./config")
+	}
 
 	return vi
 }
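The new AddRoleTable helper makes per-role table config buildable from code rather than YAML, which pairs with the pointer-typed Query/Insert/Update/Delete fields above (nil now means "not configured"). A hedged usage sketch (values illustrative):

conf := &core.Config{}

// Let the anon role read products, capped at 10 rows per query.
if err := conf.AddRoleTable("anon", "products", core.Query{Limit: 10}); err != nil {
	log.Fatal(err)
}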

core/consts.go (modified)

@@ -5,11 +5,6 @@ import (
 	"errors"
 )
 
-const (
-	openVar  = "{{"
-	closeVar = "}}"
-)
-
 var (
 	errNotFound = errors.New("not found in prepared statements")
 )

core/core.go (modified)

@@ -1,17 +1,22 @@
 package core
 
 import (
-	"bytes"
 	"context"
 	"database/sql"
 	"encoding/json"
 	"fmt"
+	"hash/maphash"
 	"time"
 
 	"github.com/dosco/super-graph/core/internal/psql"
 	"github.com/dosco/super-graph/core/internal/qcode"
-
-	"github.com/valyala/fasttemplate"
 )
 
+type OpType int
+
+const (
+	OpQuery OpType = iota
+	OpMutation
+)
+
 type extensions struct {
@@ -50,25 +55,43 @@ type scontext struct {
 }
 
 func (sg *SuperGraph) initCompilers() error {
-	di, err := psql.GetDBInfo(sg.db)
-	if err != nil {
-		return err
+	var err error
+	var schema string
+
+	if sg.conf.DBSchema == "" {
+		schema = "public"
+	} else {
+		schema = sg.conf.DBSchema
+	}
+
+	// If sg.di is not null then it's probably set
+	// for tests
+	if sg.dbinfo == nil {
+		sg.dbinfo, err = psql.GetDBInfo(sg.db, schema)
+		if err != nil {
+			return err
+		}
+	}
+
+	if len(sg.dbinfo.Tables) == 0 {
+		return fmt.Errorf("no tables found in database (schema: %s)", schema)
 	}
 
-	if err = addTables(sg.conf, di); err != nil {
+	if err = addTables(sg.conf, sg.dbinfo); err != nil {
 		return err
 	}
 
-	if err = addForeignKeys(sg.conf, di); err != nil {
+	if err = addForeignKeys(sg.conf, sg.dbinfo); err != nil {
 		return err
 	}
 
-	sg.schema, err = psql.NewDBSchema(di, getDBTableAliases(sg.conf))
+	sg.schema, err = psql.NewDBSchema(sg.dbinfo, getDBTableAliases(sg.conf))
 	if err != nil {
 		return err
 	}
 
 	sg.qc, err = qcode.NewCompiler(qcode.Config{
+		DefaultBlock: sg.conf.DefaultBlock,
 		Blocklist: sg.conf.Blocklist,
 	})
 	if err != nil {
@@ -89,25 +112,25 @@ func (sg *SuperGraph) initCompilers() error {
 func (c *scontext) execQuery() ([]byte, error) {
 	var data []byte
-	// var st *stmt
+	var st *stmt
 	var err error
 
 	if c.sg.conf.UseAllowList {
-		data, _, err = c.resolvePreparedSQL()
-		if err != nil {
-			return nil, err
-		}
+		data, st, err = c.resolvePreparedSQL()
 	} else {
-		data, _, err = c.resolveSQL()
-
-		if err != nil {
-			return nil, err
-		}
+		data, st, err = c.resolveSQL()
 	}
 
-	return data, nil
-	//return execRemoteJoin(st, data, c.req.hdr)
+	if err != nil {
+		return nil, err
+	}
+
+	if len(data) == 0 || st.md.Skipped() == 0 {
+		return data, nil
+	}
+
+	// return c.sg.execRemoteJoin(st, data, c.req.hdr)
+	return c.sg.execRemoteJoin(st, data, nil)
 }
 
@@ -143,32 +166,44 @@ func (c *scontext) resolvePreparedSQL() ([]byte, *stmt, error) {
 	} else {
 		role = c.role
 	}
 
 	c.res.role = role
 
-	ps, ok := c.sg.prepared[stmtHash(c.res.name, role)]
+	h := maphash.Hash{}
+	h.SetSeed(c.sg.hashSeed)
+	id := queryID(&h, c.res.name, role)
+
+	q, ok := c.sg.queries[id]
 	if !ok {
 		return nil, nil, errNotFound
 	}
-	c.res.sql = ps.st.sql
+
+	if q.sd == nil {
+		q.Do(func() { c.sg.prepare(q, role) })
+
+		if q.err != nil {
+			return nil, nil, err
+		}
+	}
+
+	c.res.sql = q.st.sql
 
 	var root []byte
 	var row *sql.Row
 
-	varsList, err := c.argList(ps.args)
+	varsList, err := c.argList(q.st.md)
 	if err != nil {
 		return nil, nil, err
 	}
 
 	if useTx {
-		row = tx.Stmt(ps.sd).QueryRow(varsList...)
+		row = tx.Stmt(q.sd).QueryRow(varsList...)
 	} else {
-		row = ps.sd.QueryRow(varsList...)
+		row = q.sd.QueryRow(varsList...)
 	}
 
-	if ps.roleArg {
+	if q.roleArg {
 		err = row.Scan(&role, &root)
 	} else {
 		err = row.Scan(&root)
@@ -182,15 +217,15 @@ func (c *scontext) resolvePreparedSQL() ([]byte, *stmt, error) {
 	if useTx {
 		if err := tx.Commit(); err != nil {
-			return nil, nil, err
+			return nil, nil, q.err
 		}
 	}
 
-	if root, err = c.sg.encryptCursor(ps.st.qc, root); err != nil {
+	if root, err = c.sg.encryptCursor(q.st.qc, root); err != nil {
 		return nil, nil, err
 	}
 
-	return root, &ps.st, nil
+	return root, &q.st, nil
 }
 
 func (c *scontext) resolveSQL() ([]byte, *stmt, error) {
@@ -228,15 +263,23 @@ func (c *scontext) resolveSQL() ([]byte, *stmt, error) {
 		return nil, nil, err
 	}
 	st := &stmts[0]
+	c.res.sql = st.sql
 
-	t := fasttemplate.New(st.sql, openVar, closeVar)
-	buf := &bytes.Buffer{}
-
-	_, err = t.ExecuteFunc(buf, c.argMap())
+	varList, err := c.argList(st.md)
 	if err != nil {
 		return nil, nil, err
 	}
-	finalSQL := buf.String()
+
+	// finalSQL := buf.String()
+	////
+	// _, err = t.ExecuteFunc(buf, c.argMap(st.md))
+	// if err != nil {
+	// 	return nil, nil, err
+	// }
+	// finalSQL := buf.String()
+	/////
 
 	// var stime time.Time
@@ -251,9 +294,9 @@ func (c *scontext) resolveSQL() ([]byte, *stmt, error) {
 	// defaultRole := c.role
 
 	if useTx {
-		row = tx.QueryRow(finalSQL)
+		row = tx.QueryRowContext(c, st.sql, varList...)
 	} else {
-		row = c.sg.db.QueryRow(finalSQL)
+		row = c.sg.db.QueryRowContext(c, st.sql, varList...)
 	}
 
 	if len(stmts) > 1 {
@@ -262,9 +305,7 @@ func (c *scontext) resolveSQL() ([]byte, *stmt, error) {
 		err = row.Scan(&root)
 	}
 
-	c.res.sql = finalSQL
-
-	if len(role) == 0 {
+	if role == "" {
 		c.res.role = c.role
 	} else {
 		c.res.role = role
@@ -322,7 +363,20 @@ func (c *scontext) executeRoleQuery(tx *sql.Tx) (string, error) {
 	return role, nil
 }
 
-func (r *Result) Operation() string {
+func (r *Result) Operation() OpType {
+	switch r.op {
+	case qcode.QTQuery:
+		return OpQuery
+
+	case qcode.QTMutation, qcode.QTInsert, qcode.QTUpdate, qcode.QTUpsert, qcode.QTDelete:
+		return OpMutation
+
+	default:
+		return -1
+	}
+}
+
+func (r *Result) OperationName() string {
 	return r.op.String()
 }
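Callers can also branch on the typed result after execution. A small sketch:

res, err := sg.GraphQL(ctx, query, vars)
if err == nil && res.Operation() == core.OpMutation {
	log.Println("executed mutation:", res.OperationName())
}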

core/cursor.go (modified)

@@ -5,8 +5,8 @@ import (
 	"encoding/base64"
 
 	"github.com/dosco/super-graph/core/internal/crypto"
-	"github.com/dosco/super-graph/jsn"
 	"github.com/dosco/super-graph/core/internal/qcode"
+	"github.com/dosco/super-graph/jsn"
 )
 
 func (sg *SuperGraph) encryptCursor(qc *qcode.QCode, data []byte) ([]byte, error) {

core/init.go (modified)

@@ -2,9 +2,7 @@ package core
 import (
 	"fmt"
-	"regexp"
 	"strings"
-	"unicode"
 
 	"github.com/dosco/super-graph/core/internal/psql"
 	"github.com/dosco/super-graph/core/internal/qcode"
@@ -18,17 +16,12 @@ func (sg *SuperGraph) initConfig() error {
 		flect.AddPlural(k, v)
 	}
 
-	// Variables: Validate and sanitize
-	for k, v := range c.Vars {
-		c.Vars[k] = sanitizeVars(v)
-	}
-
 	// Tables: Validate and sanitize
 	tm := make(map[string]struct{})
 
 	for i := 0; i < len(c.Tables); i++ {
 		t := &c.Tables[i]
-		t.Name = flect.Pluralize(strings.ToLower(t.Name))
+		// t.Name = flect.Pluralize(strings.ToLower(t.Name))
 
 		if _, ok := tm[t.Name]; ok {
 			sg.conf.Tables = append(c.Tables[:i], c.Tables[i+1:]...)
@@ -70,17 +63,33 @@ func (sg *SuperGraph) initConfig() error {
 		sg.roles["user"] = &ur
 	}
 
-	// Roles: validate and sanitize
-	c.RolesQuery = sanitize(c.RolesQuery)
+	// If anon role is not defined then create it
+	if _, ok := sg.roles["anon"]; !ok {
+		ur := Role{
+			Name: "anon",
+			tm:   make(map[string]*RoleTable),
+		}
+		c.Roles = append(c.Roles, ur)
+		sg.roles["anon"] = &ur
+	}
 
-	if len(c.RolesQuery) == 0 {
-		sg.log.Printf("WRN roles_query not defined: attribute based access control disabled")
+	if c.RolesQuery == "" {
+		sg.log.Printf("INF roles_query not defined: attribute based access control disabled")
+	} else {
+		n := 0
+		for k, v := range sg.roles {
+			if k == "user" || k == "anon" {
+				n++
+			} else if v.Match != "" {
+				n++
+			}
+		}
+		sg.abacEnabled = (n > 2)
+
+		if !sg.abacEnabled {
+			sg.log.Printf("WRN attribute based access control disabled: no custom roles found (with 'match' defined)")
+		}
 	}
 
-	_, userExists := sg.roles["user"]
-	_, sg.anonExists = sg.roles["anon"]
-
-	sg.abacEnabled = userExists && len(c.RolesQuery) != 0
-
 	return nil
 }
@@ -91,38 +100,45 @@ func getDBTableAliases(c *Config) map[string][]string {
 	for i := range c.Tables {
 		t := c.Tables[i]
 
-		if len(t.Table) == 0 || len(t.Columns) != 0 {
-			continue
+		if t.Table != "" && t.Type == "" {
+			m[t.Table] = append(m[t.Table], t.Name)
 		}
-
-		m[t.Table] = append(m[t.Table], t.Name)
 	}
 	return m
 }
 
 func addTables(c *Config, di *psql.DBInfo) error {
+	var err error
+
 	for _, t := range c.Tables {
-		if len(t.Table) == 0 || len(t.Columns) == 0 {
-			continue
+		switch t.Type {
+		case "json", "jsonb":
+			err = addJsonTable(di, t.Columns, t)
+
+		case "polymorphic":
+			err = addVirtualTable(di, t.Columns, t)
 		}
-		if err := addTable(di, t.Columns, t); err != nil {
+
+		if err != nil {
 			return err
 		}
 	}
+
 	return nil
 }
 
-func addTable(di *psql.DBInfo, cols []Column, t Table) error {
+func addJsonTable(di *psql.DBInfo, cols []Column, t Table) error {
+	// This is for jsonb columns that want to be tables.
 	bc, ok := di.GetColumn(t.Table, t.Name)
 	if !ok {
 		return fmt.Errorf(
-			"Column '%s' not found on table '%s'",
+			"json table: column '%s' not found on table '%s'",
 			t.Name, t.Table)
 	}
 
 	if bc.Type != "json" && bc.Type != "jsonb" {
 		return fmt.Errorf(
-			"Column '%s' in table '%s' is of type '%s'. Only JSON or JSONB is valid",
+			"json table: column '%s' in table '%s' is of type '%s'. Only JSON or JSONB is valid",
 			t.Name, t.Table, bc.Type)
 	}
@@ -149,10 +165,40 @@ func addJsonTable(di *psql.DBInfo, cols []Column, t Table) error {
 	return nil
 }
 
+func addVirtualTable(di *psql.DBInfo, cols []Column, t Table) error {
+	if len(cols) == 0 {
+		return fmt.Errorf("polymorphic table: no id column specified")
+	}
+
+	c := cols[0]
+
+	if c.ForeignKey == "" {
+		return fmt.Errorf("polymorphic table: no 'related_to' specified on id column")
+	}
+
+	s := strings.SplitN(c.ForeignKey, ".", 2)
+
+	if len(s) != 2 {
+		return fmt.Errorf("polymorphic table: foreign key must be <type column>.<foreign key column>")
+	}
+
+	di.VTables = append(di.VTables, psql.VirtualTable{
+		Name:       t.Name,
+		IDColumn:   c.Name,
+		TypeColumn: s[0],
+		FKeyColumn: s[1],
+	})
+
+	return nil
+}
+
 func addForeignKeys(c *Config, di *psql.DBInfo) error {
 	for _, t := range c.Tables {
+		if t.Type != "" {
+			continue
+		}
 		for _, c := range t.Columns {
-			if len(c.ForeignKey) == 0 {
+			if c.ForeignKey == "" {
 				continue
 			}
 			if err := addForeignKey(di, c, t); err != nil {
@@ -164,30 +210,52 @@ func addForeignKeys(c *Config, di *psql.DBInfo) error {
 }
 
 func addForeignKey(di *psql.DBInfo, c Column, t Table) error {
-	c1, ok := di.GetColumn(t.Name, c.Name)
+	var tn string
+
+	if t.Type == "polymorphic" {
+		tn = t.Table
+	} else {
+		tn = t.Name
+	}
+
+	c1, ok := di.GetColumn(tn, c.Name)
 	if !ok {
 		return fmt.Errorf(
-			"Invalid table '%s' or column '%s' in Config",
-			t.Name, c.Name)
+			"config: invalid table '%s' or column '%s' defined",
+			tn, c.Name)
 	}
 
 	v := strings.SplitN(c.ForeignKey, ".", 2)
 	if len(v) != 2 {
 		return fmt.Errorf(
-			"Invalid foreign_key in Config for table '%s' and column '%s",
-			t.Name, c.Name)
+			"config: invalid foreign_key defined for table '%s' and column '%s': %s",
+			tn, c.Name, c.ForeignKey)
+	}
+
+	// check if it's a polymorphic foreign key
+	if _, ok := di.GetColumn(tn, v[0]); ok {
+		c2, ok := di.GetColumn(tn, v[1])
+		if !ok {
+			return fmt.Errorf(
+				"config: invalid column '%s' for polymorphic relationship on table '%s' and column '%s'",
+				v[1], tn, c.Name)
+		}
+
+		c1.FKeyTable = v[0]
+		c1.FKeyColID = []int16{c2.ID}
+		return nil
 	}
 
 	fkt, fkc := v[0], v[1]
-	c2, ok := di.GetColumn(fkt, fkc)
+	c3, ok := di.GetColumn(fkt, fkc)
 	if !ok {
 		return fmt.Errorf(
-			"Invalid foreign_key in Config for table '%s' and column '%s",
-			t.Name, c.Name)
+			"config: foreign_key for table '%s' and column '%s' points to unknown table '%s' and column '%s'",
+			t.Name, c.Name, v[0], v[1])
 	}
 
 	c1.FKeyTable = fkt
-	c1.FKeyColID = []int16{c2.ID}
+	c1.FKeyColID = []int16{c3.ID}
 
 	return nil
 }
@@ -195,7 +263,7 @@ func addForeignKey(di *psql.DBInfo, c Column, t Table) error {
 func addRoles(c *Config, qc *qcode.Compiler) error {
 	for _, r := range c.Roles {
 		for _, t := range r.Tables {
-			if err := addRole(qc, r, t); err != nil {
+			if err := addRole(qc, r, t, c.DefaultBlock); err != nil {
 				return err
 			}
 		}
@@ -204,54 +272,63 @@ func addRoles(c *Config, qc *qcode.Compiler) error {
 	return nil
 }
 
-func addRole(qc *qcode.Compiler, r Role, t RoleTable) error {
-	blockFilter := []string{"false"}
+func addRole(qc *qcode.Compiler, r Role, t RoleTable, defaultBlock bool) error {
+	ro := false // read-only
 
-	query := qcode.QueryConfig{
-		Limit:            t.Query.Limit,
-		Filters:          t.Query.Filters,
-		Columns:          t.Query.Columns,
-		DisableFunctions: t.Query.DisableFunctions,
+	if defaultBlock && r.Name == "anon" {
+		ro = true
 	}
 
-	if t.Query.Block {
-		query.Filters = blockFilter
+	if t.ReadOnly {
+		ro = true
 	}
 
-	insert := qcode.InsertConfig{
-		Filters: t.Insert.Filters,
-		Columns: t.Insert.Columns,
-		Presets: t.Insert.Presets,
-	}
+	query := qcode.QueryConfig{Block: false}
+	insert := qcode.InsertConfig{Block: ro}
+	update := qcode.UpdateConfig{Block: ro}
+	del := qcode.DeleteConfig{Block: ro}
 
-	if t.Insert.Block {
-		insert.Filters = blockFilter
+	if t.Query != nil {
+		query = qcode.QueryConfig{
+			Limit:            t.Query.Limit,
+			Filters:          t.Query.Filters,
+			Columns:          t.Query.Columns,
+			DisableFunctions: t.Query.DisableFunctions,
+			Block:            t.Query.Block,
+		}
 	}
 
-	update := qcode.UpdateConfig{
-		Filters: t.Update.Filters,
-		Columns: t.Update.Columns,
-		Presets: t.Update.Presets,
+	if t.Insert != nil {
+		insert = qcode.InsertConfig{
+			Filters: t.Insert.Filters,
+			Columns: t.Insert.Columns,
+			Presets: t.Insert.Presets,
+			Block:   t.Insert.Block,
+		}
 	}
 
-	if t.Update.Block {
-		update.Filters = blockFilter
+	if t.Update != nil {
+		update = qcode.UpdateConfig{
+			Filters: t.Update.Filters,
+			Columns: t.Update.Columns,
+			Presets: t.Update.Presets,
+			Block:   t.Update.Block,
+		}
 	}
 
-	delete := qcode.DeleteConfig{
-		Filters: t.Delete.Filters,
-		Columns: t.Delete.Columns,
+	if t.Delete != nil {
+		del = qcode.DeleteConfig{
+			Filters: t.Delete.Filters,
+			Columns: t.Delete.Columns,
+			Block:   t.Delete.Block,
+		}
 	}
 
-	if t.Delete.Block {
-		delete.Filters = blockFilter
-	}
-
 	return qc.AddRole(r.Name, t.Name, qcode.TRConfig{
 		Query:  query,
 		Insert: insert,
 		Update: update,
-		Delete: delete,
+		Delete: del,
 	})
 }
@@ -262,23 +339,3 @@ func (r *Role) GetTable(name string) *RoleTable {
 func sanitize(value string) string {
 	return strings.ToLower(strings.TrimSpace(value))
 }
-
-var (
-	varRe1 = regexp.MustCompile(`(?mi)\$([a-zA-Z0-9_.]+)`)
-	varRe2 = regexp.MustCompile(`\{\{([a-zA-Z0-9_.]+)\}\}`)
-)
-
-func sanitizeVars(s string) string {
-	s0 := varRe1.ReplaceAllString(s, `{{$1}}`)
-
-	s1 := strings.Map(func(r rune) rune {
-		if unicode.IsSpace(r) {
-			return ' '
-		}
-		return r
-	}, s0)
-
-	return varRe2.ReplaceAllStringFunc(s1, func(m string) string {
-		return strings.ToLower(m)
-	})
-}
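addVirtualTable derives the polymorphic mapping from the table config: the first column names the id column, and its foreign key value is split into <type column>.<foreign key column>. A hedged YAML sketch of an entry that would satisfy it (table and column names are illustrative; the key spelling follows the related_to convention used in dev.yml above):

tables:
  - name: subject
    type: polymorphic
    table: subjects
    columns:
      - name: subject_id
        related_to: subject_type.id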

core/internal/allow/allow.go (modified)

@@ -6,21 +6,27 @@ import (
 	"errors"
 	"fmt"
 	"io/ioutil"
+	"log"
 	"os"
 	"sort"
 	"strings"
+	"text/scanner"
+
+	"github.com/chirino/graphql/schema"
+	"github.com/dosco/super-graph/jsn"
 )
 
 const (
-	AL_QUERY int = iota + 1
-	AL_VARS
+	expComment = iota + 1
+	expVar
+	expQuery
 )
 
 type Item struct {
 	Name    string
 	key     string
 	Query   string
-	Vars    json.RawMessage
+	Vars    string
 	Comment string
 }
 
@@ -32,12 +38,13 @@ type List struct {
 type Config struct {
 	CreateIfNotExists bool
 	Persist           bool
+	Log               *log.Logger
 }
 
 func New(filename string, conf Config) (*List, error) {
 	al := List{}
 
-	if len(filename) != 0 {
+	if filename != "" {
 		fp := filename
 
 		if _, err := os.Stat(fp); err == nil {
@@ -47,7 +54,7 @@ func New(filename string, conf Config) (*List, error) {
 		}
 	}
 
-	if len(al.filepath) == 0 {
+	if al.filepath == "" {
 		fp := "./allow.list"
 
 		if _, err := os.Stat(fp); err == nil {
@@ -57,7 +64,7 @@ func New(filename string, conf Config) (*List, error) {
 		}
 	}
 
-	if len(al.filepath) == 0 {
+	if al.filepath == "" {
 		fp := "./config/allow.list"
 
 		if _, err := os.Stat(fp); err == nil {
@@ -67,16 +74,22 @@ func New(filename string, conf Config) (*List, error) {
 		}
 	}
 
-	if len(al.filepath) == 0 {
+	if al.filepath == "" {
 		if !conf.CreateIfNotExists {
 			return nil, errors.New("allow.list not found")
 		}
 
-		if len(filename) == 0 {
+		if filename == "" {
 			al.filepath = "./config/allow.list"
 		} else {
 			al.filepath = filename
 		}
+
+		if file, err := os.OpenFile(al.filepath, os.O_RDONLY|os.O_CREATE, 0644); err != nil {
+			return nil, err
+		} else {
+			file.Close()
+		}
 	}
 
 	var err error
@@ -86,8 +99,10 @@ func New(filename string, conf Config) (*List, error) {
 	go func() {
 		for v := range al.saveChan {
-			if err = al.save(v); err != nil {
-				break
+			err := al.save(v)
+
+			if err != nil && conf.Log != nil {
+				conf.Log.Println("WRN allow list save:", err)
 			}
 		}
 	}()
@@ -109,131 +124,124 @@ func (al *List) Set(vars []byte, query, comment string) error {
 		return errors.New("allow.list is read-only")
 	}
 
-	if len(query) == 0 {
+	if query == "" {
 		return errors.New("empty query")
 	}
 
-	var q string
-
-	for i := 0; i < len(query); i++ {
-		c := query[i]
-		if c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' {
-			q = query
-			break
-		} else if c == '{' {
-			q = "query " + query
-			break
-		}
-	}
-
 	al.saveChan <- Item{
 		Comment: comment,
-		Query:   q,
-		Vars:    vars,
+		Query:   query,
+		Vars:    string(vars),
 	}
 
 	return nil
 }
 
 func (al *List) Load() ([]Item, error) {
-	var list []Item
-
 	b, err := ioutil.ReadFile(al.filepath)
 	if err != nil {
-		return list, err
+		return nil, err
 	}
 
-	if len(b) == 0 {
-		return list, nil
-	}
-
-	var comment bytes.Buffer
-	var varBytes []byte
-
-	itemMap := make(map[string]struct{})
-
-	s, e, c := 0, 0, 0
-	ty := 0
-
-	for {
-		fq := false
-
-		if c == 0 && b[e] == '#' {
-			s = e
-			for e < len(b) && b[e] != '\n' {
-				e++
-			}
-			if (e - s) > 2 {
-				comment.Write(b[(s + 1):(e + 1)])
-			}
-		}
-
-		if e >= len(b) {
-			break
-		}
-
-		if matchPrefix(b, e, "query") || matchPrefix(b, e, "mutation") {
-			if c == 0 {
-				s = e
-			}
-			ty = AL_QUERY
-		} else if matchPrefix(b, e, "variables") {
-			if c == 0 {
-				s = e + len("variables") + 1
-			}
-			ty = AL_VARS
-		} else if b[e] == '{' {
-			c++
-
-		} else if b[e] == '}' {
-			c--
-
-			if c == 0 {
-				if ty == AL_QUERY {
-					fq = true
-				} else if ty == AL_VARS {
-					varBytes = b[s:(e + 1)]
-				}
-				ty = 0
-			}
-		}
-
-		if fq {
-			query := string(b[s:(e + 1)])
-			name := QueryName(query)
-			key := strings.ToLower(name)
-
-			if _, ok := itemMap[key]; !ok {
-				v := Item{
-					Name:    name,
-					key:     key,
-					Query:   query,
-					Vars:    varBytes,
-					Comment: comment.String(),
-				}
-				list = append(list, v)
-				comment.Reset()
-			}
-
-			varBytes = nil
-		}
-
-		e++
-		if e >= len(b) {
-			break
-		}
-	}
-
-	return list, nil
+	return parse(string(b), al.filepath)
+}
+
+func parse(b, filename string) ([]Item, error) {
+	var items []Item
+
+	var s scanner.Scanner
+	s.Init(strings.NewReader(b))
+	s.Filename = filename
+	s.Mode ^= scanner.SkipComments
+
+	var op, sp scanner.Position
+	var item Item
+
+	newComment := false
+	st := expComment
+
+	for tok := s.Scan(); tok != scanner.EOF; tok = s.Scan() {
+		txt := s.TokenText()
+
+		switch {
+		case strings.HasPrefix(txt, "/*"):
+			if st == expQuery {
+				v := b[sp.Offset:s.Pos().Offset]
+				item.Query = strings.TrimSpace(v[:strings.LastIndexByte(v, '}')+1])
+				items = append(items, item)
+			}
+			item = Item{Comment: strings.TrimSpace(txt[2 : len(txt)-2])}
+			sp = s.Pos()
+			st = expComment
+			newComment = true
+
+		case !newComment && strings.HasPrefix(txt, "#"):
+			if st == expQuery {
+				v := b[sp.Offset:s.Pos().Offset]
+				item.Query = strings.TrimSpace(v[:strings.LastIndexByte(v, '}')+1])
+				items = append(items, item)
+			}
+			item = Item{}
+			sp = s.Pos()
+			st = expComment
+
+		case strings.HasPrefix(txt, "variables"):
+			if st == expComment {
+				v := b[sp.Offset:s.Pos().Offset]
+				item.Comment = strings.TrimSpace(v[:strings.IndexByte(v, '\n')])
+			}
+			sp = s.Pos()
+			st = expVar
+
+		case isGraphQL(txt):
+			if st == expVar {
+				v := b[sp.Offset:s.Pos().Offset]
+				item.Vars = strings.TrimSpace(v[:strings.LastIndexByte(v, '}')+1])
+			}
+			sp = op
+			st = expQuery
+		}
+
+		op = s.Pos()
+	}
+
+	if st == expQuery {
+		v := b[sp.Offset:s.Pos().Offset]
+		item.Query = strings.TrimSpace(v[:strings.LastIndexByte(v, '}')+1])
+		items = append(items, item)
+	}
+
+	for i := range items {
+		items[i].Name = QueryName(items[i].Query)
+		items[i].key = strings.ToLower(items[i].Name)
+	}
+
+	return items, nil
+}
+
+func isGraphQL(s string) bool {
+	return strings.HasPrefix(s, "query") ||
+		strings.HasPrefix(s, "mutation") ||
+		strings.HasPrefix(s, "subscription")
 }
 
 func (al *List) save(item Item) error {
-	item.Name = QueryName(item.Query)
+	var buf bytes.Buffer
+
+	qd := &schema.QueryDocument{}
+
+	if err := qd.Parse(item.Query); err != nil {
+		return err
+	}
+
+	qd.WriteTo(&buf)
+	query := buf.String()
+	buf.Reset()
+
+	item.Name = QueryName(query)
 	item.key = strings.ToLower(item.Name)
 
-	if len(item.Name) == 0 {
+	if item.Name == "" {
 		return nil
 	}
@@ -252,7 +260,7 @@ func (al *List) save(item Item) error {
 	}
 
 	if index != -1 {
-		if len(list[index].Comment) != 0 {
+		if list[index].Comment != "" {
 			item.Comment = list[index].Comment
 		}
 		list[index] = item
@@ -271,50 +279,43 @@ func (al *List) save(item Item) error {
 		return strings.Compare(list[i].key, list[j].key) == -1
 	})
 
-	for _, v := range list {
-		cmtLines := strings.Split(v.Comment, "\n")
-
-		i := 0
-		for _, c := range cmtLines {
-			if c = strings.TrimSpace(c); len(c) == 0 {
-				continue
-			}
-
-			_, err := f.WriteString(fmt.Sprintf("# %s\n", c))
-			if err != nil {
-				return err
-			}
-			i++
-		}
-
-		if i != 0 {
-			if _, err := f.WriteString("\n"); err != nil {
-				return err
-			}
-		} else {
-			if _, err := f.WriteString(fmt.Sprintf("# Query named %s\n\n", v.Name)); err != nil {
-				return err
-			}
-		}
-
-		if len(v.Vars) != 0 && !bytes.Equal(v.Vars, []byte("{}")) {
-			vj, err := json.MarshalIndent(v.Vars, "", " ")
-			if err != nil {
-				return fmt.Errorf("failed to marshal vars: %v", err)
-			}
-
-			_, err = f.WriteString(fmt.Sprintf("variables %s\n\n", vj))
-			if err != nil {
-				return err
-			}
-		}
-
-		_, err = f.WriteString(fmt.Sprintf("%s\n\n", v.Query))
+	for i, v := range list {
+		var vars string
+
+		if v.Vars != "" {
+			buf.Reset()
+
+			if err := jsn.Clear(&buf, []byte(v.Vars)); err != nil {
+				continue
+			}
+			vj := json.RawMessage(buf.Bytes())
+
+			if vj, err = json.MarshalIndent(vj, "", " "); err != nil {
+				continue
+			}
+			vars = string(vj)
+		}
+		list[i].Vars = vars
+		list[i].Comment = strings.TrimSpace(v.Comment)
+	}
+
+	for _, v := range list {
+		if v.Comment != "" {
+			_, err = f.WriteString(fmt.Sprintf("/* %s */\n\n", v.Comment))
+		} else {
+			_, err = f.WriteString(fmt.Sprintf("/* %s */\n\n", v.Name))
+		}
+
+		if err != nil {
+			return err
+		}
+
+		if v.Vars != "" {
+			_, err = f.WriteString(fmt.Sprintf("variables %s\n\n", v.Vars))
+			if err != nil {
+				return err
+			}
+		}
+
+		if v.Query[0] == '{' {
+			_, err = f.WriteString(fmt.Sprintf("query %s\n\n", v.Query))
+		} else {
+			_, err = f.WriteString(fmt.Sprintf("%s\n\n", v.Query))
+		}
+
 		if err != nil {
 			return err
 		}
@@ -323,18 +324,6 @@ func (al *List) save(item Item) error {
 	return nil
 }
 
-func matchPrefix(b []byte, i int, s string) bool {
-	if (len(b) - i) < len(s) {
-		return false
-	}
-	for n := 0; n < len(s); n++ {
-		if b[(i+n)] != s[n] {
-			return false
-		}
-	}
-	return true
-}
-
 func QueryName(b string) string {
 	state, s := 0, 0

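With the new parser and writer in place, entries persisted by save() take a normalized on-disk shape: a /* comment */ block, an optional variables block, then the pretty-printed query, which is exactly what TestParse2 below feeds back into parse(). A sketch with illustrative content:

/* createComment */

variables {
  "data": {
    "body": ""
  }
}

mutation createComment {
  comment(insert: $data) {
    slug
  }
}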
core/internal/allow/allow_test.go (modified)

@ -14,7 +14,7 @@ func TestGQLName1(t *testing.T) {
name := QueryName(q) name := QueryName(q)
if len(name) != 0 { if name != "" {
t.Fatal("Name should be empty, not ", name) t.Fatal("Name should be empty, not ", name)
} }
} }
@ -82,3 +82,160 @@ func TestGQLName5(t *testing.T) {
t.Fatal("Name should be empty, not ", name) t.Fatal("Name should be empty, not ", name)
} }
} }
func TestParse1(t *testing.T) {
var al = `
# Hello world
variables {
"data": {
"slug": "",
"body": "",
"post": {
"connect": {
"slug": ""
}
}
}
}
mutation createComment {
comment(insert: $data) {
slug
body
createdAt: created_at
totalVotes: cached_votes_total
totalReplies: cached_replies_total
vote: comment_vote(where: {user_id: {eq: $user_id}}) {
created_at
__typename
}
author: user {
slug
firstName: first_name
lastName: last_name
pictureURL: picture_url
bio
__typename
}
__typename
}
}
# Query named createPost
query createPost {
post(insert: $data) {
slug
body
published
createdAt: created_at
totalVotes: cached_votes_total
totalComments: cached_comments_total
vote: post_vote(where: {user_id: {eq: $user_id}}) {
created_at
__typename
}
author: user {
slug
firstName: first_name
lastName: last_name
pictureURL: picture_url
bio
__typename
}
__typename
}
}`
_, err := parse(al, "allow.list")
if err != nil {
t.Fatal(err)
}
}
func TestParse2(t *testing.T) {
var al = `
/* Hello world */
variables {
"data": {
"slug": "",
"body": "",
"post": {
"connect": {
"slug": ""
}
}
}
}
mutation createComment {
comment(insert: $data) {
slug
body
createdAt: created_at
totalVotes: cached_votes_total
totalReplies: cached_replies_total
vote: comment_vote(where: {user_id: {eq: $user_id}}) {
created_at
__typename
}
author: user {
slug
firstName: first_name
lastName: last_name
pictureURL: picture_url
bio
__typename
}
__typename
}
}
/*
Query named createPost
*/
variables {
"data": {
"thread": {
"connect": {
"slug": ""
}
},
"slug": "",
"published": false,
"body": ""
}
}
query createPost {
post(insert: $data) {
slug
body
published
createdAt: created_at
totalVotes: cached_votes_total
totalComments: cached_comments_total
vote: post_vote(where: {user_id: {eq: $user_id}}) {
created_at
__typename
}
author: user {
slug
firstName: first_name
lastName: last_name
pictureURL: picture_url
bio
__typename
}
__typename
}
}`
_, err := parse(al, "allow.list")
if err != nil {
t.Fatal(err)
}
}
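
For orientation, the entry format these two tests exercise is: a comment header (# lines in the old format, /* */ blocks in the new one — the parser accepts both), an optional "variables {...}" JSON block, then the query body. A minimal sketch of assembling such an entry; the writeEntry helper is hypothetical, not part of the package:

package main

import (
	"fmt"
	"strings"
)

// writeEntry assembles one allow.list entry in the saved format:
// a /* comment */ header, an optional variables block, and the query.
func writeEntry(sb *strings.Builder, name, comment, vars, query string) {
	if comment == "" {
		comment = name // the saver above falls back to the query name
	}
	fmt.Fprintf(sb, "/* %s */\n\n", strings.TrimSpace(comment))
	if vars != "" {
		fmt.Fprintf(sb, "variables %s\n\n", vars)
	}
	fmt.Fprintf(sb, "%s\n\n", query)
}

func main() {
	var sb strings.Builder
	writeEntry(&sb, "createComment", "Hello world",
		`{"data": {"slug": ""}}`,
		`mutation createComment { comment(insert: $data) { slug } }`)
	fmt.Print(sb.String())
}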

View File

@ -0,0 +1,88 @@
package cockroachdb_test
import (
"database/sql"
"fmt"
"io/ioutil"
"log"
"os"
"os/exec"
"regexp"
"sync/atomic"
"testing"
integration_tests "github.com/dosco/super-graph/core/internal/integration_tests"
_ "github.com/jackc/pgx/v4/stdlib"
"github.com/stretchr/testify/require"
)
func TestCockroachDB(t *testing.T) {
dir, err := ioutil.TempDir("", "temp-cockroachdb-")
if err != nil {
log.Fatal(err)
}
cmd := exec.Command("cockroach", "start", "--insecure", "--listen-addr", ":0", "--http-addr", ":0", "--store=path="+dir)
finder := &urlFinder{
c: make(chan bool),
}
cmd.Stdout = finder
cmd.Stderr = ioutil.Discard
err = cmd.Start()
if err != nil {
t.Skip("is CockroachDB installed?: " + err.Error())
return
}
fmt.Println("started temporary cockroach db")
stopped := int32(0)
stopDatabase := func() {
fmt.Println("stopping temporary cockroach db")
if atomic.CompareAndSwapInt32(&stopped, 0, 1) {
if err := cmd.Process.Kill(); err != nil {
log.Fatal(err)
}
if _, err := cmd.Process.Wait(); err != nil {
log.Fatal(err)
}
os.RemoveAll(dir)
}
}
defer stopDatabase()
// Wait until the finder has extracted the connection URL from the startup output.
<-finder.c
db, err := sql.Open("pgx", finder.URL)
if err != nil {
stopDatabase()
require.NoError(t, err)
}
integration_tests.SetupSchema(t, db)
integration_tests.TestSuperGraph(t, db, func(t *testing.T) {
if t.Name() == "TestCockroachDB/nested_insert" {
t.Skip("nested inserts currently not working yet on cockroach db")
}
})
}
type urlFinder struct {
c chan bool
done bool
URL string
}
func (finder *urlFinder) Write(p []byte) (n int, err error) {
s := string(p)
urlRegex := regexp.MustCompile(`\nsql:\s+(postgresql:[^\s]+)\n`)
if !finder.done {
submatch := urlRegex.FindAllStringSubmatch(s, -1)
if submatch != nil {
finder.URL = submatch[0][1]
finder.done = true
close(finder.c)
}
}
return len(p), nil
}
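
The urlFinder pattern above — a custom io.Writer attached to the child's stdout that signals on a channel once a regex matches — generalizes to any tool that prints its address at startup. A stripped-down sketch (names hypothetical; like urlFinder, it can miss a match split across Write calls):

package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

// lineMatcher watches streamed output for the first regex match and
// delivers the capture group on found.
type lineMatcher struct {
	re    *regexp.Regexp
	found chan string
	done  bool
}

func (m *lineMatcher) Write(p []byte) (int, error) {
	if !m.done {
		if sub := m.re.FindStringSubmatch(string(p)); sub != nil {
			m.done = true
			m.found <- sub[1]
		}
	}
	return len(p), nil // always report success so the child never blocks
}

func main() {
	m := &lineMatcher{
		re:    regexp.MustCompile(`listening on (\S+)`),
		found: make(chan string, 1), // buffered so Write never blocks
	}
	cmd := exec.Command("echo", "listening on 127.0.0.1:5432")
	cmd.Stdout = m
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	fmt.Println("captured:", <-m.found)
}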

View File

@ -0,0 +1,249 @@
package integration_tests
import (
"context"
"database/sql"
"encoding/json"
"io/ioutil"
"testing"
"github.com/dosco/super-graph/core"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func SetupSchema(t *testing.T, db *sql.DB) {
_, err := db.Exec(`
CREATE TABLE users (
id integer PRIMARY KEY,
full_name text
)`)
require.NoError(t, err)
_, err = db.Exec(`CREATE TABLE product (
id integer PRIMARY KEY,
name text,
weight float
)`)
require.NoError(t, err)
_, err = db.Exec(`CREATE TABLE line_item (
id integer PRIMARY KEY,
product integer REFERENCES product(id),
quantity integer,
price float
)`)
require.NoError(t, err)
}
func DropSchema(t *testing.T, db *sql.DB) {
_, err := db.Exec(`DROP TABLE IF EXISTS line_item`)
require.NoError(t, err)
_, err = db.Exec(`DROP TABLE IF EXISTS product`)
require.NoError(t, err)
_, err = db.Exec(`DROP TABLE IF EXISTS users`)
require.NoError(t, err)
}
func TestSuperGraph(t *testing.T, db *sql.DB, before func(t *testing.T)) {
config := core.Config{}
config.UseAllowList = false
config.AllowListFile = "./allow.list"
config.RolesQuery = `SELECT * FROM users WHERE id = $user_id`
sg, err := core.NewSuperGraph(&config, db)
require.NoError(t, err)
ctx := context.Background()
t.Run("seed fixtures", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`mutation { products (insert: $products) { id } }`,
json.RawMessage(`{"products":[
{"id":1, "name":"Charmin Ultra Soft", "weight": 0.5},
{"id":2, "name":"Hand Sanitizer", "weight": 0.2},
{"id":3, "name":"Case of Corona", "weight": 1.2}
]}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"products": [{"id": 1}, {"id": 2}, {"id": 3}]}`, string(res.Data))
res, err = sg.GraphQL(ctx,
`mutation { line_items (insert: $line_items) { id } }`,
json.RawMessage(`{"line_items":[
{"id":5001, "product":1, "price":6.95, "quantity":10},
{"id":5002, "product":2, "price":10.99, "quantity":2}
]}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_items": [{"id": 5001}, {"id": 5002}]}`, string(res.Data))
})
t.Run("get line item", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`query { line_item(id:$id) { id, price, quantity } }`,
json.RawMessage(`{"id":5001}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_item": {"id": 5001, "price": 6.95, "quantity": 10}}`, string(res.Data))
})
t.Run("get line items", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`query { line_items { id, price, quantity } }`,
json.RawMessage(`{}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_items": [{"id": 5001, "price": 6.95, "quantity": 10}, {"id": 5002, "price": 10.99, "quantity": 2}]}`, string(res.Data))
})
t.Run("update line item", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`mutation { line_item(update:$update, id:$id) { id } }`,
json.RawMessage(`{"id":5001, "update":{"quantity":20}}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_item": {"id": 5001}}`, string(res.Data))
res, err = sg.GraphQL(ctx,
`query { line_item(id:$id) { id, price, quantity } }`,
json.RawMessage(`{"id":5001}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_item": {"id": 5001, "price": 6.95, "quantity": 20}}`, string(res.Data))
})
t.Run("delete line item", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`mutation { line_item(delete:true, id:$id) { id } }`,
json.RawMessage(`{"id":5002}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_item": {"id": 5002}}`, string(res.Data))
res, err = sg.GraphQL(ctx,
`query { line_items { id, price, quantity } }`,
json.RawMessage(`{}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_items": [{"id": 5001, "price": 6.95, "quantity": 20}]}`, string(res.Data))
})
t.Run("nested insert", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`mutation { line_items (insert: $line_item) { id, product { name } } }`,
json.RawMessage(`{"line_item":
{"id":5003, "product": { "connect": { "id": 1} }, "price":10.95, "quantity":15}
}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_items": [{"id": 5003, "product": {"name": "Charmin Ultra Soft"}}]}`, string(res.Data))
})
t.Run("schema introspection", func(t *testing.T) {
before(t)
schema, err := sg.GraphQLSchema()
require.NoError(t, err)
// Uncomment the following line if you need to regenerate the expected schema.
//ioutil.WriteFile("../introspection.graphql", []byte(schema), 0644)
expected, err := ioutil.ReadFile("../introspection.graphql")
require.NoError(t, err)
assert.Equal(t, string(expected), schema)
})
res, err := sg.GraphQL(ctx, introspectionQuery, json.RawMessage(``))
assert.NoError(t, err)
assert.Contains(t, string(res.Data),
`{"queryType":{"name":"Query"},"mutationType":{"name":"Mutation"},"subscriptionType":null,"types":`)
}
const introspectionQuery = `
query IntrospectionQuery {
__schema {
queryType { name }
mutationType { name }
subscriptionType { name }
types {
...FullType
}
directives {
name
description
locations
args {
...InputValue
}
}
}
}
fragment FullType on __Type {
kind
name
description
fields(includeDeprecated: true) {
name
description
args {
...InputValue
}
type {
...TypeRef
}
isDeprecated
deprecationReason
}
inputFields {
...InputValue
}
interfaces {
...TypeRef
}
enumValues(includeDeprecated: true) {
name
description
isDeprecated
deprecationReason
}
possibleTypes {
...TypeRef
}
}
fragment InputValue on __InputValue {
name
description
type { ...TypeRef }
defaultValue
}
fragment TypeRef on __Type {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
}
}
}
}
}
}
}
}
`

View File

@ -0,0 +1,319 @@
input FloatExpression {
contained_in:String!
contains:[Float!]!
eq:Float!
equals:Float!
greater_or_equals:Float!
greater_than:Float!
gt:Float!
gte:Float!
has_key:Float!
has_key_all:[Float!]!
has_key_any:[Float!]!
ilike:String!
in:[Float!]!
is_null:Boolean!
lesser_or_equals:Float!
lesser_than:Float!
like:String!
lt:Float!
lte:Float!
neq:Float!
nilike:String!
nin:[Float!]!
nlike:String!
not_equals:Float!
not_ilike:String!
not_in:[Float!]!
not_like:String!
not_similar:String!
nsimilar:String!
similar:String!
}
input IntExpression {
contained_in:String!
contains:[Int!]!
eq:Int!
equals:Int!
greater_or_equals:Int!
greater_than:Int!
gt:Int!
gte:Int!
has_key:Int!
has_key_all:[Int!]!
has_key_any:[Int!]!
ilike:String!
in:[Int!]!
is_null:Boolean!
lesser_or_equals:Int!
lesser_than:Int!
like:String!
lt:Int!
lte:Int!
neq:Int!
nilike:String!
nin:[Int!]!
nlike:String!
not_equals:Int!
not_ilike:String!
not_in:[Int!]!
not_like:String!
not_similar:String!
nsimilar:String!
similar:String!
}
type Mutation {
line_item(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:line_itemOrderBy!, where:line_itemExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:line_itemInput, update:line_itemInput, upsert:line_itemInput
):line_itemOutput
line_items(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:line_itemOrderBy!, where:line_itemExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:line_itemInput, update:line_itemInput, upsert:line_itemInput, inserts:[line_itemInput!]!, updates:[line_itemInput!]!, upserts:[line_itemInput!]!
):line_itemOutput
product(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:productOrderBy!, where:productExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:productInput, update:productInput, upsert:productInput
):productOutput
products(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:productOrderBy!, where:productExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:productInput, update:productInput, upsert:productInput, inserts:[productInput!]!, updates:[productInput!]!, upserts:[productInput!]!
):productOutput
user(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:userOrderBy!, where:userExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:userInput, update:userInput, upsert:userInput
):userOutput
users(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:userOrderBy!, where:userExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:userInput, update:userInput, upsert:userInput, inserts:[userInput!]!, updates:[userInput!]!, upserts:[userInput!]!
):userOutput
}
enum OrderDirection {
asc
desc
}
type Query {
line_item(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:line_itemOrderBy!, where:line_itemExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):line_itemOutput
line_items(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:line_itemOrderBy!, where:line_itemExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):[line_itemOutput!]!
product(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:productOrderBy!, where:productExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):productOutput
products(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:productOrderBy!, where:productExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):[productOutput!]!
user(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:userOrderBy!, where:userExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):userOutput
users(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:userOrderBy!, where:userExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):[userOutput!]!
}
input StringExpression {
contained_in:String!
contains:[String!]!
eq:String!
equals:String!
greater_or_equals:String!
greater_than:String!
gt:String!
gte:String!
has_key:String!
has_key_all:[String!]!
has_key_any:[String!]!
ilike:String!
in:[String!]!
is_null:Boolean!
lesser_or_equals:String!
lesser_than:String!
like:String!
lt:String!
lte:String!
neq:String!
nilike:String!
nin:[String!]!
nlike:String!
not_equals:String!
not_ilike:String!
not_in:[String!]!
not_like:String!
not_similar:String!
nsimilar:String!
similar:String!
}
input line_itemExpression {
and:line_itemExpression!
id:IntExpression!
not:line_itemExpression!
or:line_itemExpression!
price:FloatExpression!
product:IntExpression!
quantity:IntExpression!
}
input line_itemInput {
id:Int!
price:Float
product:Int
quantity:Int
}
input line_itemOrderBy {
id:OrderDirection!
price:OrderDirection!
product:OrderDirection!
quantity:OrderDirection!
}
type line_itemOutput {
avg_id:Int!
avg_price:Float
avg_product:Int
avg_quantity:Int
count_id:Int!
count_price:Float
count_product:Int
count_quantity:Int
id:Int!
max_id:Int!
max_price:Float
max_product:Int
max_quantity:Int
min_id:Int!
min_price:Float
min_product:Int
min_quantity:Int
price:Float
product:Int
quantity:Int
stddev_id:Int!
stddev_pop_id:Int!
stddev_pop_price:Float
stddev_pop_product:Int
stddev_pop_quantity:Int
stddev_price:Float
stddev_product:Int
stddev_quantity:Int
stddev_samp_id:Int!
stddev_samp_price:Float
stddev_samp_product:Int
stddev_samp_quantity:Int
var_pop_id:Int!
var_pop_price:Float
var_pop_product:Int
var_pop_quantity:Int
var_samp_id:Int!
var_samp_price:Float
var_samp_product:Int
var_samp_quantity:Int
variance_id:Int!
variance_price:Float
variance_product:Int
variance_quantity:Int
}
input productExpression {
and:productExpression!
id:IntExpression!
name:StringExpression!
not:productExpression!
or:productExpression!
weight:FloatExpression!
}
input productInput {
id:Int!
name:String
weight:Float
}
input productOrderBy {
id:OrderDirection!
name:OrderDirection!
weight:OrderDirection!
}
type productOutput {
avg_id:Int!
avg_weight:Float
count_id:Int!
count_weight:Float
id:Int!
max_id:Int!
max_weight:Float
min_id:Int!
min_weight:Float
name:String
stddev_id:Int!
stddev_pop_id:Int!
stddev_pop_weight:Float
stddev_samp_id:Int!
stddev_samp_weight:Float
stddev_weight:Float
var_pop_id:Int!
var_pop_weight:Float
var_samp_id:Int!
var_samp_weight:Float
variance_id:Int!
variance_weight:Float
weight:Float
}
input userExpression {
and:userExpression!
full_name:StringExpression!
id:IntExpression!
not:userExpression!
or:userExpression!
}
input userInput {
full_name:String
id:Int!
}
input userOrderBy {
full_name:OrderDirection!
id:OrderDirection!
}
type userOutput {
avg_id:Int!
count_id:Int!
full_name:String
id:Int!
max_id:Int!
min_id:Int!
stddev_id:Int!
stddev_pop_id:Int!
stddev_samp_id:Int!
var_pop_id:Int!
var_samp_id:Int!
variance_id:Int!
}
schema {
mutation: Mutation
query: Query
}
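
As a usage sketch against the schema above (connection string and table contents hypothetical), the where, order_by and limit arguments compose like this:

package main

import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"log"

	"github.com/dosco/super-graph/core"
	_ "github.com/jackc/pgx/v4/stdlib"
)

func main() {
	// Hypothetical DSN; point it at a database with the product table
	// created by the integration tests.
	db, err := sql.Open("pgx", "postgres://localhost:5432/example")
	if err != nil {
		log.Fatal(err)
	}
	sg, err := core.NewSuperGraph(&core.Config{}, db)
	if err != nil {
		log.Fatal(err)
	}
	res, err := sg.GraphQL(context.Background(),
		`query {
			products(where: { weight: { gt: 0.3 } }, order_by: { name: asc }, limit: 2) {
				id
				name
			}
		}`,
		json.RawMessage(`{}`))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(res.Data))
}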

View File

@ -0,0 +1,27 @@
package postgresql_test
import (
"database/sql"
"os"
"testing"
integration_tests "github.com/dosco/super-graph/core/internal/integration_tests"
_ "github.com/jackc/pgx/v4/stdlib"
"github.com/stretchr/testify/require"
)
func TestPostgreSQL(t *testing.T) {
url, found := os.LookupEnv("SG_POSTGRESQL_TEST_URL")
if !found {
t.Skip("set the SG_POSTGRESQL_TEST_URL env variable if you want to run integration tests against a PostgreSQL database")
} else {
db, err := sql.Open("pgx", url)
require.NoError(t, err)
integration_tests.DropSchema(t, db)
integration_tests.SetupSchema(t, db)
integration_tests.TestSuperGraph(t, db, func(t *testing.T) {
})
}
}
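
For completeness, a sketch of how the suite above is typically enabled (connection URL hypothetical; without the variable the test skips itself, as shown):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Illustrative only: export SG_POSTGRESQL_TEST_URL before running the
// integration tests, here done programmatically for the child process.
func main() {
	cmd := exec.Command("go", "test", "./core/internal/integration_tests/...")
	cmd.Env = append(os.Environ(),
		"SG_POSTGRESQL_TEST_URL=postgres://postgres@localhost:5432/sg_test?sslmode=disable")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("tests failed:", err)
	}
}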

View File

@ -1,4 +1,3 @@
//nolint:errcheck
package psql package psql
import ( import (
@ -12,8 +11,7 @@ import (
func (c *compilerContext) renderBaseColumns( func (c *compilerContext) renderBaseColumns(
sel *qcode.Select, sel *qcode.Select,
ti *DBTableInfo, ti *DBTableInfo,
childCols []*qcode.Column, childCols []*qcode.Column) ([]int, bool, error) {
skipped uint32) ([]int, bool, error) {
var realColsRendered []int var realColsRendered []int
@ -113,15 +111,15 @@ func (c *compilerContext) renderColumnSearchRank(sel *qcode.Select, ti *DBTableI
c.renderComma(columnsRendered) c.renderComma(columnsRendered)
//fmt.Fprintf(w, `ts_rank("%s"."%s", websearch_to_tsquery('%s')) AS %s`, //fmt.Fprintf(w, `ts_rank("%s"."%s", websearch_to_tsquery('%s')) AS %s`,
//c.sel.Name, cn, arg.Val, col.Name) //c.sel.Name, cn, arg.Val, col.Name)
io.WriteString(c.w, `ts_rank(`) _, _ = io.WriteString(c.w, `ts_rank(`)
colWithTable(c.w, ti.Name, cn) colWithTable(c.w, ti.Name, cn)
if c.schema.ver >= 110000 { if c.schema.ver >= 110000 {
io.WriteString(c.w, `, websearch_to_tsquery('{{`) _, _ = io.WriteString(c.w, `, websearch_to_tsquery(`)
} else { } else {
io.WriteString(c.w, `, to_tsquery('{{`) _, _ = io.WriteString(c.w, `, to_tsquery(`)
} }
io.WriteString(c.w, arg.Val) c.md.renderValueExp(c.w, Param{Name: arg.Val, Type: "string"})
io.WriteString(c.w, `}}'))`) _, _ = io.WriteString(c.w, `))`)
alias(c.w, col.Name) alias(c.w, col.Name)
return nil return nil
@ -138,15 +136,15 @@ func (c *compilerContext) renderColumnSearchHeadline(sel *qcode.Select, ti *DBTa
c.renderComma(columnsRendered) c.renderComma(columnsRendered)
//fmt.Fprintf(w, `ts_headline("%s"."%s", websearch_to_tsquery('%s')) AS %s`, //fmt.Fprintf(w, `ts_headline("%s"."%s", websearch_to_tsquery('%s')) AS %s`,
//c.sel.Name, cn, arg.Val, col.Name) //c.sel.Name, cn, arg.Val, col.Name)
io.WriteString(c.w, `ts_headline(`) _, _ = io.WriteString(c.w, `ts_headline(`)
colWithTable(c.w, ti.Name, cn) colWithTable(c.w, ti.Name, cn)
if c.schema.ver >= 110000 { if c.schema.ver >= 110000 {
io.WriteString(c.w, `, websearch_to_tsquery('{{`) _, _ = io.WriteString(c.w, `, websearch_to_tsquery(`)
} else { } else {
io.WriteString(c.w, `, to_tsquery('{{`) _, _ = io.WriteString(c.w, `, to_tsquery(`)
} }
io.WriteString(c.w, arg.Val) c.md.renderValueExp(c.w, Param{Name: arg.Val, Type: "string"})
io.WriteString(c.w, `}}'))`) _, _ = io.WriteString(c.w, `))`)
alias(c.w, col.Name) alias(c.w, col.Name)
return nil return nil
@ -158,21 +156,21 @@ func (c *compilerContext) renderColumnTypename(sel *qcode.Select, ti *DBTableInf
} }
c.renderComma(columnsRendered) c.renderComma(columnsRendered)
io.WriteString(c.w, `(`) _, _ = io.WriteString(c.w, `(`)
squoted(c.w, ti.Name) squoted(c.w, ti.Name)
io.WriteString(c.w, ` :: text)`) _, _ = io.WriteString(c.w, ` :: text)`)
alias(c.w, col.Name) alias(c.w, col.Name)
return nil return nil
} }
func (c *compilerContext) renderColumnFunction(sel *qcode.Select, ti *DBTableInfo, col qcode.Column, columnsRendered int) error { func (c *compilerContext) renderColumnFunction(sel *qcode.Select, ti *DBTableInfo, col qcode.Column, columnsRendered int) error {
pl := funcPrefixLen(col.Name) pl := funcPrefixLen(c.schema.fm, col.Name)
// if pl == 0 { // if pl == 0 {
// //fmt.Fprintf(w, `'%s not defined' AS %s`, cn, col.Name) // //fmt.Fprintf(w, `'%s not defined' AS %s`, cn, col.Name)
// io.WriteString(c.w, `'`) // _, _ = io.WriteString(c.w, `'`)
// io.WriteString(c.w, col.Name) // _, _ = io.WriteString(c.w, col.Name)
// io.WriteString(c.w, ` not defined'`) // _, _ = io.WriteString(c.w, ` not defined'`)
// alias(c.w, col.Name) // alias(c.w, col.Name)
// } // }
@ -191,10 +189,10 @@ func (c *compilerContext) renderColumnFunction(sel *qcode.Select, ti *DBTableInf
c.renderComma(columnsRendered) c.renderComma(columnsRendered)
//fmt.Fprintf(w, `%s("%s"."%s") AS %s`, fn, c.sel.Name, cn, col.Name) //fmt.Fprintf(w, `%s("%s"."%s") AS %s`, fn, c.sel.Name, cn, col.Name)
io.WriteString(c.w, fn) _, _ = io.WriteString(c.w, fn)
io.WriteString(c.w, `(`) _, _ = io.WriteString(c.w, `(`)
colWithTable(c.w, ti.Name, cn) colWithTable(c.w, ti.Name, cn)
io.WriteString(c.w, `)`) _, _ = io.WriteString(c.w, `)`)
alias(c.w, col.Name) alias(c.w, col.Name)
return nil return nil
@ -202,7 +200,7 @@ func (c *compilerContext) renderColumnFunction(sel *qcode.Select, ti *DBTableInf
func (c *compilerContext) renderComma(columnsRendered int) { func (c *compilerContext) renderComma(columnsRendered int) {
if columnsRendered != 0 { if columnsRendered != 0 {
io.WriteString(c.w, `, `) _, _ = io.WriteString(c.w, `, `)
} }
} }
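
Net effect of the search-column change: the user's search text is no longer spliced into the SQL through a '{{...}}' placeholder but bound as a positional parameter. Roughly (table, column and alias names hypothetical):

package psqlsketch

// searchRank is an illustrative shape of the rank column after the change;
// the search string arrives as $1 instead of being interpolated.
const searchRank = `ts_rank("products"."tsv", websearch_to_tsquery($1)) AS "rank"`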

View File

@ -4,17 +4,18 @@ package psql
import ( import (
"encoding/json" "encoding/json"
"github.com/dosco/super-graph/core/internal/qcode" "github.com/dosco/super-graph/core/internal/qcode"
) )
var ( var (
qcompileTest, _ = qcode.NewCompiler(qcode.Config{}) qcompileTest, _ = qcode.NewCompiler(qcode.Config{})
schema = getTestSchema() schema, _ = GetTestSchema()
vars = NewVariables(map[string]string{ vars = map[string]string{
"admin_account_id": "5", "admin_account_id": "5",
}) }
pcompileTest = NewCompiler(Config{ pcompileTest = NewCompiler(Config{
Schema: schema, Schema: schema,
@ -24,6 +25,37 @@ var (
// FuzzerEntrypoint for Fuzzbuzz // FuzzerEntrypoint for Fuzzbuzz
func Fuzz(data []byte) int { func Fuzz(data []byte) int {
err1 := query(data)
err2 := insert(data)
err3 := update(data)
err4 := delete(data)
if err1 != nil || err2 != nil || err3 != nil || err4 != nil {
return 0
}
return 1
}
func query(data []byte) error {
gql := data
qc, err1 := qcompileTest.Compile(gql, "user")
vars := map[string]json.RawMessage{
"data": json.RawMessage(data),
}
_, _, err2 := pcompileTest.CompileEx(qc, vars)
if err1 != nil {
return err1
} else {
return err2
}
}
func insert(data []byte) error {
gql := `mutation { gql := `mutation {
product(insert: $data) { product(insert: $data) {
id id
@ -46,9 +78,57 @@ func Fuzz(data []byte) int {
} }
_, _, err = pcompileTest.CompileEx(qc, vars) _, _, err = pcompileTest.CompileEx(qc, vars)
return err
}
func update(data []byte) error {
gql := `mutation {
product(insert: $data) {
id
name
user {
id
full_name
email
}
}
}`
qc, err := qcompileTest.Compile([]byte(gql), "user")
if err != nil { if err != nil {
return 0 panic("qcompile can't fail")
} }
return 1 vars := map[string]json.RawMessage{
"data": json.RawMessage(data),
}
_, _, err = pcompileTest.CompileEx(qc, vars)
return err
}
func delete(data []byte) error {
gql := `mutation {
product(insert: $data) {
id
name
user {
id
full_name
email
}
}
}`
qc, err := qcompileTest.Compile([]byte(gql), "user")
if err != nil {
panic("qcompile can't fail")
}
vars := map[string]json.RawMessage{
"data": json.RawMessage(data),
}
_, _, err = pcompileTest.CompileEx(qc, vars)
return err
} }

View File

@ -0,0 +1,20 @@
// +build gofuzz
package psql
import (
"testing"
)
var ret int // package-level sink so the compiler cannot elide the Fuzz calls
func TestFuzzCrashers(t *testing.T) {
var crashers = []string{
"{\"connect\":{}}",
"q(q{q{q{q{q{q{q{q{",
}
for _, f := range crashers {
ret = Fuzz([]byte(f))
}
}

View File

@ -10,8 +10,8 @@ import (
"github.com/dosco/super-graph/core/internal/util" "github.com/dosco/super-graph/core/internal/util"
) )
func (c *compilerContext) renderInsert(qc *qcode.QCode, w io.Writer, func (c *compilerContext) renderInsert(
vars Variables, ti *DBTableInfo) (uint32, error) { w io.Writer, qc *qcode.QCode, vars Variables, ti *DBTableInfo) (uint32, error) {
insert, ok := vars[qc.ActionVar] insert, ok := vars[qc.ActionVar]
if !ok { if !ok {
@ -21,9 +21,16 @@ func (c *compilerContext) renderInsert(qc *qcode.QCode, w io.Writer,
return 0, fmt.Errorf("variable '%s' is empty", qc.ActionVar) return 0, fmt.Errorf("variable '%s' is empty", qc.ActionVar)
} }
io.WriteString(c.w, `WITH "_sg_input" AS (SELECT '{{`) io.WriteString(c.w, `WITH "_sg_input" AS (SELECT `)
io.WriteString(c.w, qc.ActionVar) if insert[0] == '[' {
io.WriteString(c.w, `}}' :: json AS j)`) io.WriteString(c.w, `json_array_elements(`)
}
c.md.renderValueExp(c.w, Param{Name: qc.ActionVar, Type: "json"})
io.WriteString(c.w, ` :: json`)
if insert[0] == '[' {
io.WriteString(c.w, `)`)
}
io.WriteString(c.w, ` AS j)`)
st := util.NewStack() st := util.NewStack()
st.Push(kvitem{_type: itemInsert, key: ti.Name, val: insert, ti: ti}) st.Push(kvitem{_type: itemInsert, key: ti.Name, val: insert, ti: ti})
@ -82,34 +89,17 @@ func (c *compilerContext) renderInsertStmt(qc *qcode.QCode, w io.Writer, item re
io.WriteString(w, `INSERT INTO `) io.WriteString(w, `INSERT INTO `)
quoted(w, ti.Name) quoted(w, ti.Name)
io.WriteString(w, ` (`) io.WriteString(w, ` (`)
renderInsertUpdateColumns(w, qc, jt, ti, sk, false) c.renderInsertUpdateColumns(qc, jt, ti, sk, false)
renderNestedInsertRelColumns(w, item.kvitem, false) renderNestedInsertRelColumns(w, item.kvitem, false)
io.WriteString(w, `)`) io.WriteString(w, `)`)
io.WriteString(w, ` SELECT `) io.WriteString(w, ` SELECT `)
renderInsertUpdateColumns(w, qc, jt, ti, sk, true) c.renderInsertUpdateColumns(qc, jt, ti, sk, true)
renderNestedInsertRelColumns(w, item.kvitem, true) renderNestedInsertRelColumns(w, item.kvitem, true)
io.WriteString(w, ` FROM "_sg_input" i, `) io.WriteString(w, ` FROM "_sg_input" i`)
renderNestedInsertRelTables(w, item.kvitem) renderNestedInsertRelTables(w, item.kvitem)
io.WriteString(w, ` RETURNING *)`)
if item.array {
io.WriteString(w, `json_populate_recordset`)
} else {
io.WriteString(w, `json_populate_record`)
}
io.WriteString(w, `(NULL::`)
io.WriteString(w, ti.Name)
if len(item.path) == 0 {
io.WriteString(w, `, i.j) t RETURNING *)`)
} else {
io.WriteString(w, `, i.j->`)
joinPath(w, item.path)
io.WriteString(w, `) t RETURNING *)`)
}
return nil return nil
} }
@ -172,21 +162,21 @@ func renderNestedInsertRelColumns(w io.Writer, item kvitem, values bool) error {
func renderNestedInsertRelTables(w io.Writer, item kvitem) error { func renderNestedInsertRelTables(w io.Writer, item kvitem) error {
if len(item.items) == 0 { if len(item.items) == 0 {
if item.relPC != nil && item.relPC.Type == RelOneToMany { if item.relPC != nil && item.relPC.Type == RelOneToMany {
quoted(w, item.relPC.Left.Table)
io.WriteString(w, `, `) io.WriteString(w, `, `)
quoted(w, item.relPC.Left.Table)
} }
} else { } else {
// Render tables needed to set values if child-to-parent // Render tables needed to set values if child-to-parent
// relationship is one-to-many // relationship is one-to-many
for _, v := range item.items { for _, v := range item.items {
if v.relCP.Type == RelOneToMany { if v.relCP.Type == RelOneToMany {
io.WriteString(w, `, `)
if v._ctype > 0 { if v._ctype > 0 {
io.WriteString(w, `"_x_`) io.WriteString(w, `"_x_`)
io.WriteString(w, v.relCP.Left.Table) io.WriteString(w, v.relCP.Left.Table)
io.WriteString(w, `", `) io.WriteString(w, `"`)
} else { } else {
quoted(w, v.relCP.Left.Table) quoted(w, v.relCP.Left.Table)
io.WriteString(w, `, `)
} }
} }
} }
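
Putting the pieces together, for an array-valued insert variable the rewritten renderInsert emits a CTE shaped roughly like the sketch below ($1 is the positional parameter that replaced the old '{{data}}' placeholder; exact casts come from renderInsertUpdateColumns):

package psqlsketch

// insertCTE is an illustrative (not verbatim) shape of the SQL produced for
// `mutation { users(insert: $data) { id } }` when $data is a JSON array.
const insertCTE = `
WITH "_sg_input" AS (SELECT json_array_elements($1 :: json) AS j),
"users" AS (
	INSERT INTO "users" ("full_name", "email")
	SELECT CAST(i.j ->> 'full_name' AS text), CAST(i.j ->> 'email' AS text)
	FROM "_sg_input" i
	RETURNING *
)
SELECT ...`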

View File

@ -1,4 +1,4 @@
package psql package psql_test
import ( import (
"encoding/json" "encoding/json"

View File

@ -0,0 +1,61 @@
package psql
import (
"io"
)
func (md *Metadata) RenderVar(w io.Writer, vv string) {
f, s := -1, 0
for i := range vv {
v := vv[i]
switch {
case v == '$' && (i == 0 || vv[i-1] != '\\'):
if (i - s) > 0 {
_, _ = io.WriteString(w, vv[s:i])
}
f = i
case (v < 'a' || v > 'z') &&
(v < 'A' || v > 'Z') &&
(v < '0' || v > '9') &&
v != '_' &&
f != -1 &&
(i-f) > 1:
md.renderValueExp(w, Param{Name: vv[f+1 : i]})
s = i
f = -1
}
}
if f != -1 && (len(vv)-f) > 1 {
md.renderValueExp(w, Param{Name: vv[f+1:]})
} else {
_, _ = io.WriteString(w, vv[s:])
}
}
func (md *Metadata) renderValueExp(w io.Writer, p Param) {
_, _ = io.WriteString(w, `$`)
if v, ok := md.pindex[p.Name]; ok {
int32String(w, int32(v))
} else {
md.params = append(md.params, p)
n := len(md.params)
if md.pindex == nil {
md.pindex = make(map[string]int)
}
md.pindex[p.Name] = n
int32String(w, int32(n))
}
}
func (md Metadata) Skipped() uint32 {
return md.skipped
}
func (md Metadata) Params() []Param {
return md.params
}
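
What renderValueExp and the pindex map accomplish, as a self-contained sketch (regex-based here for brevity, where the real code hand-rolls the scan): every distinct $name reference maps to one stable positional parameter, and params records first-seen order so values can be bound at execution time:

package main

import (
	"fmt"
	"regexp"
)

// renderParams rewrites $name references as positional parameters ($1, $2, ...)
// and returns the names in the order they were first seen.
func renderParams(sql string) (string, []string) {
	index := map[string]int{}
	var params []string
	re := regexp.MustCompile(`\$([A-Za-z0-9_]+)`)
	out := re.ReplaceAllStringFunc(sql, func(m string) string {
		name := m[1:] // strip the leading '$'
		n, ok := index[name]
		if !ok {
			params = append(params, name)
			n = len(params)
			index[name] = n
		}
		return fmt.Sprintf("$%d", n)
	})
	return out, params
}

func main() {
	sql, params := renderParams(
		`SELECT * FROM users WHERE id = $user_id AND org = $org_id AND owner = $user_id`)
	fmt.Println(sql)    // ... id = $1 AND org = $2 AND owner = $1
	fmt.Println(params) // [user_id org_id]
}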

View File

@ -6,10 +6,11 @@ import (
"errors" "errors"
"fmt" "fmt"
"io" "io"
"strings"
"github.com/dosco/super-graph/jsn"
"github.com/dosco/super-graph/core/internal/qcode" "github.com/dosco/super-graph/core/internal/qcode"
"github.com/dosco/super-graph/core/internal/util" "github.com/dosco/super-graph/core/internal/util"
"github.com/dosco/super-graph/jsn"
) )
type itemType int type itemType int
@ -33,42 +34,44 @@ var updateTypes = map[string]itemType{
var noLimit = qcode.Paging{NoLimit: true} var noLimit = qcode.Paging{NoLimit: true}
func (co *Compiler) compileMutation(qc *qcode.QCode, w io.Writer, vars Variables) (uint32, error) { func (co *Compiler) compileMutation(w io.Writer, qc *qcode.QCode, vars Variables) (Metadata, error) {
md := Metadata{}
if len(qc.Selects) == 0 { if len(qc.Selects) == 0 {
return 0, errors.New("empty query") return md, errors.New("empty query")
} }
c := &compilerContext{w, qc.Selects, co} c := &compilerContext{md, w, qc.Selects, co}
root := &qc.Selects[0] root := &qc.Selects[0]
ti, err := c.schema.GetTable(root.Name) ti, err := c.schema.GetTable(root.Name)
if err != nil { if err != nil {
return 0, err return c.md, err
} }
switch qc.Type { switch qc.Type {
case qcode.QTInsert: case qcode.QTInsert:
if _, err := c.renderInsert(qc, w, vars, ti); err != nil { if _, err := c.renderInsert(w, qc, vars, ti); err != nil {
return 0, err return c.md, err
} }
case qcode.QTUpdate: case qcode.QTUpdate:
if _, err := c.renderUpdate(qc, w, vars, ti); err != nil { if _, err := c.renderUpdate(w, qc, vars, ti); err != nil {
return 0, err return c.md, err
} }
case qcode.QTUpsert: case qcode.QTUpsert:
if _, err := c.renderUpsert(qc, w, vars, ti); err != nil { if _, err := c.renderUpsert(w, qc, vars, ti); err != nil {
return 0, err return c.md, err
} }
case qcode.QTDelete: case qcode.QTDelete:
if _, err := c.renderDelete(qc, w, vars, ti); err != nil { if _, err := c.renderDelete(w, qc, vars, ti); err != nil {
return 0, err return c.md, err
} }
default: default:
return 0, errors.New("valid mutations are 'insert', 'update', 'upsert' and 'delete'") return c.md, errors.New("valid mutations are 'insert', 'update', 'upsert' and 'delete'")
} }
root.Paging = noLimit root.Paging = noLimit
@ -77,7 +80,7 @@ func (co *Compiler) compileMutation(qc *qcode.QCode, w io.Writer, vars Variables
root.Where = nil root.Where = nil
root.Args = nil root.Args = nil
return c.compileQuery(qc, w, vars) return co.compileQueryWithMetadata(w, qc, vars, c.md)
} }
type kvitem struct { type kvitem struct {
@ -365,12 +368,12 @@ func (c *compilerContext) renderUnionStmt(w io.Writer, item renitem) error {
return nil return nil
} }
func renderInsertUpdateColumns(w io.Writer, func (c *compilerContext) renderInsertUpdateColumns(
qc *qcode.QCode, qc *qcode.QCode,
jt map[string]json.RawMessage, jt map[string]json.RawMessage,
ti *DBTableInfo, ti *DBTableInfo,
skipcols map[string]struct{}, skipcols map[string]struct{},
values bool) (uint32, error) { isValues bool) (uint32, error) {
root := &qc.Selects[0] root := &qc.Selects[0]
renderedCol := false renderedCol := false
@ -392,13 +395,18 @@ func renderInsertUpdateColumns(w io.Writer,
} }
} }
if n != 0 { if n != 0 {
io.WriteString(w, `, `) io.WriteString(c.w, `, `)
} }
if values { if isValues {
colWithTable(w, "t", cn.Name) io.WriteString(c.w, `CAST( i.j ->>`)
io.WriteString(c.w, `'`)
io.WriteString(c.w, cn.Name)
io.WriteString(c.w, `' AS `)
io.WriteString(c.w, cn.Type)
io.WriteString(c.w, `)`)
} else { } else {
quoted(w, cn.Name) quoted(c.w, cn.Name)
} }
if !renderedCol { if !renderedCol {
@ -417,16 +425,28 @@ func renderInsertUpdateColumns(w io.Writer,
continue continue
} }
if i != 0 || n != 0 { if i != 0 || n != 0 {
io.WriteString(w, `, `) io.WriteString(c.w, `, `)
} }
if values { if isValues {
io.WriteString(w, `'`) val := root.PresetMap[cn]
io.WriteString(w, root.PresetMap[cn]) switch {
io.WriteString(w, `' :: `) case ok && len(val) > 1 && val[0] == '$':
io.WriteString(w, col.Type) c.md.renderValueExp(c.w, Param{Name: val[1:], Type: col.Type})
case ok && strings.HasPrefix(val, "sql:"):
io.WriteString(c.w, `(`)
c.md.RenderVar(c.w, val[4:])
io.WriteString(c.w, `)`)
case ok:
squoted(c.w, val)
}
io.WriteString(c.w, ` :: `)
io.WriteString(c.w, col.Type)
} else { } else {
quoted(w, cn) quoted(c.w, cn)
} }
if !renderedCol { if !renderedCol {
@ -435,15 +455,15 @@ func renderInsertUpdateColumns(w io.Writer,
} }
if len(skipcols) != 0 && renderedCol { if len(skipcols) != 0 && renderedCol {
io.WriteString(w, `, `) io.WriteString(c.w, `, `)
} }
return 0, nil return 0, nil
} }
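
The three preset value forms the rewritten column renderer distinguishes, summarized as a sketch (column names and types invented for illustration):

package psqlsketch

// presetSQL maps the preset forms a role can configure to the SQL the
// rewritten renderInsertUpdateColumns emits for them.
var presetSQL = map[string]string{
	"$user_id":  `$1 :: bigint`,           // positional parameter via renderValueExp
	"sql:now()": `(now()) :: timestamptz`, // inline expression via RenderVar
	"api":       `'api' :: text`,          // plain string becomes a quoted literal
}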
func (c *compilerContext) renderUpsert(qc *qcode.QCode, w io.Writer, func (c *compilerContext) renderUpsert(
vars Variables, ti *DBTableInfo) (uint32, error) { w io.Writer, qc *qcode.QCode, vars Variables, ti *DBTableInfo) (uint32, error) {
root := &qc.Selects[0]
root := &qc.Selects[0]
upsert, ok := vars[qc.ActionVar] upsert, ok := vars[qc.ActionVar]
if !ok { if !ok {
return 0, fmt.Errorf("variable '%s' not defined", qc.ActionVar) return 0, fmt.Errorf("variable '%s' not defined", qc.ActionVar)
@ -461,7 +481,7 @@ func (c *compilerContext) renderUpsert(qc *qcode.QCode, w io.Writer,
return 0, err return 0, err
} }
if _, err := c.renderInsert(qc, w, vars, ti); err != nil { if _, err := c.renderInsert(w, qc, vars, ti); err != nil {
return 0, err return 0, err
} }
@ -522,6 +542,10 @@ func (c *compilerContext) renderConnectStmt(qc *qcode.QCode, w io.Writer,
rel := item.relPC rel := item.relPC
if rel == nil {
return errors.New("invalid connect value")
}
// Render only for parent-to-child relationship of one-to-one // Render only for parent-to-child relationship of one-to-one
// For this to work the child needs to found first so it's primary key // For this to work the child needs to found first so it's primary key
// can be set in the related column on the parent object. // can be set in the related column on the parent object.
@ -667,7 +691,7 @@ func renderCteName(w io.Writer, item kvitem) error {
io.WriteString(w, item.ti.Name) io.WriteString(w, item.ti.Name)
if item._type == itemConnect || item._type == itemDisconnect { if item._type == itemConnect || item._type == itemDisconnect {
io.WriteString(w, `_`) io.WriteString(w, `_`)
int2string(w, item.id) int32String(w, item.id)
} }
io.WriteString(w, `"`) io.WriteString(w, `"`)
return nil return nil

View File

@ -1,4 +1,4 @@
package psql package psql_test
import ( import (
"encoding/json" "encoding/json"
@ -72,7 +72,7 @@ func delete(t *testing.T) {
// } // }
// }` // }`
// sql := `WITH "users" AS (WITH "input" AS (SELECT '{{data}}' :: json AS j) INSERT INTO "users" ("full_name", "email") SELECT "full_name", "email" FROM input i, json_populate_record(NULL::users, i.j) t WHERE false RETURNING *) SELECT json_object_agg('user', json_0) FROM (SELECT row_to_json((SELECT "json_row_0" FROM (SELECT "users_0"."id" AS "id") AS "json_row_0")) AS "json_0" FROM (SELECT "users"."id" FROM "users" LIMIT ('1') :: integer) AS "users_0" LIMIT ('1') :: integer) AS "sel_0"` // sql := `WITH "users" AS (WITH "input" AS (SELECT '$1' :: json AS j) INSERT INTO "users" ("full_name", "email") SELECT "full_name", "email" FROM input i, json_populate_record(NULL::users, i.j) t WHERE false RETURNING *) SELECT json_object_agg('user', json_0) FROM (SELECT row_to_json((SELECT "json_row_0" FROM (SELECT "users_0"."id" AS "id") AS "json_row_0")) AS "json_0" FROM (SELECT "users"."id" FROM "users" LIMIT ('1') :: integer) AS "users_0" LIMIT ('1') :: integer) AS "sel_0"`
// vars := map[string]json.RawMessage{ // vars := map[string]json.RawMessage{
// "data": json.RawMessage(`{"email": "reannagreenholt@orn.com", "full_name": "Flo Barton"}`), // "data": json.RawMessage(`{"email": "reannagreenholt@orn.com", "full_name": "Flo Barton"}`),
@ -97,7 +97,7 @@ func delete(t *testing.T) {
// } // }
// }` // }`
// sql := `WITH "users" AS (WITH "input" AS (SELECT '{{data}}' :: json AS j) UPDATE "users" SET ("full_name", "email") = (SELECT "full_name", "email" FROM input i, json_populate_record(NULL::users, i.j) t) WHERE false RETURNING *) SELECT json_object_agg('user', json_0) FROM (SELECT row_to_json((SELECT "json_row_0" FROM (SELECT "users_0"."id" AS "id", "users_0"."email" AS "email") AS "json_row_0")) AS "json_0" FROM (SELECT "users"."id", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LIMIT ('1') :: integer) AS "sel_0"` // sql := `WITH "users" AS (WITH "input" AS (SELECT '$1' :: json AS j) UPDATE "users" SET ("full_name", "email") = (SELECT "full_name", "email" FROM input i, json_populate_record(NULL::users, i.j) t) WHERE false RETURNING *) SELECT json_object_agg('user', json_0) FROM (SELECT row_to_json((SELECT "json_row_0" FROM (SELECT "users_0"."id" AS "id", "users_0"."email" AS "email") AS "json_row_0")) AS "json_0" FROM (SELECT "users"."id", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LIMIT ('1') :: integer) AS "sel_0"`
// vars := map[string]json.RawMessage{ // vars := map[string]json.RawMessage{
// "data": json.RawMessage(`{"email": "reannagreenholt@orn.com", "full_name": "Flo Barton"}`), // "data": json.RawMessage(`{"email": "reannagreenholt@orn.com", "full_name": "Flo Barton"}`),

View File

@ -1,4 +1,4 @@
package psql package psql_test
import ( import (
"fmt" "fmt"
@ -8,6 +8,7 @@ import (
"strings" "strings"
"testing" "testing"
"github.com/dosco/super-graph/core/internal/psql"
"github.com/dosco/super-graph/core/internal/qcode" "github.com/dosco/super-graph/core/internal/qcode"
) )
@ -19,7 +20,7 @@ const (
var ( var (
qcompile *qcode.Compiler qcompile *qcode.Compiler
pcompile *Compiler pcompile *psql.Compiler
expected map[string][]string expected map[string][]string
) )
@ -133,13 +134,16 @@ func TestMain(m *testing.M) {
log.Fatal(err) log.Fatal(err)
} }
schema := getTestSchema() schema, err := psql.GetTestSchema()
if err != nil {
log.Fatal(err)
}
vars := NewVariables(map[string]string{ vars := map[string]string{
"admin_account_id": "5", "admin_account_id": "5",
}) }
pcompile = NewCompiler(Config{ pcompile = psql.NewCompiler(psql.Config{
Schema: schema, Schema: schema,
Vars: vars, Vars: vars,
}) })
@ -173,7 +177,7 @@ func TestMain(m *testing.M) {
os.Exit(m.Run()) os.Exit(m.Run())
} }
func compileGQLToPSQL(t *testing.T, gql string, vars Variables, role string) { func compileGQLToPSQL(t *testing.T, gql string, vars psql.Variables, role string) {
generateTestFile := false generateTestFile := false
if generateTestFile { if generateTestFile {

View File

@ -7,6 +7,7 @@ import (
"errors" "errors"
"fmt" "fmt"
"io" "io"
"strconv"
"strings" "strings"
"github.com/dosco/super-graph/core/internal/qcode" "github.com/dosco/super-graph/core/internal/qcode"
@ -17,9 +18,24 @@ const (
closeBlock = 500 closeBlock = 500
) )
var ( type Param struct {
ErrAllTablesSkipped = errors.New("all tables skipped. cannot render query") Name string
) Type string
IsArray bool
}
type Metadata struct {
skipped uint32
params []Param
pindex map[string]int
}
type compilerContext struct {
md Metadata
w io.Writer
s []qcode.Select
*Compiler
}
type Variables map[string]json.RawMessage type Variables map[string]json.RawMessage
@ -40,12 +56,12 @@ func NewCompiler(conf Config) *Compiler {
} }
} }
func (c *Compiler) AddRelationship(child, parent string, rel *DBRel) error { func (co *Compiler) AddRelationship(child, parent string, rel *DBRel) error {
return c.schema.SetRel(child, parent, rel) return co.schema.SetRel(child, parent, rel)
} }
func (c *Compiler) IDColumn(table string) (*DBColumn, error) { func (co *Compiler) IDColumn(table string) (*DBColumn, error) {
ti, err := c.schema.GetTable(table) ti, err := co.schema.GetTable(table)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -57,65 +73,79 @@ func (c *Compiler) IDColumn(table string) (*DBColumn, error) {
return ti.PrimaryCol, nil return ti.PrimaryCol, nil
} }
type compilerContext struct { func (co *Compiler) CompileEx(qc *qcode.QCode, vars Variables) (Metadata, []byte, error) {
w io.Writer
s []qcode.Select
*Compiler
}
func (co *Compiler) CompileEx(qc *qcode.QCode, vars Variables) (uint32, []byte, error) {
w := &bytes.Buffer{} w := &bytes.Buffer{}
skipped, err := co.Compile(qc, w, vars) metad, err := co.Compile(w, qc, vars)
return skipped, w.Bytes(), err return metad, w.Bytes(), err
} }
func (co *Compiler) Compile(qc *qcode.QCode, w io.Writer, vars Variables) (uint32, error) { func (co *Compiler) Compile(w io.Writer, qc *qcode.QCode, vars Variables) (Metadata, error) {
return co.CompileWithMetadata(w, qc, vars, Metadata{})
}
func (co *Compiler) CompileWithMetadata(w io.Writer, qc *qcode.QCode, vars Variables, md Metadata) (Metadata, error) {
md.skipped = 0
if qc == nil {
return md, fmt.Errorf("qcode is nil")
}
switch qc.Type { switch qc.Type {
case qcode.QTQuery: case qcode.QTQuery:
return co.compileQuery(qc, w, vars) return co.compileQueryWithMetadata(w, qc, vars, md)
case qcode.QTInsert, qcode.QTUpdate, qcode.QTDelete, qcode.QTUpsert:
return co.compileMutation(qc, w, vars) case qcode.QTInsert,
qcode.QTUpdate,
qcode.QTDelete,
qcode.QTUpsert:
return co.compileMutation(w, qc, vars)
default:
return Metadata{}, fmt.Errorf("Unknown operation type %d", qc.Type)
} }
return 0, fmt.Errorf("Unknown operation type %d", qc.Type)
} }
func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (uint32, error) { func (co *Compiler) compileQueryWithMetadata(
w io.Writer, qc *qcode.QCode, vars Variables, md Metadata) (Metadata, error) {
if len(qc.Selects) == 0 { if len(qc.Selects) == 0 {
return 0, errors.New("empty query") return md, errors.New("empty query")
} }
c := &compilerContext{w, qc.Selects, co} c := &compilerContext{md, w, qc.Selects, co}
st := NewIntStack() st := NewIntStack()
i := 0 i := 0
io.WriteString(c.w, `SELECT jsonb_build_object(`) io.WriteString(c.w, `SELECT jsonb_build_object(`)
for _, id := range qc.Roots { for _, id := range qc.Roots {
root := &qc.Selects[id]
if root.SkipRender || len(root.Cols) == 0 {
continue
}
st.Push(root.ID + closeBlock)
st.Push(root.ID)
if i != 0 { if i != 0 {
io.WriteString(c.w, `, `) io.WriteString(c.w, `, `)
} }
root := &qc.Selects[id]
if root.SkipRender || len(root.Cols) == 0 {
squoted(c.w, root.FieldName)
io.WriteString(c.w, `, `)
io.WriteString(c.w, `NULL`)
} else {
st.Push(root.ID + closeBlock)
st.Push(root.ID)
c.renderRootSelect(root) c.renderRootSelect(root)
}
i++ i++
} }
if st.Len() != 0 {
io.WriteString(c.w, `) as "__root" FROM `) io.WriteString(c.w, `) as "__root" FROM `)
} else {
if i == 0 { io.WriteString(c.w, `) as "__root"`)
return 0, ErrAllTablesSkipped return c.md, nil
} }
var ignored uint32
for { for {
if st.Len() == 0 { if st.Len() == 0 {
break break
@ -126,13 +156,14 @@ func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (
if id < closeBlock { if id < closeBlock {
sel := &c.s[id] sel := &c.s[id]
if len(sel.Cols) == 0 {
continue
}
ti, err := c.schema.GetTable(sel.Name) ti, err := c.schema.GetTable(sel.Name)
if err != nil { if err != nil {
return 0, err return c.md, err
}
if sel.Type != qcode.STUnion {
if len(sel.Cols) == 0 {
continue
} }
if sel.ParentID == -1 { if sel.ParentID == -1 {
@ -141,25 +172,24 @@ func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (
c.renderLateralJoin(sel) c.renderLateralJoin(sel)
} }
if !ti.Singular { if !ti.IsSingular {
c.renderPluralSelect(sel, ti) c.renderPluralSelect(sel, ti)
} }
skipped, err := c.renderSelect(sel, ti, vars) if err := c.renderSelect(sel, ti, vars); err != nil {
if err != nil { return c.md, err
return 0, err }
} }
ignored |= skipped
for _, cid := range sel.Children { for _, cid := range sel.Children {
if hasBit(skipped, uint32(cid)) { if hasBit(c.md.skipped, uint32(cid)) {
continue continue
} }
child := &c.s[cid] child := &c.s[cid]
if child.SkipRender { if child.SkipRender {
continue continue
} }
st.Push(child.ID + closeBlock) st.Push(child.ID + closeBlock)
st.Push(child.ID) st.Push(child.ID)
} }
@ -167,9 +197,10 @@ func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (
} else { } else {
sel := &c.s[(id - closeBlock)] sel := &c.s[(id - closeBlock)]
if sel.Type != qcode.STUnion {
ti, err := c.schema.GetTable(sel.Name) ti, err := c.schema.GetTable(sel.Name)
if err != nil { if err != nil {
return 0, err return c.md, err
} }
io.WriteString(c.w, `)`) io.WriteString(c.w, `)`)
@ -178,7 +209,7 @@ func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (
io.WriteString(c.w, `)`) io.WriteString(c.w, `)`)
aliasWithID(c.w, "__sj", sel.ID) aliasWithID(c.w, "__sj", sel.ID)
if !ti.Singular { if !ti.IsSingular {
io.WriteString(c.w, `)`) io.WriteString(c.w, `)`)
aliasWithID(c.w, "__sj", sel.ID) aliasWithID(c.w, "__sj", sel.ID)
} }
@ -190,23 +221,24 @@ func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (
} else { } else {
c.renderLateralJoinClose(sel) c.renderLateralJoinClose(sel)
} }
}
if sel.Type != qcode.STMember {
if len(sel.Args) != 0 { if len(sel.Args) != 0 {
i := 0
for _, v := range sel.Args { for _, v := range sel.Args {
qcode.FreeNode(v, 500) qcode.FreeNode(v)
i++ }
} }
} }
} }
} }
return ignored, nil return c.md, nil
} }
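
One behavioral consequence worth noting: a root that is skipped or has no columns no longer aborts compilation with ErrAllTablesSkipped; it is rendered as a NULL key instead. Sketch of the emitted SQL (query hypothetical):

package psqlsketch

// For `query { products { id } }` issued by a role with no access to
// products, the compiler previously returned ErrAllTablesSkipped; it now
// emits roughly:
const skippedRoot = `SELECT jsonb_build_object('products', NULL) as "__root"`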
func (c *compilerContext) renderPluralSelect(sel *qcode.Select, ti *DBTableInfo) error { func (c *compilerContext) renderPluralSelect(sel *qcode.Select, ti *DBTableInfo) error {
io.WriteString(c.w, `SELECT coalesce(jsonb_agg("__sj_`) io.WriteString(c.w, `SELECT coalesce(jsonb_agg("__sj_`)
int2string(c.w, sel.ID) int32String(c.w, sel.ID)
io.WriteString(c.w, `"."json"), '[]') as "json"`) io.WriteString(c.w, `"."json"), '[]') as "json"`)
if sel.Paging.Type != qcode.PtOffset { if sel.Paging.Type != qcode.PtOffset {
@ -230,7 +262,7 @@ func (c *compilerContext) renderPluralSelect(sel *qcode.Select, ti *DBTableInfo)
io.WriteString(c.w, `, CONCAT_WS(','`) io.WriteString(c.w, `, CONCAT_WS(','`)
for i := 0; i < n; i++ { for i := 0; i < n; i++ {
io.WriteString(c.w, `, max("__cur_`) io.WriteString(c.w, `, max("__cur_`)
int2string(c.w, int32(i)) int32String(c.w, int32(i))
io.WriteString(c.w, `")`) io.WriteString(c.w, `")`)
} }
io.WriteString(c.w, `) as "cursor"`) io.WriteString(c.w, `) as "cursor"`)
@ -246,7 +278,7 @@ func (c *compilerContext) renderRootSelect(sel *qcode.Select) error {
io.WriteString(c.w, `', `) io.WriteString(c.w, `', `)
io.WriteString(c.w, `"__sj_`) io.WriteString(c.w, `"__sj_`)
int2string(c.w, sel.ID) int32String(c.w, sel.ID)
io.WriteString(c.w, `"."json"`) io.WriteString(c.w, `"."json"`)
if sel.Paging.Type != qcode.PtOffset { if sel.Paging.Type != qcode.PtOffset {
@ -255,16 +287,14 @@ func (c *compilerContext) renderRootSelect(sel *qcode.Select) error {
io.WriteString(c.w, `_cursor', `) io.WriteString(c.w, `_cursor', `)
io.WriteString(c.w, `"__sj_`) io.WriteString(c.w, `"__sj_`)
int2string(c.w, sel.ID) int32String(c.w, sel.ID)
io.WriteString(c.w, `"."cursor"`) io.WriteString(c.w, `"."cursor"`)
} }
return nil return nil
} }
func (c *compilerContext) initSelect(sel *qcode.Select, ti *DBTableInfo, vars Variables) (uint32, []*qcode.Column, error) { func (c *compilerContext) initSelect(sel *qcode.Select, ti *DBTableInfo, vars Variables) ([]*qcode.Column, error) {
var skipped uint32
cols := make([]*qcode.Column, 0, len(sel.Cols)) cols := make([]*qcode.Column, 0, len(sel.Cols))
colmap := make(map[string]struct{}, len(sel.Cols)) colmap := make(map[string]struct{}, len(sel.Cols))
@ -306,9 +336,7 @@ func (c *compilerContext) initSelect(sel *qcode.Select, ti *DBTableInfo, vars Va
rel, err := c.schema.GetRel(child.Name, ti.Name) rel, err := c.schema.GetRel(child.Name, ti.Name)
if err != nil { if err != nil {
return 0, nil, err return nil, err
//skipped |= (1 << uint(id))
//continue
} }
switch rel.Type { switch rel.Type {
@ -334,16 +362,25 @@ func (c *compilerContext) initSelect(sel *qcode.Select, ti *DBTableInfo, vars Va
if _, ok := colmap[rel.Left.Col]; !ok { if _, ok := colmap[rel.Left.Col]; !ok {
cols = append(cols, &qcode.Column{Table: ti.Name, Name: rel.Left.Col, FieldName: rel.Right.Col}) cols = append(cols, &qcode.Column{Table: ti.Name, Name: rel.Left.Col, FieldName: rel.Right.Col})
colmap[rel.Left.Col] = struct{}{} colmap[rel.Left.Col] = struct{}{}
skipped |= (1 << uint(id)) c.md.skipped |= (1 << uint(id))
}
case RelPolymorphic:
if _, ok := colmap[rel.Left.Col]; !ok {
cols = append(cols, &qcode.Column{Table: ti.Name, Name: rel.Left.Col, FieldName: rel.Left.Col})
colmap[rel.Left.Col] = struct{}{}
}
if _, ok := colmap[rel.Right.Table]; !ok {
cols = append(cols, &qcode.Column{Table: ti.Name, Name: rel.Right.Table, FieldName: rel.Right.Table})
colmap[rel.Right.Table] = struct{}{}
} }
default: default:
return 0, nil, fmt.Errorf("unknown relationship %s", rel) return nil, fmt.Errorf("unknown relationship %s", rel)
//skipped |= (1 << uint(id))
} }
} }
return skipped, cols, nil return cols, nil
} }
// This // This
@ -412,22 +449,30 @@ func (c *compilerContext) addSeekPredicate(sel *qcode.Select) error {
return nil return nil
} }
func (c *compilerContext) renderSelect(sel *qcode.Select, ti *DBTableInfo, vars Variables) (uint32, error) { func (c *compilerContext) renderSelect(sel *qcode.Select, ti *DBTableInfo, vars Variables) error {
var rel *DBRel var rel *DBRel
var err error var err error
// For union members the relationship is resolved between the union parent and that parent's own parent
if sel.ParentID != -1 { if sel.ParentID != -1 {
parent := c.s[sel.ParentID] if sel.Type == qcode.STMember && sel.UParentID != -1 {
cn := c.s[sel.ParentID].Name
pn := c.s[sel.UParentID].Name
rel, err = c.schema.GetRel(cn, pn)
rel, err = c.schema.GetRel(ti.Name, parent.Name) } else {
if err != nil { pn := c.s[sel.ParentID].Name
return 0, err rel, err = c.schema.GetRel(ti.Name, pn)
} }
} }
skipped, childCols, err := c.initSelect(sel, ti, vars)
if err != nil { if err != nil {
return 0, err return err
}
childCols, err := c.initSelect(sel, ti, vars)
if err != nil {
return err
} }
// SELECT // SELECT
@ -437,13 +482,13 @@ func (c *compilerContext) renderSelect(sel *qcode.Select, ti *DBTableInfo, vars
// } // }
io.WriteString(c.w, `SELECT to_jsonb("__sr_`) io.WriteString(c.w, `SELECT to_jsonb("__sr_`)
int2string(c.w, sel.ID) int32String(c.w, sel.ID)
io.WriteString(c.w, `") `) io.WriteString(c.w, `".*) `)
if sel.Paging.Type != qcode.PtOffset { if sel.Paging.Type != qcode.PtOffset {
for i := range sel.OrderBy { for i := range sel.OrderBy {
io.WriteString(c.w, `- '__cur_`) io.WriteString(c.w, `- '__cur_`)
int2string(c.w, int32(i)) int32String(c.w, int32(i))
io.WriteString(c.w, `' `) io.WriteString(c.w, `' `)
} }
} }
@ -453,15 +498,15 @@ func (c *compilerContext) renderSelect(sel *qcode.Select, ti *DBTableInfo, vars
if sel.Paging.Type != qcode.PtOffset { if sel.Paging.Type != qcode.PtOffset {
for i := range sel.OrderBy { for i := range sel.OrderBy {
io.WriteString(c.w, `, "__cur_`) io.WriteString(c.w, `, "__cur_`)
int2string(c.w, int32(i)) int32String(c.w, int32(i))
io.WriteString(c.w, `"`) io.WriteString(c.w, `"`)
} }
} }
io.WriteString(c.w, `FROM (SELECT `) io.WriteString(c.w, `FROM (SELECT `)
if err := c.renderColumns(sel, ti, skipped); err != nil { if err := c.renderColumns(sel, ti); err != nil {
return 0, err return err
} }
if sel.Paging.Type != qcode.PtOffset { if sel.Paging.Type != qcode.PtOffset {
@ -469,7 +514,7 @@ func (c *compilerContext) renderSelect(sel *qcode.Select, ti *DBTableInfo, vars
io.WriteString(c.w, `, LAST_VALUE(`) io.WriteString(c.w, `, LAST_VALUE(`)
colWithTableID(c.w, ti.Name, sel.ID, ob.Col) colWithTableID(c.w, ti.Name, sel.ID, ob.Col)
io.WriteString(c.w, `) OVER() AS "__cur_`) io.WriteString(c.w, `) OVER() AS "__cur_`)
int2string(c.w, int32(i)) int32String(c.w, int32(i))
io.WriteString(c.w, `"`) io.WriteString(c.w, `"`)
} }
} }
@ -477,9 +522,8 @@ func (c *compilerContext) renderSelect(sel *qcode.Select, ti *DBTableInfo, vars
io.WriteString(c.w, ` FROM (`) io.WriteString(c.w, ` FROM (`)
// FROM (SELECT .... ) // FROM (SELECT .... )
err = c.renderBaseSelect(sel, ti, rel, childCols, skipped) if err = c.renderBaseSelect(sel, ti, rel, childCols); err != nil {
if err != nil { return err
return skipped, err
} }
//fmt.Fprintf(w, `) AS "%s_%d"`, c.sel.Name, c.sel.ID) //fmt.Fprintf(w, `) AS "%s_%d"`, c.sel.Name, c.sel.ID)
@ -488,7 +532,7 @@ func (c *compilerContext) renderSelect(sel *qcode.Select, ti *DBTableInfo, vars
// END-FROM // END-FROM
return skipped, nil return nil
} }
func (c *compilerContext) renderLateralJoin(sel *qcode.Select) error { func (c *compilerContext) renderLateralJoin(sel *qcode.Select) error {
@ -509,41 +553,38 @@ func (c *compilerContext) renderJoin(sel *qcode.Select, ti *DBTableInfo) error {
} }
func (c *compilerContext) renderJoinByName(table, parent string, id int32) error { func (c *compilerContext) renderJoinByName(table, parent string, id int32) error {
rel, err := c.schema.GetRel(table, parent) rel, _ := c.schema.GetRel(table, parent)
if err != nil {
return err
}
// This join is only required for one-to-many relations since // This join is only required for one-to-many relations since
// these make use of join tables that need to be pulled in. // these make use of join tables that need to be pulled in.
if rel.Type != RelOneToManyThrough { if rel == nil || rel.Type != RelOneToManyThrough {
return err return nil
} }
pt, err := c.schema.GetTable(parent) // pt, err := c.schema.GetTable(parent)
if err != nil { // if err != nil {
return err // return err
} // }
//fmt.Fprintf(w, ` LEFT OUTER JOIN "%s" ON (("%s"."%s") = ("%s_%d"."%s"))`, //fmt.Fprintf(w, ` LEFT OUTER JOIN "%s" ON (("%s"."%s") = ("%s_%d"."%s"))`,
//rel.Through, rel.Through, rel.ColT, c.parent.Name, c.parent.ID, rel.Left.Col) //rel.Through, rel.Through, rel.ColT, c.parent.Name, c.parent.ID, rel.Left.Col)
io.WriteString(c.w, ` LEFT OUTER JOIN "`) io.WriteString(c.w, ` LEFT OUTER JOIN "`)
io.WriteString(c.w, rel.Through) io.WriteString(c.w, rel.Through.Table)
io.WriteString(c.w, `" ON ((`) io.WriteString(c.w, `" ON ((`)
colWithTable(c.w, rel.Through, rel.ColT) colWithTable(c.w, rel.Through.Table, rel.Through.ColL)
io.WriteString(c.w, `) = (`) io.WriteString(c.w, `) = (`)
colWithTableID(c.w, pt.Name, id, rel.Left.Col) colWithTable(c.w, rel.Left.Table, rel.Left.Col)
io.WriteString(c.w, `))`) io.WriteString(c.w, `))`)
return nil return nil
} }
func (c *compilerContext) renderColumns(sel *qcode.Select, ti *DBTableInfo, skipped uint32) error { func (c *compilerContext) renderColumns(sel *qcode.Select, ti *DBTableInfo) error {
i := 0 i := 0
var cn string var cn string
for _, col := range sel.Cols { for _, col := range sel.Cols {
if n := funcPrefixLen(col.Name); n != 0 { if n := funcPrefixLen(c.schema.fm, col.Name); n != 0 {
if !sel.Functions { if !sel.Functions {
continue continue
} }
@ -574,7 +615,7 @@ func (c *compilerContext) renderColumns(sel *qcode.Select, ti *DBTableInfo, skip
i += c.renderRemoteRelColumns(sel, ti, i) i += c.renderRemoteRelColumns(sel, ti, i)
return c.renderJoinColumns(sel, ti, skipped, i) return c.renderJoinColumns(sel, ti, i)
} }
func (c *compilerContext) renderRemoteRelColumns(sel *qcode.Select, ti *DBTableInfo, colsRendered int) int { func (c *compilerContext) renderRemoteRelColumns(sel *qcode.Select, ti *DBTableInfo, colsRendered int) int {
@ -599,12 +640,12 @@ func (c *compilerContext) renderRemoteRelColumns(sel *qcode.Select, ti *DBTableI
return i return i
} }
func (c *compilerContext) renderJoinColumns(sel *qcode.Select, ti *DBTableInfo, skipped uint32, colsRendered int) error { func (c *compilerContext) renderJoinColumns(sel *qcode.Select, ti *DBTableInfo, colsRendered int) error {
// columns previously rendered // columns previously rendered
i := colsRendered i := colsRendered
for _, id := range sel.Children { for _, id := range sel.Children {
if hasBit(skipped, uint32(id)) { if hasBit(c.md.skipped, uint32(id)) {
continue continue
} }
childSel := &c.s[id] childSel := &c.s[id]
@ -619,14 +660,37 @@ func (c *compilerContext) renderJoinColumns(sel *qcode.Select, ti *DBTableInfo,
continue continue
} }
if childSel.Type == qcode.STUnion {
rel, err := c.schema.GetRel(childSel.Name, ti.Name)
if err != nil {
return err
}
io.WriteString(c.w, `(CASE `)
for _, uid := range childSel.Children {
unionSel := &c.s[uid]
io.WriteString(c.w, `WHEN `)
colWithTableID(c.w, ti.Name, sel.ID, rel.Right.Table)
io.WriteString(c.w, ` = `)
squoted(c.w, unionSel.Name)
io.WriteString(c.w, ` THEN `)
io.WriteString(c.w, `"__sj_`) io.WriteString(c.w, `"__sj_`)
int2string(c.w, childSel.ID) int32String(c.w, unionSel.ID)
io.WriteString(c.w, `"."json"`)
}
io.WriteString(c.w, `END)`)
alias(c.w, childSel.FieldName)
} else {
io.WriteString(c.w, `"__sj_`)
int32String(c.w, childSel.ID)
io.WriteString(c.w, `"."json"`) io.WriteString(c.w, `"."json"`)
alias(c.w, childSel.FieldName) alias(c.w, childSel.FieldName)
}
if childSel.Paging.Type != qcode.PtOffset { if childSel.Paging.Type != qcode.PtOffset {
io.WriteString(c.w, `, "__sj_`) io.WriteString(c.w, `, "__sj_`)
int2string(c.w, childSel.ID) int32String(c.w, childSel.ID)
io.WriteString(c.w, `"."cursor" AS "`) io.WriteString(c.w, `"."cursor" AS "`)
io.WriteString(c.w, childSel.FieldName) io.WriteString(c.w, childSel.FieldName)
io.WriteString(c.w, `_cursor"`) io.WriteString(c.w, `_cursor"`)
@ -639,7 +703,7 @@ func (c *compilerContext) renderJoinColumns(sel *qcode.Select, ti *DBTableInfo,
} }
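The new STUnion branch turns a polymorphic child into a CASE over the discriminator column. Roughly, for a hypothetical notifications table with a subject_type column and comments/posts members (names illustrative, not from this diff), the rendered column looks like:

    (CASE WHEN "notifications_0"."subject_type" = 'comments' THEN "__sj_1"."json" WHEN "notifications_0"."subject_type" = 'posts' THEN "__sj_2"."json" END) AS "subject"

where each "__sj_N" is the lateral subquery for one union member.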
func (c *compilerContext) renderBaseSelect(sel *qcode.Select, ti *DBTableInfo, rel *DBRel, func (c *compilerContext) renderBaseSelect(sel *qcode.Select, ti *DBTableInfo, rel *DBRel,
childCols []*qcode.Column, skipped uint32) error { childCols []*qcode.Column) error {
isRoot := (rel == nil) isRoot := (rel == nil)
isFil := (sel.Where != nil && sel.Where.Op != qcode.OpNop) isFil := (sel.Where != nil && sel.Where.Op != qcode.OpNop)
hasOrder := len(sel.OrderBy) != 0 hasOrder := len(sel.OrderBy) != 0
@ -654,7 +718,7 @@ func (c *compilerContext) renderBaseSelect(sel *qcode.Select, ti *DBTableInfo, r
c.renderDistinctOn(sel, ti) c.renderDistinctOn(sel, ti)
} }
realColsRendered, isAgg, err := c.renderBaseColumns(sel, ti, childCols, skipped) realColsRendered, isAgg, err := c.renderBaseColumns(sel, ti, childCols)
if err != nil { if err != nil {
return err return err
} }
@ -677,7 +741,8 @@ func (c *compilerContext) renderBaseSelect(sel *qcode.Select, ti *DBTableInfo, r
} }
io.WriteString(c.w, ` WHERE (`) io.WriteString(c.w, ` WHERE (`)
if err := c.renderRelationship(sel, ti); err != nil {
if err := c.renderRelationship(sel, rel); err != nil {
return err return err
} }
if isFil { if isFil {
@ -706,10 +771,10 @@ func (c *compilerContext) renderBaseSelect(sel *qcode.Select, ti *DBTableInfo, r
} }
switch { switch {
case ti.Singular: case ti.IsSingular:
io.WriteString(c.w, ` LIMIT ('1') :: integer`) io.WriteString(c.w, ` LIMIT ('1') :: integer`)
case len(sel.Paging.Limit) != 0: case sel.Paging.Limit != "":
//fmt.Fprintf(w, ` LIMIT ('%s') :: integer`, c.sel.Paging.Limit) //fmt.Fprintf(w, ` LIMIT ('%s') :: integer`, c.sel.Paging.Limit)
io.WriteString(c.w, ` LIMIT ('`) io.WriteString(c.w, ` LIMIT ('`)
io.WriteString(c.w, sel.Paging.Limit) io.WriteString(c.w, sel.Paging.Limit)
@ -722,7 +787,7 @@ func (c *compilerContext) renderBaseSelect(sel *qcode.Select, ti *DBTableInfo, r
io.WriteString(c.w, ` LIMIT ('20') :: integer`) io.WriteString(c.w, ` LIMIT ('20') :: integer`)
} }
if len(sel.Paging.Offset) != 0 { if sel.Paging.Offset != "" {
//fmt.Fprintf(w, ` OFFSET ('%s') :: integer`, c.sel.Paging.Offset) //fmt.Fprintf(w, ` OFFSET ('%s') :: integer`, c.sel.Paging.Offset)
io.WriteString(c.w, ` OFFSET ('`) io.WriteString(c.w, ` OFFSET ('`)
io.WriteString(c.w, sel.Paging.Offset) io.WriteString(c.w, sel.Paging.Offset)
@ -781,30 +846,35 @@ func (c *compilerContext) renderCursorCTE(sel *qcode.Select) error {
io.WriteString(c.w, `, `) io.WriteString(c.w, `, `)
} }
io.WriteString(c.w, `a[`) io.WriteString(c.w, `a[`)
int2string(c.w, int32(i+1)) int32String(c.w, int32(i+1))
io.WriteString(c.w, `] as `) io.WriteString(c.w, `] as `)
quoted(c.w, ob.Col) quoted(c.w, ob.Col)
} }
io.WriteString(c.w, ` FROM string_to_array('{{cursor}}', ',') as a) `) io.WriteString(c.w, ` FROM string_to_array(`)
c.md.renderValueExp(c.w, Param{Name: "cursor", Type: "json"})
io.WriteString(c.w, `, ',') as a) `)
return nil return nil
} }
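With the move to parameterized queries, the cursor CTE tail now binds the cursor through a placeholder instead of a '{{cursor}}' template. Assuming the cursor binds as $1 and the select orders by created_at then id (illustrative), the fragment ends like:

    ... a[1] as "created_at", a[2] as "id" FROM string_to_array($1, ',') as a)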
func (c *compilerContext) renderRelationship(sel *qcode.Select, ti *DBTableInfo) error { func (c *compilerContext) renderRelationshipByName(table, parent string) error {
parent := c.s[sel.ParentID]
pti, err := c.schema.GetTable(parent.Name)
if err != nil {
return err
}
return c.renderRelationshipByName(ti.Name, pti.Name, parent.ID)
}
func (c *compilerContext) renderRelationshipByName(table, parent string, id int32) error {
rel, err := c.schema.GetRel(table, parent) rel, err := c.schema.GetRel(table, parent)
if err != nil { if err != nil {
return err return err
} }
return c.renderRelationship(nil, rel)
}
func (c *compilerContext) renderRelationship(sel *qcode.Select, rel *DBRel) error {
var pid int32
switch {
case sel == nil:
pid = int32(-1)
case sel.Type == qcode.STMember:
pid = sel.UParentID
default:
pid = sel.ParentID
}
io.WriteString(c.w, `((`) io.WriteString(c.w, `((`)
@ -816,19 +886,19 @@ func (c *compilerContext) renderRelationshipByName(table, parent string, id int3
switch { switch {
case !rel.Left.Array && rel.Right.Array: case !rel.Left.Array && rel.Right.Array:
colWithTable(c.w, table, rel.Left.Col) colWithTable(c.w, rel.Left.Table, rel.Left.Col)
io.WriteString(c.w, `) = any (`) io.WriteString(c.w, `) = any (`)
colWithTableID(c.w, parent, id, rel.Right.Col) colWithTableID(c.w, rel.Right.Table, pid, rel.Right.Col)
case rel.Left.Array && !rel.Right.Array: case rel.Left.Array && !rel.Right.Array:
colWithTableID(c.w, parent, id, rel.Right.Col) colWithTableID(c.w, rel.Right.Table, pid, rel.Right.Col)
io.WriteString(c.w, `) = any (`) io.WriteString(c.w, `) = any (`)
colWithTable(c.w, table, rel.Left.Col) colWithTable(c.w, rel.Left.Table, rel.Left.Col)
default: default:
colWithTable(c.w, table, rel.Left.Col) colWithTable(c.w, rel.Left.Table, rel.Left.Col)
io.WriteString(c.w, `) = (`) io.WriteString(c.w, `) = (`)
colWithTableID(c.w, parent, id, rel.Right.Col) colWithTableID(c.w, rel.Right.Table, pid, rel.Right.Col)
} }
case RelOneToManyThrough: case RelOneToManyThrough:
@ -838,25 +908,34 @@ func (c *compilerContext) renderRelationshipByName(table, parent string, id int3
switch { switch {
case !rel.Left.Array && rel.Right.Array: case !rel.Left.Array && rel.Right.Array:
colWithTable(c.w, table, rel.Left.Col) colWithTable(c.w, rel.Left.Table, rel.Left.Col)
io.WriteString(c.w, `) = any (`) io.WriteString(c.w, `) = any (`)
colWithTable(c.w, rel.Through, rel.Right.Col) colWithTable(c.w, rel.Through.Table, rel.Through.ColR)
case rel.Left.Array && !rel.Right.Array: case rel.Left.Array && !rel.Right.Array:
colWithTable(c.w, rel.Through, rel.Right.Col) colWithTable(c.w, rel.Through.Table, rel.Through.ColR)
io.WriteString(c.w, `) = any (`) io.WriteString(c.w, `) = any (`)
colWithTable(c.w, table, rel.Left.Col) colWithTable(c.w, rel.Left.Table, rel.Left.Col)
default: default:
colWithTable(c.w, table, rel.Left.Col) colWithTable(c.w, rel.Through.Table, rel.Through.ColR)
io.WriteString(c.w, `) = (`) io.WriteString(c.w, `) = (`)
colWithTable(c.w, rel.Through, rel.Right.Col) colWithTable(c.w, rel.Right.Table, rel.Right.Col)
} }
case RelEmbedded: case RelEmbedded:
colWithTable(c.w, rel.Left.Table, rel.Left.Col) colWithTable(c.w, rel.Left.Table, rel.Left.Col)
io.WriteString(c.w, `) = (`) io.WriteString(c.w, `) = (`)
colWithTableID(c.w, parent, id, rel.Left.Col) colWithTableID(c.w, rel.Left.Table, pid, rel.Left.Col)
case RelPolymorphic:
colWithTable(c.w, sel.Name, rel.Right.Col)
io.WriteString(c.w, `) = (`)
colWithTableID(c.w, rel.Left.Table, pid, rel.Left.Col)
io.WriteString(c.w, `) AND (`)
colWithTableID(c.w, rel.Left.Table, pid, rel.Right.Table)
io.WriteString(c.w, `) = (`)
squoted(c.w, sel.Name)
} }
io.WriteString(c.w, `))`) io.WriteString(c.w, `))`)
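The new RelPolymorphic arm emits both the key match and a discriminator check. Under the same illustrative notifications/subject naming as above, the rendered condition reads:

    (("comments"."id") = ("notifications_0"."subject_id") AND ("notifications_0"."subject_type") = ('comments'))

i.e. the child row must match the stored foreign key and the type column must name the child table.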
@ -921,8 +1000,6 @@ func (c *compilerContext) renderExp(ex *qcode.Exp, ti *DBTableInfo, skipNested b
st.Push('(') st.Push('(')
case qcode.OpNot: case qcode.OpNot:
//fmt.Printf("1> %s %d %s %s\n", val.Op, len(val.Children), val.Children[0].Op, val.Children[1].Op)
st.Push(val.Children[0]) st.Push(val.Children[0])
st.Push(qcode.OpNot) st.Push(qcode.OpNot)
@ -934,13 +1011,10 @@ func (c *compilerContext) renderExp(ex *qcode.Exp, ti *DBTableInfo, skipNested b
return err return err
} }
} else { } else if err := c.renderOp(val, ti); err != nil {
//fmt.Fprintf(w, `(("%s"."%s") `, c.sel.Name, val.Col)
if err := c.renderOp(val, ti); err != nil {
return err return err
} }
} }
}
//qcode.FreeExp(val) //qcode.FreeExp(val)
default: default:
@ -971,7 +1045,7 @@ func (c *compilerContext) renderNestedWhere(ex *qcode.Exp, ti *DBTableInfo) erro
io.WriteString(c.w, ` WHERE `) io.WriteString(c.w, ` WHERE `)
if err := c.renderRelationshipByName(cti.Name, ti.Name, -1); err != nil { if err := c.renderRelationshipByName(cti.Name, ti.Name); err != nil {
return err return err
} }
@ -1000,7 +1074,7 @@ func (c *compilerContext) renderOp(ex *qcode.Exp, ti *DBTableInfo) error {
return nil return nil
} }
if len(ex.Col) != 0 { if ex.Col != "" {
if col, ok = ti.ColMap[ex.Col]; !ok { if col, ok = ti.ColMap[ex.Col]; !ok {
return fmt.Errorf("no column '%s' found ", ex.Col) return fmt.Errorf("no column '%s' found ", ex.Col)
} }
@ -1028,9 +1102,9 @@ func (c *compilerContext) renderOp(ex *qcode.Exp, ti *DBTableInfo) error {
case qcode.OpLesserThan: case qcode.OpLesserThan:
io.WriteString(c.w, `<`) io.WriteString(c.w, `<`)
case qcode.OpIn: case qcode.OpIn:
io.WriteString(c.w, `IN`) io.WriteString(c.w, `= ANY`)
case qcode.OpNotIn: case qcode.OpNotIn:
io.WriteString(c.w, `NOT IN`) io.WriteString(c.w, `!= ANY`)
case qcode.OpLike: case qcode.OpLike:
io.WriteString(c.w, `LIKE`) io.WriteString(c.w, `LIKE`)
case qcode.OpNotLike: case qcode.OpNotLike:
@ -1080,12 +1154,13 @@ func (c *compilerContext) renderOp(ex *qcode.Exp, ti *DBTableInfo) error {
io.WriteString(c.w, `((`) io.WriteString(c.w, `((`)
colWithTable(c.w, ti.Name, ti.TSVCol.Name) colWithTable(c.w, ti.Name, ti.TSVCol.Name)
if c.schema.ver >= 110000 { if c.schema.ver >= 110000 {
io.WriteString(c.w, `) @@ websearch_to_tsquery('{{`) io.WriteString(c.w, `) @@ websearch_to_tsquery(`)
} else { } else {
io.WriteString(c.w, `) @@ to_tsquery('{{`) io.WriteString(c.w, `) @@ to_tsquery(`)
} }
io.WriteString(c.w, ex.Val) c.md.renderValueExp(c.w, Param{Name: ex.Val, Type: "string"})
io.WriteString(c.w, `}}'))`) io.WriteString(c.w, `))`)
return nil return nil
default: default:
@ -1171,15 +1246,25 @@ func (c *compilerContext) renderVal(ex *qcode.Exp, vars map[string]string, col *
val, ok := vars[ex.Val] val, ok := vars[ex.Val]
switch { switch {
case ok && strings.HasPrefix(val, "sql:"): case ok && strings.HasPrefix(val, "sql:"):
io.WriteString(c.w, ` (`) io.WriteString(c.w, `(`)
io.WriteString(c.w, val[4:]) c.md.RenderVar(c.w, val[4:])
io.WriteString(c.w, `)`) io.WriteString(c.w, `)`)
case ok: case ok:
squoted(c.w, val) squoted(c.w, val)
case ex.Op == qcode.OpIn || ex.Op == qcode.OpNotIn:
io.WriteString(c.w, `(ARRAY(SELECT json_array_elements_text(`)
c.md.renderValueExp(c.w, Param{Name: ex.Val, Type: col.Type, IsArray: true})
io.WriteString(c.w, `))`)
io.WriteString(c.w, ` :: `)
io.WriteString(c.w, col.Type)
io.WriteString(c.w, `[])`)
return
default: default:
io.WriteString(c.w, ` '{{`) c.md.renderValueExp(c.w, Param{Name: ex.Val, Type: col.Type, IsArray: false})
io.WriteString(c.w, ex.Val)
io.WriteString(c.w, `}}'`)
} }
case qcode.ValRef: case qcode.ValRef:
@ -1193,7 +1278,7 @@ func (c *compilerContext) renderVal(ex *qcode.Exp, vars map[string]string, col *
io.WriteString(c.w, col.Type) io.WriteString(c.w, col.Type)
} }
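Taken together with the renderOp change from IN to = ANY, a filter like products(where: { id: { in: $list } }) on a bigint id column now compiles to roughly:

    (("products"."id") = ANY (ARRAY(SELECT json_array_elements_text($1)) :: bigint[]))

so the list arrives as a single JSON parameter instead of being spliced into the SQL text.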
func funcPrefixLen(fn string) int { func funcPrefixLen(fm map[string]*DBFunction, fn string) int {
switch { switch {
case strings.HasPrefix(fn, "avg_"): case strings.HasPrefix(fn, "avg_"):
return 4 return 4
@ -1218,10 +1303,18 @@ func funcPrefixLen(fn string) int {
case strings.HasPrefix(fn, "var_samp_"): case strings.HasPrefix(fn, "var_samp_"):
return 9 return 9
} }
fnLen := len(fn)
for k := range fm {
kLen := len(k)
if kLen < fnLen && k[0] == fn[0] && strings.HasPrefix(fn, k) && fn[kLen] == '_' {
return kLen + 1
}
}
return 0 return 0
} }
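A quick illustration of the extended matching (the fm lookup is the new part; values are hypothetical):

    fm := map[string]*DBFunction{"lower": {Name: "lower"}}
    funcPrefixLen(fm, "avg_price")   // 4: fixed built-in prefix "avg_"
    funcPrefixLen(fm, "lower_email") // 6: custom function "lower" plus "_"
    funcPrefixLen(fm, "email")       // 0: plain column, no function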
func hasBit(n uint32, pos uint32) bool { func hasBit(n, pos uint32) bool {
val := n & (1 << pos) val := n & (1 << pos)
return (val > 0) return (val > 0)
} }
@ -1236,7 +1329,7 @@ func aliasWithID(w io.Writer, alias string, id int32) {
io.WriteString(w, ` AS "`) io.WriteString(w, ` AS "`)
io.WriteString(w, alias) io.WriteString(w, alias)
io.WriteString(w, `_`) io.WriteString(w, `_`)
int2string(w, id) int32String(w, id)
io.WriteString(w, `"`) io.WriteString(w, `"`)
} }
@ -1253,7 +1346,7 @@ func colWithTableID(w io.Writer, table string, id int32, col string) {
io.WriteString(w, table) io.WriteString(w, table)
if id >= 0 { if id >= 0 {
io.WriteString(w, `_`) io.WriteString(w, `_`)
int2string(w, id) int32String(w, id)
} }
io.WriteString(w, `"."`) io.WriteString(w, `"."`)
io.WriteString(w, col) io.WriteString(w, col)
@ -1272,26 +1365,6 @@ func squoted(w io.Writer, identifier string) {
io.WriteString(w, `'`) io.WriteString(w, `'`)
} }
const charset = "0123456789" func int32String(w io.Writer, val int32) {
io.WriteString(w, strconv.FormatInt(int64(val), 10))
func int2string(w io.Writer, val int32) {
if val < 10 {
w.Write([]byte{charset[val]})
return
}
temp := int32(0)
val2 := val
for val2 > 0 {
temp *= 10
temp += val2 % 10
val2 = int32(float64(val2 / 10))
}
val3 := temp
for val3 > 0 {
d := val3 % 10
val3 /= 10
w.Write([]byte{charset[d]})
}
} }
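The replacement is more than cosmetic: the removed digit-reversal loop drops trailing zeros (int2string renders 10 as "1" and 120 as "12"), while strconv.FormatInt is correct for all values. A minimal check, assuming a bytes.Buffer as the writer:

    var w bytes.Buffer
    int32String(&w, 120) // writes "120"; the old int2string wrote "12"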


@ -1,4 +1,4 @@
package psql package psql_test
import ( import (
"bytes" "bytes"
@ -32,6 +32,20 @@ func withComplexArgs(t *testing.T) {
compileGQLToPSQL(t, gql, nil, "user") compileGQLToPSQL(t, gql, nil, "user")
} }
func withWhereIn(t *testing.T) {
gql := `query {
products(where: { id: { in: $list } }) {
id
}
}`
vars := map[string]json.RawMessage{
"list": json.RawMessage(`[1,2,3]`),
}
compileGQLToPSQL(t, gql, vars, "user")
}
func withWhereAndList(t *testing.T) { func withWhereAndList(t *testing.T) {
gql := `query { gql := `query {
products( products(
@ -293,6 +307,100 @@ func multiRoot(t *testing.T) {
compileGQLToPSQL(t, gql, nil, "user") compileGQLToPSQL(t, gql, nil, "user")
} }
func withFragment1(t *testing.T) {
gql := `
fragment userFields1 on user {
id
email
}
query {
users {
...userFields2
created_at
...userFields1
}
}
fragment userFields2 on user {
first_name
last_name
}`
compileGQLToPSQL(t, gql, nil, "anon")
}
func withFragment2(t *testing.T) {
gql := `
query {
users {
...userFields2
created_at
...userFields1
}
}
fragment userFields1 on user {
id
email
}
fragment userFields2 on user {
first_name
last_name
}`
compileGQLToPSQL(t, gql, nil, "anon")
}
func withFragment3(t *testing.T) {
gql := `
fragment userFields1 on user {
id
email
}
fragment userFields2 on user {
first_name
last_name
}
query {
users {
...userFields2
created_at
...userFields1
}
}
`
compileGQLToPSQL(t, gql, nil, "anon")
}
// func withInlineFragment(t *testing.T) {
// gql := `
// query {
// users {
// ... on users {
// id
// email
// }
// created_at
// ... on user {
// first_name
// last_name
// }
// }
// }
// `
// compileGQLToPSQL(t, gql, nil, "anon")
// }
func withCursor(t *testing.T) { func withCursor(t *testing.T) {
gql := `query { gql := `query {
Products( Products(
@ -367,6 +475,7 @@ func blockedFunctions(t *testing.T) {
func TestCompileQuery(t *testing.T) { func TestCompileQuery(t *testing.T) {
t.Run("withComplexArgs", withComplexArgs) t.Run("withComplexArgs", withComplexArgs)
t.Run("withWhereIn", withWhereIn)
t.Run("withWhereAndList", withWhereAndList) t.Run("withWhereAndList", withWhereAndList)
t.Run("withWhereIsNull", withWhereIsNull) t.Run("withWhereIsNull", withWhereIsNull)
t.Run("withWhereMultiOr", withWhereMultiOr) t.Run("withWhereMultiOr", withWhereMultiOr)
@ -385,6 +494,10 @@ func TestCompileQuery(t *testing.T) {
t.Run("queryWithVariables", queryWithVariables) t.Run("queryWithVariables", queryWithVariables)
t.Run("withWhereOnRelations", withWhereOnRelations) t.Run("withWhereOnRelations", withWhereOnRelations)
t.Run("multiRoot", multiRoot) t.Run("multiRoot", multiRoot)
t.Run("withFragment1", withFragment1)
t.Run("withFragment2", withFragment2)
t.Run("withFragment3", withFragment3)
//t.Run("withInlineFragment", withInlineFragment)
t.Run("jsonColumnAsTable", jsonColumnAsTable) t.Run("jsonColumnAsTable", jsonColumnAsTable)
t.Run("withCursor", withCursor) t.Run("withCursor", withCursor)
t.Run("nullForAuthRequiredInAnon", nullForAuthRequiredInAnon) t.Run("nullForAuthRequiredInAnon", nullForAuthRequiredInAnon)
@ -429,7 +542,7 @@ func BenchmarkCompile(b *testing.B) {
b.Fatal(err) b.Fatal(err)
} }
_, err = pcompile.Compile(qc, w, nil) _, err = pcompile.Compile(w, qc, nil)
if err != nil { if err != nil {
b.Fatal(err) b.Fatal(err)
} }
@ -450,7 +563,7 @@ func BenchmarkCompileParallel(b *testing.B) {
b.Fatal(err) b.Fatal(err)
} }
_, err = pcompile.Compile(qc, w, nil) _, err = pcompile.Compile(w, qc, nil)
if err != nil { if err != nil {
b.Fatal(err) b.Fatal(err)
} }


@ -11,17 +11,21 @@ type DBSchema struct {
ver int ver int
t map[string]*DBTableInfo t map[string]*DBTableInfo
rm map[string]map[string]*DBRel rm map[string]map[string]*DBRel
vt map[string]*VirtualTable
fm map[string]*DBFunction
} }
type DBTableInfo struct { type DBTableInfo struct {
Name string Name string
Type string Type string
Singular bool IsSingular bool
Columns []DBColumn Columns []DBColumn
PrimaryCol *DBColumn PrimaryCol *DBColumn
TSVCol *DBColumn TSVCol *DBColumn
ColMap map[string]*DBColumn ColMap map[string]*DBColumn
ColIDMap map[int16]*DBColumn ColIDMap map[int16]*DBColumn
Singular string
Plural string
} }
type RelType int type RelType int
@ -30,14 +34,18 @@ const (
RelOneToOne RelType = iota + 1 RelOneToOne RelType = iota + 1
RelOneToMany RelOneToMany
RelOneToManyThrough RelOneToManyThrough
RelPolymorphic
RelEmbedded RelEmbedded
RelRemote RelRemote
) )
type DBRel struct { type DBRel struct {
Type RelType Type RelType
Through string Through struct {
ColT string Table string
ColL string
ColR string
}
Left struct { Left struct {
col *DBColumn col *DBColumn
Table string Table string
@ -54,8 +62,11 @@ type DBRel struct {
func NewDBSchema(info *DBInfo, aliases map[string][]string) (*DBSchema, error) { func NewDBSchema(info *DBInfo, aliases map[string][]string) (*DBSchema, error) {
schema := &DBSchema{ schema := &DBSchema{
ver: info.Version,
t: make(map[string]*DBTableInfo), t: make(map[string]*DBTableInfo),
rm: make(map[string]map[string]*DBRel), rm: make(map[string]map[string]*DBRel),
vt: make(map[string]*VirtualTable),
fm: make(map[string]*DBFunction, len(info.Functions)),
} }
for i, t := range info.Tables { for i, t := range info.Tables {
@ -65,6 +76,10 @@ func NewDBSchema(info *DBInfo, aliases map[string][]string) (*DBSchema, error) {
} }
} }
if err := schema.virtualRels(info.VTables); err != nil {
return nil, err
}
for i, t := range info.Tables { for i, t := range info.Tables {
err := schema.firstDegreeRels(t, info.Columns[i]) err := schema.firstDegreeRels(t, info.Columns[i])
if err != nil { if err != nil {
@ -79,6 +94,12 @@ func NewDBSchema(info *DBInfo, aliases map[string][]string) (*DBSchema, error) {
} }
} }
for k, f := range info.Functions {
if len(f.Params) == 1 {
schema.fm[strings.ToLower(f.Name)] = &info.Functions[k]
}
}
return schema, nil return schema, nil
} }
@ -89,32 +110,39 @@ func (s *DBSchema) addTable(
colidmap := make(map[int16]*DBColumn, len(cols)) colidmap := make(map[int16]*DBColumn, len(cols))
singular := flect.Singularize(t.Key) singular := flect.Singularize(t.Key)
s.t[singular] = &DBTableInfo{
Name: t.Name,
Type: t.Type,
Singular: true,
Columns: cols,
ColMap: colmap,
ColIDMap: colidmap,
}
plural := flect.Pluralize(t.Key) plural := flect.Pluralize(t.Key)
s.t[plural] = &DBTableInfo{
ts := &DBTableInfo{
Name: t.Name, Name: t.Name,
Type: t.Type, Type: t.Type,
Singular: false, IsSingular: true,
Columns: cols, Columns: cols,
ColMap: colmap, ColMap: colmap,
ColIDMap: colidmap, ColIDMap: colidmap,
Singular: singular,
Plural: plural,
} }
s.t[singular] = ts
tp := &DBTableInfo{
Name: t.Name,
Type: t.Type,
IsSingular: false,
Columns: cols,
ColMap: colmap,
ColIDMap: colidmap,
Singular: singular,
Plural: plural,
}
s.t[plural] = tp
if al, ok := aliases[t.Key]; ok { if al, ok := aliases[t.Key]; ok {
for i := range al { for i := range al {
k1 := flect.Singularize(al[i]) k1 := flect.Singularize(al[i])
s.t[k1] = s.t[singular] s.t[k1] = ts
k2 := flect.Pluralize(al[i]) k2 := flect.Pluralize(al[i])
s.t[k2] = s.t[plural] s.t[k2] = tp
} }
} }
@ -138,6 +166,54 @@ func (s *DBSchema) addTable(
return nil return nil
} }
func (s *DBSchema) virtualRels(vts []VirtualTable) error {
for _, vt := range vts {
s.vt[vt.Name] = &vt
for _, t := range s.t {
idCol, ok := t.ColMap[vt.IDColumn]
if !ok {
continue
}
if _, ok = t.ColMap[vt.TypeColumn]; !ok {
continue
}
nt := DBTable{
ID: -1,
Name: vt.Name,
Key: strings.ToLower(vt.Name),
Type: "virtual",
}
if err := s.addTable(nt, nil, nil); err != nil {
return err
}
rel := &DBRel{Type: RelPolymorphic}
rel.Left.col = idCol
rel.Left.Table = t.Name
rel.Left.Col = idCol.Name
rcol := DBColumn{
Name: vt.FKeyColumn,
Key: strings.ToLower(vt.FKeyColumn),
Type: idCol.Type,
}
rel.Right.col = &rcol
rel.Right.Table = vt.TypeColumn
rel.Right.Col = rcol.Name
if err := s.SetRel(vt.Name, t.Name, rel); err != nil {
return err
}
}
}
return nil
}
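virtualRels is driven by config-declared virtual tables; a sketch of one such declaration (field values are illustrative):

    vt := VirtualTable{
        Name:       "subject",      // exposed name of the virtual table
        IDColumn:   "subject_id",   // FK value column on referencing tables
        TypeColumn: "subject_type", // discriminator naming the concrete table
        FKeyColumn: "id",           // matching column on each concrete table
    }

Every real table carrying both subject_id and subject_type then gets a RelPolymorphic relation to "subject". One caveat worth flagging: s.vt[vt.Name] = &vt takes the address of the range variable, so prior to Go 1.22 every map entry aliases the same (last) element.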
func (s *DBSchema) firstDegreeRels(t DBTable, cols []DBColumn) error { func (s *DBSchema) firstDegreeRels(t DBTable, cols []DBColumn) error {
ct := t.Key ct := t.Key
cti, ok := s.t[ct] cti, ok := s.t[ct]
@ -148,7 +224,7 @@ func (s *DBSchema) firstDegreeRels(t DBTable, cols []DBColumn) error {
for i := range cols { for i := range cols {
c := cols[i] c := cols[i]
if len(c.FKeyTable) == 0 { if c.FKeyTable == "" {
continue continue
} }
@ -252,7 +328,7 @@ func (s *DBSchema) secondDegreeRels(t DBTable, cols []DBColumn) error {
for i := range cols { for i := range cols {
c := cols[i] c := cols[i]
if len(c.FKeyTable) == 0 { if c.FKeyTable == "" {
continue continue
} }
@ -328,16 +404,17 @@ func (s *DBSchema) updateSchemaOTMT(
// One-to-many-through relation between 1st foreign key table and the // One-to-many-through relation between 1st foreign key table and the
// 2nd foreign key table // 2nd foreign key table
rel1 := &DBRel{Type: RelOneToManyThrough} rel1 := &DBRel{Type: RelOneToManyThrough}
rel1.Through = ti.Name rel1.Through.Table = ti.Name
rel1.ColT = col2.Name rel1.Through.ColL = col1.Name
rel1.Through.ColR = col2.Name
rel1.Left.col = &col2 rel1.Left.col = fc1
rel1.Left.Table = col2.FKeyTable rel1.Left.Table = col1.FKeyTable
rel1.Left.Col = fc2.Name rel1.Left.Col = fc1.Name
rel1.Right.col = &col1 rel1.Right.col = fc2
rel1.Right.Table = ti.Name rel1.Right.Table = t2
rel1.Right.Col = col1.Name rel1.Right.Col = fc2.Name
if err := s.SetRel(t1, t2, rel1); err != nil { if err := s.SetRel(t1, t2, rel1); err != nil {
return err return err
@ -346,16 +423,17 @@ func (s *DBSchema) updateSchemaOTMT(
// One-to-many-through relation between 2nd foreign key table and the // One-to-many-through relation between 2nd foreign key table and the
// 1st foreign key table // 1st foreign key table
rel2 := &DBRel{Type: RelOneToManyThrough} rel2 := &DBRel{Type: RelOneToManyThrough}
rel2.Through = ti.Name rel2.Through.Table = ti.Name
rel2.ColT = col1.Name rel2.Through.ColL = col2.Name
rel2.Through.ColR = col1.Name
rel1.Left.col = fc1 rel2.Left.col = fc2
rel2.Left.Table = col1.FKeyTable rel2.Left.Table = col2.FKeyTable
rel2.Left.Col = fc1.Name rel2.Left.Col = fc2.Name
rel1.Right.col = &col2 rel2.Right.col = fc1
rel2.Right.Table = ti.Name rel2.Right.Table = t1
rel2.Right.Col = col2.Name rel2.Right.Col = fc1.Name
if err := s.SetRel(t2, t1, rel2); err != nil { if err := s.SetRel(t2, t1, rel2); err != nil {
return err return err
@ -364,6 +442,14 @@ func (s *DBSchema) updateSchemaOTMT(
return nil return nil
} }
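Concretely, for the purchases join table in the test fixtures (customer_id -> customers.id, product_id -> products.id), the reworked rel1 for GetRel("customers", "products") carries:

    Through.Table = "purchases"
    Through.ColL  = "customer_id" // join-table column on the child side
    Through.ColR  = "product_id"  // join-table column on the parent side
    Left:  customers.id    Right: products.id

with rel2 as the mirror image. The old code stored only a single through column (ColT), which was not enough to render both sides of the join, and as the removed lines show it also wrote into rel1's fields while building rel2.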
func (s *DBSchema) GetTableNames() []string {
var names []string
for name := range s.t {
names = append(names, name)
}
return names
}
func (s *DBSchema) GetTable(table string) (*DBTableInfo, error) { func (s *DBSchema) GetTable(table string) (*DBTableInfo, error) {
t, ok := s.t[table] t, ok := s.t[table]
if !ok { if !ok {
@ -424,3 +510,11 @@ func (s *DBSchema) GetRel(child, parent string) (*DBRel, error) {
} }
return rel, nil return rel, nil
} }
func (s *DBSchema) GetFunctions() []*DBFunction {
var funcs []*DBFunction
for _, f := range s.fm {
funcs = append(funcs, f)
}
return funcs
}


@ -14,14 +14,18 @@ func (rt RelType) String() string {
return "remote" return "remote"
case RelEmbedded: case RelEmbedded:
return "embedded" return "embedded"
case RelPolymorphic:
return "polymorphic"
} }
return "" return ""
} }
func (re *DBRel) String() string { func (re *DBRel) String() string {
if re.Type == RelOneToManyThrough { if re.Type == RelOneToManyThrough {
return fmt.Sprintf("'%s.%s' --(Through: %s)--> '%s.%s'", return fmt.Sprintf("'%s.%s' --(%s.%s, %s.%s)--> '%s.%s'",
re.Left.Table, re.Left.Col, re.Through, re.Right.Table, re.Right.Col) re.Left.Table, re.Left.Col,
re.Through.Table, re.Through.ColL, re.Through.Table, re.Through.ColR,
re.Right.Table, re.Right.Col)
} }
return fmt.Sprintf("'%s.%s' --(%s)--> '%s.%s'", return fmt.Sprintf("'%s.%s' --(%s)--> '%s.%s'",
re.Left.Table, re.Left.Col, re.Type, re.Right.Table, re.Right.Col) re.Left.Table, re.Left.Col, re.Type, re.Right.Table, re.Right.Col)


@ -13,10 +13,19 @@ type DBInfo struct {
Version int Version int
Tables []DBTable Tables []DBTable
Columns [][]DBColumn Columns [][]DBColumn
colmap map[string]map[string]*DBColumn Functions []DBFunction
VTables []VirtualTable
colMap map[string]map[string]*DBColumn
} }
func GetDBInfo(db *sql.DB) (*DBInfo, error) { type VirtualTable struct {
Name string
IDColumn string
TypeColumn string
FKeyColumn string
}
func GetDBInfo(db *sql.DB, schema string) (*DBInfo, error) {
di := &DBInfo{} di := &DBInfo{}
var version string var version string
@ -30,46 +39,61 @@ func GetDBInfo(db *sql.DB) (*DBInfo, error) {
return nil, err return nil, err
} }
di.Tables, err = GetTables(db) di.Tables, err = GetTables(db, schema)
if err != nil { if err != nil {
return nil, err return nil, err
} }
di.colmap = make(map[string]map[string]*DBColumn, len(di.Tables)) for _, t := range di.Tables {
cols, err := GetColumns(db, schema, t.Name)
for i, t := range di.Tables {
cols, err := GetColumns(db, "public", t.Name)
if err != nil { if err != nil {
return nil, err return nil, err
} }
di.Columns = append(di.Columns, cols) di.Columns = append(di.Columns, cols)
di.colmap[t.Key] = make(map[string]*DBColumn, len(cols))
for n, c := range di.Columns[i] {
di.colmap[t.Key][c.Key] = &di.Columns[i][n]
} }
di.colMap = newColMap(di.Tables, di.Columns)
di.Functions, err = GetFunctions(db, schema)
if err != nil {
return nil, err
} }
return di, nil return di, nil
} }
func newColMap(tables []DBTable, columns [][]DBColumn) map[string]map[string]*DBColumn {
cm := make(map[string]map[string]*DBColumn, len(tables))
for i, t := range tables {
cols := columns[i]
cm[t.Key] = make(map[string]*DBColumn, len(cols))
for n, c := range cols {
cm[t.Key][c.Key] = &columns[i][n]
}
}
return cm
}
func (di *DBInfo) AddTable(t DBTable, cols []DBColumn) { func (di *DBInfo) AddTable(t DBTable, cols []DBColumn) {
t.ID = di.Tables[len(di.Tables)-1].ID t.ID = di.Tables[len(di.Tables)-1].ID
di.Tables = append(di.Tables, t) di.Tables = append(di.Tables, t)
di.colmap[t.Key] = make(map[string]*DBColumn, len(cols)) di.colMap[t.Key] = make(map[string]*DBColumn, len(cols))
for i := range cols { for i := range cols {
cols[i].ID = int16(i) cols[i].ID = int16(i)
c := &cols[i] c := &cols[i]
di.colmap[t.Key][c.Key] = c di.colMap[t.Key][c.Key] = c
} }
di.Columns = append(di.Columns, cols) di.Columns = append(di.Columns, cols)
} }
func (di *DBInfo) GetColumn(table, column string) (*DBColumn, bool) { func (di *DBInfo) GetColumn(table, column string) (*DBColumn, bool) {
v, ok := di.colmap[strings.ToLower(table)][strings.ToLower(column)] v, ok := di.colMap[strings.ToLower(table)][strings.ToLower(column)]
return v, ok return v, ok
} }
@ -80,7 +104,7 @@ type DBTable struct {
Type string Type string
} }
func GetTables(db *sql.DB) ([]DBTable, error) { func GetTables(db *sql.DB, schema string) ([]DBTable, error) {
sqlStmt := ` sqlStmt := `
SELECT SELECT
c.relname as "name", c.relname as "name",
@ -92,14 +116,12 @@ SELECT
FROM pg_catalog.pg_class c FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','m','f','') WHERE c.relkind IN ('r','v','m','f','')
AND n.nspname <> ('pg_catalog') AND n.nspname = $1
AND n.nspname <> ('information_schema') AND pg_catalog.pg_table_is_visible(c.oid);`
AND n.nspname !~ ('^pg_toast')
AND pg_catalog.pg_table_is_visible(c.oid);`
var tables []DBTable var tables []DBTable
rows, err := db.Query(sqlStmt) rows, err := db.Query(sqlStmt, schema)
if err != nil { if err != nil {
return nil, fmt.Errorf("Error fetching tables: %s", err) return nil, fmt.Errorf("Error fetching tables: %s", err)
} }
@ -237,6 +259,71 @@ ORDER BY id;`
return cols, nil return cols, nil
} }
type DBFunction struct {
Name string
Params []DBFuncParam
}
type DBFuncParam struct {
ID int
Name sql.NullString
Type string
}
func GetFunctions(db *sql.DB, schema string) ([]DBFunction, error) {
sqlStmt := `
SELECT
routines.routine_name,
parameters.specific_name,
parameters.data_type,
parameters.parameter_name,
parameters.ordinal_position
FROM
information_schema.routines
RIGHT JOIN
information_schema.parameters
ON (routines.specific_name = parameters.specific_name and parameters.ordinal_position IS NOT NULL)
WHERE
routines.specific_schema = $1
ORDER BY
routines.routine_name, parameters.ordinal_position;`
rows, err := db.Query(sqlStmt, schema)
if err != nil {
return nil, fmt.Errorf("Error fetching functions: %s", err)
}
defer rows.Close()
var funcs []DBFunction
fm := make(map[string]int)
parameterIndex := 1
for rows.Next() {
var fn, fid string
fp := DBFuncParam{}
err = rows.Scan(&fn, &fid, &fp.Type, &fp.Name, &fp.ID)
if err != nil {
return nil, err
}
if !fp.Name.Valid {
fp.Name.String = string(parameterIndex)
fp.Name.Valid = true
}
if i, ok := fm[fid]; ok {
funcs[i].Params = append(funcs[i].Params, fp)
} else {
funcs = append(funcs, DBFunction{Name: fn, Params: []DBFuncParam{fp}})
fm[fid] = len(funcs) - 1
}
parameterIndex++
}
return funcs, nil
}
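One detail to flag in the parameter-name fallback above: string(parameterIndex) converts the integer to a rune (1 becomes "\x01"), not to a decimal string, and parameterIndex increments across all scanned rows rather than per function. A presumably intended version (hypothetical fix, not part of this diff) would use the parameter's own ordinal:

    if !fp.Name.Valid {
        fp.Name.String = strconv.Itoa(fp.ID) // decimal ordinal, e.g. "1"
        fp.Name.Valid = true
    }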
// func GetValType(type string) qcode.ValType { // func GetValType(type string) qcode.ValType {
// switch { // switch {
// case "bigint", "integer", "smallint", "numeric", "bigserial": // case "bigint", "integer", "smallint", "numeric", "bigserial":


@ -1,11 +1,10 @@
package psql package psql
import ( import (
"log"
"strings" "strings"
) )
func getTestSchema() *DBSchema { func GetTestDBInfo() *DBInfo {
tables := []DBTable{ tables := []DBTable{
DBTable{Name: "customers", Type: "table"}, DBTable{Name: "customers", Type: "table"},
DBTable{Name: "users", Type: "table"}, DBTable{Name: "users", Type: "table"},
@ -74,36 +73,19 @@ func getTestSchema() *DBSchema {
} }
} }
schema := &DBSchema{ return &DBInfo{
ver: 110000, Version: 110000,
t: make(map[string]*DBTableInfo), Tables: tables,
rm: make(map[string]map[string]*DBRel), Columns: columns,
Functions: []DBFunction{},
colMap: newColMap(tables, columns),
} }
}
func GetTestSchema() (*DBSchema, error) {
aliases := map[string][]string{ aliases := map[string][]string{
"users": []string{"mes"}, "users": []string{"mes"},
} }
for i, t := range tables { return NewDBSchema(GetTestDBInfo(), aliases)
err := schema.addTable(t, columns[i], aliases)
if err != nil {
log.Fatal(err)
}
}
for i, t := range tables {
err := schema.firstDegreeRels(t, columns[i])
if err != nil {
log.Fatal(err)
}
}
for i, t := range tables {
err := schema.secondDegreeRels(t, columns[i])
if err != nil {
log.Fatal(err)
}
}
return schema
} }


@ -1,26 +1,26 @@
=== RUN TestCompileInsert === RUN TestCompileInsert
=== RUN TestCompileInsert/simpleInsert === RUN TestCompileInsert/simpleInsert
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email") SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t RETURNING *) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id" FROM (SELECT "users"."id" FROM "users" LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i RETURNING *) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id" FROM (SELECT "users"."id" FROM "users" LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/singleInsert === RUN TestCompileInsert/singleInsert
WITH "_sg_input" AS (SELECT '{{insert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description", "price", "user_id") SELECT "t"."name", "t"."description", "t"."price", "t"."user_id" FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "products" AS (INSERT INTO "products" ("name", "description", "price", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'user_id' AS bigint) FROM "_sg_input" i RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/bulkInsert === RUN TestCompileInsert/bulkInsert
WITH "_sg_input" AS (SELECT '{{insert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT "t"."name", "t"."description" FROM "_sg_input" i, json_populate_recordset(NULL::products, i.j) t RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/simpleInsertWithPresets === RUN TestCompileInsert/simpleInsertWithPresets
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT "t"."name", "t"."price", 'now' :: timestamp without time zone, 'now' :: timestamp without time zone, '{{user_id}}' :: bigint FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id" FROM (SELECT "products"."id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), 'now' :: timestamp without time zone, 'now' :: timestamp without time zone, $2 :: bigint FROM "_sg_input" i RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id" FROM (SELECT "products"."id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertManyToMany === RUN TestCompileInsert/nestedInsertManyToMany
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "price") SELECT "t"."name", "t"."price" FROM "_sg_input" i, json_populate_record(NULL::products, i.j->'product') t RETURNING *), "customers" AS (INSERT INTO "customers" ("full_name", "email") SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::customers, i.j->'customer') t RETURNING *), "purchases" AS (INSERT INTO "purchases" ("sale_type", "quantity", "due_date", "customer_id", "product_id") SELECT "t"."sale_type", "t"."quantity", "t"."due_date", "customers"."id", "products"."id" FROM "_sg_input" i, "customers", "products", json_populate_record(NULL::purchases, i.j) t RETURNING *) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "products" AS (INSERT INTO "products" ("name", "price") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)) FROM "_sg_input" i RETURNING *), "customers" AS (INSERT INTO "customers" ("full_name", "email") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i RETURNING *), "purchases" AS (INSERT INTO "purchases" ("sale_type", "quantity", "due_date", "customer_id", "product_id") SELECT CAST( i.j ->>'sale_type' AS character varying), CAST( i.j ->>'quantity' AS integer), CAST( i.j ->>'due_date' AS timestamp without time zone), "customers"."id", "products"."id" FROM "_sg_input" i, "customers", "products" RETURNING *) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", 
"customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "customers" AS (INSERT INTO "customers" ("full_name", "email") SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::customers, i.j->'customer') t RETURNING *), "products" AS (INSERT INTO "products" ("name", "price") SELECT "t"."name", "t"."price" FROM "_sg_input" i, json_populate_record(NULL::products, i.j->'product') t RETURNING *), "purchases" AS (INSERT INTO "purchases" ("sale_type", "quantity", "due_date", "product_id", "customer_id") SELECT "t"."sale_type", "t"."quantity", "t"."due_date", "products"."id", "customers"."id" FROM "_sg_input" i, "products", "customers", json_populate_record(NULL::purchases, i.j) t RETURNING *) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "customers" AS (INSERT INTO "customers" ("full_name", "email") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i RETURNING *), "products" AS (INSERT INTO "products" ("name", "price") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)) FROM "_sg_input" i RETURNING *), "purchases" AS (INSERT INTO "purchases" ("sale_type", "quantity", "due_date", "product_id", "customer_id") SELECT CAST( i.j ->>'sale_type' AS character varying), CAST( i.j ->>'quantity' AS integer), CAST( i.j ->>'due_date' AS timestamp without time zone), "products"."id", "customers"."id" FROM "_sg_input" i, "products", "customers" RETURNING *) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", 
"customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToMany === RUN TestCompileInsert/nestedInsertOneToMany
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT "t"."full_name", "t"."email", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t RETURNING *), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT "t"."name", "t"."price", "t"."created_at", "t"."updated_at", "users"."id" FROM "_sg_input" i, "users", json_populate_record(NULL::products, i.j->'product') t RETURNING *) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i RETURNING *), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone), "users"."id" FROM "_sg_input" i, "users" RETURNING *) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToOne === RUN TestCompileInsert/nestedInsertOneToOne
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT "t"."full_name", "t"."email", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::users, i.j->'user') t RETURNING *), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT "t"."name", "t"."price", "t"."created_at", "t"."updated_at", "users"."id" FROM "_sg_input" i, "users", json_populate_record(NULL::products, i.j) t RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i RETURNING *), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone), "users"."id" FROM "_sg_input" i, "users" RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToManyWithConnect === RUN TestCompileInsert/nestedInsertOneToManyWithConnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT "t"."full_name", "t"."email", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t RETURNING *), "products" AS ( UPDATE "products" SET "user_id" = "users"."id" FROM "users" WHERE ("products"."id"= ((i.j->'product'->'connect'->>'id'))::bigint) RETURNING "products".*) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i RETURNING *), "products" AS ( UPDATE "products" SET "user_id" = "users"."id" FROM "users" WHERE ("products"."id"= ((i.j->'product'->'connect'->>'id'))::bigint) RETURNING "products".*) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToOneWithConnect === RUN TestCompileInsert/nestedInsertOneToOneWithConnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint LIMIT 1), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT "t"."name", "t"."price", "t"."created_at", "t"."updated_at", "_x_users"."id" FROM "_sg_input" i, "_x_users", json_populate_record(NULL::products, i.j) t RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user", "__sj_2"."json" AS "tags" FROM (SELECT "products"."id", "products"."name", "products"."user_id", "products"."tags" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_2"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "tags_2"."id" AS "id", "tags_2"."name" AS "name" FROM (SELECT "tags"."id", "tags"."name" FROM "tags" WHERE ((("tags"."slug") = any ("products_0"."tags"))) LIMIT ('20') :: integer) AS "tags_2") AS "__sr_2") AS "__sj_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint LIMIT 1), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone), "_x_users"."id" FROM "_sg_input" i, "_x_users" RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user", "__sj_2"."json" AS "tags" FROM (SELECT "products"."id", "products"."name", "products"."user_id", "products"."tags" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_2"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "tags_2"."id" AS "id", "tags_2"."name" AS "name" FROM (SELECT "tags"."id", "tags"."name" FROM "tags" WHERE ((("tags"."slug") = any ("products_0"."tags"))) LIMIT ('20') :: integer) AS "tags_2") AS "__sr_2") AS "__sj_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToOneWithConnectArray
-WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id" = ANY((select a::bigint AS list from json_array_elements_text((i.j->'user'->'connect'->>'id')::json) AS a)) LIMIT 1), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT "t"."name", "t"."price", "t"."created_at", "t"."updated_at", "_x_users"."id" FROM "_sg_input" i, "_x_users", json_populate_record(NULL::products, i.j) t RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT $1 :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id" = ANY((select a::bigint AS list from json_array_elements_text((i.j->'user'->'connect'->>'id')::json) AS a)) LIMIT 1), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone), "_x_users"."id" FROM "_sg_input" i, "_x_users" RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
---- PASS: TestCompileInsert (0.02s)
+--- PASS: TestCompileInsert (0.03s)
--- PASS: TestCompileInsert/simpleInsert (0.00s)
--- PASS: TestCompileInsert/singleInsert (0.00s)
--- PASS: TestCompileInsert/bulkInsert (0.00s)
@@ -33,13 +33,13 @@ WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id"
--- PASS: TestCompileInsert/nestedInsertOneToOneWithConnectArray (0.00s)
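The nested-insert tests above exercise mutations where a related row is connected by key instead of inserted. For context, here is a minimal sketch of how such a mutation might be issued through the super-graph core library; the connection string, schema, and variable values are assumptions for illustration, not part of the test suite:

    package main

    import (
        "context"
        "database/sql"
        "encoding/json"
        "log"

        "github.com/dosco/super-graph/core"
        _ "github.com/jackc/pgx/v4/stdlib"
    )

    func main() {
        // Assumed DSN; any Postgres with users/products tables like the test schema works.
        db, err := sql.Open("pgx", "postgres://postgres:@localhost:5432/example_db")
        if err != nil {
            log.Fatal(err)
        }

        sg, err := core.NewSuperGraph(nil, db)
        if err != nil {
            log.Fatal(err)
        }

        // Mirrors the nestedInsertOneToOneWithConnect case: insert a product
        // and connect it to an existing user by id.
        query := `mutation { product(insert: $data) { id name user { id } } }`
        vars := json.RawMessage(`{"data": {"name": "Apple", "price": 1.25, "user": {"connect": {"id": 5}}}}`)

        res, err := sg.GraphQL(context.Background(), query, vars)
        if err != nil {
            log.Fatal(err)
        }
        log.Println(string(res.Data))
    }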
=== RUN TestCompileMutate
=== RUN TestCompileMutate/singleUpsert
-WITH "_sg_input" AS (SELECT '{{upsert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT "t"."name", "t"."description" FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t RETURNING *) ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT $1 :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i RETURNING *) ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileMutate/singleUpsertWhere
-WITH "_sg_input" AS (SELECT '{{upsert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT "t"."name", "t"."description" FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t RETURNING *) ON CONFLICT (id) WHERE (("products"."price") > '3' :: numeric(7,2)) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT $1 :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i RETURNING *) ON CONFLICT (id) WHERE (("products"."price") > '3' :: numeric(7,2)) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileMutate/bulkUpsert
-WITH "_sg_input" AS (SELECT '{{upsert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT "t"."name", "t"."description" FROM "_sg_input" i, json_populate_recordset(NULL::products, i.j) t RETURNING *) ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT $1 :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i RETURNING *) ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileMutate/delete
-WITH "products" AS (DELETE FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") = '1' :: bigint)) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
+WITH "products" AS (DELETE FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") = '1' :: bigint)) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
--- PASS: TestCompileMutate (0.01s)
--- PASS: TestCompileMutate/singleUpsert (0.00s)
--- PASS: TestCompileMutate/singleUpsertWhere (0.00s)
@@ -47,55 +47,64 @@ WITH "products" AS (DELETE FROM "products" WHERE (((("products"."price") > '0' :
--- PASS: TestCompileMutate/delete (0.00s)
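The upsert cases above compile to INSERT ... ON CONFLICT DO UPDATE. A rough sketch of the calling side, under the same assumptions as the sketch earlier (sg is the initialized core instance; only the query and variables differ):

    // Mirrors the singleUpsert case: insert or update a product keyed on id.
    query := `mutation { product(upsert: $upsert) { id name } }`
    vars := json.RawMessage(`{"upsert": {"name": "Apple", "description": "Fresh fruit"}}`)
    res, err := sg.GraphQL(context.Background(), query, vars)
    if err != nil {
        log.Fatal(err)
    }
    log.Println(string(res.Data))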
=== RUN TestCompileQuery
=== RUN TestCompileQuery/withComplexArgs
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT DISTINCT ON ("products"."price") "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."id") < '28' :: bigint) AND (("products"."id") >= '20' :: bigint) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) ORDER BY "products"."price" DESC LIMIT ('30') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT DISTINCT ON ("products"."price") "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."id") < '28' :: bigint) AND (("products"."id") >= '20' :: bigint) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) ORDER BY "products"."price" DESC LIMIT ('30') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+=== RUN TestCompileQuery/withWhereIn
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id" FROM (SELECT "products"."id" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") = ANY (ARRAY(SELECT json_array_elements_text($1)) :: bigint[])))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/withWhereAndList
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."price") > '10' :: numeric(7,2)) AND NOT (("products"."id") IS NULL) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."price") > '10' :: numeric(7,2)) AND NOT (("products"."id") IS NULL) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/withWhereIsNull
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."price") > '10' :: numeric(7,2)) AND NOT (("products"."id") IS NULL) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."price") > '10' :: numeric(7,2)) AND NOT (("products"."id") IS NULL) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/withWhereMultiOr
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND ((("products"."price") < '20' :: numeric(7,2)) OR (("products"."price") > '10' :: numeric(7,2)) OR NOT (("products"."id") IS NULL)))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND ((("products"."price") < '20' :: numeric(7,2)) OR (("products"."price") > '10' :: numeric(7,2)) OR NOT (("products"."id") IS NULL)))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/fetchByID
-SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") = '{{id}}' :: bigint))) LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
+SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") = $1 :: bigint))) LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileQuery/searchQuery
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."search_rank" AS "search_rank", "products_0"."search_headline_description" AS "search_headline_description" FROM (SELECT "products"."id", "products"."name", ts_rank("products"."tsv", websearch_to_tsquery('{{query}}')) AS "search_rank", ts_headline("products"."description", websearch_to_tsquery('{{query}}')) AS "search_headline_description" FROM "products" WHERE ((("products"."tsv") @@ websearch_to_tsquery('{{query}}'))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."search_rank" AS "search_rank", "products_0"."search_headline_description" AS "search_headline_description" FROM (SELECT "products"."id", "products"."name", ts_rank("products"."tsv", websearch_to_tsquery($1)) AS "search_rank", ts_headline("products"."description", websearch_to_tsquery($1)) AS "search_headline_description" FROM "products" WHERE ((("products"."tsv") @@ websearch_to_tsquery($1))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/oneToMany
-SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."email" AS "email", "__sj_1"."json" AS "products" FROM (SELECT "users"."email", "users"."id" FROM "users" LIMIT ('20') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id")) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."email" AS "email", "__sj_1"."json" AS "products" FROM (SELECT "users"."email", "users"."id" FROM "users" LIMIT ('20') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id")) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/oneToManyReverse
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."name" AS "name", "products_0"."price" AS "price", "__sj_1"."json" AS "users" FROM (SELECT "products"."name", "products"."price", "products"."user_id" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."email" AS "email" FROM (SELECT "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('20') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name", "products_0"."price" AS "price", "__sj_1"."json" AS "users" FROM (SELECT "products"."name", "products"."price", "products"."user_id" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."email" AS "email" FROM (SELECT "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('20') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/oneToManyArray
-SELECT jsonb_build_object('tags', "__sj_0"."json", 'product', "__sj_2"."json") as "__root" FROM (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "products_2"."name" AS "name", "products_2"."price" AS "price", "__sj_3"."json" AS "tags" FROM (SELECT "products"."name", "products"."price", "products"."tags" FROM "products" LIMIT ('1') :: integer) AS "products_2" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_3"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_3") AS "json"FROM (SELECT "tags_3"."id" AS "id", "tags_3"."name" AS "name" FROM (SELECT "tags"."id", "tags"."name" FROM "tags" WHERE ((("tags"."slug") = any ("products_2"."tags"))) LIMIT ('20') :: integer) AS "tags_3") AS "__sr_3") AS "__sj_3") AS "__sj_3" ON ('true')) AS "__sr_2") AS "__sj_2", (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "tags_0"."name" AS "name", "__sj_1"."json" AS "product" FROM (SELECT "tags"."name", "tags"."slug" FROM "tags" LIMIT ('20') :: integer) AS "tags_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."name" AS "name" FROM (SELECT "products"."name" FROM "products" WHERE ((("tags_0"."slug") = any ("products"."tags"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('tags', "__sj_0"."json", 'product', "__sj_2"."json") as "__root" FROM (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "products_2"."name" AS "name", "products_2"."price" AS "price", "__sj_3"."json" AS "tags" FROM (SELECT "products"."name", "products"."price", "products"."tags" FROM "products" LIMIT ('1') :: integer) AS "products_2" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_3"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_3".*) AS "json"FROM (SELECT "tags_3"."id" AS "id", "tags_3"."name" AS "name" FROM (SELECT "tags"."id", "tags"."name" FROM "tags" WHERE ((("tags"."slug") = any ("products_2"."tags"))) LIMIT ('20') :: integer) AS "tags_3") AS "__sr_3") AS "__sj_3") AS "__sj_3" ON ('true')) AS "__sr_2") AS "__sj_2", (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "tags_0"."name" AS "name", "__sj_1"."json" AS "product" FROM (SELECT "tags"."name", "tags"."slug" FROM "tags" LIMIT ('20') :: integer) AS "tags_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."name" AS "name" FROM (SELECT "products"."name" FROM "products" WHERE ((("tags_0"."slug") = any ("products"."tags"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/manyToMany
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."name" AS "name", "__sj_1"."json" AS "customers" FROM (SELECT "products"."name", "products"."id" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "customers_1"."email" AS "email", "customers_1"."full_name" AS "full_name" FROM (SELECT "customers"."email", "customers"."full_name" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products_0"."id")) WHERE ((("customers"."id") = ("purchases"."customer_id"))) LIMIT ('20') :: integer) AS "customers_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name", "__sj_1"."json" AS "customers" FROM (SELECT "products"."name", "products"."id" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "customers_1"."email" AS "email", "customers_1"."full_name" AS "full_name" FROM (SELECT "customers"."email", "customers"."full_name" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."customer_id") = ("customers"."id")) WHERE ((("purchases"."product_id") = ("products"."id"))) LIMIT ('20') :: integer) AS "customers_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/manyToManyReverse
-SELECT jsonb_build_object('customers', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "customers_0"."email" AS "email", "customers_0"."full_name" AS "full_name", "__sj_1"."json" AS "products" FROM (SELECT "customers"."email", "customers"."full_name", "customers"."id" FROM "customers" LIMIT ('20') :: integer) AS "customers_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."name" AS "name" FROM (SELECT "products"."name" FROM "products" LEFT OUTER JOIN "purchases" ON (("purchases"."customer_id") = ("customers_0"."id")) WHERE ((("products"."id") = ("purchases"."product_id")) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('customers', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "customers_0"."email" AS "email", "customers_0"."full_name" AS "full_name", "__sj_1"."json" AS "products" FROM (SELECT "customers"."email", "customers"."full_name", "customers"."id" FROM "customers" LIMIT ('20') :: integer) AS "customers_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."name" AS "name" FROM (SELECT "products"."name" FROM "products" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products"."id")) WHERE ((("purchases"."customer_id") = ("customers"."id")) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/aggFunction
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."name" AS "name", "products_0"."count_price" AS "count_price" FROM (SELECT "products"."name", count("products"."price") AS "count_price" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name", "products_0"."count_price" AS "count_price" FROM (SELECT "products"."name", count("products"."price") AS "count_price" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/aggFunctionBlockedByCol
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."name" AS "name" FROM (SELECT "products"."name" FROM "products" GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name" FROM (SELECT "products"."name" FROM "products" GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/aggFunctionDisabled
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."name" AS "name" FROM (SELECT "products"."name" FROM "products" GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name" FROM (SELECT "products"."name" FROM "products" GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/aggFunctionWithFilter
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."max_price" AS "max_price" FROM (SELECT "products"."id", max("products"."price") AS "max_price" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") > '10' :: bigint))) GROUP BY "products"."id" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."max_price" AS "max_price" FROM (SELECT "products"."id", max("products"."price") AS "max_price" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") > '10' :: bigint))) GROUP BY "products"."id" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/syntheticTables
-SELECT jsonb_build_object('me', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT FROM (SELECT "users"."email" FROM "users" WHERE ((("users"."id") = '{{user_id}}' :: bigint)) LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
+SELECT jsonb_build_object('me', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT FROM (SELECT "users"."email" FROM "users" WHERE ((("users"."id") = $1 :: bigint)) LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileQuery/queryWithVariables
-SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE (((("products"."price") = '{{product_price}}' :: numeric(7,2)) AND (("products"."id") = '{{product_id}}' :: bigint) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
+SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE (((("products"."price") = $1 :: numeric(7,2)) AND (("products"."id") = $2 :: bigint) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileQuery/withWhereOnRelations
-SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."email" AS "email" FROM (SELECT "users"."id", "users"."email" FROM "users" WHERE (NOT EXISTS (SELECT 1 FROM products WHERE (("products"."user_id") = ("users"."id")) AND ((("products"."price") > '3' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."email" AS "email" FROM (SELECT "users"."id", "users"."email" FROM "users" WHERE (NOT EXISTS (SELECT 1 FROM products WHERE (("products"."user_id") = ("users"."id")) AND ((("products"."price") > '3' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/multiRoot
-SELECT jsonb_build_object('customer', "__sj_0"."json", 'user', "__sj_1"."json", 'product', "__sj_2"."json") as "__root" FROM (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "products_2"."id" AS "id", "products_2"."name" AS "name", "__sj_3"."json" AS "customers", "__sj_4"."json" AS "customer" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('1') :: integer) AS "products_2" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_4") AS "json"FROM (SELECT "customers_4"."email" AS "email" FROM (SELECT "customers"."email" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products_2"."id")) WHERE ((("customers"."id") = ("purchases"."customer_id"))) LIMIT ('1') :: integer) AS "customers_4") AS "__sr_4") AS "__sj_4" ON ('true') LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_3"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_3") AS "json"FROM (SELECT "customers_3"."email" AS "email" FROM (SELECT "customers"."email" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products_2"."id")) WHERE ((("customers"."id") = ("purchases"."customer_id"))) LIMIT ('20') :: integer) AS "customers_3") AS "__sr_3") AS "__sj_3") AS "__sj_3" ON ('true')) AS "__sr_2") AS "__sj_2", (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1", (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "customers_0"."id" AS "id" FROM (SELECT "customers"."id" FROM "customers" LIMIT ('1') :: integer) AS "customers_0") AS "__sr_0") AS "__sj_0"
+SELECT jsonb_build_object('customer', "__sj_0"."json", 'user', "__sj_1"."json", 'product', "__sj_2"."json") as "__root" FROM (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "products_2"."id" AS "id", "products_2"."name" AS "name", "__sj_3"."json" AS "customers", "__sj_4"."json" AS "customer" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('1') :: integer) AS "products_2" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_4".*) AS "json"FROM (SELECT "customers_4"."email" AS "email" FROM (SELECT "customers"."email" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."customer_id") = ("customers"."id")) WHERE ((("purchases"."product_id") = ("products"."id"))) LIMIT ('1') :: integer) AS "customers_4") AS "__sr_4") AS "__sj_4" ON ('true') LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_3"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_3".*) AS "json"FROM (SELECT "customers_3"."email" AS "email" FROM (SELECT "customers"."email" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."customer_id") = ("customers"."id")) WHERE ((("purchases"."product_id") = ("products"."id"))) LIMIT ('20') :: integer) AS "customers_3") AS "__sr_3") AS "__sj_3") AS "__sj_3" ON ('true')) AS "__sr_2") AS "__sj_2", (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1", (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "customers_0"."id" AS "id" FROM (SELECT "customers"."id" FROM "customers" LIMIT ('1') :: integer) AS "customers_0") AS "__sr_0") AS "__sj_0"
+=== RUN TestCompileQuery/withFragment1
+SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."first_name" AS "first_name", "users_0"."last_name" AS "last_name", "users_0"."created_at" AS "created_at", "users_0"."id" AS "id", "users_0"."email" AS "email" FROM (SELECT , "users"."created_at", "users"."id", "users"."email" FROM "users" GROUP BY "users"."created_at", "users"."id", "users"."email" LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+=== RUN TestCompileQuery/withFragment2
+SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."first_name" AS "first_name", "users_0"."last_name" AS "last_name", "users_0"."created_at" AS "created_at", "users_0"."id" AS "id", "users_0"."email" AS "email" FROM (SELECT , "users"."created_at", "users"."id", "users"."email" FROM "users" GROUP BY "users"."created_at", "users"."id", "users"."email" LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+=== RUN TestCompileQuery/withFragment3
+SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."first_name" AS "first_name", "users_0"."last_name" AS "last_name", "users_0"."created_at" AS "created_at", "users_0"."id" AS "id", "users_0"."email" AS "email" FROM (SELECT , "users"."created_at", "users"."id", "users"."email" FROM "users" GROUP BY "users"."created_at", "users"."id", "users"."email" LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/jsonColumnAsTable
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "tag_count" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "tag_count_1"."count" AS "count", "__sj_2"."json" AS "tags" FROM (SELECT "tag_count"."count", "tag_count"."tag_id" FROM "products", json_to_recordset("products"."tag_count") AS "tag_count"(tag_id bigint, count int) WHERE ((("products"."id") = ("products_0"."id"))) LIMIT ('1') :: integer) AS "tag_count_1" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_2"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "tags_2"."name" AS "name" FROM (SELECT "tags"."name" FROM "tags" WHERE ((("tags"."id") = ("tag_count_1"."tag_id"))) LIMIT ('20') :: integer) AS "tags_2") AS "__sr_2") AS "__sj_2") AS "__sj_2" ON ('true')) AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "tag_count" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "tag_count_1"."count" AS "count", "__sj_2"."json" AS "tags" FROM (SELECT "tag_count"."count", "tag_count"."tag_id" FROM "products", json_to_recordset("products"."tag_count") AS "tag_count"(tag_id bigint, count int) WHERE ((("products"."id") = ("products_0"."id"))) LIMIT ('1') :: integer) AS "tag_count_1" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_2"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "tags_2"."name" AS "name" FROM (SELECT "tags"."name" FROM "tags" WHERE ((("tags"."id") = ("tag_count_1"."tag_id"))) LIMIT ('20') :: integer) AS "tags_2") AS "__sr_2") AS "__sj_2") AS "__sj_2" ON ('true')) AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/withCursor
-SELECT jsonb_build_object('products', "__sj_0"."json", 'products_cursor', "__sj_0"."cursor") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json", CONCAT_WS(',', max("__cur_0"), max("__cur_1")) as "cursor" FROM (SELECT to_jsonb("__sr_0") - '__cur_0' - '__cur_1' AS "json", "__cur_0", "__cur_1"FROM (SELECT "products_0"."name" AS "name", LAST_VALUE("products_0"."price") OVER() AS "__cur_0", LAST_VALUE("products_0"."id") OVER() AS "__cur_1" FROM (WITH "__cur" AS (SELECT a[1] as "price", a[2] as "id" FROM string_to_array('{{cursor}}', ',') as a) SELECT "products"."name", "products"."id", "products"."price" FROM "products", "__cur" WHERE (((("products"."price") < "__cur"."price" :: numeric(7,2)) OR ((("products"."price") = "__cur"."price" :: numeric(7,2)) AND (("products"."id") > "__cur"."id" :: bigint)))) ORDER BY "products"."price" DESC, "products"."id" ASC LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json", 'products_cursor', "__sj_0"."cursor") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json", CONCAT_WS(',', max("__cur_0"), max("__cur_1")) as "cursor" FROM (SELECT to_jsonb("__sr_0".*) - '__cur_0' - '__cur_1' AS "json", "__cur_0", "__cur_1"FROM (SELECT "products_0"."name" AS "name", LAST_VALUE("products_0"."price") OVER() AS "__cur_0", LAST_VALUE("products_0"."id") OVER() AS "__cur_1" FROM (WITH "__cur" AS (SELECT a[1] as "price", a[2] as "id" FROM string_to_array($1, ',') as a) SELECT "products"."name", "products"."id", "products"."price" FROM "products", "__cur" WHERE (((("products"."price") < "__cur"."price" :: numeric(7,2)) OR ((("products"."price") = "__cur"."price" :: numeric(7,2)) AND (("products"."id") > "__cur"."id" :: bigint)))) ORDER BY "products"."price" DESC, "products"."id" ASC LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/nullForAuthRequiredInAnon
-SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", NULL AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", NULL AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/blockedQuery
-SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE (false) LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
+SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE (false) LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileQuery/blockedFunctions
-SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."email" AS "email" FROM (SELECT , "users"."email" FROM "users" WHERE (false) GROUP BY "users"."email" LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
+SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."email" AS "email" FROM (SELECT , "users"."email" FROM "users" WHERE (false) GROUP BY "users"."email" LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
---- PASS: TestCompileQuery (0.02s)
+--- PASS: TestCompileQuery (0.03s)
--- PASS: TestCompileQuery/withComplexArgs (0.00s)
+--- PASS: TestCompileQuery/withWhereIn (0.00s)
--- PASS: TestCompileQuery/withWhereAndList (0.00s)
--- PASS: TestCompileQuery/withWhereIsNull (0.00s)
--- PASS: TestCompileQuery/withWhereMultiOr (0.00s)
@@ -114,6 +123,9 @@ SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coa
--- PASS: TestCompileQuery/queryWithVariables (0.00s)
--- PASS: TestCompileQuery/withWhereOnRelations (0.00s)
--- PASS: TestCompileQuery/multiRoot (0.00s)
+--- PASS: TestCompileQuery/withFragment1 (0.00s)
+--- PASS: TestCompileQuery/withFragment2 (0.00s)
+--- PASS: TestCompileQuery/withFragment3 (0.00s)
--- PASS: TestCompileQuery/jsonColumnAsTable (0.00s)
--- PASS: TestCompileQuery/withCursor (0.00s)
--- PASS: TestCompileQuery/nullForAuthRequiredInAnon (0.00s)
@@ -121,23 +133,23 @@ SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coa
--- PASS: TestCompileQuery/blockedFunctions (0.00s)
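The recurring change throughout this diff is that quoted template literals like '{{id}}' have been replaced by bind placeholders ($1, $2, ...), which is what lets the compiled SQL be prepared once and then executed with bound arguments per request. A minimal stand-alone illustration of that pattern with Go's database/sql follows; the DSN and the simplified query are assumptions for illustration, not output of the compiler:

    package main

    import (
        "context"
        "database/sql"
        "log"

        _ "github.com/jackc/pgx/v4/stdlib"
    )

    func main() {
        ctx := context.Background()
        db, err := sql.Open("pgx", "postgres://postgres:@localhost:5432/example_db")
        if err != nil {
            log.Fatal(err)
        }

        // Simplified stand-in for a compiled query such as fetchByID above:
        // the '{{id}}' literal has become the bind parameter $1.
        compiled := `SELECT jsonb_build_object('id', id, 'name', name) FROM products WHERE id = $1 LIMIT 1`

        stmt, err := db.PrepareContext(ctx, compiled) // prepare once, reuse per request
        if err != nil {
            log.Fatal(err)
        }
        defer stmt.Close()

        var root []byte
        if err := stmt.QueryRowContext(ctx, 3).Scan(&root); err != nil { // binds $1 = 3
            log.Fatal(err)
        }
        log.Println(string(root))
    }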
=== RUN TestCompileUpdate
=== RUN TestCompileUpdate/singleUpdate
-WITH "_sg_input" AS (SELECT '{{update}}' :: json AS j), "products" AS (UPDATE "products" SET ("name", "description") = (SELECT "t"."name", "t"."description" FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t) WHERE ((("products"."id") = '1' :: bigint) AND (("products"."id") = '{{id}}' :: bigint)) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT $1 :: json AS j), "products" AS (UPDATE "products" SET ("name", "description") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i) WHERE ((("products"."id") = '1' :: bigint) AND (("products"."id") = $2 :: bigint)) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/simpleUpdateWithPresets
-WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (UPDATE "products" SET ("name", "price", "updated_at") = (SELECT "t"."name", "t"."price", 'now' :: timestamp without time zone FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t) WHERE (("products"."user_id") = '{{user_id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id" FROM (SELECT "products"."id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
+WITH "_sg_input" AS (SELECT $1 :: json AS j), "products" AS (UPDATE "products" SET ("name", "price", "updated_at") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), 'now' :: timestamp without time zone FROM "_sg_input" i) WHERE (("products"."user_id") = $2 :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id" FROM (SELECT "products"."id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateManyToMany
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "purchases" AS (UPDATE "purchases" SET ("sale_type", "quantity", "due_date") = (SELECT "t"."sale_type", "t"."quantity", "t"."due_date" FROM "_sg_input" i, json_populate_record(NULL::purchases, i.j) t) WHERE (("purchases"."id") = '{{id}}' :: bigint) RETURNING "purchases".*), "products" AS (UPDATE "products" SET ("name", "price") = (SELECT "t"."name", "t"."price" FROM "_sg_input" i, json_populate_record(NULL::products, i.j->'product') t) FROM "purchases" WHERE (("products"."id") = ("purchases"."product_id")) RETURNING "products".*), "customers" AS (UPDATE "customers" SET ("full_name", "email") = (SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::customers, i.j->'customer') t) FROM "purchases" WHERE (("customers"."id") = ("purchases"."customer_id")) RETURNING "customers".*) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "purchases" AS (UPDATE "purchases" SET ("sale_type", "quantity", "due_date") = (SELECT CAST( i.j ->>'sale_type' AS character varying), CAST( i.j ->>'quantity' AS integer), CAST( i.j ->>'due_date' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("purchases"."id") = $2 :: bigint) RETURNING "purchases".*), "products" AS (UPDATE "products" SET ("name", "price") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)) FROM "_sg_input" i) FROM "purchases" WHERE (("products"."id") = ("purchases"."product_id")) RETURNING "products".*), "customers" AS (UPDATE "customers" SET ("full_name", "email") = (SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i) FROM "purchases" WHERE (("customers"."id") = ("purchases"."customer_id")) RETURNING "customers".*) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" 
LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "purchases" AS (UPDATE "purchases" SET ("sale_type", "quantity", "due_date") = (SELECT "t"."sale_type", "t"."quantity", "t"."due_date" FROM "_sg_input" i, json_populate_record(NULL::purchases, i.j) t) WHERE (("purchases"."id") = '{{id}}' :: bigint) RETURNING "purchases".*), "customers" AS (UPDATE "customers" SET ("full_name", "email") = (SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::customers, i.j->'customer') t) FROM "purchases" WHERE (("customers"."id") = ("purchases"."customer_id")) RETURNING "customers".*), "products" AS (UPDATE "products" SET ("name", "price") = (SELECT "t"."name", "t"."price" FROM "_sg_input" i, json_populate_record(NULL::products, i.j->'product') t) FROM "purchases" WHERE (("products"."id") = ("purchases"."product_id")) RETURNING "products".*) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2") AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "purchases" AS (UPDATE "purchases" SET ("sale_type", "quantity", "due_date") = (SELECT CAST( i.j ->>'sale_type' AS character varying), CAST( i.j ->>'quantity' AS integer), CAST( i.j ->>'due_date' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("purchases"."id") = $2 :: bigint) RETURNING "purchases".*), "customers" AS (UPDATE "customers" SET ("full_name", "email") = (SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i) FROM "purchases" WHERE (("customers"."id") = ("purchases"."customer_id")) RETURNING "customers".*), "products" AS (UPDATE "products" SET ("name", "price") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)) FROM "_sg_input" i) FROM "purchases" WHERE (("products"."id") = ("purchases"."product_id")) RETURNING "products".*) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" 
LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateOneToMany === RUN TestCompileUpdate/nestedUpdateOneToMany
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (UPDATE "users" SET ("full_name", "email", "created_at", "updated_at") = (SELECT "t"."full_name", "t"."email", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t) WHERE (("users"."id") = '8' :: bigint) RETURNING "users".*), "products" AS (UPDATE "products" SET ("name", "price", "created_at", "updated_at") = (SELECT "t"."name", "t"."price", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::products, i.j->'product') t) FROM "users" WHERE (("products"."user_id") = ("users"."id") AND "products"."id"= ((i.j->'product'->'where'->>'id'))::bigint) RETURNING "products".*) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "users" AS (UPDATE "users" SET ("full_name", "email", "created_at", "updated_at") = (SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("users"."id") = '8' :: bigint) RETURNING "users".*), "products" AS (UPDATE "products" SET ("name", "price", "created_at", "updated_at") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i) FROM "users" WHERE (("products"."user_id") = ("users"."id") AND "products"."id"= ((i.j->'product'->'where'->>'id'))::bigint) RETURNING "products".*) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateOneToOne === RUN TestCompileUpdate/nestedUpdateOneToOne
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (UPDATE "products" SET ("name", "price", "created_at", "updated_at") = (SELECT "t"."name", "t"."price", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::products, i.j) t) WHERE (("products"."id") = '{{id}}' :: bigint) RETURNING "products".*), "users" AS (UPDATE "users" SET ("email") = (SELECT "t"."email" FROM "_sg_input" i, json_populate_record(NULL::users, i.j->'user') t) FROM "products" WHERE (("users"."id") = ("products"."user_id")) RETURNING "users".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "products" AS (UPDATE "products" SET ("name", "price", "created_at", "updated_at") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("products"."id") = $2 :: bigint) RETURNING "products".*), "users" AS (UPDATE "users" SET ("email") = (SELECT CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i) FROM "products" WHERE (("users"."id") = ("products"."user_id")) RETURNING "users".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateOneToManyWithConnect === RUN TestCompileUpdate/nestedUpdateOneToManyWithConnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (UPDATE "users" SET ("full_name", "email", "created_at", "updated_at") = (SELECT "t"."full_name", "t"."email", "t"."created_at", "t"."updated_at" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t) WHERE (("users"."id") = '{{id}}' :: bigint) RETURNING "users".*), "products_c" AS ( UPDATE "products" SET "user_id" = "users"."id" FROM "users" WHERE ("products"."id"= ((i.j->'product'->'connect'->>'id'))::bigint) RETURNING "products".*), "products_d" AS ( UPDATE "products" SET "user_id" = NULL FROM "users" WHERE ("products"."id"= ((i.j->'product'->'disconnect'->>'id'))::bigint) RETURNING "products".*), "products" AS (SELECT * FROM "products_c" UNION ALL SELECT * FROM "products_d") SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "users" AS (UPDATE "users" SET ("full_name", "email", "created_at", "updated_at") = (SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("users"."id") = $2 :: bigint) RETURNING "users".*), "products_c" AS ( UPDATE "products" SET "user_id" = "users"."id" FROM "users" WHERE ("products"."id"= ((i.j->'product'->'connect'->>'id'))::bigint) RETURNING "products".*), "products_d" AS ( UPDATE "products" SET "user_id" = NULL FROM "users" WHERE ("products"."id"= ((i.j->'product'->'disconnect'->>'id'))::bigint) RETURNING "products".*), "products" AS (SELECT * FROM "products_c" UNION ALL SELECT * FROM "products_d") SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateOneToOneWithConnect === RUN TestCompileUpdate/nestedUpdateOneToOneWithConnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint AND "users"."email"= ((i.j->'user'->'connect'->>'email'))::character varying LIMIT 1), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT "t"."name", "t"."price", "_x_users"."id" FROM "_sg_input" i, "_x_users", json_populate_record(NULL::products, i.j) t) WHERE (("products"."id") = '{{product_id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."email"= ((i.j->'user'->'connect'->>'email'))::character varying AND "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint LIMIT 1), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), "_x_users"."id" FROM "_sg_input" i, "_x_users") WHERE (("products"."id") = $2 :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."email"= ((i.j->'user'->'connect'->>'email'))::character varying AND "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint LIMIT 1), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT "t"."name", "t"."price", "_x_users"."id" FROM "_sg_input" i, "_x_users", json_populate_record(NULL::products, i.j) t) WHERE (("products"."id") = '{{product_id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1") AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint AND "users"."email"= ((i.j->'user'->'connect'->>'email'))::character varying LIMIT 1), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), "_x_users"."id" FROM "_sg_input" i, "_x_users") WHERE (("products"."id") = $2 :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateOneToOneWithDisconnect === RUN TestCompileUpdate/nestedUpdateOneToOneWithDisconnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT * FROM (VALUES(NULL::bigint)) AS LOOKUP("id")), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT "t"."name", "t"."price", "_x_users"."id" FROM "_sg_input" i, "_x_users", json_populate_record(NULL::products, i.j) t) WHERE (("products"."id") = '{{id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0") AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."user_id" AS "user_id" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0" WITH "_sg_input" AS (SELECT $1 :: json AS j), "_x_users" AS (SELECT * FROM (VALUES(NULL::bigint)) AS LOOKUP("id")), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), "_x_users"."id" FROM "_sg_input" i, "_x_users") WHERE (("products"."id") = $2 :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."user_id" AS "user_id" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
--- PASS: TestCompileUpdate (0.02s) --- PASS: TestCompileUpdate (0.02s)
--- PASS: TestCompileUpdate/singleUpdate (0.00s) --- PASS: TestCompileUpdate/singleUpdate (0.00s)
--- PASS: TestCompileUpdate/simpleUpdateWithPresets (0.00s) --- PASS: TestCompileUpdate/simpleUpdateWithPresets (0.00s)
@ -148,4 +160,4 @@ WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT * FR
--- PASS: TestCompileUpdate/nestedUpdateOneToOneWithConnect (0.00s) --- PASS: TestCompileUpdate/nestedUpdateOneToOneWithConnect (0.00s)
--- PASS: TestCompileUpdate/nestedUpdateOneToOneWithDisconnect (0.00s) --- PASS: TestCompileUpdate/nestedUpdateOneToOneWithDisconnect (0.00s)
PASS PASS
ok github.com/dosco/super-graph/core/internal/psql 0.320s ok github.com/dosco/super-graph/core/internal/psql 0.323s
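For orientation, the connect/disconnect golden outputs above correspond to a nested update mutation of roughly this shape (a hypothetical reconstruction from the generated SQL, not the literal test fixture; the field and variable names follow the test schema):

    gql := `mutation {
        product(update: $data, id: $product_id) {
            id
            name
            user { id full_name email }
        }
    }`
    vars := map[string]json.RawMessage{
        "data": json.RawMessage(`{
            "name":  "Apple",
            "price": 1.25,
            "user":  { "connect": { "id": 5, "email": "one@test.com" } }
        }`),
    }

The "connect" key compiles into the "_x_users" lookup CTE, while a "disconnect" key becomes the VALUES(NULL::bigint) lookup seen in the last test above.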

View File

@ -10,8 +10,8 @@ import (
"github.com/dosco/super-graph/core/internal/util" "github.com/dosco/super-graph/core/internal/util"
) )
func (c *compilerContext) renderUpdate(qc *qcode.QCode, w io.Writer, func (c *compilerContext) renderUpdate(
vars Variables, ti *DBTableInfo) (uint32, error) { w io.Writer, qc *qcode.QCode, vars Variables, ti *DBTableInfo) (uint32, error) {
update, ok := vars[qc.ActionVar] update, ok := vars[qc.ActionVar]
if !ok { if !ok {
@ -21,9 +21,10 @@ func (c *compilerContext) renderUpdate(qc *qcode.QCode, w io.Writer,
return 0, fmt.Errorf("variable '%s' is empty", qc.ActionVar) return 0, fmt.Errorf("variable '%s' is empty", qc.ActionVar)
} }
io.WriteString(c.w, `WITH "_sg_input" AS (SELECT '{{`) io.WriteString(c.w, `WITH "_sg_input" AS (SELECT `)
io.WriteString(c.w, qc.ActionVar) c.md.renderValueExp(c.w, Param{Name: qc.ActionVar, Type: "json"})
io.WriteString(c.w, `}}' :: json AS j)`) // io.WriteString(c.w, qc.ActionVar)
io.WriteString(c.w, ` :: json AS j)`)
st := util.NewStack() st := util.NewStack()
st.Push(kvitem{_type: itemUpdate, key: ti.Name, val: update, ti: ti}) st.Push(kvitem{_type: itemUpdate, key: ti.Name, val: update, ti: ti})
@ -84,32 +85,16 @@ func (c *compilerContext) renderUpdateStmt(w io.Writer, qc *qcode.QCode, item re
io.WriteString(w, `UPDATE `) io.WriteString(w, `UPDATE `)
quoted(w, ti.Name) quoted(w, ti.Name)
io.WriteString(w, ` SET (`) io.WriteString(w, ` SET (`)
renderInsertUpdateColumns(w, qc, jt, ti, sk, false) c.renderInsertUpdateColumns(qc, jt, ti, sk, false)
renderNestedUpdateRelColumns(w, item.kvitem, false) renderNestedUpdateRelColumns(w, item.kvitem, false)
io.WriteString(w, `) = (SELECT `) io.WriteString(w, `) = (SELECT `)
renderInsertUpdateColumns(w, qc, jt, ti, sk, true) c.renderInsertUpdateColumns(qc, jt, ti, sk, true)
renderNestedUpdateRelColumns(w, item.kvitem, true) renderNestedUpdateRelColumns(w, item.kvitem, true)
io.WriteString(w, ` FROM "_sg_input" i, `) io.WriteString(w, ` FROM "_sg_input" i`)
renderNestedUpdateRelTables(w, item.kvitem) renderNestedUpdateRelTables(w, item.kvitem)
io.WriteString(w, `) `)
if item.array {
io.WriteString(w, `json_populate_recordset`)
} else {
io.WriteString(w, `json_populate_record`)
}
io.WriteString(w, `(NULL::`)
io.WriteString(w, ti.Name)
if len(item.path) == 0 {
io.WriteString(w, `, i.j) t)`)
} else {
io.WriteString(w, `, i.j->`)
joinPath(w, item.path)
io.WriteString(w, `) t) `)
}
if item.id != 0 { if item.id != 0 {
// Render sql to set id values if child-to-parent // Render sql to set id values if child-to-parent
@ -136,8 +121,8 @@ func (c *compilerContext) renderUpdateStmt(w io.Writer, qc *qcode.QCode, item re
} }
io.WriteString(w, `)`) io.WriteString(w, `)`)
} else { } else if qc.Selects[0].Where != nil {
io.WriteString(w, ` WHERE `) io.WriteString(w, `WHERE `)
if err := c.renderWhere(&qc.Selects[0], ti); err != nil { if err := c.renderWhere(&qc.Selects[0], ti); err != nil {
return err return err
} }
@ -202,17 +187,18 @@ func renderNestedUpdateRelTables(w io.Writer, item kvitem) error {
// relationship is one-to-many // relationship is one-to-many
for _, v := range item.items { for _, v := range item.items {
if v._ctype > 0 && v.relCP.Type == RelOneToMany { if v._ctype > 0 && v.relCP.Type == RelOneToMany {
io.WriteString(w, `"_x_`) io.WriteString(w, `, "_x_`)
io.WriteString(w, v.relCP.Left.Table) io.WriteString(w, v.relCP.Left.Table)
io.WriteString(w, `", `) io.WriteString(w, `"`)
} }
} }
return nil return nil
} }
func (c *compilerContext) renderDelete(qc *qcode.QCode, w io.Writer, func (c *compilerContext) renderDelete(
vars Variables, ti *DBTableInfo) (uint32, error) { w io.Writer, qc *qcode.QCode, vars Variables, ti *DBTableInfo) (uint32, error) {
root := &qc.Selects[0] root := &qc.Selects[0]
io.WriteString(c.w, `WITH `) io.WriteString(c.w, `WITH `)
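The substantive change in this file is that the mutation input is no longer spliced in as a '{{name}}' template literal but rendered as a numbered placeholder via renderValueExp, which is what lets query preparation become a JIT step. A minimal sketch of what such a renderer might do (hypothetical: Param mirrors the diff, the metadata bookkeeping is assumed):

    import (
        "fmt"
        "io"
    )

    // Param names a query parameter and its SQL type.
    type Param struct {
        Name string
        Type string
    }

    // metadata assigns each named parameter a stable ordinal so repeated
    // references reuse the same $N placeholder.
    type metadata struct {
        params map[string]int
    }

    func (md *metadata) renderValueExp(w io.Writer, p Param) {
        if md.params == nil {
            md.params = make(map[string]int)
        }
        n, ok := md.params[p.Name]
        if !ok {
            n = len(md.params) + 1
            md.params[p.Name] = n
        }
        fmt.Fprintf(w, "$%d", n) // e.g. $1 for 'data', $2 for 'id' in the goldens above
    }

The compiled statement can then be prepared once and executed with positional arguments instead of re-interpolating a template on every request.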

View File

@ -1,4 +1,4 @@
package psql package psql_test
import ( import (
"encoding/json" "encoding/json"
@ -223,7 +223,7 @@ func nestedUpdateOneToOneWithDisconnect(t *testing.T) {
// } // }
// }` // }`
// sql := `WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (SELECT * FROM (VALUES(NULL::bigint)) AS LOOKUP("id")), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT "t"."name", "t"."price", "users"."id" FROM "_sg_input" i, "users", json_populate_record(NULL::products, i.j) t) WHERE (("products"."id") = 2) RETURNING "products".*) SELECT json_object_agg('product', json_0) FROM (SELECT row_to_json((SELECT "json_row_0" FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."user_id" AS "user_id") AS "json_row_0")) AS "json_0" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LIMIT ('1') :: integer) AS "sel_0"` // sql := `WITH "_sg_input" AS (SELECT $1 :: json AS j), "users" AS (SELECT * FROM (VALUES(NULL::bigint)) AS LOOKUP("id")), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT "t"."name", "t"."price", "users"."id" FROM "_sg_input" i, "users", json_populate_record(NULL::products, i.j) t) WHERE (("products"."id") = 2) RETURNING "products".*) SELECT json_object_agg('product', json_0) FROM (SELECT row_to_json((SELECT "json_row_0" FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."user_id" AS "user_id") AS "json_row_0")) AS "json_0" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LIMIT ('1') :: integer) AS "sel_0"`
// vars := map[string]json.RawMessage{ // vars := map[string]json.RawMessage{
// "data": json.RawMessage(`{ // "data": json.RawMessage(`{

View File

@ -1,13 +0,0 @@
package psql
import "regexp"
func NewVariables(varlist map[string]string) map[string]string {
re := regexp.MustCompile(`(?mi)\$([a-zA-Z0-9_.]+)`)
vars := make(map[string]string, len(varlist))
for k, v := range varlist {
vars[k] = re.ReplaceAllString(v, `{{$1}}`)
}
return vars
}
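This deleted helper was the template-variable shim: it rewrote $name references inside defined variables into {{name}} placeholders for later string substitution. A small usage illustration (the variable name and SQL are invented):

    vars := NewVariables(map[string]string{
        "admin_account_id": "select id from accounts where admin = true and id = $account_id",
    })
    // vars["admin_account_id"] ==
    //   "select id from accounts where admin = true and id = {{account_id}}"

With parameters now rendered directly as $N placeholders, this rewrite pass is no longer needed.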

View File

@ -0,0 +1,11 @@
goos: darwin
goarch: amd64
pkg: github.com/dosco/super-graph/core/internal/qcode
BenchmarkQCompile-16 120888 9236 ns/op 3755 B/op 28 allocs/op
BenchmarkQCompileP-16 502248 2620 ns/op 3795 B/op 28 allocs/op
BenchmarkParse-16 128370 9294 ns/op 3902 B/op 18 allocs/op
BenchmarkParseP-16 575752 2340 ns/op 3903 B/op 18 allocs/op
BenchmarkSchemaParse-16 212048 5779 ns/op 3968 B/op 57 allocs/op
BenchmarkSchemaParseP-16 630918 1686 ns/op 3968 B/op 57 allocs/op
PASS
ok github.com/dosco/super-graph/core/internal/qcode 7.710s

View File

@ -0,0 +1,13 @@
goos: darwin
goarch: amd64
pkg: github.com/dosco/super-graph/core/internal/qcode
BenchmarkQCompile-16 118282 9686 ns/op 4031 B/op 30 allocs/op
BenchmarkQCompileP-16 427531 2710 ns/op 4077 B/op 30 allocs/op
BenchmarkQCompileFragment-16 140588 8328 ns/op 8903 B/op 13 allocs/op
BenchmarkParse-16 131396 9212 ns/op 4175 B/op 18 allocs/op
BenchmarkParseP-16 503778 2310 ns/op 4176 B/op 18 allocs/op
BenchmarkParseFragment-16 143725 8158 ns/op 10193 B/op 9 allocs/op
BenchmarkSchemaParse-16 240609 5060 ns/op 3968 B/op 57 allocs/op
BenchmarkSchemaParseP-16 785116 1534 ns/op 3968 B/op 57 allocs/op
PASS
ok github.com/dosco/super-graph/core/internal/qcode 11.092s

View File

@ -0,0 +1,17 @@
goos: darwin
goarch: amd64
pkg: github.com/dosco/super-graph/core/internal/qcode
BenchmarkQCompile
BenchmarkQCompile-16 129614 8649 ns/op 3756 B/op 28 allocs/op
BenchmarkQCompileP
BenchmarkQCompileP-16 487488 2525 ns/op 3792 B/op 28 allocs/op
BenchmarkParse
BenchmarkParse-16 127582 8731 ns/op 3902 B/op 18 allocs/op
BenchmarkParseP
BenchmarkParseP-16 561373 2223 ns/op 3903 B/op 18 allocs/op
BenchmarkSchemaParse
BenchmarkSchemaParse-16 209142 5523 ns/op 3968 B/op 57 allocs/op
BenchmarkSchemaParseP
BenchmarkSchemaParseP-16 716437 1734 ns/op 3968 B/op 57 allocs/op
PASS
ok github.com/dosco/super-graph/core/internal/qcode 8.483s
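All three listings are standard Go benchmark output; comparable runs can be reproduced with something like:

    go test -bench . -benchmem ./core/internal/qcode

(The third listing, with each benchmark name printed on its own line before its result, is the verbose format, e.g. go test -v -bench .)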

View File

@ -1,12 +1,12 @@
package qcode package qcode
import ( import (
"regexp"
"sort" "sort"
"strings" "strings"
) )
type Config struct { type Config struct {
DefaultBlock bool
Blocklist []string Blocklist []string
} }
@ -15,23 +15,27 @@ type QueryConfig struct {
Filters []string Filters []string
Columns []string Columns []string
DisableFunctions bool DisableFunctions bool
Block bool
} }
type InsertConfig struct { type InsertConfig struct {
Filters []string Filters []string
Columns []string Columns []string
Presets map[string]string Presets map[string]string
Block bool
} }
type UpdateConfig struct { type UpdateConfig struct {
Filters []string Filters []string
Columns []string Columns []string
Presets map[string]string Presets map[string]string
Block bool
} }
type DeleteConfig struct { type DeleteConfig struct {
Filters []string Filters []string
Columns []string Columns []string
Block bool
} }
type TRConfig struct { type TRConfig struct {
@ -47,9 +51,8 @@ type trval struct {
fil *Exp fil *Exp
filNU bool filNU bool
cols map[string]struct{} cols map[string]struct{}
disable struct { disable struct{ funcs bool }
funcs bool block bool
}
} }
insert struct { insert struct {
@ -58,6 +61,7 @@ type trval struct {
cols map[string]struct{} cols map[string]struct{}
psmap map[string]string psmap map[string]string
pslist []string pslist []string
block bool
} }
update struct { update struct {
@ -66,12 +70,14 @@ type trval struct {
cols map[string]struct{} cols map[string]struct{}
psmap map[string]string psmap map[string]string
pslist []string pslist []string
block bool
} }
delete struct { delete struct {
fil *Exp fil *Exp
filNU bool filNU bool
cols map[string]struct{} cols map[string]struct{}
block bool
} }
} }
@ -125,12 +131,3 @@ func mapToList(m map[string]string) []string {
sort.Strings(list) sort.Strings(list)
return list return list
} }
var varRe = regexp.MustCompile(`\$([a-zA-Z0-9_]+)`)
func parsePresets(m map[string]string) map[string]string {
for k, v := range m {
m[k] = varRe.ReplaceAllString(v, `{{$1}}`)
}
return m
}
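Together with the new DefaultBlock flag on Config, these Block fields let a role deny whole operation types per table. A hypothetical illustration (NewCompiler, AddRole and the config types appear in the diffs; the role, table and column names are invented):

    qcompile, _ := NewCompiler(Config{DefaultBlock: true}) // block tables anon doesn't opt into
    err := qcompile.AddRole("anon", "products", TRConfig{
        Query:  QueryConfig{Columns: []string{"id", "name"}},
        Insert: InsertConfig{Block: true},
        Update: UpdateConfig{Block: true},
        Delete: DeleteConfig{Block: true},
    })
    if err != nil {
        // invalid filters or other role/table misconfiguration
    }

Per the compiler change later in this diff, a blocked query is merely skipped from rendering (SkipRender), while a blocked insert, update or delete fails compilation with an error.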

View File

@ -11,15 +11,18 @@ import (
var ( var (
queryToken = []byte("query") queryToken = []byte("query")
mutationToken = []byte("mutation") mutationToken = []byte("mutation")
fragmentToken = []byte("fragment")
subscriptionToken = []byte("subscription") subscriptionToken = []byte("subscription")
onToken = []byte("on")
trueToken = []byte("true") trueToken = []byte("true")
falseToken = []byte("false") falseToken = []byte("false")
quotesToken = []byte(`'"`) quotesToken = []byte(`'"`)
signsToken = []byte(`+-`) signsToken = []byte(`+-`)
punctuatorToken = []byte(`!():=[]{|}`)
spreadToken = []byte(`...`) spreadToken = []byte(`...`)
digitToken = []byte(`0123456789`) digitToken = []byte(`0123456789`)
dotToken = []byte(`.`) dotToken = []byte(`.`)
punctuatorToken = `!():=[]{|}`
) )
// Pos represents a byte position in the original input text from which // Pos represents a byte position in the original input text from which
@ -43,6 +46,8 @@ const (
itemName itemName
itemQuery itemQuery
itemMutation itemMutation
itemFragment
itemOn
itemSub itemSub
itemPunctuator itemPunctuator
itemArgsOpen itemArgsOpen
@ -136,8 +141,7 @@ func (l *lexer) current() (Pos, Pos) {
func (l *lexer) emit(t itemType) { func (l *lexer) emit(t itemType) {
l.items = append(l.items, item{t, l.start, l.pos, l.line}) l.items = append(l.items, item{t, l.start, l.pos, l.line})
// Some items contain text internally. If so, count their newlines. // Some items contain text internally. If so, count their newlines.
switch t { if t == itemStringVal {
case itemStringVal:
for i := l.start; i < l.pos; i++ { for i := l.start; i < l.pos; i++ {
if l.input[i] == '\n' { if l.input[i] == '\n' {
l.line++ l.line++
@ -263,12 +267,12 @@ func lexRoot(l *lexer) stateFn {
l.backup() l.backup()
return lexString return lexString
case r == '.': case r == '.':
if len(l.input) >= 3 { l.acceptRun(dotToken)
if equals(l.input, 0, 3, spreadToken) { s, e := l.current()
if equals(l.input, s, e, spreadToken) {
l.emit(itemSpread) l.emit(itemSpread)
return lexRoot return lexRoot
} }
}
fallthrough // '.' can start a number. fallthrough // '.' can start a number.
case r == '+' || r == '-' || ('0' <= r && r <= '9'): case r == '+' || r == '-' || ('0' <= r && r <= '9'):
l.backup() l.backup()
@ -299,10 +303,14 @@ func lexName(l *lexer) stateFn {
switch { switch {
case equals(l.input, s, e, queryToken): case equals(l.input, s, e, queryToken):
l.emitL(itemQuery) l.emitL(itemQuery)
case equals(l.input, s, e, fragmentToken):
l.emitL(itemFragment)
case equals(l.input, s, e, mutationToken): case equals(l.input, s, e, mutationToken):
l.emitL(itemMutation) l.emitL(itemMutation)
case equals(l.input, s, e, subscriptionToken): case equals(l.input, s, e, subscriptionToken):
l.emitL(itemSub) l.emitL(itemSub)
case equals(l.input, s, e, onToken):
l.emitL(itemOn)
case equals(l.input, s, e, trueToken): case equals(l.input, s, e, trueToken):
l.emitL(itemBoolVal) l.emitL(itemBoolVal)
case equals(l.input, s, e, falseToken): case equals(l.input, s, e, falseToken):
@ -395,35 +403,15 @@ func isAlphaNumeric(r rune) bool {
return r == '_' || unicode.IsLetter(r) || unicode.IsDigit(r) return r == '_' || unicode.IsLetter(r) || unicode.IsDigit(r)
} }
func equals(b []byte, s Pos, e Pos, val []byte) bool { func equals(b []byte, s, e Pos, val []byte) bool {
n := 0 return bytes.EqualFold(b[s:e], val)
for i := s; i < e; i++ {
if n >= len(val) {
return true
}
switch {
case b[i] >= 'A' && b[i] <= 'Z' && ('a'+(b[i]-'A')) != val[n]:
return false
case b[i] != val[n]:
return false
}
n++
}
return true
} }
func contains(b []byte, s Pos, e Pos, val []byte) bool { func contains(b []byte, s, e Pos, chars string) bool {
for i := s; i < e; i++ { return bytes.ContainsAny(b[s:e], chars)
for n := 0; n < len(val); n++ {
if b[i] == val[n] {
return true
}
}
}
return false
} }
func lowercase(b []byte, s Pos, e Pos) { func lowercase(b []byte, s, e Pos) {
for i := s; i < e; i++ { for i := s; i < e; i++ {
if b[i] >= 'A' && b[i] <= 'Z' { if b[i] >= 'A' && b[i] <= 'Z' {
b[i] = ('a' + (b[i] - 'A')) b[i] = ('a' + (b[i] - 'A'))
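The hand-rolled byte loops above were replaced with standard-library equivalents; two standalone calls show the semantics (note that bytes.EqualFold performs Unicode case folding, a superset of the removed ASCII-only comparison):

    fmt.Println(bytes.EqualFold([]byte("MUTATION"), []byte("mutation"))) // true
    fmt.Println(bytes.ContainsAny([]byte("users(id: 5)"), "!():=[]{|}")) // true: contains '(' and ':'

This is also why punctuatorToken changed from a []byte to a string: bytes.ContainsAny takes its character set as a string.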

View File

@ -3,10 +3,9 @@ package qcode
import ( import (
"errors" "errors"
"fmt" "fmt"
"hash/maphash"
"sync" "sync"
"unsafe" "unsafe"
"github.com/dosco/super-graph/core/internal/util"
) )
var ( var (
@ -50,6 +49,19 @@ func (o *Operation) Reset() {
*o = zeroOperation *o = zeroOperation
} }
type Fragment struct {
Name string
On string
Fields []Field
fieldsA [10]Field
}
var zeroFragment = Fragment{}
func (f *Fragment) Reset() {
*f = zeroFragment
}
type Field struct { type Field struct {
ID int32 ID int32
ParentID int32 ParentID int32
@ -59,11 +71,13 @@ type Field struct {
argsA [5]Arg argsA [5]Arg
Children []int32 Children []int32
childrenA [5]int32 childrenA [5]int32
Union bool
} }
type Arg struct { type Arg struct {
Name string Name string
Val *Node Val *Node
df bool
} }
type Node struct { type Node struct {
@ -82,6 +96,8 @@ func (n *Node) Reset() {
} }
type Parser struct { type Parser struct {
frags map[uint64]*Fragment
h maphash.Hash
input []byte // the string being scanned input []byte // the string being scanned
pos int pos int
items []item items []item
@ -96,12 +112,192 @@ var opPool = sync.Pool{
New: func() interface{} { return new(Operation) }, New: func() interface{} { return new(Operation) },
} }
var fragPool = sync.Pool{
New: func() interface{} { return new(Fragment) },
}
var lexPool = sync.Pool{ var lexPool = sync.Pool{
New: func() interface{} { return new(lexer) }, New: func() interface{} { return new(lexer) },
} }
func Parse(gql []byte) (*Operation, error) { func Parse(gql []byte) (*Operation, error) {
return parseSelectionSet(gql) var err error
if len(gql) == 0 {
return nil, errors.New("blank query")
}
l := lexPool.Get().(*lexer)
l.Reset()
defer lexPool.Put(l)
if err = lex(l, gql); err != nil {
return nil, err
}
p := &Parser{
input: l.input,
pos: -1,
items: l.items,
}
op := opPool.Get().(*Operation)
op.Reset()
op.Fields = op.fieldsA[:0]
s := -1
qf := false
for {
if p.peek(itemEOF) {
p.ignore()
break
}
if p.peek(itemFragment) {
p.ignore()
if f, err := p.parseFragment(); err != nil {
fragPool.Put(f)
return nil, err
}
} else {
if !qf && p.peek(itemQuery, itemMutation, itemSub, itemObjOpen) {
s = p.pos
qf = true
}
p.ignore()
}
}
p.reset(s)
if err := p.parseOp(op); err != nil {
return nil, err
}
for _, v := range p.frags {
fragPool.Put(v)
}
return op, nil
}
func (p *Parser) parseFragment() (*Fragment, error) {
var err error
frag := fragPool.Get().(*Fragment)
frag.Reset()
frag.Fields = frag.fieldsA[:0]
if p.peek(itemName) {
frag.Name = p.val(p.next())
} else {
return frag, errors.New("fragment: missing name")
}
if p.peek(itemOn) {
p.ignore()
} else {
return frag, errors.New("fragment: missing 'on' keyword")
}
if p.peek(itemName) {
frag.On = p.vall(p.next())
} else {
return frag, errors.New("fragment: missing table name after 'on' keyword")
}
if p.peek(itemObjOpen) {
p.ignore()
} else {
return frag, fmt.Errorf("fragment: expecting a '{', got: %s", p.next())
}
frag.Fields, err = p.parseFields(frag.Fields)
if err != nil {
return frag, fmt.Errorf("fragment: %v", err)
}
if p.frags == nil {
p.frags = make(map[uint64]*Fragment)
}
_, _ = p.h.WriteString(frag.Name)
k := p.h.Sum64()
p.h.Reset()
p.frags[k] = frag
return frag, nil
}
func (p *Parser) parseOp(op *Operation) error {
var err error
var typeSet bool
if p.peek(itemQuery, itemMutation, itemSub) {
err = p.parseOpTypeAndArgs(op)
if err != nil {
return fmt.Errorf("%s: %v", op.Type, err)
}
typeSet = true
}
if p.peek(itemObjOpen) {
p.ignore()
if !typeSet {
op.Type = opQuery
}
for {
if p.peek(itemEOF, itemFragment) {
p.ignore()
break
}
op.Fields, err = p.parseFields(op.Fields)
if err != nil {
return fmt.Errorf("%s: %v", op.Type, err)
}
}
} else {
return fmt.Errorf("expecting a query, mutation or subscription, got: %s", p.next())
}
return nil
}
func (p *Parser) parseOpTypeAndArgs(op *Operation) error {
item := p.next()
switch item._type {
case itemQuery:
op.Type = opQuery
case itemMutation:
op.Type = opMutate
case itemSub:
op.Type = opSub
}
op.Args = op.argsA[:0]
var err error
if p.peek(itemName) {
op.Name = p.val(p.next())
}
if p.peek(itemArgsOpen) {
p.ignore()
op.Args, err = p.parseOpParams(op.Args)
if err != nil {
return err
}
}
return nil
} }
func ParseArgValue(argVal string) (*Node, error) { func ParseArgValue(argVal string) (*Node, error) {
@ -123,195 +319,59 @@ func ParseArgValue(argVal string) (*Node, error) {
return op, err return op, err
} }
func parseSelectionSet(gql []byte) (*Operation, error) { func (p *Parser) parseFields(fields []Field) ([]Field, error) {
var err error var err error
st := NewStack()
if len(gql) == 0 { if !p.peek(itemName, itemSpread) {
return nil, errors.New("blank query") return nil, fmt.Errorf("unexpected token: %s", p.peekNext())
} }
l := lexPool.Get().(*lexer) for {
l.Reset() if p.peek(itemEOF) {
if err = lex(l, gql); err != nil {
return nil, err
}
p := &Parser{
input: l.input,
pos: -1,
items: l.items,
}
var op *Operation
if p.peek(itemObjOpen) {
p.ignore() p.ignore()
op, err = p.parseQueryOp() return nil, errors.New("invalid query")
} else {
op, err = p.parseOp()
}
if err != nil {
return nil, err
} }
if p.peek(itemObjClose) { if p.peek(itemObjClose) {
p.ignore() p.ignore()
if st.Len() != 0 {
st.Pop()
continue
} else { } else {
return nil, fmt.Errorf("operation missing closing '}'")
}
if !p.peek(itemEOF) {
p.ignore()
return nil, fmt.Errorf("invalid '%s' found after closing '}'", p.current())
}
lexPool.Put(l)
return op, err
}
func (p *Parser) next() item {
n := p.pos + 1
if n >= len(p.items) {
p.err = errEOT
return item{_type: itemEOF}
}
p.pos = n
return p.items[p.pos]
}
func (p *Parser) ignore() {
n := p.pos + 1
if n >= len(p.items) {
p.err = errEOT
return
}
p.pos = n
}
func (p *Parser) current() string {
item := p.items[p.pos]
return b2s(p.input[item.pos:item.end])
}
func (p *Parser) peek(types ...itemType) bool {
n := p.pos + 1
// if p.items[n]._type == itemEOF {
// return false
// }
if n >= len(p.items) {
return false
}
for i := 0; i < len(types); i++ {
if p.items[n]._type == types[i] {
return true
}
}
return false
}
func (p *Parser) parseOp() (*Operation, error) {
if !p.peek(itemQuery, itemMutation, itemSub) {
err := errors.New("expecting a query, mutation or subscription")
return nil, err
}
item := p.next()
op := opPool.Get().(*Operation)
op.Reset()
switch item._type {
case itemQuery:
op.Type = opQuery
case itemMutation:
op.Type = opMutate
case itemSub:
op.Type = opSub
}
op.Fields = op.fieldsA[:0]
op.Args = op.argsA[:0]
var err error
if p.peek(itemName) {
op.Name = p.val(p.next())
}
if p.peek(itemArgsOpen) {
p.ignore()
op.Args, err = p.parseOpParams(op.Args)
if err != nil {
return nil, err
}
}
if p.peek(itemObjOpen) {
p.ignore()
for n := 0; n < 10; n++ {
if !p.peek(itemName) {
break break
} }
op.Fields, err = p.parseFields(op.Fields)
if err != nil {
return nil, err
}
}
} }
return op, nil
}
func (p *Parser) parseQueryOp() (*Operation, error) {
op := opPool.Get().(*Operation)
op.Reset()
op.Type = opQuery
op.Fields = op.fieldsA[:0]
op.Args = op.argsA[:0]
var err error
for n := 0; n < 10; n++ {
if !p.peek(itemName) {
break
}
op.Fields, err = p.parseFields(op.Fields)
if err != nil {
return nil, err
}
}
return op, nil
}
func (p *Parser) parseFields(fields []Field) ([]Field, error) {
st := util.NewStack()
for {
if len(fields) >= maxFields { if len(fields) >= maxFields {
return nil, fmt.Errorf("too many fields (max %d)", maxFields) return nil, fmt.Errorf("too many fields (max %d)", maxFields)
} }
if p.peek(itemObjClose) { isFrag := false
if p.peek(itemSpread) {
p.ignore() p.ignore()
st.Pop() isFrag = true
}
if st.Len() == 0 { if isFrag {
break fields, err = p.parseFragmentFields(st, fields)
} else { } else {
continue fields, err = p.parseNormalFields(st, fields)
}
if err != nil {
return nil, err
} }
} }
return fields, nil
}
func (p *Parser) parseNormalFields(st *Stack, fields []Field) ([]Field, error) {
if !p.peek(itemName) { if !p.peek(itemName) {
return nil, errors.New("expecting an alias or field name") return nil, fmt.Errorf("expecting an alias or field name, got: %s", p.next())
} }
fields = append(fields, Field{ID: int32(len(fields))}) fields = append(fields, Field{ID: int32(len(fields))})
@ -320,18 +380,17 @@ func (p *Parser) parseFields(fields []Field) ([]Field, error) {
f.Args = f.argsA[:0] f.Args = f.argsA[:0]
f.Children = f.childrenA[:0] f.Children = f.childrenA[:0]
// Parse the inside of the fields () parentheses // Parse the field
// in short parse the args like id, where, etc
if err := p.parseField(f); err != nil { if err := p.parseField(f); err != nil {
return nil, err return nil, err
} }
intf := st.Peek() if st.Len() == 0 {
if pid, ok := intf.(int32); ok { f.ParentID = -1
} else {
pid := st.Peek()
f.ParentID = pid f.ParentID = pid
fields[pid].Children = append(fields[pid].Children, f.ID) fields[pid].Children = append(fields[pid].Children, f.ID)
} else {
f.ParentID = -1
} }
// The first opening curly bracket after this
@ -339,12 +398,79 @@ func (p *Parser) parseFields(fields []Field) ([]Field, error) {
if p.peek(itemObjOpen) { if p.peek(itemObjOpen) {
p.ignore() p.ignore()
st.Push(f.ID) st.Push(f.ID)
}
return fields, nil
}
func (p *Parser) parseFragmentFields(st *Stack, fields []Field) ([]Field, error) {
var err error
pid := st.Peek()
if p.peek(itemOn) {
p.ignore()
fields[pid].Union = true
if fields, err = p.parseNormalFields(st, fields); err != nil {
return nil, err
}
// If the parent is a union selector, copy its args down to each direct
// child, i.e. the root selector of each union member type.
for i := pid + 1; i < int32(len(fields)); i++ {
f := &fields[i]
if f.ParentID == pid {
f.Args = fields[pid].Args
}
}
} else if p.peek(itemObjClose) {
if st.Len() == 0 {
break
} else { } else {
continue if !p.peek(itemName) {
return nil, fmt.Errorf("expecting a fragment name, got: %s", p.next())
}
name := p.val(p.next())
_, _ = p.h.WriteString(name)
id := p.h.Sum64()
p.h.Reset()
fr, ok := p.frags[id]
if !ok {
return nil, fmt.Errorf("no fragment named '%s' defined", name)
}
ff := fr.Fields
n := int32(len(fields))
fields = append(fields, ff...)
for i := 0; i < len(ff); i++ {
k := (n + int32(i))
f := &fields[k]
f.ID = int32(k)
// If this field is at the fragment's top level, re-parent it to the
// field the fragment was spread into.
if f.ParentID == -1 {
f.ParentID = pid
if f.ParentID != -1 {
fields[pid].Children = append(fields[pid].Children, f.ID)
}
// Otherwise shift the parent ID by our offset into the combined array
} else {
f.ParentID += n
}
// Copy over children since fields append is not a deep copy
f.Children = make([]int32, len(f.Children))
copy(f.Children, ff[i].Children)
// Copy over args since args append is not a deep copy
f.Args = make([]Arg, len(f.Args))
copy(f.Args, ff[i].Args)
// Shift the child IDs by the same offset.
for j := range f.Children {
f.Children[j] += n
} }
} }
} }
@ -385,7 +511,7 @@ func (p *Parser) parseOpParams(args []Arg) ([]Arg, error) {
return nil, fmt.Errorf("too many args (max %d)", maxArgs) return nil, fmt.Errorf("too many args (max %d)", maxArgs)
} }
if p.peek(itemArgsClose) { if p.peek(itemEOF, itemArgsClose) {
p.ignore() p.ignore()
break break
} }
@ -403,7 +529,7 @@ func (p *Parser) parseArgs(args []Arg) ([]Arg, error) {
return nil, fmt.Errorf("too many args (max %d)", maxArgs) return nil, fmt.Errorf("too many args (max %d)", maxArgs)
} }
if p.peek(itemArgsClose) { if p.peek(itemEOF, itemArgsClose) {
p.ignore() p.ignore()
break break
} }
@ -445,11 +571,9 @@ func (p *Parser) parseList() (*Node, error) {
} }
if ty == 0 { if ty == 0 {
ty = node.Type ty = node.Type
} else { } else if ty != node.Type {
if ty != node.Type {
return nil, errors.New("All values in a list must be of the same type") return nil, errors.New("All values in a list must be of the same type")
} }
}
node.Parent = parent node.Parent = parent
nodes = append(nodes, node) nodes = append(nodes, node)
} }
@ -470,7 +594,7 @@ func (p *Parser) parseObj() (*Node, error) {
parent.Reset() parent.Reset()
for { for {
if p.peek(itemObjClose) { if p.peek(itemEOF, itemObjClose) {
p.ignore() p.ignore()
break break
} }
@ -545,6 +669,57 @@ func (p *Parser) vall(v item) string {
return b2s(p.input[v.pos:v.end]) return b2s(p.input[v.pos:v.end])
} }
func (p *Parser) peek(types ...itemType) bool {
n := p.pos + 1
l := len(types)
// if p.items[n]._type == itemEOF {
// return false
// }
if n >= len(p.items) {
return types[0] == itemEOF
}
if l == 1 {
return p.items[n]._type == types[0]
}
for i := 0; i < l; i++ {
if p.items[n]._type == types[i] {
return true
}
}
return false
}
func (p *Parser) next() item {
n := p.pos + 1
if n >= len(p.items) {
p.err = errEOT
return item{_type: itemEOF}
}
p.pos = n
return p.items[p.pos]
}
func (p *Parser) ignore() {
n := p.pos + 1
if n >= len(p.items) {
p.err = errEOT
return
}
p.pos = n
}
func (p *Parser) peekNext() string {
item := p.items[p.pos+1]
return b2s(p.input[item.pos:item.end])
}
func (p *Parser) reset(to int) {
p.pos = to
}
func b2s(b []byte) string { func b2s(b []byte) string {
return *(*string)(unsafe.Pointer(&b)) return *(*string)(unsafe.Pointer(&b))
} }
@ -578,34 +753,9 @@ func (t parserType) String() string {
case NodeList: case NodeList:
v = "node-list" v = "node-list"
} }
return fmt.Sprintf("<%s>", v) return v
} }
// type Frees struct { func FreeNode(n *Node) {
// n *Node
// loc int
// }
// var freeList []Frees
// func FreeNode(n *Node, loc int) {
// j := -1
// for i := range freeList {
// if n == freeList[i].n {
// j = i
// break
// }
// }
// if j == -1 {
// nodePool.Put(n)
// freeList = append(freeList, Frees{n, loc})
// } else {
// fmt.Printf(">>>>(%d) RE_FREE %d %p %s %s\n", loc, freeList[j].loc, freeList[j].n, n.Name, n.Type)
// }
// }
func FreeNode(n *Node, loc int) {
nodePool.Put(n) nodePool.Put(n)
} }
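A note on the new fragment registry: fragments are stored in a map keyed by a 64-bit hash of the fragment name, computed with hash/maphash, and spreads look names up the same way. The pattern in isolation (a simplified standalone sketch, not the parser's exact code):

    var h maphash.Hash // the zero Hash is ready to use; it seeds itself

    func fragKey(h *maphash.Hash, name string) uint64 {
        h.Reset() // Reset keeps the seed, so keys stay comparable
        _, _ = h.WriteString(name)
        return h.Sum64()
    }

    // register:           frags[fragKey(&h, frag.Name)] = frag
    // resolve "...name":  fr, ok := frags[fragKey(&h, name)]

Keying by hash rather than by string likely also avoids retaining name strings that point into the parser's reusable input buffer.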

View File

@ -3,6 +3,8 @@ package qcode
import ( import (
"errors" "errors"
"testing" "testing"
"github.com/chirino/graphql/schema"
) )
func TestCompile1(t *testing.T) { func TestCompile1(t *testing.T) {
@ -119,7 +121,7 @@ updateThread {
} }
} }
} }
}` }}`
qcompile, _ := NewCompiler(Config{}) qcompile, _ := NewCompiler(Config{})
_, err := qcompile.Compile([]byte(gql), "anon") _, err := qcompile.Compile([]byte(gql), "anon")
@ -129,8 +131,95 @@ updateThread {
} }
func TestFragmentsCompile1(t *testing.T) {
gql := `
fragment userFields1 on user {
id
email
}
query {
users {
...userFields2
created_at
...userFields1
}
}
fragment userFields2 on user {
first_name
last_name
}
`
qcompile, _ := NewCompiler(Config{})
_, err := qcompile.Compile([]byte(gql), "user")
if err != nil {
t.Fatal(err)
}
}
func TestFragmentsCompile2(t *testing.T) {
gql := `
query {
users {
...userFields2
created_at
...userFields1
}
}
fragment userFields1 on user {
id
email
}
fragment userFields2 on user {
first_name
last_name
}`
qcompile, _ := NewCompiler(Config{})
_, err := qcompile.Compile([]byte(gql), "user")
if err != nil {
t.Fatal(err)
}
}
func TestFragmentsCompile3(t *testing.T) {
gql := `
fragment userFields1 on user {
id
email
}
fragment userFields2 on user {
first_name
last_name
}
query {
users {
...userFields2
created_at
...userFields1
}
}
`
qcompile, _ := NewCompiler(Config{})
_, err := qcompile.Compile([]byte(gql), "user")
if err != nil {
t.Fatal(err)
}
}
var gql = []byte(` var gql = []byte(`
products( {products(
# returns only 30 items # returns only 30 items
limit: 30, limit: 30,
@ -148,7 +237,30 @@ var gql = []byte(`
id id
name name
price price
}`) }}`)
var gqlWithFragments = []byte(`
fragment userFields1 on user {
id
email
__typename
}
query {
users {
...userFields2
created_at
...userFields1
__typename
}
}
fragment userFields2 on user {
first_name
last_name
__typename
}`)
func BenchmarkQCompile(b *testing.B) { func BenchmarkQCompile(b *testing.B) {
qcompile, _ := NewCompiler(Config{}) qcompile, _ := NewCompiler(Config{})
@ -181,3 +293,85 @@ func BenchmarkQCompileP(b *testing.B) {
} }
}) })
} }
func BenchmarkQCompileFragment(b *testing.B) {
qcompile, _ := NewCompiler(Config{})
b.ResetTimer()
b.ReportAllocs()
for n := 0; n < b.N; n++ {
_, err := qcompile.Compile(gqlWithFragments, "user")
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkParse(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for n := 0; n < b.N; n++ {
_, err := Parse(gql)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkParseP(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
_, err := Parse(gql)
if err != nil {
b.Fatal(err)
}
}
})
}
func BenchmarkParseFragment(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for n := 0; n < b.N; n++ {
_, err := Parse(gqlWithFragments)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkSchemaParse(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for n := 0; n < b.N; n++ {
doc := schema.QueryDocument{}
err := doc.Parse(string(gql))
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkSchemaParseP(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
doc := schema.QueryDocument{}
err := doc.Parse(string(gql))
if err != nil {
b.Fatal(err)
}
}
})
}

View File

@ -12,6 +12,7 @@ import (
) )
type QType int type QType int
type SType int
type Action int type Action int
const ( const (
@ -19,7 +20,8 @@ const (
) )
const ( const (
QTQuery QType = iota + 1 QTUnknown QType = iota
QTQuery
QTMutation QTMutation
QTInsert QTInsert
QTUpdate QTUpdate
@ -27,6 +29,12 @@ const (
QTUpsert QTUpsert
) )
const (
STNone SType = iota
STUnion
STMember
)
type QCode struct { type QCode struct {
Type QType Type QType
ActionVar string ActionVar string
@ -38,6 +46,8 @@ type QCode struct {
type Select struct { type Select struct {
ID int32 ID int32
ParentID int32 ParentID int32
UParentID int32
Type SType
Args map[string]*Node Args map[string]*Node
Name string Name string
FieldName string FieldName string
@ -172,6 +182,8 @@ const (
type Compiler struct { type Compiler struct {
tr map[string]map[string]*trval tr map[string]map[string]*trval
bl map[string]struct{} bl map[string]struct{}
defBlock bool
} }
var expPool = sync.Pool{ var expPool = sync.Pool{
@ -179,7 +191,7 @@ var expPool = sync.Pool{
} }
func NewCompiler(c Config) (*Compiler, error) { func NewCompiler(c Config) (*Compiler, error) {
co := &Compiler{} co := &Compiler{defBlock: c.DefaultBlock}
co.tr = make(map[string]map[string]*trval) co.tr = make(map[string]map[string]*trval)
co.bl = make(map[string]struct{}, len(c.Blocklist)) co.bl = make(map[string]struct{}, len(c.Blocklist))
@ -218,6 +230,7 @@ func (com *Compiler) AddRole(role, table string, trc TRConfig) error {
} }
trv.query.cols = listToMap(trc.Query.Columns) trv.query.cols = listToMap(trc.Query.Columns)
trv.query.disable.funcs = trc.Query.DisableFunctions trv.query.disable.funcs = trc.Query.DisableFunctions
trv.query.block = trc.Query.Block
// insert config // insert config
trv.insert.fil, trv.insert.filNU, err = compileFilter(trc.Insert.Filters) trv.insert.fil, trv.insert.filNU, err = compileFilter(trc.Insert.Filters)
@ -225,8 +238,9 @@ func (com *Compiler) AddRole(role, table string, trc TRConfig) error {
return err return err
} }
trv.insert.cols = listToMap(trc.Insert.Columns) trv.insert.cols = listToMap(trc.Insert.Columns)
trv.insert.psmap = parsePresets(trc.Insert.Presets) trv.insert.psmap = trc.Insert.Presets
trv.insert.pslist = mapToList(trv.insert.psmap) trv.insert.pslist = mapToList(trv.insert.psmap)
trv.insert.block = trc.Insert.Block
// update config // update config
trv.update.fil, trv.update.filNU, err = compileFilter(trc.Update.Filters) trv.update.fil, trv.update.filNU, err = compileFilter(trc.Update.Filters)
@ -234,8 +248,9 @@ func (com *Compiler) AddRole(role, table string, trc TRConfig) error {
return err return err
} }
trv.update.cols = listToMap(trc.Update.Columns) trv.update.cols = listToMap(trc.Update.Columns)
trv.update.psmap = parsePresets(trc.Update.Presets) trv.update.psmap = trc.Update.Presets
trv.update.pslist = mapToList(trv.update.psmap) trv.update.pslist = mapToList(trv.update.psmap)
trv.update.block = trc.Update.Block
// delete config // delete config
trv.delete.fil, trv.delete.filNU, err = compileFilter(trc.Delete.Filters) trv.delete.fil, trv.delete.filNU, err = compileFilter(trc.Delete.Filters)
@ -243,6 +258,7 @@ func (com *Compiler) AddRole(role, table string, trc TRConfig) error {
return err return err
} }
trv.delete.cols = listToMap(trc.Delete.Columns) trv.delete.cols = listToMap(trc.Delete.Columns)
trv.delete.block = trc.Delete.Block
singular := flect.Singularize(table) singular := flect.Singularize(table)
plural := flect.Pluralize(table) plural := flect.Pluralize(table)
@ -271,6 +287,7 @@ func (com *Compiler) Compile(query []byte, role string) (*QCode, error) {
return nil, err return nil, err
} }
freeNodes(op)
opPool.Put(op) opPool.Put(op)
return &qc, nil return &qc, nil
@@ -328,17 +345,76 @@ func (com *Compiler) compileQuery(qc *QCode, op *Operation, role string) error {
 		}
 
 		trv := com.getRole(role, field.Name)
+		skipRender := false
+
+		if trv != nil {
+			switch action {
+			case QTQuery:
+				if trv.query.block {
+					skipRender = true
+				}
+
+			case QTInsert:
+				if trv.insert.block {
+					return fmt.Errorf("%s, insert blocked: %s", role, field.Name)
+				}
+
+			case QTUpdate:
+				if trv.update.block {
+					return fmt.Errorf("%s, update blocked: %s", role, field.Name)
+				}
+
+			case QTDelete:
+				if trv.delete.block {
+					return fmt.Errorf("%s, delete blocked: %s", role, field.Name)
+				}
+			}
+
+		} else if role == "anon" {
+			skipRender = com.defBlock
+		}
 
 		selects = append(selects, Select{
 			ID:         id,
 			ParentID:   parentID,
 			Name:       field.Name,
-			Children:   make([]int32, 0, 5),
-			Allowed:    trv.allowedColumns(action),
-			Functions:  true,
+			SkipRender: skipRender,
 		})
 		s := &selects[(len(selects) - 1)]
 
+		if field.Union {
+			s.Type = STUnion
+		}
+
+		if field.Alias != "" {
+			s.FieldName = field.Alias
+		} else {
+			s.FieldName = s.Name
+		}
+
+		if s.ParentID == -1 {
+			qc.Roots = append(qc.Roots, s.ID)
+		} else {
+			p := &selects[s.ParentID]
+			p.Children = append(p.Children, s.ID)
+
+			if p.Type == STUnion {
+				s.Type = STMember
+				s.UParentID = p.ParentID
+			}
+		}
+
+		if skipRender {
+			id++
+			continue
+		}
+
+		s.Children = make([]int32, 0, 5)
+		s.Functions = true
+
+		if trv != nil {
+			s.Allowed = trv.allowedColumns(action)
+
 			switch action {
 			case QTQuery:
 				s.Functions = !trv.query.disable.funcs
@@ -352,11 +428,6 @@ func (com *Compiler) compileQuery(qc *QCode, op *Operation, role string) error {
 				s.PresetMap = trv.update.psmap
 				s.PresetList = trv.update.pslist
 			}
-
-		if len(field.Alias) != 0 {
-			s.FieldName = field.Alias
-		} else {
-			s.FieldName = s.Name
 		}
 
 		err := com.compileArgs(qc, s, field.Args, role)
@@ -367,14 +438,8 @@ func (com *Compiler) compileQuery(qc *QCode, op *Operation, role string) error {
 		// Order is important AddFilters must come after compileArgs
 		com.AddFilters(qc, s, role)
 
-		if s.ParentID == -1 {
-			qc.Roots = append(qc.Roots, s.ID)
-		} else {
-			p := &selects[s.ParentID]
-			p.Children = append(p.Children, s.ID)
-		}
-
 		s.Cols = make([]Column, 0, len(field.Children))
+		cm := make(map[string]struct{})
 		action = QTQuery
 
 		for _, cid := range field.Children {
@@ -384,19 +449,27 @@ func (com *Compiler) compileQuery(qc *QCode, op *Operation, role string) error {
 				continue
 			}
 
+			var fname string
+
+			if f.Alias != "" {
+				fname = f.Alias
+			} else {
+				fname = f.Name
+			}
+
+			if _, ok := cm[fname]; ok {
+				continue
+			} else {
+				cm[fname] = struct{}{}
+			}
+
 			if len(f.Children) != 0 {
 				val := f.ID | (s.ID << 16)
 				st.Push(val)
 				continue
 			}
 
-			col := Column{Name: f.Name}
-
-			if len(f.Alias) != 0 {
-				col.FieldName = f.Alias
-			} else {
-				col.FieldName = f.Name
-			}
+			col := Column{Name: f.Name, FieldName: fname}
 
 			s.Cols = append(s.Cols, col)
 		}
@@ -408,19 +481,16 @@ func (com *Compiler) compileQuery(qc *QCode, op *Operation, role string) error {
 	}
 
 	qc.Selects = selects[:id]
 	return nil
 }
 
 func (com *Compiler) AddFilters(qc *QCode, sel *Select, role string) {
 	var fil *Exp
-	var nu bool
+	var nu bool // need user_id (or not) in this filter
 
 	if trv, ok := com.tr[role][sel.Name]; ok {
 		fil, nu = trv.filter(qc.Type)
-
-	} else if role == "anon" {
-		// Tables not defined under the anon role will not be rendered
-		sel.SkipRender = true
 	}
 
 	if fil == nil {
@@ -443,50 +513,42 @@ func (com *Compiler) AddFilters(qc *QCode, sel *Select, role string) {
 func (com *Compiler) compileArgs(qc *QCode, sel *Select, args []Arg, role string) error {
 	var err error
 
-	// don't free this arg either previously done or will be free'd
-	// in the future like in psql
-	var df bool
-
 	for i := range args {
 		arg := &args[i]
 
 		switch arg.Name {
 		case "id":
-			err, df = com.compileArgID(sel, arg)
+			err = com.compileArgID(sel, arg)
 		case "search":
-			err, df = com.compileArgSearch(sel, arg)
+			err = com.compileArgSearch(sel, arg)
 		case "where":
-			err, df = com.compileArgWhere(sel, arg, role)
+			err = com.compileArgWhere(sel, arg, role)
 		case "orderby", "order_by", "order":
-			err, df = com.compileArgOrderBy(sel, arg)
+			err = com.compileArgOrderBy(sel, arg)
 		case "distinct_on", "distinct":
-			err, df = com.compileArgDistinctOn(sel, arg)
+			err = com.compileArgDistinctOn(sel, arg)
 		case "limit":
-			err, df = com.compileArgLimit(sel, arg)
+			err = com.compileArgLimit(sel, arg)
 		case "offset":
-			err, df = com.compileArgOffset(sel, arg)
+			err = com.compileArgOffset(sel, arg)
 		case "first":
-			err, df = com.compileArgFirstLast(sel, arg, PtForward)
+			err = com.compileArgFirstLast(sel, arg, PtForward)
 		case "last":
-			err, df = com.compileArgFirstLast(sel, arg, PtBackward)
+			err = com.compileArgFirstLast(sel, arg, PtBackward)
 		case "after":
-			err, df = com.compileArgAfterBefore(sel, arg, PtForward)
+			err = com.compileArgAfterBefore(sel, arg, PtForward)
 		case "before":
-			err, df = com.compileArgAfterBefore(sel, arg, PtBackward)
-		}
-
-		if !df {
-			FreeNode(arg.Val, 5)
 		}
 
 		if err != nil {
@@ -567,15 +629,13 @@ func (com *Compiler) compileArgNode(st *util.Stack, node *Node, usePool bool) (*
 		}
 
 		// Objects inside a list
-		if len(node.Name) == 0 {
+		if node.Name == "" {
 			pushChildren(st, node.exp, node)
 			continue
-		} else {
-			if _, ok := com.bl[node.Name]; ok {
-				continue
-			}
+		} else if _, ok := com.bl[node.Name]; ok {
+			continue
 		}
 
 		ex, err := newExp(st, node, usePool)
 		if err != nil {
@@ -597,39 +657,20 @@ func (com *Compiler) compileArgNode(st *util.Stack, node *Node, usePool bool) (*
 		}
 	}
 
-	if usePool {
-		st.Push(node)
-
-		for {
-			if st.Len() == 0 {
-				break
-			}
-			intf := st.Pop()
-			node, ok := intf.(*Node)
-			if !ok || node == nil {
-				continue
-			}
-			for i := range node.Children {
-				st.Push(node.Children[i])
-			}
-			FreeNode(node, 1)
-		}
-	}
-
 	return root, needsUser, nil
 }
 
-func (com *Compiler) compileArgID(sel *Select, arg *Arg) (error, bool) {
+func (com *Compiler) compileArgID(sel *Select, arg *Arg) error {
 	if sel.ID != 0 {
-		return nil, false
+		return nil
 	}
 
 	if sel.Where != nil && sel.Where.Op == OpEqID {
-		return nil, false
+		return nil
 	}
 
 	if arg.Val.Type != NodeVar {
-		return argErr("id", "variable"), false
+		return argErr("id", "variable")
 	}
 
 	ex := expPool.Get().(*Exp)
@@ -640,12 +681,12 @@ func (com *Compiler) compileArgID(sel *Select, arg *Arg) error {
 	ex.Val = arg.Val.Val
 
 	sel.Where = ex
-	return nil, false
+	return nil
 }
 
-func (com *Compiler) compileArgSearch(sel *Select, arg *Arg) (error, bool) {
+func (com *Compiler) compileArgSearch(sel *Select, arg *Arg) error {
 	if arg.Val.Type != NodeVar {
-		return argErr("search", "variable"), false
+		return argErr("search", "variable")
 	}
 
 	ex := expPool.Get().(*Exp)
@@ -660,18 +701,19 @@ func (com *Compiler) compileArgSearch(sel *Select, arg *Arg) error {
 	}
 
 	sel.Args[arg.Name] = arg.Val
+	arg.df = true
 	AddFilter(sel, ex)
 
-	return nil, true
+	return nil
 }
 
-func (com *Compiler) compileArgWhere(sel *Select, arg *Arg, role string) (error, bool) {
+func (com *Compiler) compileArgWhere(sel *Select, arg *Arg, role string) error {
 	st := util.NewStack()
 	var err error
 
 	ex, nu, err := com.compileArgObj(st, arg)
 	if err != nil {
-		return err, false
+		return err
 	}
 
 	if nu && role == "anon" {
@@ -679,12 +721,12 @@ func (com *Compiler) compileArgWhere(sel *Select, arg *Arg, role string) error {
 	}
 	AddFilter(sel, ex)
 
-	return nil, true
+	return nil
 }
 
-func (com *Compiler) compileArgOrderBy(sel *Select, arg *Arg) (error, bool) {
+func (com *Compiler) compileArgOrderBy(sel *Select, arg *Arg) error {
 	if arg.Val.Type != NodeObj {
-		return fmt.Errorf("expecting an object"), false
+		return fmt.Errorf("expecting an object")
 	}
 
 	st := util.NewStack()
@@ -702,16 +744,15 @@ func (com *Compiler) compileArgOrderBy(sel *Select, arg *Arg) error {
 		node, ok := intf.(*Node)
 		if !ok || node == nil {
-			return fmt.Errorf("17: unexpected value %v (%t)", intf, intf), false
+			return fmt.Errorf("17: unexpected value %v (%t)", intf, intf)
 		}
 
 		if _, ok := com.bl[node.Name]; ok {
-			FreeNode(node, 2)
 			continue
 		}
 
 		if node.Type != NodeStr && node.Type != NodeVar {
-			return fmt.Errorf("expecting a string or variable"), false
+			return fmt.Errorf("expecting a string or variable")
 		}
 
 		ob := &OrderBy{}
@@ -730,25 +771,24 @@ func (com *Compiler) compileArgOrderBy(sel *Select, arg *Arg) error {
 		case "desc_nulls_last":
 			ob.Order = OrderDescNullsLast
 		default:
-			return fmt.Errorf("valid values include asc, desc, asc_nulls_first and desc_nulls_first"), false
+			return fmt.Errorf("valid values include asc, desc, asc_nulls_first and desc_nulls_first")
 		}
 
 		setOrderByColName(ob, node)
 		sel.OrderBy = append(sel.OrderBy, ob)
-		FreeNode(node, 3)
 	}
-	return nil, false
+	return nil
 }
 
-func (com *Compiler) compileArgDistinctOn(sel *Select, arg *Arg) (error, bool) {
+func (com *Compiler) compileArgDistinctOn(sel *Select, arg *Arg) error {
 	node := arg.Val
 
 	if _, ok := com.bl[node.Name]; ok {
-		return nil, false
+		return nil
 	}
 
 	if node.Type != NodeList && node.Type != NodeStr {
-		return fmt.Errorf("expecting a list of strings or just a string"), false
+		return fmt.Errorf("expecting a list of strings or just a string")
 	}
 
 	if node.Type == NodeStr {
@@ -757,68 +797,70 @@ func (com *Compiler) compileArgDistinctOn(sel *Select, arg *Arg) error {
 	for i := range node.Children {
 		sel.DistinctOn = append(sel.DistinctOn, node.Children[i].Val)
-		FreeNode(node.Children[i], 5)
 	}
-	return nil, false
+	return nil
 }
 
-func (com *Compiler) compileArgLimit(sel *Select, arg *Arg) (error, bool) {
+func (com *Compiler) compileArgLimit(sel *Select, arg *Arg) error {
 	node := arg.Val
 
 	if node.Type != NodeInt {
-		return argErr("limit", "number"), false
+		return argErr("limit", "number")
 	}
 
 	sel.Paging.Limit = node.Val
-	return nil, false
+	return nil
 }
 
-func (com *Compiler) compileArgOffset(sel *Select, arg *Arg) (error, bool) {
+func (com *Compiler) compileArgOffset(sel *Select, arg *Arg) error {
 	node := arg.Val
 
 	if node.Type != NodeVar {
-		return argErr("offset", "variable"), false
+		return argErr("offset", "variable")
 	}
 
 	sel.Paging.Offset = node.Val
-	return nil, false
+	return nil
 }
 
-func (com *Compiler) compileArgFirstLast(sel *Select, arg *Arg, pt PagingType) (error, bool) {
+func (com *Compiler) compileArgFirstLast(sel *Select, arg *Arg, pt PagingType) error {
 	node := arg.Val
 
 	if node.Type != NodeInt {
-		return argErr(arg.Name, "number"), false
+		return argErr(arg.Name, "number")
 	}
 
 	sel.Paging.Type = pt
 	sel.Paging.Limit = node.Val
-	return nil, false
+	return nil
 }
 
-func (com *Compiler) compileArgAfterBefore(sel *Select, arg *Arg, pt PagingType) (error, bool) {
+func (com *Compiler) compileArgAfterBefore(sel *Select, arg *Arg, pt PagingType) error {
 	node := arg.Val
 
 	if node.Type != NodeVar || node.Val != "cursor" {
-		return fmt.Errorf("value for argument '%s' must be a variable named $cursor", arg.Name), false
+		return fmt.Errorf("value for argument '%s' must be a variable named $cursor", arg.Name)
 	}
 	sel.Paging.Type = pt
 	sel.Paging.Cursor = true
-	return nil, false
+	return nil
 }
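Note the asymmetry in the pagination arguments above: first/last require an integer literal, while after/before must be a variable literally named $cursor. A query that satisfies both checks, written as a Go string for illustration:

	const paginatedProducts = `
	query {
		products(first: 20, after: $cursor) {
			id
			name
		}
	}`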
-var zeroTrv = &trval{}
+// var zeroTrv = &trval{}
 
 func (com *Compiler) getRole(role, field string) *trval {
 	if trv, ok := com.tr[role][field]; ok {
 		return trv
-	} else {
-		return zeroTrv
 	}
+	return nil
+	// } else {
+	// 	return zeroTrv
+	// }
 }
 
 func AddFilter(sel *Select, fil *Exp) {
@@ -945,6 +987,9 @@ func newExp(st *util.Stack, node *Node, usePool bool) (*Exp, error) {
 		ex.Op = OpDistinct
 		ex.Val = node.Val
 	default:
+		if len(node.Children) == 0 {
+			return nil, fmt.Errorf("[Where] invalid operation: %s", name)
+		}
 		pushChildren(st, node.exp, node)
 		return nil, nil // skip node
 	}
@@ -964,8 +1009,9 @@ func newExp(st *util.Stack, node *Node, usePool bool) (*Exp, error) {
 		case NodeVar:
 			ex.Type = ValVar
 		default:
-			return nil, fmt.Errorf("[Where] valid values include string, int, float, boolean and list: %s", node.Type)
+			return nil, fmt.Errorf("[Where] invalid values for: %s", name)
 		}
 
 		setWhereColName(ex, node)
 	}
@@ -984,10 +1030,15 @@ func setListVal(ex *Exp, node *Node) {
 		case NodeFloat:
 			ex.ListType = ValFloat
 		}
+	} else {
+		ex.Val = node.Val
+		return
 	}
+
 	for i := range node.Children {
 		ex.ListVal = append(ex.ListVal, node.Children[i].Val)
 	}
 }
 func setWhereColName(ex *Exp, node *Node) {
@@ -997,7 +1048,7 @@ func setWhereColName(ex *Exp, node *Node) {
 		if n.Type != NodeObj {
 			continue
 		}
-		if len(n.Name) != 0 {
+		if n.Name != "" {
 			k := n.Name
 			if k == "and" || k == "or" || k == "not" ||
 				k == "_and" || k == "_or" || k == "_not" {
@@ -1014,6 +1065,7 @@ func setWhereColName(ex *Exp, node *Node) {
 		ex.Col = list[listlen-1]
 		ex.NestedCols = list[:listlen]
 	}
+
 }
 
 func setOrderByColName(ob *OrderBy, node *Node) {
@@ -1175,3 +1227,81 @@ func FreeExp(ex *Exp) {
 func argErr(name, ty string) error {
 	return fmt.Errorf("value for argument '%s' must be a %s", name, ty)
 }
+
+func freeNodes(op *Operation) {
+	var st *util.Stack
+	fm := make(map[*Node]struct{})
+
+	for i := range op.Args {
+		arg := op.Args[i]
+		if arg.df {
+			continue
+		}
+
+		for i := range arg.Val.Children {
+			if st == nil {
+				st = util.NewStack()
+			}
+			c := arg.Val.Children[i]
+			if _, ok := fm[c]; !ok {
+				st.Push(c)
+			}
+		}
+
+		if _, ok := fm[arg.Val]; !ok {
+			nodePool.Put(arg.Val)
+			fm[arg.Val] = struct{}{}
+		}
+	}
+
+	for i := range op.Fields {
+		f := op.Fields[i]
+
+		for j := range f.Args {
+			arg := f.Args[j]
+			if arg.df {
+				continue
+			}
+
+			for k := range arg.Val.Children {
+				if st == nil {
+					st = util.NewStack()
+				}
+				c := arg.Val.Children[k]
+				if _, ok := fm[c]; !ok {
+					st.Push(c)
+				}
+			}
+
+			if _, ok := fm[arg.Val]; !ok {
+				nodePool.Put(arg.Val)
+				fm[arg.Val] = struct{}{}
+			}
+		}
+	}
+
+	if st == nil {
+		return
+	}
+
+	for {
+		if st.Len() == 0 {
+			break
+		}
+		intf := st.Pop()
+		node, ok := intf.(*Node)
+		if !ok || node == nil {
+			continue
+		}
+		for i := range node.Children {
+			st.Push(node.Children[i])
+		}
+		if _, ok := fm[node]; !ok {
+			nodePool.Put(node)
+			fm[node] = struct{}{}
+		}
+	}
+}
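The fm set in freeNodes is what makes the recycling safe: the same *Node can be reachable from more than one argument, and putting it into a sync.Pool twice can hand one pointer to two future callers. A minimal, self-contained illustration of that guard (hypothetical names, not part of this diff):

	package main

	import "sync"

	type node struct{ children []*node }

	var pool = sync.Pool{New: func() interface{} { return new(node) }}

	// release returns n to the pool exactly once, even if called twice.
	func release(n *node, seen map[*node]struct{}) {
		if _, ok := seen[n]; ok {
			return // already recycled; a second Put would alias future Gets
		}
		seen[n] = struct{}{}
		pool.Put(n)
	}

	func main() {
		n := pool.Get().(*node)
		seen := map[*node]struct{}{}
		release(n, seen)
		release(n, seen) // safe no-op
	}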


@@ -1,12 +1,17 @@
 package qcode
 
 func GetQType(gql string) QType {
+	ic := false
 	for i := range gql {
 		b := gql[i]
-		if b == '{' {
+		switch {
+		case b == '#':
+			ic = true
+		case b == '\n':
+			ic = false
+		case !ic && b == '{':
 			return QTQuery
-		}
-		if al(b) {
+		case !ic && al(b):
 			switch b {
 			case 'm', 'M':
 				return QTMutation
@@ -24,6 +29,8 @@ func al(b byte) bool {
 func (qt QType) String() string {
 	switch qt {
+	case QTUnknown:
+		return "unknown"
 	case QTQuery:
 		return "query"
 	case QTMutation:
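The new ic flag blanks everything from a # up to the next newline before the type scan, so a comment can no longer be mistaken for the operation keyword; without a trailing newline the whole string is swallowed by the comment. For instance (see also the tests below):

	qcode.GetQType("# fetch users\n{ users { id } }") // QTQuery
	qcode.GetQType(" mutation { ... }")               // QTMutation
	qcode.GetQType("# query is good query {")         // no newline: falls through to -1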


@@ -0,0 +1,50 @@
package qcode
import "testing"
func TestGetQType(t *testing.T) {
type args struct {
gql string
}
type ts struct {
name string
args args
want QType
}
tests := []ts{
ts{
name: "query",
args: args{gql: " query {"},
want: QTQuery,
},
ts{
name: "mutation",
args: args{gql: " mutation {"},
want: QTMutation,
},
ts{
name: "default query",
args: args{gql: " {"},
want: QTQuery,
},
ts{
name: "default query with comment",
args: args{gql: `# query is good
{`},
want: QTQuery,
},
ts{
name: "failed query with comment",
args: args{gql: `# query is good query {`},
want: -1,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := GetQType(tt.args.gql); got != tt.want {
t.Errorf("GetQType() = %v, want %v", got, tt.want)
}
})
}
}

core/introspec.go (new file)

@@ -0,0 +1,490 @@
package core
import (
"strings"
"github.com/chirino/graphql"
"github.com/chirino/graphql/resolvers"
"github.com/chirino/graphql/schema"
"github.com/dosco/super-graph/core/internal/psql"
)
var typeMap map[string]string = map[string]string{
"smallint": "Int",
"integer": "Int",
"bigint": "Int",
"smallserial": "Int",
"serial": "Int",
"bigserial": "Int",
"decimal": "Float",
"numeric": "Float",
"real": "Float",
"double precision": "Float",
"money": "Float",
"boolean": "Boolean",
}
func (sg *SuperGraph) initGraphQLEgine() error {
engine := graphql.New()
engineSchema := engine.Schema
dbSchema := sg.schema
if err := engineSchema.Parse(`enum OrderDirection { asc desc }`); err != nil {
return err
}
gqltype := func(col psql.DBColumn) schema.Type {
typeName := typeMap[strings.ToLower(col.Type)]
if typeName == "" {
typeName = "String"
}
var t schema.Type = &schema.TypeName{Name: typeName}
if col.NotNull {
t = &schema.NonNull{OfType: t}
}
return t
}
query := &schema.Object{
Name: "Query",
Fields: schema.FieldList{},
}
mutation := &schema.Object{
Name: "Mutation",
Fields: schema.FieldList{},
}
engineSchema.Types[query.Name] = query
engineSchema.Types[mutation.Name] = mutation
engineSchema.EntryPoints[schema.Query] = query
engineSchema.EntryPoints[schema.Mutation] = mutation
//validGraphQLIdentifierRegex := regexp.MustCompile(`^[A-Za-z_][A-Za-z_0-9]*$`)
scalarExpressionTypesNeeded := map[string]bool{}
tableNames := dbSchema.GetTableNames()
funcs := dbSchema.GetFunctions()
for _, table := range tableNames {
ti, err := dbSchema.GetTable(table)
if err != nil {
return err
}
if !ti.IsSingular {
continue
}
singularName := ti.Singular
// if !validGraphQLIdentifierRegex.MatchString(singularName) {
// return errors.New("table name is not a valid GraphQL identifier: " + singularName)
// }
pluralName := ti.Plural
// if !validGraphQLIdentifierRegex.MatchString(pluralName) {
// return errors.New("table name is not a valid GraphQL identifier: " + pluralName)
// }
outputType := &schema.Object{
Name: singularName + "Output",
Fields: schema.FieldList{},
}
engineSchema.Types[outputType.Name] = outputType
inputType := &schema.InputObject{
Name: singularName + "Input",
Fields: schema.InputValueList{},
}
engineSchema.Types[inputType.Name] = inputType
orderByType := &schema.InputObject{
Name: singularName + "OrderBy",
Fields: schema.InputValueList{},
}
engineSchema.Types[orderByType.Name] = orderByType
expressionTypeName := singularName + "Expression"
expressionType := &schema.InputObject{
Name: expressionTypeName,
Fields: schema.InputValueList{
&schema.InputValue{
Name: "and",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: expressionTypeName}},
},
&schema.InputValue{
Name: "or",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: expressionTypeName}},
},
&schema.InputValue{
Name: "not",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: expressionTypeName}},
},
},
}
engineSchema.Types[expressionType.Name] = expressionType
for _, col := range ti.Columns {
colName := col.Name
// if !validGraphQLIdentifierRegex.MatchString(colName) {
// return errors.New("column name is not a valid GraphQL identifier: " + colName)
// }
colType := gqltype(col)
nullableColType := ""
if x, ok := colType.(*schema.NonNull); ok {
nullableColType = x.OfType.(*schema.TypeName).Name
} else {
nullableColType = colType.(*schema.TypeName).Name
}
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: colName,
Type: colType,
})
for _, f := range funcs {
if col.Type != f.Params[0].Type {
continue
}
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: f.Name + "_" + colName,
Type: colType,
})
}
// If it's a numeric type...
if nullableColType == "Float" || nullableColType == "Int" {
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "avg_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "count_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "max_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "min_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "stddev_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "stddev_pop_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "stddev_samp_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "variance_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "var_pop_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "var_samp_" + colName,
Type: colType,
})
}
inputType.Fields = append(inputType.Fields, &schema.InputValue{
Name: colName,
Type: colType,
})
orderByType.Fields = append(orderByType.Fields, &schema.InputValue{
Name: colName,
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "OrderDirection"}},
})
scalarExpressionTypesNeeded[nullableColType] = true
expressionType.Fields = append(expressionType.Fields, &schema.InputValue{
Name: colName,
Type: &schema.NonNull{OfType: &schema.TypeName{Name: nullableColType + "Expression"}},
})
}
outputTypeName := &schema.TypeName{Name: outputType.Name}
inputTypeName := &schema.TypeName{Name: inputType.Name}
pluralOutputTypeName := &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: outputType.Name}}}}
pluralInputTypeName := &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: inputType.Name}}}}
args := schema.InputValueList{
&schema.InputValue{
Desc: schema.Description{Text: "To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."},
Name: "order_by",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: orderByType.Name}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "where",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: expressionType.Name}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "limit",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Int"}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "offset",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Int"}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "first",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Int"}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "last",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Int"}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "before",
Type: &schema.TypeName{Name: "String"},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "after",
Type: &schema.TypeName{Name: "String"},
},
}
if ti.PrimaryCol != nil {
t := gqltype(*ti.PrimaryCol)
if _, ok := t.(*schema.NonNull); !ok {
t = &schema.NonNull{OfType: t}
}
args = append(args, &schema.InputValue{
Desc: schema.Description{Text: "Finds the record by the primary key"},
Name: "id",
Type: t,
})
}
if ti.TSVCol != nil {
args = append(args, &schema.InputValue{
Desc: schema.Description{Text: "Performs full text search using a TSV index"},
Name: "search",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
})
}
query.Fields = append(query.Fields, &schema.Field{
Desc: schema.Description{Text: ""},
Name: singularName,
Type: outputTypeName,
Args: args,
})
query.Fields = append(query.Fields, &schema.Field{
Desc: schema.Description{Text: ""},
Name: pluralName,
Type: pluralOutputTypeName,
Args: args,
})
mutationArgs := append(args, schema.InputValueList{
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "insert",
Type: inputTypeName,
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "update",
Type: inputTypeName,
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "upsert",
Type: inputTypeName,
},
}...)
mutation.Fields = append(mutation.Fields, &schema.Field{
Name: singularName,
Args: mutationArgs,
Type: outputType,
})
mutation.Fields = append(mutation.Fields, &schema.Field{
Name: pluralName,
Args: append(mutationArgs, schema.InputValueList{
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "inserts",
Type: pluralInputTypeName,
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "updates",
Type: pluralInputTypeName,
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "upserts",
Type: pluralInputTypeName,
},
}...),
Type: outputType,
})
}
for typeName := range scalarExpressionTypesNeeded {
expressionType := &schema.InputObject{
Name: typeName + "Expression",
Fields: schema.InputValueList{
&schema.InputValue{
Name: "eq",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "equals",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "neq",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "not_equals",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "gt",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "greater_than",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "lt",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "lesser_than",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "gte",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "greater_or_equals",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "lte",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "lesser_or_equals",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "in",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "nin",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "not_in",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "like",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "nlike",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "not_like",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "ilike",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "nilike",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "not_ilike",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "similar",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "nsimilar",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "not_similar",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "has_key",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "has_key_any",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "has_key_all",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "contains",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "contained_in",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "is_null",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Boolean"}},
},
},
}
engineSchema.Types[expressionType.Name] = expressionType
}
if err := engineSchema.ResolveTypes(); err != nil {
return err
}
engine.Resolver = resolvers.Func(func(request *resolvers.ResolveRequest, next resolvers.Resolution) resolvers.Resolution {
resolver := resolvers.MetadataResolver.Resolve(request, next)
if resolver != nil {
return resolver
}
resolver = resolvers.MethodResolver.Resolve(request, next) // needed by the MetadataResolver
if resolver != nil {
return resolver
}
return nil
})
sg.ge = engine
return nil
}
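The gqltype helper above composes typeMap with a NonNull wrapper. It is a closure local to initGraphQLEgine, so the following is an illustrative sketch only (DBColumn field names as used above):

	col := psql.DBColumn{Name: "price", Type: "numeric", NotNull: true}
	t := gqltype(col)
	// t == &schema.NonNull{OfType: &schema.TypeName{Name: "Float"}}, i.e. Float!
	// any Postgres type missing from typeMap (text, uuid, timestamp, ...)
	// falls back to String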


@@ -2,197 +2,118 @@ package core
 
 import (
 	"bytes"
-	"context"
-	"crypto/sha1"
 	"database/sql"
-	"encoding/hex"
 	"fmt"
+	"hash/maphash"
 	"io"
 	"strings"
+	"sync"
 
 	"github.com/dosco/super-graph/core/internal/allow"
-	"github.com/dosco/super-graph/core/internal/psql"
 	"github.com/dosco/super-graph/core/internal/qcode"
-	"github.com/valyala/fasttemplate"
 )
 
-type preparedItem struct {
+type query struct {
+	sync.Once
 	sd      *sql.Stmt
-	args    [][]byte
+	ai      allow.Item
+	qt      qcode.QType
+	err     error
 	st      stmt
 	roleArg bool
 }
 
+func (sg *SuperGraph) prepare(q *query, role string) {
+	var stmts []stmt
+	var err error
+
+	qb := []byte(q.ai.Query)
+	vars := []byte(q.ai.Vars)
+
+	switch q.qt {
+	case qcode.QTQuery:
+		if sg.abacEnabled {
+			stmts, err = sg.buildMultiStmt(qb, vars)
+		} else {
+			stmts, err = sg.buildRoleStmt(qb, vars, role)
+		}
+
+	case qcode.QTMutation:
+		stmts, err = sg.buildRoleStmt(qb, vars, role)
+	}
+
+	if err != nil {
+		sg.log.Printf("WRN %s %s: %v", q.qt, q.ai.Name, err)
+	}
+
+	q.st = stmts[0]
+	q.roleArg = len(stmts) > 1
+
+	q.sd, err = sg.db.Prepare(q.st.sql)
+	if err != nil {
+		q.err = fmt.Errorf("prepare failed: %v: %s", err, q.st.sql)
+	}
+}
+
 func (sg *SuperGraph) initPrepared() error {
-	ct := context.Background()
-
 	if sg.allowList.IsPersist() {
 		return nil
 	}
-	sg.prepared = make(map[string]*preparedItem)
 
-	tx, err := sg.db.BeginTx(ct, nil)
-	if err != nil {
-		return err
-	}
-	defer tx.Rollback() //nolint: errcheck
-
-	if err = sg.prepareRoleStmt(tx); err != nil {
-		return fmt.Errorf("prepareRoleStmt: %w", err)
+	if err := sg.prepareRoleStmt(); err != nil {
+		return fmt.Errorf("role query: %w", err)
 	}
 
-	if err := tx.Commit(); err != nil {
-		return err
-	}
-
-	success := 0
+	sg.queries = make(map[uint64]*query)
 
 	list, err := sg.allowList.Load()
 	if err != nil {
 		return err
 	}
 
+	h := maphash.Hash{}
+	h.SetSeed(sg.hashSeed)
+
 	for _, v := range list {
-		if len(v.Query) == 0 {
+		if v.Query == "" {
 			continue
 		}
 
-		err := sg.prepareStmt(v)
-		if err == nil {
-			success++
-			continue
-		}
-
-		// if len(v.Vars) == 0 {
-		// 	logger.Warn().Err(err).Msg(v.Query)
-		// } else {
-		// 	logger.Warn().Err(err).Msgf("%s %s", v.Vars, v.Query)
-		// }
-	}
-
-	// logger.Info().
-	// 	Msgf("Registered %d of %d queries from allow.list as prepared statements",
-	// 		success, len(list))
-
-	return nil
-}
-
-func (sg *SuperGraph) prepareStmt(item allow.Item) error {
-	query := item.Query
-	qb := []byte(query)
-	vars := item.Vars
-
-	qt := qcode.GetQType(query)
-	ct := context.Background()
-
-	tx, err := sg.db.BeginTx(ct, nil)
-	if err != nil {
-		return err
-	}
-	defer tx.Rollback() //nolint: errcheck
-
-	switch qt {
-	case qcode.QTQuery:
-		var stmts1 []stmt
-		var err error
-
-		if sg.abacEnabled {
-			stmts1, err = sg.buildMultiStmt(qb, vars)
-		} else {
-			stmts1, err = sg.buildRoleStmt(qb, vars, "user")
-		}
-
-		if err != nil {
-			return err
-		}
-
-		//logger.Debug().Msgf("Prepared statement 'query %s' (user)", item.Name)
-
-		err = sg.prepare(ct, tx, stmts1, stmtHash(item.Name, "user"))
-		if err != nil {
-			return err
-		}
-
-		if sg.anonExists {
-			// logger.Debug().Msgf("Prepared statement 'query %s' (anon)", item.Name)
-
-			stmts2, err := sg.buildRoleStmt(qb, vars, "anon")
-			if err == psql.ErrAllTablesSkipped {
-				return nil
-			}
-			if err != nil {
-				return err
-			}
-
-			err = sg.prepare(ct, tx, stmts2, stmtHash(item.Name, "anon"))
-			if err != nil {
-				return err
-			}
-		}
-
-	case qcode.QTMutation:
-		for _, role := range sg.conf.Roles {
-			// logger.Debug().Msgf("Prepared statement 'mutation %s' (%s)", item.Name, role.Name)
-
-			stmts, err := sg.buildRoleStmt(qb, vars, role.Name)
-			if err != nil {
-				continue
-			}
-
-			err = sg.prepare(ct, tx, stmts, stmtHash(item.Name, role.Name))
-			if err != nil {
-				return err
-			}
-		}
-	}
-
-	if err := tx.Commit(); err != nil {
-		return err
+		qt := qcode.GetQType(v.Query)
+
+		switch qt {
+		case qcode.QTQuery:
+			sg.queries[queryID(&h, v.Name, "user")] = &query{ai: v, qt: qt}
+			sg.queries[queryID(&h, v.Name, "anon")] = &query{ai: v, qt: qt}
+
+		case qcode.QTMutation:
+			for _, role := range sg.conf.Roles {
+				sg.queries[queryID(&h, v.Name, role.Name)] = &query{ai: v, qt: qt}
+			}
+		}
 	}
 
 	return nil
 }
 
-func (sg *SuperGraph) prepare(ct context.Context, tx *sql.Tx, st []stmt, key string) error {
-	finalSQL, am := processTemplate(st[0].sql)
-
-	sd, err := tx.Prepare(finalSQL)
-	if err != nil {
-		return err
-	}
-
-	sg.prepared[key] = &preparedItem{
-		sd:      sd,
-		args:    am,
-		st:      st[0],
-		roleArg: len(st) > 1,
-	}
-	return nil
-}
-
 // nolint: errcheck
-func (sg *SuperGraph) prepareRoleStmt(tx *sql.Tx) error {
+func (sg *SuperGraph) prepareRoleStmt() error {
 	var err error
 
 	if !sg.abacEnabled {
 		return nil
 	}
 
+	rq := strings.ReplaceAll(sg.conf.RolesQuery, "$user_id", "$1")
+
 	w := &bytes.Buffer{}
 	io.WriteString(w, `SELECT (CASE WHEN EXISTS (`)
-	io.WriteString(w, sg.conf.RolesQuery)
+	io.WriteString(w, rq)
 	io.WriteString(w, `) THEN `)
 
 	io.WriteString(w, `(SELECT (CASE`)
 	for _, role := range sg.conf.Roles {
-		if len(role.Match) == 0 {
+		if role.Match == "" {
 			continue
 		}
 		io.WriteString(w, ` WHEN `)
@@ -202,14 +123,12 @@ func (sg *SuperGraph) prepareRoleStmt() error {
 		io.WriteString(w, `'`)
 	}
 
-	io.WriteString(w, ` ELSE {{role}} END) FROM (`)
-	io.WriteString(w, sg.conf.RolesQuery)
+	io.WriteString(w, ` ELSE $2 END) FROM (`)
+	io.WriteString(w, rq)
 	io.WriteString(w, `) AS "_sg_auth_roles_query" LIMIT 1) `)
 	io.WriteString(w, `ELSE 'anon' END) FROM (VALUES (1)) AS "_sg_auth_filler" LIMIT 1; `)
 
-	roleSQL, _ := processTemplate(w.String())
-
-	sg.getRole, err = tx.Prepare(roleSQL)
+	sg.getRole, err = sg.db.Prepare(w.String())
 	if err != nil {
 		return err
 	}
@@ -217,47 +136,18 @@ func (sg *SuperGraph) prepareRoleStmt() error {
 	return nil
 }
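With $user_id rewritten to $1 and the requested role passed as $2, the statement assembled above comes out roughly as follows, assuming a roles_query of SELECT * FROM users WHERE id = $user_id and a single role matching users.admin = true (illustrative only, written as a Go raw string):

	const roleStmtShape = `
	SELECT (CASE WHEN EXISTS (SELECT * FROM users WHERE id = $1) THEN
		(SELECT (CASE WHEN users.admin = true THEN 'admin' ELSE $2 END)
		FROM (SELECT * FROM users WHERE id = $1) AS "_sg_auth_roles_query" LIMIT 1)
	ELSE 'anon' END) FROM (VALUES (1)) AS "_sg_auth_filler" LIMIT 1;`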
-func processTemplate(tmpl string) (string, [][]byte) {
-	st := struct {
-		vmap map[string]int
-		am   [][]byte
-		i    int
-	}{
-		vmap: make(map[string]int),
-		am:   make([][]byte, 0, 5),
-		i:    0,
-	}
-
-	execFunc := func(w io.Writer, tag string) (int, error) {
-		if n, ok := st.vmap[tag]; ok {
-			return w.Write([]byte(fmt.Sprintf("$%d", n)))
-		}
-		st.am = append(st.am, []byte(tag))
-		st.i++
-		st.vmap[tag] = st.i
-		return w.Write([]byte(fmt.Sprintf("$%d", st.i)))
-	}
-
-	t1 := fasttemplate.New(tmpl, `'{{`, `}}'`)
-	ts1 := t1.ExecuteFuncString(execFunc)
-
-	t2 := fasttemplate.New(ts1, `{{`, `}}`)
-	ts2 := t2.ExecuteFuncString(execFunc)
-
-	return ts2, st.am
-}
-
 func (sg *SuperGraph) initAllowList() error {
 	var ac allow.Config
 	var err error
 
-	if len(sg.conf.AllowListFile) == 0 {
-		sg.conf.UseAllowList = false
-		sg.log.Printf("WRN allow list disabled no file specified")
+	if sg.conf.AllowListFile == "" {
+		sg.conf.AllowListFile = "allow.list"
 	}
 
-	if sg.conf.UseAllowList {
-		ac = allow.Config{CreateIfNotExists: true, Persist: true}
+	// When the allow list is not enabled it is still created
+	// and new queries are saved to it.
+	if !sg.conf.UseAllowList {
+		ac = allow.Config{CreateIfNotExists: true, Persist: true, Log: sg.log}
 	}
 
 	sg.allowList, err = allow.New(sg.conf.AllowListFile, ac)
@@ -269,9 +159,11 @@ func (sg *SuperGraph) initAllowList() error {
 }
 
 // nolint: errcheck
-func stmtHash(name string, role string) string {
-	h := sha1.New()
-	io.WriteString(h, strings.ToLower(name))
-	io.WriteString(h, role)
-	return hex.EncodeToString(h.Sum(nil))
+func queryID(h *maphash.Hash, name, role string) uint64 {
+	h.WriteString(name)
+	h.WriteString(role)
+	v := h.Sum64()
+	h.Reset()
+	return v
 }
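queryID keys the in-memory query map by (name, role). hash/maphash values are stable only within one process (the seed comes from sg.hashSeed, set once at startup), which is exactly right for a map that is rebuilt on every start; reusing one Hash and calling Reset avoids per-key allocations. A short sketch:

	h := maphash.Hash{}
	h.SetSeed(maphash.MakeSeed()) // in Super Graph: sg.hashSeed
	k1 := queryID(&h, "getUser", "user")
	k2 := queryID(&h, "getUser", "anon") // same query, different role, different key
	_ = k1 != k2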


@@ -1,253 +1,252 @@
 package core

import (
	"bytes"
	"errors"
	"fmt"
	"hash/maphash"
	"net/http"
	"sync"

	"github.com/dosco/super-graph/core/internal/qcode"
	"github.com/dosco/super-graph/jsn"
)

func (sg *SuperGraph) execRemoteJoin(st *stmt, data []byte, hdr http.Header) ([]byte, error) {
	var err error

	sel := st.qc.Selects
	h := maphash.Hash{}
	h.SetSeed(sg.hashSeed)

	// fetch the field names used within the db response json
	// that mark the insertion points, and the mapping between
	// those field names and their select objects
	fids, sfmap := sg.parentFieldIds(&h, sel, st.md.Skipped())

	// fetch the field values of the marked insertion points
	// these values contain the id to be used when fetching remote data
	from := jsn.Get(data, fids)
	var to []jsn.Field

	switch {
	case len(from) == 1:
		to, err = sg.resolveRemote(hdr, &h, from[0], sel, sfmap)

	case len(from) > 1:
		to, err = sg.resolveRemotes(hdr, &h, from, sel, sfmap)

	default:
		return nil, errors.New("no remote ids found in db response")
	}

	if err != nil {
		return nil, err
	}

	var ob bytes.Buffer

	err = jsn.Replace(&ob, data, from, to)
	if err != nil {
		return nil, err
	}

	return ob.Bytes(), nil
}

func (sg *SuperGraph) resolveRemote(
	hdr http.Header,
	h *maphash.Hash,
	field jsn.Field,
	sel []qcode.Select,
	sfmap map[uint64]*qcode.Select) ([]jsn.Field, error) {

	// replacement data for the marked insertion points
	// key and value will be replaced by what's below
	toA := [1]jsn.Field{}
	to := toA[:1]

	// use the json key to find the related Select object
	_, _ = h.Write(field.Key)
	k1 := h.Sum64()

	s, ok := sfmap[k1]
	if !ok {
		return nil, nil
	}
	p := sel[s.ParentID]

	// then use the table name in the Select and its parent
	// to find the resolver to use for this relationship
	k2 := mkkey(h, s.Name, p.Name)

	r, ok := sg.rmap[k2]
	if !ok {
		return nil, nil
	}

	id := jsn.Value(field.Value)
	if len(id) == 0 {
		return nil, nil
	}

	//st := time.Now()

	b, err := r.Fn(hdr, id)
	if err != nil {
		return nil, err
	}

	if len(r.Path) != 0 {
		b = jsn.Strip(b, r.Path)
	}

	var ob bytes.Buffer

	if len(s.Cols) != 0 {
		err = jsn.Filter(&ob, b, colsToList(s.Cols))
		if err != nil {
			return nil, err
		}
	} else {
		ob.WriteString("null")
	}

	to[0] = jsn.Field{Key: []byte(s.FieldName), Value: ob.Bytes()}
	return to, nil
}

func (sg *SuperGraph) resolveRemotes(
	hdr http.Header,
	h *maphash.Hash,
	from []jsn.Field,
	sel []qcode.Select,
	sfmap map[uint64]*qcode.Select) ([]jsn.Field, error) {

	// replacement data for the marked insertion points
	// key and value will be replaced by what's below
	to := make([]jsn.Field, len(from))

	var wg sync.WaitGroup
	wg.Add(len(from))

	var cerr error

	for i, id := range from {
		// use the json key to find the related Select object
		_, _ = h.Write(id.Key)
		k1 := h.Sum64()

		s, ok := sfmap[k1]
		if !ok {
			return nil, nil
		}
		p := sel[s.ParentID]

		// then use the table name in the Select and its parent
		// to find the resolver to use for this relationship
		k2 := mkkey(h, s.Name, p.Name)

		r, ok := sg.rmap[k2]
		if !ok {
			return nil, nil
		}

		id := jsn.Value(id.Value)
		if len(id) == 0 {
			return nil, nil
		}

		go func(n int, id []byte, s *qcode.Select) {
			defer wg.Done()

			//st := time.Now()

			b, err := r.Fn(hdr, id)
			if err != nil {
				cerr = fmt.Errorf("%s: %s", s.Name, err)
				return
			}

			if len(r.Path) != 0 {
				b = jsn.Strip(b, r.Path)
			}

			var ob bytes.Buffer

			if len(s.Cols) != 0 {
				err = jsn.Filter(&ob, b, colsToList(s.Cols))
				if err != nil {
					cerr = fmt.Errorf("%s: %s", s.Name, err)
					return
				}
			} else {
				ob.WriteString("null")
			}

			to[n] = jsn.Field{Key: []byte(s.FieldName), Value: ob.Bytes()}
		}(i, id, s)
	}
	wg.Wait()

	return to, cerr
}

func (sg *SuperGraph) parentFieldIds(h *maphash.Hash, sel []qcode.Select, skipped uint32) (
	[][]byte,
	map[uint64]*qcode.Select) {

	c := 0
	for i := range sel {
		s := &sel[i]
		if isSkipped(skipped, uint32(s.ID)) {
			c++
		}
	}

	// list of keys (and their related values) to extract from
	// the db json response
	fm := make([][]byte, c)

	// mapping between the above extracted keys and a Select
	// object
	sm := make(map[uint64]*qcode.Select, c)
	n := 0

	for i := range sel {
		s := &sel[i]

		if !isSkipped(skipped, uint32(s.ID)) {
			continue
		}

		p := sel[s.ParentID]
		k := mkkey(h, s.Name, p.Name)

		if r, ok := sg.rmap[k]; ok {
			fm[n] = r.IDField
			n++

			_, _ = h.Write(r.IDField)
			sm[h.Sum64()] = s
		}
	}

	return fm, sm
}

func isSkipped(n, pos uint32) bool {
	return ((n & (1 << pos)) != 0)
}

func colsToList(cols []qcode.Column) []string {
	var f []string

	for i := range cols {
		f = append(f, cols[i].Name)
	}
	return f
}
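isSkipped treats skipped as a bitset over select IDs: bit s.ID is set when the SQL layer skipped that select so it can be filled in from a remote. A small, self-contained illustration (values made up):

	package main

	import "fmt"

	func isSkipped(n, pos uint32) bool {
		return ((n & (1 << pos)) != 0)
	}

	func main() {
		skipped := uint32(0b0110) // selects 1 and 2 were skipped
		for id := uint32(0); id < 4; id++ {
			fmt.Println(id, isSkipped(skipped, id)) // false, true, true, false
		}
	}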


@@ -2,94 +2,96 @@ package core
 
 import (
 	"fmt"
+	"hash/maphash"
 	"io/ioutil"
 	"net/http"
 	"strings"
 
+	"github.com/dosco/super-graph/core/internal/psql"
 	"github.com/dosco/super-graph/jsn"
 )
 
-var (
-	rmap map[uint64]*resolvFn
-)
-
 type resolvFn struct {
 	IDField []byte
 	Path    [][]byte
 	Fn      func(h http.Header, id []byte) ([]byte, error)
 }
 
func (sg *SuperGraph) initResolvers() error {
	var err error
	sg.rmap = make(map[uint64]resolvFn)

	for _, t := range sg.conf.Tables {
		err = sg.initRemotes(t)
		if err != nil {
			break
		}
	}

	if err != nil {
		return fmt.Errorf("failed to initialize resolvers: %v", err)
	}

	return nil
}

func (sg *SuperGraph) initRemotes(t Table) error {
	h := maphash.Hash{}
	h.SetSeed(sg.hashSeed)

	for _, r := range t.Remotes {
		// defines the table column to be used as an id in the
		// remote request
		idcol := r.ID

		// if no table column is specified in the config then
		// use the primary key of the table as the id
		if idcol == "" {
			pcol, err := sg.pc.IDColumn(t.Name)
			if err != nil {
				return err
			}
			idcol = pcol.Key
		}
		idk := fmt.Sprintf("__%s_%s", t.Name, idcol)

		// register a relationship between the remote data
		// and the database table

		val := &psql.DBRel{Type: psql.RelRemote}
		val.Left.Col = idcol
		val.Right.Col = idk

		err := sg.pc.AddRelationship(sanitize(r.Name), t.Name, val)
		if err != nil {
			return err
		}

		// the function that's called to resolve this remote
		// data request
		fn := buildFn(r)

		path := [][]byte{}
		for _, p := range strings.Split(r.Path, ".") {
			path = append(path, []byte(p))
		}

		rf := resolvFn{
			IDField: []byte(idk),
			Path:    path,
			Fn:      fn,
		}

		// index resolver obj by parent and child names
		sg.rmap[mkkey(&h, r.Name, t.Name)] = rf

		// index resolver obj by IDField
		_, _ = h.Write(rf.IDField)
		sg.rmap[h.Sum64()] = rf
	}

	return nil
}

 func buildFn(r Remote) func(http.Header, []byte) ([]byte, error) {
 	reqURL := strings.Replace(r.URL, "$id", "%s", 1)
@@ -114,16 +116,13 @@ func buildFn(r Remote) func(http.Header, []byte) ([]byte, error) {
 			req.Header.Set(v, hdr.Get(v))
 		}
 
-		// logger.Debug().Str("uri", uri).Msg("Remote Join")
-
 		res, err := client.Do(req)
 		if err != nil {
-			// errlog.Error().Err(err).Msgf("Failed to connect to: %s", uri)
-			return nil, err
+			return nil, fmt.Errorf("failed to connect to '%s': %v", uri, err)
 		}
 		defer res.Body.Close()
 
-		if r.Debug {
+		// if r.Debug {
 		// 	reqDump, err := httputil.DumpRequestOut(req, true)
 		// 	if err != nil {
 		// 		return nil, err
@@ -136,7 +135,7 @@ func buildFn(r Remote) func(http.Header, []byte) ([]byte, error) {
 		// 	logger.Debug().Msgf("Remote Request Debug:\n%s\n%s",
 		// 		reqDump, resDump)
-		}
+		// }
 
 		if res.StatusCode != 200 {
 			return nil,
core/utils.go (new file)

@@ -0,0 +1,13 @@
package core
import "hash/maphash"
// nolint: errcheck
func mkkey(h *maphash.Hash, k1, k2 string) uint64 {
h.WriteString(k1)
h.WriteString(k2)
v := h.Sum64()
h.Reset()
return v
}
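mkkey hashes a (child, parent) table-name pair into the uint64 key used for sg.rmap; Reset makes the same Hash reusable for the next key, so there is no allocation per lookup. Intended use, sketched (the seed must be the same one used when the map was built, i.e. sg.hashSeed):

	h := maphash.Hash{}
	h.SetSeed(sg.hashSeed)
	k := mkkey(&h, "payments", "users") // child, parent
	r, ok := sg.rmap[k]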

debian/compat (new file, vendored)

@@ -0,0 +1 @@
9

debian/control (new file, vendored)

@@ -0,0 +1,14 @@
Source: super-graph
Section: unknown
Priority: optional
Maintainer: William Petit <wpetit@cadoles.com>
Build-Depends: debhelper (>= 8.0.0), wget, ca-certificates, tar, curl
Standards-Version: 3.9.4
Homepage: http://forge.cadoles.com/wpetit/super-graph
Vcs-Git: http://forge.cadoles.com/wpetit/super-graph.git
Vcs-Browser: http://forge.cadoles.com/wpetit/super-graph
Package: super-graph
Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: Kanboard-like application connected to Gitea

debian/rules (new file, vendored)

@@ -0,0 +1,36 @@
#!/usr/bin/make -f
# -*- makefile -*-
# Uncomment this to turn on verbose mode.
export DH_VERBOSE=1
GO_VERSION := 1.14.4
OS := linux
ARCH := amd64
GOPATH=$(HOME)/go
ifeq (, $(shell which go 2>/dev/null))
override_dh_auto_build: install-go
endif
%:
dh $@
override_dh_auto_build: $(GOPATH)
GOPATH=$(GOPATH) PATH="$(PATH):/usr/local/go/bin:$(GOPATH)/bin" make linux
$(GOPATH):
mkdir -p $(GOPATH)
install-go:
wget https://dl.google.com/go/go$(GO_VERSION).$(OS)-$(ARCH).tar.gz
tar -C /usr/local -xzf go$(GO_VERSION).$(OS)-$(ARCH).tar.gz
override_dh_auto_install:
mkdir -p debian/super-graph/usr/bin
cp release/super-graph-*-linux-amd64 debian/super-graph/usr/bin/super-graph
install -d debian/gengitkan
override_dh_strip:
override_dh_auto_test:

debian/source/format (new file, vendored)

@@ -0,0 +1 @@
3.0 (native)


@@ -1,5 +1,5 @@
 <template>
-  <div class="shadow bg-white p-4 flex items-start" :class="className">
+  <div class="shadow p-4 flex items-start" :class="className">
     <slot name="image"></slot>
     <div class="pl-4">
       <h2 class="p-0">


@ -2,33 +2,33 @@
<div> <div>
<main aria-labelledby="main-title" > <main aria-labelledby="main-title" >
<Navbar /> <Navbar />
<div style="height: 3.6rem"></div>
<div class="container mx-auto mt-24"> <div class="container mx-auto pt-4">
<div class="text-center"> <div class="text-center">
<div class="text-center text-4xl text-gray-800 leading-tight"> <div class="text-center text-3xl md:text-4xl text-black leading-tight font-semibold">
Fetch data without code Fetch data without code
</div> </div>
<NavLink <NavLink
class="inline-block px-4 py-3 my-8 bg-blue-600 text-blue-100 font-bold rounded" class="inline-block px-4 py-3 my-8 bg-blue-600 text-white font-bold rounded"
:item="actionLink" :item="actionLink"
/> />
<a <a
class="px-4 py-3 my-8 border-2 border-gray-500 text-gray-600 font-bold rounded" class="px-4 py-3 my-8 border-2 border-blue-600 text-blue-600 font-bold rounded"
href="https://github.com/dosco/super-graph" href="https://github.com/dosco/super-graph"
target="_blank" target="_blank"
>Github</a> >Github</a>
</div> </div>
</div> </div>
<div class="container mx-auto mb-8">
<div class="container mx-auto mb-8 mt-0 md:mt-20 bg-green-100">
<div class="flex flex-wrap"> <div class="flex flex-wrap">
<div class="w-100 md:w-1/2 bg-indigo-300 text-indigo-800 text-lg p-4"> <div class="w-100 md:w-1/2 border border-green-500 text-gray-6 00 text-sm md:text-lg p-6">
<div class="text-center text-2xl font-bold pb-2">Before, struggle with SQL</div> <div class="text-xl font-bold pb-4">Before, struggle with SQL</div>
<pre> <pre>
type User struct { type User struct {
gorm.Model gorm.Model
Profile Profile Profile Profile
@ -50,8 +50,9 @@ db.Model(&user).
and more ... and more ...
</pre> </pre>
</div> </div>
<div class="w-100 md:w-1/2 bg-green-300 text-black text-lg p-4">
<div class="text-center text-2xl font-bold pb-2">With Super Graph, just ask.</div> <div class="w-100 md:w-1/2 border border-l md:border-l-0 border-green-500 text-blue-900 text-sm md:text-lg p-6">
<div class="text-xl font-bold pb-4">With Super Graph, just ask.</div>
<pre> <pre>
query { query {
user(id: 5) { user(id: 5) {
@ -59,26 +60,24 @@ query {
first_name first_name
last_name last_name
picture_url picture_url
}
posts(first: 20, order_by: { score: desc }) { posts(first: 20, order_by: { score: desc }) {
slug slug
title title
created_at created_at
cached_votes_total votes_total
vote(where: { user_id: { eq: $user_id } }) { votes { created_at }
id
}
author { id name } author { id name }
tags { id name } tags { id name }
} }
posts_cursor posts_cursor
}
} }
</pre> </pre>
</div> </div>
</div> </div>
</div> </div>
<div> <div class="mt-0 md:mt-20">
<div <div
class="flex flex-wrap mx-2 md:mx-20" class="flex flex-wrap mx-2 md:mx-20"
v-if="data.features && data.features.length" v-if="data.features && data.features.length"
@ -89,42 +88,29 @@ query {
:key="index" :key="index"
> >
<div class="p-8"> <div class="p-8">
<h2 class="md:text-xl text-blue-800 font-medium border-0 mb-1">{{ feature.title }}</h2> <h2 class="text-lg uppercase border-0">{{ feature.title }}</h2>
<p class="md:text-xl text-gray-700 leading-snug">{{ feature.details }}</p> <div class="text-xl text-gray-900 leading-snug">{{ feature.details }}</div>
</div> </div>
</div> </div>
</div> </div>
</div> </div>
<div class="bg-gray-100 mt-10"> <div class="pt-0 md:pt-20">
<div class="container mx-auto px-10 md:px-0 py-32"> <div class="container mx-auto p-10">
<div class="pb-8 hidden md:flex justify-center"> <div class="flex justify-center pb-20">
<img src="arch-basic.svg"> <img src="arch-basic.svg">
</div> </div>
<h1 class="uppercase font-semibold text-xl text-blue-800 text-center mb-4">
What is Super Graph?
</h1>
<div class="text-2xl md:text-3xl"> <div class="text-2xl md:text-3xl">
Super Graph is a library and service that fetches data from any Postgres database using just GraphQL. No more struggling with ORMs and SQL to wrangle data out of the database. No more having to figure out the right joins or making ineffiient queries. However complex the GraphQL, Super Graph will always generate just one single efficient SQL query. The goal is to save you time and money so you can focus on you're apps core value. Super Graph is a library and service that fetches data from any Postgres database using just GraphQL. No more struggling with ORMs and SQL to wrangle data out of the database. No more having to figure out the right joins or making inefficient queries. However complex the GraphQL, Super Graph will always generate just one single efficient SQL query. The goal is to save you time and money so you can focus on your app's core value.
</div> </div>
</div> </div>
</div> </div>
<div class="container mx-auto flex flex-wrap"> <div class="pt-20">
<div class="md:w-1/2">
<img src="/graphql.png">
</div>
<div class="md:w-1/2">
<img src="/json.png">
</div>
</div>
<div class="mt-10 py-10 md:py-20">
<div class="container mx-auto px-10 md:px-0"> <div class="container mx-auto px-10 md:px-0">
<h1 class="uppercase font-semibold text-2xl text-blue-800 text-center"> <h1 class="uppercase font-semibold text-2xl text-blue-800 text-center">
Try Super Graph Try Super Graph
@ -133,20 +119,18 @@ query {
<h1 class="uppercase font-semibold text-lg text-gray-800"> <h1 class="uppercase font-semibold text-lg text-gray-800">
Deploy as a service using docker Deploy as a service using docker
</h1> </h1>
<div class="bg-gray-800 text-indigo-300 p-4 rounded"> <div class="p-4 rounded bg-black text-white">
<pre>$ git clone https://github.com/dosco/super-graph && cd super-graph && make install</pre> <pre>$ git clone https://github.com/dosco/super-graph && cd super-graph && make install</pre>
<pre>$ super-graph new blog; cd blog</pre> <pre>$ super-graph new blog; cd blog</pre>
<pre>$ docker-compose run blog_api ./super-graph db:setup</pre> <pre>$ docker-compose run blog_api ./super-graph db:setup</pre>
<pre>$ docker-compose up</pre> <pre>$ docker-compose up</pre>
</div> </div>
<div class="border-t mt-4 pb-4"></div>
<h1 class="uppercase font-semibold text-lg text-gray-800"> <h1 class="uppercase font-semibold text-lg text-gray-800">
Or use it with your own code Or use it with your own code
</h1> </h1>
<div class="text-md"> <div class="text-md">
<pre class="bg-gray-800 text-indigo-300 p-4 rounded"> <pre class="p-4 rounded bg-black text-white">
package main package main
import ( import (
@ -161,17 +145,12 @@ import (
func main() { func main() {
db, err := sql.Open("pgx", "postgres://postgrs:@localhost:5432/example_db") db, err := sql.Open("pgx", "postgres://postgres:@localhost:5432/example_db")
if err != nil { if err != nil {
log.Fatalf(err) log.Fatal(err)
} }
conf, err := config.NewConfig("./config") sg, err := core.NewSuperGraph(nil, db)
if err != nil { if err != nil {
log.Fatalf(err) log.Fatal(err)
}
sg, err = core.NewSuperGraph(conf, db)
if err != nil {
log.Fatalf(err)
} }
graphqlQuery := ` graphqlQuery := `
@ -184,7 +163,7 @@ func main() {
res, err := sg.GraphQL(context.Background(), graphqlQuery, nil) res, err := sg.GraphQL(context.Background(), graphqlQuery, nil)
if err != nil { if err != nil {
log.Fatalf(err) log.Fatal(err)
} }
fmt.Println(string(res.Data)) fmt.Println(string(res.Data))
@ -194,7 +173,7 @@ func main() {
</div> </div>
</div> </div>
<div class="bg-gray-100 mt-10"> <div class="pt-0 md:pt-20">
<div class="container mx-auto px-10 md:px-0 py-32"> <div class="container mx-auto px-10 md:px-0 py-32">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4"> <h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
The story of {{ data.heroText }} The story of {{ data.heroText }}
@ -245,7 +224,7 @@ func main() {
</div> </div>
--> -->
<div class="border-t py-10"> <div class="pt-0 md:pt-20">
<div class="block md:hidden w-100"> <div class="block md:hidden w-100">
<iframe src='https://www.youtube.com/embed/MfPL2A-DAJk' frameborder='0' allowfullscreen style="width: 100%; height: 250px;"> <iframe src='https://www.youtube.com/embed/MfPL2A-DAJk' frameborder='0' allowfullscreen style="width: 100%; height: 250px;">
</iframe> </iframe>
@ -267,7 +246,101 @@ func main() {
</div> </div>
</div> </div>
<div class="bg-gray-200 mt-10"> <div class="container mx-auto pt-0 md:pt-20">
<div class="flex flex-wrap bg-green-100">
<div class="w-100 md:w-1/2 border border-green-500 text-gray-6 00 text-sm md:text-lg p-6">
<div class="text-xl font-bold pb-4">No more joins joins, json, orms, just use GraphQL. Fetch all the data want in the structure you need.</div>
<pre>
query {
thread {
slug
title
published
createdAt : created_at
totalVotes : cached_votes_total
totalPosts : cached_posts_total
vote : thread_vote(where: { user_id: { eq: $user_id } }) {
created_at
}
topics {
slug
name
}
author : me {
slug
}
posts(first: 1, order_by: { score: desc }) {
slug
body
published
createdAt : created_at
totalVotes : cached_votes_total
totalComments : cached_comments_total
vote {
created_at
}
author : user {
slug
firstName : first_name
lastName : last_name
}
}
posts_cursor
}
}
</pre>
</div>
<div class="w-100 md:w-1/2 border border-l md:border-l-0 border-green-500 text-blue-900 text-sm md:text-lg p-6">
<div class="text-xl font-bold pb-4">Instant results using a single highly optimized SQL. It's just that simple.</div>
<pre>
{
"data": {
"thread": {
"slug": "eveniet-ex-24",
"vote": null,
"posts": [
{
"body": "Dolor laborum harum sed sit est ducimus temporibus velit non nobis repudiandae nobis suscipit commodi voluptatem debitis sed voluptas sequi officia.",
"slug": "illum-in-voluptas-1418",
"vote": null,
"author": {
"slug": "sigurd-kemmer",
"lastName": "Effertz",
"firstName": "Brandt"
},
"createdAt": "2020-04-07T04:22:42.115874+00:00",
"published": true,
"totalVotes": 0,
"totalComments": 2
}
],
"title": "In aut qui deleniti quia dolore quasi porro tenetur voluptatem ut adita alias fugit explicabo.",
"author": null,
"topics": [
{
"name": "CloudRun",
"slug": "cloud-run"
},
{
"name": "Postgres",
"slug": "postgres"
}
],
"createdAt": "2020-04-07T04:22:38.099482+00:00",
"published": true,
"totalPosts": 24,
"totalVotes": 0,
"posts_cursor": "mpeBl6L+QfJHc3cmLkLDj9pOdEZYTt5KQtLsazG3TLITB3hJhg=="
}
}
}
</pre>
</div>
</div>
</div>
<div class="pt-0 md:pt-20">
<div class="container mx-auto px-10 md:px-0 py-32"> <div class="container mx-auto px-10 md:px-0 py-32">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4"> <h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
Build Secure Apps Build Secure Apps
@ -292,8 +365,8 @@ func main() {
</div> </div>
</div> </div>
<div class=""> <div class="pt-0 md:py -20">
<div class="container mx-auto px-10 md:px-0 py-32"> <div class="container mx-auto">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4"> <h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
More Features More Features
</h1> </h1>


@ -15,7 +15,7 @@ module.exports = {
{ text: 'Internals', link: '/internals' }, { text: 'Internals', link: '/internals' },
{ text: 'Github', link: 'https://github.com/dosco/super-graph' }, { text: 'Github', link: 'https://github.com/dosco/super-graph' },
{ text: 'Docker', link: 'https://hub.docker.com/r/dosco/super-graph/builds' }, { text: 'Docker', link: 'https://hub.docker.com/r/dosco/super-graph/builds' },
{ text: 'Join Chat', link: 'https://discord.gg/NKdXBc' }, { text: 'Join Chat', link: 'https://discord.com/invite/23Wh7c' },
], ],
serviceWorker: { serviceWorker: {
updatePopup: true updatePopup: true


@ -1,6 +1,9 @@
@tailwind base; @tailwind base;
@css { @css {
body, .navbar, .navbar .links {
@apply bg-white text-black border-0 !important;
}
h1 { h1 {
@apply font-semibold text-3xl border-0 py-4 @apply font-semibold text-3xl border-0 py-4
} }


@ -10,7 +10,7 @@ longTagline: Get an instant high performance GraphQL API for Postgres. No code n
actionText: Get Started, Free, Open Source → actionText: Get Started, Free, Open Source →
actionLink: /guide actionLink: /guide
description: Super Graph can automatically learn a Postgres database and instantly serve it as a fast and secured GraphQL API. It comes with tools to create a new app and manage it's database. You get it all, a very productive developer and a highly scalable app backend. It's designed to work well on serverless platforms by Google, AWS, Microsoft, etc. The goal is to save you a ton of time and money so you can focus on you're apps core value. description: Super Graph can automatically learn a Postgres database and instantly serve it as a fast and secured GraphQL API. It comes with tools to create a new app and manage its database. You get it all, a very productive developer and a highly scalable app backend. It's designed to work well on serverless platforms by Google, AWS, Microsoft, etc. The goal is to save you a ton of time and money so you can focus on your app's core value.
features: features:
- title: Simple - title: Simple


@ -32,7 +32,7 @@ For this to work you have to ensure that the option `:domain => :all` is added t
### With an NGINX loadbalancer ### With an NGINX loadbalancer
If you're infrastructure is fronted by NGINX then it should be configured so that all requests to your GraphQL API path are proxyed to Super Graph. In the example NGINX config below all requests to the path `/api/v1/graphql` are routed to wherever you have Super Graph installed within your architecture. This example is derived from the config file example at [/microservices-nginx-gateway/nginx.conf](https://github.com/launchany/microservices-nginx-gateway/blob/master/nginx.conf) If your infrastructure is fronted by NGINX then it should be configured so that all requests to your GraphQL API path are proxied to Super Graph. In the example NGINX config below all requests to the path `/api/v1/graphql` are routed to wherever you have Super Graph installed within your architecture. This example is derived from the config file example at [/microservices-nginx-gateway/nginx.conf](https://github.com/launchany/microservices-nginx-gateway/blob/master/nginx.conf)
::: tip NGINX with sub-domain ::: tip NGINX with sub-domain
Yes, NGINX is very flexible and you can configure it to keep Super Graph on a subdomain instead of on the same top level domain. I'm sure a little Googling will get you some great example configs for that. Yes, NGINX is very flexible and you can configure it to keep Super Graph on a subdomain instead of on the same top level domain. I'm sure a little Googling will get you some great example configs for that.


@ -347,12 +347,10 @@ beer_style
beer_yeast beer_yeast
// Cars // Cars
vehicle car
vehicle_type car_type
car_maker car_maker
car_model car_model
fuel_type
transmission_gear_type
// Text // Text
word word
@ -438,8 +436,8 @@ hipster_paragraph
hipster_sentence hipster_sentence
// File // File
extension file_extension
mine_type file_mine_type
// Numbers // Numbers
number number
@ -463,11 +461,18 @@ mac_address
digit digit
letter letter
lexify lexify
rand_string
shuffle_strings shuffle_strings
numerify numerify
``` ```
Other utility functions
```
shuffle_strings(string_array)
make_slug(text)
make_slug_lang(text, lang)
```
### Migrations ### Migrations
Easy database migrations are the most important thing when building products backed by a relational database. We make it super easy to manage and migrate your database. Easy database migrations are the most important thing when building products backed by a relational database. We make it super easy to manage and migrate your database.
@ -725,6 +730,32 @@ query {
} }
``` ```
### Custom Functions
Any function defined in the database like the below `add_five` that adds 5 to any number given to it can be used
within your query. The one limitation is that it should be a function that only accepts a single argument. The function is used within your GraphQL in a similar way to how aggregations are used above. Example below:
```graphql
query {
thread(id: 5) {
id
total_votes
add_five_total_votes
}
}
```
Postgres user-defined function `add_five`
```sql
CREATE OR REPLACE FUNCTION add_five(a integer) RETURNS integer AS $$
BEGIN
RETURN a + 5;
END;
$$ LANGUAGE plpgsql;
```
In GraphQL, mutations are the operation type used when you need to modify data. Super Graph supports `insert`, `update`, `upsert` and `delete`. You can also do complex nested inserts and updates. In GraphQL, mutations are the operation type used when you need to modify data. Super Graph supports `insert`, `update`, `upsert` and `delete`. You can also do complex nested inserts and updates.
When using mutations the data must be passed as variables since Super Graph compiles the query into a prepared statement in the database for maximum speed. Prepared statements are like functions in your code: when called they accept arguments and your variables are passed in as those arguments. When using mutations the data must be passed as variables since Super Graph compiles the query into a prepared statement in the database for maximum speed. Prepared statements are like functions in your code: when called they accept arguments and your variables are passed in as those arguments.
@ -1038,7 +1069,7 @@ mutation {
### Pagination ### Pagination
This is a must have feature of any API. When you want your users to go thought a list page by page or implement some fancy infinite scroll you're going to need pagination. There are two ways to paginate in Super Graph. This is a must have feature of any API. When you want your users to go through a list page by page or implement some fancy infinite scroll you're going to need pagination. There are two ways to paginate in Super Graph.
Limit-Offset Limit-Offset
This is simple enough but also inefficient when working with a large number of total items. `limit` limits the number of items fetched and `offset` is the point you want to fetch from. The below query will fetch 10 results at a time starting with the 100th item. You will have to keep updating offset (110, 120, 130, etc) to walk through the results so make offset a variable. This is simple enough but also inefficient when working with a large number of total items. `limit` limits the number of items fetched and `offset` is the point you want to fetch from. The below query will fetch 10 results at a time starting with the 100th item. You will have to keep updating offset (110, 120, 130, etc) to walk through the results so make offset a variable.
@ -1054,7 +1085,7 @@ query {
``` ```
#### Cursor #### Cursor
This is a powerful and highly efficient way to paginate though a large number of results. Infact it does not matter how many total results there are this will always be lighting fast. You can use a cursor to walk forward of backward though the results. If you plan to implement infinite scroll this is the option you should choose. This is a powerful and highly efficient way to paginate a large number of results. In fact it does not matter how many total results there are, this will always be lightning fast. You can use a cursor to walk forward or backward through the results. If you plan to implement infinite scroll this is the option you should choose.
When going this route the results will contain a cursor value. This is an encrypted string that you don't have to worry about, just pass it back in with the next API call and you'll receive the next set of results. The cursor value is encrypted since its contents should only matter to Super Graph and not the client. Also since the primary key is used for this feature it's possible you might not want to leak its value to clients. When going this route the results will contain a cursor value. This is an encrypted string that you don't have to worry about, just pass it back in with the next API call and you'll receive the next set of results. The cursor value is encrypted since its contents should only matter to Super Graph and not the client. Also since the primary key is used for this feature it's possible you might not want to leak its value to clients.
@ -1704,7 +1735,7 @@ reload_on_config_change: true
# seed_file: seed.js # seed_file: seed.js
# Path pointing to where the migrations can be found # Path pointing to where the migrations can be found
migrations_path: ./config/migrations migrations_path: ./migrations
# Postgres related environment Variables # Postgres related environment Variables
# SG_DATABASE_HOST # SG_DATABASE_HOST
@ -1790,12 +1821,31 @@ database:
# Enable this if you need the user id in triggers, etc # Enable this if you need the user id in triggers, etc
set_user_id: false set_user_id: false
# Define additional variables here to be used with filters # database ping timeout is used for db health checking
variables: ping_timeout: 1m
# Set up an secure tls encrypted db connection
enable_tls: false
# Required for tls. For example with Google Cloud SQL it's
# <gcp-project-id>:<cloud-sql-instance>"
# server_name: blah
# Required for tls. Can be a file path or the contents of the pem file
# server_cert: ./server-ca.pem
# Required for tls. Can be a file path or the contents of the pem file
# client_cert: ./client-cert.pem
# Required for tls. Can be a file path or the contents of the pem file
# client_key: ./client-key.pem
# Define additional variables here to be used with filters
variables:
admin_account_id: "5" admin_account_id: "5"
# Field and table names that you wish to block # Field and table names that you wish to block
blocklist: blocklist:
- ar_internal_metadata - ar_internal_metadata
- schema_migrations - schema_migrations
- secret - secret

docs/website/.gitignore (new file)

@ -0,0 +1,20 @@
# Dependencies
/node_modules
# Production
/build
# Generated files
.docusaurus
.cache-loader
# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*

docs/website/README.md (new file)

@ -0,0 +1,33 @@
# Website
This website is built using [Docusaurus 2](https://v2.docusaurus.io/), a modern static website generator.
### Installation
```
$ yarn
```
### Local Development
```
$ yarn start
```
This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
### Build
```
$ yarn build
```
This command generates static content into the `build` directory and can be served using any static content hosting service.
### Deployment
```
$ GIT_USER=<Your GitHub username> USE_SSH=true yarn deploy
```
If you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.

docs/website/docs/advanced.md (new file)

@ -0,0 +1,207 @@
---
id: advanced
title: Full Text Search & More
sidebar_label: More Features
---
## Database Relationships
In most cases you don't need this configuration; Super Graph will discover and learn
the relationship graph within your database automatically. It does this using the `Foreign Key` relationships that you have defined in your database schema.
The below configs are only needed in special cases such as when you don't use foreign keys or when you want to create a relationship between two tables where a foreign key is not defined or cannot be defined.
For example in the sample below a relationship is defined between the `tags` column on the `posts` table and the `slug` column on the `tags` table. This cannot be defined using foreign keys since the `tags` column is of type array `text[]` and Postgres, for one, does not allow foreign keys with array columns.
```yaml
tables:
- name: posts
columns:
- name: tags
related_to: tags.slug
```
## Advanced Columns
The ability to have `JSON/JSONB` and `Array` columns is often considered among the most useful features of Postgres. There are many cases where using an array or a json column saves space and reduces complexity in your app. The only issue with these columns is really that your SQL queries can get harder to write and maintain.
Super Graph steps in here to help you by supporting these columns right out of the box. It allows you to work with these columns just like you would with tables. Joining data against or modifying array columns using the `connect` or `disconnect` keywords in mutations is fully supported. Another very useful feature is the ability to treat `json` or `binary json (jsonb)` columns as separate tables, even using them in nested queries joining against related tables. To replicate these features on your own would take a lot of complex SQL. Using Super Graph means you don't have to deal with any of this; it just works.
### Array Columns
Configure a relationship between an array column `tag_ids`, which contains integer ids for tags, and the column `id` in the table `tags`.
```yaml
tables:
- name: posts
columns:
- name: tag_ids
related_to: tags.id
```
```graphql
query {
posts {
title
tags {
name
image
}
}
}
```
### JSON Column
Configure a JSON column called `tag_count` in the table `products` into a separate table. This JSON column contains a json array of objects, each with a tag id and a count of the number of times the tag was used. As a separate table you can nest it into your GraphQL query and treat it like a table, using any of the standard features like `order_by`, `limit`, `where` clauses, etc.
The configuration below tells Super Graph to create a synthetic table called `tag_count` using the column `tag_count` from the `products` table. And that this new table has two columns `tag_id` and `count` of the listed types and with the defined relationships.
```yaml
tables:
- name: tag_count
table: products
columns:
- name: tag_id
type: bigint
related_to: tags.id
- name: count
type: int
```
```graphql
query {
products {
name
tag_counts {
count
tag {
name
}
}
}
}
```
## Remote Joins
It often happens that after fetching some data from the DB we need to call another API to fetch some more data and have all of it combined into a single JSON response. For example along with a list of users you need their last 5 payments from Stripe. This requires you to query your DB for the users and Stripe for the payments. Super Graph handles all this for you; also, only the fields you requested from the Stripe API are returned.
:::info Is this fast?
Super Graph is able to fetch remote data and merge it with the DB response in an efficient manner. Several optimizations such as parallel HTTP requests and a zero-allocation JSON merge algorithm make this very fast. All of this without you having to write a line of code.
:::
For example you need to list the last 3 payments made by a user. You will first need to look up the user in the database and then call the Stripe API to fetch their last 3 payments. For this to work the user table in the db must have a `customer_id` column that contains the Stripe customer ID.
Similarly you could also fetch the user's last tweet, lead info from Salesforce or whatever else you need. It's fine to mix several different `remote joins` into a single GraphQL query.
### Stripe Example
The configuration is self explanatory. A `payments` field has been added under the `customers` table. This field is added to the `remotes` subsection that defines fields associated with `customers` that are remote and not real database columns.
The `id` parameter maps a column from the `customers` table to the `$id` variable. In this case it maps `$id` to the `customer_id` column.
```yaml
tables:
- name: customers
remotes:
- name: payments
id: stripe_id
url: http://rails_app:3000/stripe/$id
path: data
# debug: true
# pass_headers:
# - cookie
# - host
set_headers:
- name: Authorization
value: Bearer <stripe_api_key>
```
#### How do I make use of this?
Just include `payments` like you would any other GraphQL selector under the `customers` selector. Super Graph will call the configured API for you and stitch (merge) the JSON the API sends back with the JSON generated from the database query. GraphQL features like aliases and fields all work.
```graphql
query {
customers {
id
email
payments {
customer_id
amount
billing_details
}
}
}
```
And voila, here is the result. You get all of this advanced and honestly complex querying capability without writing a single line of code.
```json
"data": {
"customers": [
{
"id": 1,
"email": "linseymertz@reilly.co",
"payments": [
{
"customer_id": "cus_YCj3ndB5Mz",
"amount": 100,
"billing_details": {
"address": "1 Infinity Drive",
"zipcode": "94024"
}
},
...
```
Even tracing data is available in the Super Graph web UI if tracing is enabled in the config. By default it is enabled in development. Additionally you can set `debug: true` there to enable http request / response dumping to help with debugging.
![Query Tracing](/tracing.png "Super Graph Web UI Query Tracing")
## Full text search
Every app these days needs search. Often this means reaching for something heavy like Solr. While this will work, why add complexity to your infrastructure when Postgres has really great
and fast full text search built-in? And since it's part of Postgres it's also available in Super Graph.
```graphql
query {
products(
# Search for all products that contain 'ale' or some version of it
search: "ale"
# Return only matches where the price is less than 10
where: { price: { lt: 10 } }
# Use the search_rank to order from the best match to the worst
order_by: { search_rank: desc }
) {
id
name
search_rank
search_headline_description
}
}
```
This query will use the `tsvector` column in your database table to search for products that contain the query phrase or some version of it. To get the internal relevance ranking for the search results, use the `search_rank` field. And to get the highlighted context within any of the table columns you can use the `search_headline_` field prefix. For example `search_headline_name` will return the contents of the product's name column with the matching query marked with `<b></b>` html tags.
```json
{
"data": {
"products": [
{
"id": 11,
"name": "Maharaj",
"search_rank": 0.243171,
"search_headline_description": "Blue Moon, Vegetable Beer, Willamette, 1007 - German <b>Ale</b>, 48 IBU, 7.9%, 11.8°Blg"
},
{
"id": 12,
"name": "Schneider Aventinus",
"search_rank": 0.243171,
"search_headline_description": "Dos Equis, Wood-aged Beer, Magnum, 1099 - Whitbread <b>Ale</b>, 15 IBU, 9.5%, 13.0°Blg"
},
...
```

docs/website/docs/config.md (new file)

@ -0,0 +1,335 @@
---
id: config
title: Configuration
sidebar_label: Configuration
---
Configuration files can be either YAML or JSON; their names are derived from the `GO_ENV` variable. For example `GO_ENV=prod` will cause the `prod.yaml` config file to be used, while `GO_ENV=dev` will use `dev.yaml`. A path to look for the config files in can be specified using the `-path <folder>` command line argument.
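For example, a minimal sketch of selecting a config per environment (this assumes the `serv` sub-command and the default `./config` folder of a generated app; adjust to your setup):
```bash
# Loads ./config/dev.yaml
GO_ENV=dev super-graph serv -path ./config

# Loads ./config/prod.yaml
GO_ENV=prod super-graph serv -path ./config
```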
We've tried to ensure that the config file is self documenting and easy to work with.
```yaml
# Inherit config from this other config file
# so I only need to overwrite some values
inherits: base
app_name: "Super Graph Development"
host_port: 0.0.0.0:8080
web_ui: true
# debug, error, warn, info
log_level: "debug"
# enable or disable http compression (uses gzip)
http_compress: true
# When production mode is 'true' only queries
# from the allow list are permitted.
# When it's 'false' all queries are saved to the
# the allow list in ./config/allow.list
production: false
# Throw a 401 on auth failure for queries that need auth
auth_fail_block: false
# Latency tracing for database queries and remote joins
# the resulting latency information is returned with the
# response
enable_tracing: true
# Watch the config folder and reload Super Graph
# with the new configs when a change is detected
reload_on_config_change: true
# File that points to the database seeding script
# seed_file: seed.js
# Path pointing to where the migrations can be found
migrations_path: ./migrations
# Postgres related environment Variables
# SG_DATABASE_HOST
# SG_DATABASE_PORT
# SG_DATABASE_USER
# SG_DATABASE_PASSWORD
# Auth related environment Variables
# SG_AUTH_RAILS_COOKIE_SECRET_KEY_BASE
# SG_AUTH_RAILS_REDIS_URL
# SG_AUTH_RAILS_REDIS_PASSWORD
# SG_AUTH_JWT_PUBLIC_KEY_FILE
# inflections:
# person: people
# sheep: sheep
auth:
# Can be 'rails' or 'jwt'
type: rails
cookie: _app_session
# Comment this out if you want to disable setting
# the user_id via a header for testing.
# Disable in production
creds_in_header: true
rails:
# Rails version this is used for reading the
# various cookies formats.
version: 5.2
# Found in 'Rails.application.config.secret_key_base'
secret_key_base: 0a248500a64c01184edb4d7ad3a805488f8097ac761b76aaa6c17c01dcb7af03a2f18ba61b2868134b9c7b79a122bc0dadff4367414a2d173297bfea92be5566
# Remote cookie store. (memcache or redis)
# url: redis://redis:6379
# password: ""
# max_idle: 80
# max_active: 12000
# In most cases you don't need these
# salt: "encrypted cookie"
# sign_salt: "signed encrypted cookie"
# auth_salt: "authenticated encrypted cookie"
# jwt:
# provider: auth0
# secret: abc335bfcfdb04e50db5bb0a4d67ab9
# public_key_file: /secrets/public_key.pem
# public_key_type: ecdsa #rsa
# header:
# name: dnt
# exists: true
# value: localhost:8080
# You can add additional named auths to use with actions
# In this example actions using this auth can only be
# called from the Google Appengine Cron service that
# sets a special header to all it's requests
auths:
- name: from_taskqueue
type: header
header:
name: X-Appengine-Cron
exists: true
database:
type: postgres
host: db
port: 5432
dbname: app_development
user: postgres
password: postgres
#schema: "public"
#pool_size: 10
#max_retries: 0
#log_level: "debug"
# Set session variable "user.id" to the user id
# Enable this if you need the user id in triggers, etc
set_user_id: false
# database ping timeout is used for db health checking
ping_timeout: 1m
# Set up an secure tls encrypted db connection
enable_tls: false
# Required for tls. For example with Google Cloud SQL it's
# <gcp-project-id>:<cloud-sql-instance>"
# server_name: blah
# Required for tls. Can be a file path or the contents of the pem file
# server_cert: ./server-ca.pem
# Required for tls. Can be a file path or the contents of the pem file
# client_cert: ./client-cert.pem
# Required for tls. Can be a file path or the contents of the pem file
# client_key: ./client-key.pem
# Define additional variables here to be used with filters
variables:
admin_account_id: "5"
# Field and table names that you wish to block
blocklist:
- ar_internal_metadata
- schema_migrations
- secret
- password
- encrypted
- token
# Create custom actions with their own api endpoints
# For example the below action will be available at /api/v1/actions/refresh_leaderboard_users
# A request to this url will execute the configured SQL query
# which in this case refreshes a materialized view in the database.
# The auth_name is from one of the configured auths
actions:
- name: refresh_leaderboard_users
sql: REFRESH MATERIALIZED VIEW CONCURRENTLY "leaderboard_users"
auth_name: from_taskqueue
tables:
- name: customers
remotes:
- name: payments
id: stripe_id
url: http://rails_app:3000/stripe/$id
path: data
# debug: true
pass_headers:
- cookie
set_headers:
- name: Host
value: 0.0.0.0
# - name: Authorization
# value: Bearer <stripe_api_key>
- # You can create new fields that have a
# real db table backing them
name: me
table: users
roles_query: "SELECT * FROM users WHERE id = $user_id"
roles:
- name: anon
tables:
- name: products
limit: 10
query:
columns: ["id", "name", "description"]
aggregation: false
insert:
allow: false
update:
allow: false
delete:
allow: false
- name: user
tables:
- name: users
query:
filters: ["{ id: { _eq: $user_id } }"]
- name: products
query:
limit: 50
filters: ["{ user_id: { eq: $user_id } }"]
columns: ["id", "name", "description"]
disable_functions: false
insert:
filters: ["{ user_id: { eq: $user_id } }"]
columns: ["id", "name", "description"]
set:
- created_at: "now"
update:
filters: ["{ user_id: { eq: $user_id } }"]
columns:
- id
- name
set:
- updated_at: "now"
delete:
block: true
- name: admin
match: id = 1000
tables:
- name: users
filters: []
```
If deploying into environments like Kubernetes it's useful to be able to configure things like secrets and hosts through environment variables, therefore we expose the below environment variables. This is especially useful for secrets since they are usually injected in via a secrets management framework, e.g. Kubernetes Secrets.
Keep in mind any value can be overwritten using environment variables, for example `auth.jwt.public_key_type` converts to `SG_AUTH_JWT_PUBLIC_KEY_TYPE`. In short: prefix with `SG_`, uppercase, and change all `.` to `_`.
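For example, two hypothetical mappings that follow this rule (the values are placeholders):
```bash
# auth.jwt.public_key_type -> SG_AUTH_JWT_PUBLIC_KEY_TYPE
export SG_AUTH_JWT_PUBLIC_KEY_TYPE=ecdsa

# database.password -> SG_DATABASE_PASSWORD
export SG_DATABASE_PASSWORD=postgres
```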
#### Postgres environment variables
```bash
SG_DATABASE_HOST
SG_DATABASE_PORT
SG_DATABASE_USER
SG_DATABASE_PASSWORD
```
#### Auth environment variables
```bash
SG_AUTH_RAILS_COOKIE_SECRET_KEY_BASE
SG_AUTH_RAILS_REDIS_URL
SG_AUTH_RAILS_REDIS_PASSWORD
SG_AUTH_JWT_PUBLIC_KEY_FILE
```
## YugabyteDB
Yugabyte is an open-source, geo-distributed cloud-native relational DB that scales horizontally. Super Graph works with Yugabyte right out of the box. If you think your data needs will outgrow Postgres and you don't really want to deal with sharding then Yugabyte is the way to go. Just point Super Graph to your Yugabyte DB and everything will just work including running migrations, seeding, querying, mutations, etc.
To use Yugabyte in your local development flow just uncomment the following lines in the `docker-compose.yml` file that is part of your Super Graph app. Also remember to comment out the original postgres `db` config.
```yaml
# Postgres DB
# db:
# image: postgres:latest
# ports:
# - "5432:5432"
#Standard config to run a single node of Yugabyte
yb-master:
image: yugabytedb/yugabyte:latest
container_name: yb-master-n1
command: [ "/home/yugabyte/bin/yb-master",
"--fs_data_dirs=/mnt/disk0,/mnt/disk1",
"--master_addresses=yb-master-n1:7100",
"--replication_factor=1",
"--enable_ysql=true"]
ports:
- "7000:7000"
environment:
SERVICE_7000_NAME: yb-master
db:
image: yugabytedb/yugabyte:latest
container_name: yb-tserver-n1
command: [ "/home/yugabyte/bin/yb-tserver",
"--fs_data_dirs=/mnt/disk0,/mnt/disk1",
"--start_pgsql_proxy",
"--tserver_master_addrs=yb-master-n1:7100"]
ports:
- "9042:9042"
- "6379:6379"
- "5433:5433"
- "9000:9000"
environment:
SERVICE_5433_NAME: ysql
SERVICE_9042_NAME: ycql
SERVICE_6379_NAME: yedis
SERVICE_9000_NAME: yb-tserver
depends_on:
- yb-master
# Environment variables to point Super Graph to Yugabyte
# This is required since it uses a different user and port number
yourapp_api:
image: dosco/super-graph:latest
environment:
GO_ENV: "development"
# Uncomment below for Yugabyte DB
SG_DATABASE_PORT: 5433
SG_DATABASE_USER: yugabyte
SG_DATABASE_PASSWORD: yugabyte
volumes:
- ./config:/config
ports:
- "8080:8080"
depends_on:
- db
```

docs/website/docs/deploy.md (new file)

@ -0,0 +1,159 @@
---
id: deploy
title: How to deploy Super Graph
sidebar_label: Deploy to Prod.
---
Since you're reading this you're probably considering deploying Super Graph. You're in luck, it's really easy and there are several ways to choose from. Keep in mind Super Graph can be used as a pre-built docker image or you can easily customize it and build your own docker image.
:::info JWT tokens (Auth0, etc)
If you deploy on a subdomain and configure this service to use JWT authentication, you will need the public key file or secret key. Ensure your web app passes the JWT token with every GraphQL request (Cookie recommended). You also have to add the web domain to the `cors_allowed_origins` config option so CORS can allow the browser to do cross-domain ajax requests.
:::
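A minimal sketch of the relevant config, based on the JWT options shown in the configuration guide (the provider, key path and domain below are placeholders):
```yaml
auth:
  type: jwt
  jwt:
    provider: auth0
    public_key_file: /secrets/public_key.pem

# Allow your web app's domain to make cross-domain requests
cors_allowed_origins:
  - https://app.example.com
```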
## Google Cloud Run (Fully Managed)
Cloud Run is a fully managed compute platform for deploying and scaling containerized applications quickly and securely.
Your Super Graph app comes with a `cloudbuild.yaml` file so it's really easy to use Google Cloud Build to build and deploy your Super Graph app to Google Cloud Run.
:::note
Remember to give Cloud Build permission to deploy to Cloud Run first; this can be done in the Cloud Build settings screen. Also the service account you use with Cloud Run must have the IAM permissions to connect to CloudSQL. https://cloud.google.com/sql/docs/postgres/connect-run
:::
Use the command below to tell Cloud Build to build and deploy your app.
```bash
gcloud builds submit --substitutions=SERVICE_ACCOUNT=admin@my-project.iam.gserviceaccount.com,REGION=us-central1 .
```
:::info Secrets Management
Your secrets, like the database password, should be managed by the Mozilla SOPS app. This is a secrets management app that encrypts all your secrets and stores them in a file to be decrypted in production using a cloud KMS (Google Cloud KMS or Amazon KMS). Our cloud build file above expects the secrets file to be `config/prod.secrets.yml`. You can find more information on Mozilla SOPS on their site. https://github.com/mozilla/sops
:::
## Build Docker Image Locally
If for whatever reason you decide to build your own Docker images then just use the command below.
```bash
docker build -t your-api-service-name .
```
## With a Rails app
Super Graph can read Rails session cookies, like those created by authentication gems (Devise or Warden). Based on how you've configured your Rails app the cookie can be signed, encrypted, both, include the user ID or just have the ID of the session. If you have chosen to use Redis or Memcache as your session store then Super Graph can read the session cookie and then look up the user in the session store. In short it works really well with almost all Rails apps.
For any of this to work Super Graph must be deployed in a way that makes the browser send the app's cookie to it along with the GraphQL query. That means Super Graph needs to be either on the same domain as your app or on a subdomain.
:::info I need an example
Say your Rails app runs on `myrailsapp.com`, then Super Graph should be on the same domain or on a subdomain like `graphql.myrailsapp.com`. If you choose a subdomain then remember to read the [Deploy under a subdomain](#deploy-under-a-subdomain) section.
:::
## Deploy under a subdomain
For this to work you have to ensure that the option `:domain => :all` is added to your Rails app config `Application.config.session_store` this will cause your rails app to create session cookies that can be shared with sub-domains. More info here [/sharing-a-devise-user-session-across-subdomains-with-rails](http://excid3.com/blog/sharing-a-devise-user-session-across-subdomains-with-rails-3/)
## With NGINX
If your infrastructure is fronted by NGINX then it should be configured so that all requests to your GraphQL API path are proxied to Super Graph. In the example NGINX config below all requests to the path `/api/v1/graphql` are routed to wherever you have Super Graph installed within your architecture. This example is derived from the config file example at [/microservices-nginx-gateway/nginx.conf](https://github.com/launchany/microservices-nginx-gateway/blob/master/nginx.conf)
:::info NGINX with sub-domain
Yes, NGINX is very flexible and you can configure it to keep Super Graph on a subdomain instead of on the same top level domain. I'm sure a little Googling will get you some great example configs for that.
:::
```nginx
# Configuration for the server
server {
# Running port
listen 80;
# Proxy the graphql api path to Super Graph
location /api/v1/graphql {
proxy_pass http://super-graph-service:8080;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
# Proxying all other paths to your Rails app
location / {
proxy_pass http://your-rails-app:3000;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
```
## On Kubernetes
If your Rails app runs on Kubernetes then ensure you have an ingress config deployed that points the path to the service that you have deployed Super Graph under.
### Ingress config
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: simple-rails-app
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: myrailsapp.com
http:
paths:
- path: /api/v1/graphql
backend:
serviceName: graphql-service
servicePort: 8080
- path: /
backend:
serviceName: rails-app
servicePort: 3000
```
### Service and deployment config
```yaml
apiVersion: v1
kind: Service
metadata:
name: graphql-service
labels:
run: super-graph
spec:
ports:
- port: 8080
protocol: TCP
selector:
run: super-graph
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: super-graph
spec:
selector:
matchLabels:
run: super-graph
replicas: 2
template:
metadata:
labels:
run: super-graph
spec:
containers:
- name: super-graph
image: docker.io/dosco/super-graph:latest
ports:
- containerPort: 8080
```

docs/website/docs/graphql.md (new file)

@ -0,0 +1,678 @@
---
id: graphql
title: How to GraphQL
sidebar_label: GraphQL Syntax
---
GraphQL (GQL) is a simple query syntax that's fast replacing REST APIs. GQL is great since it allows web developers to fetch the exact data that they need without depending on changes to backend code. Also if you squint hard enough it looks a little bit like JSON :smiley:
The below query will fetch a `user`'s name, email and avatar image (renamed as picture). If you also need the user's `id` then just add it to the query.
```graphql
query {
user {
full_name
email
picture: avatar
}
}
```
Multiple tables can also be fetched using a single GraphQL query. This is very fast since the entire query is converted into a single SQL query which the database can efficiently run.
```graphql
query {
user {
full_name
email
}
products {
name
description
}
}
```
### Fetching data
To fetch a specific `product` by its ID you can use the `id` argument. The real id field name will be resolved automatically so this query will work even if your id column is named something like `product_id`.
```graphql
query {
products(id: 3) {
name
}
}
```
Postgres also supports full text search using a TSV index. Super Graph makes it easy to use this full text search capability using the `search` argument.
```graphql
query {
products(search: "ale") {
name
}
}
```
### Fragments
Fragments make it easy to build large complex queries with small composable and re-usable fragment blocks.
```graphql
query {
users {
...userFields2
...userFields1
picture_url
}
}
fragment userFields1 on user {
id
email
}
fragment userFields2 on user {
first_name
last_name
}
```
### Sorting
To sort or order results just use the `order_by` argument. This can be combined with `where`, `search`, etc to build complex queries to fit your needs.
```graphql
query {
products(order_by: { cached_votes_total: desc }) {
id
name
}
}
```
### Filtering
Super Graph supports complex queries where you can add filters, ordering, offsets and limits on the query. For example the below query will list all products where the price is greater than 10 and the id is not 5.
```graphql
query {
products(where: { and: { price: { gt: 10 }, not: { id: { eq: 5 } } } }) {
name
price
}
}
```
#### Nested where clause targeting related tables
Sometimes you need to query a table based on a condition that applies to a related table. For example say you need to list all users who belong to an account. The query below will fetch the id and email of all users who belong to the account with id 3.
```graphql
query {
users(where: {
accounts: { id: { eq: 3 } }
}) {
id
email
}
}
```
#### Logical Operators
| Name | Example | Explained |
| ---- | ------------------------------------------------------------ | ------------------------------- |
| and | price: { and: { gt: 10.5, lt: 20 } } | price > 10.5 AND price < 20 |
| or | or: { price: { greater_than: 20 }, quantity: { gt: 0 } } | price > 20 OR quantity > 0 |
| not | not: { or: { quantity: { eq: 0 }, price: { eq: 0 } } } | NOT (quantity = 0 OR price = 0) |
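As a sketch, here is the `or` example from the table above used in a full query (reusing the `products` table from the earlier examples):
```graphql
query {
  products(where: {
    or: { price: { greater_than: 20 }, quantity: { gt: 0 } }
  }) {
    id
    name
  }
}
```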
#### Other conditions
| Name | Example | Explained |
| ---------------------- | -------------------------------------- | -------------------------------------------------------------------------------------------------------- |
| eq, equals | id : { eq: 100 } | id = 100 |
| neq, not_equals | id: { not_equals: 100 } | id != 100 |
| gt, greater_than | id: { gt: 100 } | id > 100 |
| lt, lesser_than | id: { lt: 100 } | id < 100 |
| gte, greater_or_equals | id: { gte: 100 } | id >= 100 |
| lte, lesser_or_equals | id: { lesser_or_equals: 100 } | id <= 100 |
| in | status: { in: [ "A", "B", "C" ] } | status IN ('A', 'B', 'C') |
| nin, not_in | status: { nin: [ "A", "B", "C" ] } | status NOT IN ('A', 'B', 'C') |
| like | name: { like: "phil%" } | Names starting with 'phil' |
| nlike, not_like | name: { nlike: "v%m" } | Not names starting with 'v' and ending with 'm' |
| ilike | name: { ilike: "%wOn" } | Names ending with 'won' case-insensitive |
| nilike, not_ilike | name: { nilike: "%wOn" } | Not names ending with 'won' case-insensitive |
| similar | name: { similar: "%(b\|d)%" } | [Similar Docs](https://www.postgresql.org/docs/9/functions-matching.html#FUNCTIONS-SIMILARTO-REGEXP) |
| nsimilar, not_similar | name: { nsimilar: "%(b\|d)%" } | [Not Similar Docs](https://www.postgresql.org/docs/9/functions-matching.html#FUNCTIONS-SIMILARTO-REGEXP) |
| has_key | column: { has_key: 'b' } | Does JSON column contain this key |
| has_key_any | column: { has_key_any: [ a, b ] } | Does JSON column contain any of these keys |
| has_key_all | column: { has_key_all: [ a, b ] } | Does JSON column contain all of these keys |
| contains | column: { contains: [1, 2, 4] } | Does this array/json column contain these values |
| contained_in | column: { contained_in: "{'a':1, 'b':2}" } | Is this array/json column contained in this value |
| is_null | column: { is_null: true } | Is column value null or not |
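And a sketch combining a few of these conditions in a single query (the `status` column is assumed here purely for illustration; `description` appears in the config examples elsewhere in these docs):
```graphql
query {
  products(where: {
    and: {
      status: { in: ["A", "B", "C"] },
      description: { is_null: false }
    }
  }) {
    id
    name
  }
}
```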
### Aggregations
You will often find the need to fetch aggregated values from the database such as `count`, `max`, `min`, etc. This is simple to do with GraphQL, just prefix the aggregation name to the field name that you want to aggregate, like `count_id`. The below query will group products by name and find the minimum price for each group. Notice the `min_price` field: we're adding `min_` to `price`.
```graphql
query {
products {
name
min_price
}
}
```
| Name | Explained |
| ----------- | ---------------------------------------------------------------------- |
| avg | Average value |
| count | Count the values |
| max | Maximum value |
| min | Minimum value |
| stddev | [Standard Deviation](https://en.wikipedia.org/wiki/Standard_deviation) |
| stddev_pop | Population Standard Deviation |
| stddev_samp | Sample Standard Deviation |
| variance | [Variance](https://en.wikipedia.org/wiki/Variance) |
| var_pop | Population Variance |
| var_samp | Sample Variance |
All kinds of queries are possible with GraphQL. Below is an example that uses a lot of the features available. Comments `# hello` are also valid within queries.
```graphql
query {
products(
# returns only 30 items
limit: 30
# starts from item 10, commented out for now
# offset: 10,
# orders the response items by highest price
order_by: { price: desc }
# no duplicate prices returned
distinct: [price]
# only items with an id >= 20 and < 28 are returned
where: { id: { and: { greater_or_equals: 20, lt: 28 } } }
) {
id
name
price
}
}
```
### Custom Functions
Any function defined in the database like the below `add_five` that adds 5 to any number given to it can be used
within your query. The one limitation is that it should be a function that only accepts a single argument. The function is used within your GraphQL in a similar way to how aggregations are used above. Example below:
```graphql
query {
thread(id: 5) {
id
total_votes
add_five_total_votes
}
}
```
Postgres user-defined function `add_five`
```sql
CREATE OR REPLACE FUNCTION add_five(a integer) RETURNS integer AS $$
BEGIN
RETURN a + 5;
END;
$$ LANGUAGE plpgsql;
```
In GraphQL, mutations are the operation type used when you need to modify data. Super Graph supports `insert`, `update`, `upsert` and `delete`. You can also do complex nested inserts and updates.
When using mutations the data must be passed as variables since Super Graph compiles the query into a prepared statement in the database for maximum speed. Prepared statements are like functions in your code: when called they accept arguments and your variables are passed in as those arguments.
### Insert
```json
{
"data": {
"name": "Art of Computer Programming",
"description": "The Art of Computer Programming (TAOCP) is a comprehensive monograph written by computer scientist Donald Knuth",
"price": 30.5
}
}
```
```graphql
mutation {
product(insert: $data) {
id
name
}
}
```
#### Bulk insert
```json
{
"data": [
{
"name": "Art of Computer Programming",
"description": "The Art of Computer Programming (TAOCP) is a comprehensive monograph written by computer scientist Donald Knuth",
"price": 30.5
},
{
"name": "Compilers: Principles, Techniques, and Tools",
"description": "Known to professors, students, and developers worldwide as the 'Dragon Book' is available in a new edition",
"price": 93.74
}
]
}
```
```graphql
mutation {
product(insert: $data) {
id
name
}
}
```
### Update
```json
{
"data": {
"price": 200.0
},
"product_id": 5
}
```
```graphql
mutation {
product(update: $data, id: $product_id) {
id
name
}
}
```
#### Bulk update
```json
{
"data": {
"price": 500.0
},
"gt_product_id": 450.0,
"lt_product_id:": 550.0
}
```
```graphql
mutation {
product(
update: $data
where: { price: { gt: $gt_product_id, lt: $lt_product_id } }
) {
id
name
}
}
```
### Delete
```json
{
"data": {
"price": 500.0
},
"product_id": 5
}
```
```graphql
mutation {
product(delete: true, id: $product_id) {
id
name
}
}
```
#### Bulk delete
```json
{
"data": {
"price": 500.0
}
}
```
```graphql
mutation {
product(delete: true, where: { price: { eq: 500.0 } }) {
id
name
}
}
```
### Upsert
```json
{
"data": {
"id": 5,
"name": "Art of Computer Programming",
"description": "The Art of Computer Programming (TAOCP) is a comprehensive monograph written by computer scientist Donald Knuth",
"price": 30.5
}
}
```
```graphql
mutation {
product(upsert: $data) {
id
name
}
}
```
#### Bulk upsert
```json
{
"data": [
{
"id": 5,
"name": "Art of Computer Programming",
"description": "The Art of Computer Programming (TAOCP) is a comprehensive monograph written by computer scientist Donald Knuth",
"price": 30.5
},
{
"id": 6,
"name": "Compilers: Principles, Techniques, and Tools",
"description": "Known to professors, students, and developers worldwide as the 'Dragon Book' is available in a new edition",
"price": 93.74
}
]
}
```
```graphql
mutation {
product(upsert: $data) {
id
name
}
}
```
Often you will need to create or update multiple related items at the same time. This can be done using nested mutations. For example you might need to create a product and assign it to a user, or create a user and their products at the same time. You just have to use simple json to define your mutation and Super Graph takes care of the rest.
### Nested Insert
Create a product item first and then assign it to a user
```json
{
"data": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now",
"user": {
"connect": { "id": 5 }
}
}
}
```
```graphql
mutation {
product(insert: $data) {
id
name
user {
id
full_name
email
}
}
}
```
Or the reverse: create the user first and then their product
```json
{
"data": {
"email": "thedude@rug.com",
"full_name": "The Dude",
"created_at": "now",
"updated_at": "now",
"product": {
"name": "Apple",
"price": 1.25,
"created_at": "now",
"updated_at": "now"
}
}
}
```
```graphql
mutation {
user(insert: $data) {
id
full_name
email
product {
id
name
price
}
}
}
```
### Nested Update
Update a product item first and then assign it to a user
```json
{
"data": {
"name": "Apple",
"price": 1.25,
"user": {
"connect": { "id": 5 }
}
}
}
```
```graphql
mutation {
product(update: $data, id: 5) {
id
name
user {
id
full_name
email
}
}
}
```
Or the reverse: update the user first and then their product
```json
{
"data": {
"email": "newemail@me.com",
"full_name": "The Dude",
"product": {
"name": "Banana",
"price": 1.25
}
}
}
```
```graphql
mutation {
user(update: $data, id: 1) {
id
full_name
email
product {
id
name
price
}
}
}
```
### Pagination
This is a must have feature of any API. When you want your users to go through a list page by page or implement some fancy infinite scroll you're going to need pagination. There are two ways to paginate in Super Graph.
Limit-Offset
This is simple enough but also inefficient when working with a large number of total items. `limit` limits the number of items fetched and `offset` is the point you want to fetch from. The below query will fetch 10 results at a time starting with the 100th item. You will have to keep updating offset (110, 120, 130, etc) to walk through the results so make offset a variable.
```graphql
query {
products(limit: 10, offset: 100) {
id
slug
name
}
}
```
#### Cursor
This is a powerful and highly efficient way to paginate a large number of results. In fact it does not matter how many total results there are, this will always be lightning fast. You can use a cursor to walk forward or backward through the results. If you plan to implement infinite scroll this is the option you should choose.
When going this route the results will contain a cursor value. This is an encrypted string that you don't have to worry about, just pass it back in with the next API call and you'll receive the next set of results. The cursor value is encrypted since its contents should only matter to Super Graph and not the client. Also since the primary key is used for this feature it's possible you might not want to leak its value to clients.
You will need to set this config value to ensure the encrypted cursor data is secure. If not set, a random value is used which will change with each deployment, breaking older cursor values that clients might be using, so it's best to set it.
```yaml
# Secret key for general encryption operations like
# encrypting the cursor data
secret_key: supercalifajalistics
```
Paginating forward through your results
```json
{
"variables": {
"cursor": "MJoTLbQF4l0GuoDsYmCrpjPeaaIlNpfm4uFU4PQ="
}
}
```
```graphql
query {
products(first: 10, after: $cursor) {
slug
name
}
}
```
Paginating backward through your results
```graphql
query {
products(last: 10, before: $cursor) {
slug
name
}
}
```
```json
"data": {
"products": [
{
"slug": "eius-nulla-et-8",
"name" "Pale Ale"
},
{
"slug": "sapiente-ut-alias-12",
"name" "Brown Ale"
}
...
],
"products_cursor": "dJwHassm5+d82rGydH2xQnwNxJ1dcj4/cxkh5Cer"
}
```
Nested tables can also have cursors. Requesting multiple cursors is supported in a single request, but when paginating using a cursor only one table is currently supported. To explain this better: you can only use a `before` or `after` argument with a cursor value to paginate a single table in a query.
```graphql
query {
products(last: 10) {
slug
name
customers(last: 5) {
email
full_name
}
}
}
```
Multiple order-by arguments are supported. Super Graph is smart enough to allow cursor based pagination when you also need complex sort order like below.
```graphql
query {
products(
last: 10
before: $cursor
order_by: [ price: desc, total_customers: asc ]) {
slug
name
}
}
```
## Using Variables
Variables (`$product_id`) and their values (`"product_id": 5`) can be passed alongside the GraphQL query. Using variables makes for better client side code as well as improved server side SQL query caching. The built-in web-ui also supports setting variables. Not having to manipulate your GraphQL query string to insert values into it makes for cleaner
and better client side code.
```javascript
// Define the request object, keeping the query and the variables separate
var req = {
query: "{ product(id: $product_id) { name } }",
variables: { product_id: 5 },
};
// Use the fetch api to make the query
fetch("http://localhost:8080/api/v1/graphql", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(req),
})
.then((res) => res.json())
.then((res) => console.log(res.data));
```

Some files were not shown because too many files have changed in this diff.