Compare commits


69 Commits

SHA1 Message Date
1344246287 fix: add http tracing so end-to-end tracing is possible 2020-05-24 02:24:24 -04:00
7b25873438 fix: update embedded static assets 2020-05-23 16:53:39 -04:00
d572b4f753 fix: allow unauthenticated operations in seed script 2020-05-23 16:37:27 -04:00
cd69b5a78f docs: add opencensus support to the README 2020-05-23 11:50:38 -04:00
01ad9b71ba feat: add opencensus tracing and metrics support 2020-05-23 11:43:57 -04:00
b64daaf034 fix: issue with breaking changes in gofakeit 2020-05-22 16:52:42 -04:00
c7837bf758 feat: add open opencensus telemetry support 2020-05-22 16:49:54 -04:00
448e6bb72a fix: add config for per role operation blocking by type 2020-05-22 02:24:22 -04:00
f7d3760af7 feat: re-format graphql queries saved in allow.list 2020-05-22 02:24:22 -04:00
2acb05741e fix: few typos (#67) 2020-05-21 01:07:54 -04:00
8104ee9df2 fix: update description in the README 2020-05-20 09:42:10 -04:00
ab8566df03 fix: postgres schema name config value is not used 2020-05-20 00:03:05 -04:00
94fa51ffb2 fix: add color to logo for dark mode 2020-05-18 00:32:53 -04:00
1c823e4353 feat: add cloudbuild.yaml generation for new apps 2020-05-17 19:16:40 -04:00
f6ce0c102b docs: new website 2020-05-17 03:12:09 -04:00
a1a47c905d fix: update discord link (#66) 2020-05-16 02:05:59 -04:00
d3e32f944a fix: json content type breaks web ui 2020-05-11 21:09:12 -04:00
3bf9f02a9f fix: bug with reading config file by name 2020-05-10 11:26:48 -04:00
533c767e1d fix: benchmark was failing. Also added a benchmark for the chirino/graphql version gql parser to compare results. (#62) 2020-05-07 10:48:01 -04:00
84d55dbc8a feat: remove data from variables saved to allow.list 2020-05-07 10:27:40 -04:00
5aafff6310 chore: add InteliJ editor project files to the .gitignore list. (#61) 2020-05-07 10:24:29 -04:00
840aaf64ff fix: return response as application/json (#59) 2020-05-07 10:24:12 -04:00
7bbb56a328 fix get functions parameters without name (#60) 2020-05-07 03:04:37 -04:00
394b08b2fe chore: update changelog 2020-05-03 21:01:16 -04:00
842252f9e2 fix: fix issue with skipping prepared statements for some roles on error 2020-05-03 20:52:26 -04:00
279f5616d1 fix: fix for issues reported by deepsource 2020-05-03 16:08:34 -04:00
04bb88f74b Add .deepsource.toml 2020-05-03 19:57:42 +00:00
38ed6dbc5f fix: bug with single quote ecape in production mode 2020-05-01 02:20:45 -04:00
ec2f8d0c58 chore: pickup latest version of chirino/graphql module for it’s schema api simplifications. (#58) 2020-05-01 02:03:35 -04:00
9b51065414 fix: grammatical errors (#57) 2020-04-25 09:57:59 -04:00
1a70603b1a feat: add option to set the cache-control header 2020-04-24 20:45:03 -04:00
505335d872 feat: add config to set api endpoint prefix 2020-04-24 01:23:35 -04:00
bdc8c65a09 fix: fix issues with code examples 2020-04-23 21:25:09 -04:00
03fe29b088 fix: improve documentation of the config object 2020-04-23 21:25:09 -04:00
5857efdd70 fix: correct spellings and language in README.md (#55)
* Update README.md

* Code review: fix go get again
2020-04-23 21:01:00 -04:00
bdffe7b14e fix: add a benchmark around the GraphQL api function 2020-04-23 01:42:16 -04:00
ae7cde0433 feat: add support for single argument Postgres functions 2020-04-22 20:51:14 -04:00
6293d37e73 fix: upgrade packages in the web ui 2020-04-21 21:05:14 -04:00
7a3fe5a1df fix: Only include the bulk update arguments on the plur… (#54)
* introspection fix: Only include the bulk update arguments on the plural versions of the fields.

* Fixes error graphql: Unknown type "String!"
2020-04-21 10:41:28 -04:00
2a32c179ba feat : improve the generated introspection schema and avoid the chirino/graphql api leaking through the core api. (#53) 2020-04-21 10:03:05 -04:00
0a02bde219 fix: block introspection queries in production mode 2020-04-20 02:06:58 -04:00
966aa9ce8c feat: add some initial introspection support. (#52) 2020-04-19 23:48:49 -04:00
6f18d56ca0 fix: update queries generate invalid sql 2020-04-19 13:40:14 -04:00
c400461835 fix: prepared statements not working in prod mode 2020-04-19 12:54:37 -04:00
a6691de1b7 fix: remove multi-line graphql query in log 2020-04-19 02:50:09 -04:00
e6934cda02 fix: vars not sanitized in roles_query 2020-04-18 17:46:40 -04:00
4cf7956ff5 feat: add cockroachdb support. (#50)
This PR changes the generated SQL so that it's also compatible with CockroachDB.
Notable changes:
* use `SELECT to_jsonb("__sr_0".*)`  instead of `SELECT to_jsonb("__sr_0")`
* don't use `json_populate_record`, use the `CAST` and `->>` instead.  For example:

  instead of: `SELECT "t"."full_name", "t"."email" FROM "_sg_input" i, json_populate_record(NULL::users, i.j) t`

  do: `CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i`

This PR also adds some integration tests against an actual database instance.  If you have the cockroachdb binary installed on your PATH,
the test suite will startup a temporary cockroachdb instance on a random port to test against.  It is stopped and the tmp data files are deleted once the test ends.  It will also run the integration tests against database
pointed at by your `SG_POSTGRESQL_TEST_URL` environment variable if it’s set.

Also includes some small formatting changes introduced by `gofmt -w .`
2020-04-18 17:42:17 -04:00
5356455904 Fix issue with relative paths and config files 2020-04-17 10:56:26 -04:00
074aded5c0 Upgrade UI and app templates 2020-04-16 10:27:10 -04:00
c7557f761f Fix broken build 2020-04-16 01:28:55 -04:00
09d6460a13 Make go get to install work. 2020-04-16 00:26:32 -04:00
40c99e9ef3 Fix issue with missing build variables 2020-04-13 00:50:54 -04:00
75ff5510d4 Fix issue with failing db cmds 2020-04-13 00:43:18 -04:00
1370d24985 Fix issue with make install 2020-04-12 20:35:31 -04:00
ef50c1957b Fix CloudRun connection issue 2020-04-12 10:09:37 -04:00
41ea6ef6f5 Fix readme add library usage 2020-04-11 16:41:10 -04:00
a266517d17 Remove config package 2020-04-11 02:45:06 -04:00
7831d27345 Refactor Super Graph into a library #26 2020-04-10 02:27:43 -04:00
e102da839e Fix issue with Postgres FUNC_MAX_ARGS by moving to row_to_json 2020-04-01 21:25:50 -04:00
68a378c00f Fix issue with prepared statements skipped on error 2020-03-31 01:28:39 -04:00
d96eaf14f4 Fix bugs with escape char handling 2020-03-30 10:03:47 -04:00
01e488b69d Fix for bug blocking anon queries 2020-03-21 20:11:04 -04:00
7a450b16ba Fix issue with detecting many to many relationships 2020-03-18 20:19:56 -04:00
1ad8cbf15b Fix minor parser bug 2020-03-17 23:03:41 -04:00
f69f1c67d5 Fix to remove left over debug log 2020-03-16 01:43:26 -04:00
a172193955 Fix to ensure cursor fields can be defined in the query 2020-03-16 01:40:47 -04:00
81338b6123 Fix issues blocking Apollo client 2020-03-14 01:35:42 -04:00
265b93b203 Fix for encrypted cursor in production mode bug 2020-03-06 21:38:01 +05:30
6c240e21b4 Fix bug related to 'anon' role prepared statements 2020-03-06 15:39:15 +05:30
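The CockroachDB PR (#50) above describes how its integration tests are gated: on the presence of the cockroach binary on the PATH, and on the SG_POSTGRESQL_TEST_URL environment variable. A minimal sketch of that gating pattern in Go — the test names and the runSuite helper are hypothetical, not the PR's actual code:

package integration_test

import (
	"database/sql"
	"os"
	"os/exec"
	"testing"

	_ "github.com/jackc/pgx/v4/stdlib" // Postgres driver used elsewhere in this repo
)

// runSuite is a hypothetical stand-in for the shared test suite.
func runSuite(t *testing.T, db *sql.DB) { /* ... */ }

// TestPostgreSQL only runs when SG_POSTGRESQL_TEST_URL points at a database.
func TestPostgreSQL(t *testing.T) {
	url := os.Getenv("SG_POSTGRESQL_TEST_URL")
	if url == "" {
		t.Skip("SG_POSTGRESQL_TEST_URL not set; skipping integration tests")
	}
	db, err := sql.Open("pgx", url)
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()
	runSuite(t, db)
}

// TestCockroachDB only runs when the cockroach binary is on the PATH; per the
// PR description, the real tests start a temporary single-node instance on a
// random port, run the suite, then stop it and delete the tmp data files.
func TestCockroachDB(t *testing.T) {
	if _, err := exec.LookPath("cockroach"); err != nil {
		t.Skip("cockroach binary not found on PATH; skipping")
	}
	// start temp instance, open a *sql.DB against it, then: runSuite(t, db)
}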
258 changed files with 23146 additions and 6514 deletions

@@ -5,18 +5,18 @@ info:
 repository_url: https://github.com/dosco/super-graph
 options:
   commits:
-    # filters:
-    #   Type:
-    #     - feat
-    #     - fix
-    #     - perf
-    #     - refactor
+    filters:
+      Type:
+        - feat
+        - fix
+        - perf
+        - refactor
   commit_groups:
-    # title_maps:
-    #   feat: Features
-    #   fix: Bug Fixes
-    #   perf: Performance Improvements
-    #   refactor: Code Refactoring
+    title_maps:
+      feat: Features
+      fix: Bug Fixes
+      perf: Performance Improvements
+      refactor: Code Refactoring
   header:
     pattern: "^((\\w+)\\s.*)$"
     pattern_maps:
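
Uncommenting filters and title_maps in this git-chglog configuration is what drives the regenerated changelog diff further down: only commits whose type is feat, fix, perf, or refactor are included, grouped under the mapped titles, which is why the old free-form section headers (### Add, ### Create, and so on) disappear from the changelog.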

.deepsource.toml (new file)

@@ -0,0 +1,8 @@
+version = 1
+
+[[analyzers]]
+name = "go"
+enabled = true
+
+[analyzers.meta]
+import_path = "github.com/dosco/super-graph"

.gitignore

@@ -23,17 +23,19 @@
 /tmp/runner-build
 /demo/tmp
+.idea
+*.iml
 .vscode
+main
 .DS_Store
 .swp
 .release
 main
 super-graph
+supergraph
 *-fuzz.zip
 crashers
 suppressions
 release
 .gofuzz
 *-fuzz.zip
+*.test

@@ -1,401 +1,371 @@
 <a name="unreleased"></a>
 ## [Unreleased]
-### Add
-- Add config driven custom table relationships
-- Add support for `websearch_to_tsquery` in PG 11
-### Create
-- Create CODE_OF_CONDUCT.md
-### Fix
-- Fix bug with remote join example
-- Fix grammer / syntax
-### Update
-- Update issue templates
-- Update CONTRIBUTING.md
-- Update issue templates
-- Update feature_request.md
+<a name="v0.13.22"></a>
+## [v0.13.22] - 2020-05-01
+<a name="v0.13.21"></a>
+## [v0.13.21] - 2020-04-24
+<a name="v0.13.20"></a>
+## [v0.13.20] - 2020-04-24
+<a name="v0.13.19"></a>
+## [v0.13.19] - 2020-04-23
+<a name="v0.13.18"></a>
+## [v0.13.18] - 2020-04-23
+<a name="v0.13.17"></a>
+## [v0.13.17] - 2020-04-22
+<a name="v0.13.16"></a>
+## [v0.13.16] - 2020-04-21
+### Features
+- feat : improve the generated introspection schema and avoid the chirino/graphql api leaking through the core api. ([#53](https://github.com/dosco/super-graph/issues/53))
+<a name="v0.13.15"></a>
+## [v0.13.15] - 2020-04-20
+<a name="v0.13.14"></a>
+## [v0.13.14] - 2020-04-19
+<a name="v0.13.13"></a>
+## [v0.13.13] - 2020-04-19
+<a name="v0.13.12"></a>
+## [v0.13.12] - 2020-04-19
+<a name="v0.13.11"></a>
+## [v0.13.11] - 2020-04-18
+<a name="v0.13.10"></a>
+## [v0.13.10] - 2020-04-17
+<a name="v0.13.9"></a>
+## [v0.13.9] - 2020-04-16
+<a name="v0.13.8"></a>
+## [v0.13.8] - 2020-04-16
+<a name="v0.13.7"></a>
+## [v0.13.7] - 2020-04-16
+<a name="v0.13.6"></a>
+## [v0.13.6] - 2020-04-13
+<a name="v0.13.5"></a>
+## [v0.13.5] - 2020-04-13
+<a name="v0.13.4"></a>
+## [v0.13.4] - 2020-04-12
+<a name="v0.13.3"></a>
+## [v0.13.3] - 2020-04-12
+<a name="v0.13.2"></a>
+## [v0.13.2] - 2020-04-11
+<a name="v0.13.1"></a>
+## [v0.13.1] - 2020-04-11
+<a name="v0.13.0"></a>
+## [v0.13.0] - 2020-04-10
+<a name="v0.12.49"></a>
+## [v0.12.49] - 2020-04-01
+<a name="v0.12.48"></a>
+## [v0.12.48] - 2020-03-31
+<a name="v0.12.47"></a>
+## [v0.12.47] - 2020-03-30
+<a name="v0.12.46"></a>
+## [v0.12.46] - 2020-03-21
+<a name="v0.12.45"></a>
+## [v0.12.45] - 2020-03-18
+<a name="v0.12.44"></a>
+## [v0.12.44] - 2020-03-16
+<a name="v0.12.43"></a>
+## [v0.12.43] - 2020-03-16
+<a name="v0.12.42"></a>
+## [v0.12.42] - 2020-03-14
+<a name="v0.12.41"></a>
+## [v0.12.41] - 2020-03-06
+<a name="v0.12.40"></a>
+## [v0.12.40] - 2020-03-06
+<a name="v0.12.39"></a>
+## [v0.12.39] - 2020-03-06
+<a name="v0.12.38"></a>
+## [v0.12.38] - 2020-03-05
+<a name="v0.12.37"></a>
+## [v0.12.37] - 2020-03-04
+<a name="v0.12.36"></a>
+## [v0.12.36] - 2020-03-04
+<a name="v0.12.35"></a>
+## [v0.12.35] - 2020-03-03
+<a name="v0.12.34"></a>
+## [v0.12.34] - 2020-03-03
+<a name="v0.12.33"></a>
+## [v0.12.33] - 2020-02-29
+<a name="v0.12.32"></a>
+## [v0.12.32] - 2020-02-24
+### Bug Fixes
+- fix "Try the demo app" in docs ([#38](https://github.com/dosco/super-graph/issues/38))
+<a name="v0.12.31"></a>
+## [v0.12.31] - 2020-02-23
+<a name="v0.12.30"></a>
+## [v0.12.30] - 2020-02-23
+<a name="v0.12.29"></a>
+## [v0.12.29] - 2020-02-21
+<a name="v0.12.28"></a>
+## [v0.12.28] - 2020-02-20
+<a name="v0.12.27"></a>
+## [v0.12.27] - 2020-02-19
+<a name="v0.12.26"></a>
+## [v0.12.26] - 2020-02-11
+<a name="v0.12.25"></a>
+## [v0.12.25] - 2020-02-10
+<a name="v0.12.24"></a>
+## [v0.12.24] - 2020-02-03
+<a name="v0.12.23"></a>
+## [v0.12.23] - 2020-02-02
+<a name="v0.12.22"></a>
+## [v0.12.22] - 2020-02-01
+<a name="v0.12.21"></a>
+## [v0.12.21] - 2020-01-31
+<a name="v0.12.20"></a>
+## [v0.12.20] - 2020-01-28
+<a name="v0.12.19"></a>
+## [v0.12.19] - 2020-01-26
+<a name="v0.12.18"></a>
+## [v0.12.18] - 2020-01-20
+<a name="v0.12.17"></a>
+## [v0.12.17] - 2020-01-20
+<a name="v0.12.16"></a>
+## [v0.12.16] - 2020-01-19
+<a name="v0.12.15"></a>
+## [v0.12.15] - 2020-01-17
+<a name="v0.12.14"></a>
+## [v0.12.14] - 2020-01-17
+<a name="v0.12.13"></a>
+## [v0.12.13] - 2020-01-16
+<a name="v0.12.12"></a>
+## [v0.12.12] - 2020-01-15
+<a name="v0.12.11"></a>
+## [v0.12.11] - 2020-01-14
+<a name="v0.12.10"></a>
+## [v0.12.10] - 2020-01-14
+<a name="v0.12.9"></a>
+## [v0.12.9] - 2020-01-14
+<a name="v0.12.8"></a>
+## [v0.12.8] - 2020-01-13
+<a name="v0.12.7"></a>
+## [v0.12.7] - 2020-01-11
+### Pull Requests
+- Merge pull request [#22](https://github.com/dosco/super-graph/issues/22) from bhaskarmurthy/fix-grammer-syntax
 <a name="v0.12.6"></a>
 ## [v0.12.6] - 2019-12-02
-### Add
-- Add support for `websearch_to_tsquery` in PG 11
 <a name="v0.12.5"></a>
 ## [v0.12.5] - 2019-11-30
-### Add
-- Add a guide to the internals of the codebase
-- Add a CONTRIBUTING.md guide for contributors
-- Add a CHANGLOG.md
-- Add issue templates
-### Fix
-- Fix for missing filters on nested selectors
-### Refactor
-- Refactor rename 'Select.Table` to `Select.Name`
 <a name="v0.12.4"></a>
 ## [v0.12.4] - 2019-11-28
-### Move
-- Move license from MIT to Apache 2.0. Add Makefile
 <a name="v0.12.3"></a>
 ## [v0.12.3] - 2019-11-26
-### Added
-- Added support for query names to the allow.list
 <a name="v0.12.2"></a>
 ## [v0.12.2] - 2019-11-25
-### Fix
-- Fix bug with compiling anon queries
 <a name="v0.12.1"></a>
 ## [v0.12.1] - 2019-11-22
-### Move
-- Move sql query logging from info to debug
 <a name="v0.12.0"></a>
 ## [v0.12.0] - 2019-11-22
-### Use
-- Use logger error instead of panic in goja handlers
 <a name="v0.11.9"></a>
 ## [v0.11.9] - 2019-11-22
-### Add
-- Add a db:reset command only for dev mode
 <a name="v0.11.8"></a>
 ## [v0.11.8] - 2019-11-21
-### Optimize
-- Optimize db queries limit use of transactions
 <a name="v0.11.7"></a>
 ## [v0.11.7] - 2019-11-19
-### Added
-- Added support for multi-root queries
 <a name="v0.11.6"></a>
 ## [v0.11.6] - 2019-11-15
-### Fix
-- Fix issues with JWT auth
-- Fix bug with migration filename generation
-- Fix bug with migration file name
 <a name="v0.11.5"></a>
 ## [v0.11.5] - 2019-11-10
-### Fix
-- Fix bug with migration template name
 <a name="v0.11.4"></a>
 ## [v0.11.4] - 2019-11-10
-### Fix
-- Fix bug with creating new migrations
 <a name="v0.11.3"></a>
 ## [v0.11.3] - 2019-11-09
-### Fix
-- Fix macro syntax bug in app templates
 <a name="v0.11.2"></a>
 ## [v0.11.2] - 2019-11-07
-### Fix
-- Fix bugs and add new production mode
 <a name="v0.11.1"></a>
 ## [v0.11.1] - 2019-11-05
-### Add
-- Add nested where clause to filter based on related tables
-### Block
-- Block unauthorized requests when 'anon' role is not defined
-### Update
-- Update docs and website with new features
 <a name="v0.11"></a>
 ## [v0.11] - 2019-11-01
-### Add
-- Add config driven presets for insert, update and upsert
-- Add config driven presets for insert, update and upserta
-- Add RBAC option to disable functions eg. count
-- Add fuzz testing to 'serv' for the GQL hash parser
-- Add fuzz testing to 'jsn' and 'qcode'
-- Add ability to block queries and mutations by role
-- Add built in 'anon' and 'user' roles
-- Add role based access control
-### Allow
-- Allow config files to inherit from other config files
-### Change
-- Change config key inherit to inherits
-### Get
-- Get RBAC working for queries and mutations
-### Optimize
-- Optimize prepared statement flow for RBAC
-### Preserve
-- Preserve allow.list ordering on save
-### Update
-- Update filters section in guide
 ### Pull Requests
 - Merge pull request [#11](https://github.com/dosco/super-graph/issues/11) from dosco/rbac
 <a name="v0.10.1"></a>
 ## [v0.10.1] - 2019-10-06
-### Add
-- Add ability to set filters per operation / action
-- Add upsert mutation
 ### Pull Requests
 - Merge pull request [#10](https://github.com/dosco/super-graph/issues/10) from FourSigma/sm-examples-folder
 <a name="v0.10"></a>
 ## [v0.10] - 2019-10-04
-### Fix
-- Fix return values for bulk mutations and delete
-- Fix issues with mutation SQL
-- Fix broken demo app
-- Fix typo in 'across'
-### Remove
-- Remove extra link from README
-### Update
-- Update docs, getting started guide and mutations
 ### Pull Requests
 - Merge pull request [#6](https://github.com/dosco/super-graph/issues/6) from muesli/typo-fixes
 <a name="v0.9"></a>
 ## [v0.9] - 2019-10-01
-### Fix
-- Fix demo rails app broken build
 <a name="v0.8"></a>
 ## [v0.8] - 2019-09-30
-### Fix
-- Fix invalid import bug
-### Update
-- Update documentation site
 <a name="v0.7"></a>
 ## [v0.7] - 2019-09-29
-### Failure
-- Failure to prepare statements should be a warning
-### Fix
-- Fix duplicte column bug
 <a name="v0.6"></a>
 ## [v0.6] - 2019-09-29
-### Add
-- Add database setup commands
-- Add binary compression back to Dockerfile
-- Add initialization command to setup new apps
-- Add migrate command
-- Add database seeding capability
-- Add session variable for user id
-- Add delete mutation
-- Add update mutation
-- Add insert mutation with bulk insert
-- Add GoTO Aug, 19 presentation
-- Add support for prepared statements
-- Add end-to-end benchmaking
-- Add object pooling for parser expressions
-- Add request / response debugging for remote joins
-- Add a presentation about GraphQL
-- Add validation for remote JSON
-- Add tracing for API stitching
-- Add REST API stitching
-- Add SQL query cacheing
-- Add support for GraphQL variables
-- Add fuzz testing to qcode
-- Add test for Rails Redis cookie store integration
-- Add an install guide
-### Change
-- Change fuzz test name to qcode
-- Change logo from PNG to SVG
-### Enabke
-- Enabke reload on config change
-### Fix
-- Fix missing config name bug
-- Fix new app templates
-- Fix help message for migrate
-- Fix session variable bug
-- Fix test failures in `psql` and `serv`
-- Fix demo docker services startup order
-- Fix wrong value for false token bug. Reported by [@ThisIsMissEm](https://github.com/ThisIsMissEm)
-- Fix allow.list file discovery bug
-- Fix bug with allow list path
-- Fix wrong value for use_allow_list in dev config
-- Fix startup bug in demo script
-- Fix url bug in allow list
-- Fix bug [#676](https://github.com/dosco/super-graph/issues/676) found by fuzzer
-- Fix race-condition in remote joins
-- Fix cookie passing in web ui
-- Fix bug with passing cookies in web ui
-- Fix null pointer with invalid argument values
-- Fix infinite loop bug in lexer
-- Fix null pointer issue found by fuzz test
-- Fix issue with fuzzbuzz config
-- Fix demo to run as memory only
-- Fix auth documentation
-- Fix issue with web ui sizing
-- Fix issue preventing docker-compose deploy
-- Fix try demo documentation
-### Futher
-- Futher reduce allocations across hot paths
-- Futher reduce allocations on the compiler hot path
-- Futher optimize json parsing and editing performance
-### Highlight
-- Highlight top features better on the site
-### Improve
-- Improve readability of json parser code
-- Improve the motivation section in the readme
-- Improve the demo experience
-### Make
-- Make remote joins use parallel http requests
-### Merge
-- Merge branch 'master' into optimize-psql
-### New
-- New low allocation fast json parsing and editing library
-### Optimize
-- Optimize lexer and fix bugs
-- Optimize the sql generator hot path
-### Reduce
-- Reduce alllocations done by the stack
-- Reduce steps to run the demo
-- Reduce allocations and improve perf over 50%
-### Remove
-- Remove unused packages
-- Remove the 'hello' test app folder
-- Remove other allocations in psql
-### Use
-- Use hash's as ids for table relationships
-### Watch
-- Watch and reload on config changes
 <a name="v0.5"></a>
 ## [v0.5] - 2019-04-10
-### Add
-- Add supprt for new Rails 5.2 aes-256-gcm cookies
-- Add query support for ts_rank and ts_headline
-- Add full text search support using TSV indexes
-- Add missing assets folder
-- Add fetch by ID feature
-- Add documentation
-### Cleanup
-- Cleanup and redesign config files
-### Fix
-- Fix bug with auth config parsing
-### Redesign
-- Redesign config file architecture
-### Reduce
-- Reduce realloc of maps and slices
-### Update
-- Update docs with full-text search information
 <a name="v0.4"></a>
 ## [v0.4] - 2019-04-01
 <a name="v0.3"></a>
 ## [v0.3] - 2019-04-01
-### Add
-- Add SQL execution timing and tracing
-- Add support for HAVING with aggregate queries
-- Add aggregrate functions to GQL queries
-- Add Auth0 JWT support
-- Add React UI building to the docker build flow
-- Add compiler profiling
-- Add bechmarks for GQL to SQL compile
-- Add tests for gql to sql compile
-### Cleanup
-- Cleanup Dockerfile
-### Fix
-- Fix recurring packer issue docker hub builds
-- Fix issue with asset packer breaking Docker builds
-- Fix missing git package in Dockerfile
-- Fix docker ignore values
-- Fix image build failure on docker hub
-- Fix build issue in Dockerfile
-- Fix bugs and document the 'where' clause
-- Fix perf issue with inflections
-### Optimize
-- Optimize docker image
-### Pack
-- Pack web UI with app into a single binary
-### Upgrade
-- Upgrade web UI packages
 <a name="0.3"></a>
 ## 0.3 - 2019-03-24
-### First
-- First commit
-### Fix
-- Fix license to MIT
-[Unreleased]: https://github.com/dosco/super-graph/compare/v0.12.6...HEAD
+[Unreleased]: https://github.com/dosco/super-graph/compare/v0.13.22...HEAD
+[v0.13.22]: https://github.com/dosco/super-graph/compare/v0.13.21...v0.13.22
+[v0.13.21]: https://github.com/dosco/super-graph/compare/v0.13.20...v0.13.21
+[v0.13.20]: https://github.com/dosco/super-graph/compare/v0.13.19...v0.13.20
+[v0.13.19]: https://github.com/dosco/super-graph/compare/v0.13.18...v0.13.19
+[v0.13.18]: https://github.com/dosco/super-graph/compare/v0.13.17...v0.13.18
+[v0.13.17]: https://github.com/dosco/super-graph/compare/v0.13.16...v0.13.17
+[v0.13.16]: https://github.com/dosco/super-graph/compare/v0.13.15...v0.13.16
+[v0.13.15]: https://github.com/dosco/super-graph/compare/v0.13.14...v0.13.15
+[v0.13.14]: https://github.com/dosco/super-graph/compare/v0.13.13...v0.13.14
+[v0.13.13]: https://github.com/dosco/super-graph/compare/v0.13.12...v0.13.13
+[v0.13.12]: https://github.com/dosco/super-graph/compare/v0.13.11...v0.13.12
+[v0.13.11]: https://github.com/dosco/super-graph/compare/v0.13.10...v0.13.11
+[v0.13.10]: https://github.com/dosco/super-graph/compare/v0.13.9...v0.13.10
+[v0.13.9]: https://github.com/dosco/super-graph/compare/v0.13.8...v0.13.9
+[v0.13.8]: https://github.com/dosco/super-graph/compare/v0.13.7...v0.13.8
+[v0.13.7]: https://github.com/dosco/super-graph/compare/v0.13.6...v0.13.7
+[v0.13.6]: https://github.com/dosco/super-graph/compare/v0.13.5...v0.13.6
+[v0.13.5]: https://github.com/dosco/super-graph/compare/v0.13.4...v0.13.5
+[v0.13.4]: https://github.com/dosco/super-graph/compare/v0.13.3...v0.13.4
+[v0.13.3]: https://github.com/dosco/super-graph/compare/v0.13.2...v0.13.3
+[v0.13.2]: https://github.com/dosco/super-graph/compare/v0.13.1...v0.13.2
+[v0.13.1]: https://github.com/dosco/super-graph/compare/v0.13.0...v0.13.1
+[v0.13.0]: https://github.com/dosco/super-graph/compare/v0.12.49...v0.13.0
+[v0.12.49]: https://github.com/dosco/super-graph/compare/v0.12.48...v0.12.49
+[v0.12.48]: https://github.com/dosco/super-graph/compare/v0.12.47...v0.12.48
+[v0.12.47]: https://github.com/dosco/super-graph/compare/v0.12.46...v0.12.47
+[v0.12.46]: https://github.com/dosco/super-graph/compare/v0.12.45...v0.12.46
+[v0.12.45]: https://github.com/dosco/super-graph/compare/v0.12.44...v0.12.45
+[v0.12.44]: https://github.com/dosco/super-graph/compare/v0.12.43...v0.12.44
+[v0.12.43]: https://github.com/dosco/super-graph/compare/v0.12.42...v0.12.43
+[v0.12.42]: https://github.com/dosco/super-graph/compare/v0.12.41...v0.12.42
+[v0.12.41]: https://github.com/dosco/super-graph/compare/v0.12.40...v0.12.41
+[v0.12.40]: https://github.com/dosco/super-graph/compare/v0.12.39...v0.12.40
+[v0.12.39]: https://github.com/dosco/super-graph/compare/v0.12.38...v0.12.39
+[v0.12.38]: https://github.com/dosco/super-graph/compare/v0.12.37...v0.12.38
+[v0.12.37]: https://github.com/dosco/super-graph/compare/v0.12.36...v0.12.37
+[v0.12.36]: https://github.com/dosco/super-graph/compare/v0.12.35...v0.12.36
+[v0.12.35]: https://github.com/dosco/super-graph/compare/v0.12.34...v0.12.35
+[v0.12.34]: https://github.com/dosco/super-graph/compare/v0.12.33...v0.12.34
+[v0.12.33]: https://github.com/dosco/super-graph/compare/v0.12.32...v0.12.33
+[v0.12.32]: https://github.com/dosco/super-graph/compare/v0.12.31...v0.12.32
+[v0.12.31]: https://github.com/dosco/super-graph/compare/v0.12.30...v0.12.31
+[v0.12.30]: https://github.com/dosco/super-graph/compare/v0.12.29...v0.12.30
+[v0.12.29]: https://github.com/dosco/super-graph/compare/v0.12.28...v0.12.29
+[v0.12.28]: https://github.com/dosco/super-graph/compare/v0.12.27...v0.12.28
+[v0.12.27]: https://github.com/dosco/super-graph/compare/v0.12.26...v0.12.27
+[v0.12.26]: https://github.com/dosco/super-graph/compare/v0.12.25...v0.12.26
+[v0.12.25]: https://github.com/dosco/super-graph/compare/v0.12.24...v0.12.25
+[v0.12.24]: https://github.com/dosco/super-graph/compare/v0.12.23...v0.12.24
+[v0.12.23]: https://github.com/dosco/super-graph/compare/v0.12.22...v0.12.23
+[v0.12.22]: https://github.com/dosco/super-graph/compare/v0.12.21...v0.12.22
+[v0.12.21]: https://github.com/dosco/super-graph/compare/v0.12.20...v0.12.21
+[v0.12.20]: https://github.com/dosco/super-graph/compare/v0.12.19...v0.12.20
+[v0.12.19]: https://github.com/dosco/super-graph/compare/v0.12.18...v0.12.19
+[v0.12.18]: https://github.com/dosco/super-graph/compare/v0.12.17...v0.12.18
+[v0.12.17]: https://github.com/dosco/super-graph/compare/v0.12.16...v0.12.17
+[v0.12.16]: https://github.com/dosco/super-graph/compare/v0.12.15...v0.12.16
+[v0.12.15]: https://github.com/dosco/super-graph/compare/v0.12.14...v0.12.15
+[v0.12.14]: https://github.com/dosco/super-graph/compare/v0.12.13...v0.12.14
+[v0.12.13]: https://github.com/dosco/super-graph/compare/v0.12.12...v0.12.13
+[v0.12.12]: https://github.com/dosco/super-graph/compare/v0.12.11...v0.12.12
+[v0.12.11]: https://github.com/dosco/super-graph/compare/v0.12.10...v0.12.11
+[v0.12.10]: https://github.com/dosco/super-graph/compare/v0.12.9...v0.12.10
+[v0.12.9]: https://github.com/dosco/super-graph/compare/v0.12.8...v0.12.9
+[v0.12.8]: https://github.com/dosco/super-graph/compare/v0.12.7...v0.12.8
+[v0.12.7]: https://github.com/dosco/super-graph/compare/v0.12.6...v0.12.7
 [v0.12.6]: https://github.com/dosco/super-graph/compare/v0.12.5...v0.12.6
 [v0.12.5]: https://github.com/dosco/super-graph/compare/v0.12.4...v0.12.5
 [v0.12.4]: https://github.com/dosco/super-graph/compare/v0.12.3...v0.12.4

@@ -1,10 +1,12 @@
 # stage: 1
 FROM node:10 as react-build
 WORKDIR /web
-COPY web/ ./
+COPY /internal/serv/web/ ./
 RUN yarn
 RUN yarn build

 # stage: 2
 FROM golang:1.14-alpine as go-build
 RUN apk update && \
@@ -22,8 +24,8 @@ RUN chmod 755 /usr/local/bin/sops
 WORKDIR /app
 COPY . /app

-RUN mkdir -p /app/web/build
-COPY --from=react-build /web/build/ ./web/build/
+RUN mkdir -p /app/internal/serv/web/build
+COPY --from=react-build /web/build/ ./internal/serv/web/build

 RUN go mod vendor
 RUN make build
@@ -31,6 +33,8 @@ RUN echo "Compressing binary, will take a bit of time..." && \
 	upx --ultra-brute -qq super-graph && \
 	upx -t super-graph

 # stage: 3
 FROM alpine:latest
 WORKDIR /
@@ -41,7 +45,7 @@ RUN mkdir -p /config
 COPY --from=go-build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
 COPY --from=go-build /app/config/* /config/
 COPY --from=go-build /app/super-graph .
-COPY --from=go-build /app/scripts/start.sh .
+COPY --from=go-build /app/internal/scripts/start.sh .
 COPY --from=go-build /usr/local/bin/sops .
 RUN chmod +x /super-graph
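
The path changes across all three stages line up with the library refactor in commit 7831d27345 above: the web UI and the start script now live under internal/, so the Docker build copies them from /internal/serv/web and /app/internal/scripts instead of the old top-level web/ and scripts/ directories.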

@@ -12,30 +12,30 @@ endif
 export GO111MODULE := on

 # Build-time Go variables
-version = github.com/dosco/super-graph/serv.version
-gitBranch = github.com/dosco/super-graph/serv.gitBranch
-lastCommitSHA = github.com/dosco/super-graph/serv.lastCommitSHA
-lastCommitTime = github.com/dosco/super-graph/serv.lastCommitTime
+version = github.com/dosco/super-graph/internal/serv.version
+gitBranch = github.com/dosco/super-graph/internal/serv.gitBranch
+lastCommitSHA = github.com/dosco/super-graph/internal/serv.lastCommitSHA
+lastCommitTime = github.com/dosco/super-graph/internal/serv.lastCommitTime

 BUILD_FLAGS ?= -ldflags '-s -w -X ${lastCommitSHA}=${BUILD} -X "${lastCommitTime}=${BUILD_DATE}" -X "${version}=${BUILD_VERSION}" -X ${gitBranch}=${BUILD_BRANCH}'

 .PHONY: all build gen clean test run lint changlog release version help $(PLATFORMS)

 test:
-	@go test -v ./...
+	@go test -v -short -race ./...

 BIN_DIR := $(GOPATH)/bin
 GORICE := $(BIN_DIR)/rice
 GOLANGCILINT := $(BIN_DIR)/golangci-lint
 GITCHGLOG := $(BIN_DIR)/git-chglog
-WEB_BUILD_DIR := ./web/build/manifest.json
+WEB_BUILD_DIR := ./internal/serv/web/build/manifest.json

 $(GORICE):
 	@GO111MODULE=off go get -u github.com/GeertJohan/go.rice/rice

 $(WEB_BUILD_DIR):
-	@echo "First install Yarn and create a build of the web UI found under ./web"
-	@echo "Command: cd web && yarn && yarn build"
+	@echo "First install Yarn and create a build of the web UI then re-run make install"
+	@echo "Run this command: yarn --cwd internal/serv/web/ build"
 	@exit 1

 $(GITCHGLOG):
@@ -45,7 +45,7 @@ changelog: $(GITCHGLOG)
 	@git-chglog $(ARGS)

 $(GOLANGCILINT):
-	@GO111MODULE=off curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh| sh -s -- -b $(GOPATH)/bin v1.21.0
+	@GO111MODULE=off curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh| sh -s -- -b $(GOPATH)/bin v1.25.1

 lint: $(GOLANGCILINT)
 	@golangci-lint run ./... --skip-dirs-use-default
@@ -57,7 +57,7 @@ os = $(word 1, $@)
 $(PLATFORMS): lint test
 	@mkdir -p release
-	@GOOS=$(os) GOARCH=amd64 go build $(BUILD_FLAGS) -o release/$(BINARY)-$(BUILD_VERSION)-$(os)-amd64
+	@GOOS=$(os) GOARCH=amd64 go build $(BUILD_FLAGS) -o release/$(BINARY)-$(BUILD_VERSION)-$(os)-amd64 main.go

 release: windows linux darwin
@@ -69,7 +69,7 @@ gen: $(GORICE) $(WEB_BUILD_DIR)
 	@go generate ./...

 $(BINARY): clean
-	@go build $(BUILD_FLAGS) -o $(BINARY)
+	@go build $(BUILD_FLAGS) -o $(BINARY) main.go

 clean:
 	@rm -f $(BINARY)
@@ -77,11 +77,10 @@ clean:
 run: clean
 	@go run $(BUILD_FLAGS) main.go $(ARGS)

-install: gen
-	@echo $(GOPATH)
+install: clean build
 	@echo "Commit Hash: `git rev-parse HEAD`"
 	@echo "Old Hash: `shasum $(GOPATH)/bin/$(BINARY) 2>/dev/null | cut -c -32`"
-	@go install $(BUILD_FLAGS)
+	@mv $(BINARY) $(GOPATH)/bin/$(BINARY)
 	@echo "New Hash:" `shasum $(GOPATH)/bin/$(BINARY) 2>/dev/null | cut -c -32`

 uninstall: clean
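
Two things are going on in this Makefile diff. First, the ldflags variable paths and WEB_BUILD_DIR move under internal/serv, matching the library refactor. Second, install no longer runs go install: it rebuilds the binary from main.go and moves it into $(GOPATH)/bin, with the Old Hash / New Hash shasum echoes left in so you can verify that the installed binary actually changed. The diff also pins a newer golangci-lint (v1.21.0 to v1.25.1) and runs tests with -short -race.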

@@ -1,73 +1,105 @@
-<!-- <a href="https://supergraph.dev"><img src="https://supergraph.dev/hologram.svg" width="100" height="100" align="right" /></a> -->
-<img src="docs/.vuepress/public/super-graph.png" width="250" />
+<img src="docs/website/static/img/super-graph-logo.svg" width="80" />

-### Build web products faster. Secure high performance GraphQL
+# Super Graph - Fetch data without code!

-![Apache Public License 2.0](https://img.shields.io/github/license/dosco/super-graph.svg)
-![Docker build](https://img.shields.io/docker/cloud/build/dosco/super-graph.svg)
-![Cloud native](https://img.shields.io/badge/cloud--native-enabled-blue.svg)
-[![Discord Chat](https://img.shields.io/discord/628796009539043348.svg)](https://discord.gg/6pSWCTZ)
+[![GoDoc](https://img.shields.io/badge/godoc-reference-5272B4.svg)](https://pkg.go.dev/github.com/dosco/super-graph/core?tab=doc)
+![Apache 2.0](https://img.shields.io/github/license/dosco/super-graph.svg?style=flat-square)
+![Docker build](https://img.shields.io/docker/cloud/build/dosco/super-graph.svg?style=flat-square)
+[![Discord Chat](https://img.shields.io/discord/628796009539043348.svg)](https://discord.gg/6pSWCTZ)

-## What is Super Graph
+Super Graph gives you a high performance GraphQL API without you having to write any code. GraphQL is automagically compiled into an efficient SQL query. Use it either as a library or a standalone service.

-Is designed to 100x your developer productivity. Super Graph will instantly and without you writing code provide you a high performance and secure GraphQL API for Postgres DB. GraphQL queries are translated into a single fast SQL query. No more writing API code as you develop
-your web frontend just make the query you need and Super Graph will do the rest.
-
-Super Graph has a rich feature set like integrating with your existing Ruby on Rails apps, joining your DB with data from remote APIs, role and attribute based access control, support for JWT tokens, built-in DB mutations and seeding, and a lot more.
-
-![GraphQL](docs/.vuepress/public/graphql.png?raw=true "")
+## Using it as a service
+
+```console
+go get github.com/dosco/super-graph
+super-graph new <app_name>
+```
+
+## Using it in your own code
+
+```console
+go get github.com/dosco/super-graph/core
+```
+
+```golang
+package main
+
+import (
+	"database/sql"
+	"fmt"
+	"time"
+
+	"github.com/dosco/super-graph/core"
+	_ "github.com/jackc/pgx/v4/stdlib"
+)
+
+func main() {
+	db, err := sql.Open("pgx", "postgres://postgrs:@localhost:5432/example_db")
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	sg, err := core.NewSuperGraph(nil, db)
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	query := `
+		query {
+			posts {
+				id
+				title
+			}
+		}`
+
+	res, err := sg.GraphQL(context.Background(), query, nil)
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	fmt.Println(string(res.Data))
+}
+```

-## The story of Super Graph?
+## About Super Graph

-After working on several products through my career I find that we spend way too much time on building API backends. Most APIs also require constant updating, this costs real time and money.
+After working on several products through my career I found that we spend way too much time on building API backends. Most APIs also need constant updating, and this costs time and money.

 It's always the same thing, figure out what the UI needs then build an endpoint for it. Most API code involves struggling with an ORM to query a database and mangle the data into a shape that the UI expects to see.

 I didn't want to write this code anymore, I wanted the computer to do it. Enter GraphQL, to me it sounded great, but it still required me to write all the same database query code.

 Having worked with compilers before I saw this as a compiler problem. Why not build a compiler that converts GraphQL to highly efficient SQL.

-This compiler is what sits at the heart of Super Graph with layers of useful functionality around it like authentication, remote joins, rails integration, database migrations and everything else needed for you to build production ready apps with it.
+This compiler is what sits at the heart of Super Graph, with layers of useful functionality around it like authentication, remote joins, rails integration, database migrations, and everything else needed for you to build production-ready apps with it.

 ## Features

 - Complex nested queries and mutations
 - Auto learns database tables and relationships
-- Role and Attribute based access control
-- Full text search and aggregations
+- Role and Attribute-based access control
+- Opaque cursor-based efficient pagination
+- Full-text search and aggregations
 - JWT tokens supported (Auth0, etc)
 - Join database queries with remote REST APIs
 - Also works with existing Ruby-On-Rails apps
 - Rails authentication supported (Redis, Memcache, Cookie)
 - A simple config file
-- High performance GO codebase
+- High performance Go codebase
 - Tiny docker image and low memory requirements
 - Fuzz tested for security
 - Database migrations tool
 - Database seeding tool
 - Works with Postgres and YugabyteDB
+- OpenCensus Support: Zipkin, Prometheus, X-Ray, Stackdriver

-## Get started
-
-```
-git clone https://github.com/dosco/super-graph
-cd ./super-graph
-make install
-super-graph new <app_name>
-```

 ## Documentation

 [supergraph.dev](https://supergraph.dev)

-## Contact me
+## Reach out

-I'm happy to help you deploy Super Graph so feel free to reach out over
-Twitter or Discord.
+We're happy to help you leverage Super Graph reach out if you have questions

 [twitter/dosco](https://twitter.com/dosco)

@@ -78,5 +110,3 @@ Twitter or Discord.

 [Apache Public License 2.0](https://opensource.org/licenses/Apache-2.0)

 Copyright (c) 2019-present Vikram Rangnekar
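
One nit worth flagging: the new README's Go example doesn't compile as printed. It imports time without using it, calls log.Fatal and context.Background without importing log or context, and the connection string's user postgrs looks like a typo for postgres. A corrected version of the same sketch (the connection string is still an assumed local database):

package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	"github.com/dosco/super-graph/core"
	_ "github.com/jackc/pgx/v4/stdlib" // registers the "pgx" driver
)

func main() {
	db, err := sql.Open("pgx", "postgres://postgres:@localhost:5432/example_db")
	if err != nil {
		log.Fatal(err)
	}

	sg, err := core.NewSuperGraph(nil, db)
	if err != nil {
		log.Fatal(err)
	}

	query := `
	query {
		posts {
			id
			title
		}
	}`

	res, err := sg.GraphQL(context.Background(), query, nil)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(string(res.Data))
}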

@@ -2,13 +2,13 @@ app_name: "Super Graph Development"
 host_port: 0.0.0.0:8080
 web_ui: true

-# debug, info, warn, error, fatal, panic
+# debug, error, warn, info, none
 log_level: "debug"

 # enable or disable http compression (uses gzip)
 http_compress: true

 # When production mode is 'true' only queries
 # from the allow list are permitted.
 # When it's 'false' all queries are saved to the
 # the allow list in ./config/allow.list
@@ -30,21 +30,28 @@ reload_on_config_change: true
 # seed_file: seed.js

 # Path pointing to where the migrations can be found
-migrations_path: ./config/migrations
+migrations_path: ./migrations

 # Secret key for general encryption operations like
 # encrypting the cursor data
 secret_key: supercalifajalistics

 # CORS: A list of origins a cross-domain request can be executed from.
 # If the special * value is present in the list, all origins will be allowed.
 # An origin may contain a wildcard (*) to replace 0 or more
 # characters (i.e.: http://*.domain.com).
 cors_allowed_origins: ["*"]

 # Debug Cross Origin Resource Sharing requests
 cors_debug: true

+# Default API path prefix is /api you can change it if you like
+# api_path: "/data"
+
+# Cache-Control header can help cache queries if your CDN supports cache-control
+# on POST requests (does not work with not mutations)
+# cache_control: "public, max-age=300, s-maxage=600"
+
 # Postgres related environment Variables
 # SG_DATABASE_HOST
 # SG_DATABASE_PORT
@@ -61,13 +68,25 @@ cors_debug: true
 #   person: people
 #   sheep: sheep

+# open opencensus tracing and metrics
+# telemetry:
+#   debug: true
+#   metrics:
+#     exporter: "prometheus"
+#   tracing:
+#     exporter: "zipkin"
+#     endpoint: "http://zipkin:9411/api/v2/spans"
+#     sample: 0.2
+#     include_query: false
+#     include_params: false
+
 auth:
   # Can be 'rails' or 'jwt'
   type: rails
   cookie: _app_session

   # Comment this out if you want to disable setting
   # the user_id via a header for testing.
   # Disable in production
   creds_in_header: true
@@ -84,7 +103,6 @@ auth:
   # password: ""
   # max_idle: 80
   # max_active: 12000
-
   # In most cases you don't need these
   # salt: "encrypted cookie"
   # sign_salt: "signed encrypted cookie"
@@ -116,18 +134,18 @@ database:
   # database ping timeout is used for db health checking
   ping_timeout: 1m

 # Define additional variables here to be used with filters
 variables:
   admin_account_id: "5"

 # Field and table names that you wish to block
 blocklist:
   - ar_internal_metadata
   - schema_migrations
   - secret
   - password
   - encrypted
   - token

 tables:
   - name: customers
@@ -137,7 +155,7 @@ tables:
       url: http://rails_app:3000/stripe/$id
       path: data
       # debug: true
       pass_headers:
         - cookie
       set_headers:
         - name: Host
@@ -158,7 +176,6 @@ tables:
       - name: email
         related_to: products.name
-
 roles_query: "SELECT * FROM users WHERE id = $user_id"

 roles:
@@ -167,12 +184,12 @@ roles:
   - name: products
     query:
       limit: 10
-      columns: ["id", "name", "description" ]
+      columns: ["id", "name", "description"]
       aggregation: false

     insert:
       block: false

     update:
       block: false

@@ -6,13 +6,13 @@ app_name: "Super Graph Production"
 host_port: 0.0.0.0:8080
 web_ui: false

-# debug, info, warn, error, fatal, panic, disable
+# debug, error, warn, info, none
 log_level: "info"

 # enable or disable http compression (uses gzip)
 http_compress: true

 # When production mode is 'true' only queries
 # from the allow list are permitted.
 # When it's 'false' all queries are saved to the
 # the allow list in ./config/allow.list
@@ -30,9 +30,9 @@ enable_tracing: true
 # seed_file: seed.js

 # Path pointing to where the migrations can be found
-# migrations_path: migrations
+# migrations_path: ./migrations

 # Secret key for general encryption operations like
 # encrypting the cursor data
 # secret_key: supercalifajalistics
@@ -57,11 +57,20 @@ database:
   password: postgres
   #pool_size: 10
   #max_retries: 0
   #log_level: "debug"

   # Set session variable "user.id" to the user id
   # Enable this if you need the user id in triggers, etc
   set_user_id: false

   # database ping timeout is used for db health checking
   ping_timeout: 5m
+
+# open opencensus tracing and metrics
+# telemetry:
+#   debug: false
+#   metrics:
+#     exporter: "prometheus"
+#   tracing:
+#     exporter: "zipkin"
+#     endpoint: "http://zipkin:9411/api/v2/spans"
+#     sample: 0.6

core/api.go (new file)

@@ -0,0 +1,224 @@
// Package core provides the primary API to include and use Super Graph with your own code.
// For detailed documentation visit https://supergraph.dev
//
// Example usage:
/*
	package main

	import (
		"database/sql"
		"fmt"
		"time"

		"github.com/dosco/super-graph/core"
		_ "github.com/jackc/pgx/v4/stdlib"
	)

	func main() {
		db, err := sql.Open("pgx", "postgres://postgrs:@localhost:5432/example_db")
		if err != nil {
			log.Fatal(err)
		}

		sg, err := core.NewSuperGraph(nil, db)
		if err != nil {
			log.Fatal(err)
		}

		query := `
			query {
				posts {
					id
					title
				}
			}`

		res, err := sg.GraphQL(context.Background(), query, nil)
		if err != nil {
			log.Fatal(err)
		}

		fmt.Println(string(res.Data))
	}
*/
package core

import (
	"context"
	"crypto/sha256"
	"database/sql"
	"encoding/json"
	_log "log"
	"os"

	"github.com/chirino/graphql"
	"github.com/dosco/super-graph/core/internal/allow"
	"github.com/dosco/super-graph/core/internal/crypto"
	"github.com/dosco/super-graph/core/internal/psql"
	"github.com/dosco/super-graph/core/internal/qcode"
)

type contextkey int

// Constants to set values on the context passed to the NewSuperGraph function
const (
	// Name of the authentication provider. Eg. google, github, etc
	UserIDProviderKey contextkey = iota

	// User ID value for authenticated users
	UserIDKey

	// User role if pre-defined
	UserRoleKey
)

// SuperGraph struct is an instance of the Super Graph engine. It holds all the required information,
// like database schemas, relationships, etc, that the GraphQL to SQL compiler needs to do its job.
type SuperGraph struct {
	conf        *Config
	db          *sql.DB
	log         *_log.Logger
	dbinfo      *psql.DBInfo
	schema      *psql.DBSchema
	allowList   *allow.List
	encKey      [32]byte
	prepared    map[string]*preparedItem
	roles       map[string]*Role
	getRole     *sql.Stmt
	rmap        map[uint64]*resolvFn
	abacEnabled bool
	anonExists  bool
	qc          *qcode.Compiler
	pc          *psql.Compiler
	ge          *graphql.Engine
}

// NewSuperGraph creates the SuperGraph struct, this involves querying the database to learn its
// schemas and relationships
func NewSuperGraph(conf *Config, db *sql.DB) (*SuperGraph, error) {
	return newSuperGraph(conf, db, nil)
}

// newSuperGraph helps with writing tests and benchmarks
func newSuperGraph(conf *Config, db *sql.DB, dbinfo *psql.DBInfo) (*SuperGraph, error) {
	if conf == nil {
		conf = &Config{}
	}

	sg := &SuperGraph{
		conf:   conf,
		db:     db,
		dbinfo: dbinfo,
		log:    _log.New(os.Stdout, "", 0),
	}

	if err := sg.initConfig(); err != nil {
		return nil, err
	}

	if err := sg.initCompilers(); err != nil {
		return nil, err
	}

	if err := sg.initAllowList(); err != nil {
		return nil, err
	}

	if err := sg.initPrepared(); err != nil {
		return nil, err
	}

	if err := sg.initResolvers(); err != nil {
		return nil, err
	}

	if err := sg.initGraphQLEgine(); err != nil {
		return nil, err
	}

	if len(conf.SecretKey) != 0 {
		sk := sha256.Sum256([]byte(conf.SecretKey))
		conf.SecretKey = ""
		sg.encKey = sk
	} else {
		sg.encKey = crypto.NewEncryptionKey()
	}

	return sg, nil
}

// Result struct contains the output of the GraphQL function. This includes the resulting json from the
// database query and any error information
type Result struct {
	op         qcode.QType
	name       string
	sql        string
	role       string
	Error      string          `json:"message,omitempty"`
	Data       json.RawMessage `json:"data,omitempty"`
	Extensions *extensions     `json:"extensions,omitempty"`
}

// GraphQL function is called on the SuperGraph struct to convert the provided GraphQL query into an
// SQL query and execute it on the database. In production mode prepared statements are directly used
// and no query compiling takes place.
//
// In developer mode all named queries are saved into a file `allow.list` and in production mode only
// queries from this file can be run.
func (sg *SuperGraph) GraphQL(c context.Context, query string, vars json.RawMessage) (*Result, error) {
	var res Result

	res.op = qcode.GetQType(query)
	res.name = allow.QueryName(query)

	// use the chirino/graphql library for introspection queries
	// disabled when allow list is enforced
	if !sg.conf.UseAllowList && res.name == "IntrospectionQuery" {
		r := sg.ge.ServeGraphQL(&graphql.Request{Query: query})
		res.Data = r.Data

		if r.Error() != nil {
			res.Error = r.Error().Error()
		}
		return &res, r.Error()
	}

	ct := scontext{Context: c, sg: sg, query: query, vars: vars, res: res}

	if len(vars) <= 2 {
		ct.vars = nil
	}

	if keyExists(c, UserIDKey) {
		ct.role = "user"
	} else {
		ct.role = "anon"
	}

	data, err := ct.execQuery()
	if err != nil {
		return &ct.res, err
	}

	ct.res.Data = json.RawMessage(data)

	return &ct.res, nil
}

// GraphQLSchema function returns the GraphQL schema for the underlying database connected
// to this instance of Super Graph
func (sg *SuperGraph) GraphQLSchema() (string, error) {
	return sg.ge.Schema.String(), nil
}

// Operation function returns the operation type from the query. It uses a very fast algorithm to
// extract the operation without having to parse the query.
func Operation(query string) OpType {
	return OpType(qcode.GetQType(query))
}

// Name function returns the operation name from the query. It uses a very fast algorithm to
// extract the operation name without having to parse the query.
func Name(query string) string {
	return allow.QueryName(query)
}
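
For callers that only need routing decisions, the package-level Operation and Name helpers at the end of the file avoid constructing a SuperGraph at all. A small illustrative use (output details assumed, not taken from the repo's docs):

package main

import (
	"fmt"

	"github.com/dosco/super-graph/core"
)

func main() {
	query := `query getPosts { posts { id title } }`

	// Both helpers use the fast scanners described above instead of a full parse.
	fmt.Println(core.Name(query))      // prints the operation name: getPosts
	fmt.Println(core.Operation(query)) // prints the numeric core.OpType value (unless OpType implements Stringer)
}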

core/api_test.go (new file)

@@ -0,0 +1,62 @@
package core

import (
	"context"
	"fmt"
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/dosco/super-graph/core/internal/psql"
)

func BenchmarkGraphQL(b *testing.B) {
	ct := context.WithValue(context.Background(), UserIDKey, "1")

	db, _, err := sqlmock.New()
	if err != nil {
		b.Fatal(err)
	}
	defer db.Close()

	// mock.ExpectQuery(`^SELECT jsonb_build_object`).WithArgs()

	c := &Config{}
	sg, err := newSuperGraph(c, db, psql.GetTestDBInfo())
	if err != nil {
		b.Fatal(err)
	}

	query := `
	query {
		products {
			id
			name
			user {
				full_name
				phone
				email
			}
			customers {
				id
				email
			}
		}
		users {
			id
			name
		}
	}`

	b.ResetTimer()
	b.ReportAllocs()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			_, err = sg.GraphQL(ct, query, nil)
		}
	})

	fmt.Println(err)

	//fmt.Println(mock.ExpectationsWereMet())
}
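
Because the benchmark is wired to a go-sqlmock connection and a canned psql.GetTestDBInfo() schema, it measures the GraphQL-to-SQL compile-and-execute path without a live Postgres. With the ExpectQuery line commented out the mock has no matching expectation, so each iteration's GraphQL call presumably returns an error, which the final fmt.Println(err) surfaces. It can be run with the usual Go tooling, e.g. go test -bench=GraphQL -benchmem ./core.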

@@ -1,8 +1,7 @@
-package serv
+package core

 import (
 	"bytes"
-	"context"
 	"encoding/json"
 	"fmt"
 	"io"
@@ -10,29 +9,31 @@ import (
 	"github.com/dosco/super-graph/jsn"
 )

-func argMap(ctx context.Context, vars []byte) func(w io.Writer, tag string) (int, error) {
+// argMap function is used to string replace variables with values by
+// the fasttemplate code
+func (c *scontext) argMap() func(w io.Writer, tag string) (int, error) {
 	return func(w io.Writer, tag string) (int, error) {
 		switch tag {
 		case "user_id_provider":
-			if v := ctx.Value(userIDProviderKey); v != nil {
+			if v := c.Value(UserIDProviderKey); v != nil {
 				return io.WriteString(w, v.(string))
 			}
 			return 0, argErr("user_id_provider")

 		case "user_id":
-			if v := ctx.Value(userIDKey); v != nil {
+			if v := c.Value(UserIDKey); v != nil {
 				return io.WriteString(w, v.(string))
 			}
 			return 0, argErr("user_id")

 		case "user_role":
-			if v := ctx.Value(userRoleKey); v != nil {
+			if v := c.Value(UserRoleKey); v != nil {
 				return io.WriteString(w, v.(string))
 			}
 			return 0, argErr("user_role")
 		}

-		fields := jsn.Get(vars, [][]byte{[]byte(tag)})
+		fields := jsn.Get(c.vars, [][]byte{[]byte(tag)})

 		if len(fields) == 0 {
 			return 0, argErr(tag)
@@ -49,7 +50,7 @@
 		if bytes.EqualFold(v, []byte("null")) {
 			return io.WriteString(w, ``)
 		}

-		v1, err := decrypt(string(fields[0].Value))
+		v1, err := c.sg.decrypt(string(fields[0].Value))
 		if err != nil {
 			return 0, err
 		}
@@ -57,18 +58,21 @@
 			return w.Write(v1)
 		}

-		return w.Write(escQuote(fields[0].Value))
+		return w.Write(escSQuote(fields[0].Value))
 	}
 }

-func argList(ctx *coreContext, args [][]byte) ([]interface{}, error) {
+// argList function is used to create a list of arguments to pass
+// to a prepared statement. FYI no escaping of single quotes is
+// needed here
+func (c *scontext) argList(args [][]byte) ([]interface{}, error) {
 	vars := make([]interface{}, len(args))

 	var fields map[string]json.RawMessage
 	var err error

-	if len(ctx.req.Vars) != 0 {
-		fields, _, err = jsn.Tree(ctx.req.Vars)
+	if len(c.vars) != 0 {
+		fields, _, err = jsn.Tree(c.vars)

 		if err != nil {
 			return nil, err
@@ -79,21 +83,21 @@
 		av := args[i]
 		switch {
 		case bytes.Equal(av, []byte("user_id")):
-			if v := ctx.Value(userIDKey); v != nil {
+			if v := c.Value(UserIDKey); v != nil {
 				vars[i] = v.(string)
 			} else {
 				return nil, argErr("user_id")
 			}

 		case bytes.Equal(av, []byte("user_id_provider")):
-			if v := ctx.Value(userIDProviderKey); v != nil {
+			if v := c.Value(UserIDProviderKey); v != nil {
 				vars[i] = v.(string)
 			} else {
 				return nil, argErr("user_id_provider")
 			}

 		case bytes.Equal(av, []byte("user_role")):
-			if v := ctx.Value(userRoleKey); v != nil {
+			if v := c.Value(UserRoleKey); v != nil {
 				vars[i] = v.(string)
 			} else {
 				return nil, argErr("user_role")
@@ -101,7 +105,7 @@
 		case bytes.Equal(av, []byte("cursor")):
 			if v, ok := fields["cursor"]; ok && v[0] == '"' {
-				v1, err := decrypt(string(v[1 : len(v)-1]))
+				v1, err := c.sg.decrypt(string(v[1 : len(v)-1]))
 				if err != nil {
 					return nil, err
 				}
@@ -114,7 +118,7 @@
 			if v, ok := fields[string(av)]; ok {
 				switch v[0] {
 				case '[', '{':
-					vars[i] = escQuote(v)
+					vars[i] = v
 				default:
 					var val interface{}
 					if err := json.Unmarshal(v, &val); err != nil {
@@ -133,27 +137,25 @@
 	return vars, nil
 }

-func escQuote(b []byte) []byte {
-	f := false
-	for i := range b {
-		if b[i] == '\'' {
-			f = true
-			break
-		}
-	}
-	if !f {
-		return b
-	}
-
-	buf := &bytes.Buffer{}
+//
+func escSQuote(b []byte) []byte {
+	var buf *bytes.Buffer
 	s := 0
 	for i := range b {
 		if b[i] == '\'' {
+			if buf == nil {
+				buf = &bytes.Buffer{}
+			}
 			buf.Write(b[s:i])
 			buf.WriteString(`''`)
 			s = i + 1
 		}
 	}
+	if buf == nil {
+		return b
+	}
+
 	l := len(b)
 	if s < (l - 1) {
 		buf.Write(b[s:l])

core/args_test.go (new file)

@@ -0,0 +1,13 @@
package core
import "testing"
func TestEscQuote(t *testing.T) {
val := "That's the worst, don''t be calling me's again"
exp := "That''s the worst, don''''t be calling me''s again"
ret := escSQuote([]byte(val))
if exp != string(ret) {
t.Errorf("escSQuote failed: %s", string(ret))
}
}
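
The test above pins down the single-quote escaping convention used when variable values are interpolated into SQL: each `'` is doubled per the SQL standard, so a value like `O'Reilly` would be written out as `O''Reilly` (an illustrative value, not one of the test fixtures).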


@@ -1,4 +1,4 @@
-package serv
+package core

 import (
     "bytes"

@@ -7,42 +7,42 @@ import (
     "fmt"
     "io"

-    "github.com/dosco/super-graph/psql"
-    "github.com/dosco/super-graph/qcode"
+    "github.com/dosco/super-graph/core/internal/psql"
+    "github.com/dosco/super-graph/core/internal/qcode"
 )

 type stmt struct {
-    role    *configRole
+    role    *Role
     qc      *qcode.QCode
     skipped uint32
     sql     string
 }

-func buildStmt(qt qcode.QType, gql, vars []byte, role string) ([]stmt, error) {
+func (sg *SuperGraph) buildStmt(qt qcode.QType, query, vars []byte, role string) ([]stmt, error) {
     switch qt {
     case qcode.QTMutation:
-        return buildRoleStmt(gql, vars, role)
+        return sg.buildRoleStmt(query, vars, role)
     case qcode.QTQuery:
         if role == "anon" {
-            return buildRoleStmt(gql, vars, "anon")
+            return sg.buildRoleStmt(query, vars, "anon")
         }
-        if conf.isABACEnabled() {
-            return buildMultiStmt(gql, vars)
+        if sg.abacEnabled {
+            return sg.buildMultiStmt(query, vars)
         }
-        return buildRoleStmt(gql, vars, "user")
+        return sg.buildRoleStmt(query, vars, "user")
     default:
         return nil, fmt.Errorf("unknown query type '%d'", qt)
     }
 }

-func buildRoleStmt(gql, vars []byte, role string) ([]stmt, error) {
-    ro, ok := conf.roles[role]
+func (sg *SuperGraph) buildRoleStmt(query, vars []byte, role string) ([]stmt, error) {
+    ro, ok := sg.roles[role]
     if !ok {
-        return nil, fmt.Errorf(`roles '%s' not defined in config`, role)
+        return nil, fmt.Errorf(`roles '%s' not defined in c.sg.config`, role)
     }
     var vm map[string]json.RawMessage

@@ -54,21 +54,15 @@ func buildRoleStmt(gql, vars []byte, role string) ([]stmt, error) {
         }
     }
-    qc, err := qcompile.Compile(gql, ro.Name)
+    qc, err := sg.qc.Compile(query, ro.Name)
     if err != nil {
         return nil, err
     }
-    // For the 'anon' role in production only compile
-    // queries for tables defined in the config file.
-    if conf.Production && ro.Name == "anon" && !hasTablesWithConfig(qc, ro) {
-        return nil, errors.New("query contains tables with no 'anon' role config")
-    }
     stmts := []stmt{stmt{role: ro, qc: qc}}
     w := &bytes.Buffer{}
-    skipped, err := pcompile.Compile(qc, w, psql.Variables(vm))
+    skipped, err := sg.pc.Compile(qc, w, psql.Variables(vm))
     if err != nil {
         return nil, err
     }

@@ -79,7 +73,7 @@ func buildRoleStmt(gql, vars []byte, role string) ([]stmt, error) {
     return stmts, nil
 }

-func buildMultiStmt(gql, vars []byte) ([]stmt, error) {
+func (sg *SuperGraph) buildMultiStmt(query, vars []byte) ([]stmt, error) {
     var vm map[string]json.RawMessage
     var err error

@@ -89,28 +83,29 @@ func buildMultiStmt(gql, vars []byte) ([]stmt, error) {
         }
     }
-    if len(conf.RolesQuery) == 0 {
-        return buildRoleStmt(gql, vars, "user")
+    if len(sg.conf.RolesQuery) == 0 {
+        return nil, errors.New("roles_query not defined")
     }
-    stmts := make([]stmt, 0, len(conf.Roles))
+    stmts := make([]stmt, 0, len(sg.conf.Roles))
     w := &bytes.Buffer{}
-    for i := 0; i < len(conf.Roles); i++ {
-        role := &conf.Roles[i]
+    for i := 0; i < len(sg.conf.Roles); i++ {
+        role := &sg.conf.Roles[i]
+
+        // skip anon as it's not included in the combined multi-statement
         if role.Name == "anon" {
             continue
         }
-        qc, err := qcompile.Compile(gql, role.Name)
+        qc, err := sg.qc.Compile(query, role.Name)
         if err != nil {
             return nil, err
         }
         stmts = append(stmts, stmt{role: role, qc: qc})
-        skipped, err := pcompile.Compile(qc, w, psql.Variables(vm))
+        skipped, err := sg.pc.Compile(qc, w, psql.Variables(vm))
         if err != nil {
             return nil, err
         }

@@ -121,7 +116,7 @@ func buildMultiStmt(gql, vars []byte) ([]stmt, error) {
         w.Reset()
     }
-    sql, err := renderUserQuery(stmts, vm)
+    sql, err := sg.renderUserQuery(stmts)
     if err != nil {
         return nil, err
     }

@@ -131,8 +126,7 @@ func buildMultiStmt(gql, vars []byte) ([]stmt, error) {
 }

 //nolint: errcheck
-func renderUserQuery(
-    stmts []stmt, vars map[string]json.RawMessage) (string, error) {
+func (sg *SuperGraph) renderUserQuery(stmts []stmt) (string, error) {
     w := &bytes.Buffer{}
     io.WriteString(w, `SELECT "_sg_auth_info"."role", (CASE "_sg_auth_info"."role" `)

@@ -150,7 +144,7 @@ func renderUserQuery(
     }
     io.WriteString(w, `END) FROM (SELECT (CASE WHEN EXISTS (`)
-    io.WriteString(w, conf.RolesQuery)
+    io.WriteString(w, sg.conf.RolesQuery)
     io.WriteString(w, `) THEN `)
     io.WriteString(w, `(SELECT (CASE`)

@@ -166,22 +160,23 @@
     }
     io.WriteString(w, ` ELSE 'user' END) FROM (`)
-    io.WriteString(w, conf.RolesQuery)
+    io.WriteString(w, sg.conf.RolesQuery)
     io.WriteString(w, `) AS "_sg_auth_roles_query" LIMIT 1) `)
     io.WriteString(w, `ELSE 'anon' END) FROM (VALUES (1)) AS "_sg_auth_filler") AS "_sg_auth_info"(role) LIMIT 1; `)
     return w.String(), nil
 }

-func hasTablesWithConfig(qc *qcode.QCode, role *configRole) bool {
-    for _, id := range qc.Roots {
-        t, err := schema.GetTable(qc.Selects[id].Name)
-        if err != nil {
-            return false
-        }
-        if _, ok := role.tablesMap[t.Name]; !ok {
-            return false
-        }
-    }
-    return true
-}
+// func (sg *SuperGraph) hasTablesWithConfig(qc *qcode.QCode, role *Role) bool {
+// 	for _, id := range qc.Roots {
+// 		t, err := sg.schema.GetTable(qc.Selects[id].Name)
+// 		if err != nil {
+// 			return false
+// 		}
+// 		if r := role.GetTable(t.Name); r == nil {
+// 			return false
+// 		}
+// 	}
+// 	return true
+// }

core/config.go (new file)

@@ -0,0 +1,213 @@
package core
import (
"fmt"
"path"
"path/filepath"
"strings"
"github.com/spf13/viper"
)
// Config struct contains core specific config values
type Config struct {
// SecretKey is used to encrypt opaque values such as
// the cursor. Auto-generated if not set
SecretKey string `mapstructure:"secret_key"`
// UseAllowList (aka production mode) when set to true ensures
// only queries listed in the allow.list file can be used. All
// queries are pre-prepared so no compiling happens and things are
// very fast.
UseAllowList bool `mapstructure:"use_allow_list"`
// AllowListFile is the path to the allow list file. If not set the
// path is assumed to be the same as the config path (allow.list)
AllowListFile string `mapstructure:"allow_list_file"`
// SetUserID forces the database session variable `user.id` to
// be set to the user id. This variable can be used by triggers
// or other database functions
SetUserID bool `mapstructure:"set_user_id"`
// DefaultAllow reverses the blocked by default behaviour for queries in
// anonymous mode. (anon role)
// For example if the table `users` is not listed under the anon role then
// access to it is blocked by default for unauthenticated queries; this
// setting reverses that behavior (!!! Use with caution !!!!)
DefaultAllow bool `mapstructure:"default_allow"`
// Vars is a map of hardcoded variables that can be leveraged in your
// queries (eg variable admin_id will be $admin_id in the query)
Vars map[string]string `mapstructure:"variables"`
// Blocklist is a list of tables and columns that should be filtered
// out from any and all queries
Blocklist []string
// Tables contains all table specific configuration such as aliased tables
// creating relationships between tables, etc
Tables []Table
// RolesQuery if set enables attribute-based access control. This query
// is used to fetch the user attributes that then dynamically define the
// user's role.
RolesQuery string `mapstructure:"roles_query"`
// Roles contains all the configuration for all the roles you want to support
// `user` and `anon` are two default roles. User role is for when a user ID is
// available and Anon when it's not.
Roles []Role
// Inflections adds additional singular-to-plural mappings
// to the engine (eg. sheep: sheep)
Inflections map[string]string `mapstructure:"inflections"`
// Database schema name. Defaults to 'public'
DBSchema string `mapstructure:"db_schema"`
}
// Table struct defines a database table
type Table struct {
Name string
Table string
Blocklist []string
Remotes []Remote
Columns []Column
}
// Column struct defines a database column
type Column struct {
Name string
Type string
ForeignKey string `mapstructure:"related_to"`
}
// Remote struct defines a remote API endpoint
type Remote struct {
Name string
ID string
Path string
URL string
Debug bool
PassHeaders []string `mapstructure:"pass_headers"`
SetHeaders []struct {
Name string
Value string
} `mapstructure:"set_headers"`
}
// Role struct contains role specific access control values for all database tables
type Role struct {
Name string
Match string
Tables []RoleTable
tm map[string]*RoleTable
}
// RoleTable struct contains role specific access control values for a database table
type RoleTable struct {
Name string
ReadOnly *bool `mapstructure:"read_only"`
Query Query
Insert Insert
Update Update
Delete Delete
}
// Query struct contains access control values for query operations
type Query struct {
Limit int
Filters []string
Columns []string
DisableFunctions bool `mapstructure:"disable_functions"`
Block *bool
}
// Insert struct contains access control values for insert operations
type Insert struct {
Filters []string
Columns []string
Presets map[string]string
Block *bool
}
// Update struct contains access control values for update operations
type Update struct {
Filters []string
Columns []string
Presets map[string]string
Block *bool
}
// Delete struct contains access control values for delete operations
type Delete struct {
Filters []string
Columns []string
Block *bool
}
// ReadInConfig function reads in the config file for the environment specified in the GO_ENV
// environment variable. This is the best way to create a new Super Graph config.
func ReadInConfig(configFile string) (*Config, error) {
cpath := path.Dir(configFile)
cfile := path.Base(configFile)
vi := newViper(cpath, cfile)
if err := vi.ReadInConfig(); err != nil {
return nil, err
}
inherits := vi.GetString("inherits")
if inherits != "" {
vi = newViper(cpath, inherits)
if err := vi.ReadInConfig(); err != nil {
return nil, err
}
if vi.IsSet("inherits") {
return nil, fmt.Errorf("inherited config (%s) cannot itself inherit (%s)",
inherits,
vi.GetString("inherits"))
}
vi.SetConfigName(cfile)
if err := vi.MergeInConfig(); err != nil {
return nil, err
}
}
c := &Config{}
if err := vi.Unmarshal(&c); err != nil {
return nil, fmt.Errorf("failed to decode config, %v", err)
}
if c.AllowListFile == "" {
c.AllowListFile = path.Join(cpath, "allow.list")
}
return c, nil
}
func newViper(configPath, configFile string) *viper.Viper {
vi := viper.New()
vi.SetEnvPrefix("SG")
vi.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
vi.AutomaticEnv()
if filepath.Ext(configFile) != "" {
vi.SetConfigFile(configFile)
} else {
vi.SetConfigName(configFile)
vi.AddConfigPath(configPath)
vi.AddConfigPath("./config")
}
return vi
}
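
As a usage illustration for the config loader above, here is a minimal sketch (the `./config/dev.yml` path is an assumption for the example, not something this commit ships):

    package main

    import (
        "log"

        "github.com/dosco/super-graph/core"
    )

    func main() {
        // ReadInConfig resolves the optional "inherits" chain and applies
        // SG_-prefixed environment variable overrides via viper.
        conf, err := core.ReadInConfig("./config/dev.yml")
        if err != nil {
            log.Fatal(err)
        }
        log.Println("allow list file:", conf.AllowListFile)
    }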

core/consts.go (new file)

@@ -0,0 +1,19 @@
package core
import (
"context"
"errors"
)
const (
openVar = "{{"
closeVar = "}}"
)
var (
errNotFound = errors.New("not found in prepared statements")
)
func keyExists(ct context.Context, key contextkey) bool {
return ct.Value(key) != nil
}

core/core.go (new file)

@@ -0,0 +1,429 @@
package core
import (
"bytes"
"context"
"database/sql"
"encoding/json"
"fmt"
"time"
"github.com/dosco/super-graph/core/internal/psql"
"github.com/dosco/super-graph/core/internal/qcode"
"github.com/valyala/fasttemplate"
)
type OpType int
const (
OpQuery OpType = iota
OpMutation
)
type extensions struct {
Tracing *trace `json:"tracing,omitempty"`
}
type trace struct {
Version int `json:"version"`
StartTime time.Time `json:"startTime"`
EndTime time.Time `json:"endTime"`
Duration time.Duration `json:"duration"`
Execution execution `json:"execution"`
}
type execution struct {
Resolvers []resolver `json:"resolvers"`
}
type resolver struct {
Path []string `json:"path"`
ParentType string `json:"parentType"`
FieldName string `json:"fieldName"`
ReturnType string `json:"returnType"`
StartOffset int `json:"startOffset"`
Duration time.Duration `json:"duration"`
}
type scontext struct {
context.Context
sg *SuperGraph
query string
vars json.RawMessage
role string
res Result
}
func (sg *SuperGraph) initCompilers() error {
var err error
var schema string
if sg.conf.DBSchema == "" {
schema = "public"
} else {
schema = sg.conf.DBSchema
}
// If sg.dbinfo is not nil then it's probably set
// for tests
if sg.dbinfo == nil {
sg.dbinfo, err = psql.GetDBInfo(sg.db, schema)
if err != nil {
return err
}
}
if len(sg.dbinfo.Tables) == 0 {
return fmt.Errorf("no tables found in database (schema: %s)", schema)
}
if err = addTables(sg.conf, sg.dbinfo); err != nil {
return err
}
if err = addForeignKeys(sg.conf, sg.dbinfo); err != nil {
return err
}
sg.schema, err = psql.NewDBSchema(sg.dbinfo, getDBTableAliases(sg.conf))
if err != nil {
return err
}
sg.qc, err = qcode.NewCompiler(qcode.Config{
Blocklist: sg.conf.Blocklist,
})
if err != nil {
return err
}
if err := addRoles(sg.conf, sg.qc); err != nil {
return err
}
sg.pc = psql.NewCompiler(psql.Config{
Schema: sg.schema,
Vars: sg.conf.Vars,
})
return nil
}
func (c *scontext) execQuery() ([]byte, error) {
var data []byte
var st *stmt
var err error
if c.sg.conf.UseAllowList {
data, st, err = c.resolvePreparedSQL()
} else {
data, st, err = c.resolveSQL()
}
if err != nil {
return nil, err
}
if len(data) == 0 || st.skipped == 0 {
return data, nil
}
// return c.sg.execRemoteJoin(st, data, c.req.hdr)
return c.sg.execRemoteJoin(st, data, nil)
}
func (c *scontext) resolvePreparedSQL() ([]byte, *stmt, error) {
var tx *sql.Tx
var err error
mutation := (c.res.op == qcode.QTMutation)
useRoleQuery := c.sg.abacEnabled && mutation
useTx := useRoleQuery || c.sg.conf.SetUserID
if useTx {
if tx, err = c.sg.db.BeginTx(c, nil); err != nil {
return nil, nil, err
}
defer tx.Rollback() //nolint: errcheck
}
if c.sg.conf.SetUserID {
if err := setLocalUserID(c, tx); err != nil {
return nil, nil, err
}
}
var role string
if useRoleQuery {
if role, err = c.executeRoleQuery(tx); err != nil {
return nil, nil, err
}
} else if v := c.Value(UserRoleKey); v != nil {
role = v.(string)
} else {
role = c.role
}
c.res.role = role
ps, ok := c.sg.prepared[stmtHash(c.res.name, role)]
if !ok {
return nil, nil, errNotFound
}
c.res.sql = ps.st.sql
var root []byte
var row *sql.Row
varsList, err := c.argList(ps.args)
if err != nil {
return nil, nil, err
}
if useTx {
row = tx.Stmt(ps.sd).QueryRow(varsList...)
} else {
row = ps.sd.QueryRow(varsList...)
}
if ps.roleArg {
err = row.Scan(&role, &root)
} else {
err = row.Scan(&root)
}
if err != nil {
return nil, nil, err
}
c.role = role
if useTx {
if err := tx.Commit(); err != nil {
return nil, nil, err
}
}
if root, err = c.sg.encryptCursor(ps.st.qc, root); err != nil {
return nil, nil, err
}
return root, &ps.st, nil
}
func (c *scontext) resolveSQL() ([]byte, *stmt, error) {
var tx *sql.Tx
var err error
mutation := (c.res.op == qcode.QTMutation)
useRoleQuery := c.sg.abacEnabled && mutation
useTx := useRoleQuery || c.sg.conf.SetUserID
if useTx {
if tx, err = c.sg.db.BeginTx(c, nil); err != nil {
return nil, nil, err
}
defer tx.Rollback() //nolint: errcheck
}
if c.sg.conf.SetUserID {
if err := setLocalUserID(c, tx); err != nil {
return nil, nil, err
}
}
if useRoleQuery {
if c.role, err = c.executeRoleQuery(tx); err != nil {
return nil, nil, err
}
} else if v := c.Value(UserRoleKey); v != nil {
c.role = v.(string)
}
stmts, err := c.sg.buildStmt(c.res.op, []byte(c.query), c.vars, c.role)
if err != nil {
return nil, nil, err
}
st := &stmts[0]
t := fasttemplate.New(st.sql, openVar, closeVar)
buf := &bytes.Buffer{}
_, err = t.ExecuteFunc(buf, c.argMap())
if err != nil {
return nil, nil, err
}
finalSQL := buf.String()
// var stime time.Time
// if c.sg.conf.EnableTracing {
// stime = time.Now()
// }
var root []byte
var role string
var row *sql.Row
// defaultRole := c.role
if useTx {
row = tx.QueryRow(finalSQL)
} else {
row = c.sg.db.QueryRow(finalSQL)
}
if len(stmts) > 1 {
err = row.Scan(&role, &root)
} else {
err = row.Scan(&root)
}
c.res.sql = finalSQL
if len(role) == 0 {
c.res.role = c.role
} else {
c.res.role = role
}
if err != nil {
return nil, nil, err
}
if useTx {
if err := tx.Commit(); err != nil {
return nil, nil, err
}
}
if root, err = c.sg.encryptCursor(st.qc, root); err != nil {
return nil, nil, err
}
if c.sg.allowList.IsPersist() {
if err := c.sg.allowList.Set(c.vars, c.query, ""); err != nil {
return nil, nil, err
}
}
if len(stmts) > 1 {
if st = findStmt(role, stmts); st == nil {
return nil, nil, fmt.Errorf("invalid role '%s' returned", role)
}
}
// if c.sg.conf.EnableTracing {
// for _, id := range st.qc.Roots {
// c.addTrace(st.qc.Selects, id, stime)
// }
// }
return root, st, nil
}
func (c *scontext) executeRoleQuery(tx *sql.Tx) (string, error) {
userID := c.Value(UserIDKey)
if userID == nil {
return "anon", nil
}
var role string
row := c.sg.getRole.QueryRow(userID, c.role)
if err := row.Scan(&role); err != nil {
return "", err
}
return role, nil
}
func (r *Result) Operation() OpType {
switch r.op {
case qcode.QTQuery:
return OpQuery
case qcode.QTMutation, qcode.QTInsert, qcode.QTUpdate, qcode.QTUpsert, qcode.QTDelete:
return OpMutation
default:
return -1
}
}
func (r *Result) OperationName() string {
return r.op.String()
}
func (r *Result) QueryName() string {
return r.name
}
func (r *Result) Role() string {
return r.role
}
func (r *Result) SQL() string {
return r.sql
}
// func (c *scontext) addTrace(sel []qcode.Select, id int32, st time.Time) {
// et := time.Now()
// du := et.Sub(st)
// if c.res.Extensions == nil {
// c.res.Extensions = &extensions{&trace{
// Version: 1,
// StartTime: st,
// Execution: execution{},
// }}
// }
// c.res.Extensions.Tracing.EndTime = et
// c.res.Extensions.Tracing.Duration = du
// n := 1
// for i := id; i != -1; i = sel[i].ParentID {
// n++
// }
// path := make([]string, n)
// n--
// for i := id; ; i = sel[i].ParentID {
// path[n] = sel[i].Name
// if sel[i].ParentID == -1 {
// break
// }
// n--
// }
// tr := resolver{
// Path: path,
// ParentType: "Query",
// FieldName: sel[id].Name,
// ReturnType: "object",
// StartOffset: 1,
// Duration: du,
// }
// c.res.Extensions.Tracing.Execution.Resolvers =
// append(c.res.Extensions.Tracing.Execution.Resolvers, tr)
// }
func findStmt(role string, stmts []stmt) *stmt {
for i := range stmts {
if stmts[i].role.Name != role {
continue
}
return &stmts[i]
}
return nil
}
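
As a usage sketch grounded in the integration tests later in this diff (the query string here is illustrative, and `sg`, `ctx` and imports are assumed to be in scope), a caller drives this execution path through SuperGraph.GraphQL and inspects the Result accessors defined above:

    res, err := sg.GraphQL(ctx, `query { users { id } }`, json.RawMessage(`{}`))
    if err != nil {
        log.Fatal(err)
    }
    if res.Operation() == core.OpQuery {
        // QueryName, Role and SQL are the accessors defined above.
        log.Println(res.QueryName(), res.Role(), res.SQL())
    }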


@@ -1,4 +1,4 @@
-package serv
+package core

 /*


@@ -1,15 +1,15 @@
-package serv
+package core

 import (
     "bytes"
     "encoding/base64"

-    "github.com/dosco/super-graph/crypto"
+    "github.com/dosco/super-graph/core/internal/crypto"
+    "github.com/dosco/super-graph/core/internal/qcode"
     "github.com/dosco/super-graph/jsn"
-    "github.com/dosco/super-graph/qcode"
 )

-func encryptCursor(qc *qcode.QCode, data []byte) ([]byte, error) {
+func (sg *SuperGraph) encryptCursor(qc *qcode.QCode, data []byte) ([]byte, error) {
     var keys [][]byte
     for _, s := range qc.Selects {

@@ -39,7 +39,7 @@ func encryptCursor(qc *qcode.QCode, data []byte) ([]byte, error) {
             var buf bytes.Buffer
             if len(f.Value) > 2 {
-                v, err := crypto.Encrypt(f.Value[1:len(f.Value)-1], &internalKey)
+                v, err := crypto.Encrypt(f.Value[1:len(f.Value)-1], &sg.encKey)
                 if err != nil {
                     return nil, err
                 }

@@ -63,10 +63,10 @@ func encryptCursor(qc *qcode.QCode, data []byte) ([]byte, error) {
     return buf.Bytes(), nil
 }

-func decrypt(data string) ([]byte, error) {
+func (sg *SuperGraph) decrypt(data string) ([]byte, error) {
     v, err := base64.StdEncoding.DecodeString(data)
     if err != nil {
         return nil, err
     }
-    return crypto.Decrypt(v, &internalKey)
+    return crypto.Decrypt(v, &sg.encKey)
 }

core/db.go (new file)

@@ -0,0 +1,15 @@
package core
import (
"context"
"database/sql"
)
func setLocalUserID(c context.Context, tx *sql.Tx) error {
var err error
if v := c.Value(UserIDKey); v != nil {
_, err = tx.Exec(`SET LOCAL "user.id" = ?`, v)
}
return err
}
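
Once `SET LOCAL "user.id"` has run, database-side code in the same transaction can read the value back; a minimal sketch (current_setting is standard Postgres, but this snippet is an assumption for illustration and not part of the commit):

    // Inside the same transaction, after setLocalUserID has run.
    // Passing true as the second argument makes current_setting
    // return NULL instead of failing when "user.id" was never set.
    var uid sql.NullString
    if err := tx.QueryRow(`SELECT current_setting('user.id', true)`).Scan(&uid); err != nil {
        return err
    }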

core/init.go (new file)

@@ -0,0 +1,315 @@
package core
import (
"fmt"
"regexp"
"strings"
"unicode"
"github.com/dosco/super-graph/core/internal/psql"
"github.com/dosco/super-graph/core/internal/qcode"
"github.com/gobuffalo/flect"
)
func (sg *SuperGraph) initConfig() error {
c := sg.conf
for k, v := range c.Inflections {
flect.AddPlural(k, v)
}
// Variables: Validate and sanitize
for k, v := range c.Vars {
c.Vars[k] = sanitizeVars(v)
}
// Tables: Validate and sanitize
tm := make(map[string]struct{})
for i := 0; i < len(c.Tables); i++ {
t := &c.Tables[i]
t.Name = flect.Pluralize(strings.ToLower(t.Name))
if _, ok := tm[t.Name]; ok {
sg.conf.Tables = append(c.Tables[:i], c.Tables[i+1:]...)
sg.log.Printf("WRN duplicate table found: %s", t.Name)
}
tm[t.Name] = struct{}{}
t.Table = flect.Pluralize(strings.ToLower(t.Table))
}
sg.roles = make(map[string]*Role)
for i := 0; i < len(c.Roles); i++ {
role := &c.Roles[i]
role.Name = sanitize(role.Name)
if _, ok := sg.roles[role.Name]; ok {
c.Roles = append(c.Roles[:i], c.Roles[i+1:]...)
sg.log.Printf("WRN duplicate role found: %s", role.Name)
}
role.Match = sanitize(role.Match)
role.tm = make(map[string]*RoleTable)
for n, table := range role.Tables {
role.tm[table.Name] = &role.Tables[n]
}
sg.roles[role.Name] = role
}
// If user role not defined then create it
if _, ok := sg.roles["user"]; !ok {
ur := Role{
Name: "user",
tm: make(map[string]*RoleTable),
}
c.Roles = append(c.Roles, ur)
sg.roles["user"] = &ur
}
// If anon role is not defined then create it
if _, ok := sg.roles["anon"]; !ok {
ur := Role{
Name: "anon",
tm: make(map[string]*RoleTable),
}
c.Roles = append(c.Roles, ur)
sg.roles["anon"] = &ur
}
// Roles: validate and sanitize
c.RolesQuery = sanitizeVars(c.RolesQuery)
if c.RolesQuery == "" {
sg.log.Printf("WRN roles_query not defined: attribute based access control disabled")
}
_, userExists := sg.roles["user"]
_, sg.anonExists = sg.roles["anon"]
sg.abacEnabled = userExists && c.RolesQuery != ""
return nil
}
func getDBTableAliases(c *Config) map[string][]string {
m := make(map[string][]string, len(c.Tables))
for i := range c.Tables {
t := c.Tables[i]
if len(t.Table) == 0 || len(t.Columns) != 0 {
continue
}
m[t.Table] = append(m[t.Table], t.Name)
}
return m
}
func addTables(c *Config, di *psql.DBInfo) error {
for _, t := range c.Tables {
if t.Table == "" || len(t.Columns) == 0 {
continue
}
if err := addTable(di, t.Columns, t); err != nil {
return err
}
}
return nil
}
func addTable(di *psql.DBInfo, cols []Column, t Table) error {
bc, ok := di.GetColumn(t.Table, t.Name)
if !ok {
return fmt.Errorf(
"Column '%s' not found on table '%s'",
t.Name, t.Table)
}
if bc.Type != "json" && bc.Type != "jsonb" {
return fmt.Errorf(
"Column '%s' in table '%s' is of type '%s'. Only JSON or JSONB is valid",
t.Name, t.Table, bc.Type)
}
table := psql.DBTable{
Name: t.Name,
Key: strings.ToLower(t.Name),
Type: bc.Type,
}
columns := make([]psql.DBColumn, 0, len(cols))
for i := range cols {
c := cols[i]
columns = append(columns, psql.DBColumn{
Name: c.Name,
Key: strings.ToLower(c.Name),
Type: c.Type,
})
}
di.AddTable(table, columns)
bc.FKeyTable = t.Name
return nil
}
func addForeignKeys(c *Config, di *psql.DBInfo) error {
for _, t := range c.Tables {
for _, c := range t.Columns {
if c.ForeignKey == "" {
continue
}
if err := addForeignKey(di, c, t); err != nil {
return err
}
}
}
return nil
}
func addForeignKey(di *psql.DBInfo, c Column, t Table) error {
c1, ok := di.GetColumn(t.Name, c.Name)
if !ok {
return fmt.Errorf(
"Invalid table '%s' or column '%s' in Config",
t.Name, c.Name)
}
v := strings.SplitN(c.ForeignKey, ".", 2)
if len(v) != 2 {
return fmt.Errorf(
"Invalid foreign_key in Config for table '%s' and column '%s",
t.Name, c.Name)
}
fkt, fkc := v[0], v[1]
c2, ok := di.GetColumn(fkt, fkc)
if !ok {
return fmt.Errorf(
"Invalid foreign_key in Config for table '%s' and column '%s",
t.Name, c.Name)
}
c1.FKeyTable = fkt
c1.FKeyColID = []int16{c2.ID}
return nil
}
func addRoles(c *Config, qc *qcode.Compiler) error {
for _, r := range c.Roles {
for _, t := range r.Tables {
if err := addRole(qc, r, t, c.DefaultAllow); err != nil {
return err
}
}
}
return nil
}
func addRole(qc *qcode.Compiler, r Role, t RoleTable, defaultAllow bool) error {
ro := true // read-only
if defaultAllow {
ro = false
}
if r.Name != "anon" {
ro = false
}
if t.ReadOnly != nil {
ro = *t.ReadOnly
}
blocked := struct {
query bool
insert bool
update bool
delete bool
}{false, ro, ro, ro}
if t.Query.Block != nil {
blocked.query = *t.Query.Block
}
if t.Insert.Block != nil {
blocked.insert = *t.Insert.Block
}
if t.Update.Block != nil {
blocked.update = *t.Update.Block
}
if t.Delete.Block != nil {
blocked.delete = *t.Delete.Block
}
query := qcode.QueryConfig{
Limit: t.Query.Limit,
Filters: t.Query.Filters,
Columns: t.Query.Columns,
DisableFunctions: t.Query.DisableFunctions,
Block: blocked.query,
}
insert := qcode.InsertConfig{
Filters: t.Insert.Filters,
Columns: t.Insert.Columns,
Presets: t.Insert.Presets,
Block: blocked.insert,
}
update := qcode.UpdateConfig{
Filters: t.Update.Filters,
Columns: t.Update.Columns,
Presets: t.Update.Presets,
Block: blocked.update,
}
del := qcode.DeleteConfig{
Filters: t.Delete.Filters,
Columns: t.Delete.Columns,
Block: blocked.delete,
}
return qc.AddRole(r.Name, t.Name, qcode.TRConfig{
Query: query,
Insert: insert,
Update: update,
Delete: del,
})
}
func (r *Role) GetTable(name string) *RoleTable {
return r.tm[name]
}
func sanitize(value string) string {
return strings.ToLower(strings.TrimSpace(value))
}
var (
varRe1 = regexp.MustCompile(`(?mi)\$([a-zA-Z0-9_.]+)`)
varRe2 = regexp.MustCompile(`\{\{([a-zA-Z0-9_.]+)\}\}`)
)
func sanitizeVars(s string) string {
s0 := varRe1.ReplaceAllString(s, `{{$1}}`)
s1 := strings.Map(func(r rune) rune {
if unicode.IsSpace(r) {
return ' '
}
return r
}, s0)
return varRe2.ReplaceAllStringFunc(s1, func(m string) string {
return strings.ToLower(m)
})
}
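
To make the variable sanitization concrete, a sketch of what sanitizeVars does to a roles_query-style string (the input is an illustrative assumption; the function is unexported, so this only runs inside the core package):

    // varRe1 rewrites `$User_ID` to `{{User_ID}}`, each whitespace rune
    // is then mapped to a single space, and varRe2 lower-cases the tag.
    s := sanitizeVars("SELECT * FROM users WHERE id = $User_ID")
    fmt.Println(s) // SELECT * FROM users WHERE id = {{user_id}}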


@@ -7,9 +7,11 @@ import (
     "fmt"
     "io/ioutil"
     "os"
-    "path"
     "sort"
     "strings"
+
+    "github.com/chirino/graphql/schema"
+    "github.com/dosco/super-graph/jsn"
 )

 const (

@@ -35,11 +37,11 @@ type Config struct {
     Persist bool
 }

-func New(cpath string, conf Config) (*List, error) {
+func New(filename string, conf Config) (*List, error) {
     al := List{}
-    if len(cpath) != 0 {
-        fp := path.Join(cpath, "allow.list")
+    if filename != "" {
+        fp := filename
         if _, err := os.Stat(fp); err == nil {
             al.filepath = fp

@@ -48,7 +50,7 @@ func New(cpath string, conf Config) (*List, error) {
         }
     }
-    if len(al.filepath) == 0 {
+    if al.filepath == "" {
         fp := "./allow.list"
         if _, err := os.Stat(fp); err == nil {

@@ -58,7 +60,7 @@ func New(cpath string, conf Config) (*List, error) {
         }
     }
-    if len(al.filepath) == 0 {
+    if al.filepath == "" {
         fp := "./config/allow.list"
         if _, err := os.Stat(fp); err == nil {

@@ -68,15 +70,15 @@ func New(cpath string, conf Config) (*List, error) {
         }
     }
-    if len(al.filepath) == 0 {
+    if al.filepath == "" {
         if !conf.CreateIfNotExists {
             return nil, errors.New("allow.list not found")
         }
-        if len(cpath) == 0 {
+        if filename == "" {
             al.filepath = "./config/allow.list"
         } else {
-            al.filepath = path.Join(cpath, "allow.list")
+            al.filepath = filename
         }
     }

@@ -110,7 +112,7 @@ func (al *List) Set(vars []byte, query, comment string) error {
         return errors.New("allow.list is read-only")
     }
-    if len(query) == 0 {
+    if query == "" {
         return errors.New("empty query")
     }

@@ -139,6 +141,7 @@ func (al *List) Set(vars []byte, query, comment string) error {
 func (al *List) Load() ([]Item, error) {
     var list []Item
+    varString := "variables"
     b, err := ioutil.ReadFile(al.filepath)
     if err != nil {

@@ -179,9 +182,9 @@ func (al *List) Load() ([]Item, error) {
                 s = e
             }
             ty = AL_QUERY
-        } else if matchPrefix(b, e, "variables") {
+        } else if matchPrefix(b, e, varString) {
             if c == 0 {
-                s = e + len("variables") + 1
+                s = e + len(varString) + 1
             }
             ty = AL_VARS
         } else if b[e] == '{' {

@@ -231,10 +234,26 @@ func (al *List) Load() ([]Item, error) {
 }

 func (al *List) save(item Item) error {
-    item.Name = QueryName(item.Query)
+    var buf bytes.Buffer
+
+    qd := &schema.QueryDocument{}
+    if err := qd.Parse(item.Query); err != nil {
+        fmt.Println("##", item.Query)
+        return err
+    }
+
+    qd.WriteTo(&buf)
+    query := buf.String()
+    buf.Reset()
+
+    // fmt.Println(">", query)
+
+    item.Name = QueryName(query)
     item.key = strings.ToLower(item.Name)
-    if len(item.Name) == 0 {
+    if item.Name == "" {
         return nil
     }

@@ -253,7 +272,7 @@ func (al *List) save(item Item) error {
     }
     if index != -1 {
-        if len(list[index].Comment) != 0 {
+        if list[index].Comment != "" {
             item.Comment = list[index].Comment
         }
         list[index] = item

@@ -277,7 +296,7 @@ func (al *List) save(item Item) error {
     i := 0
     for _, c := range cmtLines {
-        if c = strings.TrimSpace(c); len(c) == 0 {
+        if c = strings.TrimSpace(c); c == "" {
             continue
         }

@@ -299,9 +318,16 @@ func (al *List) save(item Item) error {
         }
         if len(v.Vars) != 0 && !bytes.Equal(v.Vars, []byte("{}")) {
-            vj, err := json.MarshalIndent(v.Vars, "", " ")
+            buf.Reset()
+
+            if err := jsn.Clear(&buf, v.Vars); err != nil {
+                return fmt.Errorf("failed to clean vars: %w", err)
+            }
+            vj := json.RawMessage(buf.Bytes())
+
+            vj, err = json.MarshalIndent(vj, "", " ")
             if err != nil {
-                return fmt.Errorf("failed to marshal vars: %v", err)
+                return fmt.Errorf("failed to marshal vars: %w", err)
             }
             _, err = f.WriteString(fmt.Sprintf("variables %s\n\n", vj))
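
For orientation, the writer above produces entries shaped roughly like the following: a `variables { ... }` block followed by the re-formatted GraphQL query, separated by blank lines (an illustrative sketch inferred from the WriteString calls, not copied from a real allow.list):

    variables {
      "id": 5
    }

    query getUser {
      users(id: $id) {
        id
      }
    }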

@@ -0,0 +1,88 @@
package cockraochdb_test
import (
"database/sql"
"fmt"
"io/ioutil"
"log"
"os"
"os/exec"
"regexp"
"sync/atomic"
"testing"
integration_tests "github.com/dosco/super-graph/core/internal/integration_tests"
_ "github.com/jackc/pgx/v4/stdlib"
"github.com/stretchr/testify/require"
)
func TestCockroachDB(t *testing.T) {
dir, err := ioutil.TempDir("", "temp-cockraochdb-")
if err != nil {
log.Fatal(err)
}
cmd := exec.Command("cockroach", "start", "--insecure", "--listen-addr", ":0", "--http-addr", ":0", "--store=path="+dir)
finder := &urlFinder{
c: make(chan bool),
}
cmd.Stdout = finder
cmd.Stderr = ioutil.Discard
err = cmd.Start()
if err != nil {
t.Skip("is CockroachDB installed?: " + err.Error())
return
}
fmt.Println("started temporary cockroach db")
stopped := int32(0)
stopDatabase := func() {
fmt.Println("stopping temporary cockroach db")
if atomic.CompareAndSwapInt32(&stopped, 0, 1) {
if err := cmd.Process.Kill(); err != nil {
log.Fatal(err)
}
if _, err := cmd.Process.Wait(); err != nil {
log.Fatal(err)
}
os.RemoveAll(dir)
}
}
defer stopDatabase()
// Wait till we figure out the URL we should connect to...
<-finder.c
db, err := sql.Open("pgx", finder.URL)
if err != nil {
stopDatabase()
require.NoError(t, err)
}
integration_tests.SetupSchema(t, db)
integration_tests.TestSuperGraph(t, db, func(t *testing.T) {
if t.Name() == "TestCockroachDB/nested_insert" {
t.Skip("nested inserts currently not working yet on cockroach db")
}
})
}
type urlFinder struct {
c chan bool
done bool
URL string
}
func (finder *urlFinder) Write(p []byte) (n int, err error) {
s := string(p)
urlRegex := regexp.MustCompile(`\nsql:\s+(postgresql:[^\s]+)\n`)
if !finder.done {
submatch := urlRegex.FindAllStringSubmatch(s, -1)
if submatch != nil {
finder.URL = submatch[0][1]
finder.done = true
close(finder.c)
}
}
return len(p), nil
}

@@ -0,0 +1,262 @@
package integration_tests
import (
"context"
"database/sql"
"encoding/json"
"io/ioutil"
"testing"
"github.com/dosco/super-graph/core"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func SetupSchema(t *testing.T, db *sql.DB) {
_, err := db.Exec(`
CREATE TABLE users (
id integer PRIMARY KEY,
full_name text
)`)
require.NoError(t, err)
_, err = db.Exec(`CREATE TABLE product (
id integer PRIMARY KEY,
name text,
weight float
)`)
require.NoError(t, err)
_, err = db.Exec(`CREATE TABLE line_item (
id integer PRIMARY KEY,
product integer REFERENCES product(id),
quantity integer,
price float
)`)
require.NoError(t, err)
}
func DropSchema(t *testing.T, db *sql.DB) {
_, err := db.Exec(`DROP TABLE IF EXISTS line_item`)
require.NoError(t, err)
_, err = db.Exec(`DROP TABLE IF EXISTS product`)
require.NoError(t, err)
_, err = db.Exec(`DROP TABLE IF EXISTS users`)
require.NoError(t, err)
}
func TestSuperGraph(t *testing.T, db *sql.DB, before func(t *testing.T)) {
config := core.Config{}
config.UseAllowList = false
config.AllowListFile = "./allow.list"
config.RolesQuery = `SELECT * FROM users WHERE id = $user_id`
blockFalse := false
config.Roles = []core.Role{
core.Role{
Name: "anon",
Tables: []core.RoleTable{
core.RoleTable{Name: "users", ReadOnly: &blockFalse, Query: core.Query{Limit: 100}},
core.RoleTable{Name: "product", ReadOnly: &blockFalse, Query: core.Query{Limit: 100}},
core.RoleTable{Name: "line_item", ReadOnly: &blockFalse, Query: core.Query{Limit: 100}},
},
},
}
sg, err := core.NewSuperGraph(&config, db)
require.NoError(t, err)
ctx := context.Background()
t.Run("seed fixtures", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`mutation { products (insert: $products) { id } }`,
json.RawMessage(`{"products":[
{"id":1, "name":"Charmin Ultra Soft", "weight": 0.5},
{"id":2, "name":"Hand Sanitizer", "weight": 0.2},
{"id":3, "name":"Case of Corona", "weight": 1.2}
]}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"products": [{"id": 1}, {"id": 2}, {"id": 3}]}`, string(res.Data))
res, err = sg.GraphQL(ctx,
`mutation { line_items (insert: $line_items) { id } }`,
json.RawMessage(`{"line_items":[
{"id":5001, "product":1, "price":6.95, "quantity":10},
{"id":5002, "product":2, "price":10.99, "quantity":2}
]}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_items": [{"id": 5001}, {"id": 5002}]}`, string(res.Data))
})
t.Run("get line item", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`query { line_item(id:$id) { id, price, quantity } }`,
json.RawMessage(`{"id":5001}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_item": {"id": 5001, "price": 6.95, "quantity": 10}}`, string(res.Data))
})
t.Run("get line items", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`query { line_items { id, price, quantity } }`,
json.RawMessage(`{}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_items": [{"id": 5001, "price": 6.95, "quantity": 10}, {"id": 5002, "price": 10.99, "quantity": 2}]}`, string(res.Data))
})
t.Run("update line item", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`mutation { line_item(update:$update, id:$id) { id } }`,
json.RawMessage(`{"id":5001, "update":{"quantity":20}}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_item": {"id": 5001}}`, string(res.Data))
res, err = sg.GraphQL(ctx,
`query { line_item(id:$id) { id, price, quantity } }`,
json.RawMessage(`{"id":5001}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_item": {"id": 5001, "price": 6.95, "quantity": 20}}`, string(res.Data))
})
t.Run("delete line item", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`mutation { line_item(delete:true, id:$id) { id } }`,
json.RawMessage(`{"id":5002}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_item": {"id": 5002}}`, string(res.Data))
res, err = sg.GraphQL(ctx,
`query { line_items { id, price, quantity } }`,
json.RawMessage(`{}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_items": [{"id": 5001, "price": 6.95, "quantity": 20}]}`, string(res.Data))
})
t.Run("nested insert", func(t *testing.T) {
before(t)
res, err := sg.GraphQL(ctx,
`mutation { line_items (insert: $line_item) { id, product { name } } }`,
json.RawMessage(`{"line_item":
{"id":5003, "product": { "connect": { "id": 1} }, "price":10.95, "quantity":15}
}`))
require.NoError(t, err, res.SQL())
require.Equal(t, `{"line_items": [{"id": 5003, "product": {"name": "Charmin Ultra Soft"}}]}`, string(res.Data))
})
t.Run("schema introspection", func(t *testing.T) {
before(t)
schema, err := sg.GraphQLSchema()
require.NoError(t, err)
// Uncomment the following line if you need to regenerate the expected schema.
//ioutil.WriteFile("../introspection.graphql", []byte(schema), 0644)
expected, err := ioutil.ReadFile("../introspection.graphql")
require.NoError(t, err)
assert.Equal(t, string(expected), schema)
})
res, err := sg.GraphQL(ctx, introspectionQuery, json.RawMessage(``))
assert.NoError(t, err)
assert.Contains(t, string(res.Data),
`{"queryType":{"name":"Query"},"mutationType":{"name":"Mutation"},"subscriptionType":null,"types":`)
}
const introspectionQuery = `
query IntrospectionQuery {
__schema {
queryType { name }
mutationType { name }
subscriptionType { name }
types {
...FullType
}
directives {
name
description
locations
args {
...InputValue
}
}
}
}
fragment FullType on __Type {
kind
name
description
fields(includeDeprecated: true) {
name
description
args {
...InputValue
}
type {
...TypeRef
}
isDeprecated
deprecationReason
}
inputFields {
...InputValue
}
interfaces {
...TypeRef
}
enumValues(includeDeprecated: true) {
name
description
isDeprecated
deprecationReason
}
possibleTypes {
...TypeRef
}
}
fragment InputValue on __InputValue {
name
description
type { ...TypeRef }
defaultValue
}
fragment TypeRef on __Type {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
}
}
}
}
}
}
}
}
`

@@ -0,0 +1,319 @@
input FloatExpression {
contained_in:String!
contains:[Float!]!
eq:Float!
equals:Float!
greater_or_equals:Float!
greater_than:Float!
gt:Float!
gte:Float!
has_key:Float!
has_key_all:[Float!]!
has_key_any:[Float!]!
ilike:String!
in:[Float!]!
is_null:Boolean!
lesser_or_equals:Float!
lesser_than:Float!
like:String!
lt:Float!
lte:Float!
neq:Float!
nilike:String!
nin:[Float!]!
nlike:String!
not_equals:Float!
not_ilike:String!
not_in:[Float!]!
not_like:String!
not_similar:String!
nsimilar:String!
similar:String!
}
input IntExpression {
contained_in:String!
contains:[Int!]!
eq:Int!
equals:Int!
greater_or_equals:Int!
greater_than:Int!
gt:Int!
gte:Int!
has_key:Int!
has_key_all:[Int!]!
has_key_any:[Int!]!
ilike:String!
in:[Int!]!
is_null:Boolean!
lesser_or_equals:Int!
lesser_than:Int!
like:String!
lt:Int!
lte:Int!
neq:Int!
nilike:String!
nin:[Int!]!
nlike:String!
not_equals:Int!
not_ilike:String!
not_in:[Int!]!
not_like:String!
not_similar:String!
nsimilar:String!
similar:String!
}
type Mutation {
line_item(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:line_itemOrderBy!, where:line_itemExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:line_itemInput, update:line_itemInput, upsert:line_itemInput
):line_itemOutput
line_items(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:line_itemOrderBy!, where:line_itemExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:line_itemInput, update:line_itemInput, upsert:line_itemInput, inserts:[line_itemInput!]!, updates:[line_itemInput!]!, upserts:[line_itemInput!]!
):line_itemOutput
product(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:productOrderBy!, where:productExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:productInput, update:productInput, upsert:productInput
):productOutput
products(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:productOrderBy!, where:productExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:productInput, update:productInput, upsert:productInput, inserts:[productInput!]!, updates:[productInput!]!, upserts:[productInput!]!
):productOutput
user(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:userOrderBy!, where:userExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:userInput, update:userInput, upsert:userInput
):userOutput
users(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:userOrderBy!, where:userExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!, insert:userInput, update:userInput, upsert:userInput, inserts:[userInput!]!, updates:[userInput!]!, upserts:[userInput!]!
):userOutput
}
enum OrderDirection {
asc
desc
}
type Query {
line_item(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:line_itemOrderBy!, where:line_itemExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):line_itemOutput
line_items(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:line_itemOrderBy!, where:line_itemExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):[line_itemOutput!]!
product(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:productOrderBy!, where:productExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):productOutput
products(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:productOrderBy!, where:productExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):[productOutput!]!
user(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:userOrderBy!, where:userExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):userOutput
users(
"To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."
order_by:userOrderBy!, where:userExpression!, limit:Int!, offset:Int!, first:Int!, last:Int!, before:String, after:String,
"Finds the record by the primary key"
id:Int!
):[userOutput!]!
}
input StringExpression {
contained_in:String!
contains:[String!]!
eq:String!
equals:String!
greater_or_equals:String!
greater_than:String!
gt:String!
gte:String!
has_key:String!
has_key_all:[String!]!
has_key_any:[String!]!
ilike:String!
in:[String!]!
is_null:Boolean!
lesser_or_equals:String!
lesser_than:String!
like:String!
lt:String!
lte:String!
neq:String!
nilike:String!
nin:[String!]!
nlike:String!
not_equals:String!
not_ilike:String!
not_in:[String!]!
not_like:String!
not_similar:String!
nsimilar:String!
similar:String!
}
input line_itemExpression {
and:line_itemExpression!
id:IntExpression!
not:line_itemExpression!
or:line_itemExpression!
price:FloatExpression!
product:IntExpression!
quantity:IntExpression!
}
input line_itemInput {
id:Int!
price:Float
product:Int
quantity:Int
}
input line_itemOrderBy {
id:OrderDirection!
price:OrderDirection!
product:OrderDirection!
quantity:OrderDirection!
}
type line_itemOutput {
avg_id:Int!
avg_price:Float
avg_product:Int
avg_quantity:Int
count_id:Int!
count_price:Float
count_product:Int
count_quantity:Int
id:Int!
max_id:Int!
max_price:Float
max_product:Int
max_quantity:Int
min_id:Int!
min_price:Float
min_product:Int
min_quantity:Int
price:Float
product:Int
quantity:Int
stddev_id:Int!
stddev_pop_id:Int!
stddev_pop_price:Float
stddev_pop_product:Int
stddev_pop_quantity:Int
stddev_price:Float
stddev_product:Int
stddev_quantity:Int
stddev_samp_id:Int!
stddev_samp_price:Float
stddev_samp_product:Int
stddev_samp_quantity:Int
var_pop_id:Int!
var_pop_price:Float
var_pop_product:Int
var_pop_quantity:Int
var_samp_id:Int!
var_samp_price:Float
var_samp_product:Int
var_samp_quantity:Int
variance_id:Int!
variance_price:Float
variance_product:Int
variance_quantity:Int
}
input productExpression {
and:productExpression!
id:IntExpression!
name:StringExpression!
not:productExpression!
or:productExpression!
weight:FloatExpression!
}
input productInput {
id:Int!
name:String
weight:Float
}
input productOrderBy {
id:OrderDirection!
name:OrderDirection!
weight:OrderDirection!
}
type productOutput {
avg_id:Int!
avg_weight:Float
count_id:Int!
count_weight:Float
id:Int!
max_id:Int!
max_weight:Float
min_id:Int!
min_weight:Float
name:String
stddev_id:Int!
stddev_pop_id:Int!
stddev_pop_weight:Float
stddev_samp_id:Int!
stddev_samp_weight:Float
stddev_weight:Float
var_pop_id:Int!
var_pop_weight:Float
var_samp_id:Int!
var_samp_weight:Float
variance_id:Int!
variance_weight:Float
weight:Float
}
input userExpression {
and:userExpression!
full_name:StringExpression!
id:IntExpression!
not:userExpression!
or:userExpression!
}
input userInput {
full_name:String
id:Int!
}
input userOrderBy {
full_name:OrderDirection!
id:OrderDirection!
}
type userOutput {
avg_id:Int!
count_id:Int!
full_name:String
id:Int!
max_id:Int!
min_id:Int!
stddev_id:Int!
stddev_pop_id:Int!
stddev_samp_id:Int!
var_pop_id:Int!
var_samp_id:Int!
variance_id:Int!
}
schema {
mutation: Mutation
query: Query
}

@@ -0,0 +1,27 @@
package cockraochdb_test
import (
"database/sql"
"os"
"testing"
integration_tests "github.com/dosco/super-graph/core/internal/integration_tests"
_ "github.com/jackc/pgx/v4/stdlib"
"github.com/stretchr/testify/require"
)
func TestCockroachDB(t *testing.T) {
url, found := os.LookupEnv("SG_POSTGRESQL_TEST_URL")
if !found {
t.Skip("set the SG_POSTGRESQL_TEST_URL env variable if you want to run integration tests against a PostgreSQL database")
} else {
db, err := sql.Open("pgx", url)
require.NoError(t, err)
integration_tests.DropSchema(t, db)
integration_tests.SetupSchema(t, db)
integration_tests.TestSuperGraph(t, db, func(t *testing.T) {
})
}
}


@@ -6,7 +6,7 @@ import (
     "io"
     "strings"

-    "github.com/dosco/super-graph/qcode"
+    "github.com/dosco/super-graph/core/internal/qcode"
 )

 func (c *compilerContext) renderBaseColumns(

@@ -35,33 +35,37 @@ func (c *compilerContext) renderBaseColumns(
             c.renderComma(i)
             realColsRendered = append(realColsRendered, n)
             colWithTable(c.w, ti.Name, cn)
-            i++
-            continue
-        }
-
-        if isSearch && !isRealCol {
+        } else {
             switch {
-            case cn == "search_rank":
+            case isSearch && cn == "search_rank":
                 if err := c.renderColumnSearchRank(sel, ti, col, i); err != nil {
                     return nil, false, err
                 }
-                i++

-            case strings.HasPrefix(cn, "search_headline_"):
+            case isSearch && strings.HasPrefix(cn, "search_headline_"):
                 if err := c.renderColumnSearchHeadline(sel, ti, col, i); err != nil {
                     return nil, false, err
                 }
-                i++
-            }
-        } else {
-            if err := c.renderColumnFunction(sel, ti, col, i); err != nil {
-                return nil, false, err
-            }
-            isAgg = true
-            i++
+
+            case cn == "__typename":
+                if err := c.renderColumnTypename(sel, ti, col, i); err != nil {
+                    return nil, false, err
+                }
+
+            case strings.HasSuffix(cn, "_cursor"):
+                continue
+
+            default:
+                if err := c.renderColumnFunction(sel, ti, col, i); err != nil {
+                    return nil, false, err
+                }
+                isAgg = true
+            }
         }
+        i++
     }

     if isCursorPaged {

@@ -148,8 +152,22 @@ func (c *compilerContext) renderColumnSearchHeadline(sel *qcode.Select, ti *DBTa
     return nil
 }

+func (c *compilerContext) renderColumnTypename(sel *qcode.Select, ti *DBTableInfo, col qcode.Column, columnsRendered int) error {
+    if isColumnBlocked(sel, col.Name) {
+        return nil
+    }
+
+    c.renderComma(columnsRendered)
+    io.WriteString(c.w, `(`)
+    squoted(c.w, ti.Name)
+    io.WriteString(c.w, ` :: text)`)
+    alias(c.w, col.Name)
+
+    return nil
+}
+
 func (c *compilerContext) renderColumnFunction(sel *qcode.Select, ti *DBTableInfo, col qcode.Column, columnsRendered int) error {
-    pl := funcPrefixLen(col.Name)
+    pl := funcPrefixLen(c.schema.fm, col.Name)
     // if pl == 0 {
     // //fmt.Fprintf(w, `'%s not defined' AS %s`, cn, col.Name)
     // io.WriteString(c.w, `'`)

@@ -168,7 +186,7 @@ func (c *compilerContext) renderColumnFunction(sel *qcode.Select, ti *DBTableInf
         return nil
     }

-    fn := cn[0 : pl-1]
+    fn := col.Name[:pl-1]
     c.renderComma(columnsRendered)


@@ -4,13 +4,13 @@ package psql
 import (
     "encoding/json"

-    "github.com/dosco/super-graph/qcode"
+    "github.com/dosco/super-graph/core/internal/qcode"
 )

 var (
     qcompileTest, _ = qcode.NewCompiler(qcode.Config{})
-    schema = getTestSchema()
+    schema = GetTestSchema()
     vars = NewVariables(map[string]string{
         "admin_account_id": "5",


@@ -6,8 +6,8 @@ import (
     "fmt"
     "io"

-    "github.com/dosco/super-graph/qcode"
-    "github.com/dosco/super-graph/util"
+    "github.com/dosco/super-graph/core/internal/qcode"
+    "github.com/dosco/super-graph/core/internal/util"
 )

 func (c *compilerContext) renderInsert(qc *qcode.QCode, w io.Writer,

@@ -21,9 +21,17 @@ func (c *compilerContext) renderInsert(qc *qcode.QCode, w io.Writer,
         return 0, fmt.Errorf("variable '%s' is empty", qc.ActionVar)
     }

-    io.WriteString(c.w, `WITH "_sg_input" AS (SELECT '{{`)
+    io.WriteString(c.w, `WITH "_sg_input" AS (SELECT `)
+    if insert[0] == '[' {
+        io.WriteString(c.w, `json_array_elements(`)
+    }
+    io.WriteString(c.w, `'{{`)
     io.WriteString(c.w, qc.ActionVar)
-    io.WriteString(c.w, `}}' :: json AS j)`)
+    io.WriteString(c.w, `}}' :: json`)
+    if insert[0] == '[' {
+        io.WriteString(c.w, `)`)
+    }
+    io.WriteString(c.w, ` AS j)`)

     st := util.NewStack()
     st.Push(kvitem{_type: itemInsert, key: ti.Name, val: insert, ti: ti})

@@ -90,26 +98,9 @@ func (c *compilerContext) renderInsertStmt(qc *qcode.QCode, w io.Writer, item re
     renderInsertUpdateColumns(w, qc, jt, ti, sk, true)
     renderNestedInsertRelColumns(w, item.kvitem, true)

-    io.WriteString(w, ` FROM "_sg_input" i, `)
+    io.WriteString(w, ` FROM "_sg_input" i`)
     renderNestedInsertRelTables(w, item.kvitem)
-
-    if item.array {
-        io.WriteString(w, `json_populate_recordset`)
-    } else {
-        io.WriteString(w, `json_populate_record`)
-    }
-
-    io.WriteString(w, `(NULL::`)
-    io.WriteString(w, ti.Name)
-
-    if len(item.path) == 0 {
-        io.WriteString(w, `, i.j) t RETURNING *)`)
-    } else {
-        io.WriteString(w, `, i.j->`)
-        joinPath(w, item.path)
-        io.WriteString(w, `) t RETURNING *)`)
-    }
+    io.WriteString(w, ` RETURNING *)`)

     return nil
 }

@@ -172,21 +163,21 @@ func renderNestedInsertRelColumns(w io.Writer, item kvitem, values bool) error {
 func renderNestedInsertRelTables(w io.Writer, item kvitem) error {
     if len(item.items) == 0 {
         if item.relPC != nil && item.relPC.Type == RelOneToMany {
-            quoted(w, item.relPC.Left.Table)
             io.WriteString(w, `, `)
+            quoted(w, item.relPC.Left.Table)
         }
     } else {
         // Render tables needed to set values if child-to-parent
         // relationship is one-to-many
         for _, v := range item.items {
             if v.relCP.Type == RelOneToMany {
+                io.WriteString(w, `, `)
                 if v._ctype > 0 {
                     io.WriteString(w, `"_x_`)
                     io.WriteString(w, v.relCP.Left.Table)
-                    io.WriteString(w, `", `)
+                    io.WriteString(w, `"`)
                 } else {
                     quoted(w, v.relCP.Left.Table)
-                    io.WriteString(w, `, `)
                 }
             }
         }

View File

@ -1,4 +1,4 @@
package psql package psql_test
import ( import (
"encoding/json" "encoding/json"

View File

@ -7,9 +7,9 @@ import (
"fmt" "fmt"
"io" "io"
"github.com/dosco/super-graph/core/internal/qcode"
"github.com/dosco/super-graph/core/internal/util"
"github.com/dosco/super-graph/jsn" "github.com/dosco/super-graph/jsn"
"github.com/dosco/super-graph/qcode"
"github.com/dosco/super-graph/util"
) )
type itemType int type itemType int
@ -396,7 +396,12 @@ func renderInsertUpdateColumns(w io.Writer,
} }
if values { if values {
colWithTable(w, "t", cn.Name) io.WriteString(w, `CAST( i.j ->>`)
io.WriteString(w, `'`)
io.WriteString(w, cn.Name)
io.WriteString(w, `' AS `)
io.WriteString(w, cn.Type)
io.WriteString(w, `)`)
} else { } else {
quoted(w, cn.Name) quoted(w, cn.Name)
} }

View File

@ -1,4 +1,4 @@
package psql package psql_test
import ( import (
"encoding/json" "encoding/json"

View File

@ -1,4 +1,4 @@
package psql package psql_test
import ( import (
"fmt" "fmt"
@ -8,7 +8,8 @@ import (
"strings" "strings"
"testing" "testing"
"github.com/dosco/super-graph/qcode" "github.com/dosco/super-graph/core/internal/psql"
"github.com/dosco/super-graph/core/internal/qcode"
) )
const ( const (
@ -19,7 +20,7 @@ const (
var ( var (
qcompile *qcode.Compiler qcompile *qcode.Compiler
pcompile *Compiler pcompile *psql.Compiler
expected map[string][]string expected map[string][]string
) )
@ -133,13 +134,16 @@ func TestMain(m *testing.M) {
log.Fatal(err) log.Fatal(err)
} }
schema := getTestSchema() schema, err := psql.GetTestSchema()
if err != nil {
log.Fatal(err)
}
vars := NewVariables(map[string]string{ vars := psql.NewVariables(map[string]string{
"admin_account_id": "5", "admin_account_id": "5",
}) })
pcompile = NewCompiler(Config{ pcompile = psql.NewCompiler(psql.Config{
Schema: schema, Schema: schema,
Vars: vars, Vars: vars,
}) })
@ -173,7 +177,7 @@ func TestMain(m *testing.M) {
os.Exit(m.Run()) os.Exit(m.Run())
} }
func compileGQLToPSQL(t *testing.T, gql string, vars Variables, role string) { func compileGQLToPSQL(t *testing.T, gql string, vars psql.Variables, role string) {
generateTestFile := false generateTestFile := false
if generateTestFile { if generateTestFile {

View File

@ -9,8 +9,8 @@ import (
"io" "io"
"strings" "strings"
"github.com/dosco/super-graph/qcode" "github.com/dosco/super-graph/core/internal/qcode"
"github.com/dosco/super-graph/util" "github.com/dosco/super-graph/core/internal/util"
) )
const ( const (
@ -86,32 +86,37 @@ func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (
st := NewIntStack() st := NewIntStack()
i := 0 i := 0
io.WriteString(c.w, `SELECT json_build_object(`) io.WriteString(c.w, `SELECT jsonb_build_object(`)
for _, id := range qc.Roots { for _, id := range qc.Roots {
root := &qc.Selects[id]
if root.SkipRender {
continue
}
st.Push(root.ID + closeBlock)
st.Push(root.ID)
if i != 0 { if i != 0 {
io.WriteString(c.w, `, `) io.WriteString(c.w, `, `)
} }
c.renderRootSelect(root) root := &qc.Selects[id]
if root.SkipRender || len(root.Cols) == 0 {
squoted(c.w, root.FieldName)
io.WriteString(c.w, `, `)
io.WriteString(c.w, `NULL`)
} else {
st.Push(root.ID + closeBlock)
st.Push(root.ID)
c.renderRootSelect(root)
}
i++ i++
} }
io.WriteString(c.w, `) as "__root" FROM `)
if i == 0 {
return 0, errors.New("all tables skipped. cannot render query")
}
var ignored uint32 var ignored uint32
if st.Len() != 0 {
io.WriteString(c.w, `) as "__root" FROM `)
} else {
io.WriteString(c.w, `) as "__root"`)
return ignored, nil
}
for { for {
if st.Len() == 0 { if st.Len() == 0 {
break break
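Net effect of the reworked root loop, assuming a query whose only root the current role may not render: the key is kept in the response but bound to NULL, so the JSON shape stays stable, and the statement closes early because nothing was pushed onto the stack:

SELECT jsonb_build_object('users', NULL) as "__root"

And if no roots survive qcode compilation at all, the compiler now returns the "all tables skipped. cannot render query" error instead of emitting broken SQL.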
@ -122,6 +127,10 @@ func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (
if id < closeBlock { if id < closeBlock {
sel := &c.s[id] sel := &c.s[id]
if len(sel.Cols) == 0 {
continue
}
ti, err := c.schema.GetTable(sel.Name) ti, err := c.schema.GetTable(sel.Name)
if err != nil { if err != nil {
return 0, err return 0, err
@ -133,7 +142,7 @@ func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (
c.renderLateralJoin(sel) c.renderLateralJoin(sel)
} }
if !ti.Singular { if !ti.IsSingular {
c.renderPluralSelect(sel, ti) c.renderPluralSelect(sel, ti)
} }
@ -164,15 +173,18 @@ func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (
return 0, err return 0, err
} }
if !ti.Singular { io.WriteString(c.w, `)`)
aliasWithID(c.w, "__sr", sel.ID)
io.WriteString(c.w, `)`)
aliasWithID(c.w, "__sj", sel.ID)
if !ti.IsSingular {
io.WriteString(c.w, `)`) io.WriteString(c.w, `)`)
aliasWithID(c.w, "__sel", sel.ID) aliasWithID(c.w, "__sj", sel.ID)
} }
if sel.ParentID == -1 { if sel.ParentID == -1 {
io.WriteString(c.w, `)`)
aliasWithID(c.w, "__sel", sel.ID)
if st.Len() != 0 { if st.Len() != 0 {
io.WriteString(c.w, `, `) io.WriteString(c.w, `, `)
} }
@ -194,7 +206,7 @@ func (co *Compiler) compileQuery(qc *qcode.QCode, w io.Writer, vars Variables) (
} }
func (c *compilerContext) renderPluralSelect(sel *qcode.Select, ti *DBTableInfo) error { func (c *compilerContext) renderPluralSelect(sel *qcode.Select, ti *DBTableInfo) error {
io.WriteString(c.w, `SELECT coalesce(json_agg("__sel_`) io.WriteString(c.w, `SELECT coalesce(jsonb_agg("__sj_`)
int2string(c.w, sel.ID) int2string(c.w, sel.ID)
io.WriteString(c.w, `"."json"), '[]') as "json"`) io.WriteString(c.w, `"."json"), '[]') as "json"`)
@ -234,7 +246,7 @@ func (c *compilerContext) renderRootSelect(sel *qcode.Select) error {
io.WriteString(c.w, sel.FieldName) io.WriteString(c.w, sel.FieldName)
io.WriteString(c.w, `', `) io.WriteString(c.w, `', `)
io.WriteString(c.w, `"__sel_`) io.WriteString(c.w, `"__sj_`)
int2string(c.w, sel.ID) int2string(c.w, sel.ID)
io.WriteString(c.w, `"."json"`) io.WriteString(c.w, `"."json"`)
@ -243,7 +255,7 @@ func (c *compilerContext) renderRootSelect(sel *qcode.Select) error {
io.WriteString(c.w, sel.FieldName) io.WriteString(c.w, sel.FieldName)
io.WriteString(c.w, `_cursor', `) io.WriteString(c.w, `_cursor', `)
io.WriteString(c.w, `"__sel_`) io.WriteString(c.w, `"__sj_`)
int2string(c.w, sel.ID) int2string(c.w, sel.ID)
io.WriteString(c.w, `"."cursor"`) io.WriteString(c.w, `"."cursor"`)
} }
@ -420,11 +432,38 @@ func (c *compilerContext) renderSelect(sel *qcode.Select, ti *DBTableInfo, vars
} }
// SELECT // SELECT
io.WriteString(c.w, `SELECT json_build_object(`) // io.WriteString(c.w, `SELECT jsonb_build_object(`)
// if err := c.renderColumns(sel, ti, skipped); err != nil {
// return 0, err
// }
io.WriteString(c.w, `SELECT to_jsonb("__sr_`)
int2string(c.w, sel.ID)
io.WriteString(c.w, `".*) `)
if sel.Paging.Type != qcode.PtOffset {
for i := range sel.OrderBy {
io.WriteString(c.w, `- '__cur_`)
int2string(c.w, int32(i))
io.WriteString(c.w, `' `)
}
}
io.WriteString(c.w, `AS "json"`)
if sel.Paging.Type != qcode.PtOffset {
for i := range sel.OrderBy {
io.WriteString(c.w, `, "__cur_`)
int2string(c.w, int32(i))
io.WriteString(c.w, `"`)
}
}
io.WriteString(c.w, `FROM (SELECT `)
if err := c.renderColumns(sel, ti, skipped); err != nil { if err := c.renderColumns(sel, ti, skipped); err != nil {
return 0, err return 0, err
} }
io.WriteString(c.w, `) AS "json"`)
if sel.Paging.Type != qcode.PtOffset { if sel.Paging.Type != qcode.PtOffset {
for i, ob := range sel.OrderBy { for i, ob := range sel.OrderBy {
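Reconstructed shape of what the to_jsonb rewrite emits for a cursor-paginated select with one ORDER BY column (spacing normalized; "__cur_0" is the compiler's internal cursor key): the key is subtracted from the row JSON but still selected, so the enclosing query can build the field's cursor.

SELECT to_jsonb("__sr_0".*) - '__cur_0' AS "json", "__cur_0" FROM (SELECT ...) AS "__sr_0"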
@ -459,8 +498,8 @@ func (c *compilerContext) renderLateralJoin(sel *qcode.Select) error {
} }
func (c *compilerContext) renderLateralJoinClose(sel *qcode.Select) error { func (c *compilerContext) renderLateralJoinClose(sel *qcode.Select) error {
io.WriteString(c.w, `) `) // io.WriteString(c.w, `) `)
aliasWithID(c.w, "__sel", sel.ID) // aliasWithID(c.w, "__sj", sel.ID)
io.WriteString(c.w, ` ON ('true')`) io.WriteString(c.w, ` ON ('true')`)
return nil return nil
} }
@ -502,22 +541,25 @@ func (c *compilerContext) renderJoinByName(table, parent string, id int32) error
func (c *compilerContext) renderColumns(sel *qcode.Select, ti *DBTableInfo, skipped uint32) error { func (c *compilerContext) renderColumns(sel *qcode.Select, ti *DBTableInfo, skipped uint32) error {
i := 0 i := 0
var cn string
for _, col := range sel.Cols { for _, col := range sel.Cols {
n := funcPrefixLen(col.Name) if n := funcPrefixLen(c.schema.fm, col.Name); n != 0 {
if n != 0 {
if !sel.Functions { if !sel.Functions {
continue continue
} }
if len(sel.Allowed) != 0 { cn = col.Name[n:]
if _, ok := sel.Allowed[col.Name[n:]]; !ok {
continue
}
}
} else { } else {
if len(sel.Allowed) != 0 { cn = col.Name
if _, ok := sel.Allowed[col.Name]; !ok {
continue if strings.HasSuffix(cn, "_cursor") {
} continue
}
}
if len(sel.Allowed) != 0 {
if _, ok := sel.Allowed[cn]; !ok {
continue
} }
} }
@ -525,9 +567,8 @@ func (c *compilerContext) renderColumns(sel *qcode.Select, ti *DBTableInfo, skip
io.WriteString(c.w, ", ") io.WriteString(c.w, ", ")
} }
squoted(c.w, col.FieldName)
io.WriteString(c.w, ", ")
colWithTableID(c.w, ti.Name, sel.ID, col.Name) colWithTableID(c.w, ti.Name, sel.ID, col.Name)
alias(c.w, col.FieldName)
i++ i++
} }
@ -551,9 +592,8 @@ func (c *compilerContext) renderRemoteRelColumns(sel *qcode.Select, ti *DBTableI
io.WriteString(c.w, ", ") io.WriteString(c.w, ", ")
} }
squoted(c.w, rel.Right.Col)
io.WriteString(c.w, ", ")
colWithTableID(c.w, ti.Name, sel.ID, rel.Left.Col) colWithTableID(c.w, ti.Name, sel.ID, rel.Left.Col)
alias(c.w, rel.Right.Col)
i++ i++
} }
@ -569,26 +609,28 @@ func (c *compilerContext) renderJoinColumns(sel *qcode.Select, ti *DBTableInfo,
continue continue
} }
childSel := &c.s[id] childSel := &c.s[id]
if childSel.SkipRender {
continue
}
if i != 0 { if i != 0 {
io.WriteString(c.w, ", ") io.WriteString(c.w, ", ")
} }
squoted(c.w, childSel.FieldName) if childSel.SkipRender {
io.WriteString(c.w, `NULL`)
alias(c.w, childSel.FieldName)
continue
}
io.WriteString(c.w, `, "__sel_`) io.WriteString(c.w, `"__sj_`)
int2string(c.w, childSel.ID) int2string(c.w, childSel.ID)
io.WriteString(c.w, `"."json"`) io.WriteString(c.w, `"."json"`)
alias(c.w, childSel.FieldName)
if childSel.Paging.Type != qcode.PtOffset { if childSel.Paging.Type != qcode.PtOffset {
io.WriteString(c.w, `, '`) io.WriteString(c.w, `, "__sj_`)
io.WriteString(c.w, childSel.FieldName)
io.WriteString(c.w, `_cursor', "__sel_`)
int2string(c.w, childSel.ID) int2string(c.w, childSel.ID)
io.WriteString(c.w, `"."cursor"`) io.WriteString(c.w, `"."cursor" AS "`)
io.WriteString(c.w, childSel.FieldName)
io.WriteString(c.w, `_cursor"`)
} }
i++ i++
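Join columns follow the same aliasing scheme, with one behavioral addition: a child select the current role cannot render now contributes an explicit NULL rather than vanishing from the row, and cursor columns are aliased inline. With an assumed child field products using cursor paging:

skipped child:  NULL AS "products"
rendered child: "__sj_1"."json" AS "products", "__sj_1"."cursor" AS "products_cursor"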
@ -665,7 +707,7 @@ func (c *compilerContext) renderBaseSelect(sel *qcode.Select, ti *DBTableInfo, r
} }
switch { switch {
case ti.Singular: case ti.IsSingular:
io.WriteString(c.w, ` LIMIT ('1') :: integer`) io.WriteString(c.w, ` LIMIT ('1') :: integer`)
case len(sel.Paging.Limit) != 0: case len(sel.Paging.Limit) != 0:
@ -693,7 +735,7 @@ func (c *compilerContext) renderBaseSelect(sel *qcode.Select, ti *DBTableInfo, r
func (c *compilerContext) renderFrom(sel *qcode.Select, ti *DBTableInfo, rel *DBRel) error { func (c *compilerContext) renderFrom(sel *qcode.Select, ti *DBTableInfo, rel *DBRel) error {
if rel != nil && rel.Type == RelEmbedded { if rel != nil && rel.Type == RelEmbedded {
// json_to_recordset('[{"a":1,"b":[1,2,3],"c":"bar"}, {"a":2,"b":[1,2,3],"c":"bar"}]') as x(a int, b text, d text); // jsonb_to_recordset('[{"a":1,"b":[1,2,3],"c":"bar"}, {"a":2,"b":[1,2,3],"c":"bar"}]') as x(a int, b text, d text);
io.WriteString(c.w, `"`) io.WriteString(c.w, `"`)
io.WriteString(c.w, rel.Left.Table) io.WriteString(c.w, rel.Left.Table)
@ -880,8 +922,6 @@ func (c *compilerContext) renderExp(ex *qcode.Exp, ti *DBTableInfo, skipNested b
st.Push('(') st.Push('(')
case qcode.OpNot: case qcode.OpNot:
//fmt.Printf("1> %s %d %s %s\n", val.Op, len(val.Children), val.Children[0].Op, val.Children[1].Op)
st.Push(val.Children[0]) st.Push(val.Children[0])
st.Push(qcode.OpNot) st.Push(qcode.OpNot)
@ -1152,7 +1192,7 @@ func (c *compilerContext) renderVal(ex *qcode.Exp, vars map[string]string, col *
io.WriteString(c.w, col.Type) io.WriteString(c.w, col.Type)
} }
func funcPrefixLen(fn string) int { func funcPrefixLen(fm map[string]*DBFunction, fn string) int {
switch { switch {
case strings.HasPrefix(fn, "avg_"): case strings.HasPrefix(fn, "avg_"):
return 4 return 4
@ -1177,6 +1217,14 @@ func funcPrefixLen(fn string) int {
case strings.HasPrefix(fn, "var_samp_"): case strings.HasPrefix(fn, "var_samp_"):
return 9 return 9
} }
fnLen := len(fn)
for k := range fm {
kLen := len(k)
if kLen < fnLen && k[0] == fn[0] && strings.HasPrefix(fn, k) && fn[kLen] == '_' {
return kLen + 1
}
}
return 0 return 0
} }
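Besides the fixed aggregate prefixes, funcPrefixLen can now match a column such as "lower_email" against the schema's map of single-argument database functions. A self-contained sketch of just that fallback (the map is simplified to a set, and "lower" stands in for whatever functions the schema actually exposes):

package main

import (
	"fmt"
	"strings"
)

// funcPrefixLen, map-based fallback only: returns the length of the
// function-name prefix plus its trailing underscore, or 0 for no match.
func funcPrefixLen(fm map[string]struct{}, fn string) int {
	fnLen := len(fn)
	for k := range fm {
		kLen := len(k)
		if kLen < fnLen && k[0] == fn[0] && strings.HasPrefix(fn, k) && fn[kLen] == '_' {
			return kLen + 1
		}
	}
	return 0
}

func main() {
	fm := map[string]struct{}{"lower": {}} // assumed single-argument function
	fmt.Println(funcPrefixLen(fm, "lower_email")) // 6, so the column part is "email"
}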

View File

@ -1,4 +1,4 @@
package psql package psql_test
import ( import (
"bytes" "bytes"
@ -327,7 +327,7 @@ func jsonColumnAsTable(t *testing.T) {
compileGQLToPSQL(t, gql, nil, "admin") compileGQLToPSQL(t, gql, nil, "admin")
} }
func skipUserIDForAnonRole(t *testing.T) { func nullForAuthRequiredInAnon(t *testing.T) {
gql := `query { gql := `query {
products { products {
id id
@ -387,7 +387,7 @@ func TestCompileQuery(t *testing.T) {
t.Run("multiRoot", multiRoot) t.Run("multiRoot", multiRoot)
t.Run("jsonColumnAsTable", jsonColumnAsTable) t.Run("jsonColumnAsTable", jsonColumnAsTable)
t.Run("withCursor", withCursor) t.Run("withCursor", withCursor)
t.Run("skipUserIDForAnonRole", skipUserIDForAnonRole) t.Run("nullForAuthRequiredInAnon", nullForAuthRequiredInAnon)
t.Run("blockedQuery", blockedQuery) t.Run("blockedQuery", blockedQuery)
t.Run("blockedFunctions", blockedFunctions) t.Run("blockedFunctions", blockedFunctions)
} }

View File

@ -11,17 +11,20 @@ type DBSchema struct {
ver int ver int
t map[string]*DBTableInfo t map[string]*DBTableInfo
rm map[string]map[string]*DBRel rm map[string]map[string]*DBRel
fm map[string]*DBFunction
} }
type DBTableInfo struct { type DBTableInfo struct {
Name string Name string
Type string Type string
Singular bool IsSingular bool
Columns []DBColumn Columns []DBColumn
PrimaryCol *DBColumn PrimaryCol *DBColumn
TSVCol *DBColumn TSVCol *DBColumn
ColMap map[string]*DBColumn ColMap map[string]*DBColumn
ColIDMap map[int16]*DBColumn ColIDMap map[int16]*DBColumn
Singular string
Plural string
} }
type RelType int type RelType int
@ -54,8 +57,10 @@ type DBRel struct {
func NewDBSchema(info *DBInfo, aliases map[string][]string) (*DBSchema, error) { func NewDBSchema(info *DBInfo, aliases map[string][]string) (*DBSchema, error) {
schema := &DBSchema{ schema := &DBSchema{
t: make(map[string]*DBTableInfo), ver: info.Version,
rm: make(map[string]map[string]*DBRel), t: make(map[string]*DBTableInfo),
rm: make(map[string]map[string]*DBRel),
fm: make(map[string]*DBFunction, len(info.Functions)),
} }
for i, t := range info.Tables { for i, t := range info.Tables {
@ -66,12 +71,25 @@ func NewDBSchema(info *DBInfo, aliases map[string][]string) (*DBSchema, error) {
} }
for i, t := range info.Tables { for i, t := range info.Tables {
err := schema.updateRelationships(t, info.Columns[i]) err := schema.firstDegreeRels(t, info.Columns[i])
if err != nil { if err != nil {
return nil, err return nil, err
} }
} }
for i, t := range info.Tables {
err := schema.secondDegreeRels(t, info.Columns[i])
if err != nil {
return nil, err
}
}
for k, f := range info.Functions {
if len(f.Params) == 1 {
schema.fm[strings.ToLower(f.Name)] = &info.Functions[k]
}
}
return schema, nil return schema, nil
} }
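Relationship discovery is now two passes: firstDegreeRels records every direct foreign-key relation before secondDegreeRels derives anything that depends on them, and single-parameter functions are indexed by lowercase name in the same constructor. Roughly, for the test schema's purchases join table (labels approximate, format loosely following DBRel.String):

pass 1 (firstDegreeRels):  'purchases.product_id' --> 'products.id', 'purchases.customer_id' --> 'customers.id'
pass 2 (secondDegreeRels): 'products' --(Through: purchases)--> 'customers'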
@ -82,23 +100,28 @@ func (s *DBSchema) addTable(
colidmap := make(map[int16]*DBColumn, len(cols)) colidmap := make(map[int16]*DBColumn, len(cols))
singular := flect.Singularize(t.Key) singular := flect.Singularize(t.Key)
plural := flect.Pluralize(t.Key)
s.t[singular] = &DBTableInfo{ s.t[singular] = &DBTableInfo{
Name: t.Name, Name: t.Name,
Type: t.Type, Type: t.Type,
Singular: true, IsSingular: true,
Columns: cols, Columns: cols,
ColMap: colmap, ColMap: colmap,
ColIDMap: colidmap, ColIDMap: colidmap,
Singular: singular,
Plural: plural,
} }
plural := flect.Pluralize(t.Key)
s.t[plural] = &DBTableInfo{ s.t[plural] = &DBTableInfo{
Name: t.Name, Name: t.Name,
Type: t.Type, Type: t.Type,
Singular: false, IsSingular: false,
Columns: cols, Columns: cols,
ColMap: colmap, ColMap: colmap,
ColIDMap: colidmap, ColIDMap: colidmap,
Singular: singular,
Plural: plural,
} }
if al, ok := aliases[t.Key]; ok { if al, ok := aliases[t.Key]; ok {
@ -131,8 +154,7 @@ func (s *DBSchema) addTable(
return nil return nil
} }
func (s *DBSchema) updateRelationships(t DBTable, cols []DBColumn) error { func (s *DBSchema) firstDegreeRels(t DBTable, cols []DBColumn) error {
jcols := make([]DBColumn, 0, len(cols))
ct := t.Key ct := t.Key
cti, ok := s.t[ct] cti, ok := s.t[ct]
if !ok { if !ok {
@ -230,6 +252,51 @@ func (s *DBSchema) updateRelationships(t DBTable, cols []DBColumn) error {
if err := s.SetRel(ft, ct, rel2); err != nil { if err := s.SetRel(ft, ct, rel2); err != nil {
return err return err
} }
}
return nil
}
func (s *DBSchema) secondDegreeRels(t DBTable, cols []DBColumn) error {
jcols := make([]DBColumn, 0, len(cols))
ct := t.Key
cti, ok := s.t[ct]
if !ok {
return fmt.Errorf("invalid foreign key table '%s'", ct)
}
for i := range cols {
c := cols[i]
if len(c.FKeyTable) == 0 {
continue
}
// Foreign key column name
ft := strings.ToLower(c.FKeyTable)
ti, ok := s.t[ft]
if !ok {
return fmt.Errorf("invalid foreign key table '%s'", ft)
}
// This is an embedded relationship like when a json/jsonb column
// is exposed as a table
if c.Name == c.FKeyTable && len(c.FKeyColID) == 0 {
continue
}
if len(c.FKeyColID) == 0 {
continue
}
// Foreign key column id
fcid := c.FKeyColID[0]
if _, ok := ti.ColIDMap[fcid]; !ok {
return fmt.Errorf("invalid foreign key column id '%d' for table '%s'",
fcid, ti.Name)
}
jcols = append(jcols, c) jcols = append(jcols, c)
} }
@ -313,6 +380,14 @@ func (s *DBSchema) updateSchemaOTMT(
return nil return nil
} }
func (s *DBSchema) GetTableNames() []string {
var names []string
for name := range s.t {
names = append(names, name)
}
return names
}
func (s *DBSchema) GetTable(table string) (*DBTableInfo, error) { func (s *DBSchema) GetTable(table string) (*DBTableInfo, error) {
t, ok := s.t[table] t, ok := s.t[table]
if !ok { if !ok {
@ -322,6 +397,9 @@ func (s *DBSchema) GetTable(table string) (*DBTableInfo, error) {
} }
func (s *DBSchema) SetRel(child, parent string, rel *DBRel) error { func (s *DBSchema) SetRel(child, parent string, rel *DBRel) error {
sp := strings.ToLower(flect.Singularize(parent))
pp := strings.ToLower(flect.Pluralize(parent))
sc := strings.ToLower(flect.Singularize(child)) sc := strings.ToLower(flect.Singularize(child))
pc := strings.ToLower(flect.Pluralize(child)) pc := strings.ToLower(flect.Pluralize(child))
@ -333,9 +411,6 @@ func (s *DBSchema) SetRel(child, parent string, rel *DBRel) error {
s.rm[pc] = make(map[string]*DBRel) s.rm[pc] = make(map[string]*DBRel)
} }
sp := strings.ToLower(flect.Singularize(parent))
pp := strings.ToLower(flect.Pluralize(parent))
if _, ok := s.rm[sc][sp]; !ok { if _, ok := s.rm[sc][sp]; !ok {
s.rm[sc][sp] = rel s.rm[sc][sp] = rel
} }
@ -373,3 +448,11 @@ func (s *DBSchema) GetRel(child, parent string) (*DBRel, error) {
} }
return rel, nil return rel, nil
} }
func (s *DBSchema) GetFunctions() []*DBFunction {
var funcs []*DBFunction
for _, f := range s.fm {
funcs = append(funcs, f)
}
return funcs
}

View File

@ -19,6 +19,10 @@ func (rt RelType) String() string {
} }
func (re *DBRel) String() string { func (re *DBRel) String() string {
if re.Type == RelOneToManyThrough {
return fmt.Sprintf("'%s.%s' --(Through: %s)--> '%s.%s'",
re.Left.Table, re.Left.Col, re.Through, re.Right.Table, re.Right.Col)
}
return fmt.Sprintf("'%s.%s' --(%s)--> '%s.%s'", return fmt.Sprintf("'%s.%s' --(%s)--> '%s.%s'",
re.Left.Table, re.Left.Col, re.Type, re.Right.Table, re.Right.Col) re.Left.Table, re.Left.Col, re.Type, re.Right.Table, re.Right.Col)
} }

View File

@ -1,34 +1,27 @@
package psql package psql
import ( import (
"context" "database/sql"
"fmt" "fmt"
"strconv" "strconv"
"strings" "strings"
"github.com/jackc/pgtype" "github.com/jackc/pgtype"
"github.com/jackc/pgx/v4/pgxpool"
) )
type DBInfo struct { type DBInfo struct {
Version int Version int
Tables []DBTable Tables []DBTable
Columns [][]DBColumn Columns [][]DBColumn
colmap map[string]map[string]*DBColumn Functions []DBFunction
colMap map[string]map[string]*DBColumn
} }
func GetDBInfo(db *pgxpool.Pool) (*DBInfo, error) { func GetDBInfo(db *sql.DB, schema string) (*DBInfo, error) {
di := &DBInfo{} di := &DBInfo{}
dbc, err := db.Acquire(context.Background())
if err != nil {
return nil, fmt.Errorf("error acquiring connection from pool: %w", err)
}
defer dbc.Release()
var version string var version string
err = dbc.QueryRow(context.Background(), `SHOW server_version_num`).Scan(&version) err := db.QueryRow(`SHOW server_version_num`).Scan(&version)
if err != nil { if err != nil {
return nil, fmt.Errorf("error fetching version: %w", err) return nil, fmt.Errorf("error fetching version: %w", err)
} }
@ -38,46 +31,61 @@ func GetDBInfo(db *pgxpool.Pool) (*DBInfo, error) {
return nil, err return nil, err
} }
di.Tables, err = GetTables(dbc) di.Tables, err = GetTables(db, schema)
if err != nil { if err != nil {
return nil, err return nil, err
} }
di.colmap = make(map[string]map[string]*DBColumn, len(di.Tables)) for _, t := range di.Tables {
cols, err := GetColumns(db, schema, t.Name)
for i, t := range di.Tables {
cols, err := GetColumns(dbc, "public", t.Name)
if err != nil { if err != nil {
return nil, err return nil, err
} }
di.Columns = append(di.Columns, cols) di.Columns = append(di.Columns, cols)
di.colmap[t.Key] = make(map[string]*DBColumn, len(cols)) }
for n, c := range di.Columns[i] { di.colMap = newColMap(di.Tables, di.Columns)
di.colmap[t.Key][c.Key] = &di.Columns[i][n]
} di.Functions, err = GetFunctions(db, schema)
if err != nil {
return nil, err
} }
return di, nil return di, nil
} }
func newColMap(tables []DBTable, columns [][]DBColumn) map[string]map[string]*DBColumn {
cm := make(map[string]map[string]*DBColumn, len(tables))
for i, t := range tables {
cols := columns[i]
cm[t.Key] = make(map[string]*DBColumn, len(cols))
for n, c := range cols {
cm[t.Key][c.Key] = &columns[i][n]
}
}
return cm
}
func (di *DBInfo) AddTable(t DBTable, cols []DBColumn) { func (di *DBInfo) AddTable(t DBTable, cols []DBColumn) {
t.ID = di.Tables[len(di.Tables)-1].ID t.ID = di.Tables[len(di.Tables)-1].ID
di.Tables = append(di.Tables, t) di.Tables = append(di.Tables, t)
di.colmap[t.Key] = make(map[string]*DBColumn, len(cols)) di.colMap[t.Key] = make(map[string]*DBColumn, len(cols))
for i := range cols { for i := range cols {
cols[i].ID = int16(i) cols[i].ID = int16(i)
c := &cols[i] c := &cols[i]
di.colmap[t.Key][c.Key] = c di.colMap[t.Key][c.Key] = c
} }
di.Columns = append(di.Columns, cols) di.Columns = append(di.Columns, cols)
} }
func (di *DBInfo) GetColumn(table, column string) (*DBColumn, bool) { func (di *DBInfo) GetColumn(table, column string) (*DBColumn, bool) {
v, ok := di.colmap[strings.ToLower(table)][strings.ToLower(column)] v, ok := di.colMap[strings.ToLower(table)][strings.ToLower(column)]
return v, ok return v, ok
} }
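With pgxpool gone, the discovery API runs on plain database/sql. A hypothetical caller (DSN and driver choice are placeholders; the internal import path only resolves inside the super-graph module):

package main

import (
	"database/sql"
	"log"

	_ "github.com/jackc/pgx/v4/stdlib" // any database/sql Postgres driver works now

	"github.com/dosco/super-graph/core/internal/psql"
)

func main() {
	db, err := sql.Open("pgx", "postgres://localhost:5432/app") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// The schema name is now an explicit argument instead of a hardcoded "public".
	info, err := psql.GetDBInfo(db, "public")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("postgres %d, %d tables", info.Version, len(info.Tables))
}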
@ -88,7 +96,7 @@ type DBTable struct {
Type string Type string
} }
func GetTables(dbc *pgxpool.Conn) ([]DBTable, error) { func GetTables(db *sql.DB, schema string) ([]DBTable, error) {
sqlStmt := ` sqlStmt := `
SELECT SELECT
c.relname as "name", c.relname as "name",
@ -100,14 +108,12 @@ SELECT
FROM pg_catalog.pg_class c FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','m','f','') WHERE c.relkind IN ('r','v','m','f','')
AND n.nspname <> ('pg_catalog') AND n.nspname = $1
AND n.nspname <> ('information_schema') AND pg_catalog.pg_table_is_visible(c.oid);`
AND n.nspname !~ ('^pg_toast')
AND pg_catalog.pg_table_is_visible(c.oid);`
var tables []DBTable var tables []DBTable
rows, err := dbc.Query(context.Background(), sqlStmt) rows, err := db.Query(sqlStmt, schema)
if err != nil { if err != nil {
return nil, fmt.Errorf("Error fetching tables: %s", err) return nil, fmt.Errorf("Error fetching tables: %s", err)
} }
@ -142,7 +148,7 @@ type DBColumn struct {
fKeyColID pgtype.Int2Array fKeyColID pgtype.Int2Array
} }
func GetColumns(dbc *pgxpool.Conn, schema, table string) ([]DBColumn, error) { func GetColumns(db *sql.DB, schema, table string) ([]DBColumn, error) {
sqlStmt := ` sqlStmt := `
SELECT SELECT
f.attnum AS id, f.attnum AS id,
@ -183,7 +189,7 @@ WHERE c.relkind IN ('r', 'v', 'm', 'f')
AND f.attisdropped = false AND f.attisdropped = false
ORDER BY id;` ORDER BY id;`
rows, err := dbc.Query(context.Background(), sqlStmt, schema, table) rows, err := db.Query(sqlStmt, schema, table)
if err != nil { if err != nil {
return nil, fmt.Errorf("error fetching columns: %s", err) return nil, fmt.Errorf("error fetching columns: %s", err)
} }
@ -245,6 +251,71 @@ ORDER BY id;`
return cols, nil return cols, nil
} }
type DBFunction struct {
Name string
Params []DBFuncParam
}
type DBFuncParam struct {
ID int
Name sql.NullString
Type string
}
func GetFunctions(db *sql.DB, schema string) ([]DBFunction, error) {
sqlStmt := `
SELECT
routines.routine_name,
parameters.specific_name,
parameters.data_type,
parameters.parameter_name,
parameters.ordinal_position
FROM
information_schema.routines
RIGHT JOIN
information_schema.parameters
ON (routines.specific_name = parameters.specific_name and parameters.ordinal_position IS NOT NULL)
WHERE
routines.specific_schema = $1
ORDER BY
routines.routine_name, parameters.ordinal_position;`
rows, err := db.Query(sqlStmt, schema)
if err != nil {
return nil, fmt.Errorf("Error fetching functions: %s", err)
}
defer rows.Close()
var funcs []DBFunction
fm := make(map[string]int)
parameterIndex := 1
for rows.Next() {
var fn, fid string
fp := DBFuncParam{}
err = rows.Scan(&fn, &fid, &fp.Type, &fp.Name, &fp.ID)
if err != nil {
return nil, err
}
if !fp.Name.Valid {
fp.Name.String = strconv.Itoa(parameterIndex) // string(int) would yield a rune, not decimal digits
fp.Name.Valid = true
}
if i, ok := fm[fid]; ok {
funcs[i].Params = append(funcs[i].Params, fp)
} else {
funcs = append(funcs, DBFunction{Name: fn, Params: []DBFuncParam{fp}})
fm[fid] = len(funcs) - 1
}
parameterIndex++
}
return funcs, nil
}
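A self-contained sketch of the row-grouping logic in GetFunctions: rows arrive ordered by routine and ordinal position, the specific-name map gathers one function's parameters across rows, and unnamed parameters fall back to a positional name. The sample rows (function "search_products") are made up.

package main

import (
	"database/sql"
	"fmt"
	"strconv"
)

type DBFuncParam struct {
	ID   int
	Name sql.NullString
	Type string
}

type DBFunction struct {
	Name   string
	Params []DBFuncParam
}

func main() {
	// Fake query results: routine_name, specific_name, one parameter each.
	rows := []struct {
		fn, fid string
		p       DBFuncParam
	}{
		{"search_products", "search_products_16384", DBFuncParam{ID: 1, Name: sql.NullString{String: "query", Valid: true}, Type: "text"}},
		{"search_products", "search_products_16384", DBFuncParam{ID: 2, Type: "integer"}}, // unnamed
	}

	var funcs []DBFunction
	fm := make(map[string]int)
	for _, r := range rows {
		fp := r.p
		if !fp.Name.Valid {
			// simplified positional fallback, keyed off the ordinal position
			fp.Name = sql.NullString{String: strconv.Itoa(fp.ID), Valid: true}
		}
		if i, ok := fm[r.fid]; ok {
			funcs[i].Params = append(funcs[i].Params, fp)
		} else {
			funcs = append(funcs, DBFunction{Name: r.fn, Params: []DBFuncParam{fp}})
			fm[r.fid] = len(funcs) - 1
		}
	}
	fmt.Println(len(funcs), len(funcs[0].Params)) // 1 2
}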
// func GetValType(type string) qcode.ValType { // func GetValType(type string) qcode.ValType {
// switch { // switch {
// case "bigint", "integer", "smallint", "numeric", "bigserial": // case "bigint", "integer", "smallint", "numeric", "bigserial":

View File

@ -1,11 +1,10 @@
package psql package psql
import ( import (
"log"
"strings" "strings"
) )
func getTestSchema() *DBSchema { func GetTestDBInfo() *DBInfo {
tables := []DBTable{ tables := []DBTable{
DBTable{Name: "customers", Type: "table"}, DBTable{Name: "customers", Type: "table"},
DBTable{Name: "users", Type: "table"}, DBTable{Name: "users", Type: "table"},
@ -74,29 +73,19 @@ func getTestSchema() *DBSchema {
} }
} }
schema := &DBSchema{ return &DBInfo{
ver: 110000, Version: 110000,
t: make(map[string]*DBTableInfo), Tables: tables,
rm: make(map[string]map[string]*DBRel), Columns: columns,
Functions: []DBFunction{},
colMap: newColMap(tables, columns),
} }
}
func GetTestSchema() (*DBSchema, error) {
aliases := map[string][]string{ aliases := map[string][]string{
"users": []string{"mes"}, "users": []string{"mes"},
} }
for i, t := range tables { return NewDBSchema(GetTestDBInfo(), aliases)
err := schema.addTable(t, columns[i], aliases)
if err != nil {
log.Fatal(err)
}
}
for i, t := range tables {
err := schema.updateRelationships(t, columns[i])
if err != nil {
log.Fatal(err)
}
}
return schema
} }
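The test-schema helpers are exported so the compiler tests can live in an external psql_test package, as the TestMain diff earlier shows. A hypothetical minimal test using them:

package psql_test

import (
	"testing"

	"github.com/dosco/super-graph/core/internal/psql"
)

func TestSchemaSetup(t *testing.T) {
	schema, err := psql.GetTestSchema()
	if err != nil {
		t.Fatal(err)
	}
	vars := psql.NewVariables(map[string]string{"admin_account_id": "5"})
	if c := psql.NewCompiler(psql.Config{Schema: schema, Vars: vars}); c == nil {
		t.Fatal("nil compiler")
	}
}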

View File

@ -0,0 +1,151 @@
=== RUN TestCompileInsert
=== RUN TestCompileInsert/simpleInsert
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i RETURNING *) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id" FROM (SELECT "users"."id" FROM "users" LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/singleInsert
WITH "_sg_input" AS (SELECT '{{insert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description", "price", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'user_id' AS bigint) FROM "_sg_input" i RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/bulkInsert
WITH "_sg_input" AS (SELECT '{{insert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/simpleInsertWithPresets
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), 'now' :: timestamp without time zone, 'now' :: timestamp without time zone, '{{user_id}}' :: bigint FROM "_sg_input" i RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id" FROM (SELECT "products"."id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertManyToMany
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "price") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)) FROM "_sg_input" i RETURNING *), "customers" AS (INSERT INTO "customers" ("full_name", "email") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i RETURNING *), "purchases" AS (INSERT INTO "purchases" ("sale_type", "quantity", "due_date", "customer_id", "product_id") SELECT CAST( i.j ->>'sale_type' AS character varying), CAST( i.j ->>'quantity' AS integer), CAST( i.j ->>'due_date' AS timestamp without time zone), "customers"."id", "products"."id" FROM "_sg_input" i, "customers", "products" RETURNING *) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "customers" AS (INSERT INTO "customers" ("full_name", "email") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i RETURNING *), "products" AS (INSERT INTO "products" ("name", "price") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)) FROM "_sg_input" i RETURNING *), "purchases" AS (INSERT INTO "purchases" ("sale_type", "quantity", "due_date", "product_id", "customer_id") SELECT CAST( i.j ->>'sale_type' AS character varying), CAST( i.j ->>'quantity' AS integer), CAST( i.j ->>'due_date' AS timestamp without time zone), "products"."id", "customers"."id" FROM "_sg_input" i, "products", "customers" RETURNING *) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToMany
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i RETURNING *), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone), "users"."id" FROM "_sg_input" i, "users" RETURNING *) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToOne
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i RETURNING *), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone), "users"."id" FROM "_sg_input" i, "users" RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToManyWithConnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (INSERT INTO "users" ("full_name", "email", "created_at", "updated_at") SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i RETURNING *), "products" AS ( UPDATE "products" SET "user_id" = "users"."id" FROM "users" WHERE ("products"."id"= ((i.j->'product'->'connect'->>'id'))::bigint) RETURNING "products".*) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToOneWithConnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint LIMIT 1), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone), "_x_users"."id" FROM "_sg_input" i, "_x_users" RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user", "__sj_2"."json" AS "tags" FROM (SELECT "products"."id", "products"."name", "products"."user_id", "products"."tags" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_2"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "tags_2"."id" AS "id", "tags_2"."name" AS "name" FROM (SELECT "tags"."id", "tags"."name" FROM "tags" WHERE ((("tags"."slug") = any ("products_0"."tags"))) LIMIT ('20') :: integer) AS "tags_2") AS "__sr_2") AS "__sj_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileInsert/nestedInsertOneToOneWithConnectArray
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id" = ANY((select a::bigint AS list from json_array_elements_text((i.j->'user'->'connect'->>'id')::json) AS a)) LIMIT 1), "products" AS (INSERT INTO "products" ("name", "price", "created_at", "updated_at", "user_id") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone), "_x_users"."id" FROM "_sg_input" i, "_x_users" RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
--- PASS: TestCompileInsert (0.02s)
--- PASS: TestCompileInsert/simpleInsert (0.00s)
--- PASS: TestCompileInsert/singleInsert (0.00s)
--- PASS: TestCompileInsert/bulkInsert (0.00s)
--- PASS: TestCompileInsert/simpleInsertWithPresets (0.00s)
--- PASS: TestCompileInsert/nestedInsertManyToMany (0.00s)
--- PASS: TestCompileInsert/nestedInsertOneToMany (0.00s)
--- PASS: TestCompileInsert/nestedInsertOneToOne (0.00s)
--- PASS: TestCompileInsert/nestedInsertOneToManyWithConnect (0.00s)
--- PASS: TestCompileInsert/nestedInsertOneToOneWithConnect (0.00s)
--- PASS: TestCompileInsert/nestedInsertOneToOneWithConnectArray (0.00s)
=== RUN TestCompileMutate
=== RUN TestCompileMutate/singleUpsert
WITH "_sg_input" AS (SELECT '{{upsert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i RETURNING *) ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileMutate/singleUpsertWhere
WITH "_sg_input" AS (SELECT '{{upsert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i RETURNING *) ON CONFLICT (id) WHERE (("products"."price") > '3' :: numeric(7,2)) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileMutate/bulkUpsert
WITH "_sg_input" AS (SELECT '{{upsert}}' :: json AS j), "products" AS (INSERT INTO "products" ("name", "description") SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i RETURNING *) ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, description = EXCLUDED.description RETURNING *) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileMutate/delete
WITH "products" AS (DELETE FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") = '1' :: bigint)) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
--- PASS: TestCompileMutate (0.00s)
--- PASS: TestCompileMutate/singleUpsert (0.00s)
--- PASS: TestCompileMutate/singleUpsertWhere (0.00s)
--- PASS: TestCompileMutate/bulkUpsert (0.00s)
--- PASS: TestCompileMutate/delete (0.00s)
=== RUN TestCompileQuery
=== RUN TestCompileQuery/withComplexArgs
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT DISTINCT ON ("products"."price") "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."id") < '28' :: bigint) AND (("products"."id") >= '20' :: bigint) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) ORDER BY "products"."price" DESC LIMIT ('30') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/withWhereAndList
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."price") > '10' :: numeric(7,2)) AND NOT (("products"."id") IS NULL) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/withWhereIsNull
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE (((("products"."price") > '10' :: numeric(7,2)) AND NOT (("products"."id") IS NULL) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/withWhereMultiOr
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND ((("products"."price") < '20' :: numeric(7,2)) OR (("products"."price") > '10' :: numeric(7,2)) OR NOT (("products"."id") IS NULL)))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/fetchByID
SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") = '{{id}}' :: bigint))) LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileQuery/searchQuery
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."search_rank" AS "search_rank", "products_0"."search_headline_description" AS "search_headline_description" FROM (SELECT "products"."id", "products"."name", ts_rank("products"."tsv", websearch_to_tsquery('{{query}}')) AS "search_rank", ts_headline("products"."description", websearch_to_tsquery('{{query}}')) AS "search_headline_description" FROM "products" WHERE ((("products"."tsv") @@ websearch_to_tsquery('{{query}}'))) LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/oneToMany
SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."email" AS "email", "__sj_1"."json" AS "products" FROM (SELECT "users"."email", "users"."id" FROM "users" LIMIT ('20') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id")) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/oneToManyReverse
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name", "products_0"."price" AS "price", "__sj_1"."json" AS "users" FROM (SELECT "products"."name", "products"."price", "products"."user_id" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."email" AS "email" FROM (SELECT "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('20') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/oneToManyArray
SELECT jsonb_build_object('tags', "__sj_0"."json", 'product', "__sj_2"."json") as "__root" FROM (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "products_2"."name" AS "name", "products_2"."price" AS "price", "__sj_3"."json" AS "tags" FROM (SELECT "products"."name", "products"."price", "products"."tags" FROM "products" LIMIT ('1') :: integer) AS "products_2" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_3"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_3".*) AS "json"FROM (SELECT "tags_3"."id" AS "id", "tags_3"."name" AS "name" FROM (SELECT "tags"."id", "tags"."name" FROM "tags" WHERE ((("tags"."slug") = any ("products_2"."tags"))) LIMIT ('20') :: integer) AS "tags_3") AS "__sr_3") AS "__sj_3") AS "__sj_3" ON ('true')) AS "__sr_2") AS "__sj_2", (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "tags_0"."name" AS "name", "__sj_1"."json" AS "product" FROM (SELECT "tags"."name", "tags"."slug" FROM "tags" LIMIT ('20') :: integer) AS "tags_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."name" AS "name" FROM (SELECT "products"."name" FROM "products" WHERE ((("tags_0"."slug") = any ("products"."tags"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/manyToMany
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name", "__sj_1"."json" AS "customers" FROM (SELECT "products"."name", "products"."id" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "customers_1"."email" AS "email", "customers_1"."full_name" AS "full_name" FROM (SELECT "customers"."email", "customers"."full_name" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products_0"."id")) WHERE ((("customers"."id") = ("purchases"."customer_id"))) LIMIT ('20') :: integer) AS "customers_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/manyToManyReverse
SELECT jsonb_build_object('customers', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "customers_0"."email" AS "email", "customers_0"."full_name" AS "full_name", "__sj_1"."json" AS "products" FROM (SELECT "customers"."email", "customers"."full_name", "customers"."id" FROM "customers" LIMIT ('20') :: integer) AS "customers_0" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_1"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."name" AS "name" FROM (SELECT "products"."name" FROM "products" LEFT OUTER JOIN "purchases" ON (("purchases"."customer_id") = ("customers_0"."id")) WHERE ((("products"."id") = ("purchases"."product_id")) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('20') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/aggFunction
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name", "products_0"."count_price" AS "count_price" FROM (SELECT "products"."name", count("products"."price") AS "count_price" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/aggFunctionBlockedByCol
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name" FROM (SELECT "products"."name" FROM "products" GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/aggFunctionDisabled
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."name" AS "name" FROM (SELECT "products"."name" FROM "products" GROUP BY "products"."name" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/aggFunctionWithFilter
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."max_price" AS "max_price" FROM (SELECT "products"."id", max("products"."price") AS "max_price" FROM "products" WHERE ((((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))) AND (("products"."id") > '10' :: bigint))) GROUP BY "products"."id" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/syntheticTables
SELECT jsonb_build_object('me', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT FROM (SELECT "users"."email" FROM "users" WHERE ((("users"."id") = '{{user_id}}' :: bigint)) LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileQuery/queryWithVariables
SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE (((("products"."price") = '{{product_price}}' :: numeric(7,2)) AND (("products"."id") = '{{product_id}}' :: bigint) AND ((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2))))) LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileQuery/withWhereOnRelations
SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."email" AS "email" FROM (SELECT "users"."id", "users"."email" FROM "users" WHERE (NOT EXISTS (SELECT 1 FROM products WHERE (("products"."user_id") = ("users"."id")) AND ((("products"."price") > '3' :: numeric(7,2))))) LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/multiRoot
SELECT jsonb_build_object('customer', "__sj_0"."json", 'user', "__sj_1"."json", 'product', "__sj_2"."json") as "__root" FROM (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "products_2"."id" AS "id", "products_2"."name" AS "name", "__sj_3"."json" AS "customers", "__sj_4"."json" AS "customer" FROM (SELECT "products"."id", "products"."name" FROM "products" WHERE (((("products"."price") > '0' :: numeric(7,2)) AND (("products"."price") < '8' :: numeric(7,2)))) LIMIT ('1') :: integer) AS "products_2" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_4".*) AS "json"FROM (SELECT "customers_4"."email" AS "email" FROM (SELECT "customers"."email" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products_2"."id")) WHERE ((("customers"."id") = ("purchases"."customer_id"))) LIMIT ('1') :: integer) AS "customers_4") AS "__sr_4") AS "__sj_4" ON ('true') LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_3"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_3".*) AS "json"FROM (SELECT "customers_3"."email" AS "email" FROM (SELECT "customers"."email" FROM "customers" LEFT OUTER JOIN "purchases" ON (("purchases"."product_id") = ("products_2"."id")) WHERE ((("customers"."id") = ("purchases"."customer_id"))) LIMIT ('20') :: integer) AS "customers_3") AS "__sr_3") AS "__sj_3") AS "__sj_3" ON ('true')) AS "__sr_2") AS "__sj_2", (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1", (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "customers_0"."id" AS "id" FROM (SELECT "customers"."id" FROM "customers" LIMIT ('1') :: integer) AS "customers_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileQuery/jsonColumnAsTable
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "tag_count" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('20') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "tag_count_1"."count" AS "count", "__sj_2"."json" AS "tags" FROM (SELECT "tag_count"."count", "tag_count"."tag_id" FROM "products", json_to_recordset("products"."tag_count") AS "tag_count"(tag_id bigint, count int) WHERE ((("products"."id") = ("products_0"."id"))) LIMIT ('1') :: integer) AS "tag_count_1" LEFT OUTER JOIN LATERAL (SELECT coalesce(jsonb_agg("__sj_2"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "tags_2"."name" AS "name" FROM (SELECT "tags"."name" FROM "tags" WHERE ((("tags"."id") = ("tag_count_1"."tag_id"))) LIMIT ('20') :: integer) AS "tags_2") AS "__sr_2") AS "__sj_2") AS "__sj_2" ON ('true')) AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/withCursor
SELECT jsonb_build_object('products', "__sj_0"."json", 'products_cursor', "__sj_0"."cursor") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json", CONCAT_WS(',', max("__cur_0"), max("__cur_1")) as "cursor" FROM (SELECT to_jsonb("__sr_0".*) - '__cur_0' - '__cur_1' AS "json", "__cur_0", "__cur_1"FROM (SELECT "products_0"."name" AS "name", LAST_VALUE("products_0"."price") OVER() AS "__cur_0", LAST_VALUE("products_0"."id") OVER() AS "__cur_1" FROM (WITH "__cur" AS (SELECT a[1] as "price", a[2] as "id" FROM string_to_array('{{cursor}}', ',') as a) SELECT "products"."name", "products"."id", "products"."price" FROM "products", "__cur" WHERE (((("products"."price") < "__cur"."price" :: numeric(7,2)) OR ((("products"."price") = "__cur"."price" :: numeric(7,2)) AND (("products"."id") > "__cur"."id" :: bigint)))) ORDER BY "products"."price" DESC, "products"."id" ASC LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/nullForAuthRequiredInAnon
SELECT jsonb_build_object('products', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", NULL AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('20') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
=== RUN TestCompileQuery/blockedQuery
SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE (false) LIMIT ('1') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileQuery/blockedFunctions
SELECT jsonb_build_object('users', "__sj_0"."json") as "__root" FROM (SELECT coalesce(jsonb_agg("__sj_0"."json"), '[]') as "json" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."email" AS "email" FROM (SELECT , "users"."email" FROM "users" WHERE (false) GROUP BY "users"."email" LIMIT ('20') :: integer) AS "users_0") AS "__sr_0") AS "__sj_0") AS "__sj_0"
--- PASS: TestCompileQuery (0.02s)
--- PASS: TestCompileQuery/withComplexArgs (0.00s)
--- PASS: TestCompileQuery/withWhereAndList (0.00s)
--- PASS: TestCompileQuery/withWhereIsNull (0.00s)
--- PASS: TestCompileQuery/withWhereMultiOr (0.00s)
--- PASS: TestCompileQuery/fetchByID (0.00s)
--- PASS: TestCompileQuery/searchQuery (0.00s)
--- PASS: TestCompileQuery/oneToMany (0.00s)
--- PASS: TestCompileQuery/oneToManyReverse (0.00s)
--- PASS: TestCompileQuery/oneToManyArray (0.00s)
--- PASS: TestCompileQuery/manyToMany (0.00s)
--- PASS: TestCompileQuery/manyToManyReverse (0.00s)
--- PASS: TestCompileQuery/aggFunction (0.00s)
--- PASS: TestCompileQuery/aggFunctionBlockedByCol (0.00s)
--- PASS: TestCompileQuery/aggFunctionDisabled (0.00s)
--- PASS: TestCompileQuery/aggFunctionWithFilter (0.00s)
--- PASS: TestCompileQuery/syntheticTables (0.00s)
--- PASS: TestCompileQuery/queryWithVariables (0.00s)
--- PASS: TestCompileQuery/withWhereOnRelations (0.00s)
--- PASS: TestCompileQuery/multiRoot (0.00s)
--- PASS: TestCompileQuery/jsonColumnAsTable (0.00s)
--- PASS: TestCompileQuery/withCursor (0.00s)
--- PASS: TestCompileQuery/nullForAuthRequiredInAnon (0.00s)
--- PASS: TestCompileQuery/blockedQuery (0.00s)
--- PASS: TestCompileQuery/blockedFunctions (0.00s)
=== RUN TestCompileUpdate
=== RUN TestCompileUpdate/singleUpdate
WITH "_sg_input" AS (SELECT '{{update}}' :: json AS j), "products" AS (UPDATE "products" SET ("name", "description") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'description' AS text) FROM "_sg_input" i) WHERE ((("products"."id") = '1' :: bigint) AND (("products"."id") = '{{id}}' :: bigint)) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name" FROM (SELECT "products"."id", "products"."name" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/simpleUpdateWithPresets
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (UPDATE "products" SET ("name", "price", "updated_at") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), 'now' :: timestamp without time zone FROM "_sg_input" i) WHERE (("products"."user_id") = '{{user_id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id" FROM (SELECT "products"."id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateManyToMany
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "purchases" AS (UPDATE "purchases" SET ("sale_type", "quantity", "due_date") = (SELECT CAST( i.j ->>'sale_type' AS character varying), CAST( i.j ->>'quantity' AS integer), CAST( i.j ->>'due_date' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("purchases"."id") = '{{id}}' :: bigint) RETURNING "purchases".*), "products" AS (UPDATE "products" SET ("name", "price") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)) FROM "_sg_input" i) FROM "purchases" WHERE (("products"."id") = ("purchases"."product_id")) RETURNING "products".*), "customers" AS (UPDATE "customers" SET ("full_name", "email") = (SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i) FROM "purchases" WHERE (("customers"."id") = ("purchases"."customer_id")) RETURNING "customers".*) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "purchases" AS (UPDATE "purchases" SET ("sale_type", "quantity", "due_date") = (SELECT CAST( i.j ->>'sale_type' AS character varying), CAST( i.j ->>'quantity' AS integer), CAST( i.j ->>'due_date' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("purchases"."id") = '{{id}}' :: bigint) RETURNING "purchases".*), "customers" AS (UPDATE "customers" SET ("full_name", "email") = (SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i) FROM "purchases" WHERE (("customers"."id") = ("purchases"."customer_id")) RETURNING "customers".*), "products" AS (UPDATE "products" SET ("name", "price") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)) FROM "_sg_input" i) FROM "purchases" WHERE (("products"."id") = ("purchases"."product_id")) RETURNING "products".*) SELECT jsonb_build_object('purchase', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "purchases_0"."sale_type" AS "sale_type", "purchases_0"."quantity" AS "quantity", "purchases_0"."due_date" AS "due_date", "__sj_1"."json" AS "product", "__sj_2"."json" AS "customer" FROM (SELECT "purchases"."sale_type", "purchases"."quantity", "purchases"."due_date", "purchases"."product_id", "purchases"."customer_id" FROM "purchases" LIMIT ('1') :: integer) AS "purchases_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_2".*) AS "json"FROM (SELECT "customers_2"."id" AS "id", "customers_2"."full_name" AS "full_name", "customers_2"."email" AS "email" FROM (SELECT "customers"."id", "customers"."full_name", "customers"."email" FROM "customers" WHERE ((("customers"."id") = ("purchases_0"."customer_id"))) LIMIT ('1') :: integer) AS "customers_2") AS "__sr_2") AS "__sj_2" ON ('true') LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."id") = ("purchases_0"."product_id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateOneToMany
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (UPDATE "users" SET ("full_name", "email", "created_at", "updated_at") = (SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("users"."id") = '8' :: bigint) RETURNING "users".*), "products" AS (UPDATE "products" SET ("name", "price", "created_at", "updated_at") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i) FROM "users" WHERE (("products"."user_id") = ("users"."id") AND "products"."id"= ((i.j->'product'->'where'->>'id'))::bigint) RETURNING "products".*) SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateOneToOne
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "products" AS (UPDATE "products" SET ("name", "price", "created_at", "updated_at") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("products"."id") = '{{id}}' :: bigint) RETURNING "products".*), "users" AS (UPDATE "users" SET ("email") = (SELECT CAST( i.j ->>'email' AS character varying) FROM "_sg_input" i) FROM "products" WHERE (("users"."id") = ("products"."user_id")) RETURNING "users".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateOneToManyWithConnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "users" AS (UPDATE "users" SET ("full_name", "email", "created_at", "updated_at") = (SELECT CAST( i.j ->>'full_name' AS character varying), CAST( i.j ->>'email' AS character varying), CAST( i.j ->>'created_at' AS timestamp without time zone), CAST( i.j ->>'updated_at' AS timestamp without time zone) FROM "_sg_input" i) WHERE (("users"."id") = '{{id}}' :: bigint) RETURNING "users".*), "products_c" AS ( UPDATE "products" SET "user_id" = "users"."id" FROM "users" WHERE ("products"."id"= ((i.j->'product'->'connect'->>'id'))::bigint) RETURNING "products".*), "products_d" AS ( UPDATE "products" SET "user_id" = NULL FROM "users" WHERE ("products"."id"= ((i.j->'product'->'disconnect'->>'id'))::bigint) RETURNING "products".*), "products" AS (SELECT * FROM "products_c" UNION ALL SELECT * FROM "products_d") SELECT jsonb_build_object('user', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "users_0"."id" AS "id", "users_0"."full_name" AS "full_name", "users_0"."email" AS "email", "__sj_1"."json" AS "product" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" LIMIT ('1') :: integer) AS "users_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "products_1"."id" AS "id", "products_1"."name" AS "name", "products_1"."price" AS "price" FROM (SELECT "products"."id", "products"."name", "products"."price" FROM "products" WHERE ((("products"."user_id") = ("users_0"."id"))) LIMIT ('1') :: integer) AS "products_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateOneToOneWithConnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint AND "users"."email"= ((i.j->'user'->'connect'->>'email'))::character varying LIMIT 1), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), "_x_users"."id" FROM "_sg_input" i, "_x_users") WHERE (("products"."id") = '{{product_id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT "id" FROM "_sg_input" i,"users" WHERE "users"."email"= ((i.j->'user'->'connect'->>'email'))::character varying AND "users"."id"= ((i.j->'user'->'connect'->>'id'))::bigint LIMIT 1), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), "_x_users"."id" FROM "_sg_input" i, "_x_users") WHERE (("products"."id") = '{{product_id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "__sj_1"."json" AS "user" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0" LEFT OUTER JOIN LATERAL (SELECT to_jsonb("__sr_1".*) AS "json"FROM (SELECT "users_1"."id" AS "id", "users_1"."full_name" AS "full_name", "users_1"."email" AS "email" FROM (SELECT "users"."id", "users"."full_name", "users"."email" FROM "users" WHERE ((("users"."id") = ("products_0"."user_id"))) LIMIT ('1') :: integer) AS "users_1") AS "__sr_1") AS "__sj_1" ON ('true')) AS "__sr_0") AS "__sj_0"
=== RUN TestCompileUpdate/nestedUpdateOneToOneWithDisconnect
WITH "_sg_input" AS (SELECT '{{data}}' :: json AS j), "_x_users" AS (SELECT * FROM (VALUES(NULL::bigint)) AS LOOKUP("id")), "products" AS (UPDATE "products" SET ("name", "price", "user_id") = (SELECT CAST( i.j ->>'name' AS character varying), CAST( i.j ->>'price' AS numeric(7,2)), "_x_users"."id" FROM "_sg_input" i, "_x_users") WHERE (("products"."id") = '{{id}}' :: bigint) RETURNING "products".*) SELECT jsonb_build_object('product', "__sj_0"."json") as "__root" FROM (SELECT to_jsonb("__sr_0".*) AS "json"FROM (SELECT "products_0"."id" AS "id", "products_0"."name" AS "name", "products_0"."user_id" AS "user_id" FROM (SELECT "products"."id", "products"."name", "products"."user_id" FROM "products" LIMIT ('1') :: integer) AS "products_0") AS "__sr_0") AS "__sj_0"
--- PASS: TestCompileUpdate (0.02s)
--- PASS: TestCompileUpdate/singleUpdate (0.00s)
--- PASS: TestCompileUpdate/simpleUpdateWithPresets (0.00s)
--- PASS: TestCompileUpdate/nestedUpdateManyToMany (0.00s)
--- PASS: TestCompileUpdate/nestedUpdateOneToMany (0.00s)
--- PASS: TestCompileUpdate/nestedUpdateOneToOne (0.00s)
--- PASS: TestCompileUpdate/nestedUpdateOneToManyWithConnect (0.00s)
--- PASS: TestCompileUpdate/nestedUpdateOneToOneWithConnect (0.00s)
--- PASS: TestCompileUpdate/nestedUpdateOneToOneWithDisconnect (0.00s)
PASS
ok github.com/dosco/super-graph/core/internal/psql 0.306s
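
The statements above are templates, not finished SQL: placeholders such as '{{id}}' and '{{user_id}}' are rewritten into positional parameters when the statements are prepared (see core/prepare.go further down). A minimal sketch of executing one such statement once a placeholder has become $1; the DSN, driver, and the cut-down query are illustrative, not taken from the project:

	package main

	import (
		"database/sql"
		"fmt"
		"log"

		_ "github.com/lib/pq" // assumption: any Postgres driver works here
	)

	func main() {
		db, err := sql.Open("postgres", "postgres:///app?sslmode=disable") // hypothetical DSN
		if err != nil {
			log.Fatal(err)
		}
		// Shape of the compiled queries above, reduced to one root field,
		// with '{{id}}' already rewritten to the positional parameter $1.
		const q = `SELECT jsonb_build_object('product',
			(SELECT to_jsonb(p) FROM (SELECT id, name FROM products WHERE id = $1) p)) AS "__root"`
		var js []byte
		if err := db.QueryRow(q, 15).Scan(&js); err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(js)) // the whole GraphQL response arrives as a single JSON value
	}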


@@ -6,8 +6,8 @@ import (
 	"fmt"
 	"io"

-	"github.com/dosco/super-graph/qcode"
-	"github.com/dosco/super-graph/util"
+	"github.com/dosco/super-graph/core/internal/qcode"
+	"github.com/dosco/super-graph/core/internal/util"
 )

 func (c *compilerContext) renderUpdate(qc *qcode.QCode, w io.Writer,
@@ -91,25 +91,9 @@ func (c *compilerContext) renderUpdateStmt(w io.Writer, qc *qcode.QCode, item re
 	renderInsertUpdateColumns(w, qc, jt, ti, sk, true)
 	renderNestedUpdateRelColumns(w, item.kvitem, true)

-	io.WriteString(w, ` FROM "_sg_input" i, `)
+	io.WriteString(w, ` FROM "_sg_input" i`)
 	renderNestedUpdateRelTables(w, item.kvitem)
-	io.WriteString(w, `) `)
-
-	if item.array {
-		io.WriteString(w, `json_populate_recordset`)
-	} else {
-		io.WriteString(w, `json_populate_record`)
-	}
-
-	io.WriteString(w, `(NULL::`)
-	io.WriteString(w, ti.Name)
-
-	if len(item.path) == 0 {
-		io.WriteString(w, `, i.j) t)`)
-	} else {
-		io.WriteString(w, `, i.j->`)
-		joinPath(w, item.path)
-		io.WriteString(w, `) t) `)
-	}

 	if item.id != 0 {
 		// Render sql to set id values if child-to-parent
@@ -137,9 +121,11 @@ func (c *compilerContext) renderUpdateStmt(w io.Writer, qc *qcode.QCode, item re
 		io.WriteString(w, `)`)
 	} else {
-		io.WriteString(w, ` WHERE `)
-		if err := c.renderWhere(&qc.Selects[0], ti); err != nil {
-			return err
+		if qc.Selects[0].Where != nil {
+			io.WriteString(w, ` WHERE `)
+			if err := c.renderWhere(&qc.Selects[0], ti); err != nil {
+				return err
+			}
 		}
 	}
@@ -202,9 +188,9 @@ func renderNestedUpdateRelTables(w io.Writer, item kvitem) error {
 	// relationship is one-to-many
 	for _, v := range item.items {
 		if v._ctype > 0 && v.relCP.Type == RelOneToMany {
-			io.WriteString(w, `"_x_`)
+			io.WriteString(w, `, "_x_`)
 			io.WriteString(w, v.relCP.Left.Table)
-			io.WriteString(w, `", `)
+			io.WriteString(w, `"`)
 		}
 	}
@@ -222,6 +208,10 @@ func (c *compilerContext) renderDelete(qc *qcode.QCode, w io.Writer,
 	quoted(c.w, ti.Name)
 	io.WriteString(c.w, ` WHERE `)

+	if root.Where == nil {
+		return 0, errors.New("'where' clause missing in delete mutation")
+	}
+
 	if err := c.renderWhere(root, ti); err != nil {
 		return 0, err
 	}
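
The renderDelete hunk above is a behavioral fix, not just a refactor: a delete mutation with no where clause used to render unfiltered SQL and now fails fast. A hedged sketch of a test for the guard; `compile` stands in for whatever helper this suite actually uses:

	func TestDeleteWithoutWhere(t *testing.T) {
		gql := []byte(`mutation { product(delete: true) { id } }`) // note: no where argument
		if _, err := compile(gql, "user"); err == nil {            // hypothetical helper
			t.Fatal(`expected: 'where' clause missing in delete mutation`)
		}
	}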


@@ -1,4 +1,4 @@
-package psql
+package psql_test

 import (
 	"encoding/json"


@@ -0,0 +1,17 @@
goos: darwin
goarch: amd64
pkg: github.com/dosco/super-graph/core/internal/qcode
BenchmarkQCompile
BenchmarkQCompile-16 129614 8649 ns/op 3756 B/op 28 allocs/op
BenchmarkQCompileP
BenchmarkQCompileP-16 487488 2525 ns/op 3792 B/op 28 allocs/op
BenchmarkParse
BenchmarkParse-16 127582 8731 ns/op 3902 B/op 18 allocs/op
BenchmarkParseP
BenchmarkParseP-16 561373 2223 ns/op 3903 B/op 18 allocs/op
BenchmarkSchemaParse
BenchmarkSchemaParse-16 209142 5523 ns/op 3968 B/op 57 allocs/op
BenchmarkSchemaParseP
BenchmarkSchemaParseP-16 716437 1734 ns/op 3968 B/op 57 allocs/op
PASS
ok github.com/dosco/super-graph/core/internal/qcode 8.483s
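
Reading these columns: the first number is completed iterations, ns/op is time per operation, and B/op and allocs/op only appear because the benchmarks call b.ReportAllocs; the -16 suffix is GOMAXPROCS. A generic, self-contained benchmark of the same shape (not project code):

	package qcode

	import "testing"

	var sink []byte

	func BenchmarkAllocExample(b *testing.B) {
		b.ReportAllocs() // turns on the B/op and allocs/op columns
		for n := 0; n < b.N; n++ {
			sink = make([]byte, 64) // one heap allocation per iteration
		}
	}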


@@ -15,23 +15,27 @@ type QueryConfig struct {
 	Filters          []string
 	Columns          []string
 	DisableFunctions bool
+	Block            bool
 }

 type InsertConfig struct {
 	Filters []string
 	Columns []string
 	Presets map[string]string
+	Block   bool
 }

 type UpdateConfig struct {
 	Filters []string
 	Columns []string
 	Presets map[string]string
+	Block   bool
 }

 type DeleteConfig struct {
 	Filters []string
 	Columns []string
+	Block   bool
 }

 type TRConfig struct {
@@ -42,14 +46,14 @@
 }

 type trval struct {
-	query struct {
+	readOnly bool
+	query    struct {
 		limit string
 		fil   *Exp
 		filNU bool
 		cols  map[string]struct{}
-		disable struct {
-			funcs bool
-		}
+		disable struct{ funcs bool }
+		block   bool
 	}

 	insert struct {
@@ -58,6 +62,7 @@
 		cols   map[string]struct{}
 		psmap  map[string]string
 		pslist []string
+		block  bool
 	}

 	update struct {
@@ -66,12 +71,14 @@
 		cols   map[string]struct{}
 		psmap  map[string]string
 		pslist []string
+		block  bool
 	}

 	delete struct {
 		fil   *Exp
 		filNU bool
 		cols  map[string]struct{}
+		block bool
 	}
 }
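
With the Block flags, blocking becomes a per-operation, per-role decision instead of all-or-nothing. An illustrative in-package sketch of wiring them through AddRole; the TRConfig field names follow the structs above:

	func exampleBlockedRole() error {
		qcompile, err := NewCompiler(Config{})
		if err != nil {
			return err
		}
		return qcompile.AddRole("anon", "users", TRConfig{
			Query:  QueryConfig{Block: true},  // anon queries still compile; the table is skipped at render time
			Insert: InsertConfig{Block: true}, // anon inserts are rejected outright
		})
	}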


@@ -6,7 +6,7 @@ import (
 	"sync"
 	"unsafe"

-	"github.com/dosco/super-graph/util"
+	"github.com/dosco/super-graph/core/internal/util"
 )

 var (
@@ -16,8 +16,8 @@ var (
 type parserType int32

 const (
-	maxFields = 100
-	maxArgs   = 10
+	maxFields = 1200
+	maxArgs   = 25
 )

 const (
@@ -242,7 +242,8 @@ func (p *Parser) parseOp() (*Operation, error) {
 	if p.peek(itemArgsOpen) {
 		p.ignore()
-		op.Args, err = p.parseArgs(op.Args)
+
+		op.Args, err = p.parseOpParams(op.Args)
 		if err != nil {
 			return nil, err
 		}
@@ -338,6 +339,13 @@ func (p *Parser) parseFields(fields []Field) ([]Field, error) {
 		if p.peek(itemObjOpen) {
 			p.ignore()
 			st.Push(f.ID)
+		} else if p.peek(itemObjClose) {
+			if st.Len() == 0 {
+				break
+			} else {
+				continue
+			}
 		}
 	}
@@ -371,6 +379,22 @@ func (p *Parser) parseField(f *Field) error {
 	return nil
 }

+func (p *Parser) parseOpParams(args []Arg) ([]Arg, error) {
+	for {
+		if len(args) >= maxArgs {
+			return nil, fmt.Errorf("too many args (max %d)", maxArgs)
+		}
+
+		if p.peek(itemArgsClose) {
+			p.ignore()
+			break
+		}
+		p.next()
+	}
+
+	return args, nil
+}
+
 func (p *Parser) parseArgs(args []Arg) ([]Arg, error) {
 	var err error
@@ -383,6 +407,7 @@ func (p *Parser) parseArgs(args []Arg) ([]Arg, error) {
 			p.ignore()
 			break
 		}
+
 		if !p.peek(itemName) {
 			return nil, errors.New("expecting an argument name")
 		}
@@ -577,7 +602,7 @@ func (t parserType) String() string {
 // 			nodePool.Put(n)
 // 			freeList = append(freeList, Frees{n, loc})
 // 		} else {
-// 			fmt.Printf(">>>>(%d) RE_FREE %d %p %s %s\n", loc, freeList[j].loc, freeList[j].n, n.Name, n.Type)
+// 			fmt.Printf("(%d) RE_FREE %d %p %s %s\n", loc, freeList[j].loc, freeList[j].n, n.Name, n.Type)
 // 		}
 // 	}
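
The new parseOpParams tolerates an operation's variable-definition list by consuming tokens up to the closing parenthesis without building Arg values, so named operations with parameters no longer trip the parser, and the raised maxFields and maxArgs ceilings admit much larger documents. An in-package sketch (Parse is the same entry point the benchmark file below uses):

	func exampleOpParams() error {
		op, err := Parse([]byte(`query getProducts($price: Float) { products { id name } }`))
		if err != nil {
			return err
		}
		_ = len(op.Args) // 0: the ($price: Float) block was skipped, not recorded
		return nil
	}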


@@ -2,6 +2,7 @@ package qcode
 import (
 	"errors"
+	"github.com/chirino/graphql/schema"
 	"testing"
 )
@@ -130,7 +131,7 @@ updateThread {
 }

 var gql = []byte(`
-	products(
+	{products(
 	# returns only 30 items
 	limit: 30,
@@ -148,7 +149,7 @@ var gql = []byte(`
 	id
 	name
 	price
-	}`)
+	}}`)

 func BenchmarkQCompile(b *testing.B) {
 	qcompile, _ := NewCompiler(Config{})
@@ -181,3 +182,59 @@
 		}
 	})
 }
+
+func BenchmarkParse(b *testing.B) {
+	b.ResetTimer()
+	b.ReportAllocs()
+
+	for n := 0; n < b.N; n++ {
+		_, err := Parse(gql)
+		if err != nil {
+			b.Fatal(err)
+		}
+	}
+}
+
+func BenchmarkParseP(b *testing.B) {
+	b.ResetTimer()
+	b.ReportAllocs()
+
+	b.RunParallel(func(pb *testing.PB) {
+		for pb.Next() {
+			_, err := Parse(gql)
+			if err != nil {
+				b.Fatal(err)
+			}
+		}
+	})
+}
+
+func BenchmarkSchemaParse(b *testing.B) {
+	b.ResetTimer()
+	b.ReportAllocs()
+
+	for n := 0; n < b.N; n++ {
+		doc := schema.QueryDocument{}
+		err := doc.Parse(string(gql))
+		if err != nil {
+			b.Fatal(err)
+		}
+	}
+}
+
+func BenchmarkSchemaParseP(b *testing.B) {
+	b.ResetTimer()
+	b.ReportAllocs()
+
+	b.RunParallel(func(pb *testing.PB) {
+		for pb.Next() {
+			doc := schema.QueryDocument{}
+			err := doc.Parse(string(gql))
+			if err != nil {
+				b.Fatal(err)
+			}
+		}
+	})
+}


@@ -7,7 +7,7 @@ import (
 	"strings"
 	"sync"

-	"github.com/dosco/super-graph/util"
+	"github.com/dosco/super-graph/core/internal/util"
 	"github.com/gobuffalo/flect"
 )
@@ -170,6 +170,7 @@
 )

 type Compiler struct {
+	db bool // default block tables if not defined in anon role
 	tr map[string]map[string]*trval
 	bl map[string]struct{}
 }
@@ -218,6 +219,7 @@ func (com *Compiler) AddRole(role, table string, trc TRConfig) error {
 	}
 	trv.query.cols = listToMap(trc.Query.Columns)
 	trv.query.disable.funcs = trc.Query.DisableFunctions
+	trv.query.block = trc.Query.Block

 	// insert config
 	trv.insert.fil, trv.insert.filNU, err = compileFilter(trc.Insert.Filters)
@@ -227,6 +229,7 @@
 	trv.insert.cols = listToMap(trc.Insert.Columns)
 	trv.insert.psmap = parsePresets(trc.Insert.Presets)
 	trv.insert.pslist = mapToList(trv.insert.psmap)
+	trv.insert.block = trc.Insert.Block

 	// update config
 	trv.update.fil, trv.update.filNU, err = compileFilter(trc.Update.Filters)
@@ -236,6 +239,7 @@
 	trv.update.cols = listToMap(trc.Update.Columns)
 	trv.update.psmap = parsePresets(trc.Update.Presets)
 	trv.update.pslist = mapToList(trv.update.psmap)
+	trv.update.block = trc.Update.Block

 	// delete config
 	trv.delete.fil, trv.delete.filNU, err = compileFilter(trc.Delete.Filters)
@@ -243,6 +247,7 @@
 		return err
 	}
 	trv.delete.cols = listToMap(trc.Delete.Columns)
+	trv.delete.block = trc.Delete.Block

 	singular := flect.Singularize(table)
 	plural := flect.Pluralize(table)
@@ -328,37 +333,82 @@ func (com *Compiler) compileQuery(qc *QCode, op *Operation, role string) error {
 		}

 		trv := com.getRole(role, field.Name)

+		skipRender := false
+
+		if trv != nil {
+			switch action {
+			case QTQuery:
+				if trv.query.block {
+					skipRender = true
+				}
+
+			case QTInsert:
+				if trv.insert.block {
+					return fmt.Errorf("insert blocked: %s", field.Name)
+				}
+
+			case QTUpdate:
+				if trv.update.block {
+					return fmt.Errorf("update blocked: %s", field.Name)
+				}
+
+			case QTDelete:
+				if trv.delete.block {
+					return fmt.Errorf("delete blocked: %s", field.Name)
+				}
+			}
+
+		} else if role == "anon" {
+			skipRender = true
+		}
+
 		selects = append(selects, Select{
-			ID:        id,
-			ParentID:  parentID,
-			Name:      field.Name,
-			Children:  make([]int32, 0, 5),
-			Allowed:   trv.allowedColumns(action),
-			Functions: true,
+			ID:         id,
+			ParentID:   parentID,
+			Name:       field.Name,
+			SkipRender: skipRender,
 		})
 		s := &selects[(len(selects) - 1)]

-		switch action {
-		case QTQuery:
-			s.Functions = !trv.query.disable.funcs
-			s.Paging.Limit = trv.query.limit
-		case QTInsert:
-			s.PresetMap = trv.insert.psmap
-			s.PresetList = trv.insert.pslist
-		case QTUpdate:
-			s.PresetMap = trv.update.psmap
-			s.PresetList = trv.update.pslist
-		}
-
 		if len(field.Alias) != 0 {
 			s.FieldName = field.Alias
 		} else {
 			s.FieldName = s.Name
 		}

+		if s.ParentID == -1 {
+			qc.Roots = append(qc.Roots, s.ID)
+		} else {
+			p := &selects[s.ParentID]
+			p.Children = append(p.Children, s.ID)
+		}
+
+		if skipRender {
+			id++
+			continue
+		}
+
+		s.Children = make([]int32, 0, 5)
+		s.Functions = true
+
+		if trv != nil {
+			s.Allowed = trv.allowedColumns(action)
+
+			switch action {
+			case QTQuery:
+				s.Functions = !trv.query.disable.funcs
+				s.Paging.Limit = trv.query.limit
+
+			case QTInsert:
+				s.PresetMap = trv.insert.psmap
+				s.PresetList = trv.insert.pslist
+
+			case QTUpdate:
+				s.PresetMap = trv.update.psmap
+				s.PresetList = trv.update.pslist
+			}
+		}
+
 		err := com.compileArgs(qc, s, field.Args, role)
 		if err != nil {
 			return err
@@ -367,13 +417,6 @@ func (com *Compiler) compileQuery(qc *QCode, op *Operation, role string) error {
 		// Order is important AddFilters must come after compileArgs
 		com.AddFilters(qc, s, role)

-		if s.ParentID == -1 {
-			qc.Roots = append(qc.Roots, s.ID)
-		} else {
-			p := &selects[s.ParentID]
-			p.Children = append(p.Children, s.ID)
-		}
-
 		s.Cols = make([]Column, 0, len(field.Children))
 		action = QTQuery
@@ -413,14 +456,10 @@
 func (com *Compiler) AddFilters(qc *QCode, sel *Select, role string) {
 	var fil *Exp
-	var nu bool
+	var nu bool // need user_id (or not) in this filter

 	if trv, ok := com.tr[role][sel.Name]; ok {
 		fil, nu = trv.filter(qc.Type)
-	} else if role == "anon" {
-		// Tables not defined under the anon role will not be rendered
-		sel.SkipRender = true
 	}

 	if fil == nil {
@@ -811,14 +850,17 @@ func (com *Compiler) compileArgAfterBefore(sel *Select, arg *Arg, pt PagingType)
 	return nil, false
 }

-var zeroTrv = &trval{}
+// var zeroTrv = &trval{}

 func (com *Compiler) getRole(role, field string) *trval {
 	if trv, ok := com.tr[role][field]; ok {
 		return trv
-	} else {
-		return zeroTrv
 	}
+
+	return nil
+	// } else {
+	// 	return zeroTrv
+	// }
 }

 func AddFilter(sel *Select, fil *Exp) {
@@ -945,6 +987,9 @@ func newExp(st *util.Stack, node *Node, usePool bool) (*Exp, error) {
 		ex.Op = OpDistinct
 		ex.Val = node.Val
 	default:
+		if len(node.Children) == 0 {
+			return nil, fmt.Errorf("[Where] invalid operation: %s", name)
+		}
 		pushChildren(st, node.exp, node)
 		return nil, nil // skip node
 	}
@@ -964,8 +1009,9 @@ func newExp(st *util.Stack, node *Node, usePool bool) (*Exp, error) {
 		case NodeVar:
 			ex.Type = ValVar
 		default:
-			return nil, fmt.Errorf("[Where] valid values include string, int, float, boolean and list: %s", node.Type)
+			return nil, fmt.Errorf("[Where] invalid values for: %s", name)
 		}
+
 		setWhereColName(ex, node)
 	}
@@ -1014,6 +1060,7 @@ func setWhereColName(ex *Exp, node *Node) {
 		ex.Col = list[listlen-1]
 		ex.NestedCols = list[:listlen]
+	}
 }

 func setOrderByColName(ob *OrderBy, node *Node) {
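
The net effect of the compileQuery reshuffle: a select on a blocked (or, for anon, undefined) table is still appended so parent/child IDs stay consistent, but it is flagged SkipRender and the compiler moves on before touching args, filters, or presets, while blocked mutations abort with an error. An illustrative sketch; Compile's exact signature is assumed:

	func exampleBlockedInsert(qcompile *Compiler) error {
		_, err := qcompile.Compile([]byte(`mutation { users(insert: $data) { id } }`), "anon")
		// With Insert.Block set for anon this returns "insert blocked: users";
		// a blocked query would instead compile with SkipRender set on its select.
		return err
	}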


@@ -0,0 +1,47 @@
package qcode
func GetQType(gql string) QType {
ic := false
for i := range gql {
b := gql[i]
switch {
case b == '#':
ic = true
case b == '\n':
ic = false
case !ic && b == '{':
return QTQuery
case !ic && al(b):
switch b {
case 'm', 'M':
return QTMutation
case 'q', 'Q':
return QTQuery
}
}
}
return -1
}
func al(b byte) bool {
return (b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z') || (b >= '0' && b <= '9')
}
func (qt QType) String() string {
switch qt {
case QTQuery:
return "query"
case QTMutation:
return "mutation"
case QTInsert:
return "insert"
case QTUpdate:
return "update"
case QTDelete:
return "delete"
case QTUpsert:
return "upsert"
}
return ""
}
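
GetQType is a single-pass classifier, not a parser: it scans bytes until the first significant character, treating everything from '#' to the next newline as a comment. An in-package sketch of the boundary cases; note the last one, where the sample has no newline, so the '{' never escapes the comment:

	func exampleGetQType() {
		_ = GetQType(` query { users { id } }`)    // QTQuery
		_ = GetQType(` mutation { users { id } }`) // QTMutation
		_ = GetQType(` {`)                         // QTQuery: a bare '{' starts a default query
		_ = GetQType(`# query is good query {`)    // -1: the comment never ends
	}

The table-driven test in the next file exercises exactly these cases.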


@@ -0,0 +1,50 @@
package qcode
import "testing"
func TestGetQType(t *testing.T) {
type args struct {
gql string
}
type ts struct {
name string
args args
want QType
}
tests := []ts{
ts{
name: "query",
args: args{gql: " query {"},
want: QTQuery,
},
ts{
name: "mutation",
args: args{gql: " mutation {"},
want: QTMutation,
},
ts{
name: "default query",
args: args{gql: " {"},
want: QTQuery,
},
ts{
name: "default query with comment",
args: args{gql: `# query is good
{`},
want: QTQuery,
},
ts{
name: "failed query with comment",
args: args{gql: `# query is good query {`},
want: -1,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := GetQType(tt.args.gql); got != tt.want {
t.Errorf("GetQType() = %v, want %v", got, tt.want)
}
})
}
}

core/introspec.go (new file, 490 lines)

@@ -0,0 +1,490 @@
package core
import (
"strings"
"github.com/chirino/graphql"
"github.com/chirino/graphql/resolvers"
"github.com/chirino/graphql/schema"
"github.com/dosco/super-graph/core/internal/psql"
)
var typeMap map[string]string = map[string]string{
"smallint": "Int",
"integer": "Int",
"bigint": "Int",
"smallserial": "Int",
"serial": "Int",
"bigserial": "Int",
"decimal": "Float",
"numeric": "Float",
"real": "Float",
"double precision": "Float",
"money": "Float",
"boolean": "Boolean",
}
func (sg *SuperGraph) initGraphQLEgine() error {
engine := graphql.New()
engineSchema := engine.Schema
dbSchema := sg.schema
if err := engineSchema.Parse(`enum OrderDirection { asc desc }`); err != nil {
return err
}
gqltype := func(col psql.DBColumn) schema.Type {
typeName := typeMap[strings.ToLower(col.Type)]
if typeName == "" {
typeName = "String"
}
var t schema.Type = &schema.TypeName{Name: typeName}
if col.NotNull {
t = &schema.NonNull{OfType: t}
}
return t
}
query := &schema.Object{
Name: "Query",
Fields: schema.FieldList{},
}
mutation := &schema.Object{
Name: "Mutation",
Fields: schema.FieldList{},
}
engineSchema.Types[query.Name] = query
engineSchema.Types[mutation.Name] = mutation
engineSchema.EntryPoints[schema.Query] = query
engineSchema.EntryPoints[schema.Mutation] = mutation
//validGraphQLIdentifierRegex := regexp.MustCompile(`^[A-Za-z_][A-Za-z_0-9]*$`)
scalarExpressionTypesNeeded := map[string]bool{}
tableNames := dbSchema.GetTableNames()
funcs := dbSchema.GetFunctions()
for _, table := range tableNames {
ti, err := dbSchema.GetTable(table)
if err != nil {
return err
}
if !ti.IsSingular {
continue
}
singularName := ti.Singular
// if !validGraphQLIdentifierRegex.MatchString(singularName) {
// return errors.New("table name is not a valid GraphQL identifier: " + singularName)
// }
pluralName := ti.Plural
// if !validGraphQLIdentifierRegex.MatchString(pluralName) {
// return errors.New("table name is not a valid GraphQL identifier: " + pluralName)
// }
outputType := &schema.Object{
Name: singularName + "Output",
Fields: schema.FieldList{},
}
engineSchema.Types[outputType.Name] = outputType
inputType := &schema.InputObject{
Name: singularName + "Input",
Fields: schema.InputValueList{},
}
engineSchema.Types[inputType.Name] = inputType
orderByType := &schema.InputObject{
Name: singularName + "OrderBy",
Fields: schema.InputValueList{},
}
engineSchema.Types[orderByType.Name] = orderByType
expressionTypeName := singularName + "Expression"
expressionType := &schema.InputObject{
Name: expressionTypeName,
Fields: schema.InputValueList{
&schema.InputValue{
Name: "and",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: expressionTypeName}},
},
&schema.InputValue{
Name: "or",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: expressionTypeName}},
},
&schema.InputValue{
Name: "not",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: expressionTypeName}},
},
},
}
engineSchema.Types[expressionType.Name] = expressionType
for _, col := range ti.Columns {
colName := col.Name
// if !validGraphQLIdentifierRegex.MatchString(colName) {
// return errors.New("column name is not a valid GraphQL identifier: " + colName)
// }
colType := gqltype(col)
nullableColType := ""
if x, ok := colType.(*schema.NonNull); ok {
nullableColType = x.OfType.(*schema.TypeName).Name
} else {
nullableColType = colType.(*schema.TypeName).Name
}
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: colName,
Type: colType,
})
for _, f := range funcs {
if col.Type != f.Params[0].Type {
continue
}
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: f.Name + "_" + colName,
Type: colType,
})
}
// If it's a numeric type...
if nullableColType == "Float" || nullableColType == "Int" {
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "avg_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "count_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "max_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "min_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "stddev_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "stddev_pop_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "stddev_samp_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "variance_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "var_pop_" + colName,
Type: colType,
})
outputType.Fields = append(outputType.Fields, &schema.Field{
Name: "var_samp_" + colName,
Type: colType,
})
}
inputType.Fields = append(inputType.Fields, &schema.InputValue{
Name: colName,
Type: colType,
})
orderByType.Fields = append(orderByType.Fields, &schema.InputValue{
Name: colName,
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "OrderDirection"}},
})
scalarExpressionTypesNeeded[nullableColType] = true
expressionType.Fields = append(expressionType.Fields, &schema.InputValue{
Name: colName,
Type: &schema.NonNull{OfType: &schema.TypeName{Name: nullableColType + "Expression"}},
})
}
outputTypeName := &schema.TypeName{Name: outputType.Name}
inputTypeName := &schema.TypeName{Name: inputType.Name}
pluralOutputTypeName := &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: outputType.Name}}}}
pluralInputTypeName := &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: inputType.Name}}}}
args := schema.InputValueList{
&schema.InputValue{
Desc: schema.Description{Text: "To sort or ordering results just use the order_by argument. This can be combined with where, search, etc to build complex queries to fit you needs."},
Name: "order_by",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: orderByType.Name}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "where",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: expressionType.Name}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "limit",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Int"}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "offset",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Int"}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "first",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Int"}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "last",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Int"}},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "before",
Type: &schema.TypeName{Name: "String"},
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "after",
Type: &schema.TypeName{Name: "String"},
},
}
if ti.PrimaryCol != nil {
t := gqltype(*ti.PrimaryCol)
if _, ok := t.(*schema.NonNull); !ok {
t = &schema.NonNull{OfType: t}
}
args = append(args, &schema.InputValue{
Desc: schema.Description{Text: "Finds the record by the primary key"},
Name: "id",
Type: t,
})
}
if ti.TSVCol != nil {
args = append(args, &schema.InputValue{
Desc: schema.Description{Text: "Performs full text search using a TSV index"},
Name: "search",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
})
}
query.Fields = append(query.Fields, &schema.Field{
Desc: schema.Description{Text: ""},
Name: singularName,
Type: outputTypeName,
Args: args,
})
query.Fields = append(query.Fields, &schema.Field{
Desc: schema.Description{Text: ""},
Name: pluralName,
Type: pluralOutputTypeName,
Args: args,
})
mutationArgs := append(args, schema.InputValueList{
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "insert",
Type: inputTypeName,
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "update",
Type: inputTypeName,
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "upsert",
Type: inputTypeName,
},
}...)
mutation.Fields = append(mutation.Fields, &schema.Field{
Name: singularName,
Args: mutationArgs,
Type: outputType,
})
mutation.Fields = append(mutation.Fields, &schema.Field{
Name: pluralName,
Args: append(mutationArgs, schema.InputValueList{
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "inserts",
Type: pluralInputTypeName,
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "updates",
Type: pluralInputTypeName,
},
&schema.InputValue{
Desc: schema.Description{Text: ""},
Name: "upserts",
Type: pluralInputTypeName,
},
}...),
Type: outputType,
})
}
for typeName := range scalarExpressionTypesNeeded {
expressionType := &schema.InputObject{
Name: typeName + "Expression",
Fields: schema.InputValueList{
&schema.InputValue{
Name: "eq",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "equals",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "neq",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "not_equals",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "gt",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "greater_than",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "lt",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "lesser_than",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "gte",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "greater_or_equals",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "lte",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "lesser_or_equals",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "in",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "nin",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "not_in",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "like",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "nlike",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "not_like",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "ilike",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "nilike",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "not_ilike",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "similar",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "nsimilar",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "not_similar",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "has_key",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}},
},
&schema.InputValue{
Name: "has_key_any",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "has_key_all",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "contains",
Type: &schema.NonNull{OfType: &schema.List{OfType: &schema.NonNull{OfType: &schema.TypeName{Name: typeName}}}},
},
&schema.InputValue{
Name: "contained_in",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "String"}},
},
&schema.InputValue{
Name: "is_null",
Type: &schema.NonNull{OfType: &schema.TypeName{Name: "Boolean"}},
},
},
}
engineSchema.Types[expressionType.Name] = expressionType
}
if err := engineSchema.ResolveTypes(); err != nil {
return err
}
engine.Resolver = resolvers.Func(func(request *resolvers.ResolveRequest, next resolvers.Resolution) resolvers.Resolution {
resolver := resolvers.MetadataResolver.Resolve(request, next)
if resolver != nil {
return resolver
}
resolver = resolvers.MethodResolver.Resolve(request, next) // needed by the MetadataResolver
if resolver != nil {
return resolver
}
return nil
})
sg.ge = engine
return nil
}
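
One detail worth calling out from the schema construction above: GraphQL wrapper types are expressed by nesting the chirino/graphql schema structs, so the plural result type reads from the outside in as non-null list of non-null named type. For reference, a standalone restatement of the same composition:

	// "[productOutput!]!" (the shape of a plural query result) is built as:
	var pluralProducts schema.Type = &schema.NonNull{
		OfType: &schema.List{
			OfType: &schema.NonNull{OfType: &schema.TypeName{Name: "productOutput"}},
		},
	}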

core/prepare.go (new file, 251 lines)

@@ -0,0 +1,251 @@
package core
import (
"bytes"
"context"
"crypto/sha256"
"database/sql"
"encoding/hex"
"fmt"
"io"
"strings"
"github.com/dosco/super-graph/core/internal/allow"
"github.com/dosco/super-graph/core/internal/qcode"
"github.com/valyala/fasttemplate"
)
type preparedItem struct {
sd *sql.Stmt
args [][]byte
st stmt
roleArg bool
}
func (sg *SuperGraph) initPrepared() error {
ct := context.Background()
if sg.allowList.IsPersist() {
return nil
}
sg.prepared = make(map[string]*preparedItem)
tx, err := sg.db.BeginTx(ct, nil)
if err != nil {
return err
}
defer tx.Rollback() //nolint: errcheck
if err = sg.prepareRoleStmt(tx); err != nil {
return fmt.Errorf("prepareRoleStmt: %w", err)
}
if err := tx.Commit(); err != nil {
return err
}
success := 0
list, err := sg.allowList.Load()
if err != nil {
return err
}
for _, v := range list {
if len(v.Query) == 0 {
continue
}
err := sg.prepareStmt(v)
if err != nil {
sg.log.Printf("WRN %s: %v", v.Name, err)
} else {
success++
}
}
sg.log.Printf("INF allow list: prepared %d / %d queries", success, len(list))
return nil
}
func (sg *SuperGraph) prepareStmt(item allow.Item) error {
query := item.Query
qb := []byte(query)
vars := item.Vars
qt := qcode.GetQType(query)
ct := context.Background()
switch qt {
case qcode.QTQuery:
var stmts1 []stmt
var err error
if sg.abacEnabled {
stmts1, err = sg.buildMultiStmt(qb, vars)
} else {
stmts1, err = sg.buildRoleStmt(qb, vars, "user")
}
if err != nil {
return err
}
//logger.Debug().Msgf("Prepared statement 'query %s' (user)", item.Name)
err = sg.prepare(ct, stmts1, stmtHash(item.Name, "user"))
if err != nil {
return err
}
if sg.anonExists {
// logger.Debug().Msgf("Prepared statement 'query %s' (anon)", item.Name)
stmts2, err := sg.buildRoleStmt(qb, vars, "anon")
if err != nil {
return err
}
err = sg.prepare(ct, stmts2, stmtHash(item.Name, "anon"))
if err != nil {
return err
}
}
case qcode.QTMutation:
for _, role := range sg.conf.Roles {
// logger.Debug().Msgf("Prepared statement 'mutation %s' (%s)", item.Name, role.Name)
stmts, err := sg.buildRoleStmt(qb, vars, role.Name)
if err != nil {
return err
}
err = sg.prepare(ct, stmts, stmtHash(item.Name, role.Name))
if err != nil {
return err
}
}
}
return nil
}
func (sg *SuperGraph) prepare(ct context.Context, st []stmt, key string) error {
finalSQL, am := processTemplate(st[0].sql)
sd, err := sg.db.Prepare(finalSQL)
if err != nil {
return fmt.Errorf("prepare failed: %v: %s", err, finalSQL)
}
sg.prepared[key] = &preparedItem{
sd: sd,
args: am,
st: st[0],
roleArg: len(st) > 1,
}
return nil
}
// nolint: errcheck
func (sg *SuperGraph) prepareRoleStmt(tx *sql.Tx) error {
var err error
if !sg.abacEnabled {
return nil
}
w := &bytes.Buffer{}
io.WriteString(w, `SELECT (CASE WHEN EXISTS (`)
io.WriteString(w, sg.conf.RolesQuery)
io.WriteString(w, `) THEN `)
io.WriteString(w, `(SELECT (CASE`)
for _, role := range sg.conf.Roles {
if len(role.Match) == 0 {
continue
}
io.WriteString(w, ` WHEN `)
io.WriteString(w, role.Match)
io.WriteString(w, ` THEN '`)
io.WriteString(w, role.Name)
io.WriteString(w, `'`)
}
io.WriteString(w, ` ELSE {{role}} END) FROM (`)
io.WriteString(w, sg.conf.RolesQuery)
io.WriteString(w, `) AS "_sg_auth_roles_query" LIMIT 1) `)
io.WriteString(w, `ELSE 'anon' END) FROM (VALUES (1)) AS "_sg_auth_filler" LIMIT 1; `)
roleSQL, _ := processTemplate(w.String())
sg.getRole, err = tx.Prepare(roleSQL)
if err != nil {
return err
}
return nil
}
func processTemplate(tmpl string) (string, [][]byte) {
	st := struct {
		vmap map[string]int
		am   [][]byte
		i    int
	}{
		vmap: make(map[string]int),
		am:   make([][]byte, 0, 5),
		i:    0,
	}

	execFunc := func(w io.Writer, tag string) (int, error) {
		if n, ok := st.vmap[tag]; ok {
			return w.Write([]byte(fmt.Sprintf("$%d", n)))
		}
		st.am = append(st.am, []byte(tag))
		st.i++
		st.vmap[tag] = st.i
		return w.Write([]byte(fmt.Sprintf("$%d", st.i)))
	}

	t1 := fasttemplate.New(tmpl, `'{{`, `}}'`)
	ts1 := t1.ExecuteFuncString(execFunc)

	t2 := fasttemplate.New(ts1, `{{`, `}}`)
	ts2 := t2.ExecuteFuncString(execFunc)

	return ts2, st.am
}
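processTemplate runs two fasttemplate passes, first over quoted `'{{tag}}'` tags and then over bare `{{tag}}` tags, assigning each distinct tag the next `$n` placeholder and recording the tag names in order. A small illustrative call (the query text is made up):

```go
sql, args := processTemplate(
	`SELECT * FROM posts WHERE slug = '{{slug}}' AND user_id = {{user_id}} OR author_id = {{user_id}}`)

// sql  == `SELECT * FROM posts WHERE slug = $1 AND user_id = $2 OR author_id = $2`
// args == [][]byte{[]byte("slug"), []byte("user_id")} — one entry per distinct
// tag, in placeholder order; the repeated user_id tag reuses $2.
```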
func (sg *SuperGraph) initAllowList() error {
	var ac allow.Config
	var err error

	if len(sg.conf.AllowListFile) == 0 {
		sg.conf.UseAllowList = false
		sg.log.Printf("WRN allow list disabled: no file specified")
	}

	// When the allow list is not enabled it is still created
	// and new queries are saved to it.
	if !sg.conf.UseAllowList {
		ac = allow.Config{CreateIfNotExists: true, Persist: true}
	}

	sg.allowList, err = allow.New(sg.conf.AllowListFile, ac)
	if err != nil {
		return fmt.Errorf("failed to initialize allow list: %w", err)
	}

	return nil
}
// nolint: errcheck
func stmtHash(name string, role string) string {
	h := sha256.New()
	io.WriteString(h, strings.ToLower(name))
	io.WriteString(h, role)
	return hex.EncodeToString(h.Sum(nil))
}

View File

@@ -1,4 +1,4 @@
-package serv
+package core

 import (
 	"bytes"
@@ -8,24 +8,20 @@ import (
 	"sync"

 	"github.com/cespare/xxhash/v2"
+	"github.com/dosco/super-graph/core/internal/qcode"
 	"github.com/dosco/super-graph/jsn"
-	"github.com/dosco/super-graph/qcode"
 )

-func execRemoteJoin(st *stmt, data []byte, hdr http.Header) ([]byte, error) {
+func (sg *SuperGraph) execRemoteJoin(st *stmt, data []byte, hdr http.Header) ([]byte, error) {
 	var err error

-	if len(data) == 0 || st.skipped == 0 {
-		return data, nil
-	}
-
 	sel := st.qc.Selects
 	h := xxhash.New()

 	// fetch the field name used within the db response json
 	// that are used to mark insertion points and the mapping between
 	// those field names and their select objects
-	fids, sfmap := parentFieldIds(h, sel, st.skipped)
+	fids, sfmap := sg.parentFieldIds(h, sel, st.skipped)

 	// fetch the field values of the marked insertion points
 	// these values contain the id to be used with fetching remote data
@@ -34,10 +30,10 @@ func execRemoteJoin(st *stmt, data []byte, hdr http.Header) ([]byte, error) {
 	switch {
 	case len(from) == 1:
-		to, err = resolveRemote(hdr, h, from[0], sel, sfmap)
+		to, err = sg.resolveRemote(hdr, h, from[0], sel, sfmap)

 	case len(from) > 1:
-		to, err = resolveRemotes(hdr, h, from, sel, sfmap)
+		to, err = sg.resolveRemotes(hdr, h, from, sel, sfmap)

 	default:
 		return nil, errors.New("something wrong no remote ids found in db response")
@@ -57,7 +53,7 @@ func execRemoteJoin(st *stmt, data []byte, hdr http.Header) ([]byte, error) {
 	return ob.Bytes(), nil
 }

-func resolveRemote(
+func (sg *SuperGraph) resolveRemote(
 	hdr http.Header,
 	h *xxhash.Digest,
 	field jsn.Field,
@@ -82,7 +78,7 @@ func resolveRemote(
 	// to find the resolver to use for this relationship
 	k2 := mkkey(h, s.Name, p.Name)

-	r, ok := rmap[k2]
+	r, ok := sg.rmap[k2]
 	if !ok {
 		return nil, nil
 	}
@@ -119,7 +115,7 @@ func resolveRemote(
 	return to, nil
 }

-func resolveRemotes(
+func (sg *SuperGraph) resolveRemotes(
 	hdr http.Header,
 	h *xxhash.Digest,
 	from []jsn.Field,
@@ -150,7 +146,7 @@ func resolveRemotes(
 	// to find the resolver to use for this relationship
 	k2 := mkkey(h, s.Name, p.Name)

-	r, ok := rmap[k2]
+	r, ok := sg.rmap[k2]
 	if !ok {
 		return nil, nil
 	}
@@ -195,3 +191,59 @@ func resolveRemotes(
 	return to, cerr
 }
+
+func (sg *SuperGraph) parentFieldIds(h *xxhash.Digest, sel []qcode.Select, skipped uint32) (
+	[][]byte,
+	map[uint64]*qcode.Select) {
+
+	c := 0
+	for i := range sel {
+		s := &sel[i]
+		if isSkipped(skipped, uint32(s.ID)) {
+			c++
+		}
+	}
+
+	// list of keys (and their related values) to extract from
+	// the db json response
+	fm := make([][]byte, c)
+
+	// mapping between the above extracted keys and their Select
+	// objects
+	sm := make(map[uint64]*qcode.Select, c)
+	n := 0
+
+	for i := range sel {
+		s := &sel[i]
+		if !isSkipped(skipped, uint32(s.ID)) {
+			continue
+		}
+
+		p := sel[s.ParentID]
+		k := mkkey(h, s.Name, p.Name)
+
+		if r, ok := sg.rmap[k]; ok {
+			fm[n] = r.IDField
+			n++
+
+			k := xxhash.Sum64(r.IDField)
+			sm[k] = s
+		}
+	}
+
+	return fm, sm
+}
+
+func isSkipped(n uint32, pos uint32) bool {
+	return ((n & (1 << pos)) != 0)
+}
+
+func colsToList(cols []qcode.Column) []string {
+	var f []string
+
+	for i := range cols {
+		f = append(f, cols[i].Name)
+	}
+
+	return f
+}
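In the added code above, `skipped` acts as a bitmask with one bit per select ID, which is exactly what `isSkipped` tests. A quick illustration with invented values:

```go
skipped := uint32(0b0110) // selects with IDs 1 and 2 were skipped by the SQL layer

isSkipped(skipped, 0) // false — bit 0 is clear
isSkipped(skipped, 1) // true  — bit 1 is set
isSkipped(skipped, 2) // true  — bit 2 is set
```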

View File

@@ -1,19 +1,14 @@
-package serv
+package core

 import (
 	"fmt"
 	"io/ioutil"
 	"net/http"
-	"net/http/httputil"
 	"strings"

 	"github.com/cespare/xxhash/v2"
+	"github.com/dosco/super-graph/core/internal/psql"
 	"github.com/dosco/super-graph/jsn"
-	"github.com/dosco/super-graph/psql"
-)
-
-var (
-	rmap map[uint64]*resolvFn
 )

 type resolvFn struct {
@@ -22,23 +17,25 @@ type resolvFn struct {
 	Fn func(h http.Header, id []byte) ([]byte, error)
 }

-func initResolvers() {
+func (sg *SuperGraph) initResolvers() error {
 	var err error
-	rmap = make(map[uint64]*resolvFn)
+	sg.rmap = make(map[uint64]*resolvFn)

-	for _, t := range conf.Tables {
-		err = initRemotes(t)
+	for _, t := range sg.conf.Tables {
+		err = sg.initRemotes(t)
 		if err != nil {
 			break
 		}
 	}

 	if err != nil {
-		errlog.Fatal().Err(err).Msg("failed to initialize resolvers")
+		return fmt.Errorf("failed to initialize resolvers: %v", err)
 	}
+
+	return nil
 }

-func initRemotes(t configTable) error {
+func (sg *SuperGraph) initRemotes(t Table) error {
 	h := xxhash.New()

 	for _, r := range t.Remotes {
@@ -49,7 +46,7 @@ func initRemotes(t configTable) error {
 		// if no table column specified in the config then
 		// use the primary key of the table as the id
 		if len(idcol) == 0 {
-			pcol, err := pcompile.IDColumn(t.Name)
+			pcol, err := sg.pc.IDColumn(t.Name)
 			if err != nil {
 				return err
 			}
@@ -64,7 +61,7 @@ func initRemotes(t configTable) error {
 		val.Left.Col = idcol
 		val.Right.Col = idk

-		err := pcompile.AddRelationship(strings.ToLower(r.Name), t.Name, val)
+		err := sg.pc.AddRelationship(sanitize(r.Name), t.Name, val)
 		if err != nil {
 			return err
 		}
@@ -85,16 +82,16 @@ func initRemotes(t configTable) error {
 		}

 		// index resolver obj by parent and child names
-		rmap[mkkey(h, r.Name, t.Name)] = rf
+		sg.rmap[mkkey(h, r.Name, t.Name)] = rf

 		// index resolver obj by IDField
-		rmap[xxhash.Sum64(rf.IDField)] = rf
+		sg.rmap[xxhash.Sum64(rf.IDField)] = rf
 	}

 	return nil
 }

-func buildFn(r configRemote) func(http.Header, []byte) ([]byte, error) {
+func buildFn(r Remote) func(http.Header, []byte) ([]byte, error) {
 	reqURL := strings.Replace(r.URL, "$id", "%s", 1)
 	client := &http.Client{}
@@ -117,29 +114,26 @@ func buildFn(r configRemote) func(http.Header, []byte) ([]byte, error) {
 		req.Header.Set(v, hdr.Get(v))
 	}

-	logger.Debug().Str("uri", uri).Msg("Remote Join")
-
 	res, err := client.Do(req)
 	if err != nil {
-		errlog.Error().Err(err).Msgf("Failed to connect to: %s", uri)
-		return nil, err
+		return nil, fmt.Errorf("failed to connect to '%s': %v", uri, err)
 	}
 	defer res.Body.Close()

-	if r.Debug {
-		reqDump, err := httputil.DumpRequestOut(req, true)
-		if err != nil {
-			return nil, err
-		}
-
-		resDump, err := httputil.DumpResponse(res, true)
-		if err != nil {
-			return nil, err
-		}
-		logger.Debug().Msgf("Remote Request Debug:\n%s\n%s",
-			reqDump, resDump)
-	}
+	// if r.Debug {
+	// 	reqDump, err := httputil.DumpRequestOut(req, true)
+	// 	if err != nil {
+	// 		return nil, err
+	// 	}

+	// 	resDump, err := httputil.DumpResponse(res, true)
+	// 	if err != nil {
+	// 		return nil, err
+	// 	}
+	// 	logger.Debug().Msgf("Remote Request Debug:\n%s\n%s",
+	// 		reqDump, resDump)
+	// }

 	if res.StatusCode != 200 {
 		return nil,

core/utils.go (new file, 15 lines)
View File

@@ -0,0 +1,15 @@
package core

import (
	"github.com/cespare/xxhash/v2"
)

// nolint: errcheck
func mkkey(h *xxhash.Digest, k1 string, k2 string) uint64 {
	h.WriteString(k1)
	h.WriteString(k2)
	v := h.Sum64()
	h.Reset()

	return v
}
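mkkey hashes the two names back-to-back and resets the digest so it can be reused across calls. This is how the resolver map is keyed elsewhere in this changeset; a short sketch, with example names:

```go
h := xxhash.New()

// key the resolver map by the (remote name, table name) pair,
// exactly as initRemotes and resolveRemote do above
k := mkkey(h, "payments", "users")
r, ok := sg.rmap[k]
```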

View File

@@ -1,267 +0,0 @@
<template>
<div>
<main aria-labelledby="main-title" >
<Navbar />
<div class="container mx-auto">
<div class="flex flex-col md:flex-row justify-between px-10 md:px-20">
<div class="bg-bottom bg-no-repeat bg-cover">
<div class="text-center md:text-left pt-24">
<h1 v-if="data.heroText !== null" class="text-5xl font-bold text-black pb-0 uppercase">
<img src="/super-graph.png" width="250" />
</h1>
<p class="text-4xl text-gray-800 leading-tight mt-1">
Build web products faster. Secure, high-performance GraphQL
</p>
<NavLink
class="inline-block px-4 py-3 my-8 bg-blue-600 text-blue-100 font-bold rounded"
:item="actionLink"
/>
<a
class="px-4 py-3 my-8 border-2 border-gray-500 text-gray-600 font-bold rounded"
href="https://github.com/dosco/super-graph"
target="_blank"
>Github</a>
</div>
</div>
<div class="py-10 md:p-20">
<img src="/hologram.svg" class="h-64">
</div>
</div>
</div>
<div>
<div
class="flex flex-wrap mx-2 md:mx-20"
v-if="data.features && data.features.length"
>
<div
class="w-2/4 md:w-1/3 shadow"
v-for="(feature, index) in data.features"
:key="index"
>
<div class="p-8">
<h2 class="md:text-xl text-blue-800 font-medium border-0 mb-1">{{ feature.title }}</h2>
<p class="md:text-xl text-gray-700 leading-snug">{{ feature.details }}</p>
</div>
</div>
</div>
</div>
<div class="bg-gray-100 mt-10">
<div class="container mx-auto px-10 md:px-0 py-32">
<div class="pb-8 hidden md:block ">
<img src="arch-basic.svg">
</div>
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
What is {{ data.heroText }}?
</h1>
<div class="text-2xl md:text-3xl">
Super Graph can automatically learn a Postgres database and instantly serve it as a fast and secure GraphQL API. It comes with tools to create a new app and manage its database. You get it all: a highly productive developer experience and a highly scalable app backend. It's designed to work well on serverless platforms from Google, AWS, Microsoft, etc. The goal is to save you a ton of time and money so you can focus on your app's core value.
</div>
</div>
</div>
<div class="flex flex-wrap">
<div class="md:w-2/4">
<img src="/graphql.png">
</div>
<div class="md:w-2/4">
<img src="/json.png">
</div>
</div>
<div class="mt-10 py-10 md:py-20">
<div class="container mx-auto px-10 md:px-0">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
How to use {{ data.heroText }}?
</h1>
<div class="text-2xl md:text-3xl">
<small class="text-sm">Use the below command to download and install Super Graph. You will need Go 1.13 or above</small>
<pre>&#8227; GO111MODULE=on go get -u github.com/dosco/super-graph</pre>
<small class="text-sm">Create a new app and change to it's directory</small>
<pre>&#8227; super-graph new blog; cd blog</pre>
<small class="text-sm">Setup the app database and seed it with fake data. Docker compose will start a Postgres database for your app</small>
<pre>&#8227; docker-compose run blog_api ./super-graph db:setup</pre>
<small class="text-sm">And finally launch Super Graph configured for your app</small>
<pre>&#8227; docker-compose up</pre>
</div>
</div>
</div>
<div class="bg-gray-100 mt-10">
<div class="container mx-auto px-10 md:px-0 py-32">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
The story of {{ data.heroText }}
</h1>
<div class="text-2xl md:text-3xl">
After working on several products throughout my career, I've found that we spend way too much time building API backends. Most APIs also require constant updating, which costs real time and money.<br><br>
It's always the same thing: figure out what the UI needs, then build an endpoint for it. Most API code involves struggling with an ORM to query a database and mangle the data into a shape that the UI expects to see.<br><br>
I didn't want to write this code anymore; I wanted the computer to do it. Enter GraphQL. It sounded great to me, but it still required writing all the same database query code.<br><br>
Having worked with compilers before, I saw this as a compiler problem. Why not build a compiler that converts GraphQL into highly efficient SQL?<br><br>
This compiler is what sits at the heart of Super Graph, with layers of useful functionality around it: authentication, remote joins, Rails integration, database migrations and everything else you need to build production-ready apps.
</div>
</div>
</div>
<div class="overflow-hidden bg-indigo-900">
<div class="container mx-auto py-20">
<img src="/super-graph-web-ui.png">
</div>
</div>
<div class="py-10 md:py-20">
<div class="container mx-auto px-10 md:px-0">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
Try it with a demo Rails app
</h1>
<div class="text-2xl md:text-3xl">
<small class="text-sm">Download the Docker compose config for the demo</small>
<pre>&#8227; curl -L -o demo.yml https://bit.ly/2FZS0uw</pre>
<small class="text-sm">Setup the demo database</small>
<pre>&#8227; docker-compose -f demo.yml run rails_app rake db:create db:migrate db:seed</pre>
<small class="text-sm">Run the demo</small>
<pre>&#8227; docker-compose -f demo.yml up</pre>
<small class="text-sm">Signin to the demo app (user1@demo.com / 123456)</small>
<pre>&#8227; open http://localhost:3000</pre>
<small class="text-sm">Try the super graph web ui</small>
<pre>&#8227; open http://localhost:8080</pre>
</div>
</div>
</div>
<div class="border-t py-10">
<div class="block md:hidden w-100">
<iframe src='https://www.youtube.com/embed/MfPL2A-DAJk' frameborder='0' allowfullscreen style="width: 100%; height: 250px;">
</iframe>
</div>
<div class="container mx-auto flex flex-col md:flex-row items-center">
<div class="w-100 md:w-1/2 p-8">
<h1 class="text-2xl font-bold">GraphQL the future of APIs</h1>
<p class="text-xl text-gray-600">Keeping a tight and fast development loop helps you iterate quickly. Leveraging technology like Super Graph focuses your team on building the core product and not reinventing wheels. GraphQL eliminate the dependency on the backend engineering and keeps the things moving fast</p>
</div>
<div class="hidden md:block md:w-1/2">
<style>.embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }</style>
<div class="embed-container shadow">
<iframe src='https://www.youtube.com/embed/MfPL2A-DAJk' frameborder='0' allowfullscreen >
</iframe>
</div>
</div>
</div>
</div>
<div class="bg-gray-200 mt-10">
<div class="container mx-auto px-10 md:px-0 py-32">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
Build Secure Apps
</h1>
<div class="flex flex-col text-2xl md:text-3xl">
<card className="mb-1 p-8">
<template #image><font-awesome-icon icon="portrait" class="text-red-500" /></template>
<template #title>Role Based Access Control</template>
<template #body>Dynamically assign roles like admin, manager or anon to specific users. Generate role-specific queries at runtime. For example, admins can fetch all users while others can only fetch their own record.</template>
</card>
<card className="mb-1 p-8">
<template #image><font-awesome-icon icon="shield-alt" class="text-blue-500" /></template>
<template #title>Prepared Statements</template>
<template #body>An additional layer of protection from a variety of security issues like SQL injection. In production mode all queries are precompiled into prepared statements so only those can be executed. This also significantly speeds up all queries.</template>
</card>
<card className="p-8">
<template #image><font-awesome-icon icon="lock" class="text-green-500"/></template>
<template #title>Fuzz Tested Code</template>
<template #body>Fuzzing is done by complex software that generates massive amounts of random input to detect whether code is free of security bugs. Google uses fuzzing to protect everything from their cloud infrastructure to the Chrome browser.</template>
</card>
</div>
</div>
</div>
<div class="">
<div class="container mx-auto px-10 md:px-0 py-32">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
More Features
</h1>
<div class="flex flex-col md:flex-row text-2xl md:text-3xl">
<card className="mr-0 md:mr-1 mb-1 flex-col w-100 md:w-1/3 items-center">
<template #image><img src="/arch-remote-join.svg" class="h-64"></template>
<template #title>Remote Joins</template>
<template #body>A powerful feature that allows you to query your database and remote REST APIs at the same time. For example fetch a user from the DB, his tweets from Twitter and his payments from Stripe with a single GraphQL query.</template>
</card>
<card className="mr-0 md:mr-1 mb-1 flex-col w-100 md:w-1/3">
<template #image><img src="/arch-search.svg" class="h-64"></template>
<template #title>Full Text Search</template>
<template #body>Postgres has excellent full-text search built in. You don't need another expensive service. Super Graph makes it easy to use, with support for keyword ranking and highlighting.</template>
</card>
<card className="mb-1 flex-col w-100 md:w-1/3">
<template #image><img src="/arch-bulk.svg" class="h-64"></template>
<template #title>Bulk Inserts</template>
<template #body>Efficiently insert, update and delete multiple items with a single query. Upserts are also supported.</template>
</card>
</div>
</div>
</div>
<div
class="mx-auto text-center py-8"
v-if="data.footer"
>
{{ data.footer }}
</div>
</main>
</div>
</template>
<script>
import NavLink from '@theme/components/NavLink.vue'
import Navbar from '@theme/components/Navbar.vue'
import Card from './Card.vue'
import { library } from '@fortawesome/fontawesome-svg-core'
import { faPortrait, faShieldAlt, faLock } from '@fortawesome/free-solid-svg-icons'
import { FontAwesomeIcon } from '@fortawesome/vue-fontawesome'
library.add(faPortrait, faShieldAlt, faLock)
export default {
components: { NavLink, Navbar, FontAwesomeIcon, Card },
computed: {
data () {
return this.$page.frontmatter
},
actionLink () {
return {
link: this.data.actionLink,
text: this.data.actionText
}
}
}
}
</script>

View File

@@ -1,5 +1,5 @@
 <template>
-  <div class="shadow bg-white p-4 flex items-start" :class="className">
+  <div class="shadow p-4 flex items-start" :class="className">
     <slot name="image"></slot>
     <div class="pl-4">
       <h2 class="p-0">

View File

@@ -0,0 +1,431 @@
<template>
<div>
<main aria-labelledby="main-title" >
<Navbar />
<div style="height: 3.6rem"></div>
<div class="container mx-auto pt-4">
<div class="text-center">
<div class="text-center text-3xl md:text-4xl text-black leading-tight font-semibold">
Fetch data without code
</div>
<NavLink
class="inline-block px-4 py-3 my-8 bg-blue-600 text-white font-bold rounded"
:item="actionLink"
/>
<a
class="px-4 py-3 my-8 border-2 border-blue-600 text-blue-600 font-bold rounded"
href="https://github.com/dosco/super-graph"
target="_blank"
>Github</a>
</div>
</div>
<div class="container mx-auto mb-8 mt-0 md:mt-20 bg-green-100">
<div class="flex flex-wrap">
<div class="w-100 md:w-1/2 border border-green-500 text-gray-6 00 text-sm md:text-lg p-6">
<div class="text-xl font-bold pb-4">Before, struggle with SQL</div>
<pre>
type User struct {
gorm.Model
Profile Profile
ProfileID int
}
type Profile struct {
gorm.Model
Name string
}
db.Model(&user).
Related(&profile).
Association("Languages").
Where("name in (?)", []string{"test"}).
Joins("left join emails on emails.user_id = users.id")
Find(&users)
and more ...
</pre>
</div>
<div class="w-100 md:w-1/2 border border-l md:border-l-0 border-green-500 text-blue-900 text-sm md:text-lg p-6">
<div class="text-xl font-bold pb-4">With Super Graph, just ask.</div>
<pre>
query {
user(id: 5) {
id
first_name
last_name
picture_url
posts(first: 20, order_by: { score: desc }) {
slug
title
created_at
votes_total
votes { created_at }
author { id name }
tags { id name }
}
posts_cursor
}
}
</pre>
</div>
</div>
</div>
<div class="mt-0 md:mt-20">
<div
class="flex flex-wrap mx-2 md:mx-20"
v-if="data.features && data.features.length"
>
<div
class="w-2/4 md:w-1/3 shadow"
v-for="(feature, index) in data.features"
:key="index"
>
<div class="p-8">
<h2 class="text-lg uppercase border-0">{{ feature.title }}</h2>
<div class="text-xl text-gray-900 leading-snug">{{ feature.details }}</div>
</div>
</div>
</div>
</div>
<div class="pt-0 md:pt-20">
<div class="container mx-auto p-10">
<div class="flex justify-center pb-20">
<img src="arch-basic.svg">
</div>
<div class="text-2xl md:text-3xl">
Super Graph is a library and service that fetches data from any Postgres database using just GraphQL. No more struggling with ORMs and SQL to wrangle data out of the database. No more having to figure out the right joins or writing inefficient queries. However complex the GraphQL, Super Graph will always generate a single efficient SQL query. The goal is to save you time and money so you can focus on your app's core value.
</div>
</div>
</div>
<div class="pt-20">
<div class="container mx-auto px-10 md:px-0">
<h1 class="uppercase font-semibold text-2xl text-blue-800 text-center">
Try Super Graph
</h1>
<h1 class="uppercase font-semibold text-lg text-gray-800">
Deploy as a service using Docker
</h1>
<div class="p-4 rounded bg-black text-white">
<pre>$ git clone https://github.com/dosco/super-graph && cd super-graph && make install</pre>
<pre>$ super-graph new blog; cd blog</pre>
<pre>$ docker-compose run blog_api ./super-graph db:setup</pre>
<pre>$ docker-compose up</pre>
</div>
<h1 class="uppercase font-semibold text-lg text-gray-800">
Or use it with your own code
</h1>
<div class="text-md">
<pre class="p-4 rounded bg-black text-white">
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	"github.com/dosco/super-graph/core"
	_ "github.com/jackc/pgx/v4/stdlib"
)

func main() {
	db, err := sql.Open("pgx", "postgres://postgres:@localhost:5432/example_db")
	if err != nil {
		log.Fatal(err)
	}

	sg, err := core.NewSuperGraph(nil, db)
	if err != nil {
		log.Fatal(err)
	}

	graphqlQuery := `
	query {
		posts {
			id
			title
		}
	}`

	res, err := sg.GraphQL(context.Background(), graphqlQuery, nil)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(string(res.Data))
}
</pre>
</div>
</div>
</div>
<div class="pt-0 md:pt-20">
<div class="container mx-auto px-10 md:px-0 py-32">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
The story of {{ data.heroText }}
</h1>
<div class="text-2xl md:text-3xl">
After working on several products throughout my career, I've found that we spend way too much time building API backends. Most APIs also require constant updating, which costs real time and money.<br><br>
It's always the same thing: figure out what the UI needs, then build an endpoint for it. Most API code involves struggling with an ORM to query a database and mangle the data into a shape that the UI expects to see.<br><br>
I didn't want to write this code anymore; I wanted the computer to do it. Enter GraphQL. It sounded great to me, but it still required writing all the same database query code.<br><br>
Having worked with compilers before, I saw this as a compiler problem. Why not build a compiler that converts GraphQL into highly efficient SQL?<br><br>
This compiler is what sits at the heart of Super Graph, with layers of useful functionality around it: authentication, remote joins, Rails integration, database migrations and everything else you need to build production-ready apps.
</div>
</div>
</div>
<div class="overflow-hidden bg-indigo-900">
<div class="container mx-auto py-20">
<img src="/super-graph-web-ui.png">
</div>
</div>
<!--
<div class="py-10 md:py-20">
<div class="container mx-auto px-10 md:px-0">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
Try it with a demo Rails app
</h1>
<div class="text-2xl md:text-3xl">
<small class="text-sm">Download the Docker compose config for the demo</small>
<pre>&#8227; curl -L -o demo.yml https://bit.ly/2FZS0uw</pre>
<small class="text-sm">Setup the demo database</small>
<pre>&#8227; docker-compose -f demo.yml run rails_app rake db:create db:migrate db:seed</pre>
<small class="text-sm">Run the demo</small>
<pre>&#8227; docker-compose -f demo.yml up</pre>
<small class="text-sm">Signin to the demo app (user1@demo.com / 123456)</small>
<pre>&#8227; open http://localhost:3000</pre>
<small class="text-sm">Try the super graph web ui</small>
<pre>&#8227; open http://localhost:8080</pre>
</div>
</div>
</div>
-->
<div class="pt-0 md:pt-20">
<div class="block md:hidden w-100">
<iframe src='https://www.youtube.com/embed/MfPL2A-DAJk' frameborder='0' allowfullscreen style="width: 100%; height: 250px;">
</iframe>
</div>
<div class="container mx-auto flex flex-col md:flex-row items-center">
<div class="w-100 md:w-1/2 p-8">
<h1 class="text-2xl font-bold">GraphQL the future of APIs</h1>
<p class="text-xl text-gray-600">Keeping a tight and fast development loop helps you iterate quickly. Leveraging technology like Super Graph focuses your team on building the core product and not reinventing wheels. GraphQL eliminate the dependency on the backend engineering and keeps the things moving fast</p>
</div>
<div class="hidden md:block md:w-1/2">
<style>.embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }</style>
<div class="embed-container shadow">
<iframe src='https://www.youtube.com/embed/MfPL2A-DAJk' frameborder='0' allowfullscreen >
</iframe>
</div>
</div>
</div>
</div>
<div class="container mx-auto pt-0 md:pt-20">
<div class="flex flex-wrap bg-green-100">
<div class="w-100 md:w-1/2 border border-green-500 text-gray-6 00 text-sm md:text-lg p-6">
<div class="text-xl font-bold pb-4">No more joins joins, json, orms, just use GraphQL. Fetch all the data want in the structure you need.</div>
<pre>
query {
thread {
slug
title
published
createdAt : created_at
totalVotes : cached_votes_total
totalPosts : cached_posts_total
vote : thread_vote(where: { user_id: { eq: $user_id } }) {
created_at
}
topics {
slug
name
}
author : me {
slug
}
posts(first: 1, order_by: { score: desc }) {
slug
body
published
createdAt : created_at
totalVotes : cached_votes_total
totalComments : cached_comments_total
vote {
created_at
}
author : user {
slug
firstName : first_name
lastName : last_name
}
}
posts_cursor
}
}
</pre>
</div>
<div class="w-100 md:w-1/2 border border-l md:border-l-0 border-green-500 text-blue-900 text-sm md:text-lg p-6">
<div class="text-xl font-bold pb-4">Instant results using a single highly optimized SQL. It's just that simple.</div>
<pre>
{
"data": {
"thread": {
"slug": "eveniet-ex-24",
"vote": null,
"posts": [
{
"body": "Dolor laborum harum sed sit est ducimus temporibus velit non nobis repudiandae nobis suscipit commodi voluptatem debitis sed voluptas sequi officia.",
"slug": "illum-in-voluptas-1418",
"vote": null,
"author": {
"slug": "sigurd-kemmer",
"lastName": "Effertz",
"firstName": "Brandt"
},
"createdAt": "2020-04-07T04:22:42.115874+00:00",
"published": true,
"totalVotes": 0,
"totalComments": 2
}
],
"title": "In aut qui deleniti quia dolore quasi porro tenetur voluptatem ut adita alias fugit explicabo.",
"author": null,
"topics": [
{
"name": "CloudRun",
"slug": "cloud-run"
},
{
"name": "Postgres",
"slug": "postgres"
}
],
"createdAt": "2020-04-07T04:22:38.099482+00:00",
"published": true,
"totalPosts": 24,
"totalVotes": 0,
"posts_cursor": "mpeBl6L+QfJHc3cmLkLDj9pOdEZYTt5KQtLsazG3TLITB3hJhg=="
}
}
}
</pre>
</div>
</div>
</div>
<div class="pt-0 md:pt-20">
<div class="container mx-auto px-10 md:px-0 py-32">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
Build Secure Apps
</h1>
<div class="flex flex-col text-2xl md:text-3xl">
<card className="mb-1 p-8">
<template #image><font-awesome-icon icon="portrait" class="text-red-500" /></template>
<template #title>Role Based Access Control</template>
<template #body>Dynamically assign roles like admin, manager or anon to specific users. Generate role-specific queries at runtime. For example, admins can fetch all users while others can only fetch their own record.</template>
</card>
<card className="mb-1 p-8">
<template #image><font-awesome-icon icon="shield-alt" class="text-blue-500" /></template>
<template #title>Prepared Statements</template>
<template #body>An additional layer of protection from a variety of security issues like SQL injection. In production mode all queries are precompiled into prepared statements so only those can be executed. This also significantly speeds up all queries.</template>
</card>
<card className="p-8">
<template #image><font-awesome-icon icon="lock" class="text-green-500"/></template>
<template #title>Fuzz Tested Code</template>
<template #body>Fuzzing is done by complex software that generates massive amounts of random input to detect whether code is free of security bugs. Google uses fuzzing to protect everything from their cloud infrastructure to the Chrome browser.</template>
</card>
</div>
</div>
</div>
<div class="pt-0 md:py -20">
<div class="container mx-auto">
<h1 class="uppercase font-semibold text-xl text-blue-800 mb-4">
More Features
</h1>
<div class="flex flex-col md:flex-row text-2xl md:text-3xl">
<card className="mr-0 md:mr-1 mb-1 flex-col w-100 md:w-1/3 items-center">
<!-- <template #image><img src="/arch-remote-join.svg" class="h-64"></template> -->
<template #title>Remote Joins</template>
<template #body>A powerful feature that allows you to query your database and remote REST APIs at the same time. For example fetch a user from the DB, his tweets from Twitter and his payments from Stripe with a single GraphQL query.</template>
</card>
<card className="mr-0 md:mr-1 mb-1 flex-col w-100 md:w-1/3">
<!-- <template #image><img src="/arch-search.svg" class="h-64"></template> -->
<template #title>Full Text Search</template>
<template #body>Postgres has excellent full-text search built in. You don't need another expensive service. Super Graph makes it easy to use, with support for keyword ranking and highlighting.</template>
</card>
<card className="mb-1 flex-col w-100 md:w-1/3">
<!-- <template #image><img src="/arch-bulk.svg" class="h-64"></template> -->
<template #title>Bulk Inserts</template>
<template #body>Efficiently insert, update and delete multiple items with a single query. Upserts are also supported.</template>
</card>
</div>
</div>
</div>
<div
class="mx-auto text-center py-8"
v-if="data.footer"
>
{{ data.footer }}
</div>
</main>
</div>
</template>
<script>
import NavLink from '@theme/components/NavLink.vue'
import Navbar from '@theme/components/Navbar.vue'
import Card from './Card.vue'
import { library } from '@fortawesome/fontawesome-svg-core'
import { faPortrait, faShieldAlt, faLock } from '@fortawesome/free-solid-svg-icons'
import { FontAwesomeIcon } from '@fortawesome/vue-fontawesome'
library.add(faPortrait, faShieldAlt, faLock)
export default {
components: { NavLink, Navbar, FontAwesomeIcon, Card },
computed: {
data () {
return this.$page.frontmatter
},
actionLink () {
return {
link: this.data.actionLink,
text: this.data.actionText
}
}
}
}
</script>

View File

@@ -1,6 +1,6 @@
 let ogprefix = 'og: http://ogp.me/ns#'
 let title = 'Super Graph'
-let description = 'An instant GraphQL API for your app. No code needed.'
+let description = 'Fetch data without code'
 let color = '#f42525'

 module.exports = {
@@ -15,7 +15,7 @@ module.exports = {
     { text: 'Internals', link: '/internals' },
     { text: 'Github', link: 'https://github.com/dosco/super-graph' },
     { text: 'Docker', link: 'https://hub.docker.com/r/dosco/super-graph/builds' },
-    { text: 'Join Chat', link: 'https://discord.gg/NKdXBc' },
+    { text: 'Join Chat', link: 'https://discord.com/invite/23Wh7c' },
   ],

   serviceWorker: {
     updatePopup: true

View File

(10 binary image assets changed in this diff; file sizes are unchanged before and after: 5.4 KiB, 2.3 KiB, 5.4 KiB, 1.8 KiB, 21 KiB, 66 KiB, 7.4 KiB, 149 KiB, 1.8 KiB and 146 KiB.)

Some files were not shown because too many files have changed in this diff.