
Job Interface

https://github.com/sqlparser/sqlflow_public/blob/master/api/sqlflow_api.md#sqlflow-user-job-interface



Simple job rest API

1. Submit a simple job

Call this API with your SQL files to get a result that includes the data lineage. A SQLFlow job supports both multiple files and zip archive files.

/gspLive_backend/sqlflow/job/submitUserJob

Example in Curl

curl -X POST "https://api.gudusoft.com/gspLive_backend/sqlflow/job/submitUserJob" -H "accept:application/json;charset=utf-8" -H "Content-Type:multipart/form-data" -F "userId=YOUR USER ID HERE" -F "token=YOUR TOKEN HERE" -F "sqlfiles=@FIRST FILE PATH" -F "sqlfiles=@SECOND FILE PATH" -F "dbvendor=dbvmssql" -F "jobName=job1"

Note:

  • -H "Content-Type:multipart/form-data" is required

  • Add @ before the file path

Sample response:

{
	"code":200,
	"data":{
		"jobId":"c359aef4bd9641d697732422debd8055",
		"jobName":"job1",
		"userId":"google-oauth2|104002923119102769706",
		"dbVendor":"dbvmssql",
		"dataSource":{
			
		},
		"fileNames":["1.sql","1.zip"],
		"createTime":"2020-12-15 15:14:39",
		"status":"create"
	}
}

Please record the jobId field; it is needed by the status and export calls below.
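The curl call above can also be scripted. Below is a minimal Python sketch (assuming the third-party requests package; the helper names are illustrative, not part of the SQLFlow API) that submits SQL files and pulls the jobId out of the response:

```python
API = "https://api.gudusoft.com/gspLive_backend/sqlflow/job"


def extract_job_id(resp: dict) -> str:
    # On success the service returns code == 200 and the id under data.jobId.
    if resp.get("code") != 200:
        raise RuntimeError(f"submit failed: {resp}")
    return resp["data"]["jobId"]


def submit_job(user_id, token, sql_paths, dbvendor, job_name):
    import requests  # deferred so extract_job_id is usable without requests

    # Repeated "sqlfiles" form fields, one per file (same as -F "sqlfiles=@...").
    files = [("sqlfiles", open(p, "rb")) for p in sql_paths]
    data = {"userId": user_id, "token": token,
            "dbvendor": dbvendor, "jobName": job_name}
    try:
        r = requests.post(f"{API}/submitUserJob", data=data, files=files)
    finally:
        for _, f in files:
            f.close()
    return extract_job_id(r.json())
```

For example, `submit_job("YOUR USER ID", "YOUR TOKEN", ["1.sql", "1.zip"], "dbvmssql", "job1")` mirrors the curl command and returns the jobId to record.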

2. Get job status

/gspLive_backend/sqlflow/job/displayUserJobSummary

Example in Curl

curl -X POST "https://api.gudusoft.com/gspLive_backend/sqlflow/job/displayUserJobSummary" -F "jobId=c359aef4bd9641d697732422debd8055" -F "userId=YOUR USER ID HERE" -F "token=YOUR TOKEN HERE"

Sample response:

{
  "code":200,
  "data":{
  	"jobId":"c359aef4bd9641d697732422debd8055",
  	"jobName":"job1",
  	"userId":"google-oauth2|104002923119102769706",
  	"dbVendor":"dbvmssql",
  	"dataSource":{
  		
  	},
  	"fileNames":["1.sql","1.zip"],
  	"createTime":"2020-12-15 15:14:39",
  	"status":"success",
  	"sessionId":"fe5898d4e1b1a7782352b50a8203ca24c04f5513446e9fb059fc4d584fab4dbf_1608045280033"
  }
}
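Because a job runs asynchronously, the status field has to be polled until it reaches a terminal value. A small polling sketch follows; "success" is the terminal status shown above, while "fail" and "partialSuccess" are assumed from the counters returned by displayUserJobsSummary, and poll_until_done is an illustrative helper, not part of the API:

```python
import time

# "success" appears in the sample response; "fail" and "partialSuccess"
# are assumed terminal states based on the displayUserJobsSummary counters.
TERMINAL_STATUSES = {"success", "partialSuccess", "fail"}


def poll_until_done(fetch_summary, interval_sec=2.0, max_tries=30):
    """fetch_summary() should return the parsed displayUserJobSummary JSON."""
    for _ in range(max_tries):
        summary = fetch_summary()
        if summary["data"]["status"] in TERMINAL_STATUSES:
            return summary
        time.sleep(interval_sec)
    raise TimeoutError("job did not reach a terminal status in time")
```

Here fetch_summary would wrap a POST to /gspLive_backend/sqlflow/job/displayUserJobSummary with the userId, token, and jobId form fields.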
To list all of the current user's jobs (including history jobs):

/gspLive_backend/sqlflow/job/displayUserJobsSummary

Example in Curl

curl -X POST "https://api.gudusoft.com/gspLive_backend/sqlflow/job/displayUserJobsSummary" -F "userId=YOUR USER ID HERE" -F "token=YOUR TOKEN HERE"

Sample Response:

{
  "code": 200,
  "data": {
    "total": 2,
    "success": 2,
    "partialSuccess": 0,
    "fail": 0,
    "jobIds": [
      "939fdbbaf52b45139b86c0761d6036b0",
      "44bf162fc5e04ab68289e6e784875203"
    ],
    "jobDetails": [
      {
        "jobId": "939fdbbaf52b45139b86c0761d6036b0",
        "jobName": "yuanhao1",
        "userId": "auth0|632xxxx",
        "userEmail": "xxx@xxx.com",
        "dbVendor": "dbvoracle",
        "showRelationType": "fdd",
        "ignoreRecordSet": true,
        "dataflowOfAggregateFunction": "direct",
        "showConstantTable": false,
        "showTransform": false,
        "ignoreFunction": true,
        "fileNames": [
          "test.sql"
        ],
        "createTime": "2022-10-04 10:45:04",
        "status": "success",
        "sessionId": "24a4455c71fa35c0393d5747f9c23a9d99f32fa4c130a6b8da8d6a7db8d157ae_1664880305725",
        "executeTime": "0min 1sec",
        "executeTimeMillis": 0,
        "onTop": false,
        "parallel": false,
        "persist": false,
        "incremental": false,
        "schedulable": false,
        "cancelSchedule": false,
        "lastExecuteTime": "2022-10-04 10:45:05",
        "subJobs": [],
        "version": "5.2.0"
      },
      {
        "jobId": "44bf162fc5e04ab68289e6e784875203",
        "jobName": "sqlserver sample lineage",
        "userId": "auth0|6326xxxx",
        "userEmail": "xxxx@xxxx.com",
        "dbVendor": "dbvmssql",
        "showRelationType": "fdd",
        "ignoreRecordSet": true,
        "showConstantTable": false,
        "showTransform": false,
        "ignoreFunction": true,
        "fileNames": [
          "metadata.zip"
        ],
        "createTime": "2022-09-18 06:46:20",
        "status": "success",
        "sessionId": "e1bc494a6f683c9e724c5f018d2a0c96d80304a3ddb025caf87dc1ac19e1a785_1636819107871",
        "hasMetadata": true,
        "executeTime": "0min 10sec",
        "executeTimeMillis": 0,
        "onTop": false,
        "parallel": false,
        "persist": false,
        "incremental": false,
        "schedulable": false,
        "cancelSchedule": true
      }
    ]
  }
}

3. Export data lineage

When the job status is success, you can export the data lineage in JSON, CSV, or GraphML format.

/gspLive_backend/sqlflow/job/exportLineageAsJson

Example in Curl

curl -X POST "https://api.gudusoft.com/gspLive_backend/sqlflow/job/exportLineageAsJson" -F "userId=YOUR USER ID HERE" -F "token=YOUR TOKEN HERE" -F "jobId=c359aef4bd9641d697732422debd8055" --output lineage.json

Note:

If you want the table-to-table level relations only, add the option -F "tableToTable=true"

The sample response is a file in JSON format:

{
	"jobId":"939fdbbaf52b45139b86c0761d6036b0",
	"code":200,
	"data":{
		"mode":"global",
		"summary":{
			"schema":0,
			"process":1,
			"database":0,
			"view":0,
			"mostRelationTables":[],
			"column":3,
			"relationship":0,
			"table":1
		},
		"sqlflow":{
			"dbvendor":"dbvoracle",
			"relationships":[],
			"dbobjs":[
				{
					"queryHashId":"04ebc5aec1a07e1db80b0bc798742875",
					"name":"QUERY INSERT-1",
					"coordinates":[
						{
							"hashCode":"0",
							"x":1,
							"y":1
						},
						{
							"hashCode":"0",
							"x":1,
							"y":73
						}
					],
					"id":"8",
					"type":"process"
				},
				{
					"columns":[
						{
							"name":"ID",
							"coordinates":[
								{
									"hashCode":"0",
									"x":1,
									"y":28
								},
								{
									"hashCode":"0",
									"x":1,
									"y":30
								}
							],
							"id":"5"
						},
						{
							"name":"FIRST_NAME",
							"coordinates":[
								{
									"hashCode":"0",
									"x":1,
									"y":32
								},
								{
									"hashCode":"0",
									"x":1,
									"y":42
								}
							],
							"id":"6"
						},
						{
							"name":"LAST_NAME",
							"coordinates":[
								{
									"hashCode":"0",
									"x":1,
									"y":44
								},
								{
									"hashCode":"0",
									"x":1,
									"y":53
								}
							],
							"id":"7"
						}
					],
					"name":"RAW_CUSTOMERS",
					"coordinates":[
						{
							"hashCode":"0",
							"x":1,
							"y":13
						},
						{
							"hashCode":"0",
							"x":1,
							"y":26
						}
					],
					"id":"4",
					"type":"table"
				}
			]
		},
		"graph":{
			"relationshipIdMap":{},
			"elements":{
				"tables":[],
				"edges":[]
			},
			"tooltip":{},
			"listIdMap":{}
		}
	},
	"sessionId":"24a4455c71fa35c0393d5747f9c23a9d99f32fa4c130a6b8da8d6a7db8d157ae_1664880305725"
}
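The exported file can be post-processed directly. As an illustration, this sketch (pure standard library; list_table_columns is a hypothetical helper name) walks the dbobjs array of an exportLineageAsJson payload like the one above and lists each table with its columns:

```python
def list_table_columns(lineage: dict) -> dict:
    """Map table name -> column names from an exportLineageAsJson payload."""
    tables = {}
    for obj in lineage["data"]["sqlflow"]["dbobjs"]:
        # dbobjs mixes "process" entries (queries) and "table" entries.
        if obj.get("type") == "table":
            tables[obj["name"]] = [c["name"] for c in obj.get("columns", [])]
    return tables
```

Applied to the sample above, it would return {"RAW_CUSTOMERS": ["ID", "FIRST_NAME", "LAST_NAME"]}, skipping the QUERY INSERT-1 process node.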
/gspLive_backend/sqlflow/job/exportFullLineageAsCsv

Example in Curl

curl -X POST "https://api.gudusoft.com/gspLive_backend/sqlflow/job/exportFullLineageAsCsv" -F "userId=YOUR USER ID HERE" -F "token=YOUR TOKEN HERE" -F "jobId=c359aef4bd9641d697732422debd8055" --output lineage.csv

Note:

If you want the table-to-table level relations only, add the option -F "tableToTable=true"

To change the CSV delimiter, add the option -F "delimiter=<delimiter char>"

/gspLive_backend/sqlflow/job/exportLineageAsGraphml

Example in Curl

curl -X POST "https://api.gudusoft.com/gspLive_backend/sqlflow/job/exportLineageAsGraphml" -F "userId=YOUR USER ID HERE" -F "token=YOUR TOKEN HERE" -F "jobId=c359aef4bd9641d697732422debd8055" --output lineage.graphml

Note:

If you want the table-to-table level relations only, add the option -F "tableToTable=true"
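The three export endpoints differ only in the path and the file they return, so they can share one download helper. A sketch (requests assumed; export_lineage is an illustrative name, not part of the API):

```python
EXPORT_ENDPOINTS = {
    "json": "exportLineageAsJson",
    "csv": "exportFullLineageAsCsv",
    "graphml": "exportLineageAsGraphml",
}


def export_lineage(user_id, token, job_id, fmt="json",
                   table_to_table=False, out_path=None):
    import requests  # deferred so the endpoint table is importable without requests

    endpoint = EXPORT_ENDPOINTS[fmt]
    data = {"userId": user_id, "token": token, "jobId": job_id,
            "tableToTable": "true" if table_to_table else "false"}
    r = requests.post(
        f"https://api.gudusoft.com/gspLive_backend/sqlflow/job/{endpoint}",
        data=data)
    out_path = out_path or f"lineage.{fmt}"
    with open(out_path, "wb") as f:
        f.write(r.content)  # the response body is the exported file
    return out_path
```

For example, `export_lineage(uid, tok, job_id, fmt="graphml")` saves lineage.graphml, which can then be opened in the yEd Graph Editor.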

Regular job rest API

1. Submit a regular job

Call this API with your SQL files to get a result that includes the data lineage. A SQLFlow job supports both multiple files and zip archive files.

Set incremental=true if the job is incremental.

  • jobId must be omitted on the first submit; note down the jobId field in the response message.

  • jobId is required on every subsequent submit; give the jobId returned in the first submit response.

/gspLive_backend/sqlflow/job/submitPersistJob

Example in Curl

curl -X POST "https://api.gudusoft.com/gspLive_backend/sqlflow/job/submitPersistJob" -H "accept:application/json;charset=utf-8" -H "Content-Type:multipart/form-data" -F "userId=YOUR USER ID HERE" -F "token=YOUR TOKEN HERE" -F "sqlfiles=@FIRST FILE PATH" -F "sqlfiles=@SECOND FILE PATH" -F "dbvendor=dbvmssql" -F "jobName=job1" -F "incremental=true"

Incremental submit in Curl

curl -X POST "https://api.gudusoft.com/gspLive_backend/sqlflow/job/submitPersistJob" -H "accept:application/json;charset=utf-8" -H "Content-Type:multipart/form-data" -F "userId=YOUR USER ID HERE" -F "token=YOUR TOKEN HERE" -F "sqlfiles=@FIRST FILE PATH" -F "sqlfiles=@SECOND FILE PATH" -F "dbvendor=dbvmssql" -F "jobName=job1" -F "incremental=true" -F "jobId=JobId OF FIRST SUBMIT"

Note:

  • -H "Content-Type:multipart/form-data" is required

  • Add @ before the file path

Return data:

{
	"code":200,
	"data":{
		"jobId":"c359aef4bd9641d697732422debd8055",
		"jobName":"job1",
		"userId":"google-oauth2|104002923119102769706",
		"dbVendor":"dbvmssql",
		"dataSource":{
			
		},
		"fileNames":["1.sql","1.zip"],
		"createTime":"2020-12-15 15:14:39",
		"status":"create"
	}
}

Please record the jobId field for further use.
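The first-submit/subsequent-submit rule above can be captured in a small helper that builds the form fields; jobId is included only when re-submitting. The field names are those used in the curl examples, while persist_job_fields itself is a hypothetical helper:

```python
def persist_job_fields(user_id, token, dbvendor, job_name, job_id=None):
    """Form fields for /gspLive_backend/sqlflow/job/submitPersistJob.

    Omit job_id on the first submit; pass the jobId returned by that
    submit on every later (incremental) submit.
    """
    fields = {
        "userId": user_id,
        "token": token,
        "dbvendor": dbvendor,
        "jobName": job_name,
        "incremental": "true",
    }
    if job_id is not None:
        fields["jobId"] = job_id  # only on subsequent submits
    return fields
```

The returned dict can be passed as the data argument of a multipart POST together with the sqlfiles attachments.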


display the specific user job summary

POST /api/gspLive_backend/sqlflow/job/displayUserJobSummary

Query parameters

  • jobId (string, optional): job id

  • jobName (string, optional): job name

  • token (string, required): the token generated from the userid and usersecret; it is used in every API invocation

  • userId (string, required): the user id of the SQLFlow web or client

Responses

  • 200 OK (application/json;charset=utf-8): the job summary, as in the example response below

  • 201 Created

  • 401 Unauthorized

  • 403 Forbidden

  • 404 Not Found

Example request:

POST /api/gspLive_backend/sqlflow/job/displayUserJobSummary HTTP/1.1
Host: api.gudusoft.com
Accept: */*

Example response:

{
  "code": 200,
  "data": {
    "jobId": "939fdbbaf52b45139b86c0761d6036b0",
    "jobName": "yuanhao1",
    "userId": "auth0|6326bxxxx",
    "userEmail": "xxx@xxx.com",
    "dbVendor": "dbvoracle",
    "showRelationType": "fdd",
    "ignoreRecordSet": true,
    "dataflowOfAggregateFunction": "direct",
    "showConstantTable": false,
    "showTransform": false,
    "ignoreFunction": true,
    "fileNames": [
      "test.sql"
    ],
    "createTime": "2022-10-04 10:45:04",
    "status": "success",
    "sessionId": "24a4455c71fa35c0393d5747f9c23a9d99f32fa4c130a6b8da8d6a7db8d157ae_1664880305725",
    "executeTime": "0min 1sec",
    "executeTimeMillis": 0,
    "onTop": false,
    "parallel": false,
    "persist": false,
    "incremental": false,
    "schedulable": false,
    "cancelSchedule": false,
    "lastExecuteTime": "2022-10-04 10:45:05",
    "subJobs": [],
    "version": "5.2.0"
  }
}

Get all jobs (include history jobs) status and summary

POST /api/gspLive_backend/sqlflow/job/displayUserJobsSummary

Query parameters

  • token (string, required): the token generated from the userid and usersecret; it is used in every API invocation

  • userId (string, required): user id

Responses

  • 200 OK (application/json;charset=utf-8): the jobs summary, as in the example response below

  • 201 Created

  • 401 Unauthorized

  • 403 Forbidden

  • 404 Not Found

Example request:

POST /api/gspLive_backend/sqlflow/job/displayUserJobsSummary HTTP/1.1
Host: api.gudusoft.com
Accept: */*

Example response:

{
  "code": 200,
  "data": {
    "total": 2,
    "success": 2,
    "partialSuccess": 0,
    "fail": 0,
    "jobIds": [
      "939fdbbaf52b45139b86c0761d6036b0",
      "44bf162fc5e04ab68289e6e784875203"
    ],
    "jobDetails": [
      {
        "jobId": "939fdbbaf52b45139b86c0761d6036b0",
        "jobName": "yuanhao1",
        "userId": "auth0|632xxxx",
        "userEmail": "xxx@xxx.com",
        "dbVendor": "dbvoracle",
        "showRelationType": "fdd",
        "ignoreRecordSet": true,
        "dataflowOfAggregateFunction": "direct",
        "showConstantTable": false,
        "showTransform": false,
        "ignoreFunction": true,
        "fileNames": [
          "test.sql"
        ],
        "createTime": "2022-10-04 10:45:04",
        "status": "success",
        "sessionId": "24a4455c71fa35c0393d5747f9c23a9d99f32fa4c130a6b8da8d6a7db8d157ae_1664880305725",
        "executeTime": "0min 1sec",
        "executeTimeMillis": 0,
        "onTop": false,
        "parallel": false,
        "persist": false,
        "incremental": false,
        "schedulable": false,
        "cancelSchedule": false,
        "lastExecuteTime": "2022-10-04 10:45:05",
        "subJobs": [],
        "version": "5.2.0"
      },
      {
        "jobId": "44bf162fc5e04ab68289e6e784875203",
        "jobName": "sqlserver sample lineage",
        "userId": "auth0|6326xxxx",
        "userEmail": "xxxx@xxxx.com",
        "dbVendor": "dbvmssql",
        "showRelationType": "fdd",
        "ignoreRecordSet": true,
        "showConstantTable": false,
        "showTransform": false,
        "ignoreFunction": true,
        "fileNames": [
          "metadata.zip"
        ],
        "createTime": "2022-09-18 06:46:20",
        "status": "success",
        "sessionId": "e1bc494a6f683c9e724c5f018d2a0c96d80304a3ddb025caf87dc1ac19e1a785_1636819107871",
        "hasMetadata": true,
        "executeTime": "0min 10sec",
        "executeTimeMillis": 0,
        "onTop": false,
        "parallel": false,
        "persist": false,
        "incremental": false,
        "schedulable": false,
        "cancelSchedule": true
      }
    ]
  }
}

export sqlflow lineage as json format

POST /api/gspLive_backend/sqlflow/job/exportLineageAsJson

Query parameters

  • column (string, optional): column to export

  • database (string, optional): source database

  • schema (string, optional): schema

  • table (string, optional): table

  • ignoreFunction (boolean, optional): whether to ignore functions. Default: false

  • ignoreRecordSet (boolean, optional): whether to ignore record sets. Default: false

  • jobId (string, optional): the job to export; the user's latest job is used if empty

  • showConstantTable (boolean, optional): whether to show constant tables. Default: false

  • showLinkOnly (boolean, optional): whether to show links only. Default: true

  • showRelationType (string, optional): the relation types to include; multiple values are separated by commas, e.g. fdd,frd,fdr,join. 'fdd': the value of the target column comes from the source column; 'frd': the recordset count of the target column is affected by the value of the source column; 'fdr': the value of the target column is affected by the recordset count of the source column; 'join': combines rows from two or more tables based on a related column between them. Default: fdd

  • showTransform (boolean, optional): whether to show transforms. Default: false

  • simpleOutput (boolean, optional): simple output that ignores the intermediate results. Default: false

  • tableToTable (boolean, optional): whether to show the table-to-table relations only. Default: false

  • token (string, required): the token generated from the userid and usersecret; it is used in every API invocation

  • treatArgumentsInCountFunctionAsDirectDataflow (boolean, optional): whether to treat the arguments in the COUNT function as direct dataflow. Default: false

  • userId (string, required): the user id of the SQLFlow web or client

Responses

  • 200 OK (application/json)

  • 201 Created

  • 401 Unauthorized

  • 403 Forbidden

  • 404 Not Found

Example request:

POST /api/gspLive_backend/sqlflow/job/exportLineageAsJson HTTP/1.1
Host: api.gudusoft.com
Accept: */*

The 200 response body is the exported lineage JSON file.

export sqlflow full lineage as csv format

POST /api/gspLive_backend/sqlflow/job/exportFullLineageAsCsv

Query parameters

  • delimiter (string, optional): delimiter of the values in the CSV. Default: ','

  • jobId (string, optional): the job to export; the user's latest job is used if empty

  • tableToTable (boolean, optional): whether to show the table-to-table relations only. Default: false

  • token (string, required): the token generated from the userid and usersecret; it is used in every API invocation

  • userId (string, required): the user id of the SQLFlow web or client

Responses

  • 200 OK (application/json;charset=utf-8): the exported file (binary)

  • 201 Created

  • 401 Unauthorized

  • 403 Forbidden

  • 404 Not Found

Example request:

POST /api/gspLive_backend/sqlflow/job/exportFullLineageAsCsv HTTP/1.1
Host: api.gudusoft.com
Accept: */*

The 200 response body is the exported CSV file.

export sqlflow lineage as graphml format

POST /api/gspLive_backend/sqlflow/job/exportLineageAsGraphml

Query parameters

  • column (string, optional): column

  • database (string, optional): database

  • schema (string, optional): schema

  • table (string, optional): table

  • ignoreFunction (boolean, optional): whether to ignore functions. Default: false

  • ignoreRecordSet (boolean, optional): whether to ignore record sets. Default: false

  • jobId (string, optional): the job to export; the user's latest job is used if empty

  • showConstantTable (boolean, optional): whether to show constant tables. Default: false

  • showLinkOnly (boolean, optional): whether to show links only. Default: true

  • showRelationType (string, optional): the relation types to include; multiple values are separated by commas, e.g. fdd,frd,fdr,join. 'fdd': the value of the target column comes from the source column; 'frd': the recordset count of the target column is affected by the value of the source column; 'fdr': the value of the target column is affected by the recordset count of the source column; 'join': combines rows from two or more tables based on a related column between them. Default: fdd

  • showTransform (boolean, optional): whether to show transforms. Default: false

  • simpleOutput (boolean, optional): simple output that ignores the intermediate results. Default: false

  • tableToTable (boolean, optional): whether to show the table-to-table relations only. Default: false

  • token (string, required): the token generated from the userid and usersecret; it is used in every API invocation

  • treatArgumentsInCountFunctionAsDirectDataflow (boolean, optional): whether to treat the arguments in the COUNT function as direct dataflow. Default: false

  • userId (string, required): the user id of the SQLFlow web or client

Responses

  • 200 OK (application/json;charset=utf-8): the exported file (binary)

  • 201 Created

  • 401 Unauthorized

  • 403 Forbidden

  • 404 Not Found

Example request:

POST /api/gspLive_backend/sqlflow/job/exportLineageAsGraphml HTTP/1.1
Host: api.gudusoft.com
Accept: */*

The 200 response body is the exported GraphML file, which you can view in the yEd Graph Editor.

submit user job

POST /api/gspLive_backend/sqlflow/job/submitUserJob

Submit a simple sqlflow job: send the SQL files and get the data lineage result. A SQLFlow job supports both multiple files and zip archive files.

Query parameters

  • account (string, optional): account name of the target database

  • csvFormat (string, optional): format code of a CSV file, i.e. the positions of the Catalog, Schema, ObjectType, ObjectName, ObjectCode, and Notes columns in the CSV file (0 if a column does not exist). Check https://www.gudusoft.com/blog/2021/09/05/sqlflow-csv/ for more detail

  • database (string, optional): database metadata in the jdbc string

  • dbvendor (string, required): database vendor. Default: dbvoracle. Possible values are the supported dbvendor codes, such as dbvoracle and dbvmssql

  • defaultDatabase (string, optional): default database when there is no metadata

  • defaultSchema (string, optional): default schema

  • excludedDbsSchemas (string, optional): list of databases and schemas to exclude from extraction, separated by commas, e.g. database1/schema1,database2 or database1.schema1,database2. This parameter works on the result set already filtered by extractedDbsSchemas. When the database parameter is filled in, this parameter is treated as a schema. Wildcard characters are supported, e.g. database1/* or */schema

  • extractedDbsSchemas (string, optional): list of databases and schemas to extract, separated by commas, in the format database/schema, e.g. database1/schema1,database2/schema2,database3 or database1.schema1,database2.schema2,database3; leave blank to extract all databases. When the database parameter is filled in, this parameter is treated as a schema. Wildcard characters are supported, e.g. database1/* or */schema

  • extractedStoredProcedures (string, optional): list of stored procedures under the specified database and schema to extract, separated by commas, in the format database.schema.procedureName or schema.procedureName, e.g. database1.schema1.procedureName1,database2.schema2.procedureName2,database3.schema3,database4 or database1/schema1/procedureName1,database2/schema2; leave blank to extract all databases. Expressions are supported

  • extractedViews (string, optional): list of views under the specified database and schema to extract, separated by commas, in the format database.schema.viewName or schema.viewName; leave blank to extract all databases. Expressions are supported

  • hostname (string, optional): db hostname

  • password (string, optional): db password

  • port (integer, optional): db port

  • ignoreFunction (boolean, optional): whether to ignore functions. Default: true

  • ignoreRecordSet (boolean, optional): whether to ignore record sets. Default: true

  • jobName (string, required): job name

  • objectCodeEncloseChar (string, optional): character that encloses the SQL content. Default: "

  • objectCodeEscapeChar (string, optional): string used to escape occurrences of objectCodeEncloseChar inside the SQL. Default: "

  • onTop (boolean, optional): whether to put the job on the top. Default: false

  • parallel (boolean, optional): whether to process the job in parallel. Parallel processing yields higher performance but may decrease lineage accuracy. Default: false

  • showConstantTable (boolean, optional): whether to show constant tables. Default: false

  • showRelationType (string, optional): the relation types to include; multiple values are separated by commas, e.g. fdd,frd,fdr,join. 'fdd': the value of the target column comes from the source column; 'frd': the recordset count of the target column is affected by the value of the source column; 'fdr': the value of the target column is affected by the recordset count of the source column; 'join': combines rows from two or more tables based on a related column between them. Default: fdd

  • showTransform (boolean, optional): whether to show transforms. Default: false

  • sqlsource (string, optional): sql source

  • token (string, required): the token generated from the userid and usersecret; it is used in every API invocation

  • treatArgumentsInCountFunctionAsDirectDataflow (boolean, optional): whether to treat the arguments in the COUNT function as direct dataflow. Default: false

  • userId (string, required): the user id of the SQLFlow web or client

Body

  • sqlfiles (binary array, optional): the SQL files to analyze

Responses

  • 200 OK (application/json;charset=utf-8): the created job, as in the example response below

  • 201 Created

  • 401 Unauthorized

  • 403 Forbidden

  • 404 Not Found

Example request:

POST /api/gspLive_backend/sqlflow/job/submitUserJob HTTP/1.1
Host: api.gudusoft.com
Content-Type: multipart/form-data
Accept: */*
Content-Length: 23

{
  "sqlfiles": [
    "binary"
  ]
}
Example response:

{
  "code": 200,
  "data": {
    "jobId": "c359aef4bd9641d697732422debd8055",
    "jobName": "job1",
    "userId": "google-oauth2|104002923119102769706",
    "dbVendor": "dbvmssql",
    "dataSource": {},
    "fileNames": [
      "1.sql",
      "1.zip"
    ],
    "createTime": "2020-12-15 15:14:39",
    "status": "create"
  }
}

submit persist or incremental job

POST /api/gspLive_backend/sqlflow/job/submitPersistJob

Query parameters

  • account (string, optional): account name of the target database

  • password (string, optional): db password

  • port (integer, optional): db port

  • database (string, optional): source database

  • schedulable (boolean, optional): whether the job is schedulable. Default: false

  • cronExpression (string, optional): cron expression that schedules when the job executes, e.g. 0 1 * * *

  • dbvendor (string, required): database vendor

  • defaultDatabase (string, optional): default database when there is no metadata

  • defaultSchema (string, optional): default schema

  • extra (string, optional): extra

  • firstSubmit (boolean, optional): whether this is the first submit. Default: false

  • hostname (string, optional): db hostname

  • incremental (boolean, optional): whether this is an incremental job. Default: true

  • jobId (string, optional): must be omitted for the first submit (note down the jobId field in the response message); required for every subsequent submit, where you give the jobId returned in the first submit response

  • jobName (string, optional): job name

  • onTop (boolean, optional): whether to put the job on the top. Default: false

  • sqlsource (string, optional): sql source

  • token (string, required): the token generated from the userid and usersecret; it is used in every API invocation

  • userId (string, required): user id

Body

  • sqlfiles (binary array, optional): the SQL files to analyze

Responses

  • 200 OK (application/json;charset=utf-8): the created job, as in the example response below

  • 201 Created

  • 401 Unauthorized

  • 403 Forbidden

  • 404 Not Found

Example request:

POST /api/gspLive_backend/sqlflow/job/submitPersistJob HTTP/1.1
Host: api.gudusoft.com
Content-Type: multipart/form-data
Accept: */*
Content-Length: 23

{
  "sqlfiles": [
    "binary"
  ]
}
Example response:

{
  "code": 200,
  "data": {
    "jobId": "c359aef4bd9641d697732422debd8055",
    "jobName": "job1",
    "userId": "google-oauth2|104002923119102769706",
    "dbVendor": "dbvmssql",
    "dataSource": {},
    "fileNames": [
      "1.sql",
      "1.zip"
    ],
    "createTime": "2020-12-15 15:14:39",
    "status": "create"
  }
}