Visit the AWS Glue console, navigate to ETL > Jobs, and click Add job to create a new Glue job.
Populate the script properties:

Script file name: a name for the script file, for example: GlueSalesforceJDBC
S3 path where the script is stored: fill in or browse to an S3 bucket.
Temporary directory: fill in or browse to an S3 bucket.

Expand Security configuration, script libraries and job parameters (optional). For Dependent jars path, fill in or browse to the S3 bucket where you uploaded the JAR file. Be sure to include the name of the JAR file itself in the path, for example: s3://mybucket/cdata.jdbc.salesforce.jar

Click Next. Here you will have the option to add connections to other AWS endpoints, so if your destination is Redshift, MySQL, etc., you can create and use connections to those data sources. Click "Save job and edit script" to create the job. In the editor that opens, write a Python script for the job. You can use the sample script (see below) as an example.
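If you prefer to define the job programmatically rather than through the console, the same properties map onto the AWS SDK. Below is a minimal sketch using boto3's create_job call; the IAM role, bucket paths, and job name are hypothetical placeholders, not values prescribed by this walkthrough.

import boto3

# Hedged sketch: create the Glue job via the AWS SDK. The role, paths, and names
# below are hypothetical placeholders; substitute your own values.
glue = boto3.client("glue")
glue.create_job(
    Name="GlueSalesforceJDBC",                                            # script/job name from above
    Role="MyGlueServiceRole",                                             # hypothetical IAM role for Glue
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://mybucket/scripts/GlueSalesforceJDBC.py",  # S3 path where the script is stored
    },
    DefaultArguments={
        "--TempDir": "s3://mybucket/temp/",                               # temporary directory
        "--extra-jars": "s3://mybucket/cdata.jdbc.salesforce.jar",        # dependent jars path
    },
)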
To connect to Salesforce using the CData JDBC driver, you will need to create a JDBC URL, populating the necessary connection properties. Additionally, you will need to set the RTK property in the JDBC URL (unless you are using a Beta driver). You can view the licensing file included in the installation for information on how to set this property.
There are several authentication methods available for connecting to Salesforce: Login, OAuth, and SSO. The Login method requires you to have the username, password, and security token of the user. If you do not have access to the username and password or do not wish to require them, you can use OAuth authentication. SSO (single sign-on) can be used by setting the SSOProperties, SSOLoginUrl, and TokenUrl connection properties, which allow you to authenticate to an identity provider. See the "Getting Started" chapter in the help documentation for more information.

Built-in Connection String Designer

For assistance in constructing the JDBC URL, use the connection string designer built into the Salesforce JDBC Driver. Either double-click the JAR file or execute the JAR file from the command line:

java -jar cdata.jdbc.salesforce.jar
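For a rough idea of what the finished URL looks like, the helper below assembles one from the Login-method properties. The build_jdbc_url function and the placeholder credential values are illustrative only; use the properties produced by the connection string designer or described in the driver documentation.

# Hedged sketch: assemble a CData Salesforce JDBC URL from connection properties.
# The property values below are placeholders, not real credentials.
def build_jdbc_url(properties):
    pairs = "".join(f"{name}={value};" for name, value in properties.items())
    return "jdbc:salesforce:" + pairs

jdbc_url = build_jdbc_url({
    "RTK": "5246...",                          # license key (see the licensing file)
    "User": "username",
    "Password": "password",
    "SecurityToken": "Your_Security_Token",
})
# jdbc:salesforce:RTK=5246...;User=username;Password=password;SecurityToken=Your_Security_Token;

The full sample script for the Glue job follows.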
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sparkContext = SparkContext()
glueContext = GlueContext(sparkContext)
sparkSession = glueContext.spark_session
##Use the CData JDBC driver to read Salesforce data from the Account table into a DataFrame
##Note the populated JDBC URL and driver class name
source_df = sparkSession.read.format("jdbc").option("url","jdbc:salesforce:RTK=5246...;User=username;Password=password;SecurityToken=Your_Security_Token;").option("dbtable","Account").option("driver","cdata.jdbc.salesforce.SalesforceDriver").load()
glueJob = Job(glueContext)
glueJob.init(args['JOB_NAME'], args)
##Convert the DataFrame to an AWS Glue DynamicFrame
dynamic_dframe = DynamicFrame.fromDF(source_df, glueContext, "dynamic_df")
##Write the DynamicFrame as a file in CSV format to a folder in an S3 bucket.
##It is possible to write to any Amazon data store (SQL Server, Redshift, etc.) by using previously defined connections (see the sketch after this script).
retDatasink4 = glueContext.write_dynamic_frame.from_options(frame = dynamic_dframe, connection_type = "s3", connection_options = {"path": "s3://mybucket/outfiles"}, format = "csv", transformation_ctx = "datasink4")
glueJob.commit()
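As the comment in the script notes, the DynamicFrame could also land in another data store through a connection defined for the job. The continuation below is a hedged sketch that would replace the S3 write (placed before glueJob.commit()), reusing glueContext and dynamic_dframe from the script above; the connection name "my-redshift-connection", the database "dev", and the table "account" are hypothetical placeholders.

##Hedged sketch: write the same DynamicFrame to Redshift through a previously defined Glue connection.
##Connection name, database, table, and temp path are placeholders.
redshift_sink = glueContext.write_dynamic_frame.from_jdbc_conf(
    frame = dynamic_dframe,
    catalog_connection = "my-redshift-connection",
    connection_options = {"dbtable": "account", "database": "dev"},
    redshift_tmp_dir = "s3://mybucket/redshift-temp/",
    transformation_ctx = "redshift_sink"
)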