[Error] [JvmBridge] java.sql.SQLException: No suitable driver #271
Comments
@schinchanikar-rms: Thank you for reporting this problem! Can you provide the full logs, the precise spark-submit command you are using, and your complete C# app code, please? And by the way, you have to supply the appropriate driver through your spark-submit. For instance:
spark-submit.cmd --jars path\to\sql-server\mssql-jdbc-7.4.1.jre8.jar .... rest of the command params ...
Note that this is just an example. You'd have to replace the JAR version with the correct version you installed.
I am using the following code to load the SQL table into the dataframe:
DataFrame dataFrame = spark.Read().Format("jdbc").Option("url", "jdbc:sqlserver://localhost;databaseName=TEST_DB;integratedSecurity=true;")
.Option("driver", "com.Microsoft.SqlServerDriver")
.Option("dbtable", "Address")
.Load();
And this is the exception:
[2019-09-26T22:38:24.8283494Z] [CAWL113418] [Error] [JvmBridge] JVM method execution failed: Nonstatic method load failed for class 6 when called with 1 arguments ([Index=1, Type=String[], Value=System.String[]], )
[2019-09-26T22:38:24.8283960Z] [CAWL113418] [Error] [JvmBridge] java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:315)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:105)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:105)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:104)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:35)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:188)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.api.dotnet.DotnetBackendHandler.handleMethodCall(DotnetBackendHandler.scala:162)
at org.apache.spark.api.dotnet.DotnetBackendHandler.handleBackendRequest(DotnetBackendHandler.scala:102)
at org.apache.spark.api.dotnet.DotnetBackendHandler.channelRead0(DotnetBackendHandler.scala:29)
at org.apache.spark.api.dotnet.DotnetBackendHandler.channelRead0(DotnetBackendHandler.scala:24)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
Thanks! Can you also provide the spark-submit command you are using to submit your job?
Can you supply the appropriate driver JAR through your spark-submit? For instance:
spark-submit.cmd --jars path\to\sql-server\mssql-jdbc-7.4.1.jre8.jar .... rest of the command params ...
Note that this is just an example. You'd have to replace the JAR version with the correct version you installed.
Here is the full log. Please tell me if this is really about the driver or something else in the code:
C:\Users\schinchanikar\mySparkApp>%SPARK_HOME%\bin\spark-submit --jars bin\debug\netcoreapp2.2\mssql-jdbc-7.4.1.jre8.jar --class org.apache.spark.deploy.dotnet.DotnetRunner --master local bin\Debug\netcoreapp2.2\microsoft-spark-2.4.x-0.4.0.jar dotnet bin\Debug\netcoreapp2.2\mySparkApp.dll
19/09/30 10:07:29 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/09/30 10:07:30 INFO DotnetRunner: Starting DotnetBackend with dotnet.
19/09/30 10:07:34 INFO DotnetRunner: Port number used by DotnetBackend is 62736
19/09/30 10:07:34 INFO DotnetRunner: Adding key=spark.jars and value=file:///C:/Users/schinchanikar/mySparkApp/bin/debug/netcoreapp2.2/mssql-jdbc-7.4.1.jre8.jar,file:/C:/Users/schinchanikar/mySparkApp/bin/Debug/netcoreapp2.2/microsoft-spark-2.4.x-0.4.0.jar to environment
19/09/30 10:07:34 INFO DotnetRunner: Adding key=spark.app.name and value=org.apache.spark.deploy.dotnet.DotnetRunner to environment
19/09/30 10:07:34 INFO DotnetRunner: Adding key=spark.submit.deployMode and value=client to environment
19/09/30 10:07:34 INFO DotnetRunner: Adding key=spark.master and value=local to environment
19/09/30 10:07:34 INFO DotnetRunner: Adding key=spark.repl.local.jars and value=file:///C:/Users/schinchanikar/mySparkApp/bin/debug/netcoreapp2.2/mssql-jdbc-7.4.1.jre8.jar to environment
[2019-09-30T17:07:34.3265640Z] [CAWL113418] [Info] [ConfigurationService] Using port 62736 for connection.
[2019-09-30T17:07:34.3312609Z] [CAWL113418] [Info] [JvmBridge] JvMBridge port is 62736
19/09/30 10:07:34 INFO SparkContext: Running Spark version 2.4.1
19/09/30 10:07:34 INFO SparkContext: Submitted application: word_count_sample
19/09/30 10:07:34 INFO SecurityManager: Changing view acls to: schinchanikar
19/09/30 10:07:34 INFO SecurityManager: Changing modify acls to: schinchanikar
19/09/30 10:07:34 INFO SecurityManager: Changing view acls groups to:
19/09/30 10:07:34 INFO SecurityManager: Changing modify acls groups to:
19/09/30 10:07:34 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(schinchanikar); groups with view permissions: Set(); users with modify permissions: Set(schinchanikar); groups with modify permissions: Set()
19/09/30 10:07:34 INFO Utils: Successfully started service 'sparkDriver' on port 62743.
19/09/30 10:07:34 INFO SparkEnv: Registering MapOutputTracker
19/09/30 10:07:34 INFO SparkEnv: Registering BlockManagerMaster
19/09/30 10:07:34 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/09/30 10:07:34 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/09/30 10:07:34 INFO DiskBlockManager: Created local directory at C:\Users\schinchanikar\AppData\Local\Temp\blockmgr-af30b85f-ff9a-4eed-89d3-a71444379669
19/09/30 10:07:34 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
19/09/30 10:07:34 INFO SparkEnv: Registering OutputCommitCoordinator
19/09/30 10:07:35 INFO Utils: Successfully started service 'SparkUI' on port 4040.
19/09/30 10:07:35 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://CAWL113418.rms.com:4040
19/09/30 10:07:35 INFO SparkContext: Added JAR file:///C:/Users/schinchanikar/mySparkApp/bin/debug/netcoreapp2.2/mssql-jdbc-7.4.1.jre8.jar at spark://CAWL113418.rms.com:62743/jars/mssql-jdbc-7.4.1.jre8.jar with timestamp 1569863255250
19/09/30 10:07:35 INFO SparkContext: Added JAR file:/C:/Users/schinchanikar/mySparkApp/bin/Debug/netcoreapp2.2/microsoft-spark-2.4.x-0.4.0.jar at spark://CAWL113418.rms.com:62743/jars/microsoft-spark-2.4.x-0.4.0.jar with timestamp 1569863255253
19/09/30 10:07:35 INFO Executor: Starting executor ID driver on host localhost
19/09/30 10:07:35 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 62757.
19/09/30 10:07:35 INFO NettyBlockTransferService: Server created on CAWL113418.rms.com:62757
19/09/30 10:07:35 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/09/30 10:07:35 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, CAWL113418.rms.com, 62757, None)
19/09/30 10:07:35 INFO BlockManagerMasterEndpoint: Registering block manager CAWL113418.rms.com:62757 with 366.3 MB RAM, BlockManagerId(driver, CAWL113418.rms.com, 62757, None)
19/09/30 10:07:35 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, CAWL113418.rms.com, 62757, None)
19/09/30 10:07:35 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, CAWL113418.rms.com, 62757, None)
19/09/30 10:07:35 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/C:/Users/schinchanikar/mySparkApp/spark-warehouse').
19/09/30 10:07:35 INFO SharedState: Warehouse path is 'file:/C:/Users/schinchanikar/mySparkApp/spark-warehouse'.
19/09/30 10:07:36 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
19/09/30 10:07:36 ERROR DotnetBackendHandler: methods:
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.slf4j.Logger org.apache.spark.sql.DataFrameReader.log()
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.DataFrameReader org.apache.spark.sql.DataFrameReader.format(java.lang.String)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.load(java.lang.String[])
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.load(scala.collection.Seq)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.load()
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.load(java.lang.String)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.table(java.lang.String)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.DataFrameReader org.apache.spark.sql.DataFrameReader.schema(java.lang.String)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.DataFrameReader org.apache.spark.sql.DataFrameReader.schema(org.apache.spark.sql.types.StructType)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public java.lang.String org.apache.spark.sql.DataFrameReader.logName()
19/09/30 10:07:36 ERROR DotnetBackendHandler: public void org.apache.spark.sql.DataFrameReader.logDebug(scala.Function0)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public void org.apache.spark.sql.DataFrameReader.logDebug(scala.Function0,java.lang.Throwable)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public void org.apache.spark.sql.DataFrameReader.logTrace(scala.Function0)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public void org.apache.spark.sql.DataFrameReader.logTrace(scala.Function0,java.lang.Throwable)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public void org.apache.spark.sql.DataFrameReader.logWarning(scala.Function0,java.lang.Throwable)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public void org.apache.spark.sql.DataFrameReader.logWarning(scala.Function0)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public boolean org.apache.spark.sql.DataFrameReader.isTraceEnabled()
19/09/30 10:07:36 ERROR DotnetBackendHandler: public void org.apache.spark.sql.DataFrameReader.logError(scala.Function0)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public void org.apache.spark.sql.DataFrameReader.logError(scala.Function0,java.lang.Throwable)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public void org.apache.spark.sql.DataFrameReader.logInfo(scala.Function0)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public void org.apache.spark.sql.DataFrameReader.logInfo(scala.Function0,java.lang.Throwable)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public void org.apache.spark.sql.DataFrameReader.initializeLogIfNecessary(boolean)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public boolean org.apache.spark.sql.DataFrameReader.initializeLogIfNecessary(boolean,boolean)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.slf4j.Logger org.apache.spark.sql.DataFrameReader.org$apache$spark$internal$Logging$$log_()
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.DataFrameReader org.apache.spark.sql.DataFrameReader.options(scala.collection.Map)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.DataFrameReader org.apache.spark.sql.DataFrameReader.options(java.util.Map)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.text(java.lang.String)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.text(scala.collection.Seq)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.text(java.lang.String[])
19/09/30 10:07:36 ERROR DotnetBackendHandler: public boolean org.apache.spark.sql.DataFrameReader.initializeLogIfNecessary$default$2()
19/09/30 10:07:36 ERROR DotnetBackendHandler: public void org.apache.spark.sql.DataFrameReader.org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.json(java.lang.String)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.json(scala.collection.Seq)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.json(org.apache.spark.api.java.JavaRDD)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.json(org.apache.spark.rdd.RDD)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.json(org.apache.spark.sql.Dataset)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.json(java.lang.String[])
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.DataFrameReader org.apache.spark.sql.DataFrameReader.option(java.lang.String,double)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.DataFrameReader org.apache.spark.sql.DataFrameReader.option(java.lang.String,long)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.DataFrameReader org.apache.spark.sql.DataFrameReader.option(java.lang.String,boolean)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.DataFrameReader org.apache.spark.sql.DataFrameReader.option(java.lang.String,java.lang.String)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.jdbc(java.lang.String,java.lang.String,java.util.Properties)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.jdbc(java.lang.String,java.lang.String,java.lang.String[],java.util.Properties)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.jdbc(java.lang.String,java.lang.String,java.lang.String,long,long,int,java.util.Properties)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.orc(java.lang.String[])
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.orc(scala.collection.Seq)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.orc(java.lang.String)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.textFile(scala.collection.Seq)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.textFile(java.lang.String[])
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.textFile(java.lang.String)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.csv(org.apache.spark.sql.Dataset)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.csv(java.lang.String[])
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.csv(scala.collection.Seq)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.csv(java.lang.String)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.parquet(scala.collection.Seq)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.parquet(java.lang.String)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public org.apache.spark.sql.Dataset org.apache.spark.sql.DataFrameReader.parquet(java.lang.String[])
19/09/30 10:07:36 ERROR DotnetBackendHandler: public final void java.lang.Object.wait() throws java.lang.InterruptedException
19/09/30 10:07:36 ERROR DotnetBackendHandler: public final void java.lang.Object.wait(long,int) throws java.lang.InterruptedException
19/09/30 10:07:36 ERROR DotnetBackendHandler: public final native void java.lang.Object.wait(long) throws java.lang.InterruptedException
19/09/30 10:07:36 ERROR DotnetBackendHandler: public boolean java.lang.Object.equals(java.lang.Object)
19/09/30 10:07:36 ERROR DotnetBackendHandler: public java.lang.String java.lang.Object.toString()
19/09/30 10:07:36 ERROR DotnetBackendHandler: public native int java.lang.Object.hashCode()
19/09/30 10:07:36 ERROR DotnetBackendHandler: public final native java.lang.Class java.lang.Object.getClass()
19/09/30 10:07:36 ERROR DotnetBackendHandler: public final native void java.lang.Object.notify()
19/09/30 10:07:36 ERROR DotnetBackendHandler: public final native void java.lang.Object.notifyAll()
19/09/30 10:07:36 ERROR DotnetBackendHandler: args:
19/09/30 10:07:36 ERROR DotnetBackendHandler: argType: java.lang.String[], argValue: [Ljava.lang.String;@513f4682
[2019-09-30T17:07:36.6389962Z] [CAWL113418] [Error] [JvmBridge] JVM method execution failed: Nonstatic method load failed for class 6 when called with 1 arguments ([Index=1, Type=String[], Value=System.String[]], )
[2019-09-30T17:07:36.6390335Z] [CAWL113418] [Error] [JvmBridge] java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:315)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:105)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:105)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:104)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:35)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:188)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.api.dotnet.DotnetBackendHandler.handleMethodCall(DotnetBackendHandler.scala:162)
at org.apache.spark.api.dotnet.DotnetBackendHandler.handleBackendRequest(DotnetBackendHandler.scala:102)
at org.apache.spark.api.dotnet.DotnetBackendHandler.channelRead0(DotnetBackendHandler.scala:29)
at org.apache.spark.api.dotnet.DotnetBackendHandler.channelRead0(DotnetBackendHandler.scala:24)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
[2019-09-30T17:07:36.6655902Z] [CAWL113418] [Exception] [JvmBridge] JVM method execution failed: Nonstatic method load failed for class 6 when called with 1 arguments ([Index=1, Type=String[], Value=System.String[]], )
at Microsoft.Spark.Interop.Ipc.JvmBridge.CallJavaMethod(Boolean isStatic, Object classNameOrJvmObjectReference, String methodName, Object[] args)
Unhandled Exception: System.Exception: JVM method execution failed: Nonstatic method load failed for class 6 when called with 1 arguments ([Index=1, Type=String[], Value=System.String[]], )
at Microsoft.Spark.Interop.Ipc.JvmBridge.CallJavaMethod(Boolean isStatic, Object classNameOrJvmObjectReference, String methodName, Object[] args)
at Microsoft.Spark.Interop.Ipc.JvmBridge.CallJavaMethod(Boolean isStatic, Object classNameOrJvmObjectReference, String methodName, Object arg0)
at Microsoft.Spark.Interop.Ipc.JvmBridge.CallNonStaticJavaMethod(JvmObjectReference objectId, String methodName, Object arg0)
at Microsoft.Spark.Sql.DataFrameReader.Load(String[] paths)
at MySparkApp.Program.Main(String[] args) in C:\Users\schinchanikar\mySparkApp\Program.cs:line 18
19/09/30 10:07:39 INFO DotnetRunner: Closing DotnetBackend
19/09/30 10:07:39 INFO DotnetBackend: Requesting to close all call back sockets
19/09/30 10:07:39 INFO SparkContext: Invoking stop() from shutdown hook
19/09/30 10:07:39 INFO SparkUI: Stopped Spark web UI at http://CAWL113418.rms.com:4040
19/09/30 10:07:39 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/09/30 10:07:39 INFO MemoryStore: MemoryStore cleared
19/09/30 10:07:39 INFO BlockManager: BlockManager stopped
19/09/30 10:07:39 INFO BlockManagerMaster: BlockManagerMaster stopped
19/09/30 10:07:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/09/30 10:07:39 INFO SparkContext: Successfully stopped SparkContext
19/09/30 10:07:39 INFO ShutdownHookManager: Shutdown hook called
19/09/30 10:07:39 INFO ShutdownHookManager: Deleting directory C:\Users\schinchanikar\AppData\Local\Temp\spark-6b216f47-bfb0-4e32-9a9a-e3e7df1ce888
19/09/30 10:07:39 INFO ShutdownHookManager: Deleting directory C:\Users\schinchanikar\AppData\Local\Temp\spark-f226ee48-854b-46c3-94cd-e9b5c933b62c
Hi,
The error you are getting, "java.sql.SQLException: No suitable driver", is a JVM error that means the driver you are asking for cannot be loaded.
The first thing is to make sure you have the SQL Server driver, which I think you do; it is here: "file:///C:/Users/YOURUSERNAME/mySparkApp/bin/debug/netcoreapp2.2/mssql-jdbc-7.4.1.jre8.jar". If the jar isn't actually there, you will need to download it.
The second thing is that the Microsoft JDBC driver is in the package "com.microsoft.sqlserver.jdbc" and is called "SQLServerDriver", so when you pass the driver option to Spark it needs to be:
.Option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
but it looks like you are using:
.Option("driver", "com.Microsoft.SqlServerDriver")
Because there is no com.Microsoft package in that jar, it won't be found and you will get this error.
I also notice that in the connection string you are trying to use Windows authentication, so there is one more thing you will need to do. In the folder with the Microsoft JDBC jar there is a folder called "chs\auth\x64" which contains a DLL called "sqljdbc_auth.dll". On Windows, you will need to add the folder that contains sqljdbc_auth.dll to your path (set PATH=c:\folder\to\sqljdbc_auth.dll;%PATH%) before starting spark-shell or spark-submit. If you are running it on a cluster, I'm not sure you can get Windows authentication to work; I've always used SQL auth from Spark.
If you have the driver jar referenced using --jars pathToJar.jar, you use the right class name, and you have sqljdbc_auth.dll in a folder on your Windows PATH, you will be able to connect to SQL Server from Spark on your Windows machine.
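Putting those points together, the corrected read would look something like the sketch below. It reuses the server, database, and table names from the original snippet; only the driver class name changes, and it assumes the app is launched via spark-submit with --jars pointing at the mssql-jdbc JAR:

```csharp
using Microsoft.Spark.Sql;

class Program
{
    static void Main(string[] args)
    {
        // Assumes spark-submit was run with
        // --jars path\to\mssql-jdbc-7.4.1.jre8.jar, and (for
        // integratedSecurity) that the folder containing sqljdbc_auth.dll
        // is on the Windows PATH.
        SparkSession spark = SparkSession.Builder()
            .AppName("mySparkApp")
            .GetOrCreate();

        DataFrame dataFrame = spark.Read()
            .Format("jdbc")
            .Option("url", "jdbc:sqlserver://localhost;databaseName=TEST_DB;integratedSecurity=true;")
            // Fully qualified class name: package com.microsoft.sqlserver.jdbc,
            // class SQLServerDriver — not "com.Microsoft.SqlServerDriver".
            .Option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
            .Option("dbtable", "Address")
            .Load();

        dataFrame.Show();
    }
}
```

This requires a reachable SQL Server instance and a running Spark backend, so it is a sketch of the shape of the fix rather than something you can run standalone.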
Thanks. I am able to connect to the SQL server now.
Thanks for the help.
Suchi
|
I wrote this up, in case it helps anyone else (I think it will be quite a common issue for people using spark dotnet!): https://the.agilesql.club/2019/10/how-to-connect-spark-to-ms-sql-server-without-error-jvmbridge-java.sql.sqlexception-no-suitable-driver/ |
I tried to connect to SQL Server using a Databricks cluster and I get the following exception:
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host CAWL113418, port 1433 has failed. Error: "CAWL113418. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:227)
at com.microsoft.sqlserver.jdbc.SQLServerException.ConvertConnectExceptionToSQLServerException(SQLServerException.java:284)
at com.microsoft.sqlserver.jdbc.SocketFinder.findSocket(IOBuffer.java:2435)
at com.microsoft.sqlserver.jdbc.TDSChannel.open(IOBuffer.java:635)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2010)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1687)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1528)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:866)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:569)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:64)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:55)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:343)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:283)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:201)
at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:312)
I checked the SQL Server Configuration Manager to make sure port 1433 is enabled and the server is listening on that port.
I also added inbound and outbound rules to the firewall for port 1433.
What else can I check?
My connection string is as follows:
DataFrameReader dfr = spark.Read().Format("jdbc").Option("url", "jdbc:sqlserver://localhost;databaseName=Test_DB;")
.Option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
.Option("username", "uuser")
.Option("password", "password!23")
.Option("dbtable", "dbo.Address");
|
Where is your Databricks cluster and SQL Server instance running (on prem, Azure, somewhere else)?
Assuming you specified the correct connection information for your SQL Server instance (I am skeptical that localhost works), you would probably need to make sure that the IP ranges your Databricks cluster is using are whitelisted in your SQL Server port's firewall settings.
I would expect this error to occur regardless of whether you are using Scala, PySpark or .NET for Spark. Does it work from another Spark cluster?
Cheers
Michael
|
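A quick way to tell a driver problem from a network problem is to probe the port directly from the cluster's side of the network. A bash-specific sketch (the host below is a placeholder; `/dev/tcp` is a bash feature and `timeout` comes from GNU coreutils):

```shell
# Prints "open" if a TCP connection to host:port succeeds within 3 seconds,
# "closed" otherwise.  Run it from the cluster (e.g. a shell cell on a
# Databricks notebook), not from your laptop -- it is the cluster's view of
# the network that matters for the firewall/whitelist question.
probe() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Example with a placeholder host:
#   probe your-sql-server.example.com 1433
```

If this prints "closed" for your SQL Server, no JDBC option will help until the routing/firewall side is fixed.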
You are right to check that the SQL Server is listening on TCP port 1433 and to add a firewall rule to allow it. You will definitely need connectivity from your Databricks cluster through to your SQL Server - if you don't have that you won't be able to connect. One thing though is that SocketFinder.findSocket has to resolve the host name before it can open a socket, and I would say that it is unlikely "CAWL113418" is going to be resolvable to an IP address from a Databricks cluster, so try using the FQDN or the IP address of the SQL Server. |
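For the name-resolution half specifically, a small sketch using `getent` (Linux; the host name argument is whatever appears in your connection string):

```shell
# Prints "resolves" if this machine can map the name to an address,
# "no-dns" otherwise.  A bare machine name like CAWL113418 will typically
# resolve on the corporate network but not from a cloud cluster -- which is
# why the FQDN or raw IP is the safer thing to put in the JDBC URL.
resolves() {
  if getent hosts "$1" >/dev/null 2>&1; then
    echo "resolves"
  else
    echo "no-dns"
  fi
}

# Example with a placeholder name:
#   resolves CAWL113418
```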
Yes, I tried the server name and the IPv4 address of my server. Still doesn't work. I have two SQL Server options and I tried both: one on my local machine (I tried using the IPv4 address) and one in Azure.
|
Connecting Databricks (Azure) to on-prem - deploying Databricks to an Azure virtual network: https://docs.azuredatabricks.net/administration-guide/cloud-configurations/azure/vnet-inject.html
Have you got your Databricks cluster connected to any other SQL Servers? It sounds like you have some networking work to do :) |
Just an additional thought really: if you are struggling to connect up your various networks, you will likely find it easier to push your data from the network with SQL Server onto Blob Storage or ADLS and have Databricks read from that (ADF can help). Having ADF read your batches, move them into ADLS and then trigger a Databricks job is a pretty typical use of ADF / Databricks. I've used this when connecting to SQL Servers which we weren't allowed to open up externally (even to internal Azure subscriptions). |
It is helpful though if you can close an issue once it is complete (the original question was about finding the SQL Server driver), otherwise it is hard for other people to find answers to their questions here. |
Hi, we are going to close this issue as it has been inactive for a while and the original issue has been resolved. Please feel free to re-open it if the issue persists and/or there are any new updates. Thank you! |
Problem encountered on https://dotnet.microsoft.com/learn/data/spark-tutorial/install-spark
Operating System: windows
I am trying to read a DataFrame from a SQL database through a Spark session using spark.Read.Format("jdbc").
I installed the SQL JDBC driver as specified in https://docs.microsoft.com/en-us/sql/connect/jdbc/using-the-jdbc-driver?view=sql-server-2017, but I still get the error [Error] [JvmBridge] java.sql.SQLException: No suitable driver