Import Databricks Data Using Azure Data Factory
Use CData Connect Cloud to connect to Databricks from Azure Data Factory and import live Databricks data.
Microsoft Azure Data Factory (ADF) is a fully managed, serverless data integration service. When combined with CData Connect Cloud, ADF enables immediate cloud-to-cloud access to Databricks data within data flows. This article outlines the process of connecting to Databricks through Connect Cloud and accessing Databricks data within ADF.
CData Connect Cloud offers a cloud-to-cloud interface tailored for Databricks, granting you the ability to access live Databricks data within Azure Data Factory without replicating it to a natively supported database. With optimized data processing enabled by default, CData Connect Cloud pushes all supported SQL operations, including filters and JOINs, directly to Databricks. This harnesses server-side processing to expedite the retrieval of the desired Databricks data.
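To make the pushdown concrete, here is a minimal Python sketch that issues a filtered JOIN through Connect Cloud's SQL Server-compatible (TDS) endpoint, which is configured later in this article. It assumes pyodbc and the "ODBC Driver 18 for SQL Server" are installed; the database name, credentials, and the Orders/Customers tables are placeholders, not part of the original walkthrough.

```python
# Minimal sketch: query Databricks through the CData Connect Cloud
# Virtual SQL Server endpoint (setup is covered later in this article).
# Assumes pyodbc and "ODBC Driver 18 for SQL Server"; the table and
# column names below are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=tds.cdata.com,14333;"
    "DATABASE=Databricks1;"   # Connect Cloud connection name
    "UID=test@cdata.com;"     # your Connect Cloud username
    "PWD=<your-PAT>;"         # Personal Access Token (see below)
)

# Connect Cloud translates this T-SQL and pushes the JOIN and the
# WHERE filter down to Databricks, so only matching rows come back.
cursor = conn.cursor()
cursor.execute("""
    SELECT o.OrderId, c.CustomerName, o.Amount
    FROM Orders o
    JOIN Customers c ON c.CustomerId = o.CustomerId
    WHERE o.OrderDate >= '2023-01-01'
""")
for row in cursor.fetchall():
    print(row)
```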
About Databricks Data Integration
Accessing and integrating live data from Databricks has never been easier with CData. Customers rely on CData connectivity to:
- Access all versions of Databricks, from Runtime versions 9.1 - 13.X to both the Pro and Classic Databricks SQL versions.
- Leave Databricks in their preferred environment thanks to compatibility with any hosting solution.
- Securely authenticate in a variety of ways, including personal access token, Azure Service Principal, and Azure AD.
- Upload data to Databricks using Databricks File System, Azure Blob Storage, and AWS S3 Storage.
While many customers use CData's solutions to migrate data from different systems into their Databricks data lakehouse, several use our live connectivity solutions to federate connectivity between their databases and Databricks. These customers use SQL Server Linked Servers or PolyBase to get live access to Databricks from within their existing RDBMSs.
Read more about common Databricks use-cases and how CData's solutions help solve data problems in our blog: What is Databricks Used For? 6 Use Cases.
Getting Started
Configure Databricks Connectivity for ADF
Connectivity to Databricks from Azure Data Factory is made possible through CData Connect Cloud. To work with Databricks data from Azure Data Factory, we start by creating and configuring a Databricks connection.
CData Connect Cloud uses a straightforward, point-and-click interface to connect to data sources.
- Log into Connect Cloud, click Connections and click Add Connection
- Select "Databricks" from the Add Connection panel
- Enter the necessary authentication properties to connect to Databricks. To connect to a Databricks cluster, set the properties described below (a quick way to verify these values outside of Connect Cloud is sketched just after this list).
Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and opening the JDBC/ODBC tab under Advanced Options.
- Server: Set to the Server Hostname of your Databricks cluster.
- HTTPPath: Set to the HTTP Path of your Databricks cluster.
- Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
- Click Create & Test
- Navigate to the Permissions tab in the Add Databricks Connection page and update the User-based permissions.
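Before clicking Create & Test, it can help to confirm that the Server, HTTPPath, and Token values are valid. The optional sketch below (not part of the Connect Cloud setup) uses the official databricks-sql-connector Python package to run a trivial query with those same three values; the hostname and HTTP path shown are placeholders.

```python
# Optional sanity check: verify the Server, HTTPPath, and Token values
# directly against Databricks before entering them in Connect Cloud.
# Requires: pip install databricks-sql-connector
from databricks import sql

connection = sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # Server (placeholder)
    http_path="/sql/1.0/warehouses/abcdef1234567890",              # HTTPPath (placeholder)
    access_token="<your-databricks-personal-access-token>",        # Token
)

with connection.cursor() as cursor:
    cursor.execute("SELECT 1")  # succeeds only if all three values are correct
    print(cursor.fetchone())

connection.close()
```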


Add a Personal Access Token
If you are connecting from a service, application, platform, or framework that does not support OAuth authentication, you can create a Personal Access Token (PAT) to use for authentication. As a best practice, create a separate PAT for each service to maintain granular access control.
- Click on your username at the top right of the Connect Cloud app and click User Profile.
- On the User Profile page, scroll down to the Personal Access Tokens section and click Create PAT.
- Give your PAT a name and click Create.
- The personal access token is only visible at creation, so be sure to copy it and store it securely for future use.

With the connection configured, you are ready to connect to Databricks data from Azure Data Factory.
Access Live Databricks Data in Azure Data Factory
To establish a connection from Azure Data Factory to the CData Connect Cloud Virtual SQL Server API, follow these steps.
- Log in to Azure Data Factory.
- If you have not yet created a Data Factory, click New -> Dataset.
- In the search bar, enter SQL Server and select it when it appears. On the following screen, enter a name for the server. In the Linked service field, select New.
- Enter the connection settings (a short sketch that replays these same settings outside of ADF follows these steps):
- Name - enter a name of your choice.
- Server name - enter the Virtual SQL Server endpoint and port separated by a comma: tds.cdata.com,14333
- Database name - enter the Connection Name of the CData Connect Cloud data source you want to connect to (for example, Databricks1).
- User Name - enter your CData Connect Cloud username. This is displayed in the top-right corner of the CData Connect Cloud interface. For example, test@cdata.com.
- Password - select Password (not Azure Key Vault) and enter the PAT you generated earlier (see Add a Personal Access Token above).
- Click Create.
- In Set properties, set the Name, choose the Linked service we just created, select a Table name from those available, and Import schema from connection/store. Click OK.
- After creating the linked service, the following screen should appear:
- Click preview data to see the imported Databricks table.
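If you want to double-check the linked service settings outside of ADF, the hedged Python sketch below replays them against the same Virtual SQL Server endpoint with pyodbc and previews a few rows, mirroring ADF's preview data. The connection name, username, PAT, and table name are placeholders; swap in your own values.

```python
# Replays the ADF linked-service settings against the Virtual SQL
# Server endpoint to preview data, much like ADF's "preview data".
# Assumes pyodbc and "ODBC Driver 18 for SQL Server"; the database,
# user, PAT, and table name below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=tds.cdata.com,14333;"  # Server name from the linked service
    "DATABASE=Databricks1;"        # Database name = Connect Cloud connection name
    "UID=test@cdata.com;"          # User name = Connect Cloud username
    "PWD=<your-PAT>;"              # Password = the PAT created above
)

cursor = conn.cursor()
cursor.execute("SELECT TOP 10 * FROM Customers")  # hypothetical table
for row in cursor.fetchall():
    print(row)
```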
Get CData Connect Cloud
To get live data access to 100+ SaaS, Big Data, and NoSQL sources directly from your cloud applications, try CData Connect Cloud today!