Connect to Live Databricks Data in PostgreSQL Interface through CData Connect Cloud
Create a live connection to Databricks in CData Connect Cloud and connect to your Databricks data from PostgreSQL.
PostgreSQL is a popular interface for data access, and a vast number of PostgreSQL clients are available on the Internet. When you pair PostgreSQL with CData Connect Cloud, you gain database-like access to live Databricks data from PostgreSQL. In this article, we walk through the process of connecting to Databricks data in Connect Cloud and establishing a connection between Connect Cloud and PostgreSQL using a TDS foreign data wrapper (FDW).
CData Connect Cloud provides a pure SQL Server interface for Databricks, allowing you to query data from Databricks without replicating the data to a natively supported database. Using optimized data processing out of the box, CData Connect Cloud pushes all supported SQL operations (filters, JOINs, etc.) directly to Databricks, leveraging server-side processing to return the requested Databricks data quickly.
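For example, a filtered query issued through the Connect Cloud SQL Server interface (the table and column names here are illustrative, matching the Customers examples later in this article) has its WHERE clause evaluated by Databricks, so only the matching rows are returned:
SELECT id, CompanyName FROM Databricks.Customers WHERE CompanyName LIKE 'A%'; -- the filter is pushed down and applied server-side in Databricks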
About Databricks Data Integration
Accessing and integrating live data from Databricks has never been easier with CData. Customers rely on CData connectivity to:
- Access all versions of Databricks from Runtime Versions 9.1 - 13.X to both the Pro and Classic Databricks SQL versions.
- Leave Databricks in their preferred environment thanks to compatibility with any hosting solution.
- Securely authenticate in a variety of ways, including personal access token, Azure Service Principal, and Azure AD.
- Upload data to Databricks using Databricks File System, Azure Blob Storage, and AWS S3 Storage.
While many customers use CData's solutions to migrate data from different systems into their Databricks data lakehouse, several use our live connectivity solutions to federate connectivity between their databases and Databricks. These customers use SQL Server Linked Servers or PolyBase to get live access to Databricks from within their existing RDBMS.
Read more about common Databricks use-cases and how CData's solutions help solve data problems in our blog: What is Databricks Used For? 6 Use Cases.
Getting Started
Connect to Databricks in Connect Cloud
CData Connect Cloud uses a straightforward, point-and-click interface to connect to data sources.
- Log into Connect Cloud, click Connections and click Add Connection
- Select "Databricks" from the Add Connection panel
- Enter the necessary authentication properties to connect to Databricks. To connect to a Databricks cluster, set the properties as described below.
Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and opening the JDBC/ODBC tab under Advanced Options.
- Server: Set to the Server Hostname of your Databricks cluster.
- HTTPPath: Set to the HTTP Path of your Databricks cluster.
- Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
- Click Create & Test
- Navigate to the Permissions tab in the Add Databricks Connection page and update the User-based permissions.
Add a Personal Access Token
If you are connecting from a service, application, platform, or framework that does not support OAuth authentication, you can create a Personal Access Token (PAT) to use for authentication. Best practices would dictate that you create a separate PAT for each service, to maintain granularity of access.
- Click on your username at the top right of the Connect Cloud app and click User Profile.
- On the User Profile page, scroll down to the Personal Access Tokens section and click Create PAT.
- Give your PAT a name and click Create.
- The personal access token is only visible at creation, so be sure to copy it and store it securely for future use.
Build the TDS Foreign Data Wrapper
The Foreign Data Wrapper can be installed as an extension to PostgreSQL, without recompiling PostgreSQL. The tds_fdw extension is used as an example (https://github.com/tds-fdw/tds_fdw).
- You can clone and build the git repository with commands like the following:
sudo apt-get install git
git clone https://github.com/tds-fdw/tds_fdw.git
cd tds_fdw
make USE_PGXS=1
sudo make USE_PGXS=1 install
Note: If you have several PostgreSQL versions and you do not want to build for the default one, first locate the binary for pg_config, take note of its full path, and then append PG_CONFIG=<path to pg_config> after USE_PGXS=1 in the make commands.
- After you finish the installation, start the server:
sudo service postgresql start
- Then connect to the Postgres database:
psql -h localhost -U postgres -d postgres
Note: Instead of localhost, you can use the IP address where your PostgreSQL instance is hosted.
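Before moving on, you can confirm that the extension is visible to the server with a quick sanity check against the standard pg_available_extensions catalog view (run from the psql prompt):
SELECT name, default_version FROM pg_available_extensions WHERE name = 'tds_fdw'; -- returns one row if tds_fdw was built and installed correctly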
Connect to Databricks data as a PostgreSQL Database and query the data!
After you have installed the extension, follow the steps below to start executing queries to Databricks data:
- Log into your database.
- Load the extension for the database:
CREATE EXTENSION tds_fdw;
- Create a server object for Databricks data:
CREATE SERVER "Databricks1" FOREIGN DATA WRAPPER tds_fdw OPTIONS (servername 'tds.cdata.com', port '14333', database 'Databricks1');
- Configure user mapping with your email and Personal Access Token from your Connect Cloud account:
CREATE USER MAPPING FOR postgres SERVER "Databricks1" OPTIONS (username 'username@cdata.com', password 'your_personal_access_token');
- Create the local schema:
CREATE SCHEMA "Databricks1";
- Create a foreign table in your local database (an alternative using IMPORT FOREIGN SCHEMA is sketched after these steps):
-- Using a table_name definition:
CREATE FOREIGN TABLE "Databricks1".Customers (
  id varchar,
  CompanyName varchar)
  SERVER "Databricks1"
  OPTIONS (table_name 'Databricks.Customers', row_estimate_method 'showplan_all');
-- Or using a schema_name and table_name definition:
CREATE FOREIGN TABLE "Databricks1".Customers (
  id varchar,
  CompanyName varchar)
  SERVER "Databricks1"
  OPTIONS (schema_name 'Databricks', table_name 'Customers', row_estimate_method 'showplan_all');
-- Or using a query definition:
CREATE FOREIGN TABLE "Databricks1".Customers (
  id varchar,
  CompanyName varchar)
  SERVER "Databricks1"
  OPTIONS (query 'SELECT * FROM Databricks.Customers', row_estimate_method 'showplan_all');
-- Or setting a remote column name:
CREATE FOREIGN TABLE "Databricks1".Customers (
  id varchar,
  col2 varchar OPTIONS (column_name 'CompanyName'))
  SERVER "Databricks1"
  OPTIONS (schema_name 'Databricks', table_name 'Customers', row_estimate_method 'showplan_all');
- You can now execute read/write commands to Databricks:
SELECT id, CompanyName FROM "Databricks1".Customers;
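Regular PostgreSQL syntax works against the foreign table. As a minimal sketch, using the id and CompanyName columns defined above:
SELECT CompanyName FROM "Databricks1".Customers WHERE CompanyName LIKE 'A%' ORDER BY CompanyName; -- filtering and sorting behave like any other PostgreSQL table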
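As an alternative to defining each foreign table by hand, newer releases of tds_fdw (2.0 and later, per the project's documentation) support PostgreSQL's IMPORT FOREIGN SCHEMA, which generates foreign table definitions automatically. A sketch, assuming the remote schema is Databricks as in the examples above:
IMPORT FOREIGN SCHEMA "Databricks" FROM SERVER "Databricks1" INTO "Databricks1" LIMIT TO ("Customers"); -- imports only Customers; omit LIMIT TO to import every table in the schema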
More Information & Free Trial
You have now executed a simple query against live Databricks data. For more information on connecting to Databricks (and more than 100 other data sources), visit the Connect Cloud page. Sign up for a free trial and start working with live Databricks data in PostgreSQL.