How to Build an ETL App for Phoenix Data in Python with CData
Create ETL applications and real-time data pipelines for Phoenix data in Python with petl.
The rich ecosystem of Python modules lets you get to work quickly and integrate your systems more effectively. With the CData Python Connector for Phoenix and the petl framework, you can build Phoenix-connected applications and pipelines for extracting, transforming, and loading Phoenix data. This article shows how to connect to Phoenix with the CData Python Connector and use petl and pandas to extract, transform, and load Phoenix data.
With built-in, optimized data processing, the CData Python Connector offers unmatched performance for interacting with live Phoenix data in Python. When you issue complex SQL queries to Phoenix, the driver pushes supported SQL operations, like filters and aggregations, directly to Phoenix and uses the embedded SQL engine to process unsupported operations (often SQL functions and JOIN operations) client-side.
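For example, a query like the sketch below benefits from that pushdown: the aggregation is an operation Phoenix can evaluate server-side. The table and column names are placeholders, and the connection setup (mod.connect) is covered in the sections that follow.

import petl as etl
import cdata.apachephoenix as mod

# Placeholder table/column names; the aggregation in this query is an example
# of an operation the driver can push down to Phoenix.
cnxn = mod.connect("Server=localhost;Port=8765;")
agg_sql = "SELECT Column1, COUNT(Id) AS RecordCount FROM MyTable GROUP BY Column1"
print(etl.look(etl.fromdb(cnxn, agg_sql)))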
Connecting to Phoenix Data
Connecting to Phoenix data looks just like connecting to any relational data source. Create a connection string using the required connection properties. For this article, you will pass the connection string as a parameter to the connect function.
Connect to Apache Phoenix via the Phoenix Query Server. Set the Server property, typically the host name or IP address of the server hosting Apache Phoenix, and the Port property if it differs from the default.
Authenticating to Apache Phoenix
By default, no authentication is used (plain). If authentication is configured on your server, set AuthScheme to NEGOTIATE and, if necessary, the User and Password properties to authenticate through Kerberos.
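For reference, here are illustrative connection strings for both cases; the host, user, and password values are placeholders to adapt to your environment.

# Plain (no authentication), via the Phoenix Query Server:
conn_str_plain = "Server=localhost;Port=8765;"

# Kerberos authentication (placeholder host and credentials):
conn_str_kerberos = "Server=phoenix-host;Port=8765;AuthScheme=NEGOTIATE;User=myuser;Password=mypassword;"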
After installing the CData Phoenix Connector, follow the procedure below to install the other required modules and start accessing Phoenix through Python objects.
Install Required Modules
Use the pip utility to install the required modules and frameworks:
pip install petl
pip install pandas
Build an ETL App for Phoenix Data in Python
Once the required modules and frameworks are installed, we are ready to build our ETL app. Code snippets follow, but the full source code is available at the end of the article.
First, be sure to import the modules (including the CData Connector) with the following:
import petl as etl
import pandas as pd
import cdata.apachephoenix as mod
You can now connect with a connection string. Use the connect function for the CData Phoenix Connector to create a connection for working with Phoenix data.
cnxn = mod.connect("Server=localhost;Port=8765;")
Create a SQL Statement to Query Phoenix
Use SQL to create a statement for querying Phoenix. In this article, we read data from the MyTable entity.
sql = "SELECT Id, Column1 FROM MyTable WHERE Id = '123456'"
Extract, Transform, and Load the Phoenix Data
With the connection and query in place, we can use petl to extract, transform, and load the Phoenix data. In this example, we extract the Phoenix data, sort it by the Column1 column, and load it into a CSV file.
Loading Phoenix Data into a CSV File
table1 = etl.fromdb(cnxn, sql)
table2 = etl.sort(table1, 'Column1')
etl.tocsv(table2, 'mytable_data.csv')
With the CData Python Connector for Phoenix, you can work with Phoenix data just like you would with any database, including direct access to data in ETL packages like petl.
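Pandas, which we imported above, can slot into the same pipeline. As one hedged illustration, petl's todataframe and fromdataframe functions move the extracted table into a pandas DataFrame and back; cnxn and sql are the objects defined earlier, and Column1 is a placeholder column name.

# Sketch: hand the extracted Phoenix data to pandas for transformation.
# Reuses the cnxn and sql objects defined above; Column1 is a placeholder.
table1 = etl.fromdb(cnxn, sql)
df = etl.todataframe(table1)        # petl table -> pandas DataFrame
df = df.sort_values('Column1')      # transform in pandas
etl.tocsv(etl.fromdataframe(df), 'mytable_data_sorted.csv')  # load back out via petl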
Free Trial & More Information
Download a free, 30-day trial of the CData Python Connector for Phoenix to start building Python apps and scripts with connectivity to Phoenix data. Reach out to our Support Team if you have any questions.
Full Source Code
import petl as etl
import pandas as pd
import cdata.apachephoenix as mod

cnxn = mod.connect("Server=localhost;Port=8765;")

sql = "SELECT Id, Column1 FROM MyTable WHERE Id = '123456'"

table1 = etl.fromdb(cnxn, sql)
table2 = etl.sort(table1, 'Column1')
etl.tocsv(table2, 'mytable_data.csv')