Generating SQL for Snowflake using Other LLM, Other VectorDB¶
This notebook walks you through the process of generating SQL with AI (RAG + LLM) using the vanna Python package, including connecting to your database and training. If you're not ready to train on your own database yet, you can still try it out using a sample SQLite database.
Which LLM do you want to use?
Where do you want to store your "training" data?
Setup¶
In [ ]
%pip install 'vanna[snowflake]'
In [ ]
import pandas as pd

from vanna.base import VannaBase
In [ ]
class MyCustomVectorDB(VannaBase):
    def add_ddl(self, ddl: str, **kwargs) -> str:
        # Implement here
        pass

    def add_documentation(self, doc: str, **kwargs) -> str:
        # Implement here
        pass

    def add_question_sql(self, question: str, sql: str, **kwargs) -> str:
        # Implement here
        pass

    def get_related_ddl(self, question: str, **kwargs) -> list:
        # Implement here
        pass

    def get_related_documentation(self, question: str, **kwargs) -> list:
        # Implement here
        pass

    def get_similar_question_sql(self, question: str, **kwargs) -> list:
        # Implement here
        pass

    def get_training_data(self, **kwargs) -> pd.DataFrame:
        # Implement here
        pass

    def remove_training_data(self, id: str, **kwargs) -> bool:
        # Implement here
        pass

class MyCustomLLM(VannaBase):
    def __init__(self, config=None):
        pass

    def generate_plotly_code(self, question: str = None, sql: str = None, df_metadata: str = None, **kwargs) -> str:
        # Implement here
        pass

    def generate_question(self, sql: str, **kwargs) -> str:
        # Implement here
        pass

    def get_followup_questions_prompt(self, question: str, question_sql_list: list, ddl_list: list, doc_list: list, **kwargs):
        # Implement here
        pass

    def get_sql_prompt(self, question: str, question_sql_list: list, ddl_list: list, doc_list: list, **kwargs):
        # Implement here
        pass

    def submit_prompt(self, prompt, **kwargs) -> str:
        # Implement here
        pass

class MyVanna(MyCustomVectorDB, MyCustomLLM):
    def __init__(self, config=None):
        MyCustomVectorDB.__init__(self, config=config)
        MyCustomLLM.__init__(self, config=config)

vn = MyVanna()
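The stubs above only define the interface; the retrieval logic is entirely up to you. As an illustration only, here is a minimal, self-contained sketch of how the question/SQL storage and retrieval methods could be filled in with a plain in-memory store and naive keyword overlap instead of real embeddings. The class name, the dict-based return shape of get_similar_question_sql, and the DataFrame columns are assumptions for this sketch; a real implementation would delegate these calls to your vector database of choice and match the exact formats your vanna version expects.

import uuid
import pandas as pd

class InMemoryExampleStore:
    """Toy stand-in for a vector store: keeps question/SQL pairs in a dict
    and ranks them by keyword overlap rather than embedding similarity."""

    def __init__(self):
        self._question_sql = {}  # id -> (question, sql)

    def add_question_sql(self, question: str, sql: str, **kwargs) -> str:
        item_id = str(uuid.uuid4()) + "-sql"
        self._question_sql[item_id] = (question, sql)
        return item_id

    def get_similar_question_sql(self, question: str, **kwargs) -> list:
        words = set(question.lower().split())
        ranked = sorted(
            self._question_sql.values(),
            key=lambda pair: len(words & set(pair[0].lower().split())),
            reverse=True,
        )
        # Return the 10 closest matches as question/sql dicts
        return [{"question": q, "sql": s} for q, s in ranked[:10]]

    def get_training_data(self, **kwargs) -> pd.DataFrame:
        rows = [
            {"id": item_id, "question": q, "content": s, "training_data_type": "sql"}
            for item_id, (q, s) in self._question_sql.items()
        ]
        return pd.DataFrame(rows)

    def remove_training_data(self, id: str, **kwargs) -> bool:
        return self._question_sql.pop(id, None) is not None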
Which database do you want to query?
- Postgres
- [Selected] Snowflake
- BigQuery
- SQLite
- Other Database: Use Vanna to generate queries for any SQL database
In [ ]
vn.connect_to_snowflake(
account="myaccount",
username="myusername",
password="mypassword",
database="mydatabase",
role="myrole",
)
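If you prefer not to hard-code credentials in the notebook, you can read them from environment variables instead. This is just a sketch; the variable names below are placeholders, not anything Vanna requires.

import os

vn.connect_to_snowflake(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    username=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database=os.environ["SNOWFLAKE_DATABASE"],
    role=os.environ["SNOWFLAKE_ROLE"],
)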
Training¶
You only need to train once. Do not train again unless you want to add more training data.
In [ ]
# The information schema query may need some tweaking depending on your database. This is a good starting point.
df_information_schema = vn.run_sql("SELECT * FROM INFORMATION_SCHEMA.COLUMNS")
# This will break up the information schema into bite-sized chunks that can be referenced by the LLM
plan = vn.get_training_plan_generic(df_information_schema)
plan
# If you like the plan, then uncomment this and run it to train
# vn.train(plan=plan)
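Because training is a one-time step, a simple guard is to check whether training data already exists before running the plan. This sketch assumes that get_training_data returns an empty DataFrame when nothing has been stored yet:

existing = vn.get_training_data()
if existing is None or existing.empty:
    vn.train(plan=plan)
else:
    print(f"Skipping training: {len(existing)} items already stored")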
In [ ]
# The following are methods for adding training data. Make sure you modify the examples to match your database.
# DDL statements are powerful because they specify table names, column names, types, and potentially relationships
vn.train(ddl="""
CREATE TABLE IF NOT EXISTS my-table (
id INT PRIMARY KEY,
name VARCHAR(100),
age INT
)
""")
# Sometimes you may want to add documentation about your business terminology or definitions.
vn.train(documentation="Our business defines OTIF score as the percentage of orders that are delivered on time and in full")
# You can also add SQL queries to your training data. This is useful if you have some queries already lying around. You can just copy and paste those from your editor to begin generating new SQL.
vn.train(sql="SELECT * FROM my_table WHERE name = 'John Doe'")
In [ ]
# At any time you can inspect what training data the package is able to reference
training_data = vn.get_training_data()
training_data
In [ ]
# You can remove training data if there's obsolete/incorrect information.
vn.remove_training_data(id='1-ddl')
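The id to pass comes from the training data listing above. As an illustration, assuming the DataFrame returned by get_training_data includes id and training_data_type columns (as in the stub signatures earlier), you could remove all DDL items like this:

# Hypothetical column names; adapt the filter to whatever your implementation returns
ddl_ids = training_data.loc[training_data["training_data_type"] == "ddl", "id"]
for item_id in ddl_ids:
    vn.remove_training_data(id=item_id)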
Asking the AI¶
Each time you ask a new question, it will find the 10 most relevant pieces of training data and use them as part of the LLM prompt to generate the SQL.
In [ ]
vn.ask(question=...)
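For example, with a placeholder question of your own (vn.ask retrieves the relevant training data, generates the SQL, runs it against the connected database, and shows the results):

vn.ask(question="What are the top 10 customers by total sales?")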
Launch the User Interface¶
In [ ]
from vanna.flask import VannaFlaskApp
app = VannaFlaskApp(vn)
app.run()
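This starts a local Flask web server; open the URL it prints in your browser to ask questions through a chat-style interface and inspect the generated SQL, results, and charts.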