Table of Contents
- I. Creating an Agent with LlamaIndex
  - 1. Testing the model
  - 2. Defining a custom LLM interface class
  - 3. Building the Agent with ReActAgent & FunctionTool
- II. A Database Conversation Agent
  - 1. The SQLite database
    - 1.1 Creating & connecting to the database
    - 1.2 Creating, inserting, querying, updating, and deleting data
    - 1.3 Closing the connection
    - Building the example database
  - 2. ollama
  - 3. Configuring the chat & embedding models
- III. Connecting RAG to the Agent
I. Creating an Agent with LlamaIndex
To implement an Agent with LlamaIndex, you need to import:
- FunctionTool: wraps a tool function in a FunctionTool object
  - The tool functions are what actually carry out the Agent's tasks. Note: the LLM decides which function to call based on the function's docstring, so the docstring must clearly state what the function does and what it returns
- ReActAgent: a framework for building dynamic LLM Agents by combining reasoning and acting
  - Initial reasoning: the agent first performs a reasoning step to understand the task, gather relevant information, and decide what to do next
  - Acting: based on its reasoning, the agent takes an action, such as querying an API, retrieving data, or executing a command
  - Observation: the agent observes the result of the action and collects any new information
  - Refined reasoning: using the new information, the agent reasons again, updating its understanding, plan, or hypotheses
  - Repeat: the agent repeats this cycle, alternating between reasoning and acting, until it reaches a satisfactory conclusion or completes the task
1. Testing the model
- Use a model with relatively weak math ability
```python
# https://bailian.console.aliyun.com/#/model-market/detail/chatglm3-6b?tabKey=sdk
import os

from dashscope import Generation

messages = [
    {'role': "system", 'content': 'You are a helpful assistant.'},
    {'role': "user", 'content': '9.11 和 9.8 哪個大?'},
]
gen = Generation()
response = gen.call(
    api_key=os.getenv("API_KEY"),
    model='chatglm3-6b',
    messages=messages,
    result_format='message',
)
print(response.output.choices[0].message.content)
```
```
9.11 比 9.8 更大。
```
The model claims that 9.11 is greater than 9.8, which is wrong; this is the gap the tool-calling Agent below is meant to close.
2. Defining a custom LLM interface class
```python
# https://www.datawhale.cn/learn/content/86/3058
import os
from typing import Any, Generator

from dashscope import Generation
from pydantic import Field

from llama_index.core.llms import CustomLLM, LLMMetadata, CompletionResponse
from llama_index.core.llms.callbacks import llm_completion_callback


class MyLLM(CustomLLM):
    api_key: str = Field(default=os.getenv("API_KEY"))
    base_url: str = Field(default=os.getenv("BASE_URL"))
    client: Generation = Field(default=Generation(), exclude=True)
    model_name: str

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(
            model_name=self.model_name,
            context_window=32768,  # set according to the actual model
            num_output=512,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        messages = [
            {'role': "user", 'content': prompt},  # adjust to the API's requirements
        ]
        response = self.client.call(
            api_key=self.api_key,
            model=self.model_name,
            messages=messages,
            result_format='message',
        )
        return CompletionResponse(text=response.output.choices[0].message.content)

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> Generator[CompletionResponse, None, None]:
        response = self.client.call(
            api_key=self.api_key,
            model=self.model_name,
            messages=[{'role': "user", 'content': prompt}],
            stream=True,
        )
        current_text = ""
        for chunk in response:
            content = chunk.output.choices[0].delta.get('content', '')
            current_text += content
            yield CompletionResponse(text=current_text, delta=content)


# Instantiate using the upper-case environment variable names
llm = MyLLM(
    api_key=os.getenv("API_KEY"),
    base_url=os.getenv("BASE_URL"),
    model_name='chatglm3-6b',
)
```
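A quick way to sanity-check the wrapper before handing it to an agent is to call `complete` directly. A minimal sketch, assuming `API_KEY` is set in the environment (the prompt is just a placeholder):

```python
# Hypothetical smoke test: call the custom LLM directly, without an agent
resp = llm.complete("Introduce yourself in one sentence.")
print(resp.text)
```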
3. Building the Agent with ReActAgent & FunctionTool
```python
from llama_index.core.tools import FunctionTool
from llama_index.core.agent import ReActAgent


def compare_number(a: float, b: float) -> str:
    """比較兩個數的大小"""
    if a > b:
        return f"{a} 大于 {b}"
    elif a < b:
        return f"{a} 小于 {b}"
    else:
        return f"{a} 等于 {b}"


tool = FunctionTool.from_defaults(fn=compare_number)
agent = ReActAgent.from_tools([tool], llm=llm, verbose=True)
response = agent.chat("9.11 和 9.8 哪個大?使用工具計算")
print(response)
```
```
> Running step 8c56594a-4edd-4d63-a196-99198df94e12. Step input: 9.11 和 9.8 哪個大?使用工具計算
Observation: Error: Could not parse output. Please follow the thought-action-input format. Try again.
> Running step 22bbb997-4b52-4230-8a4d-d8eda252b7d1. Step input: None
Thought: The user is asking to compare the numbers 9.11 and 9.8, and they would like to know which one is greater. I can use the compare_number function to achieve this.
Action: compare_number
Action Input: {'a': 9.11, 'b': 9.8}
Observation: 9.11 小于 9.8
> Running step c6ce4186-3ea7-48c8-8f76-7d219118afc4. Step input: None
Thought: 根據比較結果,9.11小于9.8。
Answer: 9.11 < 9.8
9.11 < 9.8
```
II. A Database Conversation Agent
1. The SQLite database
1.1 Creating & connecting to the database
```python
import sqlite3

# Connect to (or create) the database
conn = sqlite3.connect('mydatabase.db')

# Create a cursor object
cursor = conn.cursor()
```
1.2 Creating, inserting, querying, updating, and deleting data
- Create

```python
# create
create_table_sql = """
CREATE TABLE IF NOT EXISTS employees (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    department TEXT,
    salary REAL
);
"""
cursor.execute(create_table_sql)

# Commit the transaction
conn.commit()
```
- Insert

```python
insert_sql = "INSERT INTO employees (name, department, salary) VALUES (?, ?, ?)"

# Insert a single row
data = ("Alice", "Engineering", 75000.0)
cursor.execute(insert_sql, data)
conn.commit()

# Insert multiple rows
employees = [
    ("Bob", "Marketing", 68000.0),
    ("Charlie", "Sales", 72000.0),
]
cursor.executemany(insert_sql, employees)
conn.commit()
```
- Query

```python
# Conditional query (filter by department)
cursor.execute("SELECT name, salary FROM employees WHERE department=?", ("Engineering",))
engineering_employees = cursor.fetchall()

print("\nEngineering department:")
for emp in engineering_employees:
    print(f"{emp[0]} - ${emp[1]:.2f}")
```
- 更新
update_sql = "UPDATE employees SET salary = ? WHERE name = ?"
cursor.execute(update_sql, (8000.0, 'Alice'))
cursor.commit()
- Delete

```python
delete_sql = "DELETE FROM employees WHERE name = ?"
cursor.execute(delete_sql, ("Bob",))
conn.commit()
```
1.3 Closing the connection

```python
# Close the cursor and the connection (release resources)
cursor.close()
conn.close()
```
Building the example database
How to build the database in Python:

```python
import sqlite3

# create sql
sqlite_path = "llmdb.db"

# 1. Connect to the database and create a cursor object
conn = sqlite3.connect(sqlite_path)
cursor = conn.cursor()

create_sql = """
CREATE TABLE `section_stats` (
  `部門` varchar(100) DEFAULT NULL,
  `人數` int(11) DEFAULT NULL
);
"""
insert_sql = """INSERT INTO section_stats (部門, 人數) VALUES (?, ?)"""
data = [['專利部', 22], ['商務部', 25]]

# 2. Create the table
cursor.execute(create_sql)
conn.commit()

# 3. Insert the data
cursor.executemany(insert_sql, data)
conn.commit()

# 4. Close the connection
cursor.close()
conn.close()
```
2. ollama
Installing ollama
- Download and install from the official site: [https://ollama.com](https://ollama.com/)
- Install a model, e.g. run `ollama run qwen2.5:7b` (when "success" appears, the installation has completed)
  - The `>>>` prompt then appears; this is the chat window. Type `/bye` to exit the interactive session
  - Open 127.0.0.1:11434 in a browser; if "ollama is running" appears, the port is working correctly
- Environment configuration
  - `OLLAMA_MODELS` & `OLLAMA_HOST`
    1. Create a storage path, e.g. `mkdir -p ~/programs/ollama/models`
    2. Edit the environment variables: `vim ~/.bash_profile # or ~/.zshrc`, then add
       `export OLLAMA_MODELS=~/programs/ollama/models`
       `export OLLAMA_HOST=0.0.0.0:11434`
  - Make sure the machine's address is reachable and allowed by the firewall: System Preferences -> Network (Security & Privacy -> Firewall)
  - Apply the configuration: `source ~/.bash_profile # or ~/.zshrc`
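The same check can be done from code instead of a browser. A minimal sketch using the `requests` library, assuming the default host/port from the `OLLAMA_HOST` setting above:

```python
import requests

# The root endpoint returns "Ollama is running" when the server is up
resp = requests.get("http://127.0.0.1:11434")
print(resp.status_code, resp.text)
```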
3. Configuring the chat & embedding models

```
!pip install llama-index-llms-dashscope
```
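The article stops at the install step, so the following is only a sketch of how the pieces above could be wired together: it registers a chat model and an embedding model on llama-index's global `Settings`, then builds a natural-language-to-SQL query engine over the `llmdb.db` created earlier. The Ollama model names (`qwen2.5:7b`, `nomic-embed-text`) and the extra packages (`llama-index-llms-ollama`, `llama-index-embeddings-ollama`, `sqlalchemy`) are assumptions, not the author's exact setup; the DashScope integration installed above could be used for the LLM instead.

```python
from sqlalchemy import create_engine
from llama_index.core import Settings, SQLDatabase
from llama_index.core.query_engine import NLSQLTableQueryEngine
from llama_index.llms.ollama import Ollama                 # assumed: pip install llama-index-llms-ollama
from llama_index.embeddings.ollama import OllamaEmbedding  # assumed: pip install llama-index-embeddings-ollama

# Chat and embedding models served by the local ollama instance (model names are assumptions)
Settings.llm = Ollama(model="qwen2.5:7b", request_timeout=120.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# Wrap the SQLite database built earlier and expose it through a natural-language-to-SQL engine
engine = create_engine("sqlite:///llmdb.db")
sql_database = SQLDatabase(engine, include_tables=["section_stats"])
query_engine = NLSQLTableQueryEngine(sql_database=sql_database, tables=["section_stats"])

print(query_engine.query("哪個部門的人數最多?"))
```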
III. Connecting RAG to the Agent
https://github.com/deepseek-ai/DeepSeek-R1/blob/main/README.md
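The section body is missing beyond the link above, so here is a minimal sketch of one common way to connect RAG to the agent from Part I: build a vector index over local documents, wrap its query engine in a `QueryEngineTool`, and register it on a `ReActAgent`. The `./data` directory and the tool name/description are illustrative assumptions, and the index uses whatever `Settings.embed_model` was configured earlier.

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.core.agent import ReActAgent

# Load local documents (e.g. a downloaded copy of the DeepSeek-R1 README) from an assumed ./data directory
documents = SimpleDirectoryReader("./data").load_data()

# Build a vector index and expose it as a query engine (uses Settings.embed_model / Settings.llm)
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Wrap the query engine as a tool so the ReAct agent can decide when to retrieve
rag_tool = QueryEngineTool(
    query_engine=query_engine,
    metadata=ToolMetadata(
        name="deepseek_r1_docs",  # illustrative tool name
        description="Answers questions about the DeepSeek-R1 README",
    ),
)

agent = ReActAgent.from_tools([rag_tool], llm=llm, verbose=True)
print(agent.chat("DeepSeek-R1 是如何訓練的?"))
```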