# 2cld ai notes
https://casa.bradnordyke.com - casaos
| doc | local link | external link |
|---|---|---|
| WebUI /api/v1/docs | http://192.168.6.30:8080/api/v1/docs | https://chat.bradnordyke.com/api/v1/docs |
| Ollama /ollama/docs | http://192.168.6.30:8080/ollama/docs | https://chat.bradnordyke.com/ollama/docs |
| OpenAI /openai/docs | http://192.168.6.30:8080/openai/docs | https://chat.bradnordyke.com/openai/docs |
| Images /images/api/v1/docs | http://192.168.6.30:8080/images/api/v1/docs | https://chat.bradnordyke.com/images/api/v1/docs |
| Audio /audio/api/v1/docs | http://192.168.6.30:8080/audio/api/v1/docs | https://chat.bradnordyke.com/audio/api/v1/docs |
| RAG /rag/api/v1/docs | http://192.168.6.30:8080/rag/api/v1/docs | https://chat.bradnordyke.com/rag/api/v1/docs |
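For a quick sanity check that the docs endpoints in the table are reachable, here is a minimal stdlib-only sketch. It assumes the local Open WebUI host/port from the table; the `docs_urls` and `check` helper names are invented here for illustration.

```python
import urllib.request

BASE = "http://192.168.6.30:8080"

# Docs paths from the table above
DOC_PATHS = [
    "/api/v1/docs",
    "/ollama/docs",
    "/openai/docs",
    "/images/api/v1/docs",
    "/audio/api/v1/docs",
    "/rag/api/v1/docs",
]

def docs_urls(base):
    """Join the base URL with each docs path from the table."""
    return [base.rstrip("/") + path for path in DOC_PATHS]

def check(url, timeout=3.0):
    """Return the HTTP status for a docs URL, or None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except OSError:
        return None
```

Calling `check(u)` for each `u` in `docs_urls(BASE)` (or swapping in the `chat.bradnordyke.com` base) prints which endpoints respond.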
### cloudflare git clone cli setup
**Q:** What's the best way to set up a CLI that accesses z.ai?

**A:** There might be a typo in your request, as z.ai is not currently a widely recognized major AI API provider. Assuming you meant x.ai (Elon Musk's xAI, creators of Grok), or that you are looking for a generic way to access a custom AI endpoint, here is the best way to set up a CLI, broken down into the two most likely scenarios.
**Scenario 1: xAI (Grok).** xAI's API is OpenAI-compatible, which means you can use standard OpenAI libraries and tools just by changing the base URL.
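"OpenAI-compatible" means the request paths, headers, and JSON body are the same as OpenAI's; only the base URL differs. A stdlib-only sketch of what such a request looks like (the `build_chat_request` helper is invented here; nothing is sent until you actually call `urlopen`):

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, prompt):
    """Build an OpenAI-style chat-completion request for any compatible endpoint."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

# Same code, different provider: only base_url changes.
req = build_chat_request("https://api.x.ai/v1", "your-key", "grok-beta", "hi")
# urllib.request.urlopen(req) would send it; omitted here to stay offline.
```

Point `base_url` at OpenAI, xAI, or a local Ollama/Open WebUI instance and the rest of the code is unchanged.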
The best way to interact with almost any AI provider from the CLI is Simon Willison's `llm` tool, which gives a clean, uniform interface to most models.
```bash
# macOS/Linux
brew install llm
# or via pip
pip install llm
```

OpenAI-compatible endpoints are supported by `llm` out of the box, so no extra plugin is needed.
```bash
# Set your xAI API key
export XAI_API_KEY="your-xai-api-key-here"

# Store the key under the name "xai" for llm to use
llm keys set xai --value "$XAI_API_KEY"
```

Then register the model by adding an entry to `extra-openai-models.yaml` in `llm`'s config directory (locate it with `dirname "$(llm logs path)"`):

```yaml
- model_id: grok-beta
  model_name: grok-beta
  api_base: "https://api.x.ai/v1"
  api_key_name: xai
```

```bash
llm -m grok-beta "Explain the meaning of life in 5 words"
```
If you want to build your own custom CLI tool, use Python with the official `openai` library, pointed at xAI:

```bash
pip install openai
```
`zai_cli.py`:

```python
import os
from openai import OpenAI

# Initialize a client pointing at xAI's OpenAI-compatible endpoint
client = OpenAI(
    api_key=os.environ.get("XAI_API_KEY"),  # ensure you exported this key
    base_url="https://api.x.ai/v1",
)

def main():
    print("Chatting with Grok (type 'exit' to quit)...")
    messages = []
    while True:
        user_input = input("You: ")
        if user_input.lower() in ["exit", "quit"]:
            break
        messages.append({"role": "user", "content": user_input})
        stream = client.chat.completions.create(
            model="grok-beta",
            messages=messages,
            stream=True,
        )
        print("AI: ", end="", flush=True)
        response_text = ""
        for chunk in stream:
            content = chunk.choices[0].delta.content or ""
            print(content, end="", flush=True)
            response_text += content
        print("\n")
        messages.append({"role": "assistant", "content": response_text})

if __name__ == "__main__":
    main()
```
```bash
export XAI_API_KEY="your-key-here"
python zai_cli.py
```
**Scenario 2: a generic REST API.** If z.ai is a private tool or a niche service that exposes a REST API, the best generic setup is a tool like HTTPie or a simple `curl` wrapper.

```bash
brew install httpie

# Replace 'Bearer' with whatever auth method z.ai uses
http --session=zai_session POST z.ai/api/v1/chat \
  Authorization:"Bearer YOUR_API_KEY" prompt="Hello world"
```
If you just want a quick command to send a prompt, add a function to your shell config (`.zshrc` or `.bashrc`):

```bash
zai() {
  # Adjust the URL and JSON payload based on z.ai's specific docs
  curl -s https://z.ai/api/chat \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"prompt\": \"$1\"}" | jq -r '.response'
}
```

Reload your shell and use it:

```bash
source ~/.zshrc
zai "What is the weather?"
```

Recommendation: if the endpoint is OpenAI-compatible, stick with the `llm` tool. It supports history, templates, and logging out of the box; you only need to change the `base_url`.