| azure_openai | Send LLM Messages to an OpenAI Chat Completions endpoint on Azure |
| chatgpt | ChatGPT Wrapper (Deprecated) |
| check_claude_batch | Check Batch Processing Status for Claude API |
| check_openai_batch | Check Batch Processing Status for OpenAI Batch API |
| claude | Interact with Claude AI models via the Anthropic API |
| df_llm_message | Convert a Data Frame to an LLMMessage Object |
| fetch_claude_batch | Fetch Results for a Claude Batch |
| fetch_openai_batch | Fetch Results for an OpenAI Batch |
| generate_callback_function | Generate API-Specific Callback Function for Streaming Responses |
| get_reply | Get Assistant Reply as Text |
| get_reply_data | Get Data from an Assistant Reply by parsing structured JSON responses |
| get_user_message | Retrieve a User Message by Index |
| groq | Send LLM Messages to the Groq Chat API |
| groq_transcribe | Transcribe an Audio File Using the Groq Transcription API |
| initialize_api_env | Initialize or Retrieve API-specific Environment |
| last_reply | Get the Last Assistant Reply as Text |
| last_reply_data | Get Data from the Last Assistant Reply by parsing structured JSON responses |
| last_user_message | Retrieve the Last User Message |
| list_claude_batches | List Claude Batch Requests |
| list_openai_batches | List OpenAI Batch Requests |
| LLMMessage | Large Language Model Message Class |
| llm_message | Create or Update Large Language Model Message Object |
| mistral | Send LLM Messages to the Mistral API |
| mistral_embedding | Generate Embeddings Using Mistral API |
| ollama | Interact with local AI models via the Ollama API |
| ollama_download_model | Download a model from the Ollama API |
| ollama_embedding | Generate Embeddings Using Ollama API |
| ollama_list_models | Retrieve Model Information from the Ollama API |
| openai | Send LLM Messages to the OpenAI Chat Completions API |
| openai_embedding | Generate Embeddings Using OpenAI API |
| parse_duration_to_seconds | Parse Duration Strings Returned by the OpenAI API (internal) |
| pdf_page_batch | Batch Process PDF into LLM Messages |
| perform_api_request | Perform an API request to interact with language models |
| ratelimit_from_header | Extract rate limit information from API response headers |
| rate_limit_info | Get the current rate limit information for all or a specific API |
| send_claude_batch | Send a Batch of Messages to Claude API |
| send_openai_batch | Send a Batch of Messages to OpenAI Batch API |
| tidyllm_schema | Create a JSON schema for structured outputs |
| update_rate_limit | Update the standard API rate limit info in the hidden .tidyllm_rate_limit_env environment |
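Assuming these functions belong to the tidyllm R package, the index above suggests two core workflows: a synchronous chat cycle (build a message, send it to a provider, extract the reply) and an asynchronous batch cycle (send, check, fetch). The sketch below illustrates both; the prompt text is invented, and it presumes the relevant API key (e.g. `ANTHROPIC_API_KEY`) is already set in the environment, so treat it as a hedged illustration rather than documented usage.

```r
library(tidyllm)

# Synchronous chat: create an LLMMessage, send it to Claude,
# and pull the assistant's answer back out as text.
conversation <- llm_message("Name three R packages for data wrangling.") |>
  claude()

answer <- get_reply(conversation)   # or last_reply(conversation)

# Asynchronous batch processing follows a send / check / fetch cycle;
# fetch only once check reports the batch as complete.
batch   <- send_claude_batch(list(conversation))
status  <- check_claude_batch(batch)
results <- fetch_claude_batch(batch)
```

The same send/check/fetch naming applies to the OpenAI batch helpers (`send_openai_batch`, `check_openai_batch`, `fetch_openai_batch`), so switching providers should only require swapping the function prefix.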