OllamaFunctions

danger

This was an experimental wrapper that attempted to bolt on tool calling support to models that do not natively support it. The primary Ollama integration now supports tool calling and should be used instead.

This notebook shows how to use an experimental wrapper around Ollama that gives it tool calling capabilities.

Note that more powerful and capable models will perform better with complex schemas and/or multiple functions. The examples below use the llama3 and phi3 models. For a complete list of supported models and model variants, see the Ollama model library.

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| --- | --- | --- | --- | --- | --- | --- |
| OllamaFunctions | langchain-experimental | ✅ | ❌ | ❌ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |

Setup

To access OllamaFunctions you will need to install the langchain-experimental integration package. Follow these instructions to set up and run a local Ollama instance, as well as to download and serve supported models.

Credentials

Credentials support is not present at this time.

Installation

The OllamaFunctions class lives in the langchain-experimental package:

%pip install -qU langchain-experimental

Instantiation

OllamaFunctions takes the same init parameters as ChatOllama.

In order to use tool calling, you must also specify format="json".

from langchain_experimental.llms.ollama_functions import OllamaFunctions

llm = OllamaFunctions(model="phi3")
API Reference: OllamaFunctions
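
Tool calling additionally requires format="json", as noted above. A minimal sketch of a tool-calling-ready instance (the json_llm variable name is illustrative):

# format="json" is required for tool calling (see the note above)
json_llm = OllamaFunctions(model="phi3", format="json")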

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
AIMessage(content="J'adore programmer.", id='run-94815fcf-ae11-438a-ba3f-00819328b5cd-0')
ai_msg.content
"J'adore programmer."

Chaining

We can chain our model with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
API Reference: ChatPromptTemplate
AIMessage(content='Programmieren ist sehr verrückt! Es freut mich, dass Sie auf Programmierung so positiv eingestellt sind.', id='run-ee99be5e-4d48-4ab6-b602-35415f0bdbde-0')
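
Since the chain returns an AIMessage, a StrOutputParser from langchain_core can be appended to get just the translated string; a minimal sketch reusing the prompt defined above:

from langchain_core.output_parsers import StrOutputParser

# The parser extracts the string content from the AIMessage
string_chain = prompt | llm | StrOutputParser()
string_chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)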

Tool Calling

OllamaFunctions.bind_tools()

With OllamaFunctions.bind_tools, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to tool definition schemas, which look like the following:

from langchain_core.pydantic_v1 import BaseModel, Field


class GetWeather(BaseModel):
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm_with_tools = llm.bind_tools([GetWeather])
ai_msg = llm_with_tools.invoke(
    "what is the weather like in San Francisco",
)
ai_msg
AIMessage(content='', id='run-b9769435-ec6a-4cb8-8545-5a5035fc19bd-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'call_064c4e1cb27e4adb9e4e7ed60362ecc9'}])
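
As mentioned above, plain Python functions can also be passed to bind_tools; the signature and docstring are converted into the tool schema. A minimal sketch, where get_population is a hypothetical example function:

def get_population(city: str) -> int:
    """Get the current population of a given city."""
    return 0  # stub; a real tool would look this up


llm_with_fn_tools = llm.bind_tools([get_population])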

AIMessage.tool_calls

Notice that the AIMessage has a tool_calls attribute. This attribute contains the tool calls in a standardized ToolCall format that is model-provider agnostic.

ai_msg.tool_calls
[{'name': 'GetWeather',
  'args': {'location': 'San Francisco, CA'},
  'id': 'call_064c4e1cb27e4adb9e4e7ed60362ecc9'}]
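
The model features table also lists structured output support; assuming the standard with_structured_output interface is available on OllamaFunctions, the GetWeather schema above can be used to parse responses directly into Pydantic objects. A minimal sketch:

# Assumes the standard with_structured_output interface is available
structured_llm = llm.with_structured_output(GetWeather)
weather = structured_llm.invoke("what is the weather like in San Francisco")
# weather would be a GetWeather instance, e.g. GetWeather(location='San Francisco, CA')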

For more on binding tools and tool call outputs, head to the tool calling docs.

API reference

For detailed documentation of all OllamaFunctions features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.ollama_functions.OllamaFunctions.html

