Before you start, ensure you’ve already obtained the API_KEY from the Friendli Suite > Personal Settings > API Keys.
Our products are fully compatible with the OpenAI API, so we can use the langchain-openai package by pointing it at the FriendliAI base URL.
Now we can instantiate our model object and generate chat completions.
We provide usage examples for each type of endpoint. Choose the one that best suits your needs:
We can chain our model with a prompt template.
Prompt templates convert raw user input into a better-structured prompt for the LLM.
```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a world class technical documentation writer."),
    ("user", "{input}"),
])

chain = prompt | llm
print(chain.invoke({"input": "how can langsmith help with testing?"}))
```
To get the string value instead of the message, we can add an output parser to the chain.
```python
from langchain_core.output_parsers import StrOutputParser

output_parser = StrOutputParser()

chain = prompt | llm | output_parser
print(chain.invoke({"input": "how can langsmith help with testing?"}))
```
Describe tools and their parameters, and let the model return a tool to invoke with the input arguments.
Tool calling is extremely useful for enhancing the model’s capability to provide more comprehensive and actionable responses.
The @tool decorator is used to define a tool.
If you set parse_docstring=True, the tool parses the docstring to extract descriptions of its arguments.
```python
from langchain_core.tools import tool

@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b

tools = [add, multiply]
```