LCEL (LangChain Expression Language)

Why use LCEL?

LCEL makes it easy to build complex chains from basic components, and supports out of the box functionality such as streaming, parallelism, and logging.

That said, not every use case is a good fit for LCEL.
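A minimal sketch of that out-of-the-box functionality, assuming langchain_core is installed (RunnableLambda wraps a plain function as a Runnable):

from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 5) | RunnableLambda(lambda x: x * 2)

print(chain.invoke(3))      # 16
print(chain.batch([1, 2]))  # [12, 14] -- inputs may be processed in parallel
for chunk in chain.stream(3):
    print(chunk)            # output streams as it is produced (a single chunk here: 16)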

Basics

The pipe syntax of LCEL can be implemented with the __or__ and __call__ special methods:

class Runnable:
    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        def chained_func(*args, **kwargs):
            # the other func consumes the result of this func
            return other(self.func(*args, **kwargs))

        return Runnable(chained_func)

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)


def add_five(x):
    return x + 5


def multiply_by_two(x):
    return x * 2


# wrap the functions with Runnable
add_five = Runnable(add_five)
multiply_by_two = Runnable(multiply_by_two)

# chain them with the | operator and run the result
chain = add_five | multiply_by_two
print(chain(3))  # (3 + 5) * 2 = 16
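In real LCEL, __or__ also coerces plain functions and dicts into Runnables, so only one side of the pipe needs to be a Runnable already. A sketch, again assuming langchain_core:

from langchain_core.runnables import RunnableLambda

# the raw lambda on the right is coerced into a RunnableLambda automatically
chain = RunnableLambda(lambda x: x + 5) | (lambda x: x * 2)
print(chain.invoke(3))  # 16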

Ref: https://www.pinecone.io/learn/series/langchain/langchain-expression-language/

Example

A simple chain pipes a prompt into a model and then into an output parser:

chain1 = prompt1 | model | StrOutputParser()
chain2 = prompt2 | model | StrOutputParser()

The multi-step example below (generate an argument, list its pros and cons in parallel, then write a final response) builds on the following imports:

from operator import itemgetter

from langchain_community.chat_models import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
  1. Generate an argument about: {input} -> base_response
    planner = (
        ChatPromptTemplate.from_template("Generate an argument about: {input}")
        | ChatOpenAI()
        | StrOutputParser()
        | {"base_response": RunnablePassthrough()}
    )
    
  2. List the pros or positive aspects of {base_response}
    arguments_for = (
        ChatPromptTemplate.from_template(
            "List the pros or positive aspects of {base_response}"
        )
        | ChatOpenAI()
        | StrOutputParser()
    )
    
  3. List the cons or negative aspects of {base_response}
    arguments_against = (
        ChatPromptTemplate.from_template(
            "List the cons or negative aspects of {base_response}"
        )
        | ChatOpenAI()
        | StrOutputParser()
    )
    
  4. Generate a final response given the critique

    final_responder = (
        ChatPromptTemplate.from_messages(
            [
                ("ai", "{original_response}"),
                ("human", "Pros:\n{results_1}\n\nCons:\n{results_2}"),
                ("system", "Generate a final response given the critique"),
            ]
        )
        | ChatOpenAI()
        | StrOutputParser()
    )
    
  5. Combine the chains (the dict literal is coerced into a RunnableParallel; see the sketch after this list)

    chain = (
        planner
        | {
            "results_1": arguments_for,
            "results_2": arguments_against,
            "original_response": itemgetter("base_response"),
        }
        | final_responder
    )
    
  6. Run

    chain.invoke({"input": "scrum"})
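A note on the dict literals in steps 1 and 5: LCEL coerces a plain dict inside a chain into a RunnableParallel, which runs each value on the same input and collects the results into a dict. A minimal sketch, assuming langchain_core:

from langchain_core.runnables import RunnableLambda, RunnableParallel

# equivalent to piping into {"results_1": ..., "results_2": ...}
mapper = RunnableParallel(
    results_1=RunnableLambda(lambda x: x + 1),
    results_2=RunnableLambda(lambda x: x * 2),
)
print(mapper.invoke(3))  # {'results_1': 4, 'results_2': 6}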
    

Implementation

Runnable

  • invoke/ainvoke: Transforms a single input into an output.
  • batch/abatch: Efficiently transforms multiple inputs into outputs.
  • stream/astream: Streams output from a single input as it's produced.
  • astream_log: Streams output and selected intermediate results from an input.
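A quick sketch of these entry points and their async counterparts, assuming langchain_core:

import asyncio

from langchain_core.runnables import RunnableLambda

double = RunnableLambda(lambda x: x * 2)

async def main():
    print(await double.ainvoke(3))         # 6
    print(await double.abatch([1, 2, 3]))  # [2, 4, 6]
    async for chunk in double.astream(3):  # a plain function yields a single chunk
        print(chunk)                       # 6

asyncio.run(main())

The bind method attaches default keyword arguments to a Runnable and returns a new Runnable: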
class Runnable(Generic[Input, Output], ABC):
    def bind(self, **kwargs: Any) -> Runnable[Input, Output]:
        """
        Bind arguments to a Runnable, returning a new Runnable.
        """
        return RunnableBinding(bound=self, kwargs=kwargs, config={})

RunnableBinding

Example:

# Invoke the ChatModel, passing the extra kwarg `stop=['-']` on each call.
from langchain_community.chat_models import ChatOpenAI

model = ChatOpenAI()
model.invoke('Say "Parrot-MAGIC"', stop=['-'])  # Should return `Parrot`

# Or bind the kwarg once via the `bind` method, which returns a new
# RunnableBinding that forwards it automatically.
runnable_binding = model.bind(stop=['-'])
runnable_binding.invoke('Say "Parrot-MAGIC"')  # Should return `Parrot`
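Equivalently, the binding can be constructed directly, which mirrors what bind does internally:

from langchain_core.runnables import RunnableBinding

runnable_binding = RunnableBinding(bound=model, kwargs={'stop': ['-']})
runnable_binding.invoke('Say "Parrot-MAGIC"')  # Should return `Parrot`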

Idea

LCEL chains might be able to mirror the unconscious thought process humans follow when searching for information.

Example: 1. How do I set this up? 2. Where is the documentation? -> might ask someone 3. Read the documentation -> get most of the way there 4. Hit a problem -> ask someone

Example: 1. Ask someone 2. Receive an answer 3. Ask follow-up questions about the parts you don't understand 4. Receive further answers 5. Realize there is an actual problem 6. Turn it into a ticket

Working on your own ↔ interacting with others

Ref

  1. https://medium.com/@twjjosiah/chain-loops-in-langchain-expression-language-lcel-a38894db0cee