# Welcome to Promplate
Promplate is a templating framework that progressively enhances your prompt engineering workflow with minimal dependencies.
- If you want to run the example below, you need to install `openai` too. You can do so by `pip install promplate[openai]`.
Promplate runs well on Python 3.8 - 3.14, and is well-tested on CPython and PyPy.
## A simple example
Let's say I need to greet in a foreign language. Let's compose two simple prompts that just work.
```python
from promplate.llm.openai import ChatComplete  # (1)!
from promplate import Node

reply = Node.read("reply.j2")
translate = Node.read("translate.j2")

translate.run_config["temperature"] = 0

chain = reply + translate  # (2)!

complete = ChatComplete().bind(model="gpt-3.5-turbo")

context = {"lang": "chinese"}
```
1. Importing an LLM is optional. If you only use `promplate` as a templating engine, `pip install promplate` needs no dependencies.
2. Chaining nodes is simply adding them together. We believe that nice debug printing is a must for development experience. So, with some magic behind the scenes, if you `print(chain)`, you will get `</reply/> + </translate/>`. This is useful if you have a lot of prompt templates and always use `print` to debug.
Here is what `reply.j2` looks like:

```
{# import time #}

<|system|>
current time: {{ time.localtime() }}

<|user|>
Say happy new year to me in no more than 5 words.
Note that you must include the year in the message.
```
Note

This shows some special markup syntax in promplate:

- Inside `{# ... #}` is Python code to run in the context. In this case, we want to use `time.localtime()` to get the current time, so we import it in the template.
- `<|user|>` and `<|assistant|>` are chat markups. The template will be formatted into a `list[Message]` object before being passed to the LLM.
- Inside `{{ ... }}` can be any Python expression.
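For illustration, after rendering, the template above becomes a message list shaped roughly like this (the dicts below are just a sketch of the `list[Message]` result, not promplate's literal output):

```python
# a sketch of what the rendered chat template is formatted into
messages = [
    {"role": "system", "content": "current time: time.struct_time(...)"},
    {
        "role": "user",
        "content": "Say happy new year to me in no more than 5 words.\n"
        "Note that you must include the year in the message.",
    },
]
```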
Then call `chain.invoke({"lang": "chinese"}, complete).result` to get a Chinese greeting related to the current time.
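Continuing the script above, a minimal end-to-end run could look like this (the exact greeting will vary):

```python
result = chain.invoke(context, complete).result
print(result)  # e.g. a short Chinese new-year greeting for the current year
```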
## Why promplate?
I am a prompt engineer who suffered from the following problems:
### Problems
#### Writing prompts inside scripts is not elegant
- There is no syntax highlighting, no auto-completion, no linting, etc.
- The indentation is ugly, or you have to put up with lots of spaces/tabs in your prompts.
- Some characters must be escaped, like `"""` inside a Python string, or `` ` `` inside a JavaScript string.
So in `promplate`, we support writing prompts in separate files; the template name will be the filename. Of course, you can still write prompts inside scripts too; then the template name will be the variable name.

Note that `repr(foo)` and `str(foo)` are slightly different: `repr(foo)` will output `</foo/>`, while if you `print(Template("..."))` so that there is no variable name, it will be simply `<Template>`.
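A quick sketch of that naming behavior:

```python
from promplate import Template

foo = Template("Happy new year in {{ lang }}!")

print(repr(foo))       # </foo/>    (named after the variable)
print(Template("hi"))  # <Template> (no variable name to pick up)
```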
Since v0.3, promplate also supports writing chat prompts through some magic.
#### Chaining prompts is somewhat difficult
Often we need several LLM calls in a process. LCEL is LangChain's solution. Ours is similar, but every unit is a `promplate.Node` instance. Routers are implemented with 2-3 lines in callback functions through `raise Jump(...)` statements.
Promplate Nodes are just state machines.
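Here is a sketch of a two-node chain with a router. The import path of `Jump`, the `end_process` decorator, and the `Jump(...)` constructor arguments are all assumptions here; check the API reference for the exact signatures:

```python
from promplate import Node
from promplate.chain.node import Jump  # import path is an assumption

classify = Node.read("classify.j2")
casual = Node.read("casual.j2")
formal = Node.read("formal.j2")

chain = classify + casual  # the default path

@classify.end_process  # decorator name is an assumption; see the callback docs
def route(context):
    # the 2-3 line router: jump to another node based on the LLM output
    if "formal" in context.result:
        raise Jump(into=formal)  # constructor arguments are an assumption
```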
#### Chat templates are hard to read
Usually you need to manually construct the message list if you are using a chat model. In promplate, you can write chat templates in separate files and render them into a message list.
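For example, assuming the chat-markup parser is exposed as `parse_chat_markup` (the exact import path may differ):

```python
from promplate import Template
from promplate.prompt.chat import parse_chat_markup  # import path is an assumption

template = Template("""
<|system|>
You are a concise translator.

<|user|>
Translate this into {{ lang }}: {{ text }}
""")

rendered = template.render({"lang": "Chinese", "text": "Happy new year!"})
messages = parse_chat_markup(rendered)  # a list[Message]: one system, one user
```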
#### Identical prompts are hard to reuse & maintain
Promplate has a component system (in the same sense as in the frontend ecosystem), which enables you to reuse prompt template fragments across different prompts.
#### Callbacks and output parsers are hard to bind
In LangChain, you can bind callbacks to a variety of event types. Promplate has a similarly flexible callback system, and you can bind simple callbacks through decorators like `@node.pre_process`.
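For example, binding a pre-process callback through a decorator; the callback signature shown here (receiving and mutating the context) is an assumption:

```python
import time

from promplate import Node

node = Node.read("reply.j2")

@node.pre_process  # mentioned above; the exact callback signature is an assumption
def inject_time(context):
    # make `time` available inside the template before it renders
    context["time"] = time
```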
## Features
- more than templating: components, chat markup
- more than LLM: callbacks, state machines
- developer experience: full typing, good printing ...
- flexibility: underlying ecosystem power
## Further reading
You can read the quick-start tutorial, which gives a more detailed explanation. If you have any questions, feel free to ask on GitHub Discussions!