Pipelines

Store LLM outputs in ready-to-fine-tune datasets.

Pipelines are a tool for storing LLM outputs for fine-tuning.

Fine-tuning a smaller model on the outputs of a larger model is a common strategy for optimizing cost and performance while maintaining consistent, high-quality responses.

Using smaller, open-source models can also enable you to self-host thanks to their less restrictive hardware requirements, giving you ownership of your models and keeping data within your network.

What is a pipeline?

A pipeline is a collection of LLM outputs that you can easily create, filter, and fine-tune on later.

There are a few steps in the pipeline lifecycle:

  1. Create the pipeline

  2. Add LLM outputs to the pipeline

  3. Filter the pipeline to create a dataset

  4. Fine-tune a model on the dataset

  5. Collect more LLM outputs, add to the pipeline, and repeat

Samples added to a pipeline can be tagged with a user_id, group_id, or custom metadata. You can filter pipelines to view segments of your data and create training datasets from those segments.
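
For example, tagging a sample on ingest and then slicing the pipeline by that tag takes two calls (each is covered in the walkthrough below):

# `pipe` and `messages` are created in the walkthrough below
pipe.add(messages=messages, user_id="user_123", metadata={"lang": "rust"})
rust_only = pipe.filter_by_metadata({"lang": "rust"})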

Getting started

Currently, pipelines are only supported through the Forefront Python and TypeScript SDKs. Below is a walkthrough of how to get started using the Python SDK:

Install the package

The TypeScript SDK can also be used in Node.js and serverless environments (including Cloudflare Workers). To install the Python SDK:

pip install forefront

Initialize the Forefront client

from forefront import ForefrontClient

# The examples below refer to this client as `ff`
ff = ForefrontClient(api_key="<YOUR_API_KEY>")
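
If you would rather not hard-code the key, you can read it from an environment variable; the variable name here is just an illustrative choice:

import os

from forefront import ForefrontClient

# FOREFRONT_API_KEY is an illustrative variable name, not one the SDK requires
ff = ForefrontClient(api_key=os.environ["FOREFRONT_API_KEY"])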

Create a pipeline

pipeline = ff.pipelines.create("my-first-pipeline")
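
The created pipeline exposes an id (the same field shown when listing pipelines below), which you can store to fetch it again later:

# Persist this ID to retrieve the pipeline later with get_by_id
print(pipeline.id)  # e.g. "pipe_123"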

Get pipelines

pipelines = ff.pipelines.list()

print(pipelines[0].id)

Get pipeline by ID

pipe = ff.pipelines.get_by_id("pipe_123")

Add data to a pipeline

# Assume the messages are the output of an LLM
messages = [
    {
        "role": "user",
        "content": "Write a hello world in rust.",
    },
    {
        "role": "assistant",
        "content": '```rust\nfn main() {\nprintln!("Hello, World!");\n}\n```',
    },
]

# Get the pipeline object if you haven't already
pipe = ff.pipelines.get_by_id("pipe_123")

# Add the data to your pipeline
# Optionally add a user_id, group_id, or key-value metadata to filter by later
pipe.add(
    messages=messages,
    user_id="user_123",
    group_id="group_a",
    metadata={"lang": "rust"},
)
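
In practice, the logged messages usually come from a live completion. Here is a minimal sketch, assuming the response follows the common OpenAI-style shape (completion.choices[0].message); adjust the access pattern to whatever the SDK actually returns:

prompt = [{"role": "user", "content": "Write a hello world in rust."}]

# ff.chat.completions.create is shown at the end of this guide;
# choices[0].message is an assumed, OpenAI-style response shape
completion = ff.chat.completions.create(
    messages=prompt,
    model="mistralai/mistral-7b",
)

# Log the prompt plus the model's reply as one sample
pipe.add(
    messages=prompt + [{"role": "assistant", "content": completion.choices[0].message.content}],
    user_id="user_123",
    metadata={"lang": "rust"},
)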

Filter pipeline data

# Get a pipeline of samples tagged with user_id "user_123"
user_examples = pipe.filter_by_user_id("user_123")

# Get a pipeline of samples tagged with group_id "group_a"
group_examples = pipe.filter_by_group_id("group_a")

# Get a pipeline of samples tagged with specific metadata
rust_examples = pipe.filter_by_metadata({"lang": "rust"})
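
Each filter returns a pipeline-like object. Assuming the returned objects support further filtering, you could narrow to one user's Rust samples like this:

# Chaining is an assumption; if unsupported, apply the filters one at a time
user_rust = pipe.filter_by_user_id("user_123").filter_by_metadata({"lang": "rust"})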

Inspect pipeline data

# Returns an array of dataset samples that
# meet the filter criteria from the previous step
data = rust_examples.get_samples()
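
As a quick sanity check before building a dataset, you can count the matching samples:

print(len(data), "samples matched the filter")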

Create dataset from pipeline

To fine-tune a model from a pipeline, you will first need to convert it to a dataset.

# Create a dataset called "my-rust-dataset" using
# the filtered pipeline object from the previous step
rust_dataset = rust_examples.create_dataset_from_pipeline("my-rust-dataset")
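
The returned dataset object exposes a dataset_string, which is what the fine-tuning call below expects as its training_dataset:

# Identifier passed to ff.fine_tunes.create below
print(rust_dataset.dataset_string)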

Create a fine-tuned model and run inference

For completeness, here is an example of creating a fine-tuning job from a dataset and then running inference on the model once training completes.

# Create fine-tuned model
my_rust_llm = ff.fine_tunes.create(
    name="my-rust-llm",
    base_model="mistralai/mistral-7b",
    training_dataset=rust_dataset.dataset_string,
    epochs=1,
    public=False,
)

# Get the model string, e.g. "team-name/my-rust-llm"
model_string = my_rust_llm.model_string

# Inference the model
completion = ff.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful coding assistant",
        },
        {
            "role": "user",
            "content": "Write the Fibonacci sequence",
        },
    ],
    model=model_string,
)
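
Assuming the response follows the same OpenAI-style shape as in the earlier sketch, you can read the reply like this:

# choices[0].message.content is an assumption about the response shape
print(completion.choices[0].message.content)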
