add anyscale support (#45)
jxnl authored Jan 4, 2024
1 parent e02e8a4 commit 33f33dc
Showing 7 changed files with 113 additions and 12 deletions.
2 changes: 1 addition & 1 deletion .vscode/settings.json
@@ -42,6 +42,6 @@
     "editor.defaultFormatter": "vscode.json-language-features"
   },
   "[typescript]": {
-    "editor.defaultFormatter": "vscode.typescript-language-features"
+    "editor.defaultFormatter": "esbenp.prettier-vscode"
   }
 }
5 changes: 5 additions & 0 deletions docs/blog/.authors.yml
@@ -0,0 +1,5 @@
authors:
  jxnl:
    name: Jason Liu
    description: Creator
    avatar: https://pbs.twimg.com/profile_images/1724672723748638720/qOBwmkOI_400x400.jpg
69 changes: 69 additions & 0 deletions docs/blog/posts/anyscale.md
@@ -0,0 +1,69 @@
---
draft: False
date: 2024-01-01
slug: patching
tags:
- patching
- open source
authors:
- jxnl
---

# Structured Outputs with Anyscale and Zod

Open-source LLMs are gaining popularity, and Anyscale's hosted Mistral models make it possible to obtain structured outputs using JSON schema at any scale. Instead of relying on a model's default output mode, you can use a JSON schema to constrain the output, a time-saving alternative to extensive prompt engineering.

By the end of this blog post, you will know how to use Instructor with Anyscale effectively. But before we proceed, let's first explore the concept of patching.

## Understanding Modes

Instructor's patch enhances the OpenAI client with a few extra features; you can learn more about them [here](../../concepts/modes.md). Anyscale supports the `JSON_SCHEMA` and `FUNCTIONS` modes, and with Instructor we can use the following:

- `response_model` in `create` calls, which returns an object validated against your Zod schema
- `max_retries` in `create` calls, which retries a failed call using a backoff strategy
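
The `max_retries` behavior is worth a closer look. Below is a rough, self-contained sketch of the retry-with-exponential-backoff idea; this is an illustration of the concept only, not Instructor's actual implementation, and the function names are made up for this example:

```typescript
// Illustrative sketch: retry a failing async call with exponential backoff.
// NOT Instructor's real implementation; names are invented for this example.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries: number,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt < maxRetries) {
        // wait baseDelayMs * 2^attempt before the next attempt
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt))
      }
    }
  }
  throw lastError
}

// A call that fails validation twice before producing a usable result
let attempts = 0
const result = await retryWithBackoff(async () => {
  attempts++
  if (attempts < 3) throw new Error("validation failed")
  return { age: 30, name: "Jason Liu" }
}, 3, 1)

console.log(result) // { age: 30, name: 'Jason Liu' }
```

In practice, Instructor feeds the validation error back to the model on each retry so the next attempt can correct itself; the sketch above only shows the retry/backoff skeleton.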

## Anyscale

The good news is that Anyscale employs the same OpenAI client, and its models support some of these output modes too!

!!! note "Getting access"

    If you want to try this out for yourself, check out the [Anyscale](https://anyscale.com/) website. You can get started [here](https://docs.anyscale.com/get-started).

Let's explore one of the models available in Anyscale's extensive collection!

```ts
import Instructor from "@/instructor"
import OpenAI from "openai"
import { z } from "zod"

const UserSchema = z.object({
  age: z.number(),
  name: z.string()
})

const oai = new OpenAI({
  baseURL: "https://api.endpoints.anyscale.com/v1",
  apiKey: process.env.ANYSCALE_API_KEY ?? undefined
})

const client = Instructor({
  client: oai,
  mode: "JSON_SCHEMA"
})

const user = await client.chat.completions.create({
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
  messages: [{ role: "user", content: "Jason Liu is 30 years old" }],
  response_model: UserSchema
})

console.log(user)
// {
//   age: 30,
//   name: "Jason Liu"
// }
```

You can find more information about Anyscale's output mode support [here](https://docs.endpoints.anyscale.com/).
31 changes: 31 additions & 0 deletions examples/extract_user/anyscale.ts
@@ -0,0 +1,31 @@
import Instructor from "@/instructor"
import OpenAI from "openai"
import { z } from "zod"

const UserSchema = z.object({
  age: z.number(),
  name: z.string()
})

const oai = new OpenAI({
  baseURL: "https://api.endpoints.anyscale.com/v1",
  apiKey: process.env.ANYSCALE_API_KEY ?? undefined
})

const client = Instructor({
  client: oai,
  mode: "JSON_SCHEMA"
})

const user = await client.chat.completions.create({
  messages: [{ role: "user", content: "Jason Liu is 30 years old" }],
  model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
  response_model: UserSchema,
  max_retries: 3
})

console.log(user)
// {
//   age: 30,
//   name: "Jason Liu"
// }
9 changes: 1 addition & 8 deletions examples/extract_user/index.ts
@@ -4,14 +4,7 @@ import { z } from "zod"

 const UserSchema = z.object({
   age: z.number(),
-  name: z.string().refine(name => name.includes(" "), {
-    message: "Name must contain a space"
-  }),
-  thingsThatAreTheSameAgeAsTheUser: z
-    .array(z.string(), {
-      description: "a list of random things that are the same age as the user"
-    })
-    .min(6)
+  name: z.string()
 })

 type User = z.infer<typeof UserSchema>
8 changes: 6 additions & 2 deletions src/oai/params.ts
@@ -43,13 +43,17 @@ export function OAIBuildToolFunctionParams(definition, params) {
 }

 export function OAIBuildMessageBasedParams(definition, params, mode) {
+  const { name, ...jsonSchema } = definition
+
   const MODE_SPECIFIC_CONFIGS = {
     [MODE.JSON]: {
       response_format: { type: "json_object" }
     },
     [MODE.JSON_SCHEMA]: {
-      //TODO: not sure what is different about this mode - the OAI sdk doesnt accept a schema here
-      response_format: { type: "json_object" }
+      response_format: {
+        type: "json_object",
+        schema: jsonSchema
+      }
     }
   }

GitHub Actions / run-tests warning on line 46 in src/oai/params.ts: 'name' is assigned a value but never used. Allowed unused vars must match /^_/u
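
To make the `JSON_SCHEMA` path in this hunk concrete, here is a standalone sketch of how the mode-specific config might be merged into the request params. It is simplified and renamed from `src/oai/params.ts`; the parameter shapes and names below are illustrative, not the library's actual types:

```typescript
// Simplified sketch of the change above: in JSON_SCHEMA mode the parsed
// JSON schema is forwarded inside `response_format` so the endpoint can
// constrain generation to it. Names/types here are illustrative only.
type Mode = "JSON" | "JSON_SCHEMA"

function buildMessageBasedParams(
  definition: { name: string; [key: string]: unknown },
  params: Record<string, unknown>,
  mode: Mode
): any {
  // drop `name`; the rest of the definition is the JSON schema itself
  const { name: _name, ...jsonSchema } = definition

  const MODE_SPECIFIC_CONFIGS = {
    JSON: {
      response_format: { type: "json_object" }
    },
    JSON_SCHEMA: {
      // Anyscale accepts this extra `schema` field; per the original TODO,
      // the OpenAI SDK itself does not accept a schema here
      response_format: { type: "json_object", schema: jsonSchema }
    }
  }

  return { ...params, ...MODE_SPECIFIC_CONFIGS[mode] }
}

const built = buildMessageBasedParams(
  { name: "User", type: "object", properties: { age: { type: "number" } } },
  { model: "mistralai/Mixtral-8x7B-Instruct-v0.1" },
  "JSON_SCHEMA"
)

console.log(built.response_format.schema)
// { type: 'object', properties: { age: { type: 'number' } } }
```

Note that the schema's `name` key is stripped before forwarding, since it is metadata about the Zod definition rather than part of the JSON schema the endpoint should enforce.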
1 change: 0 additions & 1 deletion tsconfig.json
@@ -18,7 +18,6 @@
   "resolveJsonModule": true,
   "isolatedModules": true,
   "incremental": true,
-  "strictNullChecks": true,
   "tsBuildInfoFile": "tsconfig.tsbuildinfo",
   "paths": {
     "@/*": ["./src/*"]
