feat(js): Pull SDK and build JS API refs #628

Merged · 13 commits · Jan 16, 2025
Makefile — 3 additions, 2 deletions
@@ -14,11 +14,12 @@ build-api-ref:
$(PYTHON) langsmith-sdk/python/docs/create_api_rst.py
LC_ALL=C $(PYTHON) -m sphinx -T -E -b html -d langsmith-sdk/python/docs/_build/doctrees -c langsmith-sdk/python/docs langsmith-sdk/python/docs langsmith-sdk/python/docs/_build/html -j auto
$(PYTHON) langsmith-sdk/python/docs/scripts/custom_formatter.py langsmith-sdk/docs/_build/html/

cd langsmith-sdk/js && yarn && yarn run build:typedoc --useHostedBaseUrlForAbsoluteLinks true --hostedBaseUrl "https://$${VERCEL_URL:-docs.smith.langchain.com}/reference/js/"

vercel-build: install-vercel-deps build-api-ref
mkdir -p static/reference/python
mv langsmith-sdk/python/docs/_build/html/* static/reference/python/
mkdir -p static/reference/js
mv langsmith-sdk/js/_build/api_refs/* static/reference/js/
rm -rf langsmith-sdk
NODE_OPTIONS="--max-old-space-size=5000" yarn run docusaurus build
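Aside on the new TypeDoc step: the added `cd langsmith-sdk/js && yarn run build:typedoc …` line passes TypeDoc's `hostedBaseUrl` and `useHostedBaseUrlForAbsoluteLinks` options on the command line, so absolute links in the generated JS reference resolve against the Vercel preview URL when `VERCEL_URL` is set and against docs.smith.langchain.com otherwise. The SDK's own TypeDoc config is not part of this diff; a minimal `typedoc.json` sketch compatible with these flags and with the `_build/api_refs` path consumed by `vercel-build` (the entry point below is a placeholder) might look like:

```json
{
  "entryPoints": ["src/index.ts"],
  "out": "_build/api_refs",
  "hostedBaseUrl": "https://docs.smith.langchain.com/reference/js/",
  "useHostedBaseUrlForAbsoluteLinks": true
}
```

Command-line options override values from the config file, which is how the Makefile can inject the per-deployment base URL at build time.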

docs/evaluation/concepts/index.mdx — 2 additions, 2 deletions
@@ -101,7 +101,7 @@ There are a number of ways to define and run evaluators:
- **Custom code**: Define [custom evaluators](/evaluation/how_to_guides/custom_evaluator) as Python or TypeScript functions and run them client-side using the SDKs or server-side via the UI.
- **Built-in evaluators**: LangSmith has a number of built-in evaluators that you can configure and run via the UI.

You can run evaluators using the LangSmith SDK ([Python](https://docs.smith.langchain.com/reference/python) and TypeScript), via the [Prompt Playground](../../prompt_engineering/concepts#prompt-playground), or by configuring [Rules](../../observability/how_to_guides/monitoring/rules) to automatically run them on particular tracing projects or datasets.
You can run evaluators using the LangSmith SDK ([Python](https://docs.smith.langchain.com/reference/python) and [TypeScript](https://docs.smith.langchain.com/reference/js)), via the [Prompt Playground](../../prompt_engineering/concepts#prompt-playground), or by configuring [Rules](../../observability/how_to_guides/monitoring/rules) to automatically run them on particular tracing projects or datasets.

#### Evaluation techniques

@@ -165,7 +165,7 @@ It is offline because we're evaluating on a pre-compiled set of data.
An online evaluation, on the other hand, is one in which we evaluate a deployed application's outputs on real traffic, in near realtime.
Offline evaluations are used for testing a version(s) of your application pre-deployment.

You can run offline evaluations client-side using the LangSmith SDK ([Python](https://docs.smith.langchain.com/reference/python) and TypeScript). You can run them server-side via the [Prompt Playground](../../prompt_engineering/concepts#prompt-playground) or by configuring [automations](/observability/how_to_guides/monitoring/rules) to run certain evaluators on every new experiment against a specific dataset.
You can run offline evaluations client-side using the LangSmith SDK ([Python](https://docs.smith.langchain.com/reference/python) and [TypeScript](https://docs.smith.langchain.com/reference/js)). You can run them server-side via the [Prompt Playground](../../prompt_engineering/concepts#prompt-playground) or by configuring [automations](/observability/how_to_guides/monitoring/rules) to run certain evaluators on every new experiment against a specific dataset.

![Offline](./static/offline.png)

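As a rough illustration of the client-side path described in the passages above, here is a minimal sketch using the JS SDK's `evaluate` entrypoint — the dataset name, target function, and evaluator are placeholders, and the evaluator signature may differ slightly across SDK versions:

```typescript
import { evaluate } from "langsmith/evaluation";
import type { Run, Example } from "langsmith/schemas";

// Placeholder custom evaluator: exact-match against the dataset's reference output.
function exactMatch(run: Run, example?: Example) {
  const score = run.outputs?.answer === example?.outputs?.answer ? 1 : 0;
  return { key: "exact_match", score };
}

// Placeholder target function and dataset name; runs an offline experiment client-side.
await evaluate(
  (inputs: Record<string, any>) => ({ answer: `Echo: ${inputs.question}` }),
  {
    data: "my-dataset",
    evaluators: [exactMatch],
  }
);
```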
docs/reference/index.md — 1 addition, 0 deletions
@@ -15,6 +15,7 @@ Technical reference that covers components, APIs, and other aspects of LangSmith
### SDK

- [Python SDK Reference](https://docs.smith.langchain.com/reference/python)
- [JS/TS SDK Reference](https://docs.smith.langchain.com/reference/js)
- [LangChain off-the-shelf evaluators (Python only)](./reference/sdk_reference/langchain_evaluators)

### Common data types
docusaurus.config.js — 4 additions, 0 deletions
@@ -150,6 +150,10 @@ const config = {
label: "Python",
to: "https://docs.smith.langchain.com/reference/python",
},
{
label: "JS/TS",
to: "https://docs.smith.langchain.com/reference/js",
},
],
},
],
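For orientation only — the surrounding structure is outside this hunk, so the dropdown label and position below are assumptions, not taken from the diff: in a Docusaurus config, these `label`/`to` pairs typically sit inside a navbar dropdown, which puts the new JS/TS entry next to the existing Python link roughly like this:

```js
// docusaurus.config.js (sketch): surrounding navbar structure assumed, not shown in the hunk.
const config = {
  themeConfig: {
    navbar: {
      items: [
        {
          type: "dropdown",
          label: "API Reference", // hypothetical label
          position: "left",
          items: [
            { label: "Python", to: "https://docs.smith.langchain.com/reference/python" },
            { label: "JS/TS", to: "https://docs.smith.langchain.com/reference/js" },
          ],
        },
      ],
    },
  },
};

module.exports = config;
```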
vercel.json — 18 additions, 18 deletions
@@ -87,7 +87,7 @@
"destination": "/old/category/proxy/:path*"
},
{
"source": "/category/release-notes",
"source": "/category/release-notes",
"destination": "/self_hosting/release_notes"
},
{
@@ -187,37 +187,37 @@
"destination": "/prompt_engineering/tutorials/optimize_classifier"
},
{
"source": "/evaluation/how_to_guides/evaluation/evaluate_llm_application#evaluate-on-a-particular-version-of-a-dataset",
"destination": "/evaluation/how_to_guides/evaluation/dataset_version"
"source": "/evaluation/how_to_guides/evaluation/evaluate_llm_application#evaluate-on-a-particular-version-of-a-dataset",
"destination": "/evaluation/how_to_guides/evaluation/dataset_version"
},
{
"source": "/evaluation/how_to_guides/evaluation/:path*",
"destination": "/evaluation/how_to_guides/:path*"
"source": "/evaluation/how_to_guides/evaluation/:path*",
"destination": "/evaluation/how_to_guides/:path*"
},
{
"source": "/evaluation/how_to_guides/datasets/:path*",
"destination": "/evaluation/how_to_guides/:path*"
"source": "/evaluation/how_to_guides/datasets/:path*",
"destination": "/evaluation/how_to_guides/:path*"
},
{
"source": "/evaluation/how_to_guides/human_feedback/:path*",
"destination": "/evaluation/how_to_guides/:path*"
"source": "/evaluation/how_to_guides/human_feedback/:path*",
"destination": "/evaluation/how_to_guides/:path*"
},
{
"source": "/reference/python(/?)",
"destination": "/reference/python/reference"
"source": "/reference/python(/?)",
"destination": "/reference/python/reference"
},
{
"source": "/reference/sdk_reference(/?)",
"destination": "/reference/"
"source": "/reference/sdk_reference(/?)",
"destination": "/reference/"
},
{
"source": "/administration/pricing",
"destination": "https://www.langchain.com/pricing-langsmith"
},
{
"source": "/pricing/plans",
"destination": "https://www.langchain.com/pricing-langsmith"
}
},
{
"source": "/pricing/plans",
"destination": "https://www.langchain.com/pricing-langsmith"
}
],
"builds": [
{