
feat: log wrap_openai runs with unified usage_metadata #37


Triggered via pull request October 8, 2024 05:15
Status: Success
Total duration: 19m 15s
Artifacts

py-bench.yml

on: pull_request

Annotations

1 warning and 2 notices
benchmark
The following actions use a deprecated Node.js version and will be forced to run on node20: actions/cache@v3. For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/
Benchmark results: python/langsmith/schemas.py#L1
create_5_000_run_trees: Mean +- std dev: 587 ms +- 47 ms
create_10_000_run_trees: Mean +- std dev: 1.16 sec +- 0.06 sec
create_20_000_run_trees: Mean +- std dev: 1.16 sec +- 0.06 sec
dumps_class_nested_py_branch_and_leaf_200x400: Mean +- std dev: 768 us +- 8 us
dumps_class_nested_py_leaf_50x100: Mean +- std dev: 27.6 ms +- 0.4 ms
dumps_class_nested_py_leaf_100x200: Mean +- std dev: 113 ms +- 3 ms
dumps_dataclass_nested_50x100: Mean +- std dev: 27.8 ms +- 0.4 ms

WARNING: the benchmark result may be unstable
* the standard deviation (14.5 ms) is 25% of the mean (58.9 ms)
Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m pyperf system tune' command to reduce the system jitter.
Use pyperf stats, pyperf dump and pyperf hist to analyze results.
Use --quiet option to hide these warnings.

dumps_pydantic_nested_50x100: Mean +- std dev: 58.9 ms +- 14.5 ms

WARNING: the benchmark result may be unstable
* the standard deviation (31.7 ms) is 15% of the mean (217 ms)
Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m pyperf system tune' command to reduce the system jitter.
Use pyperf stats, pyperf dump and pyperf hist to analyze results.
Use --quiet option to hide these warnings.

dumps_pydanticv1_nested_50x100: Mean +- std dev: 217 ms +- 32 ms
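The two instability warnings above come from pyperf's relative-standard-deviation check: a result is flagged when the std dev is a large fraction of the mean. The flagged percentages can be verified with plain arithmetic from the numbers reported in this run (a sketch for illustration, not pyperf's exact code):

```python
# Relative std dev (std dev / mean) for the two flagged benchmarks,
# using the means and std devs reported in the log above (in ms).
flagged = {
    "dumps_pydantic_nested_50x100": (58.9, 14.5),
    "dumps_pydanticv1_nested_50x100": (217.0, 31.7),
}
for name, (mean, std) in flagged.items():
    rel = std / mean * 100
    print(f"{name}: std dev is {rel:.0f}% of the mean")
# -> 25% and 15%, matching the warnings in the log
```

A relative std dev this high usually means system jitter (CPU frequency scaling, background load) rather than a real regression, which is why pyperf suggests `python -m pyperf system tune`.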
Comparison against main: python/langsmith/schemas.py#L1
+-----------------------------------------------+----------+------------------------+
| Benchmark                                     | main     | changes                |
+===============================================+==========+========================+
| dumps_pydantic_nested_50x100                  | 63.7 ms  | 58.9 ms: 1.08x faster  |
+-----------------------------------------------+----------+------------------------+
| dumps_dataclass_nested_50x100                 | 27.9 ms  | 27.8 ms: 1.00x faster  |
+-----------------------------------------------+----------+------------------------+
| dumps_class_nested_py_branch_and_leaf_200x400 | 762 us   | 768 us: 1.01x slower   |
+-----------------------------------------------+----------+------------------------+
| dumps_class_nested_py_leaf_50x100             | 27.3 ms  | 27.6 ms: 1.01x slower  |
+-----------------------------------------------+----------+------------------------+
| create_5_000_run_trees                        | 572 ms   | 587 ms: 1.03x slower   |
+-----------------------------------------------+----------+------------------------+
| create_10_000_run_trees                       | 1.12 sec | 1.16 sec: 1.04x slower |
+-----------------------------------------------+----------+------------------------+
| create_20_000_run_trees                       | 1.12 sec | 1.16 sec: 1.04x slower |
+-----------------------------------------------+----------+------------------------+
| Geometric mean                                | (ref)    | 1.00x slower           |
+-----------------------------------------------+----------+------------------------+

Benchmark hidden because not significant (2): dumps_pydanticv1_nested_50x100, dumps_class_nested_py_leaf_100x200
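The per-benchmark ratios and the "Geometric mean" summary row can be reproduced from the means in the comparison: each ratio is changes/main, and the summary is their geometric mean. A sketch using the values from this run (units chosen so each pair is consistent; only the ratios matter, and this is illustrative rather than pyperf's exact implementation):

```python
import math

# (benchmark, main_mean, changes_mean), taken from the comparison table.
results = [
    ("dumps_pydantic_nested_50x100", 63.7, 58.9),                  # ms
    ("dumps_dataclass_nested_50x100", 27.9, 27.8),                 # ms
    ("dumps_class_nested_py_branch_and_leaf_200x400", 762, 768),   # us
    ("dumps_class_nested_py_leaf_50x100", 27.3, 27.6),             # ms
    ("create_5_000_run_trees", 572, 587),                          # ms
    ("create_10_000_run_trees", 1.12, 1.16),                       # sec
    ("create_20_000_run_trees", 1.12, 1.16),                       # sec
]

# Ratio > 1 means the change is slower; < 1 means faster.
ratios = [changes / main for _, main, changes in results]
geo_mean = math.prod(ratios) ** (1 / len(ratios))
print(f"geometric mean: {geo_mean:.2f}x")  # ~1.00x, matching the table
```

A geometric mean near 1.00x says the change is performance-neutral overall: the 1.08x serialization win roughly cancels the 1.03-1.04x run-tree regressions.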