feat: Update Leva gem to the latest version
Update the Leva gem to the latest stable version to ensure compatibility with the latest dependencies and take advantage of any bug fixes or new features. This update will improve the overall performance and functionality of the application.
kieranklaassen committed Aug 14, 2024
1 parent bc1acec commit 55b934e
Showing 5 changed files with 32 additions and 48 deletions.
README.md: 4 changes (2 additions, 2 deletions)
@@ -35,7 +35,7 @@ dataset.records << TextContent.create(text: "I's ok", expected_label: "Neutral")
 Create a run class to handle the execution of your inference logic:
 
 ```bash
-$ rails generate leva:runner sentiment
+rails generate leva:runner sentiment
 ```
 
 ```ruby
@@ -53,7 +53,7 @@ end
 Create one or more eval classes to evaluate the model's output:
 
 ```bash
-$ rails generate leva:eval sentiment_accuracy
+rails generate leva:eval sentiment_accuracy
 ```
 
 ```ruby
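
The generated runner and eval bodies are collapsed in the README hunks above, but the contract they follow is visible elsewhere in this commit: a runner's `execute(record)` returns a prediction string, and an eval's `evaluate(prediction, record)` returns a `Leva::Result`. Below is a minimal sketch of wiring the two together by hand; it assumes the `dataset` assembled earlier in the README and a `score` reader on `Leva::Result`, and Leva presumably ships its own experiment orchestration that this commit does not show.

```ruby
# Sketch only: chain the generated classes by hand over the dataset's records.
# Assumes `dataset` is the dataset built earlier in the README and that
# Leva::Result exposes a #score reader; neither is confirmed by this commit.
runner    = SentimentRun.new
evaluator = SentimentAccuracyEval.new

results = dataset.records.map do |record|
  prediction = runner.execute(record)      # => "Positive", "Negative", or "Neutral"
  evaluator.evaluate(prediction, record)   # => Leva::Result with label and score
end

accuracy = results.sum(&:score) / results.size
puts "sentiment_accuracy: #{accuracy}"
```
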
lib/generators/leva/templates/eval.rb.erb: 3 changes (0 additions, 3 deletions)
@@ -1,8 +1,6 @@
 # frozen_string_literal: true
 
 class <%= class_name %>Eval < Leva::BaseEval
-  leva_dataset_record_class "YourRecordClass"
-
   # @param prediction [String] The prediction to evaluate
   # @param record [YourRecordClass] The record to evaluate
   # @return [Leva::Result] The result of the evaluation
@@ -11,7 +9,6 @@ class <%= class_name %>Eval < Leva::BaseEval
 
     Leva::Result.new(
       label: "<%= file_name.underscore %>",
-      prediction: prediction,
       score: score
     )
   end
test/dummy/app/evals/sentiment_accuracy_eval.rb: 15 changes (15 additions, 0 deletions)
@@ -0,0 +1,15 @@
+# frozen_string_literal: true
+
+class SentimentAccuracyEval < Leva::BaseEval
+  # @param prediction [String] The prediction to evaluate
+  # @param record [TextContent] The record to evaluate
+  # @return [Leva::Result] The result of the evaluation
+  def evaluate(prediction, text_content)
+    score = prediction == text_content.expected_label ? 1.0 : 0.0
+
+    Leva::Result.new(
+      label: "sentiment_accuracy",
+      score: score
+    )
+  end
+end
test/dummy/app/evals/sentiment_eval.rb: 37 changes (0 additions, 37 deletions)

This file was deleted.

test/dummy/app/runners/sentiment_run.rb: 21 changes (15 additions, 6 deletions)
@@ -1,11 +1,20 @@
 # frozen_string_literal: true
 
 class SentimentRun < Leva::BaseRun
-  # @param text [String] The text to analyze
-  # @return [String] The sentiment analysis result
-  def execute(record)
-    # Your model execution logic here
-    # This could involve calling an API, running a local model, etc.
-    # Return the model's output
+  # Executes sentiment analysis on the given text content.
+  #
+  # @param text_content [TextContent] The text to analyze
+  # @return [String] The sentiment analysis result (Positive, Neutral, or Negative)
+  def execute(text_content)
+    text = text_content.text.downcase
+
+    case
+    when text.match?(/\b(love|great|excellent|awesome|fantastic)\b/)
+      "Positive"
+    when text.match?(/\b(hate|terrible|awful|horrible|bad)\b/)
+      "Negative"
+    else
+      "Neutral"
+    end
   end
 end
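
As a quick sanity check, the new keyword matcher can be exercised directly against the dummy app's `TextContent` model; this is a sketch only, with the attribute names taken from the README diff above.

```ruby
# Sketch only: drive the updated runner and the new eval on a single record.
record = TextContent.new(text: "This is awesome, I love it", expected_label: "Positive")

prediction = SentimentRun.new.execute(record)
# => "Positive" ("awesome" and "love" both match the positive keyword list)

result = SentimentAccuracyEval.new.evaluate(prediction, record)
# => Leva::Result with label "sentiment_accuracy" and score 1.0
```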
