Update README.md
jamesrochabrun authored Oct 24, 2023
1 parent 27a2c31 commit e066c2a
```swift
let modelID = "gpt-3.5-turbo-instruct"
let retrievedModel = try await service.retrieveModelWith(id: modelID)
```
```swift
/// Delete fine-tuned model
let modelID = "fine-tune-model-id"
let deletionStatus = try await service.deleteFineTuneModelWith(id: modelID)
```
### Moderations
Parameters
```swift
/// [Classifies if text violates OpenAI's Content Policy.](https://platform.openai.com/docs/api-reference/moderations/create)
public struct ModerationParameter<Input: Encodable>: Encodable {

   /// The input text to classify. Can be a single string or an array of strings.
let input: Input
   /// Two content moderation models are available: `text-moderation-stable` and `text-moderation-latest`.
   ///
   /// The default is `text-moderation-latest`, which will be automatically upgraded over time. This ensures you are always using the most accurate model. If you use `text-moderation-stable`, OpenAI will provide advance notice before updating the model. Accuracy of `text-moderation-stable` may be slightly lower than for `text-moderation-latest`.
let model: String?

enum Model: String {
case stable = "text-moderation-stable"
case latest = "text-moderation-latest"
}

init(
input: Input,
model: Model? = nil)
{
self.input = input
self.model = model?.rawValue
}
}
```
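For instance, pinning the moderation model rather than relying on the auto-upgrading default could look like this (a sketch; note that the `Model` enum above is internal to the parameter type, so this works from inside the module):

```swift
// Sketch: build parameters that classify a single string against the pinned stable model.
let parameters = ModerationParameter(
   input: "Some text to classify",
   model: .stable) // "text-moderation-stable"; omit to default to "text-moderation-latest"
```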
Response
```swift
/// The [moderation object](https://platform.openai.com/docs/api-reference/moderations/object). Represents a policy compliance report by OpenAI's content moderation model against a given input.
public struct ModerationObject: Decodable {

/// The unique identifier for the moderation request.
public let id: String
/// The model used to generate the moderation results.
public let model: String
/// A list of moderation objects.
public let results: [Moderation]

public struct Moderation: Decodable {

/// Whether the content violates OpenAI's usage policies.
public let flagged: Bool
/// A list of the categories, and whether they are flagged or not.
public let categories: Category<Bool>
      /// A list of the categories along with their scores as predicted by the model.
public let categoryScores: Category<Double>

public struct Category<T: Decodable>: Decodable {

         /// Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. Hateful content aimed at non-protected groups (e.g., chess players) is harassment.
public let hate: T
/// Hateful content that also includes violence or serious harm towards the targeted group based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
public let hateThreatening: T
/// Content that expresses, incites, or promotes harassing language towards any target.
public let harassment: T
/// Harassment content that also includes violence or serious harm towards any target.
public let harassmentThreatening: T
/// Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
public let selfHarm: T
/// Content where the speaker expresses that they are engaging or intend to engage in acts of self-harm, such as suicide, cutting, and eating disorders.
public let selfHarmIntent: T
/// Content that encourages performing acts of self-harm, such as suicide, cutting, and eating disorders, or that gives instructions or advice on how to commit such acts.
public let selfHarmInstructions: T
/// Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).
public let sexual: T
/// Sexual content that includes an individual who is under 18 years old.
public let sexualMinors: T
/// Content that depicts death, violence, or physical injury.
public let violence: T
/// Content that depicts death, violence, or physical injury in graphic detail.
public let violenceGraphic: T

enum CodingKeys: String, CodingKey {
case hate
case hateThreatening = "hate/threatening"
case harassment
case harassmentThreatening = "harassment/threatening"
case selfHarm = "self-harm"
case selfHarmIntent = "self-harm/intent"
case selfHarmInstructions = "self-harm/instructions"
case sexual
case sexualMinors = "sexual/minors"
case violence
case violenceGraphic = "violence/graphic"
}
}

enum CodingKeys: String, CodingKey {
case categories
case categoryScores = "category_scores"
case flagged
}
}
}
```
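Because `Category` is generic, each Boolean flag in `categories` has a matching confidence score under the same property name in `categoryScores`. A minimal sketch of pairing the two (the `report` helper is hypothetical):

```swift
/// Hypothetical helper: prints the violence score for every flagged result.
func report(_ moderation: ModerationObject) {
   for result in moderation.results where result.flagged {
      // `categories` carries Booleans; `categoryScores` carries the model's confidence.
      if result.categories.violence {
         print("violence score: \(result.categoryScores.violence)")
      }
   }
}
```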
Usage
```swift
/// Single prompt
let prompt = "I am going to kill him"
let parameters = ModerationParameter(input: prompt)
let isFlagged = try await service.createModerationFromText(parameters: parameters)
```
```swift
/// Multiple prompts
let prompts = ["I am going to kill him", "I am going to die"]
let parameters = ModerationParameter(input: prompts)
let isFlagged = try await service.createModerationFromTexts(parameters: parameters)
```
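Given the response model above, these calls return a `ModerationObject` rather than a Bool; reading the overall flag out of it could look like this (a sketch):

```swift
let moderation = try await service.createModerationFromText(parameters: parameters)
// `flagged` reports whether the input violates OpenAI's usage policies.
let isFlagged = moderation.results.first?.flagged ?? false
```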

