Releases: jamesrochabrun/SwiftOpenAI
SwiftOpenAI v4.0.1
SwiftOpenAI v4.0.0
What's Changed
- Fixing Typo in README.md by @jamesrochabrun in #115
Full Changelog: v4.0.0...v4.0.1
SwiftOpenAI v4.0.0
DeepSeek
The DeepSeek API uses an API format compatible with OpenAI. By modifying the configuration, you can use SwiftOpenAI to access the DeepSeek API.
Creating the service
let apiKey = "your_api_key"
let service = OpenAIServiceFactory.service(
   apiKey: apiKey,
   overrideBaseURL: "https://api.deepseek.com")
Non-Streaming Example
let prompt = "What is the Manhattan project?"
let parameters = ChatCompletionParameters(
   messages: [.init(role: .user, content: .text(prompt))],
   model: .custom("deepseek-reasoner")
)

do {
   let result = try await service.chat(parameters: parameters)

   // Access the response content
   if let content = result.choices.first?.message.content {
      print("Response: \(content)")
   }

   // Access reasoning content if available
   if let reasoning = result.choices.first?.message.reasoningContent {
      print("Reasoning: \(reasoning)")
   }
} catch {
   print("Error: \(error)")
}
Streaming Example
let prompt = "What is the Manhattan project?"
let parameters = ChatCompletionParameters(
   messages: [.init(role: .user, content: .text(prompt))],
   model: .custom("deepseek-reasoner")
)

// Start the stream
do {
   let stream = try await service.startStreamedChat(parameters: parameters)
   for try await result in stream {
      let content = result.choices.first?.delta.content ?? ""
      self.message += content

      // Optional: Handle reasoning content if available
      if let reasoning = result.choices.first?.delta.reasoningContent {
         self.reasoningMessage += reasoning
      }
   }
} catch APIError.responseUnsuccessful(let description, let statusCode) {
   self.errorMessage = "Network error with status code: \(statusCode) and description: \(description)"
} catch {
   self.errorMessage = error.localizedDescription
}
Notes
- The DeepSeek API is compatible with OpenAI's format but uses different model names.
- Use .custom("deepseek-reasoner") to specify the DeepSeek model.
- The reasoningContent field is optional and specific to DeepSeek's API.
- Error handling follows the same pattern as standard OpenAI requests.

For more information about the DeepSeek API, visit its documentation.
SwiftOpenAI v3.9.9
OpenRouter
OpenRouter provides an OpenAI-compatible completion API for 314 models and providers that you can call directly or through the OpenAI SDK. Additionally, some third-party SDKs are available.
// Creating the service
let apiKey = "your_api_key"
let service = OpenAIServiceFactory.service(
   apiKey: apiKey,
   overrideBaseURL: "https://openrouter.ai",
   proxyPath: "api",
   extraHeaders: [
      "HTTP-Referer": "<YOUR_SITE_URL>", // Optional. Site URL for rankings on openrouter.ai.
      "X-Title": "<YOUR_SITE_NAME>" // Optional. Site title for rankings on openrouter.ai.
   ])
// Making a request
let prompt = "What is the Manhattan project?"
let parameters = ChatCompletionParameters(
   messages: [.init(role: .user, content: .text(prompt))],
   model: .custom("deepseek/deepseek-r1:free"))
let stream = try await service.startStreamedChat(parameters: parameters)
For more information about the OpenRouter API, visit its documentation.
DeepSeek
The DeepSeek API uses an API format compatible with OpenAI. By modifying the configuration, you can use SwiftOpenAI to access the DeepSeek API.
// Creating the service
let apiKey = "your_api_key"
let service = OpenAIServiceFactory.service(
   apiKey: apiKey,
   overrideBaseURL: "https://api.deepseek.com")

// Making a request
let prompt = "What is the Manhattan project?"
let parameters = ChatCompletionParameters(
   messages: [.init(role: .user, content: .text(prompt))],
   model: .custom("deepseek-reasoner"))
let stream = try await service.startStreamedChat(parameters: parameters)
For more information about the DeepSeek API, visit its documentation.
SwiftOpenAI v3.9.8
What's Changed
- Removing Static properties from OpenAI API by @jamesrochabrun in #110
Full Changelog: v.3.9.6...v3.9.8
SwiftOpenAI v3.9.7
What's Changed
- Update: adds new embeddings models by @macistador in #106
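As a hedged sketch of how the new embeddings models might be used (the exact type and model-case names below are assumed from the library's existing embeddings API and should be checked against the current release):

```swift
// Hypothetical usage sketch; `EmbeddingParameter`, `.textEmbedding3Small`,
// and `createEmbeddings(parameters:)` are assumed names.
let parameters = EmbeddingParameter(
   input: "The food was delicious and the waiter was friendly.",
   model: .textEmbedding3Small)
let response = try await service.createEmbeddings(parameters: parameters)
// Each item in `response.data` carries one embedding vector.
print(response.data.first?.embedding.count ?? 0)
```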
New Contributors
- @macistador made their first contribution in #106
Full Changelog: v3.9.5...v3.9.7
SwiftOpenAI v.3.9.6
What's Changed
Predicted outputs support:
Usage:
let code = """
ScrollView {
   VStack {
      textArea
      Text(chatProvider.errorMessage)
         .foregroundColor(.red)
      streamedChatResultView
   }
}
"""

let content: ChatCompletionParameters.Message.ContentType = .text("Change this Scrollview to be a list")
let parameters = ChatCompletionParameters(
   messages: [
      .init(role: .user, content: content),
      .init(role: .user, content: .text(code))],
   model: .gpt4o,
   prediction: .init(content: .text(code)))

try await openAIService.startChat(parameters: parameters)
Other:
- Update: adds new embeddings models by @macistador in #106
New Contributors
- @macistador made their first contribution in #106
Full Changelog: v3.9.5...v.3.9.6
SwiftOpenAI v3.9.5
What's Changed
- Updates from latest OpenAI API. by @jamesrochabrun in #104
Added reasoning_effort parameter for o1 models.
Added metadata for Chat Completions.
Model updates.
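As a hedged sketch of the new options (the `reasoningEffort` and `metadata` parameter labels below are assumed from the release notes; check ChatCompletionParameters in this release for the exact names):

```swift
// Hypothetical usage sketch; parameter labels are assumed.
let parameters = ChatCompletionParameters(
   messages: [.init(role: .user, content: .text("Prove that 17 is prime."))],
   model: .custom("o1"),
   metadata: ["session_id": "abc-123"],  // arbitrary key-value tags for Chat Completions
   reasoningEffort: "high")              // "low" | "medium" | "high" for o1 models
```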
Full Changelog: v3.9.4...v3.9.5
SwiftOpenAI v3.9.4
SwiftOpenAI v3.9.3
Support for additional parameters; users can now use OpenRouter.
let service = OpenAIServiceFactory.service(
   apiKey: "${OPENROUTER_API_KEY}",
   baseURL: "https://openrouter.ai",
   proxyPath: "api",
   headers: ["HTTP-Referer": "${YOUR_SITE_URL}", "X-Title": "${YOUR_SITE_NAME}"])
What's Changed
- Adding additional parameters in API. by @jamesrochabrun in #101
Full Changelog: v3.9.2...v3.9.3
SwiftOpenAI v3.9.2
Gemini
Gemini is now accessible through the OpenAI Library. See the announcement here.

SwiftOpenAI supports all OpenAI endpoints. However, please refer to the Gemini documentation to understand which APIs are currently compatible.

You can instantiate an OpenAIService using your Gemini token like this:
let geminiAPIKey = "your_api_key"
let baseURL = "https://generativelanguage.googleapis.com"
let version = "v1beta"

let service = OpenAIServiceFactory.service(
   apiKey: geminiAPIKey,
   overrideBaseURL: baseURL,
   overrideVersion: version)
You can now create a chat request using the .custom model parameter and pass the model name as a string.
let parameters = ChatCompletionParameters(
   messages: [.init(
      role: .user,
      content: content)],
   model: .custom("gemini-1.5-flash"))
let stream = try await service.startStreamedChat(parameters: parameters)