This repository has been archived by the owner on Nov 24, 2023. It is now read-only.

[WIP] OpenAI nodes - nodes PR:196 #388

Closed · wants to merge 1 commit
@@ -0,0 +1,57 @@

[//]: # (Custom component imports)

import DocString from '@site/src/components/DocString';
import PythonCode from '@site/src/components/PythonCode';
import AppDisplay from '@site/src/components/AppDisplay';
import SectionBreak from '@site/src/components/SectionBreak';
import AppendixSection from '@site/src/components/AppendixSection';

[//]: # (Docstring)

import DocstringSource from '!!raw-loader!./a1-[autogen]/docstring.txt';
import PythonSource from '!!raw-loader!./a1-[autogen]/python_code.txt';

<DocString>{DocstringSource}</DocString>
<PythonCode GLink='AI_ML/OPENAI/DALLE_IMAGE_GENERATOR/DALLE_IMAGE_GENERATOR.py'>{PythonSource}</PythonCode>

<SectionBreak />



[//]: # (Examples)

## Examples

import Example1 from './examples/EX1/example.md';
import App1 from '!!raw-loader!./examples/EX1/app.json';



<AppDisplay
  nodeLabel='DALLE_IMAGE_GENERATOR'
  appImg={''}
  outputImg={''}
>
  {App1}
</AppDisplay>

<Example1 />

<SectionBreak />



[//]: # (Appendix)

import Notes from './appendix/notes.md';
import Hardware from './appendix/hardware.md';
import Media from './appendix/media.md';

## Appendix

<AppendixSection index={0} folderPath='nodes/AI_ML/OPENAI/DALLE_IMAGE_GENERATOR/appendix/'><Notes /></AppendixSection>
<AppendixSection index={1} folderPath='nodes/AI_ML/OPENAI/DALLE_IMAGE_GENERATOR/appendix/'><Hardware /></AppendixSection>
<AppendixSection index={2} folderPath='nodes/AI_ML/OPENAI/DALLE_IMAGE_GENERATOR/appendix/'><Media /></AppendixSection>


@@ -0,0 +1,14 @@

The DALLE_IMAGE_GENERATOR node takes a prompt and generates an image
using OpenAI's DALL-E model.
The prompt should be a sentence describing the image you want to generate.
The image is returned as a DataContainer of type 'image'.

Parameters
----------
prompt : string
    A sentence describing the image you want to generate.
width : int
    The width of the generated image, in pixels.
height : int
    The height of the generated image, in pixels.
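The width and height are joined into the size string sent to the API. For the DALL-E 2 endpoint used by this API version, only the square sizes 256, 512, and 1024 are accepted, so an early check can fail fast before any network call. A minimal sketch; the `validate_size` helper is hypothetical and not part of the node:

```python
# Hypothetical helper: build and validate the DALL-E size string
# before it is sent to the image-generation endpoint.
SUPPORTED_SIZES = {"256x256", "512x512", "1024x1024"}

def validate_size(width: int, height: int) -> str:
    size = f"{width}x{height}"  # same formatting the node uses
    if size not in SUPPORTED_SIZES:
        raise ValueError(
            f"Unsupported size {size}; choose one of {sorted(SUPPORTED_SIZES)}"
        )
    return size

print(validate_size(1024, 1024))  # prints 1024x1024
```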
@@ -0,0 +1,66 @@
import base64
import os
import time
from io import BytesIO

import numpy as np
from flojoy import flojoy, Image, run_in_venv

API_RETRY_ATTEMPTS = 5
API_RETRY_INTERVAL_IN_SECONDS = 1


@flojoy
@run_in_venv(pip_dependencies=["openai==0.27.8", "Pillow==10.0.0", "requests==2.28.1"])
def DALLE_IMAGE_GENERATOR(
    prompt: str,
    width: int = 1024,
    height: int = 1024,
) -> Image:
    import openai
    from PIL import Image as PilImage

    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise Exception("OPENAI_API_KEY environment variable not set")

    openai.api_key = api_key

    for i in range(API_RETRY_ATTEMPTS):
        try:
            result = openai.Image.create(
                prompt=prompt, n=1, size=f"{width}x{height}", response_format="b64_json"
            )
            print(f"Image generated on attempt {i}")
            break
        except openai.error.RateLimitError:
            # On the final attempt, give up instead of sleeping again.
            if i == API_RETRY_ATTEMPTS - 1:
                raise Exception("Rate limit error. Max retries exceeded.")

            print(
                f"Rate limit error, retrying in {API_RETRY_INTERVAL_IN_SECONDS} seconds"
            )
            time.sleep(API_RETRY_INTERVAL_IN_SECONDS)

    if not result.data:
        raise Exception("No image data in result")

    base64_content = result.data[0].b64_json
    image_data = base64.b64decode(base64_content)
    img = PilImage.open(BytesIO(image_data))

    img_array = np.asarray(img)
    red_channel = img_array[:, :, 0]
    green_channel = img_array[:, :, 1]
    blue_channel = img_array[:, :, 2]

    alpha_channel = None
    if img_array.shape[2] == 4:
        alpha_channel = img_array[:, :, 3]

    return Image(
        r=red_channel,
        g=green_channel,
        b=blue_channel,
        a=alpha_channel,
    )
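The channel-splitting step at the end of the node can be illustrated on its own. This sketch uses a tiny synthetic RGBA array in place of a decoded DALL-E image:

```python
import numpy as np

# 2x2 synthetic RGBA image standing in for the decoded PNG.
img_array = np.zeros((2, 2, 4), dtype=np.uint8)
img_array[..., 0] = 255  # red channel fully on
img_array[..., 3] = 128  # semi-transparent alpha

# Same slicing the node performs on the decoded image.
red = img_array[:, :, 0]
alpha = img_array[:, :, 3] if img_array.shape[2] == 4 else None

print(red.shape, int(red[0, 0]), int(alpha[0, 0]))  # prints (2, 2) 255 128
```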
@@ -0,0 +1 @@
This node does not require any peripheral hardware to operate. Please see INSTRUMENTS for nodes that interact with the physical world through connected hardware.
@@ -0,0 +1 @@
No supporting screenshots, photos, or videos have been added to the media.md file for this node.
@@ -0,0 +1 @@
No theory or technical notes have been contributed for this node yet.
@@ -0,0 +1,6 @@
This app uses the DALL-E node to generate images from text.
Based on the user's input, the node generates an image and passes it to the next node.
The input text is defined in the node's form; for this app, we use the following text:
```
A cute baby sea otter
```
57 changes: 57 additions & 0 deletions docs/nodes/AI_ML/OPENAI/JSON_EXTRACTOR/JSON_EXTRACTOR.md
@@ -0,0 +1,57 @@

[//]: # (Custom component imports)

import DocString from '@site/src/components/DocString';
import PythonCode from '@site/src/components/PythonCode';
import AppDisplay from '@site/src/components/AppDisplay';
import SectionBreak from '@site/src/components/SectionBreak';
import AppendixSection from '@site/src/components/AppendixSection';

[//]: # (Docstring)

import DocstringSource from '!!raw-loader!./a1-[autogen]/docstring.txt';
import PythonSource from '!!raw-loader!./a1-[autogen]/python_code.txt';

<DocString>{DocstringSource}</DocString>
<PythonCode GLink='AI_ML/OPENAI/JSON_EXTRACTOR/JSON_EXTRACTOR.py'>{PythonSource}</PythonCode>

<SectionBreak />



[//]: # (Examples)

## Examples

import Example1 from './examples/EX1/example.md';
import App1 from '!!raw-loader!./examples/EX1/app.json';



<AppDisplay
  nodeLabel='JSON_EXTRACTOR'
  appImg={''}
  outputImg={''}
>
  {App1}
</AppDisplay>

<Example1 />

<SectionBreak />



[//]: # (Appendix)

import Notes from './appendix/notes.md';
import Hardware from './appendix/hardware.md';
import Media from './appendix/media.md';

## Appendix

<AppendixSection index={0} folderPath='nodes/AI_ML/OPENAI/JSON_EXTRACTOR/appendix/'><Notes /></AppendixSection>
<AppendixSection index={1} folderPath='nodes/AI_ML/OPENAI/JSON_EXTRACTOR/appendix/'><Hardware /></AppendixSection>
<AppendixSection index={2} folderPath='nodes/AI_ML/OPENAI/JSON_EXTRACTOR/appendix/'><Media /></AppendixSection>


@@ -0,0 +1,9 @@

The JSON_EXTRACTOR node extracts specific properties from a piece of text using a JSON schema.

Parameters
----------
properties : string
    Comma-separated list of properties to extract. Example: "name,age,location"
prompt : string
    The text to extract information from. Example: "I'm John, I am 30 years old and I live in New York."
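The docstring describes `properties` as a comma-separated string, while the node's signature takes a list of strings, so the form input would presumably be split before it reaches the node. A sketch of that split; the `parse_properties` helper is hypothetical, not part of the node:

```python
# Hypothetical helper: turn the form's comma-separated string
# into the list of property names the node expects.
def parse_properties(raw: str) -> list[str]:
    # Split on commas, trim whitespace, and drop empty entries.
    return [p.strip() for p in raw.split(",") if p.strip()]

print(parse_properties("name, age,location"))  # prints ['name', 'age', 'location']
```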
@@ -0,0 +1,73 @@
import json
import os
import time
from copy import deepcopy

from flojoy import flojoy, DataFrame as FlojoyDataFrame, run_in_venv

ACCEPTED_SCHEMA_FORMATS = [".json"]

BASE_SCHEMA = {
    "name": "information_extraction",
    "description": "Extracts the information as JSON.",
    "parameters": {"type": "object", "properties": {}, "required": []},
}

API_RETRY_ATTEMPTS = 5
API_RETRY_INTERVAL_IN_SECONDS = 1


@flojoy
@run_in_venv(pip_dependencies=["openai==0.27.8", "pandas==2.0.2"])
def JSON_EXTRACTOR(
    properties: list[str],
    prompt: str,
) -> FlojoyDataFrame:
    import openai
    import pandas as pd

    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise Exception("OPENAI_API_KEY environment variable not set")
    openai.api_key = api_key

    if not properties:
        raise Exception("No properties found to extract.")

    schema = deepcopy(BASE_SCHEMA)
    for prop in properties:
        schema["parameters"]["properties"][prop] = {
            "title": prop,
            "type": "string",
        }
        schema["parameters"]["required"].append(prop)

    for i in range(API_RETRY_ATTEMPTS):
        try:
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo-0613",
                messages=[
                    {"role": "user", "content": prompt},
                ],
                temperature=0,
                functions=[schema],
                function_call={"name": schema["name"]},
            )
            print(f"Extraction succeeded on attempt {i}")
            break
        except openai.error.RateLimitError:
            # On the final attempt, give up instead of sleeping again.
            if i == API_RETRY_ATTEMPTS - 1:
                raise Exception("Rate limit error. Max retries exceeded.")

            print(
                f"Rate limit error, retrying in {API_RETRY_INTERVAL_IN_SECONDS} seconds"
            )
            time.sleep(API_RETRY_INTERVAL_IN_SECONDS)

    if not response.choices:
        raise Exception("No extraction choices found in response.")

    data = json.loads(response.choices[0].message.function_call.arguments)
    df = pd.DataFrame(data=[data])
    return FlojoyDataFrame(df=df)
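The function definition the node sends to OpenAI can be built and inspected without making any API call. This sketch mirrors the schema-building loop above, wrapped in a hypothetical `build_schema` helper for illustration:

```python
from copy import deepcopy

# Same base schema the node starts from.
BASE_SCHEMA = {
    "name": "information_extraction",
    "description": "Extracts the information as JSON.",
    "parameters": {"type": "object", "properties": {}, "required": []},
}

def build_schema(properties: list[str]) -> dict:
    # Add each requested property as a required string field.
    schema = deepcopy(BASE_SCHEMA)
    for prop in properties:
        schema["parameters"]["properties"][prop] = {"title": prop, "type": "string"}
        schema["parameters"]["required"].append(prop)
    return schema

schema = build_schema(["name", "price"])
print(schema["parameters"]["required"])  # prints ['name', 'price']
```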
@@ -0,0 +1 @@
This node does not require any peripheral hardware to operate. Please see INSTRUMENTS for nodes that interact with the physical world through connected hardware.
1 change: 1 addition & 0 deletions docs/nodes/AI_ML/OPENAI/JSON_EXTRACTOR/appendix/media.md
@@ -0,0 +1 @@
No supporting screenshots, photos, or videos have been added to the media.md file for this node.
1 change: 1 addition & 0 deletions docs/nodes/AI_ML/OPENAI/JSON_EXTRACTOR/appendix/notes.md
@@ -0,0 +1 @@
No theory or technical notes have been contributed for this node yet.
@@ -0,0 +1,4 @@
This example app uses the JSON_EXTRACTOR node to extract structured data (as JSON) from unstructured text.
In this example, we extract two properties (price and name) from the following text:

**Headset Gamer Bluetooth MJ23 - $100**

The properties and text are defined in the node's form.
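When the extraction succeeds, the model's function-call arguments come back as a JSON string, and turning that into Python data is plain `json.loads`. The payload below is a plausible shape for this example, not captured output:

```python
import json

# Hypothetical arguments string, shaped like the model's
# function_call output for the example text above.
arguments = '{"name": "Headset Gamer Bluetooth MJ23", "price": "$100"}'
data = json.loads(arguments)
print(data["name"], data["price"])  # prints Headset Gamer Bluetooth MJ23 $100
```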
@@ -0,0 +1,57 @@

[//]: # (Custom component imports)

import DocString from '@site/src/components/DocString';
import PythonCode from '@site/src/components/PythonCode';
import AppDisplay from '@site/src/components/AppDisplay';
import SectionBreak from '@site/src/components/SectionBreak';
import AppendixSection from '@site/src/components/AppendixSection';

[//]: # (Docstring)

import DocstringSource from '!!raw-loader!./a1-[autogen]/docstring.txt';
import PythonSource from '!!raw-loader!./a1-[autogen]/python_code.txt';

<DocString>{DocstringSource}</DocString>
<PythonCode GLink='AI_ML/OPENAI/WHISPER_SPEECH_TO_TEXT/WHISPER_SPEECH_TO_TEXT.py'>{PythonSource}</PythonCode>

<SectionBreak />



[//]: # (Examples)

## Examples

import Example1 from './examples/EX1/example.md';
import App1 from '!!raw-loader!./examples/EX1/app.json';



<AppDisplay
  nodeLabel='WHISPER_SPEECH_TO_TEXT'
  appImg={''}
  outputImg={''}
>
  {App1}
</AppDisplay>

<Example1 />

<SectionBreak />



[//]: # (Appendix)

import Notes from './appendix/notes.md';
import Hardware from './appendix/hardware.md';
import Media from './appendix/media.md';

## Appendix

<AppendixSection index={0} folderPath='nodes/AI_ML/OPENAI/WHISPER_SPEECH_TO_TEXT/appendix/'><Notes /></AppendixSection>
<AppendixSection index={1} folderPath='nodes/AI_ML/OPENAI/WHISPER_SPEECH_TO_TEXT/appendix/'><Hardware /></AppendixSection>
<AppendixSection index={2} folderPath='nodes/AI_ML/OPENAI/WHISPER_SPEECH_TO_TEXT/appendix/'><Media /></AppendixSection>


@@ -0,0 +1,7 @@

The WHISPER_SPEECH_TO_TEXT node uses OpenAI's Whisper transcription model to convert audio to text.
The audio can be provided as a file path, or as bytes from a previous node.
The value from the previous node takes priority over the file path.

Parameters
----------
file_path : string
    Path to the audio file to be transcribed. Only the mp3 format is supported.
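The docstring says bytes from a previous node take priority over the file path. That selection logic can be sketched as follows; the `pick_audio_source` helper is hypothetical, not the node's actual code:

```python
from typing import Optional

# Hypothetical helper: choose the audio source the way the
# docstring describes (bytes from a previous node win).
def pick_audio_source(previous_bytes: Optional[bytes], file_path: Optional[str]):
    if previous_bytes is not None:
        return ("bytes", previous_bytes)
    if file_path:
        if not file_path.lower().endswith(".mp3"):
            raise ValueError("Only mp3 format is supported")
        return ("path", file_path)
    raise ValueError("No audio provided")

print(pick_audio_source(None, "speech.mp3"))  # prints ('path', 'speech.mp3')
```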