
Commit: Fix minor issues
wesselb committed Jan 20, 2025
1 parent 2bb1688 commit 0aa706b
Showing 8 changed files with 30 additions and 170 deletions.
137 changes: 0 additions & 137 deletions _docker_requirements.txt

This file was deleted.

19 changes: 7 additions & 12 deletions aurora/foundry/client/foundry.py
@@ -34,20 +34,15 @@ def _req(
"Authorization": f"Bearer {self.token}",
"Content-Type": "application/json",
},
json={
# "inputs": wrapped, # mlflow local testing only
"input_data": wrapped # AML
},
json={"input_data": wrapped},
)

-    def _unwrap(self, answer: requests.Response) -> dict:
-        if not answer.ok:
-            logger.error(answer.text)
-            answer.raise_for_status()
-        obj = answer.json()
-        if "predictions" in obj:  # Local mlflow testing only.
-            return obj["predictions"]
-        return obj
+    def _unwrap(self, response: requests.Response) -> dict:
+        if not response.ok:
+            logger.error(response.text)
+            response.raise_for_status()
+        response_json = response.json()
+        return response_json

    def submit_task(self, data: dict) -> dict:
        """Send `data` to the scoring path.
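(The rest of `submit_task` is collapsed in this view. For orientation, here is a minimal sketch of the round trip that `_req` and `_unwrap` implement after this change; the endpoint URL, token, and `wrapped` payload are stand-ins, while the `{"input_data": ...}` envelope and the unwrap logic follow the diff above.)

```python
import logging

import requests

logger = logging.getLogger(__name__)


def score(endpoint: str, token: str, wrapped: dict) -> dict:
    """Post a wrapped payload to the scoring path and unwrap the reply."""
    response = requests.post(
        endpoint,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        # AML expects the payload under "input_data"; the mlflow-local
        # "inputs" variant was removed by this commit.
        json={"input_data": wrapped},
    )
    if not response.ok:
        logger.error(response.text)
        response.raise_for_status()
    return response.json()
```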
12 changes: 8 additions & 4 deletions aurora/foundry/server/mlflow_wrapper.py
@@ -15,6 +15,8 @@
)
from aurora.foundry.common.model import MLFLOW_ARTIFACTS, models

+__all__ = ["AuroraModelWrapper"]
+
# Need to give the name explicitly here, because the script may be run stand-alone.
logger = logging.getLogger("aurora.foundry.server.score")

@@ -111,15 +113,17 @@ def __call__(self) -> None:


class AuroraModelWrapper(mlflow.pyfunc.PythonModel):
-    def load_context(self, context):
+    """A wrapper around an async workflow for making predictions with Aurora."""
+
+    def load_context(self, context) -> None:
        logging.getLogger("aurora").setLevel(logging.INFO)
        logger.info("Starting `ThreadPoolExecutor`.")
        self.POOL = ThreadPoolExecutor(max_workers=1)
-        self.TASKS = dict()
+        self.TASKS: dict[str, Task] = {}
        self.POOL.__enter__()
        MLFLOW_ARTIFACTS.update(context.artifacts)

-    def predict(self, context, model_input, params=None):
+    def predict(self, context, model_input: dict, params=None) -> dict:
        data = json.loads(model_input["data"].item())

        if data["type"] == "submission":
@@ -173,4 +177,4 @@ def predict(self, context, model_input, params=None):
            return task.task_info.dict()

        else:
-            raise ValueError(f"Unknown data type: {data['type']}")
+            raise ValueError(f"Unknown data type: `{data['type']}`.")
4 changes: 1 addition & 3 deletions docs/foundry/api.rst
@@ -24,6 +24,4 @@ These models need to be referred to by the value of their attribute `name`.

Server
------
-.. autofunction:: aurora.foundry.server.score.init
-
-.. autofunction:: aurora.foundry.server.score.run
+.. autofunction:: aurora.foundry.server.mlflow_wrapper.AuroraModelWrapper
2 changes: 1 addition & 1 deletion docs/foundry/intro.md
@@ -1,6 +1,6 @@
# Aurora on Azure AI Foundry

-Aurora can be run as a model on [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-studio/what-is-ai-studio).
+Aurora can be hosted on [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-studio/what-is-ai-studio).

This part of the documentation describes how you can produce predictions with Aurora running on a Foundry endpoint,
and how you can launch a Foundry endpoint that hosts Aurora.
14 changes: 6 additions & 8 deletions docs/foundry/server.md
@@ -1,16 +1,14 @@
-# Running the Inference Server
+# Hosting Aurora

-Build the Docker image:
+The model is served via [MLflow](https://mlflow.org/).
+First, make sure that `mlflow` is installed:

```bash
-make docker
+pip install mlflow
```

-Then upload the resulting image to Azure AI foundry.
-
-Building the Docker image depends on a list of precompiled dependencies.
-If you change the requirements in `pyproject.toml`, this list must be updated:
+Then build the MLflow model as follows:

```bash
-make docker-requirements
+python package_mlflow.py
```
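(The contents of `package_mlflow.py` are not shown in this commit. A hedged sketch of what such a script could do with the standard `mlflow.pyfunc.save_model` API — the output path and artifact mapping are assumptions:)

```python
import mlflow

from aurora.foundry.server.mlflow_wrapper import AuroraModelWrapper

mlflow.pyfunc.save_model(
    path="aurora-mlflow-model",        # assumed output directory
    python_model=AuroraModelWrapper(),
    # Assumed artifact mapping; the real keys come from MLFLOW_ARTIFACTS
    # and are exposed to `load_context` via `context.artifacts`.
    artifacts={"checkpoint": "./checkpoints/aurora.ckpt"},
)
```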
11 changes: 6 additions & 5 deletions docs/foundry/submission.md
@@ -1,8 +1,7 @@
# Submitting Predictions

To produce predictions on Azure AI Foundry, the client will communicate with the host through
-a blob storage container, so `azcopy` needs to be available in the local path.
-[See here for instructions on how to install `azcopy`.](https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10)
+a blob storage container.

First, create a client that can communicate with your Azure AI Foundry endpoint:

Expand All @@ -21,13 +20,15 @@ Then set up a blob storage container for communication with the host:
from aurora.foundry import BlobStorageChannel

channel = BlobStorageChannel(
"https://my.blob.core.windows.net/container/folder?<SAS_TOKEN>"
"https://my.blob.core.windows.net/container/folder?<READ_WRITE_SAS_TOKEN>"
)
```

-The SAS token needs read and write rights.
-This blob storage container will be used to send the initial condition to the host and to retrieve
+```{warning}
+The SAS token needs both read and write rights!
+The blob storage container will be used to send the initial condition to the host and to retrieve
the predictions from the host.
+```

You can now submit requests in the following way:

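(The submission example itself is collapsed in this diff. As a rough sketch of how the pieces above could fit together — `FoundryClient`, `submit`, the model name, and `num_steps` are assumptions about the package's public API, not shown in this commit:)

```python
from aurora.foundry import BlobStorageChannel, FoundryClient, submit

foundry_client = FoundryClient(
    endpoint="https://my-endpoint.inference.ml.azure.com/",  # assumed URL
    token="ENDPOINT_TOKEN",
)
channel = BlobStorageChannel(
    "https://my.blob.core.windows.net/container/folder?<READ_WRITE_SAS_TOKEN>"
)

batch = ...  # an `aurora.Batch` holding the initial condition

# `submit` uploads the initial condition via the channel, polls the endpoint,
# and yields predictions as they are produced.
for prediction in submit(
    batch,
    model_name="aurora-0.25-finetuned",  # assumed model name
    num_steps=4,
    foundry_client=foundry_client,
    channel=channel,
):
    print(prediction.metadata.time)
```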
1 change: 1 addition & 0 deletions pyproject.toml
@@ -43,6 +43,7 @@ dependencies = [
"pydantic",
"xarray",
"netcdf4",
"azure-blob-storage",
]

[project.optional-dependencies]
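(The new blob-storage dependency is what lets `BlobStorageChannel` replace the external `azcopy` tool dropped from `submission.md`. A minimal sketch of the kind of SAS-authenticated transfer it performs — the blob URL and file name are placeholders; `BlobClient.from_blob_url` is the standard `azure-storage-blob` API:)

```python
from azure.storage.blob import BlobClient

# Upload a local NetCDF file to a SAS-authenticated blob URL (placeholder values).
blob = BlobClient.from_blob_url(
    "https://my.blob.core.windows.net/container/folder/ic.nc?<READ_WRITE_SAS_TOKEN>"
)
with open("ic.nc", "rb") as f:
    blob.upload_blob(f, overwrite=True)
```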
