diff --git a/gemini/code-execution/intro_code_execution.ipynb b/gemini/code-execution/intro_code_execution.ipynb
index 30741a1b2d8..967c103b89a 100644
--- a/gemini/code-execution/intro_code_execution.ipynb
+++ b/gemini/code-execution/intro_code_execution.ipynb
@@ -1,1310 +1,1973 @@
{
- "cells": [
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {
- "id": "ur8xi4C7S06n"
- },
- "outputs": [],
- "source": [
- "# Copyright 2024 Google LLC\n",
- "#\n",
- "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
- "# you may not use this file except in compliance with the License.\n",
- "# You may obtain a copy of the License at\n",
- "#\n",
- "# https://www.apache.org/licenses/LICENSE-2.0\n",
- "#\n",
- "# Unless required by applicable law or agreed to in writing, software\n",
- "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
- "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
- "# See the License for the specific language governing permissions and\n",
- "# limitations under the License."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "JAPoU8Sm5E6e"
- },
- "source": [
- "# Intro to Generating and Executing Python Code with Gemini 2.0\n",
- "\n",
- "
\n",
- " \n",
- " \n",
- " Open in Colab\n",
- " \n",
- " | \n",
- " \n",
- " \n",
- " Open in Colab Enterprise\n",
- " \n",
- " | \n",
- " \n",
- " \n",
- " Open in Vertex AI Workbench\n",
- " \n",
- " | \n",
- " \n",
- " \n",
- " View on GitHub\n",
- " \n",
- " | \n",
- "
\n",
- "\n",
- "\n",
- "\n",
- "Share to:\n",
- "\n",
- "\n",
- " \n",
- "\n",
- "\n",
- "\n",
- " \n",
- "\n",
- "\n",
- "\n",
- " \n",
- "\n",
- "\n",
- "\n",
- " \n",
- "\n",
- "\n",
- "\n",
- " \n",
- ""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "84f0f73a0f76"
- },
- "source": [
- "| | |\n",
- "|-|-|\n",
- "| Author(s) | [Kristopher Overholt](https://github.com/koverholt/) |"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "tvgnzT1CKxrO"
- },
- "source": [
- "## Overview\n",
- "\n",
- "This notebook introduces the code execution capabilities of the [Gemini 2.0 Flash model](https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2), a new multimodal generative AI model from Google [DeepMind](https://deepmind.google/). Gemini 2.0 Flash offers improvements in speed, quality, and advanced reasoning capabilities including enhanced understanding, coding, and instruction following.\n",
- "\n",
- "## Code Execution\n",
- "\n",
- "A key feature of this model is [code execution](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/code-execution), which is the ability to generate and execute Python code directly within the API. If you want the API to generate and run Python code and return the results, you can use code execution as demonstrated in this notebook.\n",
- "\n",
- "This code execution capability enables the model to generate code, execute and observe the results, correct the code if needed, and learn iteratively from the results until it produces a final output. This is particularly useful for applications that involve code-based reasoning such as solving mathematical equations or processing text.\n",
- "\n",
- "## Objectives\n",
- "\n",
- "In this tutorial, you will learn how to generate and execute code using the Gemini API in Vertex AI and the Google Gen AI SDK for Python with the Gemini 2.0 Flash model.\n",
- "\n",
- "You will complete the following tasks:\n",
- "\n",
- "- Generating and running sample Python code from text prompts\n",
- "- Exploring data using code execution in multi-turn chats\n",
- "- Using code execution in streaming sessions"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "61RBz8LLbxCR"
- },
- "source": [
- "## Getting started"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "No17Cw5hgx12"
- },
- "source": [
- "### Install Google Gen AI SDK for Python\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "metadata": {
- "id": "tFy3H3aPgx12"
- },
- "outputs": [],
- "source": [
- "# %pip install --upgrade --quiet google-genai"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "dmWOrTJ3gx13"
- },
- "source": [
- "### Authenticate your notebook environment (Colab only)\n",
- "\n",
- "If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "metadata": {
- "id": "NyKGtVQjgx13"
- },
- "outputs": [],
- "source": [
- "import sys\n",
- "\n",
- "if \"google.colab\" in sys.modules:\n",
- " from google.colab import auth\n",
- "\n",
- " auth.authenticate_user()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "0fggiCx13zxX"
- },
- "source": [
- "### Import libraries"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "metadata": {
- "id": "JbrnA9yv3zMC"
- },
- "outputs": [],
- "source": [
- "import os\n",
- "\n",
- "from IPython.display import HTML, Markdown, display\n",
- "from google import genai\n",
- "from google.genai.types import GenerateContentConfig, Tool, ToolCodeExecution"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "vXiC1rOE3gSZ"
- },
- "source": [
- "### Connect to a generative AI API service\n",
- "\n",
- "Google Gen AI APIs and models including Gemini are available in the following two API services:\n",
- "\n",
- "- [Google AI for Developers](https://ai.google.dev/gemini-api/docs): Experiment, prototype, and deploy small projects.\n",
- "- [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/overview): Build enterprise-ready projects on Google Cloud.\n",
- "The Google Gen AI SDK provides a unified interface to these two API services.\n",
- "\n",
- "This notebook shows how to use the Google Gen AI SDK with the Gemini API in Vertex AI."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "DF4l8DTdWgPY"
- },
- "source": [
- "### Set Google Cloud project information and create client\n",
- "\n",
- "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
- "\n",
- "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "metadata": {
- "id": "Nqwi-5ufWp_B"
- },
- "outputs": [],
- "source": [
- "PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
- "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
- " PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
- "\n",
- "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 6,
- "metadata": {
- "id": "3Ab5NQwr4B8j"
- },
- "outputs": [],
- "source": [
- "client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "x1vpnyk-q-fz"
- },
- "source": [
- "## Working with code execution in Gemini 2.0\n",
- "\n",
- "### Load the Gemini model\n",
- "\n",
- "The following code loads the Gemini 2.0 Flash model. You can learn about all Gemini models on Vertex AI by visiting the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models):"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "metadata": {
- "id": "L8gLWcOFqqF2"
- },
- "outputs": [],
- "source": [
- "MODEL_ID = \"gemini-2.0-flash-exp\" # @param {type: \"string\"}"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "q-jdBwXlM67j"
- },
- "source": [
- "### Define the code execution tool\n",
- "\n",
- "The following code initializes the code execution tool by passing `code_execution` in a `Tool` definition.\n",
- "\n",
- "Later we'll register this tool with the model that it can use to generate and run Python code:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "metadata": {
- "id": "BFxIcGkxbq3_"
- },
- "outputs": [],
- "source": [
- "code_execution_tool = Tool(code_execution=ToolCodeExecution())"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "mZgn5tm-NCfH"
- },
- "source": [
- "### Generate and execute code\n",
- "\n",
- "The following code sends a prompt to the Gemini model, asking it to generate and execute Python code to calculate the sum of the first 50 prime numbers. The code execution tool is passed in so the model can generate and run the code:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 9,
- "metadata": {
- "id": "b52qMx0IGA0K"
- },
- "outputs": [],
- "source": [
- "PROMPT = \"\"\"\n",
- "What is the sum of the first 50 prime numbers?\n",
- "Generate and run code for the calculation.\n",
- "\"\"\"\n",
- "\n",
- "response = client.models.generate_content(\n",
- " model=MODEL_ID,\n",
- " contents=PROMPT,\n",
- " config=GenerateContentConfig(\n",
- " tools=[code_execution_tool],\n",
- " temperature=0,\n",
- " ),\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "l-mfiMNasgqH"
- },
- "source": [
- "### View the generated code\n",
- "\n",
- "The following code iterates through the response and displays any generated Python code by checking for `part.executable_code` in the response parts:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 10,
- "metadata": {
- "id": "J5mcXw6ZraLS"
- },
- "outputs": [
+ "cells": [
{
- "data": {
- "text/markdown": [
- "\n",
- "```py\n",
- "\n",
- "def is_prime(n):\n",
- " if n <= 1:\n",
- " return False\n",
- " if n <= 3:\n",
- " return True\n",
- " if n % 2 == 0 or n % 3 == 0:\n",
- " return False\n",
- " i = 5\n",
- " while i * i <= n:\n",
- " if n % i == 0 or n % (i + 2) == 0:\n",
- " return False\n",
- " i += 6\n",
- " return True\n",
- "\n",
- "primes = []\n",
- "num = 2\n",
- "while len(primes) < 50:\n",
- " if is_prime(num):\n",
- " primes.append(num)\n",
- " num += 1\n",
- "\n",
- "sum_of_primes = sum(primes)\n",
- "print(f'{sum_of_primes=}')\n",
- "\n",
- "```\n"
- ],
- "text/plain": [
- ""
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ur8xi4C7S06n"
+ },
+ "outputs": [],
+ "source": [
+ "# Copyright 2024 Google LLC\n",
+ "#\n",
+ "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
+ "# you may not use this file except in compliance with the License.\n",
+ "# You may obtain a copy of the License at\n",
+ "#\n",
+ "# https://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing, software\n",
+ "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
+ "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
+ "# See the License for the specific language governing permissions and\n",
+ "# limitations under the License."
]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
- "source": [
- "for part in response.candidates[0].content.parts:\n",
- " if part.executable_code:\n",
- " display(\n",
- " Markdown(\n",
- " f\"\"\"\n",
- "```py\n",
- "{part.executable_code.code}\n",
- "```\n",
- "\"\"\"\n",
- " )\n",
- " )"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "ppumif-94xTF"
- },
- "source": [
- "### View the code execution results\n",
- "\n",
- "The following code iterates through the response and displays the execution result and outcome by checking for `part.code_execution_result` in the response parts:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 11,
- "metadata": {
- "id": "J891OBjc4xn9"
- },
- "outputs": [
+ },
{
- "data": {
- "text/markdown": [
- "`sum_of_primes=5117\n",
- "`"
- ],
- "text/plain": [
- ""
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JAPoU8Sm5E6e"
+ },
+ "source": [
+ "# Intro to Generating and Executing Python Code with Gemini 2.0\n",
+ "\n",
+ "\n",
+ " \n",
+ " \n",
+ " Open in Colab\n",
+ " \n",
+ " | \n",
+ " \n",
+ " \n",
+ " Open in Colab Enterprise\n",
+ " \n",
+ " | \n",
+ " \n",
+ " \n",
+ " Open in Vertex AI Workbench\n",
+ " \n",
+ " | \n",
+ " \n",
+ " \n",
+ " View on GitHub\n",
+ " \n",
+ " | \n",
+ "
\n",
+ "\n",
+ "\n",
+ "\n",
+ "Share to:\n",
+ "\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ "\n",
+ " \n",
+ ""
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "Outcome: Outcome.OUTCOME_OK\n"
- ]
- }
- ],
- "source": [
- "for part in response.candidates[0].content.parts:\n",
- " if part.code_execution_result:\n",
- " display(Markdown(f\"`{part.code_execution_result.output}`\"))\n",
- " print(\"\\nOutcome:\", part.code_execution_result.outcome)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "5u_XuZlMnH9S"
- },
- "source": [
- "Great! Now you have the answer (`5117`) as well as the generated (and verified via execution!) Python code.\n",
- "\n",
- "At this point in your application, you would save the output code, result, or outcome and display it to the end-user or use it downstream in your application."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "8uJ-Fk1I_AH8"
- },
- "source": [
- "### Code execution in a chat session\n",
- "\n",
- "This section shows how to use code execution in an interactive chat with history using the Gemini API.\n",
- "\n",
- "You can use `client.chats.create` to create a chat session and passes in the code execution tool, enabling the model to generate and run code:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 12,
- "metadata": {
- "id": "puL91bq7tirC"
- },
- "outputs": [],
- "source": [
- "chat = client.chats.create(\n",
- " model=MODEL_ID,\n",
- " config=GenerateContentConfig(\n",
- " tools=[code_execution_tool],\n",
- " temperature=0,\n",
- " ),\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Bmu4bSApoECT"
- },
- "source": [
- "You'll start the chat by asking the model to generate sample time series data with noise and then output a sample of 10 data points:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 13,
- "metadata": {
- "id": "8iyq5sKCtstH"
- },
- "outputs": [],
- "source": [
- "PROMPT = \"\"\"Create sample time series data of temperature vs. time in a test furnace.\n",
- "Add noise to the data.\n",
- "Output a sample of 10 data points from the time series data.\"\"\"\n",
- "\n",
- "response = chat.send_message(PROMPT)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "vVhCKKBioJga"
- },
- "source": [
- "Now you can iterate through the response to display any generated Python code and execution results by checking for `part.executable_code` and `part.code_execution_result` in the response parts:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 14,
- "metadata": {
- "id": "8pjwEGzft29N"
- },
- "outputs": [
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "84f0f73a0f76"
+ },
+ "source": [
+ "| | |\n",
+ "|-|-|\n",
+ "| Author(s) | [Kristopher Overholt](https://github.com/koverholt/) |"
+ ]
+ },
{
- "data": {
- "text/markdown": [
- "\n",
- "```py\n",
- "\n",
- "import numpy as np\n",
- "import pandas as pd\n",
- "\n",
- "# Define time range\n",
- "time = np.linspace(0, 10, 100) # 100 data points over 10 hours\n",
- "\n",
- "# Base temperature profile (linear increase then plateau)\n",
- "temperature = np.zeros_like(time)\n",
- "for i, t in enumerate(time):\n",
- " if t <= 5:\n",
- " temperature[i] = 25 + 50 * t # Linear increase from 25 to 275\n",
- " else:\n",
- " temperature[i] = 275 # Plateau at 275\n",
- "\n",
- "# Add noise\n",
- "noise = np.random.normal(0, 5, len(time)) # Gaussian noise with std dev 5\n",
- "temperature += noise\n",
- "\n",
- "# Create Pandas DataFrame\n",
- "df = pd.DataFrame({'Time': time, 'Temperature': temperature})\n",
- "\n",
- "# Sample 10 data points\n",
- "sample = df.sample(10, random_state=42) # Set random_state for reproducibility\n",
- "sample = sample.sort_values('Time') # Sort by time\n",
- "\n",
- "print(sample)\n",
- "\n",
- "```\n"
- ],
- "text/plain": [
- ""
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tvgnzT1CKxrO"
+ },
+ "source": [
+ "## Overview\n",
+ "\n",
+ "This notebook introduces the code execution capabilities of the [Gemini 2.0 Flash model](https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2), a new multimodal generative AI model from Google [DeepMind](https://deepmind.google/). Gemini 2.0 Flash offers improvements in speed, quality, and advanced reasoning capabilities including enhanced understanding, coding, and instruction following.\n",
+ "\n",
+ "## Code Execution\n",
+ "\n",
+ "A key feature of this model is [code execution](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/code-execution), which is the ability to generate and execute Python code directly within the API. If you want the API to generate and run Python code and return the results, you can use code execution as demonstrated in this notebook.\n",
+ "\n",
+ "This code execution capability enables the model to generate code, execute and observe the results, correct the code if needed, and learn iteratively from the results until it produces a final output. This is particularly useful for applications that involve code-based reasoning such as solving mathematical equations or processing text.\n",
+ "\n",
+ "## Objectives\n",
+ "\n",
+ "In this tutorial, you will learn how to generate and execute code using the Gemini API in Vertex AI and the Google Gen AI SDK for Python with the Gemini 2.0 Flash model.\n",
+ "\n",
+ "You will complete the following tasks:\n",
+ "\n",
+ "- Generating and running sample Python code from text prompts\n",
+ "- Exploring data using code execution in multi-turn chats\n",
+ "- Using code execution in streaming sessions"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "` Time Temperature\n",
- "0 0.000000 24.748812\n",
- "10 1.010101 71.401641\n",
- "22 2.222222 137.188229\n",
- "39 3.939394 217.834258\n",
- "44 4.444444 245.030405\n",
- "45 4.545455 257.520168\n",
- "53 5.353535 279.624973\n",
- "70 7.070707 270.408216\n",
- "80 8.080808 278.709755\n",
- "83 8.383838 277.251496\n",
- "`"
- ],
- "text/plain": [
- ""
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "61RBz8LLbxCR"
+ },
+ "source": [
+ "## Getting started"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "Outcome: Outcome.OUTCOME_OK\n"
- ]
- }
- ],
- "source": [
- "for part in response.candidates[0].content.parts:\n",
- " if part.executable_code:\n",
- " display(\n",
- " Markdown(\n",
- " f\"\"\"\n",
- "```py\n",
- "{part.executable_code.code}\n",
- "```\n",
- "\"\"\"\n",
- " )\n",
- " )\n",
- " if part.code_execution_result:\n",
- " display(Markdown(f\"`{part.code_execution_result.output}`\"))\n",
- " print(\"\\nOutcome:\", part.code_execution_result.outcome)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "4AHoGmDBQuxn"
- },
- "source": [
- "Now in the next request, you can ask the model to add a smoothed data series:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 15,
- "metadata": {
- "id": "alR_tq3pss7j"
- },
- "outputs": [],
- "source": [
- "PROMPT = \"Now add a data series that smooths the sample data.\"\n",
- "\n",
- "response = chat.send_message(PROMPT)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "MnSlnA5FQ9UH"
- },
- "source": [
- "And then display the generated Python code and execution results:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 16,
- "metadata": {
- "id": "uMXRpE0NtRYC"
- },
- "outputs": [
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "No17Cw5hgx12"
+ },
+ "source": [
+ "### Install Google Gen AI SDK for Python\n"
+ ]
+ },
{
- "data": {
- "text/markdown": [
- "\n",
- "```py\n",
- "\n",
- "import numpy as np\n",
- "import pandas as pd\n",
- "\n",
- "# Define time range\n",
- "time = np.linspace(0, 10, 100) # 100 data points over 10 hours\n",
- "\n",
- "# Base temperature profile (linear increase then plateau)\n",
- "temperature = np.zeros_like(time)\n",
- "for i, t in enumerate(time):\n",
- " if t <= 5:\n",
- " temperature[i] = 25 + 50 * t # Linear increase from 25 to 275\n",
- " else:\n",
- " temperature[i] = 275 # Plateau at 275\n",
- "\n",
- "# Add noise\n",
- "noise = np.random.normal(0, 5, len(time)) # Gaussian noise with std dev 5\n",
- "temperature += noise\n",
- "\n",
- "# Create Pandas DataFrame\n",
- "df = pd.DataFrame({'Time': time, 'Temperature': temperature})\n",
- "\n",
- "# Calculate moving average (smoothing)\n",
- "window_size = 3\n",
- "df['Smoothed_Temperature'] = df['Temperature'].rolling(window=window_size, center=True).mean()\n",
- "df = df.fillna(method='bfill') # Fill NaN values at the beginning and end\n",
- "\n",
- "# Sample 10 data points\n",
- "sample = df.sample(10, random_state=42) # Set random_state for reproducibility\n",
- "sample = sample.sort_values('Time') # Sort by time\n",
- "\n",
- "print(sample)\n",
- "\n",
- "```\n"
- ],
- "text/plain": [
- ""
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "tFy3H3aPgx12"
+ },
+ "outputs": [],
+ "source": [
+ "%pip install --upgrade --quiet google-genai"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "` Time Temperature Smoothed_Temperature\n",
- "0 0.000000 29.042592 30.406978\n",
- "10 1.010101 82.297888 77.763486\n",
- "22 2.222222 132.569180 135.002055\n",
- "39 3.939394 224.851120 221.755356\n",
- "44 4.444444 254.735797 253.092113\n",
- "45 4.545455 257.309869 255.032427\n",
- "53 5.353535 271.536488 278.643866\n",
- "70 7.070707 271.209958 276.612039\n",
- "80 8.080808 280.715799 279.051592\n",
- "83 8.383838 277.481250 275.355277\n",
- "`"
- ],
- "text/plain": [
- ""
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dmWOrTJ3gx13"
+ },
+ "source": [
+ "### Authenticate your notebook environment (Colab only)\n",
+ "\n",
+ "If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "Outcome: Outcome.OUTCOME_OK\n"
- ]
- }
- ],
- "source": [
- "for part in response.candidates[0].content.parts:\n",
- " if part.executable_code:\n",
- " display(\n",
- " Markdown(\n",
- " f\"\"\"\n",
- "```py\n",
- "{part.executable_code.code}\n",
- "```\n",
- "\"\"\"\n",
- " )\n",
- " )\n",
- " if part.code_execution_result:\n",
- " display(Markdown(f\"`{part.code_execution_result.output}`\"))\n",
- " print(\"\\nOutcome:\", part.code_execution_result.outcome)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "I4VacTEyQ4lD"
- },
- "source": [
- "Finally, you can ask the model to generate descriptive statistics for the time series data:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 17,
- "metadata": {
- "id": "dmhPzmP8tywL"
- },
- "outputs": [],
- "source": [
- "PROMPT = \"Now generate and output descriptive statistics on the time series data.\"\n",
- "\n",
- "response = chat.send_message(PROMPT)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "I1t_zA5jRHsB"
- },
- "source": [
- "And then display the generated Python code and execution results:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 18,
- "metadata": {
- "id": "hIsMH3fPuKr5"
- },
- "outputs": [
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "id": "NyKGtVQjgx13"
+ },
+ "outputs": [],
+ "source": [
+ "import sys\n",
+ "\n",
+ "if \"google.colab\" in sys.modules:\n",
+ " from google.colab import auth\n",
+ "\n",
+ " auth.authenticate_user()"
+ ]
+ },
{
- "data": {
- "text/markdown": [
- "\n",
- "```py\n",
- "\n",
- "import numpy as np\n",
- "import pandas as pd\n",
- "\n",
- "# Define time range\n",
- "time = np.linspace(0, 10, 100) # 100 data points over 10 hours\n",
- "\n",
- "# Base temperature profile (linear increase then plateau)\n",
- "temperature = np.zeros_like(time)\n",
- "for i, t in enumerate(time):\n",
- " if t <= 5:\n",
- " temperature[i] = 25 + 50 * t # Linear increase from 25 to 275\n",
- " else:\n",
- " temperature[i] = 275 # Plateau at 275\n",
- "\n",
- "# Add noise\n",
- "noise = np.random.normal(0, 5, len(time)) # Gaussian noise with std dev 5\n",
- "temperature += noise\n",
- "\n",
- "# Create Pandas DataFrame\n",
- "df = pd.DataFrame({'Time': time, 'Temperature': temperature})\n",
- "\n",
- "# Calculate moving average (smoothing)\n",
- "window_size = 3\n",
- "df['Smoothed_Temperature'] = df['Temperature'].rolling(window=window_size, center=True).mean()\n",
- "df = df.fillna(method='bfill') # Fill NaN values at the beginning and end\n",
- "\n",
- "# Generate descriptive statistics\n",
- "descriptive_stats = df[['Temperature', 'Smoothed_Temperature']].describe()\n",
- "\n",
- "print(descriptive_stats)\n",
- "\n",
- "```\n"
- ],
- "text/plain": [
- ""
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "0fggiCx13zxX"
+ },
+ "source": [
+ "### Import libraries"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "` Temperature Smoothed_Temperature\n",
- "count 100.000000 99.000000\n",
- "mean 211.692796 211.143843\n",
- "std 82.671967 82.611219\n",
- "min 22.110080 28.756087\n",
- "25% 145.375172 145.270342\n",
- "50% 264.561842 266.767021\n",
- "75% 276.851390 276.362261\n",
- "max 289.151166 280.794910\n",
- "`"
- ],
- "text/plain": [
- ""
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "JbrnA9yv3zMC"
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "from IPython.display import HTML, Markdown, display\n",
+ "from google import genai\n",
+ "from google.genai.types import GenerateContentConfig, Tool, ToolCodeExecution"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "Outcome: Outcome.OUTCOME_OK\n"
- ]
- }
- ],
- "source": [
- "for part in response.candidates[0].content.parts:\n",
- " if part.executable_code:\n",
- " display(\n",
- " Markdown(\n",
- " f\"\"\"\n",
- "```py\n",
- "{part.executable_code.code}\n",
- "```\n",
- "\"\"\"\n",
- " )\n",
- " )\n",
- " if part.code_execution_result:\n",
- " display(Markdown(f\"`{part.code_execution_result.output}`\"))\n",
- " print(\"\\nOutcome:\", part.code_execution_result.outcome)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "TBbNyWtDRZto"
- },
- "source": [
- "This chat example demonstrates how you can use the Gemini API with code execution as a powerful tool for exploratory data analysis and more. Go forth and adapt this approach to your own projects and use cases!"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Bl6KG5Ufu5XQ"
- },
- "source": [
- "### Code execution in a streaming session\n",
- "\n",
- "You can also use the code execution functionality with streaming output from the Gemini API.\n",
- "\n",
- "The following code demonstrates how the Gemini API can generate and execute code while streaming the results:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 19,
- "metadata": {
- "id": "gTNMMLkNu5JH"
- },
- "outputs": [
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "vXiC1rOE3gSZ"
+ },
+ "source": [
+ "### Connect to a generative AI API service\n",
+ "\n",
+ "Google Gen AI APIs and models including Gemini are available in the following two API services:\n",
+ "\n",
+ "- [Google AI for Developers](https://ai.google.dev/gemini-api/docs): Experiment, prototype, and deploy small projects.\n",
+ "- [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/overview): Build enterprise-ready projects on Google Cloud.\n",
+ "The Google Gen AI SDK provides a unified interface to these two API services.\n",
+ "\n",
+ "This notebook shows how to use the Google Gen AI SDK with the Gemini API in Vertex AI."
+ ]
+ },
{
- "data": {
- "text/markdown": [
- "#### Natural language stream"
- ],
- "text/plain": [
- ""
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "DF4l8DTdWgPY"
+ },
+ "source": [
+ "### Set Google Cloud project information and create client\n",
+ "\n",
+ "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
+ "\n",
+ "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "Okay"
- ],
- "text/plain": [
- ""
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "id": "Nqwi-5ufWp_B"
+ },
+ "outputs": [],
+ "source": [
+ "PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
+ "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
+ " PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
+ "\n",
+ "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "---"
- ],
- "text/plain": [
- ""
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "id": "3Ab5NQwr4B8j"
+ },
+ "outputs": [],
+ "source": [
+ "client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "#### Natural language stream"
- ],
- "text/plain": [
- ""
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YZNpgtKJDdPZ"
+ },
+ "source": [
+ "### Improve code rendering in cell outputs"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- ", I can do that. Here's the process:\n",
- "\n",
- "1. Generate"
- ],
- "text/plain": [
- ""
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "id": "Y2e1lK_f_YWN"
+ },
+ "outputs": [],
+ "source": [
+ "# Modify CSS to display the results more clearly in Colab\n",
+ "def set_css_in_cell_output():\n",
+ " display(\n",
+ " HTML(\n",
+ " \"\"\"\"\"\"\n",
+ " )\n",
+ " )\n",
+ "\n",
+ "\n",
+ "get_ipython().events.register(\"pre_run_cell\", set_css_in_cell_output)"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "---"
- ],
- "text/plain": [
- ""
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "x1vpnyk-q-fz"
+ },
+ "source": [
+ "## Working with code execution in Gemini 2.0\n",
+ "\n",
+ "### Load the Gemini model\n",
+ "\n",
+ "The following code loads the Gemini 2.0 Flash model. You can learn about all Gemini models on Vertex AI by visiting the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models):"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "#### Natural language stream"
- ],
- "text/plain": [
- ""
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "L8gLWcOFqqF2"
+ },
+ "outputs": [],
+ "source": [
+ "MODEL_ID = \"gemini-2.0-flash-exp\" # @param {type: \"string\"}"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- " a list of 20 random names.\n",
- "2. Create a new list"
- ],
- "text/plain": [
- ""
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "q-jdBwXlM67j"
+ },
+ "source": [
+ "### Define the code execution tool\n",
+ "\n",
+ "The following code initializes the code execution tool by passing `code_execution` in a `Tool` definition.\n",
+ "\n",
+ "Later we'll register this tool with the model that it can use to generate and run Python code:"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "---"
- ],
- "text/plain": [
- ""
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "BFxIcGkxbq3_"
+ },
+ "outputs": [],
+ "source": [
+ "code_execution_tool = Tool(code_execution=ToolCodeExecution())"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "#### Natural language stream"
- ],
- "text/plain": [
- ""
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "mZgn5tm-NCfH"
+ },
+ "source": [
+ "### Generate and execute code\n",
+ "\n",
+ "The following code sends a prompt to the Gemini model, asking it to generate and execute Python code to calculate the sum of the first 50 prime numbers. The code execution tool is passed in so the model can generate and run the code:"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- " containing only the names from the first list that include the letter 'a'.\n",
- "3. Output the number of names in the new list.\n",
- "4. Output"
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "b52qMx0IGA0K"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ ""
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
],
- "text/plain": [
- ""
+ "source": [
+ "PROMPT = \"\"\"\n",
+ "What is the sum of the first 50 prime numbers?\n",
+ "Generate and run code for the calculation.\n",
+ "\"\"\"\n",
+ "\n",
+ "response = client.models.generate_content(\n",
+ " model=MODEL_ID,\n",
+ " contents=PROMPT,\n",
+ " config=GenerateContentConfig(\n",
+ " tools=[code_execution_tool],\n",
+ " temperature=0,\n",
+ " ),\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "l-mfiMNasgqH"
+ },
+ "source": [
+ "### View the generated code\n",
+ "\n",
+ "The following code iterates through the response and displays any generated Python code by checking for `part.executable_code` in the response parts:"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "---"
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "J5mcXw6ZraLS"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ ""
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "\n",
+ "```py\n",
+ "\n",
+ "def is_prime(n):\n",
+ " if n <= 1:\n",
+ " return False\n",
+ " if n <= 3:\n",
+ " return True\n",
+ " if n % 2 == 0 or n % 3 == 0:\n",
+ " return False\n",
+ " i = 5\n",
+ " while i * i <= n:\n",
+ " if n % i == 0 or n % (i + 2) == 0:\n",
+ " return False\n",
+ " i += 6\n",
+ " return True\n",
+ "\n",
+ "primes = []\n",
+ "num = 2\n",
+ "while len(primes) < 50:\n",
+ " if is_prime(num):\n",
+ " primes.append(num)\n",
+ " num += 1\n",
+ "\n",
+ "sum_of_primes = sum(primes)\n",
+ "print(f'{sum_of_primes=}')\n",
+ "\n",
+ "```\n"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
],
- "text/plain": [
- ""
+ "source": [
+ "for part in response.candidates[0].content.parts:\n",
+ " if part.executable_code:\n",
+ " display(\n",
+ " Markdown(\n",
+ " f\"\"\"\n",
+ "```py\n",
+ "{part.executable_code.code}\n",
+ "```\n",
+ "\"\"\"\n",
+ " )\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ppumif-94xTF"
+ },
+ "source": [
+ "### View the code execution results\n",
+ "\n",
+ "The following code iterates through the response and displays the execution result and outcome by checking for `part.code_execution_result` in the response parts:"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "#### Natural language stream"
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "J891OBjc4xn9"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ ""
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "`sum_of_primes=5117\n",
+ "`"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Outcome: OUTCOME_OK\n"
+ ]
+ }
],
- "text/plain": [
- ""
+ "source": [
+ "for part in response.candidates[0].content.parts:\n",
+ " if part.code_execution_result:\n",
+ " display(Markdown(f\"`{part.code_execution_result.output}`\"))\n",
+ " print(\"\\nOutcome:\", part.code_execution_result.outcome)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "5u_XuZlMnH9S"
+ },
+ "source": [
+ "Great! Now you have the answer (`5117`) as well as the generated (and verified via execution!) Python code.\n",
+ "\n",
+ "At this point in your application, you would save the output code, result, or outcome and display it to the end-user or use it downstream in your application."
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- " the new list itself.\n",
- "\n",
- "Here's the code to accomplish this:\n",
- "\n"
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "8uJ-Fk1I_AH8"
+ },
+ "source": [
+ "### Code execution in a chat session\n",
+ "\n",
+ "This section shows how to use code execution in an interactive chat with history using the Gemini API.\n",
+ "\n",
+ "You can use `client.chats.create` to create a chat session and passes in the code execution tool, enabling the model to generate and run code:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "puL91bq7tirC"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ ""
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
],
- "text/plain": [
- ""
+ "source": [
+ "chat = client.chats.create(\n",
+ " model=MODEL_ID,\n",
+ " config=GenerateContentConfig(\n",
+ " tools=[code_execution_tool],\n",
+ " temperature=0,\n",
+ " ),\n",
+ ")"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "---"
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Bmu4bSApoECT"
+ },
+ "source": [
+ "You'll start the chat by asking the model to generate sample time series data with noise and then output a sample of 10 data points:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "8iyq5sKCtstH"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ ""
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
],
- "text/plain": [
- ""
+ "source": [
+ "response = chat.send_message(\n",
+ " \"\"\"Generate code that creates sample time series\n",
+ "data of temperature vs. time in a test furnace. Add noise to the data. Output\n",
+ "a sample of 10 data points from the time series data.\"\"\"\n",
+ ")"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "#### Code stream"
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "vVhCKKBioJga"
+ },
+ "source": [
+ "Now you can iterate through the response to display any generated Python code and execution results by checking for `part.executable_code` and `part.code_execution_result` in the response parts:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "8pjwEGzft29N"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ ""
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "\n",
+ "```py\n",
+ "\n",
+ "import numpy as np\n",
+ "\n",
+ "# 1. Define time range\n",
+ "time = np.linspace(0, 100, 1000) # 1000 points from 0 to 100 seconds\n",
+ "\n",
+ "# 2. Create a base temperature profile (example: increasing temperature with a curve)\n",
+ "base_temp = 25 + 50 * (1 - np.exp(-time / 30)) # Starts at 25 and increases to 75\n",
+ "\n",
+ "# 3. Add noise\n",
+ "noise = np.random.normal(0, 2, len(time)) # Gaussian noise with mean 0 and std dev 2\n",
+ "temperature = base_temp + noise\n",
+ "\n",
+ "# 4. Output 10 data points\n",
+ "sample_indices = np.linspace(0, len(time) - 1, 10, dtype=int)\n",
+ "sample_time = time[sample_indices]\n",
+ "sample_temperature = temperature[sample_indices]\n",
+ "\n",
+ "print(\"Sample Time Series Data (Time, Temperature):\")\n",
+ "for t, temp in zip(sample_time, sample_temperature):\n",
+ " print(f\"({t:.2f}, {temp:.2f})\")\n",
+ "\n",
+ "```\n"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "`Sample Time Series Data (Time, Temperature):\n",
+ "(0.00, 24.66)\n",
+ "(11.11, 41.32)\n",
+ "(22.22, 48.47)\n",
+ "(33.33, 59.90)\n",
+ "(44.44, 63.03)\n",
+ "(55.56, 69.59)\n",
+ "(66.67, 68.61)\n",
+ "(77.78, 71.45)\n",
+ "(88.89, 72.76)\n",
+ "(100.00, 75.88)\n",
+ "`"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Outcome: OUTCOME_OK\n"
+ ]
+ }
],
- "text/plain": [
- ""
+ "source": [
+ "for part in response.candidates[0].content.parts:\n",
+ " if part.executable_code:\n",
+ " display(\n",
+ " Markdown(\n",
+ " f\"\"\"\n",
+ "```py\n",
+ "{part.executable_code.code}\n",
+ "```\n",
+ "\"\"\"\n",
+ " )\n",
+ " )\n",
+ " if part.code_execution_result:\n",
+ " display(Markdown(f\"`{part.code_execution_result.output}`\"))\n",
+ " print(\"\\nOutcome:\", part.code_execution_result.outcome)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4AHoGmDBQuxn"
+ },
+ "source": [
+ "Now you can ask the model to add a smoothed data series to the time series data:"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "\n",
- "```py\n",
- "\n",
- "import random\n",
- "\n",
- "def generate_random_names(n):\n",
- " names = []\n",
- " for _ in range(n):\n",
- " length = random.randint(4, 8)\n",
- " name = ''.join(random.choice('abcdefghijklmnopqrstuvwxyz') for _ in range(length))\n",
- " names.append(name)\n",
- " return names\n",
- "\n",
- "def filter_names_with_a(names):\n",
- " names_with_a = [name for name in names if 'a' in name]\n",
- " return names_with_a\n",
- "\n",
- "# Generate 20 random names\n",
- "random_names = generate_random_names(20)\n",
- "\n",
- "# Filter names containing 'a'\n",
- "names_with_a = filter_names_with_a(random_names)\n",
- "\n",
- "# Output the results\n",
- "print(f\"Original list of names: {random_names}\")\n",
- "print(f\"Number of names containing 'a': {len(names_with_a)}\")\n",
- "print(f\"List of names containing 'a': {names_with_a}\")\n",
- "\n",
- "```\n"
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "alR_tq3pss7j"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ ""
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
],
- "text/plain": [
- ""
+ "source": [
+ "response = chat.send_message(\n",
+ " \"\"\"Now add a data series that smooths the data using an appropriate method.\"\"\"\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "MnSlnA5FQ9UH"
+ },
+ "source": [
+ "And then display the generated Python code and execution results:"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "---"
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "uMXRpE0NtRYC"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ ""
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "\n",
+ "```py\n",
+ "\n",
+ "import numpy as np\n",
+ "import pandas as pd\n",
+ "\n",
+ "# 1. Generate time series data (same as before)\n",
+ "time = np.linspace(0, 100, 1000)\n",
+ "base_temp = 25 + 50 * (1 - np.exp(-time / 30))\n",
+ "noise = np.random.normal(0, 2, len(time))\n",
+ "temperature = base_temp + noise\n",
+ "\n",
+ "# 2. Apply a moving average filter\n",
+ "window_size = 20 # Choose a window size for the moving average\n",
+ "smoothed_temperature = pd.Series(temperature).rolling(window=window_size, min_periods=1).mean().to_numpy()\n",
+ "\n",
+ "\n",
+ "# 3. Output 10 data points\n",
+ "sample_indices = np.linspace(0, len(time) - 1, 10, dtype=int)\n",
+ "sample_time = time[sample_indices]\n",
+ "sample_temperature = temperature[sample_indices]\n",
+ "sample_smoothed_temperature = smoothed_temperature[sample_indices]\n",
+ "\n",
+ "print(\"Sample Time Series Data (Time, Temperature, Smoothed Temperature):\")\n",
+ "for t, temp, smooth_temp in zip(sample_time, sample_temperature, sample_smoothed_temperature):\n",
+ " print(f\"({t:.2f}, {temp:.2f}, {smooth_temp:.2f})\")\n",
+ "\n",
+ "```\n"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "`Sample Time Series Data (Time, Temperature, Smoothed Temperature):\n",
+ "(0.00, 26.57, 26.57)\n",
+ "(11.11, 39.02, 39.20)\n",
+ "(22.22, 51.49, 50.66)\n",
+ "(33.33, 58.64, 59.27)\n",
+ "(44.44, 63.01, 63.42)\n",
+ "(55.56, 68.41, 67.38)\n",
+ "(66.67, 67.22, 69.29)\n",
+ "(77.78, 70.09, 70.99)\n",
+ "(88.89, 71.91, 71.57)\n",
+ "(100.00, 72.73, 73.23)\n",
+ "`"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Outcome: OUTCOME_OK\n"
+ ]
+ }
],
- "text/plain": [
- ""
+ "source": [
+ "for part in response.candidates[0].content.parts:\n",
+ " if part.executable_code:\n",
+ " display(\n",
+ " Markdown(\n",
+ " f\"\"\"\n",
+ "```py\n",
+ "{part.executable_code.code}\n",
+ "```\n",
+ "\"\"\"\n",
+ " )\n",
+ " )\n",
+ " if part.code_execution_result:\n",
+ " display(Markdown(f\"`{part.code_execution_result.output}`\"))\n",
+ " print(\"\\nOutcome:\", part.code_execution_result.outcome)"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "#### Code result"
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "I4VacTEyQ4lD"
+ },
+ "source": [
+ "Finally, you can ask the model to generate descriptive statistics for the time series data:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "dmhPzmP8tywL"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ ""
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
],
- "text/plain": [
- ""
+ "source": [
+ "response = chat.send_message(\n",
+ " \"\"\"Now generate and output descriptive statistics on the time series data.\"\"\"\n",
+ ")"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "\n",
- "```\n",
- "Original list of names: ['adegmkn', 'xwxdie', 'aqmz', 'ncamgy', 'yvpqhe', 'hdfeb', 'mmoko', 'bvozwjev', 'zwhigum', 'mkniwn', 'yghvv', 'hhmmtg', 'nnksvzei', 'xwsb', 'kyohy', 'caksos', 'ejvwnt', 'hhfo', 'zkrxqqkl', 'cevz']\n",
- "Number of names containing 'a': 4\n",
- "List of names containing 'a': ['adegmkn', 'aqmz', 'ncamgy', 'caksos']\n",
- "\n",
- "```\n"
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "I1t_zA5jRHsB"
+ },
+ "source": [
+ "And then display the generated Python code and execution results:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "hIsMH3fPuKr5"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ ""
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "\n",
+ "```py\n",
+ "\n",
+ "import numpy as np\n",
+ "import pandas as pd\n",
+ "\n",
+ "# 1. Generate time series data (same as before)\n",
+ "time = np.linspace(0, 100, 1000)\n",
+ "base_temp = 25 + 50 * (1 - np.exp(-time / 30))\n",
+ "noise = np.random.normal(0, 2, len(time))\n",
+ "temperature = base_temp + noise\n",
+ "\n",
+ "# 2. Apply a moving average filter\n",
+ "window_size = 20\n",
+ "smoothed_temperature = pd.Series(temperature).rolling(window=window_size, min_periods=1).mean().to_numpy()\n",
+ "\n",
+ "# 3. Calculate descriptive statistics\n",
+ "def calculate_stats(data, name):\n",
+ " stats = {\n",
+ " f'{name}_mean': np.mean(data),\n",
+ " f'{name}_std': np.std(data),\n",
+ " f'{name}_min': np.min(data),\n",
+ " f'{name}_max': np.max(data),\n",
+ " f'{name}_25th': np.percentile(data, 25),\n",
+ " f'{name}_50th': np.percentile(data, 50),\n",
+ " f'{name}_75th': np.percentile(data, 75)\n",
+ " }\n",
+ " return stats\n",
+ "\n",
+ "noisy_stats = calculate_stats(temperature, \"noisy_temp\")\n",
+ "smoothed_stats = calculate_stats(smoothed_temperature, \"smoothed_temp\")\n",
+ "\n",
+ "# 4. Output the statistics\n",
+ "print(\"Descriptive Statistics:\")\n",
+ "for key, value in noisy_stats.items():\n",
+ " print(f\"{key}: {value:.2f}\")\n",
+ "for key, value in smoothed_stats.items():\n",
+ " print(f\"{key}: {value:.2f}\")\n",
+ "\n",
+ "```\n"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "`Descriptive Statistics:\n",
+ "noisy_temp_mean: 60.44\n",
+ "noisy_temp_std: 13.05\n",
+ "noisy_temp_min: 21.50\n",
+ "noisy_temp_max: 78.02\n",
+ "noisy_temp_25th: 52.83\n",
+ "noisy_temp_50th: 65.42\n",
+ "noisy_temp_75th: 70.60\n",
+ "smoothed_temp_mean: 59.99\n",
+ "smoothed_temp_std: 13.26\n",
+ "smoothed_temp_min: 24.71\n",
+ "smoothed_temp_max: 73.62\n",
+ "smoothed_temp_25th: 52.35\n",
+ "smoothed_temp_50th: 65.58\n",
+ "smoothed_temp_75th: 70.54\n",
+ "`"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Outcome: OUTCOME_OK\n"
+ ]
+ }
],
- "text/plain": [
- ""
+ "source": [
+ "for part in response.candidates[0].content.parts:\n",
+ " if part.executable_code:\n",
+ " display(\n",
+ " Markdown(\n",
+ " f\"\"\"\n",
+ "```py\n",
+ "{part.executable_code.code}\n",
+ "```\n",
+ "\"\"\"\n",
+ " )\n",
+ " )\n",
+ " if part.code_execution_result:\n",
+ " display(Markdown(f\"`{part.code_execution_result.output}`\"))\n",
+ " print(\"\\nOutcome:\", part.code_execution_result.outcome)"
]
- },
- "metadata": {},
- "output_type": "display_data"
},
{
- "data": {
- "text/markdown": [
- "---"
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "TBbNyWtDRZto"
+ },
+ "source": [
+ "This chat example demonstrates how you can use the Gemini API with code execution as a powerful tool for exploratory data analysis and more. Go forth and adapt this approach to your own projects and use cases!"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Bl6KG5Ufu5XQ"
+ },
+ "source": [
+ "### Code execution in a streaming session\n",
+ "\n",
+ "You can also use the code execution functionality with streaming output from the Gemini API.\n",
+ "\n",
+ "The following code demonstrates how the Gemini API can generate and execute code while streaming the results:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "gTNMMLkNu5JH"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ ""
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "Okay"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ ", I can do that. Here's how I'll approach this:"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "\n",
+ "\n",
+ "1. **Generate 20 random names:** I'll use"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ " Python's `random` module to generate a list of 20 random names. For simplicity, I'll use a combination of common first names."
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "\n",
+ "2. **Filter for names with 'a':** I'll iterate through the list and create a new list containing only the names that include the"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ " letter 'a' (case-insensitive).\n",
+ "3. **Count and output:** I'll count the number of names in the filtered list and output that count, along with the filtered list itself.\n",
+ "\n",
+ "Here's the code"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ ":\n",
+ "\n"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Code stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "\n",
+ "```py\n",
+ "\n",
+ "import random\n",
+ "\n",
+ "def generate_random_names(num_names):\n",
+ " first_names = [\"Alice\", \"Bob\", \"Charlie\", \"David\", \"Eve\", \"Frank\", \"Grace\", \"Henry\", \"Ivy\", \"Jack\", \"Kate\", \"Liam\", \"Mia\", \"Noah\", \"Olivia\", \"Peter\", \"Quinn\", \"Ryan\", \"Sophia\", \"Tom\"]\n",
+ " return random.choices(first_names, k=num_names)\n",
+ "\n",
+ "def filter_names_with_a(names):\n",
+ " return [name for name in names if 'a' in name.lower()]\n",
+ "\n",
+ "# Generate 20 random names\n",
+ "random_names = generate_random_names(20)\n",
+ "\n",
+ "# Filter names containing 'a'\n",
+ "names_with_a = filter_names_with_a(random_names)\n",
+ "\n",
+ "# Count the names with 'a'\n",
+ "count_of_names_with_a = len(names_with_a)\n",
+ "\n",
+ "# Output the results\n",
+ "print(f'{random_names=}')\n",
+ "print(f'{count_of_names_with_a=}')\n",
+ "print(f'{names_with_a=}')\n",
+ "\n",
+ "```\n"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Code result"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "\n",
+ "```\n",
+ "random_names=['Liam', 'Sophia', 'David', 'Kate', 'Olivia', 'Tom', 'David', 'Olivia', 'Charlie', 'Grace', 'Olivia', 'Alice', 'David', 'Jack', 'Peter', 'Ivy', 'Charlie', 'Tom', 'Jack', 'Eve']\n",
+ "count_of_names_with_a=15\n",
+ "names_with_a=['Liam', 'Sophia', 'David', 'Kate', 'Olivia', 'David', 'Olivia', 'Charlie', 'Grace', 'Olivia', 'Alice', 'David', 'Jack', 'Charlie', 'Jack']\n",
+ "\n",
+ "```\n"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "Okay"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ ", here's the output:\n",
+ "\n",
+ "The original list of 20 random"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ " names is: `['Liam', 'Sophia', 'David', 'Kate',"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ " 'Olivia', 'Tom', 'David', 'Olivia', 'Charlie', 'Grace', 'Olivia', 'Alice', 'David', 'Jack', 'Peter"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "', 'Ivy', 'Charlie', 'Tom', 'Jack', 'Eve']`\n",
+ "\n",
+ "The number of names containing the letter 'a' is: 1"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "5\n",
+ "\n",
+ "The list of names containing the letter 'a' is: `['Liam', 'Sophia', 'David', 'Kate', 'Olivia', 'David', 'Olivia', 'Charlie', 'Grace', 'Olivia', 'Alice',"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "#### Natural language stream"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ " 'David', 'Jack', 'Charlie', 'Jack']`\n"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "---"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
],
- "text/plain": [
- ""
+ "source": [
+ "PROMPT = \"\"\"\n",
+ "Generate a list of 20 random names, then create a new list with just the names\n",
+ "containing the letter 'a', then output the number of names that contain 'a' and\n",
+ "finally show me that new list.\n",
+ "\"\"\"\n",
+ "\n",
+ "for chunk in client.models.generate_content_stream(\n",
+ " model=MODEL_ID,\n",
+ " contents=PROMPT,\n",
+ " config=GenerateContentConfig(\n",
+ " tools=[code_execution_tool],\n",
+ " temperature=0,\n",
+ " ),\n",
+ "):\n",
+ " for part in chunk.candidates[0].content.parts:\n",
+ " if part.text:\n",
+ " display(Markdown(\"#### Natural language stream\"))\n",
+ " display(Markdown(part.text))\n",
+ " display(Markdown(\"---\"))\n",
+ " if part.executable_code:\n",
+ " display(Markdown(\"#### Code stream\"))\n",
+ " display(\n",
+ " Markdown(\n",
+ " f\"\"\"\n",
+ "```py\n",
+ "{part.executable_code.code}\n",
+ "```\n",
+ "\"\"\"\n",
+ " )\n",
+ " )\n",
+ " display(Markdown(\"---\"))\n",
+ " if part.code_execution_result:\n",
+ " display(Markdown(\"#### Code result\"))\n",
+ " display(\n",
+ " Markdown(\n",
+ " f\"\"\"\n",
+ "```\n",
+ "{part.code_execution_result.output}\n",
+ "```\n",
+ "\"\"\"\n",
+ " )\n",
+ " )\n",
+ " display(Markdown(\"---\"))"
+ ]
+ },
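+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Because the response arrives as many small fragments, you may prefer to accumulate the streamed parts and work with the complete text, generated code, and execution results once the stream finishes. Here's a minimal sketch of that variation, reusing the same `client`, `MODEL_ID`, `code_execution_tool`, and `PROMPT` defined above:\n",
+ "\n",
+ "```py\n",
+ "# Accumulate streamed parts instead of displaying each fragment\n",
+ "text_parts, code_blocks, results = [], [], []\n",
+ "\n",
+ "for chunk in client.models.generate_content_stream(\n",
+ "    model=MODEL_ID,\n",
+ "    contents=PROMPT,\n",
+ "    config=GenerateContentConfig(\n",
+ "        tools=[code_execution_tool],\n",
+ "        temperature=0,\n",
+ "    ),\n",
+ "):\n",
+ "    for part in chunk.candidates[0].content.parts:\n",
+ "        if part.text:\n",
+ "            text_parts.append(part.text)\n",
+ "        if part.executable_code:\n",
+ "            code_blocks.append(part.executable_code.code)\n",
+ "        if part.code_execution_result:\n",
+ "            results.append(part.code_execution_result.output)\n",
+ "\n",
+ "# Reassemble the full narrative, code, and output after streaming ends\n",
+ "print(''.join(text_parts))\n",
+ "print('\\n'.join(code_blocks))\n",
+ "print('\\n'.join(results))\n",
+ "```"
+ ]
+ },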
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2a4e033321ad"
+ },
+ "source": [
+ "This streaming example demonstrated how the Gemini API can generate, execute code, and provide results within a streaming session.\n",
+ "\n",
+ "## Summary\n",
+ "\n",
+ "Refer to the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/code-execution) for more details about code execution, and in particular, the [recommendations](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/code-execution#code-execution-vs-function-calling) regarding differences between code execution and [function calling](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling).\n",
+ "\n",
+ "### Next steps\n",
+ "\n",
+ "- See the [Google Gen AI SDK reference docs](https://googleapis.github.io/python-genai/)\n",
+ "- Explore other notebooks in the [Google Cloud Generative AI GitHub repository](https://github.com/GoogleCloudPlatform/generative-ai)\n",
+ "- Explore AI models in [Model Garden](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models)"
]
- },
- "metadata": {},
- "output_type": "display_data"
}
- ],
- "source": [
- "PROMPT = \"\"\"Generate a list of 20 random names, then create a new list with just the names\n",
- "containing the letter 'a', then output the number of names that contain 'a' and\n",
- "finally show me that new list.\"\"\"\n",
- "\n",
- "for chunk in client.models.generate_content_stream(\n",
- " model=MODEL_ID,\n",
- " contents=PROMPT,\n",
- " config=GenerateContentConfig(\n",
- " tools=[code_execution_tool],\n",
- " temperature=0,\n",
- " ),\n",
- "):\n",
- " for part in chunk.candidates[0].content.parts:\n",
- " if part.text:\n",
- " display(Markdown(\"#### Natural language stream\"))\n",
- " display(Markdown(part.text))\n",
- " display(Markdown(\"---\"))\n",
- " if part.executable_code:\n",
- " display(Markdown(\"#### Code stream\"))\n",
- " display(\n",
- " Markdown(\n",
- " f\"\"\"\n",
- "```py\n",
- "{part.executable_code.code}\n",
- "```\n",
- "\"\"\"\n",
- " )\n",
- " )\n",
- " display(Markdown(\"---\"))\n",
- " if part.code_execution_result:\n",
- " display(Markdown(\"#### Code result\"))\n",
- " display(\n",
- " Markdown(\n",
- " f\"\"\"\n",
- "```\n",
- "{part.code_execution_result.output}\n",
- "```\n",
- "\"\"\"\n",
- " )\n",
- " )\n",
- " display(Markdown(\"---\"))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "2a4e033321ad"
- },
- "source": [
- "This streaming example demonstrated how the Gemini API can generate, execute code, and provide results within a streaming session.\n",
- "\n",
- "## Summary\n",
- "\n",
- "Refer to the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/code-execution) for more details about code execution, and in particular, the [recommendations](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/code-execution#code-execution-vs-function-calling) regarding differences between code execution and [function calling](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling).\n",
- "\n",
- "### Next steps\n",
- "\n",
- "- See the [Google Gen AI SDK reference docs](https://googleapis.github.io/python-genai/)\n",
- "- Explore other notebooks in the [Google Cloud Generative AI GitHub repository](https://github.com/GoogleCloudPlatform/generative-ai)\n",
- "- Explore AI models in [Model Garden](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models)"
- ]
- }
- ],
- "metadata": {
- "colab": {
- "collapsed_sections": [
- "YZNpgtKJDdPZ"
- ],
- "name": "intro_code_execution.ipynb",
- "toc_visible": true
- },
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
+ ],
+ "metadata": {
+ "colab": {
+ "collapsed_sections": [
+ "YZNpgtKJDdPZ"
+ ],
+ "name": "intro_code_execution.ipynb",
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ },
+ "language_info": {
+ "name": "python",
+ "version": "3.11.0"
+ }
},
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.11.11"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
+ "nbformat": 4,
+ "nbformat_minor": 0
}