diff --git a/README.md b/README.md index 2a8f3e3..e24ece5 100644 --- a/README.md +++ b/README.md @@ -2,11 +2,11 @@ # DomainTools Iris Investigate Publisher: DomainTools -Connector Version: 1.5.1 +Connector Version: 1.5.2 Product Vendor: DomainTools Product Name: DomainTools Iris Investigate Product Version Supported (regex): ".\*" -Minimum Product Version: 6.1.1 +Minimum Product Version: 6.3.0 This app supports investigative actions to profile domain names, get risk scores, and find connected domains that share the same Whois details, web hosting profiles, SSL certificates, and more on DomainTools Iris Investigate @@ -75,8 +75,8 @@ In this example, we've specified to run three separate monitoring playbooks on d [this](https://github.com/DomainTools/playbooks/tree/main/Splunk%20SOAR) Github repository. -### Configuration Variables -The below configuration variables are required for this Connector to operate. These variables are specified when configuring a DomainTools Iris Investigate asset in SOAR. +### Configuration variables +This table lists the configuration variables required to operate DomainTools Iris Investigate. These variables are specified when configuring a DomainTools Iris Investigate asset in Splunk SOAR. VARIABLE | REQUIRED | TYPE | DESCRIPTION -------- | -------- | ---- | ----------- @@ -105,6 +105,8 @@ VARIABLE | REQUIRED | TYPE | DESCRIPTION [enrich domain](#action-enrich-domain) - Get all Iris Investigate data for a domain except counts using the high volume Iris Enrich API endpoint (if provisioned) [configure scheduled playbooks](#action-configure-scheduled-playbooks) - Run on initial setup to configure the optional monitoring playbooks. This action creates a custom list to manage the playbook scheduling and run status [on poll](#action-on-poll) - Execute scheduled playbooks based on the set interval(mins) in 'domaintools_scheduled_playbooks' custom list. 
Smaller intervals will result in more accurate schedules +[nod feed](#action-nod-feed) - Apex-level domains (e.g. example.com but not www.example.com) observed for the first time by the DomainTools sensor network, and which are not present in our DNSDB historical database +[nad feed](#action-nad-feed) - Apex-level domains (e.g. example.com but not www.example.com) DomainTools has newly observed in our DNS sensor network. This includes domains observed in DNS for the first time as well as domains observed in DNS again after not being observed for at least 10 days ## action: 'test connectivity' Validate the asset configuration for connectivity @@ -635,4 +637,64 @@ Read only: **True** No parameters are required for this action #### Action Output -No Output \ No newline at end of file +No Output + +## action: 'nod feed' +Apex-level domains (e.g. example.com but not www.example.com) observed for the first time by the DomainTools sensor network, and which are not present in our DNSDB historical database + +Type: **investigate** +Read only: **True** + +#### Action Parameters +PARAMETER | REQUIRED | DESCRIPTION | TYPE | CONTAINS +--------- | -------- | ----------- | ---- | -------- +**domain** | optional | Used to filter feed results. The filter can be an exact match or a partial match when the \* character is included at the beginning and/or end of the value. | string | +**after** | optional | A negative integer (in seconds) representing the start of the time window, relative to the current time in seconds, for which data will be provided. | string | +**session_id** | optional | Serves as a unique identifier for the session. This parameter ensures that data retrieval begins from the latest timestamp recorded in the previous data pull. | string | +**top** | optional | The number of results to return in the response payload. Primarily used for testing. 
| string | + +#### Action Output +DATA PATH | TYPE | CONTAINS | EXAMPLE VALUES +--------- | ---- | -------- | -------------- +action_result.data | string | | +action_result.data.\*.domain | string | `domain` | +action_result.data.\*.timestamp | string | | +action_result.status | string | | success failed +action_result.summary | string | | +action_result.message | string | | +action_result.parameter.after | string | | +action_result.parameter.domain | string | | +action_result.parameter.session_id | string | | +action_result.parameter.top | string | | +summary.total_objects | numeric | | 1 +summary.total_objects_successful | numeric | | 1 + +## action: 'nad feed' +Apex-level domains (e.g. example.com but not www.example.com) DomainTools has newly observed in our DNS sensor network. This includes domains observed in DNS for the first time as well as domains observed in DNS again after not being observed for at least 10 days + +Type: **investigate** +Read only: **True** + +#### Action Parameters +PARAMETER | REQUIRED | DESCRIPTION | TYPE | CONTAINS +--------- | -------- | ----------- | ---- | -------- +**domain** | optional | Used to filter feed results. The filter can be an exact match or a partial match when the \* character is included at the beginning and/or end of the value. | string | +**after** | optional | A negative integer (in seconds) representing the start of the time window, relative to the current time in seconds, for which data will be provided. | string | +**session_id** | optional | Serves as a unique identifier for the session. This parameter ensures that data retrieval begins from the latest timestamp recorded in the previous data pull. | string | +**top** | optional | The number of results to return in the response payload. Primarily used for testing. 
| string | + +#### Action Output +DATA PATH | TYPE | CONTAINS | EXAMPLE VALUES +--------- | ---- | -------- | -------------- +action_result.data | string | | +action_result.data.\*.domain | string | `domain` | +action_result.data.\*.timestamp | string | | +action_result.status | string | | success failed +action_result.summary | string | | +action_result.message | string | | +action_result.parameter.after | string | | +action_result.parameter.domain | string | | +action_result.parameter.session_id | string | | +action_result.parameter.top | string | | +summary.total_objects | numeric | | 1 +summary.total_objects_successful | numeric | | 1 \ No newline at end of file diff --git a/domaintools_iris.json b/domaintools_iris.json index 98d2779..cec91b2 100644 --- a/domaintools_iris.json +++ b/domaintools_iris.json @@ -7,12 +7,12 @@ "type": "information", "license": "Copyright (c) 2019-2024 DomainTools, LLC", "main_module": "domaintools_iris_connector.py", - "app_version": "1.5.1", - "utctime_updated": "2023-10-25T15:44:30.000000Z", + "app_version": "1.5.2", + "utctime_updated": "2024-12-20T18:16:45.550619Z", "product_vendor": "DomainTools", "product_name": "DomainTools Iris Investigate", "product_version_regex": ".*", - "min_phantom_version": "6.1.1", + "min_phantom_version": "6.3.0", "python_version": "3", "logo": "logo_domaintools_iris.svg", "logo_dark": "logo_domaintools_iris_dark.svg", @@ -2102,57 +2102,287 @@ "parameters": {}, "output": [], "versions": "EQ(*)" + }, + { + "action": "nod feed", + "description": "Apex-level domains (e.g. example.com but not www.example.com) observed for the first time by the DomainTools sensor network, and which are not present in our DNSDB historical database", + "type": "investigate", + "identifier": "nod_feed", + "read_only": true, + "parameters": { + "domain": { + "description": "Used to filter feed results. 
The filter can be an exact match or a partial match when the * character is included at the beginning and/or end of the value.", + "data_type": "string", + "order": 0 + }, + "after": { + "description": "A negative integer (in seconds) representing the start of the time window, relative to the current time in seconds, for which data will be provided.", + "data_type": "string", + "order": 1 + }, + "session_id": { + "description": "Serves as a unique identifier for the session. This parameter ensures that data retrieval begins from the latest timestamp recorded in the previous data pull.", + "data_type": "string", + "order": 2 + }, + "top": { + "description": "The number of results to return in the response payload. Primarily used for testing.", + "data_type": "string", + "order": 3 + } + }, + "render": { + "width": 12, + "title": "Newly Observed Domains List", + "type": "table", + "height": 10 + }, + "output": [ + { + "data_path": "action_result.data", + "data_type": "string" + }, + { + "data_path": "action_result.data.*.domain", + "data_type": "string", + "column_name": "Domain Names", + "column_order": 0, + "contains": [ + "domain" + ] + }, + { + "data_path": "action_result.data.*.timestamp", + "data_type": "string", + "column_name": "Time Stamp", + "column_order": 1 + }, + { + "data_path": "action_result.status", + "data_type": "string", + "example_values": [ + "success", + "failed" + ] + }, + { + "data_path": "action_result.summary", + "data_type": "string" + }, + { + "data_path": "action_result.message", + "data_type": "string" + }, + { + "data_path": "action_result.parameter.after", + "data_type": "string" + }, + { + "data_path": "action_result.parameter.domain", + "data_type": "string" + }, + { + "data_path": "action_result.parameter.session_id", + "data_type": "string" + }, + { + "data_path": "action_result.parameter.top", + "data_type": "string" + }, + { + "data_path": "summary.total_objects", + "data_type": "numeric", + "example_values": [ + 1 + ] + }, + { 
+ "data_path": "summary.total_objects_successful", + "data_type": "numeric", + "example_values": [ + 1 + ] + } + ], + "versions": "EQ(*)" + }, + { + "action": "nad feed", + "description": "Apex-level domains (e.g. example.com but not www.example.com) DomainTools has newly observed in our DNS sensor network. This includes domains observed in DNS for the first time as well as domains observed in DNS again after not being observed for at least 10 days", + "type": "investigate", + "identifier": "nad_feed", + "read_only": true, + "parameters": { + "domain": { + "description": "Used to filter feed results. The filter can be an exact match or a partial match when the * character is included at the beginning and/or end of the value.", + "data_type": "string", + "order": 0 + }, + "after": { + "description": "A negative integer (in seconds) representing the start of the time window, relative to the current time in seconds, for which data will be provided.", + "data_type": "string", + "order": 1 + }, + "session_id": { + "description": "Serves as a unique identifier for the session. This parameter ensures that data retrieval begins from the latest timestamp recorded in the previous data pull.", + "data_type": "string", + "order": 2 + }, + "top": { + "description": "The number of results to return in the response payload. 
Primarily used for testing.", + "data_type": "string", + "order": 3 + } + }, + "render": { + "width": 12, + "title": "Newly Active Domains List", + "type": "table", + "height": 10 + }, + "output": [ + { + "data_path": "action_result.data", + "data_type": "string" + }, + { + "data_path": "action_result.data.*.domain", + "data_type": "string", + "column_name": "Domain Names", + "column_order": 0, + "contains": [ + "domain" + ] + }, + { + "data_path": "action_result.data.*.timestamp", + "data_type": "string", + "column_name": "Time Stamp", + "column_order": 1 + }, + { + "data_path": "action_result.status", + "data_type": "string", + "example_values": [ + "success", + "failed" + ] + }, + { + "data_path": "action_result.summary", + "data_type": "string" + }, + { + "data_path": "action_result.message", + "data_type": "string" + }, + { + "data_path": "action_result.parameter.after", + "data_type": "string" + }, + { + "data_path": "action_result.parameter.domain", + "data_type": "string" + }, + { + "data_path": "action_result.parameter.session_id", + "data_type": "string" + }, + { + "data_path": "action_result.parameter.top", + "data_type": "string" + }, + { + "data_path": "summary.total_objects", + "data_type": "numeric", + "example_values": [ + 1 + ] + }, + { + "data_path": "summary.total_objects_successful", + "data_type": "numeric", + "example_values": [ + 1 + ] + } + ], + "versions": "EQ(*)" } ], "pip39_dependencies": { "wheel": [ { "module": "anyio", - "input_file": "wheels/py3/anyio-4.2.0-py3-none-any.whl" + "input_file": "wheels/py3/anyio-3.6.1-py3-none-any.whl" }, { - "module": "dateparser", - "input_file": "wheels/shared/dateparser-1.2.0-py2.py3-none-any.whl" + "module": "certifi", + "input_file": "wheels/py3/certifi-2022.6.15-py3-none-any.whl" }, { - "module": "domaintools_api", - "input_file": "wheels/shared/domaintools_api-1.0.1-py2.py3-none-any.whl" + "module": "charset_normalizer", + "input_file": "wheels/py3/charset_normalizer-2.0.12-py3-none-any.whl" + }, 
+ { + "module": "click", + "input_file": "wheels/py3/click-8.1.7-py3-none-any.whl" }, { - "module": "exceptiongroup", - "input_file": "wheels/py3/exceptiongroup-1.2.0-py3-none-any.whl" + "module": "domaintools_api", + "input_file": "wheels/shared/domaintools_api-2.1.0-py2.py3-none-any.whl" }, { "module": "filelock", - "input_file": "wheels/py3/filelock-3.13.1-py3-none-any.whl" + "input_file": "wheels/py3/filelock-3.7.1-py3-none-any.whl" }, { "module": "h11", - "input_file": "wheels/py3/h11-0.14.0-py3-none-any.whl" + "input_file": "wheels/py3/h11-0.12.0-py3-none-any.whl" }, { "module": "httpcore", - "input_file": "wheels/py3/httpcore-1.0.2-py3-none-any.whl" + "input_file": "wheels/py3/httpcore-0.15.0-py3-none-any.whl" }, { "module": "httpx", - "input_file": "wheels/py3/httpx-0.26.0-py3-none-any.whl" + "input_file": "wheels/py3/httpx-0.23.0-py3-none-any.whl" }, { - "module": "python_dateutil", - "input_file": "wheels/shared/python_dateutil-2.8.2-py2.py3-none-any.whl" + "module": "idna", + "input_file": "wheels/py3/idna-3.3-py3-none-any.whl" }, { - "module": "pytz", - "input_file": "wheels/shared/pytz-2024.1-py2.py3-none-any.whl" + "module": "markdown_it_py", + "input_file": "wheels/py3/markdown_it_py-3.0.0-py3-none-any.whl" }, { - "module": "regex", - "input_file": "wheels/py39/regex-2023.12.25-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl" + "module": "mdurl", + "input_file": "wheels/py3/mdurl-0.1.2-py3-none-any.whl" + }, + { + "module": "pygments", + "input_file": "wheels/py3/pygments-2.18.0-py3-none-any.whl" + }, + { + "module": "requests", + "input_file": "wheels/py3/requests-2.28.0-py3-none-any.whl" }, { "module": "requests_file", - "input_file": "wheels/shared/requests_file-2.0.0-py2.py3-none-any.whl" + "input_file": "wheels/shared/requests_file-1.5.1-py2.py3-none-any.whl" + }, + { + "module": "rfc3986", + "input_file": "wheels/shared/rfc3986-1.5.0-py2.py3-none-any.whl" + }, + { + "module": "rich", + "input_file": 
"wheels/py3/rich-13.9.4-py3-none-any.whl" + }, + { + "module": "shellingham", + "input_file": "wheels/shared/shellingham-1.5.4-py2.py3-none-any.whl" }, { "module": "six", @@ -2160,19 +2390,23 @@ }, { "module": "sniffio", - "input_file": "wheels/py3/sniffio-1.3.0-py3-none-any.whl" + "input_file": "wheels/py3/sniffio-1.2.0-py3-none-any.whl" }, { "module": "tldextract", "input_file": "wheels/py3/tldextract-3.4.4-py3-none-any.whl" }, + { + "module": "typer", + "input_file": "wheels/py3/typer-0.13.0-py3-none-any.whl" + }, { "module": "typing_extensions", - "input_file": "wheels/py3/typing_extensions-4.5.0-py3-none-any.whl" + "input_file": "wheels/py3/typing_extensions-4.12.2-py3-none-any.whl" }, { - "module": "tzlocal", - "input_file": "wheels/py3/tzlocal-5.2-py3-none-any.whl" + "module": "urllib3", + "input_file": "wheels/shared/urllib3-1.26.9-py2.py3-none-any.whl" } ] }, diff --git a/domaintools_iris_connector.py b/domaintools_iris_connector.py index 8876eba..0a2558c 100644 --- a/domaintools_iris_connector.py +++ b/domaintools_iris_connector.py @@ -14,6 +14,7 @@ import phantom.app as phantom import requests import tldextract + # 3rd party imports from domaintools import API from phantom.action_result import ActionResult @@ -32,6 +33,8 @@ class DomainToolsConnector(BaseConnector): ACTION_ID_LOAD_HASH = "load_hash" ACTION_ID_ON_POLL = "on_poll" ACTION_ID_CONFIGURE_SCHEDULED_PLAYBOOK = "configure_monitoring_scheduled_playbooks" + ACTION_ID_NOD_FEED = "nod_feed" + ACTION_ID_NAD_FEED = "nad_feed" def __init__(self): # Call the BaseConnectors init first @@ -64,6 +67,9 @@ def initialize(self): return phantom.APP_SUCCESS + def _is_feeds_service(self, service): + return service in ("nod", "nad") + def _handle_py_ver_for_byte(self, input_str): """ This method returns the binary|original string based on the Python version. 
@@ -90,7 +96,7 @@ def _get_error_message_from_exception(self, e): error_msg = e.args[1] elif len(e.args) == 1: error_msg = e.args[0] - except: + except BaseException: pass return error_code, error_msg @@ -100,6 +106,21 @@ def _clean_empty_response(self, response): if response.get("domains") == []: del response["domains"] + def _parse_feeds_response(self, action_result, response_json): + rows = response_json.strip().split("\n") + data = [] + for row in rows: + feed_result = json.loads(row) + data.append( + { + "timestamp": feed_result.get("timestamp"), + "domain": feed_result.get("domain"), + } + ) + + action_result.update_data(data) + return action_result.set_status(phantom.APP_SUCCESS) + def _parse_response(self, action_result, response_json): """ No need to do exception handling, since this function call has a try...except around it. @@ -148,17 +169,13 @@ def _parse_response(self, action_result, response_json): self._clean_empty_response(response) if "results" in response: - action_result.update_summary( - {"Connected Domains Count": len(response["results"])} - ) + action_result.update_summary({"Connected Domains Count": len(response["results"])}) action_result.update_data(response["results"]) else: action_result.add_data(response) if response.get("limit_exceeded"): - msg = response.get( - "message", "Response limit exceeded, please narrow your search" - ) + msg = response.get("message", "Response limit exceeded, please narrow your search") action_result.update_summary({"Error": msg}) return action_result.set_status(phantom.APP_ERROR, msg) @@ -166,9 +183,7 @@ def _parse_response(self, action_result, response_json): return action_result.set_status( phantom.APP_ERROR, - error.get( - "message", "An unknown error occurred while querying domaintools" - ), + error.get("message", "An unknown error occurred while querying domaintools"), ) def _do_query(self, service, action_result, query_args=None): @@ -183,6 +198,7 @@ def _do_query(self, service, action_result, 
query_args=None): """ self.save_progress("Connecting to domaintools") + always_sign_api_key = query_args.pop("always_sign_api_key", True) try: dt_api = API( @@ -194,12 +210,10 @@ def _do_query(self, service, action_result, query_args=None): proxy_url=self._proxy_url, verify_ssl=self._ssl, https=self._ssl, - always_sign_api_key=True, + always_sign_api_key=always_sign_api_key, ) except Exception as e: - return action_result.set_status( - phantom.APP_ERROR, "Unable connect to DomainTools API", e - ) + return action_result.set_status(phantom.APP_ERROR, "Unable connect to DomainTools API", e) try: domains = query_args.get("domains") @@ -222,6 +236,11 @@ def _do_query(self, service, action_result, query_args=None): try: response_json = response.data() + + if self._is_feeds_service(service): + # Separate parsing of feeds product + return self._parse_feeds_response(action_result, response_json) + except Exception as e: return action_result.set_status( phantom.APP_ERROR, @@ -243,9 +262,7 @@ def _do_query(self, service, action_result, query_args=None): ) self.save_progress(f"Parsing {len(results_data)} results...") - response_json["response"]["results"] = self._convert_risk_scores_to_string( - results_data - ) + response_json["response"]["results"] = self._convert_risk_scores_to_string(results_data) try: return self._parse_response(action_result, response_json) @@ -263,20 +280,14 @@ def _convert_risk_scores_to_string(self, results_data): final_result = [] for result in results_data: result.get("domain_risk").update( - { - "risk_score_string": self._convert_null_value_to_empty_string( - result.get("domain_risk", {}).get("risk_score") - ) - } + {"risk_score_string": self._convert_null_value_to_empty_string(result.get("domain_risk", {}).get("risk_score"))} ) final_result.append(result) # Make the final result sorted in descending order by default return sorted( final_result, - key=lambda d: 0 - if d.get("domain_risk", {}).get("risk_score_string") == "" - else 
d.get("domain_risk", {}).get("risk_score"), + key=lambda d: (0 if d.get("domain_risk", {}).get("risk_score_string") == "" else d.get("domain_risk", {}).get("risk_score")), reverse=True, ) @@ -354,6 +365,10 @@ def handle_action(self, param): ret_val = self._on_poll(param) elif action_id == self.ACTION_ID_CONFIGURE_SCHEDULED_PLAYBOOK: ret_val = self._configure_monitoring_scheduled_playbooks(param) + elif action_id == self.ACTION_ID_NOD_FEED: + ret_val = self._nod_feed(param) + elif action_id == self.ACTION_ID_NAD_FEED: + ret_val = self._nad_feed(param) return ret_val @@ -381,13 +396,9 @@ def _get_proxy_url(self, config): proxy_password = config.get("proxy_password") if not (proxy_username and proxy_password): - raise Exception( - "Must provide both a Proxy Username and Proxy Password." - ) + raise Exception("Must provide both a Proxy Username and Proxy Password.") - proxy_url = ( - f"{protocol}://{proxy_username}:{proxy_password}@{server_address}" - ) + proxy_url = f"{protocol}://{proxy_username}:{proxy_password}@{server_address}" return proxy_url @@ -465,9 +476,7 @@ def _reverse_lookup_domain(self, param): "ip": a["address"]["value"], "type": "Host IP", "count": a["address"]["count"], - "count_string": self._convert_null_value_to_empty_string( - a["address"]["count"] - ), + "count_string": self._convert_null_value_to_empty_string(a["address"]["count"]), } ) @@ -479,9 +488,7 @@ def _reverse_lookup_domain(self, param): "ip": b["value"], "type": "MX IP", "count": b["count"], - "count_string": self._convert_null_value_to_empty_string( - b["count"] - ), + "count_string": self._convert_null_value_to_empty_string(b["count"]), } ) @@ -493,9 +500,7 @@ def _reverse_lookup_domain(self, param): "ip": b["value"], "type": "NS IP", "count": b["count"], - "count_string": self._convert_null_value_to_empty_string( - b["count"] - ), + "count_string": self._convert_null_value_to_empty_string(b["count"]), } ) @@ -546,9 +551,7 @@ def _domain_reputation(self, param): if not data: return 
action_result.get_status() - action_result.update_summary( - {"domain_risk": data[0]["domain_risk"]["risk_score"]} - ) + action_result.update_summary({"domain_risk": data[0]["domain_risk"]["risk_score"]}) for a in data[0]["domain_risk"]["components"]: if a["name"] == "zerolist": @@ -584,9 +587,7 @@ def _load_hash(self, param): def _pivot_action(self, param): action_result = self.add_action_result(ActionResult(param)) - query_field = ( - param["pivot_type"] if param["pivot_type"] != "domain" else "domains" - ) + query_field = param["pivot_type"] if param["pivot_type"] != "domain" else "domains" if query_field == "domains": query_value = self._domains else: @@ -602,27 +603,21 @@ def _pivot_action(self, param): if params["data_updated_after"].lower() == "today": params["data_updated_after"] = datetime.today().strftime("%Y-%m-%d") if params["data_updated_after"].lower() == "yesterday": - params["data_updated_after"] = ( - datetime.now() - timedelta(days=1) - ).strftime("%Y-%m-%d") + params["data_updated_after"] = (datetime.now() - timedelta(days=1)).strftime("%Y-%m-%d") if "create_date" in param: params["create_date"] = param["create_date"] if params["create_date"].lower() == "today": params["create_date"] = datetime.today().strftime("%Y-%m-%d") if params["create_date"].lower() == "yesterday": - params["create_date"] = (datetime.now() - timedelta(days=1)).strftime( - "%Y-%m-%d" - ) + params["create_date"] = (datetime.now() - timedelta(days=1)).strftime("%Y-%m-%d") if "expiration_date" in param: params["expiration_date"] = param["expiration_date"] if params["expiration_date"].lower() == "today": params["expiration_date"] = datetime.today().strftime("%Y-%m-%d") if params["expiration_date"].lower() == "yesterday": - params["expiration_date"] = ( - datetime.now() - timedelta(days=1) - ).strftime("%Y-%m-%d") + params["expiration_date"] = (datetime.now() - timedelta(days=1)).strftime("%Y-%m-%d") if "status" in param and param["status"].lower() != "any": params["active"] = 
param["status"].lower() == "active" @@ -667,9 +662,7 @@ def _get_scheduled_playbooks(self): return [], [] def _get_playbook_monitoring_container(self, event_id, playbook_name): - self.debug_print( - f"Getting playbook corresponding container with ID of {event_id}" - ) + self.debug_print(f"Getting playbook corresponding container with ID of {event_id}") config = self.get_config() if not event_id: return ( @@ -677,9 +670,7 @@ def _get_playbook_monitoring_container(self, event_id, playbook_name): f"No event ID set in `{playbook_name}` settings. Please input a valid event ID", ) - response = phantom.requests.get( - f"{self._rest_url}container/{event_id}", verify=False - ) + response = phantom.requests.get(f"{self._rest_url}container/{event_id}", verify=False) response.raise_for_status() container = response.json() ingest_label_name = config.get("ingest", {}).get("container_label", "") @@ -705,9 +696,7 @@ def _check_interval(self, interval: int, last_run: str) -> bool: def _run_playbook(self, data: str): self.debug_print(f"Running playbook: {data.get('playbook_id')}") - response = phantom.requests.post( - f"{self._rest_url}playbook_run/", data=json.dumps(data), verify=False - ) + response = phantom.requests.post(f"{self._rest_url}playbook_run/", data=json.dumps(data), verify=False) response.raise_for_status() if response.json().get("recieved"): return True @@ -715,9 +704,7 @@ def _run_playbook(self, data: str): return False def _create_scheduled_playbook_list(self): - self.debug_print( - f"Creating scheduled playbook list: {self._scheduled_playbooks_list_name}" - ) + self.debug_print(f"Creating scheduled playbook list: {self._scheduled_playbooks_list_name}") request_body = { "content": [ [ @@ -793,9 +780,7 @@ def _on_poll(self, param): headers, scheduled_playbooks = self._get_scheduled_playbooks() if not scheduled_playbooks: - return action_result.set_status( - phantom.APP_ERROR, "No scheduled playbooks found." 
- ) + return action_result.set_status(phantom.APP_ERROR, "No scheduled playbooks found.") new_content = [headers] for pb in scheduled_playbooks: @@ -851,17 +836,11 @@ def _on_poll(self, param): if not sucess_call: remarks = f"Something went wrong when running {name}." # append new values - new_content.append( - [name, event_id, interval, last_run, last_run_status, remarks] - ) + new_content.append([name, event_id, interval, last_run, last_run_status, remarks]) - self.debug_print( - f"New {self._scheduled_playbooks_list_name} Content: {new_content}" - ) + self.debug_print(f"New {self._scheduled_playbooks_list_name} Content: {new_content}") # update the scheduled playbook list - update_list_status = self._update_scheduled_playbook_list( - {"content": new_content} - ) + update_list_status = self._update_scheduled_playbook_list({"content": new_content}) self.debug_print(f"Updated List Status: {update_list_status}") if update_list_status: return action_result.set_status(phantom.APP_SUCCESS, "Completed.") @@ -883,6 +862,34 @@ def _configure_monitoring_scheduled_playbooks(self, param): f"`{self._scheduled_playbooks_list_name}` custom list {res.get('message')}", ) + def _nod_feed(self, param): + self.save_progress("Starting nod_feed action.") + action_result = self.add_action_result(ActionResult(param)) + params = {"always_sign_api_key": False} + params.update(param) + session_id = params.pop("session_id", None) + if session_id: + params["sessionID"] = session_id + + self._do_query("nod", action_result, query_args=params) + self.save_progress("Completed nod_feed action.") + + return action_result.get_status() + + def _nad_feed(self, param): + self.save_progress("Starting nad_feed action.") + action_result = self.add_action_result(ActionResult(param)) + params = {"always_sign_api_key": False} + params.update(param) + session_id = params.pop("session_id", None) + if session_id: + params["sessionID"] = session_id + + self._do_query("nad", action_result, query_args=params) + 
self.save_progress("Completed nad_feed action.") + + return action_result.get_status() + if __name__ == "__main__": import argparse diff --git a/iris_view.py b/iris_view.py index e776883..32224e7 100644 --- a/iris_view.py +++ b/iris_view.py @@ -6,7 +6,7 @@ def display_view(provides, all_app_runs, context): - context['results'] = results = [] + context["results"] = results = [] for _, action_results in all_app_runs: for result in action_results: ctx_result = get_ctx_result(result) @@ -22,14 +22,17 @@ def get_ctx_result(result): ctx_result = {} param = result.get_param() data_list = result.get_data() - if (data_list): - ctx_result['param'] = param - ctx_result['data'] = [] - ctx_result['sorted_data'] = [] + if data_list: + ctx_result["param"] = param + ctx_result["data"] = [] + ctx_result["sorted_data"] = [] for data in data_list: extracted_data = extract_data(data) - ctx_result['data'].append(extracted_data) - sorted_keys = sorted(extracted_data, key=lambda kv_pair: (not kv_pair.startswith('domain'), kv_pair)) + ctx_result["data"].append(extracted_data) + sorted_keys = sorted( + extracted_data, + key=lambda kv_pair: (not kv_pair.startswith("domain"), kv_pair), + ) sorted_data = [] # TODO: This is temporary only. 
Remove this from the sorted_data once update on pivot links is implemented @@ -38,7 +41,7 @@ def get_ctx_result(result): for key in sorted_keys: if extracted_data[key] or extracted_data[key] == 0: data_count = "" - if type(extracted_data[key]) is dict: + if isinstance(extracted_data[key], dict): value = extracted_data[key].get("value") count = extracted_data[key].get("count") if value in ("", "None", None): @@ -51,7 +54,7 @@ def get_ctx_result(result): is_list = isinstance(data_value, list) key = " ".join(unique_list(key.split())) sorted_data.append((key, data_value, data_count, is_list, queried_domain)) - ctx_result['sorted_data'].append(sorted_data) + ctx_result["sorted_data"].append(sorted_data) return ctx_result diff --git a/pyproject.toml b/pyproject.toml new file mode 100644 index 0000000..0ee83fe --- /dev/null +++ b/pyproject.toml @@ -0,0 +1,4 @@ +[tool.black] +line-length = 145 +target-version = ['py39'] +verbose = true diff --git a/release_notes/1.5.2.md b/release_notes/1.5.2.md new file mode 100644 index 0000000..ce6b178 --- /dev/null +++ b/release_notes/1.5.2.md @@ -0,0 +1,2 @@ +* Added `nod_feed` action to support NOD Feeds. +* Added `nad_feed` action to support NAD Feeds. 
\ No newline at end of file
diff --git a/wheels/py3/anyio-3.6.1-py3-none-any.whl b/wheels/py3/anyio-3.6.1-py3-none-any.whl
new file mode 100644
index 0000000..60d6fb0
Binary files /dev/null and b/wheels/py3/anyio-3.6.1-py3-none-any.whl differ
diff --git a/wheels/py3/anyio-4.2.0-py3-none-any.whl b/wheels/py3/anyio-4.2.0-py3-none-any.whl
deleted file mode 100644
index 0e79d9f..0000000
Binary files a/wheels/py3/anyio-4.2.0-py3-none-any.whl and /dev/null differ
diff --git a/wheels/py3/certifi-2022.6.15-py3-none-any.whl b/wheels/py3/certifi-2022.6.15-py3-none-any.whl
new file mode 100644
index 0000000..6e70631
Binary files /dev/null and b/wheels/py3/certifi-2022.6.15-py3-none-any.whl differ
diff --git a/wheels/py3/charset_normalizer-2.0.12-py3-none-any.whl b/wheels/py3/charset_normalizer-2.0.12-py3-none-any.whl
new file mode 100644
index 0000000..17a2dfb
Binary files /dev/null and b/wheels/py3/charset_normalizer-2.0.12-py3-none-any.whl differ
diff --git a/wheels/py3/click-8.1.7-py3-none-any.whl b/wheels/py3/click-8.1.7-py3-none-any.whl
new file mode 100644
index 0000000..5e8a550
Binary files /dev/null and b/wheels/py3/click-8.1.7-py3-none-any.whl differ
diff --git a/wheels/py3/exceptiongroup-1.2.0-py3-none-any.whl b/wheels/py3/exceptiongroup-1.2.0-py3-none-any.whl
deleted file mode 100644
index 9195d36..0000000
Binary files a/wheels/py3/exceptiongroup-1.2.0-py3-none-any.whl and /dev/null differ
diff --git a/wheels/py3/filelock-3.13.1-py3-none-any.whl b/wheels/py3/filelock-3.13.1-py3-none-any.whl
deleted file mode 100644
index 857b8c2..0000000
Binary files a/wheels/py3/filelock-3.13.1-py3-none-any.whl and /dev/null differ
diff --git a/wheels/py3/filelock-3.7.1-py3-none-any.whl b/wheels/py3/filelock-3.7.1-py3-none-any.whl
new file mode 100644
index 0000000..9ef4eb7
Binary files /dev/null and b/wheels/py3/filelock-3.7.1-py3-none-any.whl differ
diff --git a/wheels/py3/h11-0.12.0-py3-none-any.whl b/wheels/py3/h11-0.12.0-py3-none-any.whl
new file mode 100644
index 0000000..d2eabf7
Binary files /dev/null and b/wheels/py3/h11-0.12.0-py3-none-any.whl differ
diff --git a/wheels/py3/h11-0.14.0-py3-none-any.whl b/wheels/py3/h11-0.14.0-py3-none-any.whl
deleted file mode 100644
index a02c8de..0000000
Binary files a/wheels/py3/h11-0.14.0-py3-none-any.whl and /dev/null differ
diff --git a/wheels/py3/httpcore-0.15.0-py3-none-any.whl b/wheels/py3/httpcore-0.15.0-py3-none-any.whl
new file mode 100644
index 0000000..31d0330
Binary files /dev/null and b/wheels/py3/httpcore-0.15.0-py3-none-any.whl differ
diff --git a/wheels/py3/httpcore-1.0.2-py3-none-any.whl b/wheels/py3/httpcore-1.0.2-py3-none-any.whl
deleted file mode 100644
index 8bb8a4a..0000000
Binary files a/wheels/py3/httpcore-1.0.2-py3-none-any.whl and /dev/null differ
diff --git a/wheels/py3/httpx-0.23.0-py3-none-any.whl b/wheels/py3/httpx-0.23.0-py3-none-any.whl
new file mode 100644
index 0000000..381d9fe
Binary files /dev/null and b/wheels/py3/httpx-0.23.0-py3-none-any.whl differ
diff --git a/wheels/py3/httpx-0.26.0-py3-none-any.whl b/wheels/py3/httpx-0.26.0-py3-none-any.whl
deleted file mode 100644
index 1d984fd..0000000
Binary files a/wheels/py3/httpx-0.26.0-py3-none-any.whl and /dev/null differ
diff --git a/wheels/py3/idna-3.3-py3-none-any.whl b/wheels/py3/idna-3.3-py3-none-any.whl
new file mode 100644
index 0000000..060541b
Binary files /dev/null and b/wheels/py3/idna-3.3-py3-none-any.whl differ
diff --git a/wheels/py3/markdown_it_py-3.0.0-py3-none-any.whl b/wheels/py3/markdown_it_py-3.0.0-py3-none-any.whl
new file mode 100644
index 0000000..2691698
Binary files /dev/null and b/wheels/py3/markdown_it_py-3.0.0-py3-none-any.whl differ
diff --git a/wheels/py3/mdurl-0.1.2-py3-none-any.whl b/wheels/py3/mdurl-0.1.2-py3-none-any.whl
new file mode 100644
index 0000000..6b8b6ab
Binary files /dev/null and b/wheels/py3/mdurl-0.1.2-py3-none-any.whl differ
diff --git a/wheels/py3/pygments-2.18.0-py3-none-any.whl b/wheels/py3/pygments-2.18.0-py3-none-any.whl
new file mode 100644
index 0000000..e67ab03
Binary files /dev/null and b/wheels/py3/pygments-2.18.0-py3-none-any.whl differ
diff --git a/wheels/py3/requests-2.28.0-py3-none-any.whl b/wheels/py3/requests-2.28.0-py3-none-any.whl
new file mode 100644
index 0000000..7b3c145
Binary files /dev/null and b/wheels/py3/requests-2.28.0-py3-none-any.whl differ
diff --git a/wheels/py3/rich-13.9.4-py3-none-any.whl b/wheels/py3/rich-13.9.4-py3-none-any.whl
new file mode 100644
index 0000000..e1864d8
Binary files /dev/null and b/wheels/py3/rich-13.9.4-py3-none-any.whl differ
diff --git a/wheels/py3/sniffio-1.2.0-py3-none-any.whl b/wheels/py3/sniffio-1.2.0-py3-none-any.whl
new file mode 100644
index 0000000..caa44a8
Binary files /dev/null and b/wheels/py3/sniffio-1.2.0-py3-none-any.whl differ
diff --git a/wheels/py3/sniffio-1.3.0-py3-none-any.whl b/wheels/py3/sniffio-1.3.0-py3-none-any.whl
deleted file mode 100644
index 7d72f26..0000000
Binary files a/wheels/py3/sniffio-1.3.0-py3-none-any.whl and /dev/null differ
diff --git a/wheels/py3/typer-0.13.0-py3-none-any.whl b/wheels/py3/typer-0.13.0-py3-none-any.whl
new file mode 100644
index 0000000..cb7be2e
Binary files /dev/null and b/wheels/py3/typer-0.13.0-py3-none-any.whl differ
diff --git a/wheels/py3/typing_extensions-4.12.2-py3-none-any.whl b/wheels/py3/typing_extensions-4.12.2-py3-none-any.whl
new file mode 100644
index 0000000..f6cc799
Binary files /dev/null and b/wheels/py3/typing_extensions-4.12.2-py3-none-any.whl differ
diff --git a/wheels/py3/typing_extensions-4.5.0-py3-none-any.whl b/wheels/py3/typing_extensions-4.5.0-py3-none-any.whl
deleted file mode 100644
index 5d32cb7..0000000
Binary files a/wheels/py3/typing_extensions-4.5.0-py3-none-any.whl and /dev/null differ
diff --git a/wheels/py3/tzlocal-5.2-py3-none-any.whl b/wheels/py3/tzlocal-5.2-py3-none-any.whl
deleted file mode 100644
index fee6193..0000000
Binary files a/wheels/py3/tzlocal-5.2-py3-none-any.whl and /dev/null differ
diff --git a/wheels/py39/regex-2023.12.25-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl b/wheels/py39/regex-2023.12.25-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
deleted file mode 100644
index 977c0ab..0000000
Binary files a/wheels/py39/regex-2023.12.25-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl and /dev/null differ
diff --git a/wheels/regex-2023.6.3.tar.gz b/wheels/regex-2023.6.3.tar.gz
deleted file mode 100644
index a7d8e9b..0000000
Binary files a/wheels/regex-2023.6.3.tar.gz and /dev/null differ
diff --git a/wheels/shared/dateparser-1.2.0-py2.py3-none-any.whl b/wheels/shared/dateparser-1.2.0-py2.py3-none-any.whl
deleted file mode 100644
index 6298dfe..0000000
Binary files a/wheels/shared/dateparser-1.2.0-py2.py3-none-any.whl and /dev/null differ
diff --git a/wheels/shared/domaintools_api-1.0.1-py2.py3-none-any.whl b/wheels/shared/domaintools_api-1.0.1-py2.py3-none-any.whl
deleted file mode 100644
index 484c025..0000000
Binary files a/wheels/shared/domaintools_api-1.0.1-py2.py3-none-any.whl and /dev/null differ
diff --git a/wheels/shared/domaintools_api-2.1.0-py2.py3-none-any.whl b/wheels/shared/domaintools_api-2.1.0-py2.py3-none-any.whl
new file mode 100644
index 0000000..f07e78b
Binary files /dev/null and b/wheels/shared/domaintools_api-2.1.0-py2.py3-none-any.whl differ
diff --git a/wheels/shared/python_dateutil-2.8.2-py2.py3-none-any.whl b/wheels/shared/python_dateutil-2.8.2-py2.py3-none-any.whl
deleted file mode 100644
index 8ffb923..0000000
Binary files a/wheels/shared/python_dateutil-2.8.2-py2.py3-none-any.whl and /dev/null differ
diff --git a/wheels/shared/pytz-2024.1-py2.py3-none-any.whl b/wheels/shared/pytz-2024.1-py2.py3-none-any.whl
deleted file mode 100644
index 571f586..0000000
Binary files a/wheels/shared/pytz-2024.1-py2.py3-none-any.whl and /dev/null differ
diff --git a/wheels/shared/requests_file-1.5.1-py2.py3-none-any.whl b/wheels/shared/requests_file-1.5.1-py2.py3-none-any.whl
new file mode 100644
index 0000000..1631a97
Binary files /dev/null and b/wheels/shared/requests_file-1.5.1-py2.py3-none-any.whl differ
diff --git a/wheels/shared/requests_file-2.0.0-py2.py3-none-any.whl b/wheels/shared/requests_file-2.0.0-py2.py3-none-any.whl
deleted file mode 100644
index 4f378b6..0000000
Binary files a/wheels/shared/requests_file-2.0.0-py2.py3-none-any.whl and /dev/null differ
diff --git a/wheels/shared/rfc3986-1.5.0-py2.py3-none-any.whl b/wheels/shared/rfc3986-1.5.0-py2.py3-none-any.whl
new file mode 100644
index 0000000..f723df5
Binary files /dev/null and b/wheels/shared/rfc3986-1.5.0-py2.py3-none-any.whl differ
diff --git a/wheels/shared/shellingham-1.5.4-py2.py3-none-any.whl b/wheels/shared/shellingham-1.5.4-py2.py3-none-any.whl
new file mode 100644
index 0000000..63a6dc1
Binary files /dev/null and b/wheels/shared/shellingham-1.5.4-py2.py3-none-any.whl differ
diff --git a/wheels/shared/urllib3-1.26.9-py2.py3-none-any.whl b/wheels/shared/urllib3-1.26.9-py2.py3-none-any.whl
new file mode 100644
index 0000000..5019453
Binary files /dev/null and b/wheels/shared/urllib3-1.26.9-py2.py3-none-any.whl differ