
Commit

Merge: new docgen, layout from coolharsh55/dpv
commit a35bf5a
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Jan 28 22:46:56 2024 +0000

    merge upstream - minutes

commit 88ba348
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Jan 28 22:15:34 2024 +0000

    update templates; 310 Primer

commit a57e0f0
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Jan 28 16:57:31 2024 +0000

    reorganised templates, fixed module vocab index

commit 2243308
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Jan 28 13:12:02 2024 +0000

    remove multilingual support (add in later version)

commit fc89011
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Jan 28 12:57:20 2024 +0000

    removed NACE

commit 864bc33
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Dec 31 11:49:00 2023 +0000

    300 HTML faster/efficient file copy

commit 4d35512
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Dec 31 09:54:30 2023 +0000

    HTML - metadata from RDF, reorganise

commit 76957bb
Author: Harshvardhan Pandit <[email protected]>
Date:   Sat Dec 30 00:22:38 2023 +0000

    fix HTML concept duplicate error; references

commit e59b732
Author: Harshvardhan Pandit <[email protected]>
Date:   Fri Dec 29 22:54:43 2023 +0000

    conversion to owl manchester syntax

commit e17d85e
Author: Harshvardhan Pandit <[email protected]>
Date:   Fri Dec 29 22:27:00 2023 +0000

    ontology metadata for default/OWL; contributors

commit 435e7a9
Author: Harshvardhan Pandit <[email protected]>
Date:   Fri Dec 29 18:42:06 2023 +0000

    moved all code under /code ; removed unused/stale files

commit c9bd3ff
Author: Harshvardhan Pandit <[email protected]>
Date:   Fri Dec 29 17:51:35 2023 +0000

    delete stale data folders which have been moved

    - dpv-dga: moved to legal/eu/eu-dga
    - dpv-gdpr: moved to legal/eu/eu-gdpr
    - dpv-legal: moved to legal and loc
    - dpv-pd: moved to pd
    - dpv-tech: moved to tech
    - dpv-skos and dpv-owl serialisation folders: skos is now the default
      serialisation and owl serialisations are generated in the same folder
      with a different suffix (-owl)

    Contributors.json removed as these are to be generated per vocabulary
    and added programmatically as metadata

commit 262f2cf
Author: Harshvardhan Pandit <[email protected]>
Date:   Fri Dec 29 10:49:27 2023 +0000

    validation testing, filtering results, HTML output

commit 83d98df
Author: Harshvardhan Pandit <[email protected]>
Date:   Thu Dec 28 17:09:30 2023 +0000

    rdfs:isDefinedBy, fixed relative links, added special categories

commit 7fe1386
Author: Harshvardhan Pandit <[email protected]>
Date:   Wed Dec 27 22:57:52 2023 +0000

    Refactor 200,300; Fix bugs

commit d8e737e
Author: Harshvardhan Pandit <[email protected]>
Date:   Wed Dec 27 01:24:34 2023 +0000

    testing dycco for code documentation

commit 7f8251d
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Dec 25 23:32:37 2023 +0000

    translation working - proof of concept

commit da9a4ce
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Dec 25 20:04:49 2023 +0000

    translations - 100 download, 200 RDF

commit a11c9fe
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Dec 25 16:00:41 2023 +0000

    RDF OWL variant with suffix -owl in same filepath

commit 3204c8a
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Dec 25 00:14:46 2023 +0000

    examples - RDF, HTML index, embed anywhere

commit 07dfc33
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Dec 17 21:51:20 2023 +0000

    Legal - consolidated page from all locations

commit 478e0b1
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Dec 17 19:41:14 2023 +0000

    legal ie, gb, us, eu added

commit 4d2e658
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Dec 17 00:21:24 2023 +0000

    Legal-XX page for jurisdiction; tested with EU, DE

commit 164faf1
Author: Harshvardhan Pandit <[email protected]>
Date:   Sat Dec 16 21:15:25 2023 +0000

    200,300 LOC Extension; fixed GDPR compliance list

commit f856a9f
Author: Harshvardhan Pandit <[email protected]>
Date:   Sat Dec 16 19:31:30 2023 +0000

    200 LOC: added start/end for laws

commit 90a595d
Author: Harshvardhan Pandit <[email protected]>
Date:   Sat Dec 16 19:01:13 2023 +0000

    LOC extension for locations

commit 3d7905e
Author: Harshvardhan Pandit <[email protected]>
Date:   Sat Dec 16 12:51:11 2023 +0000

    300: fixed non-list iteration, added links to module pages

commit dd5cd53
Author: Harshvardhan Pandit <[email protected]>
Date:   Thu Dec 14 22:13:09 2023 +0000

    300: concepts show relevant properties for subject/object

commit a1c6d45
Author: Harshvardhan Pandit <[email protected]>
Date:   Wed Dec 13 07:06:28 2023 +0000

    update XLSX files (renamed extensions)

commit 41f575c
Author: Harshvardhan Pandit <[email protected]>
Date:   Tue Dec 12 12:41:11 2023 +0000

    EU-GDPR legal basis x rights mappings

commit a640cbe
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Dec 11 23:59:11 2023 +0000

    300 bug fixes, content documentation added

    - correctly displays parent hierarchy
    - can display child hierarchy (not used)
    - fixed RDF generation issue that caused all sub-types to declare
      topconcept as skos:broader
    - sources are generated as href
    - changed term description format to use tables; prettified
    - fixed skos relations used - skos:definition instead of skos:scopeNote

commit 45c120a
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Dec 11 01:38:56 2023 +0000

    WIP: fix parent relations, update RDF, HTML

commit fad076c
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Dec 10 17:32:07 2023 +0000

    100: download and extract CSVs

    Can download all or specified files
    Can extract all or specified files
    Default behaviour is to download NO files and extract ALL files

commit 5e39380
Author: Harshvardhan Pandit <[email protected]>
Date:   Fri Dec 8 12:48:33 2023 +0000

    WIP: hierarchy lists, CSS, test prefix change

    - hierarchy lists have an optional 'head' concept that limits the
      concepts to those under 'head'; see template function calls
    - CSS: button styling, cosmetics
    - JS: expand/collapse buttons won't be shown if the list isn't nested
      beyond 2 levels
    - edited DPV and EU-GDPR for above
    - changing prefix in Namespaces from dpv-gdpr to eu-gdpr works as
      expected; to be rolled out in all spreadsheets
    - next steps: go through spreadsheets and change prefixes and refine
      concept descriptions

commit 155e5b8
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Nov 12 11:08:11 2023 +0000

    300: tech, risk, eu-rights pages

commit 5276657
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Nov 12 10:43:29 2023 +0000

    300: eu-dga page

commit 2982fbb
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Nov 12 10:35:45 2023 +0000

    300: eu-gdpr page

commit 5131740
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Nov 12 10:05:05 2023 +0000

    pd: fixed namespace for correct hierarchy

commit bfa1476
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Nov 12 09:58:06 2023 +0000

    300: lists are collapsible

commit ed474a3
Author: Harshvardhan Pandit <[email protected]>
Date:   Sat Nov 11 22:25:33 2023 +0000

    300: adds PD page

commit 4cffa21
Author: Harshvardhan Pandit <[email protected]>
Date:   Sat Nov 11 22:08:30 2023 +0000

    300: DPV module pages

commit 4c21c51
Author: Harshvardhan Pandit <[email protected]>
Date:   Sat Nov 11 19:18:58 2023 +0000

    300: added module pages for DPV

commit 11c9138
Author: Harshvardhan Pandit <[email protected]>
Date:   Sat Nov 11 18:27:55 2023 +0000

    200: skos:ConceptScheme; 300: hierarchical lists

commit 64e6875
Author: Harshvardhan Pandit <[email protected]>
Date:   Wed Nov 1 09:22:28 2023 +0000

    300 sample entities module page; fix rogue rdfs:class dfn

commit 4ad0b23
Author: Harshvardhan Pandit <[email protected]>
Date:   Tue Oct 31 23:23:01 2023 +0000

    300 CSS pretty concept dfn

commit 6038b38
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Oct 30 21:23:17 2023 +0000

    change DPV spreadsheets to new schema

commit df9f8e7
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Oct 30 14:20:06 2023 +0000

    200 refactor paths into vocab_management

commit e79b1f9
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Oct 29 17:50:51 2023 +0000

    200 handles taxonomy topconcept

commit 16cd876
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Oct 29 15:20:18 2023 +0000

    300 fixed narrower/subproperty duplicate

commit 781e30a
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Oct 29 14:57:51 2023 +0000

    300: populate dpv concepts in sections

commit 57b3fb0
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Oct 29 14:40:09 2023 +0000

    300 generate vocab index: class, prop, ext

commit 260a6fb
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Oct 29 13:41:39 2023 +0000

    200 fix external use declared skos:definition

commit 8cf5cef
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Oct 29 13:25:51 2023 +0000

    100 downloads xlsx, extracts csv

commit 4922078
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Oct 29 12:32:48 2023 +0000

    fixed external term redeclaration

commit 980a0c4
Author: Harshvardhan Pandit <[email protected]>
Date:   Sat Oct 28 22:42:02 2023 +0100

    RDF: added all vocabs (except legal, locs)

commit 5d45e37
Author: Harshvardhan Pandit <[email protected]>
Date:   Sat Oct 28 22:17:49 2023 +0100

    added gdpr, dga to new paths under legal/eu

    resolves #119

commit 48dd15f
Author: Harshvardhan Pandit <[email protected]>
Date:   Sat Oct 28 13:26:34 2023 +0100

    RDF all DPV modules; export notes

commit 17d71bb
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Oct 16 21:26:11 2023 +0100

    WIP: 300 macro concept table & list items

commit e360554
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Oct 16 18:22:03 2023 +0100

    WIP 300 generates HTML tables

commit 8e1f2fd
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Oct 16 16:35:52 2023 +0100

    WIP 300 generates html with classes, properties

commit a6eff67
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Oct 16 11:39:56 2023 +0100

    WIP: 002 generated purposes

commit ccd86ec
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Oct 16 09:27:44 2023 +0100

    WIP 002 generate properties

commit 381dbd0
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Oct 16 09:05:52 2023 +0100

    WIP: refactor CSV, generate pd

commit 7bcc7a1
Author: Harshvardhan Pandit <[email protected]>
Date:   Mon Oct 16 08:26:39 2023 +0100

    WIP: 002 generate rdf terms

commit c537440
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Oct 15 22:01:08 2023 +0100

    WIP: refactoring 002

commit fb42dd1
Author: Harshvardhan Pandit <[email protected]>
Date:   Sun Oct 15 20:44:38 2023 +0100

    updated CSV files to OCT-15
coolharsh55 committed Jan 28, 2024
1 parent 31382bc commit 73a1165
Showing 2,234 changed files with 1,775,943 additions and 1,738,048 deletions.
380 changes: 380 additions & 0 deletions code/100_download_CSV.py
@@ -0,0 +1,380 @@
#!/usr/bin/env python3
#author: Harshvardhan J. Pandit

# This script downloads the DPV spreadsheets and extracts CSVs from them.
# The spreadsheets are currently stored in a [shared Google Drive
# folder](https://drive.google.com/drive/folders/1oDJBjxukEZantJL82gg4zbiRugRqyePT),
# from which each spreadsheet's file ID is used to download the
# entire document as Excel (.xlsx); individual CSVs are then
# extracted from it using the external tool [xlsx2csv
# ](https://github.com/dilshod/xlsx2csv).
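# For illustration (with an assumed document ID and name), substituting
# values into the export template below yields a URL of the form:
# https://docs.google.com/spreadsheets/d/<DOC_ID>/export?exportFormat=xlsx&format=xlsx&title=<NAME>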

import logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(levelname)s - %(funcName)s :: %(lineno)d - %(message)s')
DEBUG = logging.debug
INFO = logging.info

# == Export ==

# This is the Google Excel Export link which requires the
# document ID to download the document as XLSX
GOOGLE_EXCEL_EXPORT_LINK = (
    'https://docs.google.com/spreadsheets/d/'
    '%s/export?exportFormat=xlsx&format=xlsx&title=%s')
# Documents are to be stored in this folder
DOCS_FOLDER = './vocab_csv'

# == DPV Files ==
DPV_FILES = {
    # The files to download are indicated in the following structure:
    # each group/dictionary represents the 'document level' grouping,
    # i.e. the collection of 'sheets' in a single document.
    # The documents in Google Drive are already organised per
    # topic, so this mapping reflects that.
    'process': {
        # The name the downloaded file is saved as
        'name': 'process',
        # The ID of the document through which it is downloaded
        # (obtained from the document URL:
        # https://docs.google.com/spreadsheets/d/<DOC_ID>)
        'doc_id': '1x5Wfl-2Xp22R89lhNwNpYP0xN0zfQLFlNe3On6pwNM4',
        # Only the listed 'sheets' will be extracted as CSVs
        'sheets': (
            # > Note: the 'name' of each 'sheet' MUST be exact
            'Namespaces',
            'Namespaces_Other',
            'Process',
            'Process_properties',
        ),
    },
    # Sheets for Data concepts in main DPV and Personal Data extension
    'pd': {
        'name': 'pd',
        'doc_id': '1SI6gZh9-dq1rf_etfrlYHj0QZwq9Vd25f_OHPX5hbSQ',
        'sheets': (
            'PersonalData',
            'PersonalData_properties',
            'pd-core',
            'pd-extended',
        ),
    },
    # Sheets for Purpose, Processing, and Processing Context/Scale
    'purpose_processing': {
        'name': 'purpose_processing',
        'doc_id': '1ePg6BU2Zp9fiSDuEnKuVi6dIRrFLEVdatbVxjHRk-8s',
        'sheets': (
            'Purpose',
            'Purpose_properties',
            'Processing',
            'Processing_properties',
            'ProcessingContext',
            'ProcessingContext_properties',
            'ProcessingScale',
            'ProcessingScale_properties',
        ),
    },
    # Sheets for Context and Statuses
    'context_status': {
        'name': 'context_status',
        'doc_id': '1VPQW1DanprQhMwnhSqyKSGbEXdTmLHdc6UjpWJhyLMA',
        'sheets': (
            'Context',
            'Context_properties',
            'Status',
            'Status_properties',
        ),
    },
    # Sheets for Tech/Org Measures
    'toms': {
        'name': 'toms',
        'doc_id': '16d0_k6ueoXxXRTgecih9Ny7NpeXYF8icm4QX99cPYJA',
        'sheets': (
            'TOM',
            'TOM_properties',
            'TechnicalMeasure',
            'OrganisationalMeasure',
            # TODO: Add sheets for Legal and Physical measure
        ),
    },
    # Sheets for Entities
    'entities': {
        'name': 'entities',
        'doc_id': '1g6zLqVt5FlNlgsXq_NW2W9INv3KdGEFjJCyOd03UmOg',
        'sheets': (
            'Entities',
            'Entities_properties',
            'Entities_Authority',
            'Entities_Authority_properties',
            'Entities_LegalRole',
            'Entities_LegalRole_properties',
            'Entities_Organisation',
            'Entities_DataSubject',
            'Entities_DataSubject_properties',
        ),
    },
    # Sheets for Locations and Jurisdiction
    'location_jurisdiction': {
        'name': 'location_jurisdiction',
        'doc_id': '19exhY34jq6VDApRp2abHD-br6rpm6Q7BOP7H_pm5sKM',
        'sheets': (
            'Jurisdiction',
            'Jurisdiction_properties',
            'location',
            'location_properties',
            'location_memberships',
        ),
    },
    # Sheets for Legal Basis and Consent
    'legal_basis': {
        'name': 'legal_basis',
        'doc_id': '13Ub4LXHruocffYnd7JKCMvzi1MYv3Gy61d3UmQBhARc',
        'sheets': (
            'LegalBasis',
            'LegalBasis_properties',
            'ConsentTypes',
            'ConsentStatus',
            'Consent_properties',
        ),
    },
    # Sheets for Tech extension
    'tech': {
        'name': 'tech',
        'doc_id': '1GVmF4c7b-9xMSs0TyT45kXoCLLUVs8bbW34tfcozbuA',
        'sheets': (
            'tech-core',
            'tech-core-properties',
            'tech-data',
            'tech-ops',
            'tech-security',
            'tech-surveillance',
            'tech-provision',
            'tech-provision-properties',
            'tech-actors',
            'tech-actors-properties',
            'tech-comms',
            'tech-tools',
            'tech-algorithms',
        ),
    },
    # Sheets for Risk extension
    'risk': {
        'name': 'risk',
        'doc_id': '1y8r3Vk-_Gi1MqbyAM6Ot4DoNDJpa2ZVhCyCyFQkGBy0',
        'sheets': (
            'Risk',
            'Risk_properties',
            'RiskConsequences',
            'RiskLevels',
            'RiskMatrix',
            'RiskControls',
            'RiskAssessment',
            'RiskManagement',
            'RiskMethodology',
            'Justifications',
        ),
    },
    # Sheets for Rights extension
    'rights': {
        'name': 'rights',
        'doc_id': '1XW-L6rGWbgGGp62q8eA22SWvh4wUWK5BpC0zfD6wAxM',
        'sheets': (
            'Rights',
            'Rights_properties',
            'EUFundamentalRights',
        ),
    },
    # Sheets for Rules extension
    'rules': {
        'name': 'rules',
        'doc_id': '1SDmlzSo1Ax_35v754Jzx4oFGKvGo5nyNtEAL0vSBbM0',
        'sheets': (
            'Rules',
            'Rules_properties',
        ),
    },
    # Sheets for Standards extension
    'standards': {
        'name': 'standards',
        'doc_id': '1z-qaB2m6lD1ROmPVf9yhfG05D68Z7H4glYLERj6ZCRk',
        'sheets': (
            'Standards_ISO',
        ),
    },
    # Sheets for Legal extension
    'laws-authorities': {
        'name': 'laws-authorities',
        'doc_id': '1pqGE67I5kyoGrkhMItJbi18VLguVqE1jVecnfki1ujY',
        'sheets': (
            'legal-eu',
            'legal-de',
            'legal-gb',
            'legal-ie',
            'legal-us',
        ),
    },
    # Sheets for EU-GDPR extension
    'eu-gdpr': {
        'name': 'eu-gdpr',
        'doc_id': '1lDJZpl0UND8Bm_4iWKVQtgmMUz0YwP2R63CgP7Gro-U',
        'sheets': (
            'GDPR_LegalBasis',
            'GDPR_LegalBasis_SpecialCategory',
            'GDPR_LegalBasis_DataTransfer',
            'GDPR_LegalRights',
            'GDPR_LegalBasis_Rights_Mapping',
            'GDPR_DataTransfers',
            'GDPR_DPIA',
            'GDPR_DPIA_properties',
            'GDPR_compliance',
        ),
    },
    # Sheets for EU-DGA extension
    'eu-dga': {
        'name': 'eu-dga',
        'doc_id': '1wKsf0Vqr0Gg1C91MqshtI5tjGXmQvXu4p4xF0yK0KaA',
        'sheets': (
            'DGA_LegalBasis',
            'DGA_LegalRights',
            'DGA_Services',
            'DGA_Registers',
            'DGA_TOMs',
            'DGA_entities',
            'DGA_properties',
        ),
    },
    # Sheets for Use-Cases, Requirements, and Examples
    'ucr': {
        'name': 'ucr',
        'doc_id': '1__STWvOEZRc1u2J-8teOYjLpnTPlZ80_ebTytrUlWgQ',
        'sheets': (
            'UseCase',
            'Requirement',
            'Example',
        ),
    },
    # Sheets for Translations
    'translations': {
        'name': 'translations',
        'doc_id': '1HqIWw8VdWatYnbRwKoW3gAdXWmVTnSAJ9d9x8FSDsWM',
        'sheets': (
            'DE_prod',
            'DE_verify',
            'DE_glossary',
            # TODO: Add sheets for FR, IT, etc. languages
        ),
    },
}
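# For example (illustrative access), DPV_FILES['pd']['doc_id'] is the document
# ID substituted into the export URL, and DPV_FILES['pd']['sheets'] lists the
# sheet names that will be extracted from the downloaded pd.xlsx.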

# == Downloading files ==
from urllib import request


def download_document(
        document_id: str, document_name: str,
        export_link: str, ext: str = 'xlsx') -> None:
    '''Download the document and save it to the specified path in the specified format'''
    # - `document_id`: ID of the document to be downloaded
    # - `document_name`: name to save the document with
    # - `export_link`: string template used to construct the download URL
    #   (this will be the Google export link)
    # - `ext`: extension to save the document as (XLSX by default)
    url = export_link % (document_id, document_name)
    try:
        request.urlretrieve(url, f'{DOCS_FOLDER}/{document_name}.{ext}')
        INFO(f'Downloaded {document_name}.{ext}')
    except Exception as E:
        logging.error(f'ERROR :: {E}')


def _download_spreadsheets(document_id, document_name, export_link):
    # This is just a wrapper function that calls `download_document`
    # with the Google Excel export link
    download_document(
        document_id=document_id,
        document_name=document_name,
        export_link=GOOGLE_EXCEL_EXPORT_LINK,
        ext='xlsx')


def _extract_CSVs(document_name, sheets):
    # Extracts sheets from the XLSX file and saves them as individual CSVs
    # > Note: xlsx2csv is an **external** tool called as a subprocess
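    # For example (illustrative names), extracting the 'Purpose' sheet from
    # purpose_processing.xlsx is roughly equivalent to the shell command:
    #   xlsx2csv ./vocab_csv/purpose_processing.xlsx -n Purpose > ./vocab_csv/Purpose.csv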
    INFO(document_name)
    import subprocess
    for sheet_name in sheets:
        with open(f'{DOCS_FOLDER}/{sheet_name}.csv', 'w') as outfile:
            subprocess.run(
                ["xlsx2csv", f"{DOCS_FOLDER}/{document_name}.xlsx",
                 "-n", f"{sheet_name}"],
                stdout=outfile)
        INFO(f'Wrote {sheet_name}.csv from {document_name}.xlsx')


def _download_all_spreadsheets():
    # Iterate over and download all spreadsheets
    # as specified in the DPV_FILES variable
    for data in DPV_FILES.values():
        doc_id = data['doc_id']
        document_name = data['name']
        _download_spreadsheets(
            doc_id, document_name, GOOGLE_EXCEL_EXPORT_LINK)


def _extract_all_CSVs():
    # Iterate over and extract all CSVs
    # as specified in the DPV_FILES variable
    for data in DPV_FILES.values():
        document_name = data['name']
        sheets = data['sheets']
        _extract_CSVs(document_name, sheets)


# == script ==
if __name__ == '__main__':
    # The default behaviour of the script is to NOT download
    # any files and to extract ALL CSVs from existing files.
    import argparse
    parser = argparse.ArgumentParser()
    # - `-d` will download and extract ALL files
    parser.add_argument(
        '-d', '--d', action='store_true',
        help="download data files")
    # - `-x` will extract ALL files
    parser.add_argument(
        '-x', '--x', action='store_true', default=True,
        help="extract CSVs from all data files")
    # - `--ds <foo>` will download and extract ONLY the `foo` files
    parser.add_argument(
        '--ds', nargs='+', default=False,
        help="download only indicated data files")
    # - `--xs <foo>` will extract ONLY the `foo` files
    parser.add_argument(
        '--xs', nargs='+', default=False,
        help="extract CSVs from indicated data files")
    args = parser.parse_args()

    # If files are to be downloaded, do the following.
    if args.d or args.ds:
        INFO('-'*40)
        INFO('Downloading spreadsheets...')
        INFO('-'*40)
        if not args.ds:  # download all files
            _download_all_spreadsheets()
            args.x = True  # set extraction param
        else:  # download only the indicated files
            for document_name in args.ds:
                if document_name not in DPV_FILES:
                    raise NameError(f'{document_name} is not a DPV File')
                _download_spreadsheets(
                    DPV_FILES[document_name]['doc_id'],
                    document_name, GOOGLE_EXCEL_EXPORT_LINK)
            args.xs = args.ds  # set extraction queue to be same as download
        INFO('-'*40)
    # If files are to be extracted, do the following.
    if args.x is True:
        INFO('-'*40)
        INFO('Extracting CSVs...')
        INFO('-'*40)
        if not args.xs:  # extract all CSVs
            _extract_all_CSVs()
        else:  # extract only the specified CSVs
            for document_name in args.xs:
                if document_name not in DPV_FILES:
                    raise NameError(f'{document_name} is not a DPV File')
                _extract_CSVs(
                    document_name, DPV_FILES[document_name]['sheets'])
        INFO('-'*40)
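# Example invocations (a sketch based on the flags above; assumes the script
# is run from a directory containing the ./vocab_csv folder):
#   python3 100_download_CSV.py              -> extract ALL CSVs from existing files
#   python3 100_download_CSV.py -d           -> download ALL spreadsheets, then extract ALL CSVs
#   python3 100_download_CSV.py --ds pd tech -> download and extract only the 'pd' and 'tech' documents
#   python3 100_download_CSV.py --xs risk    -> extract CSVs only from the 'risk' document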