Releases: LabeliaLabs/referentiel-evaluation-dsrc
v202301 - Release 2023 semester 1
Changes in elements and items
- Add a new element 5.6 on logging of inferences/predictions (issue #196)
- Enhance element 6.1 with the aggregated carbon footprint of AI activities, and/or a SCAP (issue #197)
- Broaden the language of element 6.1 to energy consumption measurement (issue #197)
Changes in resources
- Add https://arxiv.org/ftp/arxiv/papers/2104/2104.10350.pdf to element 6.1 (issue #205)
- Add https://arxiv.org/abs/2111.00364 to element 6.1 or directly to section 6 (issue #206)
- Add Quantmetry's blog article on frugality (issue #202)
- Introduce a first reference on the compliance of foundation models / LLMs with the AI Act (issue #203)
v202202 - Release 2022 semester 2
v202201 - Release 2022 semester 1
Changelog of the 2022 H1 release
Changes in elements and items
- Finetune wording of elements 2.3 and 2.4 to widen the scope from discrimination to population bias in general
- Finetune wording of item 2.3.b to widen it to "knowledgeable and/or trained"
- Remove item 3.1.d on multiple test sets as it didn't prove operationally relevant
- Add an answer item 3.1.d on documenting the train-test split technical choices (#175)
Changes in resources
- Indicate in 1.1 that the CNIL MOOC on GDPR is currently being upgraded (#180)
- Add new CNIL technical resources on AI compliance with GDPR in elements 1.1 and 1.2
- Add a new academic paper on reconstructing training samples from a model, in element 1.7 (thank you @celinejacques)
- Add a paper on System Cards to element 4.1 resources (#182)
Misc. changes
- Add new misc. articles to the references section of the repository (e.g. the FTC's 'algorithm destruction' capability, Covid-19 AI model attempts)
v202102 - Release 2021 semester 2
Changelog of the 2021 H2 release
New evaluation elements and answer items
- Add a new element, numbered 2.1, on the gathering of data and the preparation of datasets for model training and evaluation (#173)
- Add a new element, numbered 2.3, on the evaluation of the risk of discrimination in data science projects (#166)
- Add an intra-element condition for organisations not concerned to elements 2.4 (ex-2.3) and 2.5 (ex-2.4) (#166)
- Add new answer item 6.1.f on transparency of CO2 impact (#164) by @gmartinonQM
- Add new answer item 6.3.b for work in progress on an ethical policy (#165 and #169)
Changes within evaluation elements and answer items
- Add a slight precision to answer item 1.9.b on communicating to stakeholders (#166)
- Rearrange answer items of element 1.9 in progressive order to make it a single-answer element (#169)
- Add a slight precision to answer item 3.7.a on needing or not to communicate on performance metrics (#165)
- Add a slight precision to answer item 4.2.a on relying on the practices of collaborators involved (#169)
- Split answer item 6.1.c into 6.1.c and 6.1.d to facilitate unambiguous answers (#164)
Misc. changes
- Add examples to element 1.4 on certifications related to personal data (#166)
- Add examples to element 2.2 (ex-2.1) on sensors/capture bias, and attention to data labels/annotations (#173)
- Replace references to "predictive models" by "AI models" to enable a more generic perspective (#166)
- Replace wording "model genealogy" by "model lifecycle documentation" for clarity (#170)
- Add numerous references on environmental impact of AI (#164) by @gmartinonQM
- Add Numeum's guide and the LNE's certification framework
v202101 - Release 2021 semester 1
Resources/references added
- Add OpenDP as a reference and resource on differential privacy (fixes #97)
- Add HRX's article on AI use case controversies to the "various controversies" section (fixes #99, via @meuce)
- Add misc. interesting references (fixes #103)
- Add an interesting reference: the factsheet by IBM (fixes #106)
- Add Shapash & FACET on explainability to resources (fixes #112)
- Add resources elaborated during the Dataforgood season 8 project (fairness, genealogy, robustness) (fixes #114)
- Add references: The Global Landscape of AI Guidelines & Code Carbon (fixes #116, via @SaboniAmine)
- Add Principled Artificial Intelligence (Berkman Klein Center, harvard.edu), a meta-study on AI ethics principles (fixes #120)
- Add ML Exploit: remote code execution from pickle files (fixes #134)
- Add Counterfit, to test ML model vulnerabilities, to resources on ML security (fixes #144)
Changes in the evaluation elements and answer items
- Add a new answer item to Q4.3 on sharing/publishing AI incidents (fixes #108, via D. Bartolo)
- Add public AI registers: new reference and new answer item to 5.5 (fixes #123)
- Add a new evaluation element on modeling- and learning-related biases, moving beyond "algorithmic bias is a data problem" (fixes #136)
- Add a new answer item to 1.4 on compliance with personal data regulations (fixes #140, from user feedback)
- Add a new answer item to Q1.5 on alternatives to the minimisation principle (fixes #133, via @JustineBoulant)
Misc. fixes
- Add missing explanations/hints to some evaluation elements (fixes #109)
- Fix a dead resource link (fixes #118)
- Add missing blank lines between section titles and keywords (fixes #119)
- Clarify README and References.md, fix typos, enhance explanations/context info on ML vulnerabilities in Q1.7-1.8 (fixes #133)
Beta version 0.6 - Updated prior to sixth participative workshop
- Merge former sections 4 and 5 into a new Section 4 titled "Assurer la reproductibilité des modèles et en établir la chaîne de responsabilité" ("Ensure the reproducibility of models and establish their chain of responsibility")
- Updated and finetuned wording for evaluation elements 1.7, 1.8, 2.1, 2.3, 3.3
- New technical resources and real-life illustrations of risks
Alpha version 0.5 - Updated prior to fifth participative workshop
Complements the assessment framework with new evaluation elements, updated answer items, and many new and reorganised resources.
Alpha version 0.4 - Updated prior to fourth participative workshop
Merge pull request #63 from SubstraFoundation/improve-formulations: Improve formulations of several evaluation items
Alpha version 0.3 - Updated prior to third participative workshop
Merge pull request #32 from SubstraFoundation/prepare-third-workshop: WIP: Prepare third workshop