Sort documentation and add github action for tests #229

Merged (8 commits, Oct 31, 2024)
4 changes: 2 additions & 2 deletions .github/workflows/docs.yml
@@ -29,5 +29,5 @@ jobs:
uses: mhausenblas/mkdocs-deploy-gh-pages@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
CONFIG_FILE: mkdocs.yml
REQUIREMENTS: requirements.txt
CONFIG_FILE: docs/mkdocs.yml
REQUIREMENTS: docs/requirements.txt
57 changes: 57 additions & 0 deletions .github/workflows/pytest.yaml
@@ -0,0 +1,57 @@
name: Unit Tests

on: [push]

jobs:
  test:
    runs-on: ${{ matrix.os }}

    strategy:
      matrix:
        os: ["ubuntu-latest", "macos-latest"]
        python-version: ["3.9", "3.10"]
    steps:
      #----------------------------------------------
      # check-out repo and set-up python
      #----------------------------------------------
      - name: Check out repository
        uses: actions/checkout@v4
      - name: Set up python ${{ matrix.python-version }}
        id: setup-python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      #----------------------------------------------
      # ----- install & configure poetry -----
      #----------------------------------------------
      - name: Install Poetry
        uses: snok/install-poetry@v1
        with:
          virtualenvs-create: true
          virtualenvs-in-project: true
      #----------------------------------------------
      # load cached venv if cache exists
      #----------------------------------------------
      - name: Load cached venv
        id: cached-poetry-dependencies
        uses: actions/cache@v3
        with:
          path: .venv
          key: venv-${{ runner.os }}-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/poetry.lock') }}
      #----------------------------------------------
      # install dependencies if cache does not exist
      #----------------------------------------------
      - name: Install dependencies
        if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true'
        run: poetry install --no-interaction --no-root
      #----------------------------------------------
      # install your root project, if required
      #----------------------------------------------
      - name: Install additional dependencies
        run: |
          poetry install --no-interaction
      #----------------------------------------------
      # add matrix specifics and run test suite
      #----------------------------------------------
      - name: Run tests
        run: poetry run pytest tests/ --verbose
4 changes: 2 additions & 2 deletions README.md
@@ -11,7 +11,7 @@ This page contains information on how to install and use Nesta's skills extracti

We currently support three different taxonomies to map onto: the [European Commission’s European Skills, Competences, and Occupations (ESCO)](https://esco.ec.europa.eu/en/about-esco/what-esco), [Lightcast’s Open Skills](https://skills.lightcast.io/) and a “toy” taxonomy developed internally for the purpose of testing.

If you'd like to learn more about the models used in the library, please refer to the [model card page](https://nestauk.github.io/ojd_daps_skills/build/html/model_card.html).
If you'd like to learn more about the models used in the library, please refer to the [model card page](https://nestauk.github.io/ojd_daps_skills/source/model_card.md).

You may also want to read more about the wider project by reading:

@@ -113,4 +113,4 @@ If contributing, changes will need to be pushed to a new branch in order for our

<small><p>Project template is based on <a target="_blank" href="https://github.com/nestauk/ds-cookiecutter">Nesta's data science project template</a>
(<a href="http://nestauk.github.io/ds-cookiecutter">Read the docs here</a>).
</small>
</small>
Binary file added docs/images/label_eg1.jpg
Binary file added docs/images/label_eg4.jpg
Binary file added docs/images/label_eg5.jpg
Binary file added docs/images/label_studio.png
Binary file added docs/images/match_flow.png
Binary file added docs/images/overview.png
Binary file added docs/images/overview_example.png
Binary file added docs/images/predict_flow.png
6 changes: 2 additions & 4 deletions docs/index.md
@@ -2,7 +2,6 @@

- [Installation](#installation)
- [Using Nesta’s Skills Extractor library](#tldr-using-nestas-skills-extractor-library)
- Development

## Welcome to Nesta’s Skills Extractor Library

@@ -14,7 +13,7 @@ This page contains information on how to install and use Nesta’s skills extrac

We currently support three different taxonomies to map onto: the European Commission’s European Skills, Competences, and Occupations (ESCO), Lightcast’s Open Skills and a “toy” taxonomy developed internally for the purpose of testing.

If you’d like to learn more about the models used in the library, please refer to the model card page.
If you’d like to learn more about the models used in the library, please refer to [the model card page](source/model_card.md). For more information on how we labelled the training data for the models, see [the labelling page](source/labelling.md). A more in-depth discussion of the pipeline and its evaluation can be found in [the pipeline summary and metrics page](source/pipeline_summary.md).

You may also want to read more about the wider project by reading:

Expand All @@ -33,7 +32,7 @@ You will also need to install spaCy’s English language model:

Note that this package was developed on macOS and tested on Ubuntu. Changes have been made for compatibility with Windows, but these are not tested and cannot be guaranteed.

When the package is first used it will automatically download a folder of necessary data and models. (~1GB)
When the package is first used it will automatically download a folder of necessary data and models (~1GB).

## TL;DR: Using Nesta’s Skills Extractor library

@@ -115,4 +114,3 @@ If you would like to demo the library using a front end, we have also built a st
The technical and working style guidelines can be found [here](https://github.com/nestauk/ds-cookiecutter/blob/master/GUIDELINES.md).

If contributing, changes will need to be pushed to a new branch in order for our code checks to be triggered.

13 changes: 8 additions & 5 deletions mkdocs.yml → docs/mkdocs.yml
@@ -7,11 +7,11 @@ extra:
homepage: https://nestauk.github.io/ojd_daps_skills
docs_dir: .
extra_css:
- docs/style.css
- styles.css
theme:
name: material
logo: docs/images/favicon.ico
favicon: docs/images/favicon.ico
logo: images/favicon.png
favicon: images/favicon.png
features:
- navigation.instant
- navigation.tracking
@@ -35,6 +35,9 @@ theme:
icon: material/weather-sunny
name: Switch to light mode
nav:
- Home: docs/index.md
- Home: index.md
- Model cards: source/model_card.md
- Pipeline summary and metrics: source/pipeline_summary.md
- Entity labelling: source/labelling.md
plugins:
- same-dir
- same-dir
33 changes: 33 additions & 0 deletions docs/source/labelling.md
@@ -0,0 +1,33 @@
# Entity Labelling

[June 2024 update: The training of the models used in the skills extraction algorithm is now done using code from the [ojd_daps_language_models](https://github.com/nestauk/ojd_daps_language_models/blob/dev/skillner/README.md) GitHub repo. More up-to-date information about the process and metrics can be found there.]

To extract skills from job adverts we took an approach of training a named entity recognition (NER) model to predict which parts of job adverts were skills ("skill entities"), which were experiences ("experience entities") and which were job benefits ("benefit entities").

To train the NER model we needed labelled data. First, we created a random sample of job adverts and converted them into the form needed for labelling with [Label Studio](https://labelstud.io/) and [Prodigy](https://prodi.gy/).

There are 4 entity labels in our training data:

1. `SKILL`
2. `MULTISKILL`
3. `EXPERIENCE`
4. `BENEFIT`

The user interface for the labelling task in Label Studio looks like this:

![](../images/label_studio.png)

We tried our best to label from the start to the end of each individual skill, starting at the verb (if given):
![](../images/label_eg1.jpg)

Sometimes it wasn't easy to label individual skills; for example, an earlier part of the sentence might be needed to define the later part. An example of this is "Working in a team and on an individual basis" - we could label "Working in a team" as a single skill, but "on an individual basis" makes no sense without the word "Working". In these situations we labelled the whole span as a multiskill:
![](../images/label_eg4.jpg)

Sometimes there were no entities to label:
![](../images/label_eg5.jpg)

`EXPERIENCE` entities often end with the word "experience", e.g. "insurance experience", and we also included some qualifications as experience, e.g. "Electrical qualifications".

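To make the span format concrete, a labelled example can be thought of as a piece of text plus character-offset spans carrying one of the four labels above. The snippet below is a minimal illustration only: the first sentence is the multiskill example discussed above, the second sentence and all offsets are invented, and the actual Label Studio/Prodigy exports use their own richer formats.

```python
# Hypothetical illustration of labelled spans as (start, end, label) character offsets.
examples = [
    (
        "Working in a team and on an individual basis",
        [(0, 44, "MULTISKILL")],  # whole span labelled as one multiskill
    ),
    (
        "You will have previous insurance experience",
        [(14, 43, "EXPERIENCE")],  # "previous insurance experience"
    ),
]

for text, spans in examples:
    for start, end, label in spans:
        print(f"{label}: {text[start:end]!r}")
```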
### Training dataset

For the current NER model (20230808), 8971 entities were labelled in 500 job adverts from our dataset; 443 were multiskill, 7313 were skill, 852 were experience, and 363 were benefit entities. 20% of the labelled entities were held out as a test set to evaluate the models.
88 changes: 88 additions & 0 deletions docs/source/model_card.md
@@ -0,0 +1,88 @@
# Model Cards

[June 2024 update: The training of the models used in the skills extraction algorithm is now done using code from the [ojd_daps_language_models](https://github.com/nestauk/ojd_daps_language_models/blob/dev/skillner/README.md) GitHub repo. More up-to-date information about the process and metrics can be found there.]

This page contains information for different parts of the skills extraction and mapping pipeline. We detail the two main parts of the pipeline: the extract skills pipeline and the skills-to-taxonomy mapping pipeline.

Developed by data scientists in Nesta’s Data Analytics Practice (last updated on 29-09-2023).

- [Model Card: Extract Skills](#extract_skills_card)
- [Model Card: Skills to Taxonomy Mapping](#mapping_card)

![](../images/overview_example.png)
_An example of extracting skills and mapping them to the ESCO taxonomy._

## Model Card: Named Entity Recognition Model <a name="extract_skills_card"></a>

![](../images/predict_flow.png)
_The extracting skills pipeline._

### Summary

- Train a Named Entity Recognition (NER) spaCy component to extract skill, multiskill, experience and benefit entities from job adverts.
- Predict whether an extracted entity is a multiskill using scikit-learn's SVM model. The features are: the length of the entity; whether 'and' appears in the entity; and whether ',' appears in the entity (a minimal sketch follows this list).
- Split multiskills, where possible, based on semantic rules.

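As a rough illustration of the multiskill classifier described above, the sketch below builds the three features and fits a scikit-learn SVM on a few invented examples. It is an assumed, minimal version: the library's actual feature extraction, training data and model settings may differ.

```python
import numpy as np
from sklearn.svm import SVC

def featurise(entity: str) -> list:
    # The three features described above: entity length, whether "and" appears
    # (padded with spaces here so words like "standards" don't match), and
    # whether a comma appears.
    return [len(entity), int(" and " in entity), int("," in entity)]

# Invented training examples: 1 = multiskill span, 0 = single skill
entities = [
    "communication skills",
    "written and verbal communication, and presentation skills",
    "Python",
    "planning, organising and prioritising work",
]
labels = [0, 1, 0, 1]

clf = SVC()
clf.fit(np.array([featurise(e) for e in entities]), labels)

print(clf.predict(np.array([featurise("negotiating and influencing skills")])))
```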
### Training

- For the NER model, 500 job adverts were labelled for skills, multiskills, experience and benefits.
- As of 8th August 2023, **8971** entities in 500 job adverts from OJO were labelled;
- **443** were multiskill, **7313** were skill, **852** were experience entities, and **363** were benefit entities. 20% of the labelled entities were held out as a test set to evaluate the models.

The NER model we trained used [spaCy's](https://spacy.io/) NER neural network architecture. Their NER architecture _"features a sophisticated word embedding strategy using subword features and 'Bloom' embeddings, a deep convolutional neural network with residual connections, and a novel transition-based approach to named entity parsing"_ - more about this [here](https://spacy.io/universe/project/video-spacys-ner-model).

You can read more about the creation of the labelling data [here](./labelling.md).

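For readers unfamiliar with spaCy's training API, the sketch below shows the general shape of fine-tuning a blank NER component on annotations like those described above. It is a minimal, assumed illustration (the training example and offsets are invented), not the project's actual training script, which now lives in the ojd_daps_language_models repo.

```python
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
for label in ["SKILL", "MULTISKILL", "EXPERIENCE", "BENEFIT"]:
    ner.add_label(label)

# Invented training example: (text, {"entities": [(start, end, label), ...]})
train_data = [
    ("Excellent communication skills and previous insurance experience",
     {"entities": [(0, 30, "SKILL"), (35, 64, "EXPERIENCE")]}),
]

optimizer = nlp.initialize()
for _ in range(20):  # a few passes over the (tiny) example set
    for text, annotations in train_data:
        example = Example.from_dict(nlp.make_doc(text), annotations)
        nlp.update([example], sgd=optimizer)

doc = nlp("Strong communication skills required")
print([(ent.text, ent.label_) for ent in doc.ents])
```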
### NER Metrics

- Metrics from the Python library nervaluate ([read more here](https://pypi.org/project/nervaluate/)) were used to calculate F1, precision and recall for the NER and SVM classifier on the held-out test set. As of 8th August 2023, the results are as follows:

| Entity | F1 | Precision | Recall |
| ---------- | ----- | --------- | ------ |
| Skill | 0.612 | 0.712 | 0.537 |
| Experience | 0.524 | 0.647 | 0.441 |
| Benefit | 0.531 | 0.708 | 0.425 |
| All | 0.590 | 0.680 | 0.521 |

- These metrics use partial entity matching; a minimal sketch of computing them with nervaluate follows.

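For reference, a minimal sketch of computing partial-match scores with nervaluate is below; the spans are invented, and the project's actual evaluation code may organise this differently.

```python
from nervaluate import Evaluator

# One document: gold spans vs. predicted spans (invented for illustration)
true = [[{"label": "SKILL", "start": 0, "end": 30},
         {"label": "EXPERIENCE", "start": 35, "end": 64}]]
pred = [[{"label": "SKILL", "start": 0, "end": 24}]]

evaluator = Evaluator(true, pred, tags=["SKILL", "EXPERIENCE", "BENEFIT"])
results, results_by_tag = evaluator.evaluate()

# The "partial" scheme gives credit when predicted and gold spans overlap
# without matching boundaries exactly.
print(results["partial"])
```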
### Multiskill Metrics

- The SVM model was evaluated using the same training data and held-out test set as the NER model; on the held-out test set it achieved 94% accuracy.
- When evaluating the multiskill splitter algorithm rules, 253 multiskill spans were labelled as ‘good’, ‘ok’ or ‘bad’ splits. Of the 253 multiskill spans, 80 were split. Of the splits, 66% were ‘good’, 9% were ‘ok’ and 25% were ‘bad’.

### Caveats and Recommendations

- As we take a rules based approach to splitting multiskills, many multiskills do not get split. If a multiskill is unable to be split, we still match to a taxonomy of choice. Future work should add more rules to split multiskills.
- We deduplicate the extracted skills in the output. This means that if a job advert mentions ‘excel skills’ twice and these entities are extracted, the output will contain "excel skills" only once. However, if the strings differ slightly, e.g. "excel skills" and "Excel skill", both occurrences will appear in the output.
- Future work could look to train embeddings with job-specific texts, disambiguate acronyms and improve NER model performance.

## Model Card: Skills to Taxonomy Mapping <a name="mapping_card"></a>

![](../images/match_flow.png)
_The methodology for matching skills to the ESCO taxonomy - threshold numbers can be changed in the config file._

### Summary

- Match to a taxonomy based on different similarity thresholds.
- First, try to match at the most granular level of a taxonomy, based on cosine similarity between the embedded extracted skills and taxonomy skills. Extracted and taxonomy skills are embedded using huggingface’s [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model.
- If there is no close granular skill above 0.7 cosine similarity (this threshold can be changed in the configuration file), we then assign the skill to different levels of the taxonomy using one of two approaches (maximum share and maximum similarity - see the diagram above for details and the sketch after this list).
- If matching to ESCO, 43 commonly occurring skills from a sample of 100,000 job adverts are hard coded.

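A minimal sketch of the granular matching step described in the first two bullets above, under the assumption that both sides are embedded with sentence-transformers and compared by cosine similarity; the skill strings are illustrative, and the library's actual matching logic (configuration, fall-back levels, hard-coded matches) is more involved.

```python
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

extracted_skills = ["communicating with clients", "drafting ORSAs"]
taxonomy_skills = ["communicate with customers", "use spreadsheets software", "fine arts"]

similarities = cosine_similarity(model.encode(extracted_skills), model.encode(taxonomy_skills))

THRESHOLD = 0.7  # configurable per taxonomy
for skill, sims in zip(extracted_skills, similarities):
    best = sims.argmax()
    if sims[best] >= THRESHOLD:
        print(f"{skill!r} -> {taxonomy_skills[best]!r} ({sims[best]:.2f})")
    else:
        # fall back to assigning the skill at a higher (less granular) taxonomy level
        print(f"{skill!r} -> no close granular match")
```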
### Model Factors

The main factors in this matching approach are: 1) the different thresholds at different levels of a taxonomy and 2) the different matching approaches.

### Caveats and Recommendations

This step does less well when:

- The extracted skill is a metaphor: e.g. 'understand the bigger picture' gets matched to 'take pictures'
- The extracted skill is an acronym: e.g. 'drafting ORSAs' gets matched to 'fine arts'
- The extracted skill is not a skill (poor NER model performance): e.g. 'assist with the' gets matched to providing general assistance to people

We recommend the following:

- Skill entities might match to the same taxonomy skill; the output does not deduplicate matched skills. If deduplicating is important, you will need to deduplicate at the taxonomy level.
- The current predefined configurations ensure that every extracted skill will be matched to a taxonomy. However, if a skill is matched to the highest skill group, we label it as ‘unmatched’. Under this definition, for ESCO we identify approximately 2% of skills as ‘unmatched’.
- The configuration file contains the relevant thresholds for matching per taxonomy. These thresholds will need to be manually tuned based on different taxonomies.