Face detection #342

Open · wants to merge 36 commits into base: main

Commits (36):
c25a606 face detection scripts added (whyboris, Jan 24, 2020)
e08b058 semicolons etc (whyboris, Jan 24, 2020)
c7132e0 extraction working (whyboris, Jan 24, 2020)
b3d2aec testing building (whyboris, Jan 25, 2020)
4bed464 faces folder (whyboris, Jan 26, 2020)
a01151b merge master, rebuild package-lock (whyboris, Jan 28, 2020)
bf152a8 add repository field (whyboris, Feb 1, 2020)
9c2dac0 views working! (whyboris, Feb 1, 2020)
93fffbf add facestrips folder (whyboris, Feb 11, 2020)
772ee6d Merge branch 'master' into face-detection (whyboris, Feb 12, 2020)
0069a42 upgrade version (whyboris, Feb 13, 2020)
a15c435 Merge branch 'master' into face-detection (whyboris, Feb 13, 2020)
8d8c7ec Merge branch 'master' into face-detection (whyboris, Feb 16, 2020)
f2bd9d1 merge master (whyboris, Feb 16, 2020)
30fb855 reinstall (whyboris, Feb 16, 2020)
7ad69a9 Merge branch 'master' into face-detection (whyboris, Feb 18, 2020)
0b6113c Merge branch 'master' into face-detection (whyboris, Feb 27, 2020)
ac60670 merge master (whyboris, Feb 27, 2020)
a7eb564 Merge branch 'master' into face-detection (whyboris, Mar 3, 2020)
c02234b merge master (whyboris, Mar 25, 2020)
9b3453f merge master; remove package-lock (whyboris, Apr 24, 2020)
adcc9a8 merge master (whyboris, May 13, 2020)
d29dc91 merge master (whyboris, May 17, 2020)
ab264ed merge master (whyboris, May 19, 2020)
2a40ce9 Merge branch 'master' into face-detection (whyboris, May 28, 2020)
e2b004f merge master (whyboris, Jun 22, 2020)
c27961d merge main reconcile conflicts (whyboris, Jul 26, 2020)
5da88a3 Merge branch 'main' into face-detection (whyboris, Aug 10, 2020)
3de4264 merge main (whyboris, Oct 12, 2020)
847393a merge main (whyboris, Oct 28, 2020)
c1379db merge main (whyboris, Nov 29, 2020)
57805e7 merge main (whyboris, Feb 7, 2021)
64dcda0 merge main (whyboris, Apr 22, 2021)
85dc6ab merge main, resolve conflicts (whyboris, Mar 27, 2022)
06d6197 merge main, resolve conflicts (whyboris, Feb 19, 2023)
efdfd10 merge main (whyboris, Oct 23, 2024)
6 changes: 6 additions & 0 deletions face/README.md
@@ -0,0 +1,6 @@
# Face Detection

Face detection code originally created in this repository:

https://github.com/whyboris/extract-faces-node

32 changes: 32 additions & 0 deletions face/detect.ts
@@ -0,0 +1,32 @@
const tf = require('@tensorflow/tfjs-node');

const faceapi = require('face-api.js');

import { FullDetection } from './interfaces';

/**
* Load the model only once
*/
export async function loadModel() {
await faceapi.nets.ssdMobilenetv1.loadFromDisk('./weights');
await faceapi.nets.ageGenderNet.loadFromDisk('./weights');
// await faceapi.nets.faceLandmark68Net.loadFromDisk('./weights');
// await faceapi.nets.tinyFaceDetector.loadFromDisk('./weights'); // NOT VERY GOOD, but fast
}

/**
* Use face-api.js to detect a rectangle around the face
* @param imgBuffer
*/
export async function findTheFaces(imgBuffer: Buffer): Promise<FullDetection[]> {

const imgTensor = tf.node.decodeJpeg(imgBuffer);

// const detections = await faceapi.detectAllFaces(imgElement);
// const detections = await faceapi.detectAllFaces(imgElement, new faceapi.TinyFaceDetectorOptions());
const detections = await faceapi.detectAllFaces(imgTensor).withAgeAndGender(); // changes output format a bit

// console.log(detections);

return detections;
}
32 changes: 32 additions & 0 deletions face/interfaces.ts
@@ -0,0 +1,32 @@
export interface CropBox {
top: number;
left: number;
width: number;
height: number;
}

export interface InputMeta {
width: number;
height: number;
eachSSwidth: number;
}

// Face API ===============================

export interface FullDetection {
detection: FaceDetection;
gender: Gender;
}

export interface FaceDetection {
_box: FaceBox;
}

export interface FaceBox {
_x: number;
_y: number;
_width: number;
_height: number;
}

export type Gender = 'male' | 'female';
50 changes: 50 additions & 0 deletions face/pipeline.ts
@@ -0,0 +1,50 @@
import { loadModel, findTheFaces } from './detect';
import { getImageSizes, getSubImageBuffer, getCroppedImageBuffers, saveFinalOutput } from './sharp';
import { InputMeta, FullDetection, Gender } from './interfaces';

// VARIABLES for now ===============================================================================

const RELATIVE_IMAGE_PATH = './images/bbt4.jpg';
const CURRENT_NUMBER_OF_SCREENS = 1; // the number of chunks the image is split into (20 screenshots for example) // HARDCODED FOR NOW
const OUTPUT_FILE_NAME = './output/bbt.jpg';
const GENDER = 'female';

// runEverything(RELATIVE_IMAGE_PATH, CURRENT_NUMBER_OF_SCREENS, OUTPUT_FILE_NAME, GENDER);

// ==== PIPELINE ===================================================================================

/**
* Full pipeline process
* @param inputFile - relative path to INPUT image
* @param numOfScreens - the number of screenshots in the filmstrip
* @param outputFile - relative path to OUTPUT image
*/
export async function runEverything(inputFile: string, numOfScreens: number, outputFile: string, gender: Gender) {

await loadModel();

const sizes: InputMeta = await getImageSizes(inputFile, numOfScreens);

console.log(sizes);

const all_faces = [];

for (let i = 0; i + sizes.eachSSwidth <= sizes.width; i = i + sizes.eachSSwidth) { // never step past the right edge

const imgBuffer: Buffer = await getSubImageBuffer(i, sizes.eachSSwidth, sizes, inputFile);

const detections: FullDetection[] = await findTheFaces(imgBuffer);

// warning -- getCroppedImageBuffers returns an array (possibly empty)
// so use `...` spread operator - it will not add any elements if incoming array is empty
all_faces.push(...(await getCroppedImageBuffers(detections, imgBuffer, sizes, gender)));

}

if (all_faces.length) {
saveFinalOutput(all_faces, outputFile, sizes);
console.log('File saved:', outputFile);
} else {
console.log('no faces found!');
}
}
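The stepping logic in the loop above can be isolated into a pure helper for illustration. The function below is a hypothetical sketch (not part of the PR) that computes the left offset of each screenshot the filmstrip would be split at:

```typescript
// Hypothetical sketch of the offset loop in runEverything: given the total
// filmstrip width and the number of screenshots, return the left edge of
// each sub-image the pipeline would extract.
function subImageOffsets(totalWidth: number, numOfScreens: number): number[] {
  const eachSSwidth = Math.floor(totalWidth / numOfScreens);
  const offsets: number[] = [];
  for (let left = 0; left + eachSSwidth <= totalWidth; left += eachSSwidth) {
    offsets.push(left);
  }
  return offsets;
}

console.log(subImageOffsets(600, 3)); // → [0, 200, 400]
```

Each offset is then fed to getSubImageBuffer together with eachSSwidth, so every screenshot is detected on independently.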
130 changes: 130 additions & 0 deletions face/sharp.ts
@@ -0,0 +1,130 @@
const sharp = require('sharp');

import { getBetterBox } from './support';

import { InputMeta, FullDetection, FaceBox, Gender } from './interfaces';

// ====== METHODS ==================================================================================

/**
* Inspect original image, return the width, height, and width of each sub-image
* Later Video Hub App will just provide these 3 pieces of data
* @param imgPath
* @param numOfScreens
*/
export async function getImageSizes(imgPath: string, numOfScreens: number): Promise<InputMeta> {
const fileMeta = await sharp(imgPath).metadata();

return {
width: fileMeta.width,
height: fileMeta.height,
eachSSwidth: Math.floor(fileMeta.width / numOfScreens) // floor, since sharp's extract/resize only accept integers
};
}


/**
* Extract buffer from sub-image
* @param offset number of pixels offset in original image (for when you have many sub-images horizontally)
* @param width width of the (sub-) image
* @param sizes
* @param imgPath
*/
export async function getSubImageBuffer(offset: number, width: number, sizes: InputMeta, imgPath: string): Promise<Buffer> {
const imgBuffer: Buffer = await sharp(imgPath)
.extract({
left: offset,
top: 0,
width: width,
height: sizes.height,
})
.toBuffer();

return imgBuffer;
}


/**
* Return the cropped buffer
* @param imgBuffer
* @param match
* @param sizes
*/
export async function getFaceCropBuffer(imgBuffer: Buffer, match: FaceBox, sizes: InputMeta) {

const newBox = getBetterBox(match, sizes);

console.log(newBox);

const croppedImageBuffer = await sharp(imgBuffer)
.extract(newBox)
.resize(Math.round(sizes.eachSSwidth / 2), sizes.height) // sharp only accepts integer dimensions
.toBuffer();

return croppedImageBuffer;
}


/**
* Save each face in the current image
* - when there is more than one face found in an image
*
* @param matches
* @param imgBuffer
* @param sizes
*
* @returns array of buffers !!!
*/
export async function getCroppedImageBuffers(matches: FullDetection[], imgBuffer: Buffer, sizes: InputMeta, gender: Gender) {

console.log('found', matches.length, 'faces');

const all_faces = [];

for (let i = 0; i < matches.length; i++) {
const box: FaceBox = matches[i].detection._box;
const sex: Gender = matches[i].gender;
if (sex === gender) {
const croppedBuffer = await getFaceCropBuffer(imgBuffer, box, sizes);
all_faces.push(croppedBuffer);
}
}

return all_faces;
}


/**
* Iterate across face-crop buffers, combine them into a single photo, and save as output
* @param allFaceBuffers
* @param outputFile
* @param sizes
*/
export function saveFinalOutput(allFaceBuffers: Buffer[], outputFile: string, sizes: InputMeta) {

console.log('Total of', allFaceBuffers.length, 'faces found!');

let tracker = 0;

const composeParams = [];

allFaceBuffers.forEach((face) => {
composeParams.push({
input: allFaceBuffers[tracker],
top: 0,
left: Math.round(tracker * sizes.eachSSwidth / 2), // sharp requires an integer offset
});
tracker++;
});

sharp({
create: {
width: Math.round(tracker * sizes.eachSSwidth / 2),
height: sizes.height,
channels: 3,
background: { r: 0, g: 0, b: 50 }
}
})
.composite(composeParams)
.toFile(outputFile);
}
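saveFinalOutput lays the crops out on one row, each eachSSwidth / 2 wide. A minimal hypothetical helper (not from the PR) that reproduces just the placement arithmetic:

```typescript
// Hypothetical sketch of the composite placement in saveFinalOutput:
// face i is pasted at top = 0, left = i * (eachSSwidth / 2).
function compositeLefts(numFaces: number, eachSSwidth: number): number[] {
  const faceWidth = Math.round(eachSSwidth / 2); // width each cropped face was resized to
  return Array.from({ length: numFaces }, (_, i) => i * faceWidth);
}

console.log(compositeLefts(4, 192)); // → [0, 96, 192, 288]
```

The final canvas is then numFaces * faceWidth pixels wide, matching the `create` call above.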
29 changes: 29 additions & 0 deletions face/support.ts
@@ -0,0 +1,29 @@
import { CropBox, FaceBox, InputMeta } from './interfaces';

/**
 * Take the cropping box and expand it to include more of the face
 * @param box
 * @param sizes
 */
export function getBetterBox(box: FaceBox, sizes: InputMeta): CropBox {

// shift y up by half the box height, clamped at 0
const new_y: number = Math.max(Math.round(box._y) - Math.round(box._height / 2), 0);

// shift x left by half the box width, clamped at 0
const new_x: number = Math.max(Math.round(box._x) - Math.round(box._width / 2), 0);

// double the width, clamped at the right edge of the sub-image
const new_w: number = Math.min(Math.round(box._width * 2), sizes.eachSSwidth - new_x);

// double the height, clamped at the bottom edge of the image
const new_h: number = Math.min(Math.round(box._height * 2), sizes.height - new_y);

return {
top: new_y,
left: new_x,
width: new_w,
height: new_h,
};

}
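To make the clamping behaviour concrete, here is a self-contained re-implementation of the same expansion math with a worked example (hypothetical code, for illustration only):

```typescript
// Same math as getBetterBox, re-implemented standalone: grow the detected
// face box to 2x its width and height (shifting up and left by half),
// clamped to the sub-image bounds.
interface Box { _x: number; _y: number; _width: number; _height: number; }
interface Crop { top: number; left: number; width: number; height: number; }

function expandBox(box: Box, frameWidth: number, frameHeight: number): Crop {
  const left = Math.max(Math.round(box._x) - Math.round(box._width / 2), 0);
  const top = Math.max(Math.round(box._y) - Math.round(box._height / 2), 0);
  return {
    top,
    left,
    width: Math.min(Math.round(box._width * 2), frameWidth - left),
    height: Math.min(Math.round(box._height * 2), frameHeight - top),
  };
}

// a 100x100 face at (50, 60) in a 200x300 frame:
console.log(expandBox({ _x: 50, _y: 60, _width: 100, _height: 100 }, 200, 300));
// → { top: 10, left: 0, width: 200, height: 200 }
```

Note that a face near an edge simply gets a smaller margin on that side; the crop never leaves the frame.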
3 changes: 3 additions & 0 deletions i18n/en.json
@@ -131,6 +131,9 @@
"showFoldersDescription": "Folders view",
"showFoldersHint": "Show folders",
"showFoldersMoreInfo": "Only affects Thumbnails, Text, and Clips views",
"showFacesDescription": "Faces view",
"showFacesHint": "Show Faces",
"showFacesMoreInfo": "Show faces found in the extracted screenshots",
"showFreqDescription": "Word cloud",
"showFreqHint": "Word cloud",
"showFreqMoreInfo": "Show the Word cloud which shows the most frequent words in currently shown files",
43 changes: 43 additions & 0 deletions main.ts
@@ -512,3 +512,46 @@ ipcMain.on('open-file', (event, pathToVhaFile) => {
ipcMain.on('clear-recent-documents', (event): void => {
app.clearRecentDocuments();
});


import { runEverything } from './face/pipeline';

// ===========================================================================================
// EXTRACT FACES !!!!!!!!! - electron messages
// -------------------------------------------------------------------------------------------

let hack: number = 0;

ipc.on('extract-face', function (event, currentAngularFinalArray: ImageElement[]) {

const element: ImageElement = currentAngularFinalArray[hack];

console.log(element.fileName);

const inputFile: string = path.join(
globals.selectedOutputFolder,
'vha-' + globals.hubName,
'/filmstrips',
element.hash + '.jpg');

console.log(inputFile);

const outputFile: string = path.join(
globals.selectedOutputFolder,
'vha-' + globals.hubName,
'/faces',
element.hash + '.jpg');

// runEverything is async - use .catch(), since a synchronous try/catch cannot
// catch a rejected promise
runEverything(inputFile, element.screens, outputFile, 'female').catch((err) => {
dialog.showMessageBox(win, {
message: systemMessages.noSuchFileFound,
detail: err,
buttons: ['OK']
});
});

hack++;
});
