MartinSavko/murko

Goal

This project aims to develop a tool that helps make sense of optical images of samples typically encountered in macromolecular crystallography experiments. The current approach uses an artificial neural network whose architecture, normalization technique, loss definition and other key ideas draw on the following papers:

  • The One Hundred Layers Tiramisu: Fully convolutional DenseNets for Semantic Segmentation, arXiv:1611.09326
  • Xception: Deep Learning with Depthwise Separable Convolutions, arXiv:1610.02357
  • Micro-Batch Training with Batch-Channel Normalization and Weight Standardization, arXiv:1903.10520
  • Focal Loss for Dense Object Detection, arXiv:1708.02002

Requirements

We aim to have a tool that, given an image such as this one: Example input

will classify every pixel as representing a salient higher-level concept such as crystal, mother liquor, loop, stem, pin, ice, most likely user click, etc. It should also fulfill the following requirements:

  • is invariant to scale
  • is invariant to a wide range of illumination conditions
  • is invariant to sample orientation
    • supporting multi axis goniometry
    • supporting both horizontally and vertically mounted goniometers
  • is fast -- it has to work in real time

Current performance

This is how it performs at the moment: Result

More details can be gleaned from the following presentation. If you find the code useful, or would like to learn more about how to deploy it at your beamline, please drop me a line.

Installation

pip install -r requirements.txt

Usage

  1. Start server
./server.py &

Or using Docker (serving on port 8008):

docker-compose up -d

Note: model loading and the warmup run take about 10 seconds.

  2. Query the server
./client.py -t examples/image.jpg --save

In practice you will most likely call the code from your own Python client. Have a look at client.py to get a more precise idea of how to use it. Here is an example:

from murko import (
    get_predictions,
    get_most_likely_click,
    get_loop_bbox,
)

request_arguments = {}
# May be a path to an image, a directory, a JPEG string, a list of JPEGs,
# a list of ndarrays, etc. (see the segment_multihead() method in murko.py
# for how each case is handled)
request_arguments['to_predict'] = 'examples/image.jpg'
# Resolution at which the prediction is run; may be arbitrary,
# (256, 320) is the default.
request_arguments['model_img_size'] = (256, 320)
request_arguments['save'] = True  # whether to save predictions
port = 8099  # port on which the server is listening

predictions = get_predictions(request_arguments, port=port)

anything_in_the_picture = predictions['present']
most_likely_click = predictions['most_likely_click']
loop_present, r, c, h, w = predictions['aoi_bbox']

if anything_in_the_picture:
   print('there seems to be something other than the background in this picture ...')
   
if loop_present:
   print('Loop found! Its bounding box parameters in fractional coordinates are: center (vertical %.3f, horizontal %.3f), height %.3f, width %.3f' % (r, c, h, w))
else:
   print('Loop not found.')

print('Most likely click in fractional coordinates: (vertical %.3f, horizontal %.3f)' % tuple(most_likely_click))
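Since the predictions above are returned in fractional coordinates, they need to be scaled by the image dimensions before being used, for example to draw an overlay or to drive a motor move. Here is a minimal sketch of that conversion; the helper name and the example camera resolution are illustrative assumptions, not part of the murko API:

```python
def fractional_to_pixels(r, c, shape):
    """Convert a fractional (vertical, horizontal) position into integer
    pixel coordinates (row, col) for an image of the given shape."""
    height, width = shape[:2]
    return int(round(r * height)), int(round(c * width))

# Example: a most likely click at (0.5, 0.25) on a hypothetical
# 1024 x 1360 camera image
row, col = fractional_to_pixels(0.5, 0.25, (1024, 1360))
print(row, col)  # 512 340
```

The same conversion applies to the bounding box: scale `r`, `c` and `h` by the image height and `c`, `w` by the image width to obtain pixel units.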